\section{Introduction} It has recently been pointed out (e.g.,~\cite{Asplund:2005yt}) that the abundances of both $^6$Li and $^7$Li observed in metal-poor halo stars (MPHSs) are not in agreement with those predicted from standard big-bang nucleosynthesis (BBN). Specifically, the $^6$Li abundance as a function of metallicity exhibits a plateau similar to that for $^7$Li in very metal-poor stars, suggesting a primordial origin for both isotopes. This $^6$Li abundance plateau, however, is a factor of $\sim 10^3$ larger than that predicted by BBN. A less severe problem exists for $^7$Li; the BBN value based upon the baryon-to-photon ratio fixed by the WMAP analysis~\cite{Spergel:2006hy} of the cosmic microwave background is roughly a factor of three higher than is observed. Moreover, a longstanding effort in cosmology has been the search for evidence of unstable particles that might have existed in the early universe. It is thus natural to ask whether the two lithium abundance anomalies might be a manifestation of the existence of such a particle. In this context, a number of possible solutions to the $^6$Li problem have been proposed which relate the Li anomalies to the possible existence of unstable particles in the early universe~\cite{decaying}. This has been extended in several recent studies~\cite{Pospelov:2006sc,Cyburt:2006uv,Kaplinghat:2006qr,Kohri:2006cn,Bird:2007ge,Hamaguchi:2007mp} to consider heavy negatively charged decaying particles that modify BBN, but in rather different ways. In these latter studies, the heavy particles, here denoted as $X^-$, bind to the nuclei produced in BBN to form $X$-nuclei. The massive $X^-$ particles would be bound in orbits with radii comparable to those of normal nuclei. Hence, they would reduce the reaction Coulomb barriers, thereby enhancing the thermonuclear reaction rates and extending the duration of BBN to lower temperatures. Pospelov~\cite{Pospelov:2006sc} suggested that a large enhancement of the $^6$Li abundance could result from a transfer reaction involving an $X^-$ bound to $^4$He (denoted $^4$He$_X$), i.e., $^4$He$_X$($d$,$X^-$)$^6$Li. Although this was an intriguing idea, Hamaguchi et al.~\cite{Hamaguchi:2007mp} have recently pointed out via a more complete quantum mechanical calculation that Pospelov's estimate for the $^4$He$_X$($d$,$X^-$)$^6$Li cross section was too large; it leads to too high a $^6$Li abundance. Cyburt et al.~\cite{Cyburt:2006uv} further motivated this hypothesis by identifying the $X^-$ as the supersymmetric partner of the tau lepton, i.e., a stau, and considered $X^-$ transfer reactions for $^7$Li and $^7$Be production, too. Although their calculation is based on a fully dynamical treatment of the recombination of nuclei with $X^-$ particles and also of BBN, they used cross sections for all $X^-$ transfer reactions involving $^6$Li, $^7$Li, and $^7$Be that were too large, as we discuss below. Therefore, the calculated abundances are to be viewed with caution. Kaplinghat and Rajaraman~\cite{Kaplinghat:2006qr} observed that the decay of an $X^-$ when bound to $^4$He would occasionally knock out a proton or neutron to produce $^3$He or $^3$H, thereby enhancing their abundances and the abundance of $^6$Li through reactions with other primordial $^4$He nuclei at higher energies. Kohri and Takayama~\cite{Kohri:2006cn} studied the recombination of nuclei with $X^-$ particles, and suggested the possibility of solving the $^7$Li problem.
However, they did not carry out dynamical calculations involving recombination processes and BBN simultaneously. This forced them to introduce artificial parameters for the fractions of the captured nuclei, which turn out to be different from the fractions obtained by solving the recombination plus BBN fully dynamically. A new resonant reaction $^7$Be$_X$($p$,$\gamma$)$^8$B$_X$ has recently been proposed by Bird et al.~\cite{Bird:2007ge} that destroys $^7$Be$_X$ through an atomic excited state of $^8$B$_X$, and the present study identifies another effect in this reaction that might also destroy $^7$Be. Thus there has been a great deal of recent progress in $X^-$ catalyzed BBN in three important aspects: the simultaneous description of recombination and ionization processes of $X^-$ particles with nuclei in the description of BBN, use of updated reaction rates involving the $X$-nuclei, and inclusion of new resonant processes by which $^7$Be is destroyed. No previous calculation has involved all of these effects in a single dynamical calculation of BBN in order to study their effects on the $^6$Li and $^7$Li problems. In this Article, we present the results of a thorough dynamical analysis of the effects of $X^-$ particles on BBN. The important difference from previous works is, firstly, that we carried out a fully dynamical BBN calculation by taking account of the recombination and ionization processes of $X^-$ particles by normal and $X$-nuclei as well as the many possible nuclear reactions among them. Secondly, the reaction rates on normal and $X$-nuclei used in the present study are based on quantum mechanical calculations of the cross sections like those of Hamaguchi et al.~\cite{Hamaguchi:2007mp}, which we believe to be correct. Thirdly, we have not only included the important $^7$Be destruction mechanism identified by Bird et al.~\cite{Bird:2007ge}, but have identified another potentially important destruction mechanism involving the reaction channel $^7$Be$_X$+$p$ $\rightarrow ^8$B$^*$($1^+$, 0.770 MeV)$_X$ $\rightarrow ^8$B$_X$+$\gamma$, which has the potential to destroy $^7$Be$_X$ via capture through the $1^+$ nuclear excited state of $^8$B. We show that when all these effects are included, the single hypothesis of the existence of the $X^-$ particle provides a remarkable solution to both the $^6$Li and $^7$Li abundance anomalies. We can then use this constraint to place interesting limits on the $X^-$ relic abundance and its decay lifetime and mass. \section{Model}\label{sec2} We assume that the $X^-$ particle is leptonic and of spin 0, consistent with its identification as the supersymmetric partner of a lepton. The $X^-$ would be thermally produced at an earlier epoch together with $X^+$. Their small annihilation cross section allows a significant abundance to survive to the BBN epoch. The mass and decay lifetime of the $X^-$ are ultimately constrained by WMAP and the present BBN study. Only the $X^-$ can bind to nuclei; the $X^+$ remains inert during BBN. The binding energies and the eigenstate wave functions of the $X$-nuclei were calculated by assuming uniform finite-size charge distributions of radii $r_0=1.2 A^{1/3}$~fm for nuclear mass number $A$~\cite{cahn:1981}. When the $X^-$ abundance is very high, some nuclei can bind two $X^-$ particles, such as $^3$He$_{XX}$ and $^4$He$_{XX}$.
In that case their binding energies were calculated using a variational calculation with a trial wave function for $X$-nuclides bound to one $X^-$ particle, analogous to the case of the H$_2^+$ ion. Thermonuclear reaction rates (TRRs) for all reactions that might take place in $X^-$ catalyzed BBN, including the $X^-$ transfer reaction suggested in~\cite{Pospelov:2006sc} and $X^-$ decay, were added to the BBN network code. See Kusakabe et al.~\cite{kusakabe} for details on the calculations. These were corrected for the modified nuclear charges and the effective mass resulting from the binding of one or two $X^-$ particle(s). If the $X^-$ particles decayed at some later stage, they would be expected to destroy some fraction of the nuclei to which they had become bound during BBN. However, that fraction would be small~\cite{Kaplinghat:2006qr,rosen:1975}. We found that the inclusion of the $X$-nuclei $^8$Be$_X$ and $^8$Be$_{XX}$ (both are bound) results in a leakage of the nuclear reaction flow out of the light nuclei or $X$-nuclei to produce slightly heavier $A \geq$~9 nuclei. This might be an additional BBN signature resulting from binding the $X^-$ particles. We determined most thermonuclear reaction rates involving the $X$-nuclei by taking account of the lowered Coulomb barriers and modified reduced masses. However, as discussed below, there are a number of reactions that require careful additional considerations. As noted by Pospelov~\cite{Pospelov:2006sc}, reactions in which an $X^-$ particle is transferred can be very important in circumventing some normally inhibited reactions, especially the $^4$He$_X$($d$,$X^-$)$^6$Li reaction. Its rate could be orders of magnitude larger than that of the $^4$He($d$,$\gamma$)$^6$Li reaction, which is suppressed due to its occurrence through an electric quadrupole transition. Hamaguchi et al.~\cite{Hamaguchi:2007mp} have recently carried out a theoretical calculation of the cross section for $^4$He$_X$($d$,$X^-$)$^6$Li in a quantum three-body model. Their value was about an order of magnitude smaller than that of~\cite{Pospelov:2006sc}. This difference can be attributed to the use of an exact treatment of quantum tunneling and a better nuclear potential. We, therefore, adopt the result of~\cite{Hamaguchi:2007mp} in the present study. Cyburt et al.~\cite{Cyburt:2006uv} estimated astrophysical S-factors for the $^4$He$_X$($t$,$X^-$)$^7$Li, $^4$He$_X$($^3$He,$X^-$)$^7$Be, $^6$Li$_X$($p$,$X^-$)$^7$Be and other reactions by applying a scaling relation~\cite{Pospelov:2006sc}, $S_X$/$S_\gamma \propto p_{\rm f}a_0/(\omega_\gamma a_0)^{2\lambda+1}$. Here, $S_X$ and $S_\gamma$ are the S-factors for the $X^-$ transfer and radiative processes, respectively, $a_0$ is the $X^-$ Bohr radius of $^4$He$_X$ or $^6$Li$_X$, $p_{\rm f}$ is the linear momentum of the outgoing $^7$Li or $^7$Be in the $X^-$ transfer reactions, and $\omega_\gamma$ is the energy of the emitted $\lambda=1$ (electric dipole) photon in the radiative capture. However, the reaction dynamics are important to these results. $^4$He, $^{6,7}$Li, and $^7$Be occupy an s-wave orbit around the $X^-$ particle (assuming the $X^-$ particle to be much heavier than these nuclei). The $^6$Li nucleus is an $\alpha$+$d$ cluster system in a relative s-wave orbit, while the $A=7$ nuclei are $\alpha$+$t$ and $\alpha$+$^3$He cluster systems in relative p-wave orbits.
This difference in the orbital angular momentum will produce a critical difference in the reaction dynamics between the $^4$He$_X$($d$,$X^-$)$^6$Li and the $^4$He$_X$($t$,$X^-$)$^7$Li, $^4$He$_X$($^3$He,$X^-$)$^7$Be, and $^6$Li$_X$($p$,$X^-$)$^7$Be reactions. Specifically, the latter three reactions must involve $\Delta l$=1 angular momentum transfer. In order to conserve total angular momentum, the outgoing $^7$Li and $^7$Be in the final state must therefore occupy a scattering p-wave orbit from the $X^-$ particle, leading to a large hindrance of the overlap matrix element for the $X^-$ transfer processes. Thus, a realistic quantum mechanical calculation results in much smaller $S_X$-factors than those estimated in~\cite{Cyburt:2006uv}. In the present study, therefore, the above three reaction processes were found to be negligible and were omitted. Bird et al.~\cite{Bird:2007ge} suggested that the $^7$Be$_X$($p$,$\gamma$)$^8$B$_X$ resonant reaction could destroy $^7$Be$_X$ through an atomic excited state of $^8$B$_X$. They also proposed that a charged weak-boson exchange reaction $^7$Be$_X \rightarrow ^7$Li+$X^0$ followed by $^7$Li($p$,$\alpha$)$^4$He could destroy $A=7$ nuclides. We included only the former resonant reaction in the present study, although we confirmed their assertion on the weak process, as will be discussed in a separate paper. In our exhaustive study of additional processes related to $^6$Li, $^7$Li, and $^7$Be destruction, we found that the reaction channel which proceeds through the $1^+$, $E^*=0.770\pm 0.010$ MeV nuclear excited state of $^8$B via $^7$Be$_X$+$p$ $\rightarrow ^8$B$^*$($1^+$, 0.770 MeV)$_X$ $\rightarrow ^8$B$_X$+$\gamma$ could also destroy some $^7$Be$_X$, and that the destruction processes $^6$Li$_X$($p$,$^3$He)$^4$He$_X$ and $^7$Li$_X$($p$,$\alpha$)$^4$He$_X$ might also be significant. Our calculated binding energies of the $X^-$ particle in $^7$Be$_X$ and $^8$B$_X$ are 1.488 MeV and 2.121 MeV, respectively. If we adopt these values without any correction to the energy levels of the nuclear excited states of $^8$B$_X$, this $1^+$ state of $^8$B$_X$ is located near the particle threshold for the $^7$Be$_X$+$p$ separation channel. Thus, the $^7$Be$_X$($p$,$\gamma$)$^8$B$_X$ reaction can proceed through a zero-energy resonance of $^8$B$^\ast_X$. However, the measured energy uncertainty of the $1^+$ state of $^8$B is $\pm 10$~keV, and moreover, the excitation energy of this level is very sensitive to the model parameters used to calculate the binding energies of the $X$-nuclei. Even such a small uncertainty of the resonance energy as 10--100~keV would dramatically change the TRR because the BBN catalyzed by the $X$-nuclei proceeds at effective temperatures as low as $T_9\sim 0.1$. Taking account of the uncertainties associated with the $1^+$ resonance energy, $E$, measured from the $^7$Be$_X$+$p$ separation threshold, we found that $E \approx 30$~keV maximizes the TRR. This resonance energy would be achieved when, for example, the uniform charge radii are 2.2955~fm for $^7$Be$_X$ and 2.4564~fm for $^8$B$_X$. This resonant reaction is potentially as effective as $^7$Be$_X$+$p$ $\rightarrow$ $^8$B$_X^{*a} \rightarrow ^8$B$_X+\gamma$ in destroying $^7$Be. However, the charge radii we have adopted tend to be smaller than the measured charge radii, and this might overestimate the binding energies of $X^-$.
If a more realistic calculation were performed, the resulting binding energies might shift this $1^+$ excited state upward, which would diminish the effect of this destruction process. In addition, the transition through this state would be E2 or M1, which might also weaken its effect. Even in this case, though, the atomic resonance $^8$B$_X^{*a}$~\cite{Bird:2007ge} plays an important role in destroying $^7$Be$_X$. Since it is important to know precisely when during BBN the $X^-$ particles become bound to nuclei, and what their distribution over the BBN nuclei would be~\cite{Kohri:2006cn} at any time, it is necessary to consider the thermodynamics associated with binding the $X^-$ particles. We thus included both recombination and ionization processes for $X^-$ particles in our BBN network code and dynamically solved the set of rate equations to find when the $X$-nuclei decoupled from the cosmic expansion. Regarding the thermonuclear reaction rates, we note that since the mass of the $X^-$ particle $m_X$ is assumed to be $\gtrsim$ 50 GeV, the reduced mass for the $X^-$+$A(N,Z)$ system can be approximated as $\mu_X \equiv m_A m_X/(m_A+m_X) \approx m_A$, so that the thermonuclear reaction rate for the first recombination process $A(X^-,\gamma)A_X$~\cite{Kohri:2006cn} becomes $\langle \sigma_r v\rangle_X \approx 2^9 \pi \alpha Z^2 (2\pi)^{1/2}/(3\exp(4.0)) E_{\rm bind}/(\mu_X^2 (\mu_X T)^{1/2}) \propto m_A^{-2.5}$, where $\alpha$ is the fine structure constant. This rate is almost independent of $m_X$. However, the rate for the second recombination process $A_X(X^-,\gamma)A_{XX}$ does depend on $m_X$, i.e., $\langle \sigma_r v\rangle_{XX} \approx 2^9 \pi \alpha (Z-1)^2 (2\pi)^{1/2}/(3\exp(4.0)) E_{\rm bind}/(\mu_{XX}^2 (\mu_{XX}T)^{1/2}) \propto m_X^{-2.5}$. This arises because $\mu_{XX} \equiv m_{AX} m_X/(m_{AX}+m_X) \approx m_X/2$. Since $m_X$ is assumed to be much larger than the mass of the light nuclei, $m_X \gg m_A$, the rate for the second or higher-order recombination process is hindered.
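To make the size of this hindrance concrete, the following minimal Python sketch (an illustration only, not part of our network code) evaluates the ratio of the two recombination rates implied by the scalings above, holding $E_{\rm bind}$ and $T$ fixed; the sample values of $m_X$ and the choice $Z=2$ appropriate to $^4$He are assumptions made purely for the example:
\begin{verbatim}
# Minimal sketch (illustration only): hindrance of the second recombination
# A_X(X^-,gamma)A_XX relative to the first A(X^-,gamma)A_X, using
# <sigma_r v> ~ Z_eff^2 E_bind / (mu^2 (mu T)^{1/2}), i.e. ~ Z_eff^2 mu^{-5/2}
# at fixed E_bind and T.  Masses in GeV; the m_X values are assumed.
m_A = 3.727                                       # 4He mass
for m_X in (50.0, 100.0, 1000.0):
    mu_X = m_A * m_X / (m_A + m_X)                # ~ m_A
    mu_XX = (m_A + m_X) * m_X / (m_A + 2.0 * m_X) # ~ m_X / 2
    Z, Z_eff = 2, 1                               # charge seen by the second X^-
    ratio = (Z_eff / Z) ** 2 * (mu_X / mu_XX) ** 2.5
    print("m_X = %6.1f GeV:  rate_XX / rate_X ~ %.1e" % (m_X, ratio))
\end{verbatim}
For $m_X = 100$~GeV this gives a suppression of more than three orders of magnitude, before any difference in the binding energies is taken into account.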
\section{Results}\label{sec3} The evolution of the BBN abundances when $X^-$ particles are included exhibits some particularly notable features. During the nucleosynthesis epoch, the abundances of $^6$Li, $^7$Li and $^7$Be assume their normal BBN values until the temperature reaches $T_9 \sim 0.5$--$0.2$. Below that temperature the $X^-$ particles bind to the heaviest nuclides, $^7$Li and $^7$Be. When the abundance ratio, $Y_X$, of $X^-$ particles to baryons is larger than 0.1, these nuclides are then partially destroyed by reactions that would have previously been inhibited by the Coulomb barrier. At around $T_9 = 0.1$, the $X^-$ particles are captured onto $^4$He. Then a new round of $X$-nuclei nucleosynthesis occurs. In particular, the reaction $^4$He$_X$($d$,$X^-$)$^6$Li produces normal $^6$Li nuclei with an abundance which is orders of magnitude above that from standard BBN. An interesting feature is that the $^6$Li formed in this way is not easily destroyed by the $^6$Li($p$,$\alpha$)$^3$He reaction, the dominant $^6$Li destruction reaction in BBN, because the $X^-$ transfer reaction restores the charge to $^6$Li. Hence, the Coulomb barrier is too high at this temperature for its destruction, resulting in a large $^6$Li/$^7$Li abundance ratio. The final calculated abundances of the mass 6 and 7 nuclides, however, depend strongly on the assumed $X^-$ abundance. At high $X^-$ abundance levels, more than one $X^-$ capture can occur. Although the abundance of nuclei binding multiple $X^-$ particles is too small to contribute significantly to BBN, they nevertheless interact readily since their Coulomb barriers are greatly reduced. This is especially true of charge-neutral $^4$He$_{XX}$. To clarify the nucleosynthesis yields, we have thus made a study in which the $X^-$ abundance $Y_X$ was varied over a wide range. In Fig. 1 we show contours of an interesting region in the decay lifetime $\tau_X$ vs. $Y_X$ plane. Curves are drawn for constant lithium abundance relative to the observed value in MPHSs, i.e., d($^6$Li) = $^6$Li$^{\rm Calc}$/$^6$Li$^{\rm Obs}$ (solid curves) and d($^7$Li) = $^7$Li$^{\rm Calc}$/$^7$Li$^{\rm Obs}$ (dashed curves) for several values of the stellar depletion factor ``d''. The adopted abundances are $^7$Li/H$=(1.23^{+0.68}_{-0.32})\times 10^{-10}$~\cite{Ryan:1999vr} and $^6$Li/H$=(7.1\pm 0.7)\times 10^{-12}$~\cite{Asplund:2005yt}. Shaded regions for the d($^6$Li) = 1 and d($^7$Li) = 1 curves illustrate the 1~$\sigma$ uncertainties in the adopted observational constraints based upon the dispersion of the observed plateaus. We also show curves for stellar depletion factors of d($^7$Li) = 2, 3 and d($^6$Li) = 4, 25. Since $^6$Li is more fragile to stellar processing than $^7$Li~\cite{Richard:2004pj}, its possible depletion factors could be larger than those for $^7$Li. The main point of this figure is that, independent of stellar destruction, it is possible to find a simultaneous solution to both the $^7$Li overproduction problem and the $^6$Li underproduction problem. This occurs in the parameter region $Y_X \approx 0.09$--$0.6$, $\tau_X \approx (1.6\text{--}2.8)\times 10^3$~s, consistent with the suggestion of~\cite{Bird:2007ge}. Assuming that the products of the decaying $X^-$ particles are progenitors of the CDM particles, the WMAP-CMB observational constraint $\Omega_{\rm CDM} = 0.2$ limits the mass of the $X^-$, i.e., $Y_X m_X \lesssim$ 4.5 GeV and $m_X \lesssim$ 50 GeV, when we include the destruction reaction processes of the $A=7$ nuclide, $^7$Be$_X$+$p$ $\rightarrow$ $^8$B$_X^{*a} \rightarrow ^8$B$_X+\gamma$~\cite{Bird:2007ge} and the (assumed maximal) rate of $^7$Be$_X+p\rightarrow ^8$B$^*$($1^+$, 0.770 MeV)$_X \rightarrow ^8$B$_X+\gamma$. When we include the destruction process $^7$Be$_X \rightarrow ^7$Li+$X^0$~\cite{Bird:2007ge}, these parameter ranges change slightly to $Y_X \approx 0.04$--$0.1$, $\tau_X \approx (1.8\text{--}3.2)\times 10^3$~s, and $m_X \lesssim 100$~GeV. Figure 2 illustrates the final calculated BBN yields as a function of the baryon-to-photon ratio $\eta$ for the case of ($Y_X$, $\tau_X$)=(0.6, $1.6\times 10^3$~s). This choice leads to $^6$Li and $^7$Li abundances consistent with the observed values without stellar depletion. Note, though, that the same conclusion is reached if the destruction reaction through the $^8$B($1^+$) state is not included, so the general conclusion is robust. \begin{figure}[tbp] \includegraphics[width=8.0cm,clip]{param.eps} \caption{\label{contour} Contours of constant lithium abundance relative to the observed value in MPHSs, i.e., d($^6$Li) = $^6$Li$^{\rm Calc}$/$^6$Li$^{\rm Obs}$ (solid curves) and d($^7$Li) = $^7$Li$^{\rm Calc}$/$^7$Li$^{\rm Obs}$ (dashed curves). The adopted abundances are $^7$Li/H$= (1.23^{+0.68}_{-0.32})\times 10^{-10}$~\cite{Ryan:1999vr} and $^6$Li/H$=(7.1\pm 0.7)\times 10^{-12}$~\cite{Asplund:2005yt}.
Shaded regions for the d($^6$Li) = 1 and d($^7$Li) = 1 curves illustrate the 1~$\sigma$ uncertainties in the adopted observational constraints based upon the dispersion of the observed plateaus.} \end{figure} \begin{figure}[tbp] \includegraphics[width=8.0cm,clip]{bbn.eps} \caption{\label{bbn} Abundances of $^4$He (mass fraction), D, $^3$He, $^7$Li and $^6$Li (by number relative to H) as a function of the baryon-to-photon ratio $\eta$ or $\Omega_B h^2$. The dashed and solid curves are respectively the calculated results in the standard BBN and the $X^-$ catalyzed BBN for the case of ($Y_X$, $\tau_X$)=(0.6, $1.6\times 10^3$~s). There is virtually no difference between the dashed and solid curves for $^4$He, D, and $^3$He. The band of the theoretical curve for each nucleus displays the 1~$\sigma$ limits taken from~\cite{Coc:2003ce}. The hatched regions represent the adopted abundance constraints from~\cite{oli04} for $^4$He,~\cite{Kirkman:2003uv} for D,~\cite{Ryan:1999vr} for $^7$Li, and~\cite{Asplund:2005yt} for $^6$Li, respectively. The vertical stripe represents the 1~$\sigma$~$\Omega_B h^2$ limits provided by WMAP~\cite{Spergel:2006hy}.} \end{figure} \section{Summary}\label{sec4} In summary, we have investigated light-element nucleosynthesis during BBN taking into account the possibility of massive, negatively charged $X^-$ particles which would bind to the light nuclei. When the chemical and kinetic processes associated with such particles are included in a BBN code in a fully dynamical manner, along with the reactions enabled by the $X^-$ particles, the $X^-$ particles are found to enhance the reaction rates in BBN, both by reducing the charge of the resulting $X$-nuclei and by enabling transfer reactions of the $X^-$ particles. $X^-$ particles greatly enhance the production of $^6$Li, primarily through the $X^-$ transfer reaction $^4$He$_X$($d$,$X^-$)$^6$Li. The $^7$Li abundance, however, decreases when the $X^-$ particle abundance is larger than~0.1 times the total baryon abundance. In this case, the $^7$Li abundance decreases with increasing $X^-$ particle abundance due to the inclusion of two resonance channels for $^7$Be$_X$($p$,$\gamma$)$^8$B$_X$ through the nuclear and atomic excited states of $^8$B$_X$. It was found to be important to predict precisely the binding energies and excited states of exotic $X$-nuclei in realistic quantum mechanical calculations. Both abundance ratios $^6$Li/H and $^7$Li/H observed in MPHSs are obtained with an appropriate choice for the lifetime and abundance of the $X^-$ particle. These observational constraints imply a lifetime and abundance roughly in the range of $\tau_X \sim 2 \times 10^3$~s and $Y_X \sim 0.1$. We deduce that this $Y_X$ value requires that $m_X \sim 50$~GeV in order to guarantee that this abundance of $X^-$ particles survives to the epoch of nucleosynthesis. \begin{acknowledgments} We are very grateful to Professor Masayasu Kamimura for enlightening suggestions on the nuclear reaction rates for transfer and radiative capture reactions. This work has been supported in part by the Mitsubishi Foundation, the Grant-in-Aid for Scientific Research (17540275) of the Ministry of Education, Science, Sports and Culture of Japan, and the JSPS Core-to-Core Program of International Research Network for Exotic Femto Systems (EFES). MK acknowledges support from the Japan Society for the Promotion of Science. Work at the University of Notre Dame was supported by the U.S. Department of Energy under Nuclear Theory Grant DE-FG02-95-ER40934.
RNB gratefully acknowledges the support of the National Astronomical Observatory of Japan during his stay there. \end{acknowledgments}
{ "attr-fineweb-edu": 1.751953, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUf7rxK6wB9mn5Bq_E
\section{Introduction}\label{sect:intro} We study an explicit full numerical discretization of the one-dimensional stochastic heat equation \begin{align} &\frac{\partial}{\partial t}u(t,x) = \frac{\partial^2}{\partial x^2} u(t,x) + f(t,x,u(t,x))+\sigma(t,x,u(t,x))\frac{\partial^2}{\partial t\partial x}W(t,x) \quad \mathrm{in}\ (0,\infty)\times (0,1), \nonumber \\ &u(t,0) = u(t,1)=0 \quad \mathrm{for}\ t\in(0,\infty),\nonumber \\ &u(0,x)=u_0(x) \quad \mathrm{for}\ x\in[0,1], \label{heateq} \end{align} where $W$ is a Brownian sheet on $[0,\infty)\times[0,1]$ defined on some stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$ satisfying the usual conditions, and $u_0$ is a continuous function on $[0,1]$ such that $u_0(0)=u_0(1)=0$. Assumptions on the coefficients $f$ and $\sigma$ will be specified below. For the spatial discretization, we use a standard finite difference scheme, as in \cite{gyongy1}. In order to discretize \eqref{heateq} with respect to the time variable, we consider an exponential method similar to the time integrators used in \cite{MR3033008,cqs14,MR3484400} for stochastic wave equations or in \cite{ac16s,Cohen2017} for stochastic Schr\"odinger equations. \smallskip Our main aim is to improve the temporal rate of convergence obtained by Gy\"ongy in \cite{gyongy2}. Indeed, in \cite{gyongy2}, both the explicit and the semi-implicit Euler-Maruyama schemes have been applied for the time discretization of problem \eqref{heateq}. When the functions $f$ and $\sigma$ are globally Lipschitz continuous in the third variable, a temporal convergence order of $\frac18-$ in the $L^q(\Omega)$-norm, for all $q\geq 2$, is obtained for these numerical schemes (see Theorem~3.1 in \cite{gyongy2} for a precise statement). Our first objective is to see if an explicit exponential method can provide a higher rate of convergence. In the present work, we answer this question positively and obtain the temporal rate $\frac14-$ (see the first part of Theorem~\ref{fullLip} below). We note that, as in \cite{gyongy2}, the latter estimate for the $L^q(\Omega)$-error holds for any fixed $t\in (0,T]$ and uniformly in the spatial variable, where $T>0$ is some fixed time horizon. On the other hand, we should also remark that, in \cite{gyongy2}, a rate of convergence $\frac14$ could be obtained only in the case where the initial condition $u_0$ belongs to $C^3([0,1])$. Finally, as in \cite{gyongy2}, we also prove that the exponential scheme converges almost surely to the solution of \eqref{heateq}, uniformly with respect to time and space variables (cf. Theorem~\ref{th:as}). \smallskip Our second objective is to refine the above-mentioned temporal rate of convergence in order to end up with a convergence order which is exactly $\frac14$ and with an estimate which is uniform both with respect to time and space variables. To this end, we assume that the initial condition $u_0$ belongs to some fractional Sobolev space (see \eqref{eq:22} for the precise definition). Indeed, as can be deduced from the second part of Theorem~\ref{fullLip} and well-known Sobolev embedding results, in order to have the rate $\frac14$, the hypothesis on $u_0$ implies that it is $\delta$-H\"older continuous for all $\delta\in (0,\frac12)$.
Finally, as in \cite{gyongy2}, we remove the globally Lipschitz assumption on the coefficients $f$ and $\sigma$ in equation \eqref{heateq}, and we prove convergence in probability for the proposed explicit exponential integrator (see Theorem~\ref{th:proba} below). \smallskip We should point out that there are also other important advantages to using the exponential method proposed here. First, it does not suffer from a step size restriction (imposed by a CFL condition) as the explicit Euler-Maruyama scheme from \cite{gyongy2} does. Secondly, it is an explicit scheme and therefore has implementation advantages over the implicit Euler-Maruyama scheme studied in \cite{gyongy2}. These facts will be illustrated numerically. \smallskip The numerical analysis of the stochastic heat equation \eqref{heateq} is an active research area. Without being too exhaustive, besides the above-mentioned papers \cite{gyongy1} and \cite{gyongy2}, we mention the following works regarding numerical discretizations of stochastic parabolic partial differential equations: \cite{gyongy1,Yan,MR2916876,MR3290962} (spatial approximations); \cite{MR1352735,MR1341554,gn97,anz98,MR1683281,m2an0068,DavieGaines,MR1951901, MR1953619,ps05,Walsh,mm05,ritter,MR2471778,j09,MR2646103,MR2830608,j11,klns11,MR3047942,wg12,MR3027891,MR3101829,MR3320928,MR3534472,drw016} (temporal and full discretizations); \cite{TaWangNie,lpt17} (stability). Observe that most of these references are concerned with an interpretation of stochastic partial differential equations in Hilbert spaces, and thus error estimates are provided in the $L^2([0,1])$ norm (or similar norms). The reader is referred to the monographs \cite{MR2856611,MR3154916,MR3308418} for a more comprehensive reference list. \smallskip In the present publication, we follow a similar approach as in \cite{cqs14} and \cite{gyongy2}. The main idea consists in establishing suitable \textit{mild} forms for the spatial approximation $u^M$ and for the full discretization scheme $u^{M,N}$. The obtained mild equations, together with some auxiliary results and the hypotheses on the coefficients and initial data, will allow us to deal with the $L^q(\Omega)$-error \[ \Bigl(\mathbb{E}[|u^M(t,x)-u^{M,N}(t,x)|^q]\Bigr)^{\frac 1{q}}, \] for all $q\geq 2$. The $L^q(\Omega)$-error comparing $u^M$ with the exact solution of \eqref{heateq} has already been studied in \cite{gyongy1}. \smallskip The paper is organized as follows. In Section \ref{section:Lipschitz}, we study the numerical approximation of the solution to equation \eqref{heateq} in the case of globally Lipschitz continuous coefficients. More precisely, we first recall the spatial discretization $u^M$ of \eqref{heateq} and prove some properties of $u^M$ needed in the sequel. Next, we introduce the full discretization scheme, prove that it satisfies a suitable mild form, and provide three auxiliary results which will be invoked in the proofs of the convergence results. At this point, we state and prove the main result on $L^q(\Omega)$-convergence, along with some numerical experiments illustrating its conclusion. Section \ref{section:Lipschitz} concludes with the result on almost sure convergence, where we also provide some numerical experiments. Finally, Section \ref{section:nonLipschitz} is devoted to the convergence in probability of the numerical solution to the exact solution of \eqref{heateq}, in the case where the coefficients $f$ and $\sigma$ are non-globally Lipschitz continuous.
\smallskip Observe that, throughout this article, $C$ will denote a generic constant that may vary from line to line. \section{Error analysis for globally Lipschitz continuous coefficients}\label{section:Lipschitz} This section is divided into three subsections. We begin by stating the assumptions we will make and by recalling the mild solution of \eqref{heateq}. The first subsection is dedicated to recalling the finite difference approximation from \cite{gyongy1} and some (new) results about it. In the second subsection, we numerically integrate the resulting semi-discrete system of stochastic differential equations in time to obtain a full approximation of \eqref{heateq}. We also state and prove our main result about convergence in the $2p$-th mean. Finally, in the third subsection, we prove almost sure convergence of the full approximation to the exact solution. In addition, numerical experiments are provided to illustrate the theoretical results of this section. In this section, we shall make the following assumptions on the coefficients of the stochastic heat equation \eqref{heateq}: for a given positive real number $T$, there exists a constant $C$ such that \begin{align}\label{L} |f(t,x,u)-f(t,y,v)|+|\sigma(t,x,u)-\sigma(t,y,v)|\leq C\bigl(|x-y|+|u-v|\bigr),\tag{L} \end{align} for all $t\in[0,T]$, $x,y\in[0,1]$, $u,v\in\mathbb{R}$, and \begin{align}\label{LG} |f(t,x,u)|+|\sigma(t,x,u)|\leq C(1+|u|),\tag{LG} \end{align} for all $t\in[0,T]$, $x\in[0,1]$, $u\in\mathbb{R}$. Assume also that the initial condition $u_0$ defines a continuous function on $[0,1]$ with $u_0(0)=u_0(1)=0$. The assumptions \eqref{L} and \eqref{LG} imply existence and uniqueness of a solution $u$ of equation \eqref{heateq} on the time interval $[0, T]$; see, e.g., Theorem 3.2 and Exercise 3.4 in \cite{walsh1}. Let us recall that, for a stochastic basis $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0}, \mathbb{P})$, a solution to equation \eqref{heateq} is an $\mathcal{F}_t$-adapted continuous process $\{u(t,x), (t,x)\in [0,T]\times[0,1]\}$ satisfying that, for every $\Phi\in C^\infty(\mathbb{R}^2)$ such that $\Phi(t,0)=\Phi(t,1)=0$ for all $t\geq 0$, we have \begin{align} \int_0^1 u(t,x)\Phi(t,x)\, \mathrm{d} x = & \int_0^1 u_0(x)\Phi(t,x)\, \mathrm{d} x \nonumber \\ & \quad + \int_0^t\int_0^1 u(s,x) \left(\frac{\partial^2 \Phi}{\partial x^2}(s,x) + \frac{\partial \Phi}{\partial s}(s,x)\right)\, \mathrm{d} x\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 f(s,x,u(s,x)) \Phi(s,x)\, \mathrm{d} x\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(s,x,u(s,x)) \Phi(s,x)\, W(\mathrm{d} s,\mathrm{d} x), \quad \mathbb{P}\text{-a.s.}, \label{eq:13} \end{align} for all $t\in [0,T]$. It is well-known that the above equation implies the following \textit{mild} form for \eqref{heateq}: \begin{equation} \label{exactsol} \begin{aligned} u(t,x)&= \int_0^1 G(t,x,y)u_0(y)\,\mathrm{d} y+\int_0^t\int_0^1 G(t-s,x,y)f(s,y,u(s,y))\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_0^t\int_0^1 G(t-s,x,y)\sigma(s,y,u(s,y))\,W(\mathrm{d} s,\mathrm{d} y), \quad \mathbb{P}\text{-a.s.}, \end{aligned} \end{equation} where $G(t,x,y)$ is the Green function of the linear heat equation with homogeneous Dirichlet boundary conditions: \[ G(t,x,y)=\sum_{j=1}^\infty e^{-j^2 \pi^2 t} \varphi_j(x)\varphi_j(y), \quad t>0,\, x,y\in[0,1], \] with $\varphi_j(x):=\sqrt{2} \sin(j\pi x)$, $j\geq 1$. Note that these functions form an orthonormal basis of $L^2([0,1])$.
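As an illustration, the kernel $G$ can be evaluated numerically by truncating the above series. The following minimal Python sketch is given purely to fix ideas; the truncation level $J$ and the evaluation point are arbitrary illustrative choices:
\begin{verbatim}
# Minimal sketch: evaluate the Dirichlet heat kernel on [0,1] by truncating
# G(t,x,y) = sum_{j>=1} exp(-j^2 pi^2 t) phi_j(x) phi_j(y),
# with phi_j(x) = sqrt(2) sin(j pi x).
import numpy as np

def G(t, x, y, J=500):
    j = np.arange(1, J + 1)
    return np.sum(np.exp(-j**2 * np.pi**2 * t)
                  * 2.0 * np.sin(j * np.pi * x) * np.sin(j * np.pi * y))

print(G(0.01, 0.3, 0.7))  # the series converges rapidly for any t > 0
\end{verbatim}
All computations below, however, are based on the discrete kernel $G^M$ introduced in the next subsection.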
\subsection{Spatial discretization of the stochastic heat equation}\label{sect:spatial} In this subsection we recall the finite difference discretization and some results obtained in \cite{gyongy1}. In addition to this, we show new regularity results for the approximated Green function $G^M(t,x,y)$ defined below, and for the space discrete approximation, which will be needed in the sequel. Let $M\geq 1$ be an integer and define the grid points $x_m=\frac{m}{M}$ for $m=0,\ldots,M$, and the mesh size $\Delta x=\frac{1}{M}$. We now use the standard finite difference scheme for the spatial approximation of \eqref{heateq} from \cite{gyongy1}. Let the process $u^M(t,\cdot)$ be defined as the solution of the system of stochastic differential equations (for $m=1,\ldots,M-1$) \begin{equation} \begin{aligned} \mathrm{d} u^M(t,x_m)&=M^2\left(u^M(t,x_{m+1})-2u^M(t,x_m)+u^M(t,x_{m-1})\right)\,\mathrm{d} t\\ &\quad+f(t,x_m,u^M(t,x_m))\,\mathrm{d} t\\ &\quad+M\sigma(t,x_m,u^M(t,x_m))\,\mathrm{d} (W(t,x_{m+1})-W(t,x_m)) \end{aligned} \label{fdeq} \end{equation} with Dirichlet boundary conditions $$u^M(t,0)=u^M(t,1)=0,$$ and initial value $$u^M(0,x_m)=u_0(x_m),$$ for $m=1,\ldots,M-1$. For $x\in[x_m,x_{m+1})$ we define \begin{equation} u^M(t,x):=u^M(t,x_m)+(Mx-m)(u^M(t,x_{m+1})-u^M(t,x_m)). \label{eq:-1} \end{equation} We use the notations $u_m^M(t):=u^M(t,x_m)$ and $W_m^M(t):=\sqrt{M}(W(t,x_{m+1})-W(t,x_m))$, for $m=1,\ldots,M-1$, and write the system \eqref{fdeq} as \begin{align*} \mathrm{d} u^M_m(t)&=M^2\sum_{i=1}^{M-1}D_{mi}u^M_i(t)\,\mathrm{d} t+f(t,x_m,u^M_m(t))\,\mathrm{d} t\\ &\quad+\sqrt{M}\sigma(t,x_m,u^M_m(t))\,\mathrm{d} W_m^M(t), \end{align*} with initial value $$u_m^M(0)=u_0(x_m),$$ for $m=1,\ldots,M-1,$ where $D=(D_{mi})_{m,i}$ is a square matrix of size $M-1$, with elements $D_{mm}=-2$, $D_{mi}=1$ for $|m-i|=1$, and $D_{mi}=0$ for $|m-i|>1$. Also, $W^M(t):=(W^M_m(t))_{m=1}^{M-1}$ is an $(M-1)$-dimensional Wiener process. Observe that the matrix $M^2D $ has eigenvalues $$\lambda_j^M:=-4\sin^2\left(\frac{j\pi}{2M}\right)M^2=-j^2\pi^2c_j^M,$$ where $$\frac{4}{\pi^2}\leq c_j^M:=\frac{\sin^2\left(\frac{j\pi}{2M}\right)}{\left(\frac{j\pi}{2M}\right)^2}\leq 1,$$ for $j=1,2,\ldots,M-1$ and every $M\geq 1$. Using the variation of constants formula, the exact solution to \eqref{fdeq} reads \begin{equation} \begin{aligned} u^M(t,x_m)&=\frac{1}{M}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M t)\varphi_j(x_m)\varphi_j(x_l)u_0(x_l)\\ &\quad+\int_0^t\frac{1}{M}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M(t-s))\varphi_j(x_m)\varphi_j(x_l)f(s,x_l,u^M(s,x_l))\,\mathrm{d} s\\ &\quad+\int_0^t\frac{1}{\sqrt{M}}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M(t-s))\varphi_j(x_m)\varphi_j(x_l)\sigma(s,x_l,u^M(s,x_l))\,\mathrm{d} W_l^M(s), \end{aligned} \label{spacediscsol} \end{equation} where we recall that $\varphi_j(x):=\sqrt{2}\sin(j\pi x)$ for $j=1,\ldots,M-1$. We next define the discrete kernel $G^M(t,x,y)$ by \begin{equation} G^M(t,x,y):=\sum_{j=1}^{M-1}\exp(\lambda_j^M t)\varphi_j^M(x)\varphi_j(\kappa_M(y)), \label{eq:1} \end{equation} where $\kappa_M(y):=\frac{[My]}{M}$, $\varphi_j^M(x):=\varphi_j(x_l)$ for $x=x_l$ and \[ \varphi_j^M(x):=\varphi_j(x_l)+(Mx-l)\left(\varphi_j(x_{l+1})-\varphi_j(x_l)\right), \quad \text{if } x\in(x_l,x_{l+1}]. \]
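The spectral data of the matrix $M^2D$ quoted above are classical and easy to check numerically; the following minimal Python sketch is a sanity check only and is not part of the scheme:
\begin{verbatim}
# Minimal sketch: check that M^2 D has eigenvalues
# lambda_j^M = -4 M^2 sin^2(j pi / (2M)), j = 1, ..., M-1,
# with D = tridiag(1, -2, 1) of size (M-1) x (M-1).
import numpy as np

M = 16
D = (np.diag(-2.0 * np.ones(M - 1))
     + np.diag(np.ones(M - 2), 1) + np.diag(np.ones(M - 2), -1))
j = np.arange(1, M)
lam = -4.0 * M**2 * np.sin(j * np.pi / (2 * M))**2
assert np.allclose(np.sort(np.linalg.eigvalsh(M**2 * D)), np.sort(lam))
print("eigenvalues of M^2 D verified for M =", M)
\end{verbatim}
The same decomposition is what makes the action of the matrix exponential in the time integrator of Section \ref{sect:temporal} cheap to apply.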
With these definitions in hand, one sees that the semi-discrete solution $u^M$ satisfies the mild equation: \begin{equation} \begin{aligned} u^M(t,x)&=\int_0^1 G^M(t,x,y)u_0(\kappa_M(y))\,\mathrm{d} y\\ &\quad+\int_0^t\int_0^1 G^M(t-s,x,y)f(s,\kappa_M(y),u^M(s,\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_0^t\int_0^1 G^M(t-s,x,y)\sigma(s,\kappa_M(y),u^M(s,\kappa_M(y)))\,\mathrm{d} W(s,y) \end{aligned} \label{spaceapp} \end{equation} $\mathbb{P}$-a.s., for all $t\geq 0$ and $x\in[0,1].$ Next, we proceed by collecting some useful results for the error analysis of the fully discrete numerical discretization presented in the next subsection. The following two results are proved in \cite{gyongy1}. Recall that $u^M$ is the space discrete approximation given by \eqref{spaceapp} and that $u$ is the exact solution given by equation \eqref{exactsol}. \begin{proposition}[Proposition $3.5$ in \cite{gyongy1}]\label{xreg} Assume that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$, and that the functions $f$ and $\sigma$ satisfy the condition \eqref{LG}. Then, for every $p\geq 1$, there exists a constant $C$ such that $$ \sup_{M\geq1}\sup_{(t,x)\in[0,T]\times[0,1]} \mathbb{E}[|u^M(t,x)|^{2p}]\leq C. $$ \end{proposition} \begin{theorem}[Theorem $3.1$ in \cite{gyongy1}]\label{th:space} Assume that $f$ and $\sigma$ satisfy the conditions \eqref{L} and \eqref{LG}, and that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$. Then, for every $0<\alpha<\frac14$, $p\geq 1$ and for every $t>0$, there is a constant $C=C(\alpha,p,t)$ such that \begin{align}\label{xconv} \sup_{x\in[0,1]}\Big(\mathbb{E}[|u^M(t,x)-u(t,x)|^{2p}]\Big)^{\frac{1}{2p}}\leq C(\Delta x)^{\alpha}. \end{align} We recall that $\Delta x=1/M$ is the mesh size in space. Moreover, $u^M(t,x)$ converges to $u(t,x)$ almost surely as $M\rightarrow\infty$, uniformly in $t\in[0,T]$ and $x\in [0,1]$, for every $T>0$. If $u_0$ is sufficiently smooth (e.g., $u_0\in C^3([0,1])$) then for every $T>0$, estimate \eqref{xconv} holds with $\alpha=\frac12$ and with the same constant $C$ for all $t\in[0,T]$ and integer $M\geq 1$. \end{theorem} \medskip We will also make use of the following estimates on the discrete Green function. \smallskip \begin{lemma}\label{greenreg} There is a constant $C$ such that the following estimates hold: \begin{enumerate}[label=(\roman*)] \item For all $0<s<t\leq T$: \begin{align}\label{greenreg1} \sup_{M\geq 1} \sup_{x\in[0,1]}\int_0^s\int_0^1|G^M(t-r,x,y)-G^M(s-r,x,y)|^2\,\mathrm{d} y\,\mathrm{d} r\leq C(t-s)^{1/2}. \end{align} \item For all $t\in (0,T]$: $$ \sup_{M\geq1}\sup_{x\in[0,1]}\int_0^1|G^M(t,x,y)|^2\,\mathrm{d} y\leq C\frac{1}{\sqrt{t}}. $$ \item For all $0<s<t\leq T$ and $\alpha\in (\frac12,\frac52)$: $$ \sup_{M\geq1} \sup_{x\in[0,1]}\int_0^1 |G^M(t,x,y)-G^M(s,x,y)|^2\,\mathrm{d} y\leq Cs^{-\alpha}(t-s)^{\alpha-\frac12}. $$ \end{enumerate} \end{lemma} \begin{proof} Recall that $$G^M(t,x,y)=\sum_{j=1}^{M-1}\exp(\lambda_j^M t)\varphi_j^M(x)\varphi_j(\kappa_M(y)),$$ where $\kappa_M(y)=\frac{[My]}{M},$ $\varphi_j^M(x)=\varphi_j\left(\frac{l}{M}\right)$ for $x=\frac{l}{M}$ and \[ \varphi_j^M(x)=\varphi_j\left(\frac{l}{M}\right)+(Mx-l)\left(\varphi_j \left(\frac{l+1}{M}\right)-\varphi_j\left(\frac{l}{M}\right)\right),\quad \text{if } x\in \left(\frac{l}{M},\frac{l+1}{M}\right]. \] We first prove $(i)$. Observe that a general version of this result is used in the proof of \cite[Lem.~3.6]{gyongy1} (see the term $A_1^{2p}$ therein).
Using the definition of the discrete Green function, we have \begin{align*} &\int_0^s \int_0^1 |G^M(t-r,x,y)-G^M(s-r,x,y)|^2\,\mathrm{d} y\,\mathrm{d} r \\ &\quad= \int_0^s\int_0^1\left|\sum_{j=1}^{M-1}(\exp(\lambda_j^M (t-r))-\exp(\lambda_j^M (s-r)))\varphi_j^M(x)\varphi_j(\kappa_M(y))\right|^2\,\mathrm{d} y\,\mathrm{d} r. \end{align*} At this point, we use the fact that the vectors \[ e_j=\left(\sqrt{\frac2M} \sin\left(j\frac kM \pi\right),\; k=1,\dots,M-1\right), \quad j=1,\dots,M-1, \] form an orthonormal basis of $\mathbb{R}^{M-1}$, which implies that \begin{equation} \int_0^1 \varphi_j(\kappa_M(y)) \varphi_l(\kappa_M(y))\, \mathrm{d} y= \delta_{\{j=l\}}. \label{eq:8} \end{equation} Hence, using also the definitions of $\varphi_j^M$ and $\lambda_j^M$, \begin{align*} &\int_0^s \int_0^1 |G^M(t-r,x,y)-G^M(s-r,x,y)|^2\,\mathrm{d} y\,\mathrm{d} r\\ &\quad=\int_0^s\sum_{j=1}^{M-1} |\exp(\lambda_j^M (t-r))-\exp(\lambda_j^M (s-r))|^2|\varphi_j^M(x)|^2\,\mathrm{d} r\\ &\quad\leq C\sum_{j=1}^{M-1}\int_0^s\exp(2\lambda_j^M (s-r))\,\mathrm{d} r\, |1-\exp(\lambda_j^M(t-s))|^2 \\ &\quad\leq C\sum_{j=1}^{M-1}\int_0^s\exp(-2j^2\pi^2 c_j^M (s-r))\,\mathrm{d} r\, (1-\exp(-j^2\pi^2c_j^M(t-s)))^2 \\ &\quad\leq C\sum_{j=1}^\infty j^{-2}(j^4(t-s)^2\wedge 1). \end{align*} Here we have used that $1-\exp(-x)\leq x$, and that $(c_j^M)^{-1}$ is bounded. Let $N:=\left[\frac{1}{\sqrt{t-s}}\right]$, where $[\cdot]$ denotes the integer part, and observe that (by comparing sums with integrals) \begin{align*} \sum_{j=1}^\infty j^{-2}(j^4(t-s)^2\wedge 1) & = \sum_{j=1}^N j^2 (t-s)^2 + \sum_{j=N+1}^\infty j^{-2} \\ & \leq C (t-s)^2 (N+1)^3 + C(N+1)^{-1} \\ & \leq C (t-s)^2 (t-s)^{-\frac32} + C(t-s)^{\frac12} \\ & \leq C (t-s)^{\frac12}, \end{align*} where we used that $N\leq \frac{1}{\sqrt{t-s}}<N+1$, so that $(N+1)^{-1}\leq \sqrt{t-s}$ and $N+1\leq \frac{1+\sqrt{T}}{\sqrt{t-s}}$. This proves part $(i)$. The proof of $(ii)$ follows by similar arguments as those used in the proofs of \cite[Lem.~8.1, Thm~8.2]{Walsh}. First note that, as above, we have \begin{align*} \int_0^1 |G^M(t,x,y)|^2\,\mathrm{d} y&=\int_0^1 \left|\sum_{j=1}^{M-1}\exp(\lambda_j^M t)\varphi_j^M(x)\varphi_j(\kappa_M(y))\right|^2\,\mathrm{d} y\\ &\leq C\sum_{j=1}^{M-1}\exp(-2j^2\pi^2c_j^M t). \end{align*} The estimate in $(ii)$ now follows from the inequality $$\sum_{j=1}^{M-1}\exp(-2j^2\pi^2c_j^M t)\leq C\left(M\wedge\frac{1}{\sqrt{2c_j^M}\pi\sqrt{t}}\right),$$ which is proved in \cite[Lem.~8.1]{Walsh}. We now prove $(iii)$. Using the definition of the discrete Green function, properties of $\varphi_j$, and the definition of $\lambda_j^M$, we have \begin{align*} \int_0^1|G^M(t,x,y)-G^M(s,x,y)|^2\,\mathrm{d} y &\leq \sum_{j=1}^{M-1} |\exp(\lambda_j^M t)-\exp(\lambda_j^M s)|^2\\ &\leq \sum_{j=1}^{M-1} |\exp(-j^2\pi^2 c_j^M s)|^2|1-\exp(-j^2\pi^2 c_j^M(t-s))|^2. \end{align*} Since $1-\exp(-x)\leq x$ for $x\geq 0$ and $\exp(-x)\leq C_{\alpha}x^{-\alpha}$ for all $x>0$ and $\alpha>0$, it follows that \begin{align*} \int_0^1|G^M(t,x,y)-G^M(s,x,y)|^2\,\mathrm{d} y &\leq C_{\alpha}\sum_{j=1}^{M-1} j^{-2\alpha} s^{-\alpha}(1\wedge j^4(t-s)^2)\\ &\leq\tilde C_1(t-s)^2s^{-\alpha}\sum_{j=1}^N j^{4-2\alpha}+\tilde C_2s^{-\alpha}\sum_{j=N+1}^\infty j^{-2\alpha}, \end{align*} where $N=\left[\frac{1}{\sqrt{t-s}}\right]$ and $\tilde C_1$ and $\tilde C_2$ are independent of $t$ and $s$. We now estimate these two terms as we did in the proof of part $(i)$.
Namely, whenever $\alpha<\frac52$ we have that \begin{align*} (t-s)^2s^{-\alpha}\sum_{j=1}^N j^{4-2\alpha}&\leq C(t-s)^2s^{-\alpha}(N+1)^{5-2\alpha}\\ &\leq C(t-s)^{\alpha-1/2}s^{-\alpha}, \end{align*} using the fact that $N+1\leq\frac{1+\sqrt{t-s}}{\sqrt{t-s}}\leq\frac{C_T}{\sqrt{t-s}}$. For the second term, if $\alpha>\frac12$ we obtain \begin{align*} s^{-\alpha}\sum_{j=N+1}^\infty j^{-2\alpha}&=s^{-\alpha}(N+1)^{-2\alpha}+s^{-\alpha}\sum_{j=N+2}^\infty j^{-2\alpha} \leq Cs^{-\alpha}(N+1)^{1-2\alpha}\\ &\leq C(t-s)^{\alpha-1/2}s^{-\alpha}. \end{align*} Collecting these two estimates leads to the conclusion of the lemma. \end{proof} \medskip For the numerical analysis of the exponential method applied to the nonlinear stochastic heat equation \eqref{heateq} presented in the next subsection, the initial data $u_0$ will be in the space $H^\alpha([0,1])$, which we now define. For $\alpha\in\mathbb{R}$, we define the space $H^\alpha([0,1])$ to be the set of functions $g\colon[0,1]\to\mathbb{R}$ such that \begin{equation} \norm{g}_{\alpha}=\left(\sum_{j=1}^\infty(1+j^2)^\alpha|\left\langle g,\varphi_j\right\rangle|^2\right)^{1/2}<\infty, \label{eq:22} \end{equation} where we recall that $\varphi_j(x)=\sqrt{2}\sin(j\pi x)$, for $j\geq 1$. The inner product in the above sum stands for the usual $L^2([0,1])$ inner product. Further restrictions on $\alpha$ will be made in the results below. For the sake of simplicity, the space $H^\alpha([0,1])$ will be denoted by $H^\alpha$. Note that this space is a subspace of the fractional Sobolev space of fractional order $\alpha$ and integrability order $p=2$ (see \cite{Triebel}). Moreover, for any $\alpha>\frac12$, the space $H^\alpha$ is continuously embedded in the space of $\delta$-H\"older-continuous functions for all $\delta\in (0,\alpha-\frac12)$ (see, e.g., \cite[Thm.~8.2]{MR2944369}). \medskip Finally, we need the following regularity results for the finite difference approximation $u^M$ given by \eqref{spaceapp}. \begin{proposition}\label{reg} Assume that $f$ and $\sigma$ satisfy the condition \eqref{LG}. \begin{enumerate} \item Assume that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$. For any $0< s\leq t\leq T$, any $p\geq 1$, and $\frac12<\alpha<\frac52$, we have $$ \sup_{M\geq 1} \sup_{x\in[0,1]}\mathbb{E}[|u^M(t,x)-u^M(s,x)|^{2p}]\leq Cs^{-\alpha p}(t-s)^{\nu p}, $$ where $\nu=\frac12 \wedge (\alpha-\frac12)$. \item Assume that $u_0\in H^{\beta}([0,1])$, with $u_0(0)=u_0(1)=0$, for some $\beta>\frac{1}{2}$. For any $0\leq s\leq t\leq T$ and any $p\geq 1$, we have $$ \sup_{M\geq 1} \sup_{x\in[0,1]}\mathbb{E}[|u^M(t,x)-u^M(s,x)|^{2p}]\leq C(t-s)^{\tau p}, $$ where $\tau = \frac12\wedge(\beta-\frac12)$. \end{enumerate} \end{proposition} \begin{proof} For ease of presentation, we consider functions $f(u)$ and $\sigma(u)$ depending only on $u$. Let us first define \begin{align*} F^M(t,x)&:=\int_0^t\int_0^1 G^M(t-s,x,y)f(u^M(s,y))\,\mathrm{d} y\,\mathrm{d} s\\ H^M(t,x)&:=\int_0^t\int_0^1 G^M(t-s,x,y)\sigma(u^M(s,y))\,\mathrm{d} W(s,y). \end{align*} Then we have \begin{align*} u^M(t,x)-u^M(s,x)&=\int_0^1(G^M(t,x,y)-G^M(s,x,y))u_0(\kappa_M(y))\,\mathrm{d} y\\ &\quad+F^M(t,x)-F^M(s,x)\\ &\quad+H^M(t,x)-H^M(s,x). \end{align*} By \cite[Lem.~3.6]{gyongy1}, the last two terms can be estimated by \begin{align}\label{est1} \mathbb{E}[|F^M(t,x)-F^M(s,x)|^{2p}]+\mathbb{E}[|H^M(t,x)-H^M(s,x)|^{2p}]\leq C|t-s|^\frac{p}{2}. \end{align} It remains to estimate the term involving $u_0$. Assume first that $u_0\in C([0,1])$.
We use the third part of Lemma~\ref{greenreg} to get the following estimate: \begin{align*} &\left(\mathbb{E}\left[\left|\int_0^1(G^M(t,x,y)-G^M(s,x,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^{2p}\right]\right)^{1/p}\\ &\quad=\left|\int_0^1(G^M(t,x,y)-G^M(s,x,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^2\\ &\quad\leq C\int_0^1 |G^M(t,x,y)-G^M(s,x,y)|^2|u_0(\kappa_M(y))|^2\,\mathrm{d} y\\ &\quad\leq Cs^{-\alpha}(t-s)^{\alpha-\frac{1}{2}}. \end{align*} Collecting the above estimates and taking into account that $s^{-\alpha p}\geq T^{-\alpha p}$ in \eqref{est1}, we get $$ \sup_{M\geq 1} \sup_{x\in[0,1]}\mathbb{E}[|u^M(t,x)-u^M(s,x)|^{2p}]\leq C s^{-\alpha p}(t-s)^{\nu p}, $$ where $\nu=\frac12 \wedge (\alpha-\frac12)$. Assume now that $u_0\in H^{\beta}([0,1])$ for some $\beta>\frac{1}{2}$. Using the explicit expression of $G^M$, the Cauchy-Schwarz inequality, and the fact that $1-\exp(-x)\leq x$, we have \begin{align*} &\left|\int_0^1(G^M(t,x,y)-G^M(s,x,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^{2p}\\ &\quad=\left|\sum_{j=1}^{M-1}(\exp(\lambda_j^M t)-\exp(\lambda_j^M s))\langle u_0(\kappa_M(y)),\varphi_j(\kappa_M(y))\rangle\varphi_j^M(x)\right|^{2p}\\ &\quad\leq \left(\sum_{j=1}^{M-1} |\exp(\lambda_j^M t)-\exp(\lambda_j^M s)| |\langle u_0,\varphi_j\rangle|\right)^{2p}\\ &\quad \leq C\left(\sum_{j=1}^{M-1} j^{-2\beta}|\exp(\lambda_j^M t)-\exp(\lambda_j^M s)|^2\right)^p\left(\sum_{j=1}^\infty j^{2\beta}|\langle u_0,\varphi_j\rangle|^2\right)^p\\ &\quad\leq C\left(\sum_{j=1}^{M-1}j^{-2\beta}\exp(2\lambda_j^M s)|\exp(\lambda_j^M (t-s))-1|^2\right)^p\norm{u_0}_{\beta}^{2p}\\ &\quad\leq C\left(\sum_{j=1}^{\infty}j^{-2\beta}(j^4(t-s)^2\wedge 1)\right)^p. \end{align*} Here we have used that $\langle u_0(\kappa_M(y)),\varphi_j(\kappa_M(y))\rangle = \langle u_0, \varphi_j\rangle$, which can be verified by a simple calculation (see equation $(21)$ in \cite{Quer-Sardanyons2006}). Furthermore, for $\beta>\frac{5}{2}$, we have $$\sum_{j=1}^{\infty}j^{-2\beta}(j^4(t-s)^2\wedge 1)\leq C(t-s)^2.$$ On the other hand, if $\beta\in (\frac{1}{2},\frac{5}{2}]$, $$\sum_{j=1}^{\infty}j^{-2\beta}(j^4(t-s)^2\wedge 1) \leq (t-s)^2 \sum_{j=1}^{N}j^{4-2\beta} + \sum_{j=N+1}^\infty j^{-2\beta},$$ where $N=\left[\frac{1}{\sqrt{t-s}}\right]$, and $[\cdot]$ denotes the integer part. Note that $$(t-s)^2\sum_{j=1}^{N}j^{4-2\beta}\leq C(t-s)^{\beta-\frac{1}{2}}$$ and $$\sum_{j=N+1}^\infty j^{-2\beta}\leq C(t-s)^{\beta-\frac{1}{2}}.$$ Hence, we arrive at the estimate \begin{align}\label{est2} \mathbb{E}\left[\left|\int_0^1(G^M(t,x,y)-G^M(s,x,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^{2p}\right]\leq C(t-s)^{\gamma p}, \end{align} where $\gamma = 2 \wedge (\beta-\frac{1}{2})$, for $\beta>\frac{1}{2}$. By the estimates \eqref{est1} and \eqref{est2} we have \begin{align*} \sup_{M\geq 1} \sup_{x\in[0,1]}\mathbb{E}[|u^M(t,x)-u^M(s,x)|^{2p}]&\leq C(|t-s|^\frac{p}{2}+|t-s|^{\gamma p})\\ &\leq C|t-s|^{\tau p}, \end{align*} where $\tau=\frac12 \wedge (\beta-\frac12)$, for $\beta>\frac{1}{2}$. \end{proof} \subsection{Full discretization: $L^{2p}(\Omega)$-convergence} \label{sect:temporal} This section is devoted to the time discretization of the semi-discrete problem presented in the previous subsection; the resulting full approximation will be denoted by $u^{M,N}$. Next, we prove properties of $u^{M,N}$ which will be needed in the sequel, and we state and prove the main result of the present section (cf. Theorem \ref{th:time} below). Finally, some numerical experiments will be performed in order to illustrate the theoretical results obtained so far.
\medskip We start by discretizing the space discrete solution \eqref{spacediscsol} in time using an exponential integrator. For an integer $N\geq1$ and some fixed final time $T>0$, let $\Delta t=\frac{T}{N}$ and define the discrete times $t_n=n\Delta t$ for $n=0,1,\ldots,N$. For simplicity of presentation, we consider that the functions $f$ and $\sigma$ only depend on the third variable. Let us now consider the mild equation \eqref{spacediscsol} on the small time interval $[t_n,t_{n+1}]$ written in a more compact form (recall the notation $u^M_m(t)=u^M(t,x_m)$), as follows: $$ u^M(t_{n+1})=e^{A\Delta t}u^M(t_n)+\int_{t_n}^{t_{n+1}}e^{A(t_{n+1}-s)}F(u^M(s))\,\mathrm{d} s+ \int_{t_n}^{t_{n+1}}e^{A(t_{n+1}-s)}\Sigma(u^M(s))\, \mathrm{d} W^M(s), $$ with the finite difference matrix $A:=M^2D$, the vector $F(u^M(s))$ with entries $f(u^M_m(s))$ for $m=1, 2, \ldots, M-1$, and the diagonal matrix $\Sigma(u^M(s))$ with elements $\sqrt{M}\sigma(u^M_m(s))$ for $m=1, 2, \ldots, M-1$. The matrix $D$ has been defined in Section \ref{sect:spatial}. We next discretize the integrals in the above mild equation by freezing the integrands at the left endpoints of the intervals, so we obtain the explicit exponential integrator (omitting the explicit dependence on $M$ for clarity) \begin{equation} \begin{aligned} {{\mathcal U}}^0&:=u^M(0),\\ {{\mathcal U}}^{n+1}&:=e^{A\Delta t}\bigl({{\mathcal U}}^n+F({{\mathcal U}}^n)\Delta t+\Sigma({{\mathcal U}}^n)\Delta W^n\bigr), \end{aligned} \label{sexp} \end{equation} where the terms $\Delta W^n:=W^{M}(t_{n+1})-W^M(t_n)$ denote the $(M-1)$-dimensional Wiener increments. The above formulation of the exponential integrator will be used for the practical computations presented below. \begin{remark} In some particular situations, alternative approximations of the integrals in the mild equations are possible, see for instance \cite{MR2652783,MR2471778,MR3047942}. This could possibly lead to better numerical schemes or improved error estimates, which will be investigated in future works. \end{remark} For the theoretical parts presented below, we will make use of the discrete Green function $G^M$ (see \eqref{eq:1}) in order to write the numerical scheme in a more suitable form. We thus obtain the approximation $U_m^{n+1}\approx u(t_{n+1},x_m)$ given by (with a slight abuse of notation for the functions $f$ and $\sigma$) \begin{align*} U_m^{n+1}&=\frac{1}{M}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M\Delta t)\varphi_j(x_m)\varphi_j(x_l)U_l^n\\ &\quad+\Delta t\frac{1}{M}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M\Delta t)\varphi_j(x_m)\varphi_j(x_l)f(U_l^n)\\ &\quad+\frac{1}{\sqrt{M}}\sum_{l=1}^{M-1}\sum_{j=1}^{M-1}\exp(\lambda_j^M\Delta t)\varphi_j(x_m)\varphi_j(x_l)\sigma(U_l^n)(W_l^M(t_{n+1})-W_l^M(t_n)). \end{align*}
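Before turning to the mild form used in the analysis, we record how the scheme \eqref{sexp} can be implemented in practice. The following minimal Python sketch is an illustration only: the coefficients $f(u)=-u$ and $\sigma(u)=u/2$, the initial condition $u_0(x)=\sin(\pi x)$, and all discretization parameters are assumptions made for the example, not the settings of the experiments reported below.
\begin{verbatim}
# Minimal sketch of the exponential scheme (sexp) for the stochastic heat
# equation on (0,1) with homogeneous Dirichlet boundary conditions.
# Assumed illustrative choices: f(u) = -u, sigma(u) = u/2, u0(x) = sin(pi x).
import numpy as np
from scipy.linalg import expm

M, N, T = 64, 1024, 0.5
dt = T / N
x = np.linspace(0.0, 1.0, M + 1)[1:-1]      # interior grid points x_1, ..., x_{M-1}

# Finite difference matrix A = M^2 D, with D = tridiag(1, -2, 1)
D = (np.diag(-2.0 * np.ones(M - 1))
     + np.diag(np.ones(M - 2), 1) + np.diag(np.ones(M - 2), -1))
E = expm(M**2 * D * dt)                     # propagator exp(A dt)

f = lambda u: -u
sigma = lambda u: 0.5 * u

rng = np.random.default_rng(0)
U = np.sin(np.pi * x)                       # U^0 = u0 at the grid points
for n in range(N):
    # increments of the (M-1)-dimensional Wiener process W^M, variance dt
    dW = np.sqrt(dt) * rng.standard_normal(M - 1)
    U = E @ (U + dt * f(U) + np.sqrt(M) * sigma(U) * dW)
# U now approximates u^M(T, x_m) for m = 1, ..., M-1
\end{verbatim}
In practice one need not form the dense matrix exponential: using the eigenpairs $(\lambda_j^M,e_j)$ of $M^2D$, the action of $e^{A\Delta t}$ reduces to a discrete sine transform, so each time step can be applied in $\mathcal{O}(M\log M)$ operations.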
In terms of the discrete kernel $G^M$, the scheme can be written in the equivalent form \begin{align*} U_m^{n+1}&=\int_0^1 G^M(t_{n+1}-t_n,x_m,y)U_{M\kappa_M(y)}^n\,\mathrm{d} y\\ &\quad+\int_{t_n}^{t_{n+1}}\int_0^1 G^M(t_{n+1}-t_n,x_m,y)f(U_{M\kappa_M(y)}^n)\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_{t_n}^{t_{n+1}}\int_0^1 G^M(t_{n+1}-t_n,x_m,y)\sigma(U_{M\kappa_M(y)}^n)\,W(\mathrm{d} s,\mathrm{d} y), \end{align*} where we recall that $$G^M(t,x,y)=\sum_{j=1}^{M-1}\exp(\lambda_j^M t)\varphi_j^M(x)\varphi_j(\kappa_M(y)),$$ and $\kappa_M(y)=\frac{[My]}{M}$, $\varphi_j^M(x)=\varphi_j(x_l)$ for $x=x_l$ and $\varphi_j^M(x)=\varphi_j(x_l)+(Mx-l)(\varphi_j(x_{l+1})-\varphi_j(x_l))$ for $x\in(x_l, x_{l+1}].$ In order to exhibit a more convenient mild form of the numerical solution $U_m^n$, we iterate the integral equation above to obtain \begin{align*} U_m^{n+1}&=\int_0^1 G^M(t_{n+1},x_m,y)u_0(\kappa_M(y))\,\mathrm{d} y\\ &\quad+\sum_{r=0}^n\int_{t_r}^{t_{r+1}}\int_0^1 G^M(t_{n+1}-t_r,x_m,y)f(U_{M\kappa_M(y)}^r)\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\sum_{r=0}^n\int_{t_r}^{t_{r+1}}\int_0^1 G^M(t_{n+1}-t_r,x_m,y)\sigma(U_{M\kappa_M(y)}^r)\,W(\mathrm{d} s,\mathrm{d} y), \end{align*} for all $m=1,\dots,M-1$ and $n=0,1,\dots,N-1$. This implies that \begin{align} U_m^{n+1}&=\int_0^1 G^M(t_{n+1},x_m,y)u_0(\kappa_M(y))\,\mathrm{d} y \nonumber \\ &\quad+\int_0^{t_{n+1}}\int_0^1 G^M(t_{n+1}-\kappa_N^T(s),x_m,y) f\big(U_{M\kappa_M(y)}^{\kappa_N^T(s)/\Delta t}\big)\,\mathrm{d} y\,\mathrm{d} s \nonumber \\ &\quad+ \int_0^{t_{n+1}} \int_0^1 G^M(t_{n+1}-\kappa_N^T(s),x_m,y)\sigma\big(U_{M\kappa_M(y)}^{\kappa_N^T(s)/\Delta t}\big)\,W(\mathrm{d} s,\mathrm{d} y), \label{eq:2} \end{align} where we have used the notation $\kappa_N^T(s):=T\kappa_N(\frac{s}{T})$, with $\kappa_N(\cdot)$ defined as $\kappa_M(\cdot)$ above; that is, $\kappa_N^T(s)=t_r$ for $s\in[t_r,t_{r+1})$. Set $u^{M,N}(t_n,x_m):=U_m^n$. Then, equation \eqref{eq:2} yields \begin{align} &u^{M,N}(t_n,x_m)=\int_0^1 G^M(t_n,x_m,y)u_0(\kappa_M(y))\,\mathrm{d} y \nonumber \\ &\quad+\int_0^{t_n}\int_0^1 G^M(t_n-\kappa_N^T(s),x_m,y) f(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s \nonumber \\ &\quad+ \int_0^{t_n} \int_0^1 G^M(t_n-\kappa_N^T(s),x_m,y) \sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,W(\mathrm{d} s,\mathrm{d} y). \label{eq:3} \end{align} At this point, we will introduce the \textit{weak} form associated with the full discretization scheme, and in particular with equation \eqref{eq:3}. This will allow us to define a continuous version of the scheme, which will be denoted by $u^{M,N}(t,x)$, with $(t,x)\in [0,T]\times [0,1]$. More precisely, let $\{v(t,x),\, (t,x)\in[0,T]\times [0,1]\}$ be the unique $\mathcal{F}_t$-adapted continuous random field satisfying the following: for all $\Phi\in C^\infty(\mathbb{R}^2)$ with $\Phi(t,0)=\Phi(t,1)=0$ for all $t$, it holds \begin{align} \int_0^1 v(t,\kappa_M(y))\Phi(t,y)\, \mathrm{d} y = & \int_0^1 u_0(\kappa_M(y))\Phi(t,y)\, \mathrm{d} y \nonumber \\ & \quad + \int_0^t\int_0^1 v(s,\kappa_M(y)) \left(\Delta_M \Phi(s,y) + \frac{\partial \Phi}{\partial s}(s,y)\right)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 f(v(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(v(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, W(\mathrm{d} s,\mathrm{d} y), \quad \mathbb{P}\text{-a.s.}, \label{eq:6} \end{align} for all $t\in [0,T]$. Here, $\Delta_M$ denotes the discrete Laplacian which, recalling that $\Delta x=\frac1M$, is defined by \[ \Delta_M\Phi(s,y):= (\Delta x)^{-2} \left\{\Phi(s,y+\Delta x) - 2\Phi(s,y) + \Phi(s,y-\Delta x)\right\}.
\] Let us prove that, on the time-space grid points, the random field $v$ fulfills equation \eqref{eq:3}. That is, we have the following result. \medskip \begin{lemma}\label{lem:1} With the above notations at hand, we have that, for all $m=1,\dots,M-1$ and $n=0,1,\dots,N$, \begin{align} &v(t_n,x_m)=\int_0^1 G^M(t_n,x_m,y)u_0(\kappa_M(y))\,\mathrm{d} y \nonumber \\ &\quad+\int_0^{t_n}\int_0^1 G^M(t_n-\kappa_N^T(s),x_m,y) f(v(\kappa_N^T(s),\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s \nonumber \\ &\quad+ \int_0^{t_n} \int_0^1 G^M(t_n-\kappa_N^T(s),x_m,y) \sigma(v(\kappa_N^T(s),\kappa_M(y)))\,W(\mathrm{d} s,\mathrm{d} y). \label{eq:4} \end{align} \end{lemma} \begin{proof} We will follow some of the arguments developed in the proof of \cite[Thm.~3.2]{walsh1}. Indeed, for any $\phi\in C^\infty(\mathbb{R})$ and any $(t,y)\in [0,T]\times [0,1]$, we define \[ G_t^M(\phi,y):=\int_0^1 G^M(t,z,y)\phi(z)\, \mathrm{d} z. \] Since the Green function $G^M$ solves the discretized homogeneous heat equation with Dirichlet boundary conditions, that is, we have $G^M(t,x,0)=G^M(t,x,1)=0$ and, for any fixed $x\in (0,1)$, \[ \frac{\partial}{\partial t} G^M(t,x,y)-\Delta_M G^M(t,x,y)=0, \] we can infer that \begin{align*} G_t^M(\phi,y) & = \int_0^1 \left(G^M(0,z,y)+\int_0^t \Delta_M G^M(s,z,y) \mathrm{d} s\right) \phi(z)\, \mathrm{d} z \\ & = \int_0^1 G^M(0,z,y)\phi(z)\, \mathrm{d} z + \int_0^t\int_0^1 \Delta_M G^M(s,z,y)\phi(z)\, \mathrm{d} z\,\mathrm{d} s. \end{align*} Hence $\displaystyle\frac{\partial}{\partial t}G_t^M(\phi,y)= \int_0^1 \Delta_M G^M(t,z,y)\phi(z)\, \mathrm{d} z$. On the other hand, since \[\Delta_M G_t^M(\phi,y)=\int_0^1 \Delta_M G^M(t,z,y)\phi(z)\, \mathrm{d} z,\] we deduce that \begin{equation} \frac{\partial}{\partial t}G_t^M(\phi,y) - \Delta_M G_t^M(\phi,y)=0, \label{eq:5} \end{equation} with $(t,y)\in [0,T]\times [0,1]$. At this point, we take $\Phi(s,y)=G_{t-\kappa_N^T(s)}^M (\phi,y)$, with $t\in[0,T]$ and $\phi\in C^\infty(\mathbb{R})$, and plug this $\Phi$ into \eqref{eq:6}. Thus, by \eqref{eq:5} we get that \begin{align*} \int_0^1 v(t,\kappa_M(y)) G_{t-\kappa_N^T(t)}^M (\phi,y)\,\mathrm{d} y & = \int_0^1 u_0(\kappa_M(y)) G_t^M(\phi,y)\,\mathrm{d} y \\ & \quad + \int_0^t\int_0^1 f(v(\kappa_N^T(s),\kappa_M(y))) G_{t-\kappa_N^T(s)}^M (\phi,y)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(v(\kappa_N^T(s),\kappa_M(y))) G_{t-\kappa_N^T(s)}^M (\phi,y)\, W(\mathrm{d} s,\mathrm{d} y). \end{align*} Let $(\phi_\epsilon)_{\epsilon\geq 0}$ be an approximation of the Dirac delta $\delta_x$, for some $x\in (0,1)$ (e.g. $\phi_\epsilon$ could be taken to be Gaussian kernels), so that we have \begin{align*} \int_0^1 v(t,\kappa_M(y)) G_{t-\kappa_N^T(t)}^M (\phi_\epsilon,y)\,\mathrm{d} y & = \int_0^1 u_0(\kappa_M(y)) G_t^M(\phi_\epsilon,y)\,\mathrm{d} y \\ & \quad + \int_0^t\int_0^1 f(v(\kappa_N^T(s),\kappa_M(y))) G_{t-\kappa_N^T(s)}^M (\phi_\epsilon,y)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(v(\kappa_N^T(s),\kappa_M(y))) G_{t-\kappa_N^T(s)}^M (\phi_\epsilon,y)\, W(\mathrm{d} s,\mathrm{d} y).
\end{align*} Then, as in the proof of \cite[Thm.~3.2]{walsh1}, we let $\epsilon\rightarrow 0$ in the latter equation and end up with \begin{align} & \int_0^1 G^M(t-\kappa_N^T(t),x,y) v(t,\kappa_M(y))\, \mathrm{d} y \nonumber \\ & \quad = \int_0^1 G^M(t,x,y) u_0(\kappa_M(y))\, \mathrm{d} y \nonumber \\ & \quad \quad + \int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y) f(v(\kappa_N^T(s),\kappa_M(y)))\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad \quad+ \int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y) \sigma(v(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y). \label{eq:7} \end{align} Note that this equation, which is valid for any $(t,x)\in [0,T]\times [0,1]$, is already very close to the desired identity \eqref{eq:4}. In fact, taking $t=t_n$ and $x=x_m$ in \eqref{eq:7} for some $n\in \{0,\dots,N\}$ and $m\in \{1,\dots,M-1\}$, respectively, we have, using the explicit expression of $G^M$, \begin{align*} \int_0^1 G^M(0,x_m,y) v(t_n,\kappa_M(y))\, \mathrm{d} y & = \int_0^1 \left(\sum_{j=1}^{M-1} \varphi_j(x_m) \varphi_j(\kappa_M(y))\right) v(t_n,\kappa_M(y))\, \mathrm{d} y\\ & = \sum_{j=1}^{M-1} \varphi_j(x_m) \int_0^1 \varphi_j(\kappa_M(y)) v(t_n,\kappa_M(y))\, \mathrm{d} y \\ & = \sum_{k=1}^{M-1} v(t_n,x_k) \frac1M \sum_{j=1}^{M-1} \varphi_j(x_m) \varphi_j(x_k)\\ & = v(t_n,x_m), \end{align*} where in the last step we have applied \eqref{eq:8}. This concludes the proof of the lemma. \end{proof} \medskip As a consequence of Lemma \ref{lem:1}, comparing equations \eqref{eq:3} and \eqref{eq:4} we deduce that $u^{M,N}(t_n,x_m)=v(t_n,x_m)$ for all $m=1,\dots,M-1$ and $n=0,1,\dots,N$. Thus, we can define a continuous version of $u^{M,N}$ as follows: for any $(t,x)\in [0,T]\times [0,1]$, set \[ u^{M,N}(t,x):= \int_0^1 G^M(t-\kappa_N^T(t),x,y) v(t,\kappa_M(y))\, \mathrm{d} y. \] Observe that, by \eqref{eq:7}, the random field $\{u^{M,N}(t,x),\, (t,x)\in [0,T]\times [0,1]\}$ satisfies \begin{equation} \begin{aligned} u^{M,N}(t,x)&=\int_0^1G^M(t,x,y)u_0(\kappa_M(y))\,\mathrm{d} y\\ &\quad+\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)f(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,W(\mathrm{d} s,\mathrm{d} y). \end{aligned} \label{fullapp} \end{equation} The above mild form of the fully discrete approximation will be used in the proof of the main result of the paper (see Theorem \ref{th:time}). \medskip \begin{remark} It can be easily proved that, if $t_n$ is any discrete time and $x\in (x_m,x_{m+1})$, then $u^{M,N}(t_n,x)$ turns out to be the linear interpolation between $u^{M,N}(t_n,x_m)$ and $u^{M,N}(t_n,x_{m+1})$. This is consistent with the definition of the space-discrete approximation $u^M(t,x)$ whenever $x\in (x_m,x_{m+1})$ (see \eqref{eq:-1}). \end{remark} \subsubsection{Some properties of ${u^{M,N}}$} This section is devoted to providing three results establishing properties of the full approximation $u^{M,N}$ which will be needed in the sequel. \medskip First, we note that the full approximation \eqref{fullapp} is bounded in every $L^{2p}(\Omega)$-norm, uniformly in $M$, $N$, $t$ and $x$. The proof of the following proposition is very similar to that of Proposition~\ref{xreg} above and is therefore omitted. \smallskip \begin{proposition}\label{fullappbdd} Assume that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0,$ and that the functions $f$ and $\sigma$ satisfy the condition \eqref{LG}.
Then, for every $p\geq 1$, there exists a constant $C$ such that $$ \sup_{M,N\geq1}\sup_{(t,x)\in[0,T]\times[0,1]}\mathbb{E}[|u^{M,N}(t,x)|^{2p}]\leq C. $$ \end{proposition} Next, we define the following quantities: $$ w^{M,N}(t,x):=u^{M,N}(t,x)-\int_0^1 G^M(t,x,y)u_0(\kappa_M(y))\,\mathrm{d} y $$ and $$ w^M(t,x):=u^{M}(t,x)-\int_0^1 G^M(t,x,y)u_0(\kappa_M(y))\,\mathrm{d} y, $$ where we recall that $u^M$ stands for the spatial discretization introduced in Section \ref{sect:spatial}. Then, we have the following result. \smallskip \begin{proposition}\label{holder} Assume that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$, and that $f$ and $\sigma$ satisfy condition \eqref{LG}. Then, for every $p\geq 1$, $t,r\in[0,T]$ and $x,z\in[0,1]$, we have \begin{align} &\mathbb{E}[|w^{M}(t,x)-w^{M}(r,z)|^{2p}]\leq C\left(|t-r|^{1/4}+|x-z|^{1/2}\right)^{2p}\label{holderM}\\ &\mathbb{E}[|w^{M,N}(t,x)-w^{M,N}(r,z)|^{2p}]\leq C\left(|t-r|^{1/4}+|x-z|^{1/2}\right)^{2p},\label{holderMN} \end{align} where the constant $C$ depends neither on $M$ nor on $N$. \end{proposition} \begin{proof} Inequality \eqref{holderM} is proved in \cite[Prop.~3.7]{gyongy1}. Let us now show inequality \eqref{holderMN}. By definition, we have \begin{align*} w^{M,N}(t,x)&=\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)f(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,W(\mathrm{d} s,\mathrm{d} y)\\ &=:F^{M,N}(t,x)+H^{M,N}(t,x), \end{align*} and hence $$ w^{M,N}(t,x)-w^{M,N}(r,z)=F^{M,N}(t,x)-F^{M,N}(r,z)+H^{M,N}(t,x)-H^{M,N}(r,z). $$ Therefore \begin{align*} \mathbb{E}[|w^{M,N}(t,x)-w^{M,N}(r,z)|^{2p}]&\leq C\bigl(\mathbb{E}[|F^{M,N}(t,x)-F^{M,N}(r,z)|^{2p}]\\ &\quad+\mathbb{E}[|H^{M,N}(t,x)-H^{M,N}(r,z)|^{2p}]\bigr). \end{align*} We next prove that $$ \mathbb{E}[|H^{M,N}(t,x)-H^{M,N}(r,z)|^{2p}]\leq C\left(|t-r|^{1/4}+|x-z|^{1/2}\right)^{2p}. $$ The corresponding estimate for $F^{M,N}$ follows in a similar way. We have \begin{align*} |H^{M,N}(t,x)-H^{M,N}(r,z)|^{2p}&\leq C\bigl(|H^{M,N}(t,x)-H^{M,N}(r,x)|^{2p}\\ &\quad+|H^{M,N}(r,x)-H^{M,N}(r,z)|^{2p}\bigr) \end{align*} and define \begin{align*} A^{2p}&:=\mathbb{E}[|H^{M,N}(t,x)-H^{M,N}(r,x)|^{2p}]\\ B^{2p}&:=\mathbb{E}[|H^{M,N}(r,x)-H^{M,N}(r,z)|^{2p}]. \end{align*} Then $A^{2p}\leq C(A_1^{2p}+A_2^{2p})$, where, for $r\leq t$ without loss of generality, \begin{align*} A_1^{2p}&= \mathbb{E}\left[\left|\int_0^r\int_0^1(G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y))\right.\right.\\ &\quad\times\left.\left.\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right]\\ A_2^{2p}&= \mathbb{E}\left[\left|\int_r^t\int_0^1G^M(t-\kappa_N^T(s),x,y)\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right].
\end{align*} Using the Burkholder-Davis-Gundy inequality, Lemma \ref{greenreg}, assumption \eqref{LG} on $\sigma$, the Minkowski inequality and Proposition \ref{fullappbdd}, we have the estimates \begin{align*} A_1^2&=\left(\mathbb{E}\left[\left|\int_0^r\int_0^1(G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y))\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right]\right)^{1/p}\\ &\leq C\left(\mathbb{E}\left[\left(\int_0^r\int_0^1|G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y)|^2|\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))|^2\, \mathrm{d} y\,\mathrm{d} s\right)^p\right]\right)^{1/p}\\ &=C\tnorm{\int_0^r\int_0^1|G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y)|^2|\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))|^2\,\mathrm{d} y\,\mathrm{d} s}_p\\ &\leq C\int_0^r\int_0^1|G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y)|^2\tnorm{\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))}_{2p}^2\,\mathrm{d} y\,\mathrm{d} s\\ &\leq C\int_0^r\int_0^1|G^M(t-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),x,y)|^2\,\mathrm{d} y\,\mathrm{d} s\\ &\leq C(t-r)^{1/2}, \end{align*} where we set $\tnorm{\cdot}_{2p}=\left(\mathbb{E}\left[|\cdot|^{2p}\right]\right)^{1/(2p)}$. Using similar arguments we have \begin{align*} A_2^2&=\left(\mathbb{E}\left[\left|\int_r^t\int_0^1G^M(t-\kappa_N^T(s),x,y)\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right]\right)^{1/p}\\ &\leq C\left(\mathbb{E}\left[\left(\int_r^t\int_0^1|G^M(t-\kappa_N^T(s),x,y)|^2|\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))|^2\, \mathrm{d} y\,\mathrm{d} s\right)^p\right]\right)^{1/p}\\ &\leq C\int_r^t\int_0^1|G^M(t-\kappa_N^T(s),x,y)|^2\tnorm{\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))}_{2p}^2\,\mathrm{d} y\,\mathrm{d} s\\ &\leq C\int_r^t \frac{1}{(t-\kappa_N^T(s))^{1/2}}\,\mathrm{d} s\\ &\leq C\int_r^t \frac{1}{(t-s)^{1/2}}\,\mathrm{d} s\\ &\leq C(t-r)^{1/2}. \end{align*} Thus, we obtain $$\mathbb{E}[|H^{M,N}(t,x)-H^{M,N}(r,x)|^{2p}]\leq C|t-r|^{p/2},$$ and we remark that this estimate is uniform with respect to $x\in [0,1]$. \smallskip It remains to estimate the term $B$. We have \begin{align*} B^{2p}=\mathbb{E}\left[\left|\int_0^r\int_0^1(G^M(r-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),z,y))\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right], \end{align*} and estimating $B$ as we did for $A_1$ and $A_2$, we obtain \begin{align*} B^2&\leq C\int_0^r\int_0^1|G^M(r-\kappa_N^T(s),x,y)-G^M(r-\kappa_N^T(s),z,y)|^2\,\mathrm{d} y\,\mathrm{d} s\\ &\leq C\int_0^r\sum_{j=1}^{M-1} \exp(-2j^2\pi^2c_j^M(r-\kappa_N^T(s)))|\varphi_j^M(x)-\varphi_j^M(z)|^2\,\mathrm{d} s\\ &\leq C\int_0^r\sum_{j=1}^{M-1} \exp(-2j^2\pi^2c_j^M(r-s))|\varphi_j^M(x)-\varphi_j^M(z)|^2\,\mathrm{d} s. \end{align*} At this point, we note that the latter term also appears in the proof of \cite[Lem.~3.6]{gyongy1}, so we can estimate it in the same way and obtain $$ \mathbb{E}[|H^{M,N}(r,x)-H^{M,N}(r,z)|^{2p}]\leq C|x-z|^{p}, $$ with a constant $C$ independent of $r$. Collecting the estimates obtained so far, we arrive at the bound \[ \mathbb{E}[|H^{M,N}(t,x)-H^{M,N}(r,z)|^{2p}]\leq C\left(|t-r|^{1/4}+|x-z|^{1/2}\right)^{2p}, \] which finally leads to \eqref{holderMN}. \end{proof} \medskip Finally, we shall also need the following regularity result for the full approximation. \smallskip \begin{proposition}\label{regMN} Assume that $f$ and $\sigma$ satisfy condition \eqref{LG}.
\begin{enumerate} \item If $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$, then for any $s, t\in [0,T]$ and $x\in [0,1]$, $p\geq 1$ and $\frac12<\alpha<\frac52$, we have $$ \mathbb{E}[|u^{M,N}(t,x)-u^{M,N}(s,x)|^{2p}]\leq C s^{-\alpha p} |t-s|^{\tau p}, $$ where $\tau = \frac12 \wedge(\alpha-\frac12)$ and with a constant $C$ independent of $M$, $N$ and $x$. \item If $u_0\in H^{\beta}([0,1])$, with $u_0(0)=u_0(1)=0$, for some $\beta>\frac{1}{2}$, then for any $s, t\in [0,T]$ and $x,z\in [0,1]$, and any $p\geq 1$, we have $$ \mathbb{E}[|u^{M,N}(t,x)-u^{M,N}(s,z)|^{2p}]\leq C\bigl(|t-s|^{\tau p}+|x-z|^{2 \tau p}\bigr), $$ where $\tau = \frac12 \wedge(\beta-\frac12)$ and with a constant $C$ independent of $M$ and $N$. \end{enumerate} \end{proposition} \begin{proof} The proof builds on that of Proposition \ref{reg}, so we only sketch the main steps. To start with, part 1 can be proved by following the same arguments used in the proof of part 1 of Proposition \ref{reg}; it is based on three estimates. First, one uses that \[ \int_0^1 |G^M(t,x,y)-G^M(s,x,y)|^2\, \mathrm{d} y \leq C s^{-\alpha} |t-s|^{\alpha-\frac12}, \] which corresponds to part $(iii)$ in Lemma \ref{greenreg}. Second, we have \[ \int_s^t \int_0^1 |G^M(t-\kappa_N^T(r),x,y)|^2\, \mathrm{d} y\, \mathrm{d} r\leq C |t-s|^\frac12, \] which can be verified by using $(ii)$ of Lemma \ref{greenreg}. Finally, it holds that \[ \int_0^s \int_0^1 |G^M(t-\kappa_N^T(r),x,y)- G^M(s-\kappa_N^T(r),x,y)|^2\, \mathrm{d} y\, \mathrm{d} r \leq C |t-s|^\frac12. \] The latter estimate follows from simple modifications of the proof of part $(i)$ in Lemma \ref{greenreg}. As far as part 2 is concerned, the time increments can be analyzed following the same steps as those used in the proof of part 2 in Proposition \ref{reg}. We will sketch the proof for the spatial increments. More precisely, taking into account equation \eqref{fullapp}, in order to control the term $\mathbb{E}[|u^{M,N}(t,x)-u^{M,N}(t,z)|^{2p}]$ we first need to estimate the expression \[ \left|\int_0^1(G^M(t,x,y)-G^M(t,z,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^{2p}. \] Using the same techniques as in the proof of part 2 in Proposition \ref{reg}, the above term can be bounded by \[ \|u_0\|^{2p}_{H^\beta} \left|\sum_{j=1}^{M-1} j^{-2\beta} \big|\varphi_j^M(x)- \varphi_j^M(z)\big|^2 \right|^p, \] where we recall that $\beta>\frac12$. Next, it can be easily proved that $\big|\varphi_j^M(x)- \varphi_j^M(z)\big|\leq C (1 \wedge j (z-x))$, where the constant $C$ does not depend on $M$ and we have assumed, without loss of generality, that $x<z$. Hence, \[ \left|\int_0^1(G^M(t,x,y)-G^M(t,z,y))u_0(\kappa_M(y))\,\mathrm{d} y\right|^{2p} \leq C \left(\sum_{j=1}^\infty j^{-2\beta} (1 \wedge j^2 (z-x)^2)\right)^p. \] The latter series can be estimated, up to some constant, by $(z-x)^{(2\beta-1)p}$. As far as the spatial increments of the remaining two terms in equation \eqref{fullapp} are concerned, applying the Burkholder-Davis-Gundy and Minkowski inequalities, as well as the linear growth of $f$ and $\sigma$ and Proposition~\ref{fullappbdd}, the analysis reduces to controlling the term \[ \left(\int_0^t\int_0^1 |G^M(t-\kappa_N^T(s),x,y)-G^M(t-\kappa_N^T(s),z,y)|^2\, \mathrm{d} y\, \mathrm{d} s\right)^p.
\] The same arguments as above yield that this term can be bounded by \begin{align*} \left(\int_0^t \sum_{j=1}^{M-1} e^{2 \lambda_j^M (t-s)} \big(1 \wedge j^2(z-x)^2\big)\, \mathrm{d} s\right)^p & \leq C \left( \sum_{j=1}^\infty j^{-2} \big(1 \wedge j^2(z-x)^2\big)\right)^p \\ & \leq C (z-x)^p. \end{align*} This concludes the proof. \end{proof} \medskip \begin{remark} Whenever $u_0\in H^{\beta}([0,1])$ for some $\beta>\frac{1}{2}$, the above result implies, thanks to Kolmogorov's continuity criterion, that the random field $u^{M,N}$ has a version with H\"older-continuous sample paths. \end{remark} \subsubsection{Main result} We are now ready to formulate and prove the main result of this section. Recall that $u^M$ is the space-discrete approximation given by \eqref{spaceapp} and $u^{M,N}$ is the full discretization given by \eqref{fullapp}. \begin{theorem}\label{th:time} Assume that $f$ and $\sigma$ satisfy the conditions \eqref{L} and \eqref{LG}. \begin{enumerate} \item If $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$, then for any $p\geq 1$, $0<\mu<\frac14$ and $t\in [0,T]$, there exists a constant $C=C(p,\mu,t)$ such that $$ \sup_{x\in [0,1]}\Big(\mathbb{E}[|u^{M,N}(t,x)-u^M(t,x)|^{2p}]\Big)^{\frac{1}{2p}}\leq C(\Delta t)^\mu. $$ \item If $u_0\in H^{\beta}([0,1])$ for some $\beta>\frac{1}{2}$, with $u_0(0)=u_0(1)=0$, then for any $p\geq 1$, we have $$ \sup_{t\in[0,T]}\sup_{x\in [0,1]}\Big(\mathbb{E}[|u^{M,N}(t,x)-u^M(t,x)|^{2p}]\Big)^{\frac{1}{2p}}\leq C(\Delta t)^\nu, $$ where $\nu=\frac14 \wedge(\frac{\beta}{2}-\frac14)$. \end{enumerate} \end{theorem} \begin{proof} We have, using the notation $\tnorm{\cdot}_{2p}=\left(\mathbb{E}\left[|\cdot|^{2p}\right]\right)^{1/(2p)}$, \begin{align*} &\tnorm{u^{M,N}(t,x)-u^M(t,x)}_{2p}\\ &\leq\tnorm{\int_0^t\int_0^1\left(G^M(t-\kappa_N^T(s),x,y)f(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\right.\\ &\quad\quad\left. -G^M(t-s,x,y)f(u^M(s,\kappa_M(y)))\right)\,\mathrm{d} y\,\mathrm{d} s}_{2p}\\ &\quad+\tnorm{\int_0^t\int_0^1\left(G^M(t-\kappa_N^T(s),x,y)\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\right.\\ &\quad\quad\left. -G^M(t-s,x,y)\sigma(u^M(s,\kappa_M(y)))\right)\, W(\mathrm{d} s,\mathrm{d} y)}_{2p}\\ &=: A+B. \end{align*} We show in detail the estimates for $B$. It will then be clear that similar estimates can be made for $A$. First we note that \begin{align*} B^2 &\leq C(B_1^2+B_2^2), \end{align*} where \begin{align*} B_1^2&=\tnorm{\int_0^t\int_0^1 (G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y))\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)}_{2p}^2 \end{align*} and \begin{align*} B_2^2 &=\tnorm{\int_0^t\int_0^1 G^M(t-s,x,y)(\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))-\sigma(u^M(s,\kappa_M(y))))\, W(\mathrm{d} s,\mathrm{d} y)}_{2p}^2. \end{align*} By the Burkholder-Davis-Gundy and Minkowski inequalities, we have \begin{align*} B_1^2 &= \left(\mathbb{E}\left[\left|\int_0^t\int_0^1 (G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y))\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\, W(\mathrm{d} s,\mathrm{d} y)\right|^{2p}\right]\right)^{1/p}\\ &\leq C\left(\mathbb{E}\left[\left(\int_0^t\int_0^1 |G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y)|^2|\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))|^2\, \mathrm{d} y\,\mathrm{d} s\right)^p\right]\right)^{1/p}\\ &= C\tnorm{\int_0^t\int_0^1|G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y)|^2|\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))|^2\, \mathrm{d} y\,\mathrm{d} s}_{p}\\ &\leq C\int_0^t\int_0^1 |G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y)|^2\tnorm{\sigma(u^{M,N}(\kappa_N^T(s),\kappa_M(y)))}_{2p}^2\,\mathrm{d} y\,\mathrm{d} s.
\end{align*} By assumption \eqref{LG} and Proposition~\ref{fullappbdd}, we obtain \begin{align*} B_1^2&\leq\sup_{(s,y)\in [0,T]\times[0,1]}\tnorm{\sigma(u^{M,N}(s,y))}_{2p}^2\\ &\quad\quad\times\int_0^t\int_0^1 |G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y)|^2\,\mathrm{d} y\,\mathrm{d} s\\ &\leq C(\Delta t)^{1/2}. \end{align*} Here we have also used that $$\sup_{x\in [0,1]} \int_0^t\int_0^1 |G^M(t-\kappa_N^T(s),x,y)-G^M(t-s,x,y)|^2\,\mathrm{d} y\,\mathrm{d} s\leq C(\Delta t)^{1/2},$$ where the constant $C$ does not depend on $M$. This is only a slight variation of \eqref{greenreg1} in Lemma \ref{greenreg}. The proof is very similar and is therefore omitted. Concerning the term $B_2$, using analogous arguments we have \begin{align*} B_2^2&\leq C\int_0^t\int_0^1|G^M(t-s,x,y)|^2\,\mathrm{d} y\\ &\quad\quad\times\sup_{y\in[0,1]}\tnorm{\sigma(u^{M,N}(\kappa_N^T(s),y))-\sigma(u^M(s,y))}_{2p}^2\,\mathrm{d} s. \end{align*} By the Lipschitz assumption on $\sigma$ and $(ii)$ in Lemma~\ref{greenreg}, we get \begin{align} B_2^2&\leq C\int_0^t\int_0^1 |G^M(t-s,x,y)|^2\,\mathrm{d} y\sup_{x\in[0,1]}\tnorm{u^{M,N}(\kappa_N^T(s),x)-u^M(s,x)}_{2p}^2\,\mathrm{d} s \nonumber \\ &\leq C\int_0^t\frac{1}{\sqrt{t-s}}\left(\sup_{x\in[0,1]}\tnorm{u^{M,N}(\kappa_N^T(s),x)-u^{M,N}(s,x)}_{2p}^2\right. \nonumber\\ &\quad\quad\left. +\sup_{x\in[0,1]}\tnorm{u^{M,N}(s,x)-u^M(s,x)}_{2p}^2\right)\,\mathrm{d} s \nonumber \\ &\leq C\int_0^t\frac{1}{\sqrt{t-s}} \sup_{x\in[0,1]}\tnorm{u^{M,N}(\kappa_N^T(s),x)-u^{M,N}(s,x)}_{2p}^2 \, \mathrm{d} s \nonumber \\ &\quad\quad + C \int_0^t\frac{1}{\sqrt{t-s}} \sup_{x\in[0,1]}\tnorm{u^{M,N}(s,x)-u^M(s,x)}_{2p}^2\,\mathrm{d} s. \label{eq:9} \end{align} At this point, we need to distinguish between the two cases for the initial value $u_0$. If we assume $u_0\in C([0,1])$, then we apply Proposition \ref{regMN} to the first term in \eqref{eq:9}, so we get \begin{align*} \int_0^t\frac{1}{\sqrt{t-s}} \sup_{x\in[0,1]}\tnorm{u^{M,N}(\kappa_N^T(s),x)-u^{M,N}(s,x)}_{2p}^2 \, \mathrm{d} s & \leq C (\Delta t)^\tau \int_0^t (t-s)^{-\frac12} s^{-\alpha}\, \mathrm{d} s \\ & = C (\Delta t)^\tau \, \mathrm{B}\Big(1-\alpha,\frac12\Big)\, t^{\frac12-\alpha}, \end{align*} where $\mathrm{B}(\cdot,\cdot)$ denotes the Beta function (not to be confused with the term $B$ above). In order to obtain the last equality, we need to restrict the range of $\alpha$ to $(\frac12,1)$ (part 1 in Proposition~\ref{regMN} was valid for any $\alpha \in (\frac12,\frac52)$). In this case, notice that we have $\tau=\frac12 \wedge (\alpha-\frac12)=\alpha-\frac12$. Plugging the above estimate in \eqref{eq:9} and taking into account that we obtained the bound $B_1^2\leq C (\Delta t)^\frac12$, we have thus proved that \[ B^2 \leq C(t) (\Delta t)^{\alpha-\frac12} + C \int_0^t\frac{1}{\sqrt{t-s}} \sup_{x\in[0,1]}\tnorm{u^{M,N}(s,x)-u^M(s,x)}_{2p}^2\,\mathrm{d} s. \] As commented at the beginning of the proof, the term $A^2$ can be analyzed in a similar way and yields the same type of estimate. Summing up, we have that \[ z(t) \leq C(t) (\Delta t)^{\alpha-\frac12} + C \int_0^t\frac{1}{\sqrt{t-s}} z(s)\,\mathrm{d} s, \] where $z(s):=\sup_{x\in [0,1]}\tnorm{u^{M,N}(s,x)-u^M(s,x)}_{2p}^2$. Then, applying a version of Gronwall's Lemma (see for instance \cite[Chap.~1]{pachpatte06}) we conclude this part of the proof.
\smallskip If we instead assume $u_0\in H^\beta([0,1])$ for some $\beta>\frac{1}{2}$, then we apply part 2 of Proposition~\ref{regMN} to the first term in \eqref{eq:9}, obtaining \[ \int_0^t\frac{1}{\sqrt{t-s}} \sup_{x\in[0,1]}\tnorm{u^{M,N}(\kappa_N^T(s),x)-u^{M,N}(s,x)}_{2p}^2 \, \mathrm{d} s \leq C (\Delta t)^\tau, \] where $\tau=\frac12 \wedge (\beta-\frac12)$. Hence, in this case we get that \[ z(t) \leq C (\Delta t)^\tau + C \int_0^t\frac{1}{\sqrt{t-s}} z(s)\,\mathrm{d} s, \] and we conclude by applying again a version of Gronwall's Lemma, see for instance \cite[Lem.~3.4]{gyongy1}. \end{proof} \medskip Combining Theorems~\ref{th:space} and~\ref{th:time}, we arrive at the following error estimate for the full discretization. \begin{theorem}\label{fullLip} Let $f$ and $\sigma$ satisfy conditions \eqref{L} and \eqref{LG}. \begin{enumerate} \item Assume that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$. Then, for every $p\geq 1$, $t\in (0,T]$, $0<\alpha_1<\frac{1}{4}$ and $0<\alpha_2<\frac14$, there are constants $C_i=C_i(t)$, $i=1,2$, such that \begin{align*} \sup_{x\in[0,1]}\left( \mathbb{E}[|u^{M,N}(t,x)-u(t,x)|^{2p}]\right)^{\frac{1}{2p}} \leq C_1 (\Delta x)^{\alpha_1}+C_2 (\Delta t)^{\alpha_2}. \end{align*} \item Assume that $u_0\in H^\beta([0,1])$ with $u_0(0)=u_0(1)=0$, for some $\beta>\frac{1}{2}$. Then, for every $p\geq 1$, $t\in (0,T]$, $0<\alpha_1<\frac{1}{4}$, there are constants $C_1=C_1(t)$ and $C_2$ such that \begin{align*} \sup_{x\in[0,1]}\left(\mathbb{E}[|u^{M,N}(t,x)-u(t,x)|^{2p}]\right)^{\frac{1}{2p}}\leq C_1(\Delta x)^{\alpha_1}+C_2(\Delta t)^\tau, \end{align*} where $\tau=\frac14\wedge(\frac{\beta}{2}-\frac14)$. \end{enumerate} \end{theorem} \medskip \begin{remark} For ease of presentation, we stated the above results for functions $f$ and $\sigma$ depending only on $u$. Observe that the above results remain true in the case of functions $f$ and $\sigma$ depending on $(t,x,u)$ if one replaces the condition \eqref{L} by the following one \begin{align}\label{H} |f(t,x,u)-f(s,y,v)|+|\sigma(t,x,u)-\sigma(s,y,v)|\leq C\bigl(|t-s|^{1/4}+|x-y|^{1/2}+|u-v|\bigr)\tag{H} \end{align} for all $s,t\in[0,T]$, $x,y\in[0,1]$, $u,v\in\mathbb{R}$. In this case, the fully discrete solution reads \begin{equation*} \begin{aligned} u^{M,N}(t,x)&=\int_0^1G^M(t,x,y)u_0(\kappa_M(y))\,\mathrm{d} y\\ &\quad+\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)f(\kappa_N^T(s),\kappa_M(y),u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,\mathrm{d} y\,\mathrm{d} s\\ &\quad+\int_0^t\int_0^1 G^M(t-\kappa_N^T(s),x,y)\sigma(\kappa_N^T(s),\kappa_M(y),u^{M,N}(\kappa_N^T(s),\kappa_M(y)))\,W(\mathrm{d} s,\mathrm{d} y), \end{aligned} \end{equation*} where we recall that $\kappa_M(y)=\frac{[My]}{M}$ and $\kappa_N^T(s)=T\kappa_N(\frac{s}{T})$. \end{remark} \subsubsection{Numerical experiments: strong convergence}\label{sect-numexpstrong} We now numerically illustrate the results from Theorem~\ref{th:time}. To do so, we first discretize the problem \eqref{heateq}, with $u_0(x)=\cos(\pi(x-1/2))$, $f(u)=u/2$ and $\sigma(u)=1-u$, in space by centered finite differences on the mesh $\Delta x=2^{-9}$. The time discretizations are done using the semi-implicit Euler-Maruyama scheme (see e.g. \cite{gyongy2}), the Crank-Nicolson-Maruyama scheme (see e.g. \cite{Walsh}) and the explicit exponential integrator \eqref{sexp} with step sizes $\Delta t$ ranging from $2^{-1}$ to $2^{-16}$.
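\medskip For the reader's convenience, we also include a short Python sketch showing how one sample path of the scheme \eqref{sexp} can be computed. We emphasize that this is a minimal illustration of our own and not the code used to produce the figures below: all names are ours, the matrix exponential $e^{A\Delta t}$ is precomputed once since $A$ is fixed, and the components of $\Delta W^n$ are taken as independent $\mathcal{N}(0,\Delta t)$ random variables, corresponding to the standard normalization of $W^M$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def sexp_path(M, N, T, u0, f, sigma, rng):
    # One path of the explicit exponential integrator (sexp) for the 1-d
    # stochastic heat equation with homogeneous Dirichlet conditions.
    dt = T / N
    x = np.arange(1, M) / M                       # interior points x_1,...,x_{M-1}
    D = (-2.0 * np.eye(M - 1) + np.eye(M - 1, k=1) + np.eye(M - 1, k=-1))
    E = expm(M**2 * D * dt)                       # e^{A dt}, A = M^2 D, computed once
    U = u0(x)
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), M - 1)  # Wiener increments Delta W^n
        U = E @ (U + f(U) * dt + np.sqrt(M) * sigma(U) * dW)
    return x, U

rng = np.random.default_rng(1)
x, U = sexp_path(M=2**6, N=2**10, T=0.5, rng=rng,
                 u0=lambda x: np.cos(np.pi * (x - 0.5)),
                 f=lambda u: u / 2.0,
                 sigma=lambda u: 1.0 - u)
\end{verbatim}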
The loglog plots of the errors $\sup_{(t,x)\in[0,0.5]\times[0,1]}\mathbb{E}[|u^{M,N}(t,x)-u^M(t,x)|^2]$ are shown in Figure~\ref{fig:strong}, where convergence of order $1/2$ for the exponential integrator is observed. The reference solution is computed with the exponential integrator using $\Delta x_{\text{ref}}=2^{-9}$ and $\Delta t_{\text{ref}}=2^{-16}$. The expected values are approximated by computing averages over $M_s=500$ samples. \begin{figure} \begin{center} \includegraphics*[height=7cm,keepaspectratio]{msSupHeat.eps} \caption{Temporal rates of convergence for the exponential integrator (SEXP), the semi-implicit Euler-Maruyama scheme (SEM), and the Crank-Nicolson-Maruyama scheme (CNM). The reference line has slope $1/2$ (dashed line).} \label{fig:strong} \end{center} \end{figure} Next, we compare the computational costs of the explicit stochastic exponential method \eqref{sexp}, the semi-implicit Euler-Maruyama scheme, and the Crank-Nicolson-Maruyama scheme for the numerical integration of problem \eqref{heateq} with the same parameters as in the previous numerical experiments. We run the numerical methods over the time interval $[0,1]$. We discretize the spatial domain $[0,1]$ with a mesh $\Delta x=2^{-6}$. We run $100$ samples for each numerical method. For each method and each sample, we run the scheme with several time step sizes and compare the error at the final time with a reference solution computed for the same sample with the same method with the very small time step $\Delta t_{\text{ref}}=2^{-15}$. Figure~\ref{fig:compcost} shows the total computational time for all the samples, for each method and each time step, as a function of the averaged final error we obtain. \begin{figure} \begin{center} \includegraphics*[height=6cm,keepaspectratio]{compcost.eps} \caption{Computational time as a function of the averaged final error for the following numerical methods: the stochastic exponential scheme \eqref{sexp} (SEXP), the semi-implicit Euler-Maruyama (SEM), and the Crank-Nicolson-Maruyama scheme (CNM).} \label{fig:compcost} \end{center} \end{figure} We observe that the computational cost of the Crank-Nicolson-Maruyama scheme is slightly higher than that of the semi-implicit Euler-Maruyama scheme, which in turn is slightly higher than that of the explicit scheme \eqref{sexp}. \subsection{Full discretization: almost sure convergence} In this subsection we prove almost sure convergence of the fully discrete approximation $u^{M,N}$ \eqref{fullapp} to the exact solution $u$ of the stochastic heat equation \eqref{heateq} with globally Lipschitz continuous coefficients. The main result is the following. \smallskip \begin{theorem}\label{th:as} Assume that the functions $f$ and $\sigma$ satisfy the conditions \eqref{LG} and \eqref{L}, and that $u_0\in C([0,1])$ with $u_0(0)=u_0(1)=0$. Then, the full approximation $u^{M,N}(t,x)$ converges to $u(t,x)$ almost surely, as $M,N\rightarrow \infty$, uniformly in $t\in[0,T]$ and $x\in[0,1]$. \end{theorem} \begin{proof} In \cite[Thm.~3.1]{gyongy1}, it was shown that $u^M(t,x)$ converges to $u(t,x)$ almost surely uniformly in $(t,x)$ as $M\rightarrow\infty$. It is therefore enough to show that $u^{M,N}(t,x)$ converges to $u^M(t,x)$ almost surely, as $N\rightarrow\infty$, uniformly in $(t,x)$ and $M\in\mathbb{N}$. To achieve this, it suffices to prove that $w^{M,N}(t,x)$ converges to $w^M(t,x)$ almost surely, uniformly in $(t,x)$, as $N\rightarrow\infty$.
This is because the terms involving $u_0$ in the approximations $u^M$ given by \eqref{spaceapp} and $u^{M,N}$ given by \eqref{fullapp} are the same. We first observe that $$ |w^{M,N}(t,x)-w^M(t,x)|^{2p}\leq C(A_1+A_2+A_3), $$ where \begin{align*} A_1&=\sum_{n=0}^N\sum_{i=0}^N\left|w^{M,N}(t_n,x_i)-w^M(t_n,x_i)\right|^{2p}\\ A_2&=\sup_{n=0,\ldots,N}\sup_{i=0,\ldots,N}\sup_{|x-x_i|\leq 1/N}\sup_{|t-t_n|\leq \Delta t}\left|w^{M,N}(t,x)-w^{M,N}(t_n,x_i)\right|^{2p}\\ A_3&=\sup_{n=0,\ldots,N}\sup_{i=0,\ldots,N}\sup_{|x-x_i|\leq 1/N}\sup_{|t-t_n|\leq \Delta t}\left|w^{M}(t,x)-w^{M}(t_n,x_i)\right|^{2p} \end{align*} and where $t_n=n\Delta t$, $n=0,1,\ldots,N$, are the discrete times and $x_i=\frac{i}{N}$, $i=0,1,\ldots,N$, is an auxiliary spatial grid (not to be confused with the mesh points $x_m=\frac{m}{M}$). By Theorem~\ref{th:time} we obtain $$ \mathbb{E}[A_1]\leq C\left(\frac{1}{N}\right)^{2\mu p-2}, $$ for all $0<\mu<\frac14$. Also, by Proposition~\ref{holder} we have $$ \mathbb{E}[A_2+A_3]\leq C\left(\frac{1}{N}\right)^{2p\delta} $$ for $\delta\in(0,1/4)$. Using that $$ \left(\frac{1}{N}\right)^{2\mu p-2}+\left(\frac{1}{N}\right)^{2p\delta}\leq 2\left(\frac{1}{N}\right)^{2 p\min(\delta,\mu)-2} $$ we thus get $$ \mathbb{E}\left[\sup_{M\geq 1}\sup_{(t,x)\in[0,T]\times [0,1]}|w^{M,N}(t,x)-w^M(t,x)|^{2p}\right]\leq C\left(\frac{1}{N}\right)^{2p\min(\delta,\mu)-2}, $$ where the constant $C$ depends neither on $M$ nor on $N$. Hence, using Markov's inequality we obtain that $$ \mathbb{P}\left(\sup_{M\geq 1}\sup_{(t,x)\in[0,T]\times[0,1]}|w^{M,N}(t,x)-w^M(t,x)|^{2p}>\left(\frac{1}{N}\right)^2\right)\leq C\left(\frac{1}{N}\right)^{2 p\min(\delta,\mu)-4} $$ for all integers $N\geq 1$. It thus follows that $$ \sum_{N=1}^\infty \mathbb{P}\left(\sup_{M\geq 1}\sup_{(t,x)\in[0,T]\times[0,1]}|w^{M,N}(t,x)-w^M(t,x)|^{2p}>\left(\frac{1}{N}\right)^2\right)<\infty $$ for $p$ large enough. By the Borel-Cantelli lemma we now know that, for sufficiently large $p$, with probability one we have $$ \sup_{M\geq 1}\sup_{(t,x)\in[0,T]\times[0,1]}|w^{M,N}(t,x)-w^M(t,x)|^{2p}\leq \frac{1}{N^2} $$ for all but finitely many $N$. Letting $N\to\infty$ concludes the proof. \end{proof} \subsubsection{Numerical experiments: almost sure convergence}\label{sect-numexpAS} We now numerically illustrate Theorem~\ref{th:as}. To do so, we first discretize the stochastic heat equation \eqref{heateq}, with $u_0(x)=\cos(\pi(x-1/2))$, $f(u)=1-u$ and $\sigma(u)=\sin(u)$, in space by centered finite differences on the mesh $\Delta x=2^{-9}$. The time discretization is done using the explicit exponential integrator \eqref{sexp} with step sizes $\Delta t$ ranging from $2^{-6}$ to $2^{-18}$ (only every second power). Figure~\ref{fig:as} displays, for a fixed spatial discretization, profiles of one realization of the numerical solution at the fixed time $T=0.5$ as well as a reference solution computed with the exponential integrator using $\Delta x_{\text{ref}}=2^{-9}$ and $\Delta t_{\text{ref}}=2^{-18}$. Convergence to this reference solution as the time step goes to zero (from light to dark grey plots) is observed. \begin{figure} \begin{center} \includegraphics*[height=7cm,keepaspectratio]{prof1.eps} \caption{Almost sure convergence of the exponential integrator (SEXP).
The reference solution is displayed in red.} \label{fig:as} \end{center} \end{figure} \section{Convergence analysis for non-globally Lipschitz continuous coefficients}\label{section:nonLipschitz} In this section, we remove the globally Lipschitz assumption on the coefficients $f$ and $\sigma$ in equation \eqref{heateq} and we prove convergence in probability of the fully discrete approximation $u^{M,N}$ given by \eqref{fullapp} to the exact solution $u$ of \eqref{heateq}. Throughout the section we will assume that the initial condition $u_0$ belongs to $H^\beta$ for some $\beta>\frac12$. \smallskip Furthermore, we shall consider the following hypotheses: \begin{itemize} \item[(PU)] Pathwise uniqueness holds for problem \eqref{heateq}: whenever $u$ and $v$ are defined on the same filtered probability space, with the same driving noise, and both are solutions to problem \eqref{heateq} on the stochastic time interval $[0,\tau)$, then $u(t,\cdot)=v(t,\cdot)$ for all $t\in[0,\tau)$, almost surely. \item[(C)] The coefficient functions $f(t,x,u)$ and $\sigma(t,x,u)$ are continuous in the variable $u$. \end{itemize} \smallskip \begin{remark} For general conditions ensuring pathwise uniqueness in equation \eqref{heateq}, we refer the reader to \cite{GP1,GP2}. Nevertheless, note that pathwise uniqueness for parabolic stochastic partial differential equations is an active research topic. We mention, for instance, the works \cite{MR1608641} (Lipschitz coefficients), \cite{MR2773025,MR3357612} (H\"older coefficients), \cite{MR3127884,MR3422943} (additive noise), where this question is investigated. These results provide examples of parabolic stochastic partial differential equations where assumption (PU) is fulfilled. \end{remark} \smallskip In order to prove the main result of the section (cf.\ Theorem~\ref{th:proba}), we will follow an approach similar to that in \cite{gyongy1} (see also \cite{ps05}). More precisely, we will first use the results from Section~\ref{section:Lipschitz} to deduce that the family of laws determined by $u^{M,N}$ is tight in the space of continuous functions. Then, we will apply Skorokhod's representation theorem and make use of the weak form \eqref{eq:6} corresponding to the fully discrete approximation $u^{M,N}$. Finally, a suitable passage to the limit and assumption (PU) will let us conclude the proof. \smallskip The above strategy succeeds thanks to the following two auxiliary results. \begin{lemma}[Lemma $4.5$ in \cite{gyongy1}]\label{limit} For all $k\geq 0$, let $z^k=\{z^k(t,x)\colon t\geq 0, x\in[0,1]\}$ be a continuous $\mathcal{F}_t^k$-adapted random field and let $W^k=\{W^k(t,x)\colon t\geq 0, x\in[0,1]\}$ be a Brownian sheet carried by some filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t^k)_{t\geq 0},P)$. Assume also that, for every $\epsilon>0$, $$ \lim_{k\rightarrow\infty}P\left(\sup_{t\in[0,T]}\sup_{x\in[0,1]}(|z^k-z^0|+|W^k-W^0|)(t,x)\geq \epsilon\right)=0. $$ Let $h=h(t,x,r)$ be a bounded Borel function of $(t,x,r)\in\mathbb{R}_+\times[0,1]\times\mathbb{R}$, which is continuous in $r\in\mathbb{R}$. Then, letting $k\rightarrow \infty$, \begin{align*} &\int_0^t\int_0^1 h(s,x,z^k(s,x))\,\mathrm{d} x\,\mathrm{d} s\longrightarrow \int_0^t\int_0^1 h(s,x,z^0(s,x))\,\mathrm{d} x\,\mathrm{d} s,\\ &\int_0^t\int_0^1 h(s,x,z^k(s,x))\, W^k(\mathrm{d} s,\mathrm{d} x)\longrightarrow \int_0^t\int_0^1 h(s,x,z^0(s,x))\, W^0(\mathrm{d} s,\mathrm{d} x), \end{align*} in probability for every $t\in[0,T]$.
\end{lemma} \smallskip \begin{lemma}[Lemma $4.4$ in \cite{gyongy1}]\label{polish} Let $E$ be a Polish space equipped with the Borel $\sigma$-algebra. A sequence of $E$-valued random elements $(z_n)_{n\geq 1}$ converges in probability if and only if, for every pair of subsequences $z_l:=z_{n_l}$ and $z_m:=z_{n_m}$, there exists a subsequence $v_k:=(z_{l_k}, z_{m_k})$ converging weakly to a random element $v$ supported on the diagonal $\{(x,y)\in E\times E\colon x=y\}$. \end{lemma} \smallskip We are now ready to state and prove the main result of this section. \smallskip \begin{theorem}\label{th:proba} Assume that the coefficients $f$ and $\sigma$ satisfy condition \eqref{LG}, and that hypotheses (PU) and (C) are fulfilled. Then, there exists a random field $u=\{u(t,x)\colon t\geq 0, x\in[0,1]\}$ such that, for every $\epsilon>0$, $$ \mathbb{P}\left(\sup_{t\in[0,T]}\sup_{x\in[0,1]}|u^{M_k,N_k}(t,x)-u(t,x)|\geq\epsilon\right)\rightarrow 0, $$ as $k$ tends to infinity, for all sequences of positive integers $(M_k,N_k)_{k\geq 1}$ such that $M_k, N_k\rightarrow\infty$, where we recall that $u^{M,N}$ denotes the fully discrete solution \eqref{fullapp}. Furthermore, the random field $u$ is the unique solution to the stochastic heat equation \eqref{heateq}. \end{theorem} \begin{proof} We first show that the sequence $(u^{M,N})_{M,N\geq1}$ defines a tight family of laws in the space $C([0,T]\times[0,1])$. To do so, we invoke part 2 in Proposition \ref{regMN} on the regularity of the numerical solution and we apply the tightness criterion on the plane \cite[Thm.~2.2]{MR2678391}, which generalizes a well-known result of Billingsley. Furthermore, Prokhorov's theorem implies that the corresponding sequence of laws is relatively compact in the space of probability measures on $C([0,T]\times[0,1])$. \smallskip Fix any sequence of pairs $(M_k,N_k)_{k\geq 1}$ such that $M_k, N_k\rightarrow\infty$, as $k\rightarrow \infty$. Then, the laws of $v_k:=u^{M_k,N_k}$, $k\geq 1$, form a tight family in the space $C([0,T]\times[0,1])$. \smallskip Let now $(v^1_j)_{j\geq 1}$ and $(v^2_\ell)_{\ell\geq 1}$ be two subsequences of $(v_k)_{k\geq 1}$. By Skorokhod's Representation Theorem, there exist subsequences $(j_r)_{r\geq 1}$ and $(\ell_r)_{r\geq 1}$ of the indices $j$ and $\ell$, a probability space $(\widehat\Omega,\widehat{\mathcal{F}},(\widehat{\mathcal{F}_t})_{t\geq0},\widehat{\mathbb{P}})$, and a sequence of continuous random fields $(z_r)_{r\geq 1}$ with $z_r:=\bigl(\widetilde u_r,\overline u_r,\widehat W_r\bigr)$, $r\geq1$, such that \smallskip \begin{enumerate} \item $z_r\underset{r\to\infty}{\longrightarrow} z:=(\widetilde u,\overline u,\widehat W)$ a.s. in $C([0,T]\times[0,1],\mathbb{R}^3)$, where the random field $z$ is defined on $(\widehat\Omega,\widehat{\mathcal{F}}, (\widehat{\mathcal{F}_t})_{t\geq0},\widehat{\mathbb{P}})$, $\widehat W$ is a Brownian sheet defined on this basis, and $\widehat{\mathcal{F}_t}=\sigma(z(s,x), \, (s,x)\in [0,t]\times [0,1])$ (and conveniently completed). \item For every $r\geq 1$, the finite dimensional distributions of $z_r$ coincide with those of the random field $\zeta_r:=\bigl(v^1_{j_r},v^2_{\ell_r},W\bigr)$, and thus $\text{law}(z_r)=\text{law}(\zeta_r)$ for all $r\geq 1$.
\end{enumerate} \smallskip Note that $\widehat W_r$ is a Brownian sheet defined on $(\widehat\Omega,\widehat{\mathcal{F}}, (\widehat{\mathcal{F}^r_t})_{t\geq0},\widehat{\mathbb{P}})$, where $\widehat{\mathcal{F}^r_t}=\sigma(z_r(s,x),\, (s,x)\in [0,t]\times [0,1])$ (and conveniently completed). \smallskip We now fix $(t,x)\in[0,T]\times[0,1]$. Since the laws of $z_r$ and $\zeta_r$ coincide and the first two components of $\zeta_r$ satisfy the weak form \eqref{eq:6}, so do the first two components of $z_r$. Namely, for all $\Phi\in C^\infty(\mathbb{R}^2)$ with $\Phi(t,0)=\Phi(t,1)=0$ for all $t$, it holds that \begin{align} \int_0^1 \widetilde{u}_r(t,\kappa_M(y))\Phi(t,y)\, \mathrm{d} y = & \int_0^1 u_0(\kappa_M(y))\Phi(t,y)\, \mathrm{d} y \nonumber \\ & \quad + \int_0^t\int_0^1 \widetilde{u}_r(s,\kappa_M(y)) \left(\Delta_M \Phi(s,y) + \frac{\partial \Phi}{\partial s}(s,y)\right)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 f(\widetilde{u}_r(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(\widetilde{u}_r(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, \widehat W_r(\mathrm{d} s,\mathrm{d} y), \quad \widehat{\mathbb{P}}\text{-a.s.}, \label{eq:11} \end{align} for all $t\in [0,T]$, and also \begin{align} \int_0^1 \overline u_r(t,\kappa_M(y))\Phi(t,y)\, \mathrm{d} y = & \int_0^1 u_0(\kappa_M(y))\Phi(t,y)\, \mathrm{d} y \nonumber \\ & \quad + \int_0^t\int_0^1 \overline u_r(s,\kappa_M(y)) \left(\Delta_M \Phi(s,y) + \frac{\partial \Phi}{\partial s}(s,y)\right)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 f(\overline u_r(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, \mathrm{d} y\, \mathrm{d} s \nonumber \\ & \quad + \int_0^t\int_0^1 \sigma(\overline u_r(\kappa_N^T(s),\kappa_M(y))) \Phi(s,y)\, \widehat W_r(\mathrm{d} s,\mathrm{d} y), \quad \widehat{\mathbb{P}}\text{-a.s.}, \label{eq:12} \end{align} for all $t\in [0,T]$ (here and in \eqref{eq:11}--\eqref{eq:12}, $M$ and $N$ stand for the indices associated with the corresponding approximation, that is, $(M_{j_r},N_{j_r})$ in \eqref{eq:11} and $(M_{\ell_r},N_{\ell_r})$ in \eqref{eq:12}). We recall that $\Delta_M$ denotes the discrete Laplacian, which is defined by \[ \Delta_M\Phi(s,y):= (\Delta x)^{-2} \left\{\Phi(s,y+\Delta x) - 2\Phi(s,y) + \Phi(s,y-\Delta x)\right\}, \] where we recall that $\Delta x=\frac1M$. Taking $r\rightarrow\infty$ in the above formulas \eqref{eq:11} and \eqref{eq:12}, and using Lemma \ref{limit}, we show that the random fields $\widetilde u$ and $\overline u$ are solutions of \eqref{eq:13}, and hence of equation \eqref{heateq}, on the same stochastic basis $(\widehat\Omega,\widehat{\mathcal{F}},(\widehat{\mathcal{F}_t})_{t\geq0},\widehat{\mathbb{P}})$. Thus, by the pathwise uniqueness assumption, we obtain that $\widetilde u(t,x)=\overline u(t,x)$ for all $(t,x)\in[0,T]\times[0,1]$, $\widehat{\mathbb{P}}$-a.s. Hence, by Lemma \ref{polish}, we get that $\{u^{M_{k},N_{k}}\}_{k\geq1}$ converges in probability, uniformly on $[0,T]\times[0,1]$, to the solution $u$ of the stochastic heat equation \eqref{heateq}. \end{proof} \section{Acknowledgements} L. Quer-Sardanyons' research is supported by grants 2014SGR422 and MTM2015-67802-P. This work was partially supported by the Swedish Research Council (VR) (project nr.\ 2013-4562). The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at HPC2N, Ume{\aa} University. \bibliographystyle{plain}
{ "attr-fineweb-edu": 1.084961, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUf97xK4sA-9F9nHeJ
\section{Introduction} The analysis of network data has received considerable attention in diverse areas of research such as social sciences, biology, statistics and computer science. At a high level, the central task in this area is to study underlying structural characteristics, given the network data. A statistically principled approach formalizes any such question as an inference problem, given a suitable, simple, probabilistic generative model for the observed data. Thus statistical research in this direction has focused on a few principal themes. The first theme concerns the design of suitable models which reflect some of the features observed in real networks \citep{barabasi1999emergence,watts1998collective}, while the second theme concentrates on developing statistical methodology for inference on data from these generative models. A third, perhaps equally important, but often less emphasized, aspect of this endeavor is to determine the effectiveness of the proposed models. This is intimately related to the classical goodness of fit testing paradigm in statistical inference. In this paper, we concentrate on a concrete example of this general problem, and study it using the lens of asymptotic minimax testing procedures. It has been empirically observed that real networks often have small groups of vertices which are more homogeneous compared to the remaining vertices. For example, in a social network setup, such a group might represent vertices which share the same profession. Such a group is loosely referred to as a ``community", and the task of finding such sets of vertices from the data, referred to as the ``community detection problem", has emerged as a central challenge in network analysis. The stochastic block model (henceforth referred to as SBM), introduced by \cite{holland1983sbm}, has become the canonical setup to study this problem. In the simplest case, we observe a labeled undirected graph $\mathcal{G} = (V,E)$, with vertex set $V=[n]$ (where for any $n \in \mathbb{N}$, we let $[n]=\{1,\ldots,n\}$) and adjacency matrix ${\bf{Y}} = (Y_{ij})$. The edge-set $E$ of the graph is generated by first choosing a partition of the vertices $V= \mathcal{C} \cup \mathcal{C}^c$ with $|\mathcal{C}| = \frac{n}{2}$ (assuming $n$ is even throughout), and then adding edges independently with \begin{equs} \mathbb{P} [ \{i,j\} \in E] = \begin{cases} \frac{a}{n} \quad {\rm{if }}\, \{i,j\} \subset \mathcal{C} \,{\rm{or}}\, \{i,j\} \subset \mathcal{C}^c. \nonumber \\ \frac{b}{n} \quad {\rm{ow}}, \nonumber \end{cases}\label{eq:model_vanilla} \end{equs} for $0<a,b\leq n$. One usually sets $a \geq b$, so that vertices in the same community have a higher probability of forming an edge. The ``community detection" problem is formally phrased as the estimation of the true memberships $\mathcal{C}$ from the observed graph $\mathcal{G}$. The model \eqref{eq:model_vanilla} can be easily extended to capture more general community structures, such as multiple communities, communities with unequal size, etc. A sharp analysis of the limits of statistical inference under this model has received considerable attention recently. We do not attempt to survey the extensive literature in this area, and instead refer the reader to the two excellent surveys \cite{abbe2017recent}, \cite{moore2017survey}, and the references therein, for an extensive overview of the recent progress on this problem and related open questions.
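For later reference, we note a simple consequence of \eqref{eq:model_vanilla}: since each vertex has $\frac{n}{2}-1$ potential within-community neighbors and $\frac{n}{2}$ potential across-community neighbors, the expected degree of every vertex equals
$$\Big(\frac{n}{2}-1\Big)\frac{a}{n}+\frac{n}{2}\cdot\frac{b}{n}\approx \frac{a+b}{2},$$
irrespective of its community membership, so that under \eqref{eq:model_vanilla} all degrees concentrate around a single value. We return to this point below.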
Practitioners often fit these models to real networks to form preliminary ideas about community structure, and for exploratory data analysis (\cite{snijders1997estimation}). However, while a theoretical understanding of the model has attained considerable maturity, it has also been widely reported that the model is often inappropriate for real data. The model favors graphs where the vertex degrees concentrate around a fixed value, and fails to model networks with a non-trivial degree distribution, in addition to a community structure. If this issue is ignored and algorithms for community detection developed in the context of the SBM are used on such data, they often split the vertices into high and low degree groups, and fail completely to uncover the true community memberships. A notable example, which exhibits this phenomenon is the political blog data of \cite{adamic2005blogs}. To address this issue, \cite{karrer2011stochastic} have introduced the ``degree corrected Stochastic Block model" (henceforth abbreviated as DCSBM), which incorporates a separate ``degree" parameter for each vertex. Under the DCSBM, we again observe a graph $\mathcal{G}= (V,E)$, with vertex set $V=[n]$. To generate the graph, we consider a fixed partition $[n] = \mathcal{C} \cup \mathcal{C}^c$ with $|\mathcal{C}| = n/2$, and a vector $\mathbf{\Theta} = (\theta_1, \cdots, \theta_n)$ of positive reals. The $\mathbf{\Theta}$ parameters represent the activity or the attractiveness of individual vertices. Given the parameter $\mathbf{\Theta}$, we add edges independently with \begin{equs} \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})} [ \{i,j\} \in E] = \begin{cases} \theta_i \theta_j \frac{a}{n} \quad {\rm{if }}\, \{i,j\} \subset \mathcal{C} \,{\rm{or}}\, \{i,j\} \subset \mathcal{C}^c. \nonumber \\ \theta_i \theta_j \frac{b}{n} \quad {\rm{ow}}. \nonumber \end{cases}\label{eq:model_main} \end{equs} In \eqref{eq:model_main} we have implicitly assumed that $\theta_i\theta_j\frac{a}{n}\leq 1$ for all $i,j$, as will be the case throughout the rest of the paper. Note that upon setting $\theta_i =1$ for all $ i \in [n]$, the model \eqref{eq:model_main} reduces to \eqref{eq:model_vanilla}. The expectation and variance operators under model \eqref{eq:model_main} will be denoted by $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$ and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$ respectively. In the sequel, whenever $\mathcal{C}$ is clear from the context, we drop the notational dependence of the above quantities on $\mathcal{C}$. Finally, we note that in the model above, $0<b < a <n$ are sequences dependent on $n$. We assume throughout that $ 0< \liminf \frac{b}{a} \leq \limsup \frac{b}{a} <1$ and define $\tau_a := \lim_{n \to \infty} \frac{a}{n}$ and $\tau_b := \lim_{n \to \infty} \frac{b}{n}$. \cite{karrer2011stochastic} show empirically that model fits are often considerably improved under this more general model \eqref{eq:model_main}. Motivated by the success of the DCSBM in modeling real networks, numerous authors have, in turn, developed powerful machinery for community detection under this model (\cite{zhao2012consistency}, \cite{jin2015fast}, \cite{gao2016community}, \cite{lei2016goodness}). Given a dataset, however, these results do not provide a principled method to choose between the SBM and the DCSBM. This question assumes greater importance in light of the contrast in the inferred memberships under the two setups.
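To make the generative mechanism concrete, the following Python sketch samples an adjacency matrix from the model \eqref{eq:model_main}. This is a minimal illustration of our own (all names and parameter values below are ours); setting $\theta_i\equiv 1$ recovers the SBM \eqref{eq:model_vanilla}, and the clipping step is merely a guard for the standing assumption $\theta_i\theta_j\frac{a}{n}\leq 1$.
\begin{verbatim}
import numpy as np

def sample_dcsbm(C, theta, a, b, rng):
    # C: boolean community membership vector of length n; theta: degree
    # parameters. Edge {i,j} appears independently with probability
    # theta_i*theta_j*a/n (same block) or theta_i*theta_j*b/n (across).
    n = len(theta)
    same = np.equal.outer(C, C)                  # same-block indicator
    P = np.outer(theta, theta) * np.where(same, a, b) / n
    P = np.clip(P, 0.0, 1.0)                     # guard: probabilities in [0,1]
    Y = (rng.random((n, n)) < P).astype(int)
    Y = np.triu(Y, 1)                            # keep i < j, no self-loops
    return Y + Y.T                               # symmetrize

rng = np.random.default_rng(0)
n = 200
C = np.arange(n) < n // 2                        # first half forms community C
theta = np.ones(n)
theta[:10] = 2.0                                 # a few "popular" vertices
Y = sample_dcsbm(C, theta, a=0.6 * n, b=0.3 * n, rng=rng)
\end{verbatim}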
While the DCSBM is more flexible, it adds an extra parameter for each vertex, and thus the fitting process is often complicated. Further, from a statistical viewpoint, introducing so many extra parameters might lead to a loss of power to detect the presence of an underlying community structure. A natural instinct at this point is to use a likelihood ratio test (LRT) for goodness of fit. However, we note that classical asymptotics for LRTs for goodness of fit are no longer immediately valid in this case, due to the divergence in the number of parameters. This concern was raised early on by \cite{fienberg1981comment}, who emphasized the need for proper model selection criteria in the context of the $p_1$ model, which exhibits similar features. In our context, this issue was partially addressed by \cite{yan2014model}, who use techniques motivated by statistical physics to approximate the likelihood, and derive valid sampling distributions for the test statistic. Following the work of \cite{yan2014model}, some other model selection approaches have also been introduced (see e.g. \cite{peixoto2015model}, \cite{yan2016bayesian}). In this paper, we study this question rigorously under the asymptotic minimax setup. In the context of model \eqref{eq:model_main}, we will formulate our problem as a goodness-of-fit type global null hypothesis testing problem against a structured hypothesis. To this end, we define the parameter space \begin{equs} \Xi(s,A):=\left\{\begin{array}{c}\mathbf{\Theta}\in \mathbb{R}_{+}^n: |S(\mathbf{\Theta})|=s, \theta_i\ge 1+A, i\in S(\mathbf{\Theta})\end{array}\right\}, \label{eq:parameterspace} \end{equs} where $S(\mathbf{\Theta}):=\{1\le i\le n:\theta_i\ne 1\}$ and $\mathbb{R}_{+} = [0, \infty)$. The vertices $i \in S(\mathbf{\Theta})$ can be interpreted as the ``popular" vertices. \cite{karrer2011stochastic} emphasized that in many real networks, these ``popular" vertices are comparatively rare, and ensuring their correct classification is often more challenging. Since we expect such vertices to be sparse, mathematically we consider the following sequence of hypothesis testing problems \begin{equs} H_0: \mathbf{\Theta} =\mathbf{1} \quad \textrm{vs.} \quad H_1: \mathbf{\Theta} \in \Xi(s_n, A_n) \subset \mathbb{R}_+^n\setminus \{\mathbf{1}\} \label{eqn:hypo} \end{equs} for any pair of sequences $s_n, A_n$. Throughout we shall refer to $\mathbf{\Theta}$ as the signals and parametrize the signal sparsity as $s_n=n^{1-\alpha}$ with $\alpha \in (0,1)$. A statistical test for $H_0$ versus $H_1$ is a measurable $\{0,1\}$-valued function of the data $\bY$, with $1$ denoting the rejection of the null hypothesis $H_0$ and $0$ denoting the failure to reject $H_0$. The worst case risk of a test $T_n(\bY)$ is defined as \begin{equs} \mathrm{Risk}_n(T_n,\Xi (s_n, A_n))&:= \max_{\mathcal{C} \subset [n]: |\mathcal{C}| = \frac{n}{2} }\mathbb{P}^{(\mathcal{C})}_{\mathbf{1}, a, b}\left(T_n=1\right)+\sup_{\mathbf{\Theta} \in \Xi(s_n, A_n)} \max_{\mathcal{C}\subset [n]: |\mathcal{C}|= \frac{n}{2}}\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(T_n=0\right).
\nonumber\label{eq:risk} \end{equs} A sequence of tests $T_n$ corresponding to a sequence of model-problem pairs \eqref{eq:model_main}-\eqref{eqn:hypo} is said to be asymptotically powerful (respectively asymptotically powerless) against $\Xi(s_n ,A_n)$ if $$\limsup\limits_{n\rightarrow \infty}\mathrm{Risk}_n(T_n,\Xi(s_n, A_n))= 0\text{ (respectively }\liminf\limits_{n\rightarrow \infty}\mathrm{Risk}_n(T_n,\Xi(s_n, A_n))=1).$$ The results in this paper derive the smallest deviations necessary to detect the ``inhomogeneity" in the behavior of the vertex degrees. We also provide matching procedures for detection, which work as soon as one has enough signal. Our results exhibit an interesting interplay among signal sparsity, graph sparsity, and signal strength. To our knowledge, this is the first instance where sharp detection thresholds have been achieved in the presence of a high dimensional nuisance parameter, without any additional assumptions. We discuss the implications of our main results further in Section \ref{section:discussion}. \subsection*{Notation} For any $n \in \mathbb{N}$, we let $[n]=\{1,\ldots,n\}$. For any $i\in [n]$ we denote the degree of vertex $i$ by $d_i:=\sum_{j=1}^nY_{ij}=\sum_{j\ne i}Y_{ij}.$ We will denote the null mean and standard deviation of a degree by $\mu_{n0}:=\mathbb{E}_{\mathbf{1},a,b}^{(\mathcal{C})}(d_i)$ and $\sigma_{n0}:=\sqrt{\mathrm{Var}^{(\mathcal{C})}_{\mathbf{1},a,b}(d_i)}$. Note that these do not depend on $\mathcal{C}$. Throughout, $\mathrm{Bin}(n,p)$ will stand for a generic binomial random variable with $n\in \mathbb{N}$ trials and success probability $p \in [0,1]$. The results in this paper are mostly asymptotic in nature and thus require some standard asymptotic notation. If $a_n$ and $b_n$ are two sequences of real numbers then $a_n \gg b_n$ (and $a_n \ll b_n$) implies that ${a_n}/{b_n} \rightarrow \infty$ (respectively ${a_n}/{b_n} \rightarrow 0$) as $n \rightarrow \infty$. Similarly $a_n \gtrsim b_n$ (and $a_n \lesssim b_n$) implies that $\liminf{{a_n}/{b_n}} = C$ for some $C \in (0,\infty]$ (and $\limsup{{a_n}/{b_n}} =C$ for some $C \in [0,\infty)$). Alternatively, $a_n=o(b_n)$ will also mean $a_n \ll b_n$, and $a_n=O(b_n)$ will mean that $\limsup{{a_n}/{b_n}} =C$ for some $C \in [0,\infty)$. We write $a_n \sim b_n$ if $\lim \frac{a_n}{b_n}= 1$. We need the following function to define our detection thresholds. For $\beta_1,\beta_2>0$ let $$\rho(\beta_1,\beta_2)=\Big[ \frac{(\beta_1^2 \tau_a (1 - \tau_a) + \beta_2^2 \tau_b(1- \tau_b) ) ( \tau_a (1 - \tau_a) + \tau_b (1- \tau_b) ) }{(\beta_1 \tau_a + \beta_2 \tau_b )^2} \Big],$$ where $\tau_a,\tau_b$ are defined earlier. Further, we shall always assume $b<a\leq \frac{n}{2}$ for concreteness, although the particular choice of $n/2$ can be easily replaced by $cn$ for any fixed $c\in (0,1)$. Also, throughout we drop the subscript $n$ whenever it is understood that $s, A$ are allowed to vary with $n$. \section{Tests}\label{section:tests} In this section we formally describe the testing procedures to be used. In order to construct these tests we begin with a few definitions. Fix any $\mathcal{C}\subset [n]$ with $|\mathcal{C}|=n/2$ and for any $i \in [n]$, let $\mathcal{C}(i) = \mathcal{C}$ if $i \in \mathcal{C}$ and $\mathcal{C}(i) = \mathcal{C}^c$ otherwise. Define the within-group-degree of a vertex $i$ to be $d_i(1,\mathcal{C}) = \sum_{j \in \mathcal{C}(i)} Y_{ij}$.
Similarly set the across-group-degree of a vertex $i$ to be $d_i(2,\mathcal{C}) = \sum_{j \in \mathcal{C}(i)^c} Y_{ij}$. Define \begin{align} \mu^0_{n1}(\mathcal{C}):=\mathbb{E}_{\mathbf{1},a,b}^{(\mathcal{C})} \Big[d_i(1,\mathcal{C}) \Big] = \left(\frac{n}{2}-1\right) \cdot \frac{a}{n} ,\,\,\,\, \mu^0_{n2}(\mathcal{C}):=\mathbb{E}_{\mathbf{1},a,b}^{(\mathcal{C})} \Big[d_i(2,\mathcal{C}) \Big]= \frac{n}{2} \cdot \frac{b}{n}. \nonumber \\ \mathrm{Var}^{(\mathcal{C})}_{\mathbf{1},a,b} (d_i(1,\mathcal{C})) = \left(\frac{n}{2}-1\right) \cdot \frac{a}{n} \cdot \Big(1- \frac{a}{n} \Big),\,\,\,\, \mathrm{Var}^{(\mathcal{C})}_{\mathbf{1},a,b} (d_i(2,\mathcal{C})) = \frac{n}{2} \cdot \frac{b}{n} \cdot \Big( 1 -\frac{b}{n} \Big),\nonumber \end{align} and note that under $H_0$ the above quantities do not depend on $\mathcal{C}$. Hence, in the sequel, whenever $\mathcal{C}$ is clear from the context, we drop the notational dependence of the above quantities on $\mathcal{C}$. Finally, for any fixed positive constants $\beta_1$ and $\beta_2$ define \begin{equs} D_i(\mathcal{C},\beta_1,\beta_2):=\frac{\beta_1(d_i(1,\mathcal{C}) - \mu^0_{n1}(\mathcal{C})) + \beta_2 (d_i(2,\mathcal{C}) - \mu^0_{n2}(\mathcal{C}))}{\sigma_{n0}(\mathcal{C},\beta_1,\beta_2)}, \quad i=1,\ldots,n, \end{equs} where \begin{equs} \sigma_{n0}(\mathcal{C},\beta_1,\beta_2):=\sqrt{\beta_1^2 \mathrm{Var}^{(\mathcal{C})}_{\mathbf{1},a,b} (d_i(1,\mathcal{C})) + \beta_2^2 \mathrm{Var}^{(\mathcal{C})}_{\mathbf{1},a,b} (d_i(2,\mathcal{C}))}. \end{equs} Once again, note that under $H_0$ the above quantity does not depend on $\mathcal{C}$. Hence, in the sequel, whenever $\mathcal{C}$ is clear from the context, we drop the notational dependence of the above quantities on $\mathcal{C}$. We are now ready to define our testing procedures. \begin{description}[align=left]\itemsep15pt \item [\textbf{Total Degree Test} :] This test is based on the total degree in the observed graph, i.e., $\sum_{i=1}^n d_i$. The test rejects when the observed total degree is large. The calibration of this test can be achieved by looking at the behavior of $\sum_{i=1}^n d_i$ under the null hypothesis in \eqref{eqn:hypo}. More precisely, by the Total Degree Test we mean a testing procedure which rejects when $\sum_{i=1}^n d_i$ is large (see the proof of Theorem \ref{thm:dense}\ref{thm:dense_upper}). \item [\textbf{The Higher Criticism Tests} :] For any $\beta_1,\beta_2>0$, $\mathcal{C}\subset [n]$ with $|\mathcal{C}|=n/2$, and $t>0$ let \begin{equs} HC(\C,\beta_1,\beta_2;t):=\sum_{i=1}^n \left(\mathcal{I}\left(D_i(\C,\beta_1,\beta_2)>t\right)-\mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_i(\C,\beta_1,\beta_2)>t\right)\right). \end{equs} We then construct a version of the higher criticism test as follows. Define \begin{equs} HC(\C,\beta_1,\beta_2):=\sup\left\{\begin{array}{c}GHC(\C,\beta_1,\beta_2;t):=\frac{HC(\C,\beta_1,\beta_2;t)}{\sqrt{\mathrm{Var}_{\mathbf{1},a,b}\left(HC(\C,\beta_1,\beta_2;t)\right)}},\\ t\in \{\sqrt{2r\log{n}}:r\in (0,5)\}\cap \mathbb{N}\end{array}\right\}. \end{equs} By the Higher Criticism Test based on $HC(\C,\beta_1,\beta_2)$ we then mean a testing procedure that rejects when the observed value of $HC(\C,\beta_1,\beta_2)$ defined above is large. In particular, we let $T_{HC}(\mathcal{C},\beta_1,\beta_2)$ be the test that rejects when $HC(\mathcal{C},\beta_1,\beta_2)>\sqrt{\log{n}}$.
Note that for any $\mathcal{C},\mathcal{C}'\subset[n]$ with $|\mathcal{C}|=|\mathcal{C}'|=n/2$, $HC(\mathcal{C},1,1)=HC(\mathcal{C}',1,1)$, and hence any such test is referred to as the test based on $HC(1,1)$. It is easy to see that the test based on $HC(1,1)$ is the degree-based Higher Criticism Test introduced in \cite{mms2016}, and it will also be referred to as the vanilla Higher Criticism Test. \item [\textbf{The Maximum Degree Tests} :] For any $\beta_1,\beta_2>0$ and $\mathcal{C}\subset [n]$ with $|\mathcal{C}|=n/2$, by the Maximum Degree Test based on $d_{\max}(\mathcal{C},\beta_1,\beta_2)$ we mean the procedure that rejects for large values of $\max_{i \in [n]} D_i(\mathcal{C},\beta_1, \beta_2)$. In particular, for any $\delta>0$, we let $T_{d_{\max}}(\mathcal{C},\beta_1,\beta_2,\delta)$ be the test that rejects when $\max_{i \in [n]}D_i(\mathcal{C},\beta_1,\beta_2)>\sqrt{2(1+\delta)\log{n}}$. Note that for any $\mathcal{C},\mathcal{C}'\subset[n]$ with $|\mathcal{C}|=|\mathcal{C}'|=n/2$, $\max_{i \in [n]} D_i(\mathcal{C},1,1)=\max_{i \in [n]} D_i(\mathcal{C}',1,1)$, and hence any such test is referred to as the test based on $d_{\max}(1,1)$. It is easy to see that the test based on $d_{\max}(1,1)$ is simply the test that rejects for large values of the maximum degree $d_{\max}:=\max\{d_1,\ldots,d_n\}$, and it will also be referred to as the vanilla Maximum Degree Test. \end{description}
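To make these procedures concrete, the following short sketch computes the three statistics from an observed adjacency matrix. The code is purely illustrative: the function name and input conventions are ours, the Gaussian survival function is used as a surrogate for the null exceedance probabilities $\mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_i(\mathcal{C},\beta_1,\beta_2)>t\right)$, the variance of the exceedance count is approximated as if the $D_i$ were independent, and the threshold grid is a simple discretization.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def test_statistics(A, C, a, b, beta1=1.0, beta2=1.0):
    # A: (n, n) symmetric 0/1 adjacency matrix with zero diagonal.
    # C: boolean vector of length n marking a candidate community of size n/2.
    n = A.shape[0]
    same = np.equal.outer(C, C)          # indicator that j lies in C(i)
    d1 = (A * same).sum(axis=1)          # within-group degrees d_i(1, C)
    d2 = (A * ~same).sum(axis=1)         # across-group degrees d_i(2, C)
    mu1, mu2 = (n / 2 - 1) * a / n, (n / 2) * b / n
    v1 = (n / 2 - 1) * (a / n) * (1 - a / n)
    v2 = (n / 2) * (b / n) * (1 - b / n)
    sig = np.sqrt(beta1 ** 2 * v1 + beta2 ** 2 * v2)
    D = (beta1 * (d1 - mu1) + beta2 * (d2 - mu2)) / sig

    total_degree = A.sum()               # statistic of the Total Degree Test
    max_stat = D.max()                   # statistic of the Maximum Degree Test
    hc = -np.inf                         # Higher Criticism over a threshold grid
    for t in np.sqrt(2 * np.log(n) * np.arange(0.1, 5.0, 0.1)):
        p = norm.sf(t)                   # surrogate for P(D_i > t) under the null
        hc = max(hc, ((D > t).sum() - n * p) / np.sqrt(n * p * (1 - p)))
    return total_degree, max_stat, hc
\end{verbatim}

The rejection regions are then calibrated as described above; for instance, one rejects when the Higher Criticism statistic exceeds $\sqrt{\log{n}}$, in line with the definition of $T_{HC}(\mathcal{C},\beta_1,\beta_2)$.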
\section{Main Results}\label{section:main_results} In this section we present the main results of the paper along with their implications. Owing to the differential behavior of the detection problem, we divide our presentation into two main subsections based on the signal sparsity $\alpha$. \subsection{Dense Signal Regime $\alpha \leq \frac{1}{2}$} The behavior of the detection problem in the dense signal regime $\alpha\leq \frac{1}{2}$ is particularly simple. Intuitively, since many vertices in the graph have a higher connection probability than under the null hypothesis, a natural test in this regime is the Total Degree Test introduced in Section \ref{section:tests}. This intuition indeed turns out to be correct, in the sense that no other test works when the Total Degree Test fails. The next theorem makes this precise. \begin{theorem}\label{thm:dense} Fix $0 < \alpha < \frac{1}{2}$ and set $C_{\mathrm{dense}}(\alpha) = \frac{1}{2} - \alpha$. \begin{enumerate}[label=\textbf{\roman*}.] \item\label{thm:dense_upper} The Total Degree Test is asymptotically powerful if \begin{align} A \geq \frac{n^{-r}}{\sqrt{a}}, \,\,\, r < C_{\mathrm{dense}}(\alpha). \nonumber \end{align} \item \label{thm:dense_lower}All tests are asymptotically powerless if \begin{align} A \leq \frac{n^{-r}}{\sqrt{a}}, \,\,\, r > C_{\mathrm{dense}}(\alpha). \nonumber \end{align} \end{enumerate} \end{theorem} One feature of Theorem \ref{thm:dense} above is that the detection thresholds given by $C_{\mathrm{dense}}$ do not depend on the values of $\tau_a$ and $\tau_b$. We will see later that this behavior is in stark contrast to that of the detection thresholds in the sparse regime $\alpha > \frac{1}{2}$. \subsection{Sparse Signal Regime $\alpha > \frac{1}{2}$} The behavior of the detection problem in the sparse signal regime is subtle. Intuitively, since we are testing for degree heterogeneities which are sparse in occurrence, one should in principle be able to produce tests similar to those in \cite{mms2016} by looking at the abnormal behavior of extreme degrees. Indeed this intuition is captured by the degree-based Higher Criticism Test and the Maximum Degree Test studied in \cite{mms2016}. The success of similar tests naturally fits into the narrative that the behavior of the detection problem for degree heterogeneity does not depend on the knowledge of the community assignment. Although the heart of this narrative is correct, its implications should be taken with a grain of salt. In particular, as we argue in this section, this intuition for constructing tests surprisingly fails for dense graphs, i.e., when $0<\tau_b<\tau_a$. More precisely, for dense graphs, the optimal procedures require knowledge of the community assignments. Although this is problematic at first glance, the experienced reader will immediately realize that when $0<\tau_b<\tau_a$, it is very easy to recover the communities consistently, at least when the degree heterogeneity parameters $\theta_1,\ldots,\theta_n$ are not too rough \citep{gao2016community}. To elaborate on this peculiar behavior of the detection problem it is instructive to start with the information theoretic lower bound. \begin{thm}\label{thm:sparse_lower} Let $\log{n}\ll b<a\leq \frac{n}{2}$, $\alpha>\frac{1}{2}$ and consider the signal strength \begin{align} A = \sqrt{\frac{C\log n}{\sigma_{n0}^2}}. \label{eq:signal_const} \end{align} Then all tests are asymptotically powerless if $C < C_{\mathrm{sparse}}(\alpha)$, where \begin{align} C_{\mathrm{sparse}}(\alpha) = \begin{cases} 2 \Big( \frac{\tau_a(1-\tau_a) + \tau_b(1-\tau_b)}{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b}} \Big) \Big(\alpha - \frac{1}{2} \Big) \quad {\rm{for}}\,\, \frac{1}{2} < \alpha < \frac{3}{4}, \\ 2 \Big( \frac{\tau_a(1-\tau_a) + \tau_b(1-\tau_b)}{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b}} \Big) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}. \end{cases} \nonumber \end{align} In particular, when $\tau_a = \tau_b =0$, the correct constant $C_{\mathrm{sparse}}(\alpha)$ is obtained by taking the limit as $\tau_a, \tau_b \to 0$, so that \begin{align} C_{\mathrm{sparse}}(\alpha) = \begin{cases} 2 \Big( \alpha - \frac{1}{2} \Big) \quad {{\rm for }} \,\, \frac{1}{2} < \alpha < \frac{3}{4}, \\ 2 \Big( 1 - \sqrt{1- \alpha} \Big)^2 \quad{{\rm for }}\,\, \alpha \geq \frac{3}{4}. \end{cases}\nonumber \end{align} \end{thm} We derive Theorem \ref{thm:sparse_lower} using an information theoretic lower bound for a simpler problem, in which the true community assignments are known in advance. The proof is based on a truncated second moment argument, with the main challenge being the choice of the truncation event. Unlike \cite{mms2016}, a truncation based on degrees over non-signal edges alone is not enough to yield the desired sharp thresholds. Instead, one needs to take the knowledge of the community assignments into account as well (at least when $\tau_a,\tau_b$ are positive). Finally, we note that this simpler problem with known community assignments always furnishes a lower bound for problem \eqref{eqn:hypo}. If we can produce valid statistical procedures which work up to this threshold, this furnishes strong evidence that the true community assignments are ancillary for this problem. To this end, the next results establish performance bounds on the Higher Criticism and Maximum Degree based tests.
\begin{theorem}\label{thm:sparsesignal_vanilla_upper_gen} Let $\log{n}\ll b<a\leq \frac{n}{2}$, $\alpha>\frac{1}{2}$ and consider the signal strength \begin{align} A = \sqrt{\frac{C\log n}{\sigma_{n0}^2}}.\nonumber \end{align} \begin{enumerate}[label=\textbf{\roman*}.] \item \label{thm:sparsesignal_vanilla_hc_gen} If $C>C_{\mathrm{HC}}(\beta_1,\beta_2,\alpha)$ where \begin{align} C_{\mathrm{HC}}(\beta_1,\beta_2,\alpha) = \begin{cases} 2 \rho(\beta_1,\beta_2) \Big(\alpha - \frac{1}{2} \Big) \quad {\rm{for}}\,\, \frac{1}{2} < \alpha < \frac{3}{4}, \\ 2 \rho(\beta_1,\beta_2) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}, \end{cases} \nonumber \end{align} then \begin{equs} \max_{\mathcal{C} \subset [n]:\atop |\mathcal{C}| = \frac{n}{2} }\mathbb{P}^{(\mathcal{C})}_{\mathbf{1}, a, b}\left(T_{HC}(\mathcal{C},\beta_1,\beta_2)=1\right)+\sup_{\mathbf{\Theta} \in \Xi(s_n, A_n)} \max_{\mathcal{C}\subset [n]:\atop |\mathcal{C}|= \frac{n}{2}}\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(T_{HC}(\mathcal{C},\beta_1,\beta_2)=0\right)\rightarrow 0. \\\label{eq:risk_hc_gen} \end{equs} \item \label{thm:sparsesignal_vanilla_max_upper_gen} If $C>C_{\mathrm{max}}(\beta_1,\beta_2,\alpha)$ with \begin{align} C_{\mathrm{max}}(\beta_1,\beta_2,\alpha) = 2 \rho(\beta_1,\beta_2) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}, \nonumber \end{align} then there exists $\delta>0$ such that \begin{equs} \max_{\mathcal{C} \subset [n]:\atop |\mathcal{C}| = \frac{n}{2} }\mathbb{P}^{(\mathcal{C})}_{\mathbf{1}, a, b}\left(T_{d_{\max}}(\mathcal{C},\beta_1,\beta_2,\delta)=1\right)+\sup_{\mathbf{\Theta} \in \Xi(s_n, A_n)} \max_{\mathcal{C}\subset [n]:\atop |\mathcal{C}|= \frac{n}{2}}\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(T_{d_{\max}}(\mathcal{C},\beta_1,\beta_2,\delta)=0\right)\rightarrow 0. \\\label{eq:risk_max_gen} \end{equs} \end{enumerate} \end{theorem} Note that $T_{HC}(\mathcal{C},\beta_1,\beta_2)$ and $T_{d_{\max}}(\mathcal{C},\beta_1,\beta_2,\delta)$ are not statistically valid tests for all $\beta_1, \beta_2$, since they assume the true community assignment $\mathcal{C}$ is known (as is the case for \eqref{eq:risk_hc_gen} and \eqref{eq:risk_max_gen}). However, for $\beta_1 = \beta_2 = 1$, $HC(1,1) = HC(\mathcal{C},1,1)$ and $d_{\max} = d_{\max}(\mathcal{C},1,1)$, and thus Theorem \ref{thm:sparsesignal_vanilla_upper_gen} yields performance guarantees for the tests based on $HC(1,1)$ and $d_{\max}(1,1)$. This is summarized in Parts (i) and (ii) of the following theorem. \begin{theorem}\label{thm:sparsesignal_vanilla_upper} Let $\log{n}\ll b<a\leq \frac{n}{2}$, $\alpha>\frac{1}{2}$ and consider the signal strength \begin{align} A = \sqrt{\frac{C\log n}{\sigma_{n0}^2}}.\nonumber \end{align} \begin{enumerate}[label=\textbf{\roman*}.] \item \label{thm:sparsesignal_vanilla_hc} The test based on $HC(1,1)$ is powerful if $C>C_{\mathrm{HC}}(\alpha)$ where \begin{align} C_{\mathrm{HC}}(\alpha) = \begin{cases} 2 \rho(1,1) \Big(\alpha - \frac{1}{2} \Big) \quad {\rm{for}}\,\, \frac{1}{2} < \alpha < \frac{3}{4}, \\ 2 \rho(1,1) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}. \end{cases} \nonumber \end{align} \item \label{thm:sparsesignal_vanilla_max_upper} The test based on $d_{\max}(1,1)$ is powerful if $C>C_{\mathrm{max}}(\alpha)$ with \begin{align} C_{\mathrm{max}}(\alpha) = 2 \rho(1,1) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}.
\nonumber \end{align} \item \label{thm:sparsesignal_vanilla_max_lower} The test based on $d_{\max}(1,1)$ is powerless if $C<C_{\mathrm{max}}(\alpha)$. \end{enumerate} \end{theorem} To develop further intuition, it is instructive to compare these results to analogous ones derived in the context of the sparse signal detection problem for sequence models. In particular, motivated by the long series of results on sparse signal detection problems \citep{Ingster4,Jin1,Candes,mukherjee2015hypothesis,arias2015sparse} and recent work on heterogeneity detection over sparse Erd\H{o}s-R\'{e}nyi random graphs under the $\beta$-model \citep{mms2016}, we expect that the Maximum Degree Test and the Higher Criticism Test should both perform optimally with sharp constants for very sparse signals $(\alpha\geq \frac{3}{4})$. Moreover, the Higher Criticism Test should be provably better than the Maximum Degree Test for denser signals with $\alpha \in (1/2,3/4)$. The observation that $\rho(1,1)=1$ for $\tau_a = \tau_b =0$, in conjunction with Theorem \ref{thm:sparse_lower}, establishes the expected intuitive picture for all $a,b$ sequences with $\tau_a = \tau_b =0$. Before going into further statistical implications of Theorem \ref{thm:sparsesignal_vanilla_upper} we first comment on the analysis in the proof of Theorem \ref{thm:sparsesignal_vanilla_upper}\ref{thm:sparsesignal_vanilla_max_lower}. As mentioned earlier, the lower bound statement on the Maximum Degree Test in Theorem \ref{thm:sparsesignal_vanilla_upper}\ref{thm:sparsesignal_vanilla_max_lower} is indeed necessary to demonstrate the competition between the HC and max-degree based procedures. Analysis of the lower bound for the vanilla Maximum Degree Test requires good control over the null distribution of the test statistic. Although the null distribution of the maximum degree of an Erd\H{o}s-R\'{e}nyi graph is standard in the literature \citep[Theorem 3.3$^{'}$]{bollobas}, we could not find the corresponding results for Stochastic Block Models. To this end, our next result derives the asymptotic sampling distribution of the maximum degree under the null hypothesis, after appropriate centering and scaling. Dropping the notational dependence on the true underlying community assignment $\mathcal{C}$, recall that $\mu_{n0} = \mathbb{E}_{\mathbf{1}, a,b}[d_1]$ and $\sigma_{n0} = \sqrt{{\textrm{Var}}_{\mathbf{1},a,b}[d_1]}$; we then have the following result. \begin{theorem} \label{thm:max_deg_null} Let $b \gg (\log n)^3$. In this case, we have, as $n \to \infty$, \begin{align} \mathbb{P}_{\mathbf{1},a,b}\Big[ \frac{\max_i d_i - \mu_{n0} }{\sigma_{n0}} \leq \sqrt{2 \log n} \Big( 1 - \frac{\log \log n + \log (4 \pi)}{4 \log n} + \frac{y}{2 \log n}\Big) \Big] \to \exp\Big[- {\textrm{e}}^{-y} \Big]. \nonumber \end{align} \end{theorem} \begin{remark} We note that after appropriate centering and scaling, the null distribution of the maximum degree converges to a Gumbel distribution. It is instructive to compare this result to the asymptotic distribution of the maximum degree in an Erd\H{o}s-R\'{e}nyi random graph. A direct proof in that case proceeds using the method of moments \citep[Theorem 3.3$^{'}$]{bollobas}. In the case of the SBM, the individual degrees are no longer binomial, but rather each is a sum of two independent Binomial random variables.
As a result, many direct computations involving the degrees become considerably more involved. In our proof, we circumvent this difficulty and establish the result using a softer argument, based on a version of Stein's method for Poisson approximation \citep{barbour1992poisson}. \end{remark}
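To illustrate Theorem \ref{thm:max_deg_null} numerically, one can compare the empirical law of the centered and scaled maximum degree under the null with its Gumbel limit. The sketch below does this by direct simulation; the sample sizes and the values of $a,b$ are illustrative choices of ours (in particular, the condition $b \gg (\log n)^3$ is only loosely respected at such moderate $n$).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 200
a, b = 0.3 * n, 0.2 * n                  # dense regime: tau_a > tau_b > 0

mu = (n / 2 - 1) * (a / n) + (n / 2) * (b / n)      # null mean mu_n0
sig = np.sqrt((n / 2 - 1) * (a / n) * (1 - a / n)
              + (n / 2) * (b / n) * (1 - b / n))    # null sd sigma_n0

P = np.full((n, n), b / n)               # SBM edge probabilities
P[: n // 2, : n // 2] = a / n
P[n // 2 :, n // 2 :] = a / n

stats = np.empty(reps)
for r in range(reps):
    A = rng.random((n, n)) < P           # sample the graph
    A = np.triu(A, 1)                    # keep i < j only (no self-loops)
    d = A.sum(axis=0) + A.sum(axis=1)    # vertex degrees
    stats[r] = (d.max() - mu) / sig

# invert the centering/scaling of the theorem to recover the Gumbel variable
L2 = 2 * np.log(n)
y = L2 * (stats / np.sqrt(L2) - 1) + (np.log(np.log(n)) + np.log(4 * np.pi)) / 2
print(np.mean(np.exp(-np.exp(-y))))      # close to 0.5 under the Gumbel limit
\end{verbatim}

If the Gumbel approximation is accurate, $\exp(-{\textrm{e}}^{-Y})$ is approximately uniform on $[0,1]$, so the printed average should be close to $1/2$.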
We now return to a discussion of the statistical implications of Theorem \ref{thm:sparsesignal_vanilla_upper}. Consider the regime $\tau_a > \tau_b >0$. Recall that the tests based on $HC(1,1)$ and $d_{\max}(1,1)$ are respectively the vanilla Higher Criticism and Maximum Degree Tests based on the degrees $(d_1,\ldots,d_n)$. We note that $\tau_a>\tau_b>0$ implies $\rho(1,1)>\Big( \frac{\tau_a(1-\tau_a) + \tau_b(1-\tau_b)}{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b}} \Big)$, and thus there is a gap between the thresholds derived in Theorem \ref{thm:sparsesignal_vanilla_upper} and Theorem \ref{thm:sparse_lower}. Although we do not have a corresponding performance lower bound for the vanilla Higher Criticism Test, we strongly believe that, at least in the extreme signal sparsity regime $(\alpha\geq \frac{3}{4})$, the Maximum Degree Test and the Higher Criticism Test behave essentially similarly. Consequently, we are left with two possible scenarios: either the information theoretic lower bound of Theorem \ref{thm:sparse_lower} can be improved, or optimal tests different from the usual Higher Criticism and Maximum Degree Tests can be constructed. Our main result verifies the latter possibility, thereby demonstrating the differential behavior of the detection problem on dense graphs. This directly implies the rather surprising result that on dense graphs ($\tau_a \geq \tau_b >0$), for very sparse alternatives, the Maximum Degree Test is not, in fact, optimal in terms of detection thresholds. This is in sharp contrast to the usual results expected for Gaussian sequence models, or for random graph models with ``exchangeable'' degrees. We illustrate the differences between the two thresholds in Figure \ref{fig:main}. \begin{figure} \begin{center} \includegraphics[width=6in, height=3in]{plot.pdf} \end{center} \caption{The naive threshold $\rho(1,1)$ (in blue) and the correct information theoretic threshold $\rho(\beta_1^*,\beta_2^*)$ (in red) are shown for different values of $\tau_a, \tau_b$. Note the vanishing difference between these thresholds as $\tau_a$ and $\tau_b$ become smaller.} \label{fig:main} \end{figure} To state the optimal procedure we need some notation for community recovery algorithms. For any two $\mathcal{C}_1,\mathcal{C}_2\subset[n]$ define the distance between $\mathcal{C}_1$ and $\mathcal{C}_2$ to be \begin{equs} \mathrm{dist}(\mathcal{C}_1,\mathcal{C}_2)=\min\left\{|\mathcal{C}_1\Delta \mathcal{C}_2|,|\mathcal{C}_1^c\Delta \mathcal{C}_2|\right\}. \end{equs} For any measurable $\hat{\mathcal{C}}\subset [n]$ define the corresponding risk of community recovery to be \begin{equs} \mathrm{Risk}_n(\hat{\mathcal{C}},\Xi(s, A)):=\sup_{\mathbf{\Theta} \in \Xi(s, A)} \max_{\mathcal{C}\subset [n]: |\mathcal{C}|= \frac{n}{2}}\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(\mathrm{dist}(\hat{\mathcal{C}},\mathcal{C})\right). \end{equs} \begin{theorem}\label{thm:sparsesignal_optimal} Let $0<\tau_b<\tau_a\leq \frac{1}{2}$, $\alpha>\frac{1}{2}$, and consider the signal strength \begin{align} A = \sqrt{\frac{C\log n}{\sigma_{n0}^2}}.\nonumber \end{align} Let $\hat{\mathcal{C}}\subset [n]$ be measurable such that $\mathrm{Risk}_n(\hat{\mathcal{C}},\Xi(s, A))\rightarrow 0$ and let \begin{align} \beta_1^* = \frac{1}{1- \tau_a} \frac{1}{\sqrt{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1- \tau_b}}},\,\,\,\, \beta_2^* = \frac{1}{1- \tau_b} \frac{1}{\sqrt{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1- \tau_b}}}.\nonumber \end{align} \begin{enumerate}[label=\textbf{\roman*}.] \item \label{thm:sparsesignal_optimal_hc} The test based on $HC(\hat{\mathcal{C}},\beta_1^*,\beta_2^*)$ is powerful if $C>C_{\mathrm{HC}}^{\mathrm{opt}}(\alpha)$ where \begin{align} C_{\mathrm{HC}}^{\mathrm{opt}}(\alpha) = \begin{cases} 2 \rho(\beta_1^*,\beta_2^*) \Big(\alpha - \frac{1}{2} \Big) \quad {\rm{for}}\,\, \frac{1}{2} < \alpha < \frac{3}{4}, \\ 2 \rho(\beta_1^*,\beta_2^*) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}. \end{cases} \nonumber \end{align} \item \label{thm:sparsesignal_optimal_max} The test based on $d_{\max}(\hat{\mathcal{C}},\beta_1^*,\beta_2^*)$ is powerful if $C>C_{\mathrm{max}}^{\mathrm{opt}}(\alpha)$ with \begin{align} C_{\mathrm{max}}^{\mathrm{opt}}(\alpha) = 2 \rho(\beta_1^*,\beta_2^*) \Big( 1- \sqrt{1-\alpha}\Big)^2 \quad {\rm{for}}\,\, \alpha \geq \frac{3}{4}. \nonumber \end{align} \end{enumerate} \end{theorem} Since $\rho(\beta_1^*,\beta_2^*)$ matches the constant $\Big( \frac{\tau_a(1-\tau_a) + \tau_b(1-\tau_b)}{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b}} \Big)$ in the optimal threshold of Theorem \ref{thm:sparse_lower} (the brief algebra behind this matching is recorded below), Theorem \ref{thm:sparsesignal_optimal} implies that the following two-stage procedure is sharp optimal whenever $\tau_a>\tau_b>0$. \begin{enumerate} \item[(i)] Run a community detection algorithm to construct $\hat{\mathcal{C}}$ (e.g., Algorithm 1 of \cite{gao2016community}). \item[(ii)] Reject if $HC(\hat{\mathcal{C}},\beta_1^*,\beta_2^*)>\sqrt{\log{n}}$ or if the test based on $d_{\max}(1,1)$ rejects. \end{enumerate} The proof of the validity of the above two-stage procedure is easy. In particular, in the regime of dense graphs (at least when $\tau_a>\tau_b>0$), strongly consistent community detection ($\mathrm{Risk}_n(\hat{\mathcal{C}},\Xi(s, A)) \rightarrow 0$) is indeed possible whenever $\|\mathbf{\Theta}\|_{\infty}=o(n^{\alpha})$ \citep{gao2016community}. As a consequence, for any bounded $\mathbf{\Theta}$, Theorem \ref{thm:sparsesignal_optimal} justifies the optimality of the test based on $HC(\hat{\mathcal{C}},\beta_1^*,\beta_2^*)$. Finally, for $\|\mathbf{\Theta}\|_{\infty}\gg 1$, the problem is trivial using the vanilla Maximum Degree Test based on $d_{\max}(1,1)$ (this can be derived along the lines of the proof of Theorem \ref{thm:sparsesignal_vanilla_upper}\ref{thm:sparsesignal_vanilla_max_upper} and is hence omitted). Combining these two cases by a union bound yields the desired sharp optimality of the two-stage procedure. This two-stage procedure is enough to complete the story of sharp detection thresholds.
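For the reader's convenience, we record the algebra behind the matching of constants mentioned above. Since $\rho$ is invariant under a common scaling of $(\beta_1,\beta_2)$, and $\beta_1^*\propto \frac{1}{1-\tau_a}$, $\beta_2^*\propto \frac{1}{1-\tau_b}$, we have \begin{equs} \rho(\beta_1^*,\beta_2^*)=\frac{\left(\frac{\tau_a}{1-\tau_a}+\frac{\tau_b}{1-\tau_b}\right)\left(\tau_a(1-\tau_a)+\tau_b(1-\tau_b)\right)}{\left(\frac{\tau_a}{1-\tau_a}+\frac{\tau_b}{1-\tau_b}\right)^2}=\frac{\tau_a(1-\tau_a)+\tau_b(1-\tau_b)}{\frac{\tau_a}{1-\tau_a}+\frac{\tau_b}{1-\tau_b}}, \end{equs} which is precisely the constant appearing in $C_{\mathrm{sparse}}(\alpha)$. Moreover, by the Cauchy--Schwarz inequality, \begin{equs} (\beta_1\tau_a+\beta_2\tau_b)^2\leq \left(\beta_1^2\tau_a(1-\tau_a)+\beta_2^2\tau_b(1-\tau_b)\right)\left(\frac{\tau_a}{1-\tau_a}+\frac{\tau_b}{1-\tau_b}\right) \end{equs} for all $\beta_1,\beta_2>0$, with equality if and only if $(\beta_1,\beta_2)\propto\left(\frac{1}{1-\tau_a},\frac{1}{1-\tau_b}\right)$, so that $(\beta_1^*,\beta_2^*)$ in fact minimizes $\rho(\beta_1,\beta_2)$.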
But it additionally reveals the peculiar behavior of the detection problem mentioned earlier. That is, although all our natural intuition (along with the results on the sharp optimality of the vanilla Higher Criticism and Maximum Degree Tests in the sparse graph regime) suggests that the behavior of the detection problem for degree heterogeneity does not depend on the knowledge of the community assignment, our optimal procedure for the dense regime relies intimately on correct recovery of the community assignment. Although we were not able to prove the nonexistence of procedures which are sharp optimal and do not depend on recovery of the true community assignment, our lower bound on the performance of the vanilla Maximum Degree Test in Theorem \ref{thm:sparsesignal_vanilla_upper}\ref{thm:sparsesignal_vanilla_max_lower} lends moral support to this intuition. In view of this, it is extremely interesting to formalize and prove the failure of ``all tests without the knowledge of the true community assignments'' in the case of dense degree-corrected SBMs. Finally, note that Theorem \ref{thm:sparse_lower}, Theorem \ref{thm:sparsesignal_vanilla_upper_gen}, Theorem \ref{thm:sparsesignal_vanilla_upper}, and Theorem \ref{thm:sparsesignal_optimal} are enough to describe the detection thresholds for $a,b\gg \log{n}$. The behavior of the thresholds for $a,b\lesssim \log{n}$ is subtle. In particular, the following result is not very difficult to prove. \begin{thm}\label{theorem:less_than_logn} Suppose $a,b \ll \log n$ and $\alpha>\frac{1}{2}$. Then for any sequence of tests $T_n$, \begin{equs} \lim\limits_{A\rightarrow \infty}\liminf\limits_{n\rightarrow \infty}\mathrm{Risk}(T_n,\Xi(s,A))=1. \end{equs} \end{thm} Theorem \ref{theorem:less_than_logn} demonstrates a behavior of the detection problem different from the regime $a,b\gg \log{n}$, where a vanishing $A$ is detectable even in the sparse signal regime. It is therefore of interest to investigate the problem further for $a,b\lesssim \log{n}$ and $\alpha>\frac{1}{2}$, in order to determine the information theoretic rate at which $A\rightarrow \infty$ must grow to allow detection. We leave such endeavors to future projects. \section{Discussion}\label{section:discussion} In this section we collect some concluding remarks about the main results in this paper. One of the main motivations of this paper is to explore the mutual confounding of degree corrections and community assignments in the formulation of block models. In particular, \cite{jin2015fast} notes, and we paraphrase: ``as far as community detection is concerned, the heterogeneity parameters $\{\theta_i\}_{i=1}^n$ are largely ancillary.'' In this paper we explore the other side of the story, i.e., ``as far as the degree heterogeneity parameters $\{\theta_i\}_{i=1}^n$ are concerned, are the community assignments ancillary?'' The answer seems to be more complicated and, as our results suggest, ``it depends!'' In particular, when the inference target is global testing for sparse $\mathbf{\Theta}$, community assignments indeed seem ancillary when the graph is not dense. However, for dense graphs, our results hint at the contrary. Here the information theoretic boundary for known community assignments is strictly below the detection thresholds attained by the vanilla degree-based Higher Criticism and Maximum Degree Tests (see Figure \ref{fig:main}). The lower bound on the performance of the vanilla Maximum Degree Test further hints at the failure of procedures which do not take the knowledge of the community assignments into account.
In particular, we believe that it is extremely interesting to formalize and show that procedures similar to the vanilla Higher Criticism and Maximum Degree Tests, which are based on the degree vector alone, will fail to achieve the information theoretic thresholds in the dense graph regime (at least when $\tau_a>\tau_b>0$). \section{Properties of linear combinations of Binomial random variables} \label{sec:binom} Our analyses depend very heavily on a detailed understanding of the deviation properties of linear combinations of binomial random variables. These arise very naturally in our context: for example, each vertex degree under the null is a sum of two independent Binomial random variables, and the optimal tests in Theorem \ref{thm:sparsesignal_optimal} depend on linear combinations of two independent Binomial random variables. We establish some relevant results in this section, which are invaluable in the proofs of the main results stated in Section \ref{section:main_results}. \subsection{Moderate Deviation Properties} Moderate deviation and local CLT type properties of linear combinations of independent Binomial random variables form a cornerstone of our analysis. We note that while these results are conceptually straightforward, the proofs are often involved due to the discrete structure of the random variables involved. To this end, let $X\sim \mathrm{Bin}\left(\frac{n}{2},\frac{a'}{n}\right)\perp Y\sim \mathrm{Bin}\left(\frac{n}{2},\frac{b'}{n}\right)$ with $a'\geq b'\gg \log{n}$ and $ 0<c< \liminf \frac{b'}{a'} \leq \limsup \frac{b'}{a'} \leq 1$ for a constant $c$. Let \begin{align} \mu_{n1}:=\mathbb{E}(X) = \frac{n}{2} \cdot \frac{a'}{n} ,\,\,\,\, \mu_{n2}:=\mathbb{E}(Y) = \frac{n}{2} \cdot \frac{b'}{n}, \nonumber \end{align} \begin{align} \sigma_{n1}^2:=\mathrm{Var} (X) = \frac{n}{2} \cdot \frac{a'}{n} \cdot \Big(1- \frac{a'}{n} \Big),\,\,\,\, \sigma_{n2}^2:=\mathrm{Var} (Y) = \frac{n}{2} \cdot \frac{b'}{n} \cdot \Big(1- \frac{b'}{n} \Big).\nonumber \end{align} Hereafter, for any fixed positive constants $\beta_1$ and $\beta_2$, define \begin{equs} \sigma_n(\beta_1,\beta_2):=\sqrt{\beta_1^2 \sigma_{n1}^2 + \beta_2^2 \sigma_{n2}^2},\\ \mu_n(\beta_1,\beta_2):=\beta_1\mu_{n1}+\beta_2\mu_{n2},\\ d(\beta_1,\beta_2):={\beta_1X+\beta_2Y-\mu_n(\beta_1,\beta_2)}. \end{equs} Also, for $s_1,s_2 \leq {n^{1-\alpha}}$ with $1>\alpha>\frac{1}{2}$, and $X'\sim \mathrm{Bin}\left(s_1,\frac{a''}{n}\right) \perp Y'\sim\mathrm{Bin}\left(s_2,\frac{b''}{n}\right)$ with $a''/a'\rightarrow 1$, $b''/b'\rightarrow 1$, let \begin{equs} d'(\beta_1,\beta_2):={\beta_1(X+X')+\beta_2(Y+Y')-\mu_n(\beta_1,\beta_2)}. \end{equs} \subsubsection{Log Scale Asymptotics} In this section we study moderate deviations of linear combinations of binomial random variables on the logarithmic scale. Along the way, we also obtain bounds on the probability that such linear combinations fall in specific subintervals corresponding to moderate deviation regimes. \begin{lemma}\label{lemma:binomial_master} Let $h=h_n$ be such that $c<\liminf\frac{h}{\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}}\leq \limsup\frac{h}{\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}}<c'$ for constants $0<c<c'<\infty$, and let $C_n\rightarrow C>0$ be a positive sequence. \begin{enumerate} \item \label{lemma:binomial_equal}Fix any sequence $\{\xi_n\}$ such that $|\xi_n| \ll \log{n}$.
Then the following hold for any $\varepsilon>0$ and $n$ sufficiently large (depending on $c,c',\varepsilon,\beta_1,\beta_2$): \begin{enumerate} \item\label{lemma:binomial_equal_pure} $${\sup\limits_{|t|\leq \xi_n}\mathbb{P}\left(d(\beta_1,\beta_2)=h+t\right)}\leq \frac{1}{\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right). $$ \item\label{lemma:binomial_equal_contam} $${\sup\limits_{|t|\leq \xi_n}\mathbb{P}\left(d'(\beta_1,\beta_2)=h+t\right)}\leq\frac{1}{\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right). $$ \end{enumerate} \item\label{lemma:binomial_tail} The following moderate deviation asymptotics hold. \begin{enumerate} \item\label{lemma:binomial_tail_pure} $$\lim_{n \to \infty} \frac{\log\mathbb{P}\left(d(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\right)}{\log n}=-\frac{C^2}{2}.$$ \item\label{lemma:binomial_tail_contam} $$\lim_{n \to \infty}\frac{\log \mathbb{P}\left(d'(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\right)}{\log n}=-\frac{C^2}{2}.$$ \end{enumerate} \end{enumerate} \end{lemma} \subsubsection{Exponential Scale Asymptotics} In this section, we first characterize the upper tail of the sum of two independent binomial random variables in the moderate deviation regime on the exponential scale, which requires a much more subtle analysis than the usual log-scale asymptotics. This result is used in establishing the lower bound for the Maximum Degree Test. Specifically, we will establish the following result. Recall the definitions $d(\beta_1,\beta_2)=\beta_1X+\beta_2Y-\mu_n(\beta_1,\beta_2)$, $\mu_n(\beta_1,\beta_2)=\mathbb{E}(\beta_1X+\beta_2Y)$ and $\sigma_n(\beta_1,\beta_2)^2=\mathop{\rm Var}\nolimits(\beta_1X+\beta_2Y)$. \begin{lemma}\label{lemma:binomial_tail_exp_scale} Let $b' \gg (\log n)^3$ and $x_n = \sqrt{2 \log n} (1 + o(1))$. Then \begin{align} \lim_{n \to \infty} \frac{\mathbb{P}[X+Y-\mathbb{E}(X+Y) > \sqrt{\mathop{\rm Var}\nolimits(X+Y)}\, x_n ]}{1 - \Phi(x_n)}=\lim_{n \to \infty} \frac{\mathbb{P}[d(1,1) > \sigma_n(1,1) x_n ]}{1 - \Phi(x_n)} = 1, \nonumber \end{align} where $\Phi(\cdot)$ is the cdf of the standard normal distribution. \end{lemma} \subsection{A Change of Measure Lemma} The next lemma is a simple change of measure argument which is necessary for the truncated second moment arguments involved in proving the information theoretic lower bounds. \begin{lemma}\label{lemma:binomial_change_of_measure} Let $X\sim \mathrm{Bin}(n_1,p_1)$ and $Y\sim \mathrm{Bin}(n_2,p_2)$ be independent. Then for any positive scalars $\alpha_1,\alpha_2,\beta_1,\beta_2$ and any Borel set $B$ of $\mathbb{R}$, \begin{equs} \mathbb{E}\left(\alpha_1^X\alpha_2^Y\mathbf{1}\left(\beta_1X+\beta_2Y\in B\right)\right)=(1-p_1+\alpha_1 p_1)^{n_1}(1-p_2+\alpha_2 p_2)^{n_2}\mathbb{P}(\beta_1X'+\beta_2Y'\in B), \end{equs} where $X'\sim \mathrm{Bin}(n_1,p_1')$ is independent of $Y'\sim \mathrm{Bin}(n_2,p_2')$ with \begin{equs} p_1'=\frac{\alpha_1 p_1}{1-p_1+\alpha_1 p_1},\quad p_2'=\frac{\alpha_2 p_2}{1-p_2+\alpha_2 p_2}. \end{equs} \end{lemma} We establish Lemma \ref{lemma:binomial_change_of_measure} in Section \ref{section:technical_lemmas}.
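Although the proof is deferred, the identity is elementary to verify numerically by exact enumeration on small instances; the following sketch does exactly this, with parameter values that are arbitrary choices of ours.

\begin{verbatim}
import numpy as np
from scipy.stats import binom

n1, p1, n2, p2 = 6, 0.3, 5, 0.2          # arbitrary small instance
al1, al2, be1, be2 = 1.5, 2.0, 1.0, 0.7  # alpha_1, alpha_2, beta_1, beta_2
B = lambda t: 2.0 <= t <= 4.5            # a Borel set (here, an interval)

x, y = np.arange(n1 + 1), np.arange(n2 + 1)
px, py = binom.pmf(x, n1, p1), binom.pmf(y, n2, p2)

# left-hand side: E[alpha1^X alpha2^Y 1{beta1 X + beta2 Y in B}]
lhs = sum(al1**i * al2**j * px[i] * py[j]
          for i in x for j in y if B(be1 * i + be2 * j))

# right-hand side: normalizing factors times P(beta1 X' + beta2 Y' in B)
q1 = al1 * p1 / (1 - p1 + al1 * p1)      # tilted success probabilities
q2 = al2 * p2 / (1 - p2 + al2 * p2)
pxp, pyp = binom.pmf(x, n1, q1), binom.pmf(y, n2, q2)
rhs = (1 - p1 + al1 * p1)**n1 * (1 - p2 + al2 * p2)**n2 * \
      sum(pxp[i] * pyp[j] for i in x for j in y if B(be1 * i + be2 * j))

assert abs(lhs - rhs) < 1e-8             # the two sides agree
\end{verbatim}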
\section{Proofs of main results}\label{section:proofs} \subsection{Proof of Theorem \ref{thm:dense}} We prove each part of the theorem in separate subsections below. \\ \textit{Proof of Theorem \ref{thm:dense}\ref{thm:dense_upper}.} In this theorem, since all computations are under the true underlying $\mathcal{C}$ and the Total Degree Test does not depend on it, we drop the notational dependence on $\mathcal{C}$ from $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}$, $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$, and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$. We will establish the stronger result that the Total Degree Test is powerful whenever $s A \sqrt{\frac{a}{n}} \rightarrow \infty$. To this end, we need the following elementary lemma bounding the variance of the total degree. \begin{lemma} \label{lemma:var_totaldegree} For any $\mathbf{\Theta} \in \Xi(s_n,A_n)$ with $\| \mathbf{\Theta} \|_{\infty} \leq 2$, ${\rm{Var}}_{\mathbf{\Theta},a,b}\Big[ \sum_i d_i \Big] \leq 8 a n$. \end{lemma} \begin{proof} The proof proceeds using the elementary observations ${\rm{Var}}_{\mathbf{\Theta}, a,b}(Y_{ij}) \leq \mathbb{E}_{\mathbf{\Theta},a,b}[Y_{ij}] \leq \frac{4a}{n}$ and ${\rm{cov}}_{\mathbf{\Theta}, a,b} (d_i , d_j) = {\rm{Var}}_{\mathbf{\Theta},a,b}[Y_{ij}] \leq \frac{4a}{n}$. \end{proof} We are now ready to prove Theorem \ref{thm:dense}\ref{thm:dense_upper}. We first compute the expectation of (half) the total degree under the null: \begin{align} \mathbb{E}_{\mathbf{1},a,b}\Big[\frac{1}{2}\sum_{i=1}^{n} d_i \Big] = \sum_{i < j } \mathbb{E}_{\mathbf{1},a,b}[Y_{ij}] = 2 { n/2 \choose 2 } \frac{a}{n} + \frac{n^2}{4} \frac{b}{n} =: \mu_n. \nonumber \end{align} We consider a total degree test which rejects the null for $\frac{1}{2}\sum_i d_i > \mu_n + K_n$ for some sequence $K_n$ to be chosen suitably during the proof. By Chebyshev's inequality, we have, \begin{align} \mathbb{P}_{\mathbf{1},a,b}\Big[\frac{1}{2}\sum_i d_i > \mu_n + K_n \Big] \leq \frac{{\rm{Var}}_{\mathbf{1},a,b}\Big[ \frac{1}{2}\sum_i d_i \Big] }{K_n^2} \leq \frac{8an}{K_n^2}, \nonumber \end{align} where the last inequality follows using Lemma \ref{lemma:var_totaldegree}. Thus the type I error is controlled as soon as $K_n^2 \gg an$. We next turn to the type II error, and note that by monotonicity, it suffices to restrict ourselves to alternatives $\mathbf{\Theta} = (1 + A) \mathbf{1}_{S} + \mathbf{1}_{S^c}$ with $A \leq 1$. We set $S_1 = \mathcal{C} \cap S(\mathbf{\Theta})$ and $S_2 = \mathcal{C}^c \cap S(\mathbf{\Theta})$. Further, for notational simplicity, we denote $s_1 = | S_1|$ and $s_2 = | S_2|$. In this case, we have, \begin{align} &\mathbb{E}_{\mathbf{\Theta},a,b}\Big[\frac{1}{2}\sum_i d_i \Big] = (1+A)^2 \frac{a}{n} \Big[ {s_1 \choose 2} + {s_2 \choose 2} \Big] + (1 + A) \frac{a}{n} \Big[ s_1 \Big( \frac{n}{2} - s_1 \Big) + s_2 \Big( \frac{n}{2} - s_2 \Big) \Big] + s_1 s_2 (1 + A)^2 \frac{b}{n} \nonumber\\ &+ \frac{a}{n} \Big[ {\frac{n}{2} - s_1 \choose 2} + { \frac{n}{2} - s_2 \choose 2} \Big] + (1 + A) \frac{b}{n} \Big[ s_1 \Big( \frac{n}{2} - s_2 \Big)+ s_2 \Big( \frac{n}{2} - s_1 \Big) \Big] + \Big( \frac{n}{2} - s_1 \Big) \Big( \frac{n}{2} - s_2 \Big) \frac{b}{n}. \nonumber \end{align} Therefore, we have, \begin{align} &\mathbb{E}_{\mathbf{\Theta},a,b}\Big[\frac{1}{2}\sum_i d_i \Big] - \mu_n \geq A \frac{a}{n} [s_1(s_1 -1) + s_2(s_2 -1)] + 2 A s_1 s_2 \frac{b}{n} \nonumber\\ &+ A \frac{a}{n} \Big[ s_1 \Big( \frac{n}{2} -s_1 \Big) + s_2 \Big( \frac{n}{2} - s_2 \Big) \Big] + A \frac{b}{n} \Big[ s_1 \Big( \frac{n}{2} -s_2 \Big) + s_2 \Big( \frac{n}{2} - s_1 \Big) \Big] \nonumber \\ &\geq A \frac{b}{n} (n-1) s \geq \frac{1}{2} A b s.
\nonumber \end{align} Therefore, we have, \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b}\Big[ \frac{1}{2}\sum_i d_i - \mu_n < K_n \Big] &= \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \frac{1}{2}\sum_i d_i - \mathbb{E}_{\mathbf{\Theta},a,b}\Big[\frac{1}{2}\sum_i d_i \Big] < K_n - \Big(\mathbb{E}_{\mathbf{\Theta},a,b}\Big[\frac{1}{2}\sum_i d_i \Big] - \mu_n \Big) \Big] \nonumber \\ &\leq \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \frac{1}{2}\sum_i d_i - \mathbb{E}_{\mathbf{\Theta},a,b}\Big[\frac{1}{2}\sum_i d_i \Big] < K_n - \frac{1}{2} Abs \Big]. \nonumber \end{align} Thus if $K_n < \frac{1}{2} Abs$, using Chebyshev's inequality, we have, \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b}\Big[ \frac{1}{2}\sum_i d_i - \mu_n < K_n \Big] &\leq \frac{8an}{\Big(K_n - \frac{1}{2} Abs \Big)^2 } . \label{eq:type2} \end{align} The type II error is controlled as soon as the RHS in \eqref{eq:type2} goes to zero as $n \to \infty$. Finally, it remains to choose $K_n$. We set $K_n = \frac{1}{4} Abs $ and note that, under the hypotheses of the theorem, both the type I and type II errors are controlled asymptotically under this choice. The proof is complete. \ \vrule height4pt width4pt depth0pt % \textit{Proof of Theorem \ref{thm:dense}\ref{thm:dense_lower}.} The proof proceeds by the usual argument of analyzing the second moment of the marginal likelihood. To this end, we fix a prior $\pi$ which sets the community assignment $\mathcal{C} = \{ 1, \cdots, \frac{n}{2} \}$. The prior $\pi$ selects $s/2$ locations at random from $\mathcal{C}$ and $s/2$ locations (assuming $s$ is even w.l.o.g.) independently from $\mathcal{C}^c$ to form the set $S(\mathbf{\Theta})$. Given $S(\mathbf{\Theta})$, we set $\theta_i = 1+ A$ for $ i \in S(\mathbf{\Theta})$ and $\theta_i = 1$ otherwise. Also, in this theorem, since all computations are under this chosen $\mathcal{C}$ we drop the notational dependence on $\mathcal{C}$ from $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}$, $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$, and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$. Now, given $\mathbf{\Theta}$, the likelihood ratio is given by \begin{align} &L_S = \frac{d \mathbb{P}_{\mathbf{\Theta}, a, b} } { d \mathbb{P}_{\mathbf{1},a,b}} \nonumber \\ &= \prod_{ i < j, \mathcal{C}(i) = \mathcal{C}(j) } (\theta_i \theta_j)^{Y_{ij}} \Big(\frac{1- \theta_i \theta_j \frac{a}{n}}{1- \frac{a}{n}} \Big)^{(1- Y_{ij})} \prod_{ i < j, \mathcal{C}(i) \neq \mathcal{C}(j) } (\theta_i \theta_j)^{Y_{ij}} \Big(\frac{1- \theta_i \theta_j \frac{b}{n}}{1- \frac{b}{n}} \Big)^{(1- Y_{ij})}. \nonumber \end{align} We define the marginal likelihood $L_{\pi} = \mathbb{E}_{S}[L_S]$, where $\mathbb{E}_S[\cdot]$ denotes the expectation with respect to $S\sim \pi$. It suffices to establish that, under the hypotheses of Theorem \ref{thm:dense}\ref{thm:dense_lower}, $\mathbb{E}_{\mathbf{1},a,b}[(L_{\pi}-1)^2] = o(1)$; indeed, for any sequence of tests the risk satisfies $\mathrm{Risk}_n \geq 1 - \frac{1}{2}\mathbb{E}_{\mathbf{1},a,b}|L_{\pi}-1| \geq 1 - \frac{1}{2}\sqrt{\mathbb{E}_{\mathbf{1},a,b}[L_{\pi}^2]-1}$ by the Cauchy--Schwarz inequality, so this implies that all tests are asymptotically powerless. To this end, we note that $\mathbb{E}_{\mathbf{1}, a,b} [ L_{\pi}] = \mathbb{E}_{S} [ \mathbb{E}_{\mathbf{1},a,b} [ L_{S}]] = 1$ by Fubini's theorem. The result thus follows once we establish that $\mathbb{E}_{\mathbf{1}, a,b} [ L_{\pi}^2] = 1 + o(1)$ under the assumptions of Theorem \ref{thm:dense}\ref{thm:dense_lower}. This will be established in the rest of the proof. We note that $\mathbb{E}_{\mathbf{1},a,b}[ ( L_{\pi})^2 ] = \mathbb{E}_{S_1, S_2} [ \mathbb{E}_{\mathbf{1}, a, b} [ L_{S_1} L_{S_2}]]$, where $S_1, S_2$ are iid draws from the measure $\pi$.
Setting $\mathbf{\Theta} : = \mathbf{\Theta}(S_1) = (\theta_1, \cdots, \theta_n)$ and $\overline{\mathbf{\Theta}} := \mathbf{\Theta}(S_2) = (\overline{\theta}_1, \cdots , \overline{\theta}_n)$ to denote the true parameter vectors corresponding to $S_1,S_2$ obtained under iid sampling from $\pi$, we have, \begin{align} L_{S_1}L_{S_2} = &\prod_{ i < j : \mathcal{C}(i)= \mathcal{C}(j) } (\theta_i \theta_j \overline{\theta}_i \overline{\theta}_j)^{Y_{ij}} \Big[\Big( \frac{1- \theta_i \theta_j \frac{a}{n} }{1- \frac{a}{n}}\Big) \Big(\frac{1- \overline{\theta}_i \overline{\theta}_j \frac{a}{n}}{1- \frac{a}{n} } \Big) \Big]^{(1- Y_{ij})} \times \nonumber\\ &\prod_{ i < j : \mathcal{C}(i) \neq \mathcal{C}(j) } (\theta_i \theta_j \overline{\theta}_i \overline{\theta}_j)^{Y_{ij}} \Big[\Big( \frac{1- \theta_i \theta_j \frac{b}{n} }{1- \frac{b}{n}}\Big) \Big(\frac{1- \overline{\theta}_i \overline{\theta}_j \frac{b}{n}}{1- \frac{b}{n} } \Big) \Big]^{(1- Y_{ij})} \nonumber \\ &:= \prod_{i < j } T_{ij}. \nonumber \end{align} Further, we decompose $L_{S_1} L_{S_2} = T_1 T_2 T_3$, where \begin{align} T_1 := \prod_{i<j : \mathcal{C}(i) = \mathcal{C}(j) = \mathcal{C} } T_{ij} , \,\,\,\, T_2 := \prod_{i < j : \mathcal{C}(i) = \mathcal{C}(j) = \mathcal{C}^c } T_{ij}, \,\,\,\, T_3 := \prod_{i < j : \mathcal{C}(i) \neq \mathcal{C}(j) } T_{ij}. \nonumber \end{align} Note that under the null hypothesis $H_0$, $T_1, T_2, T_3$ are independent, and thus to analyze $\mathbb{E}_{\mathbf{1}, a,b}[L_{S_1} L_{S_2}]$ it suffices to study $\mathbb{E}_{\mathbf{1}, a,b}[T_j]$ separately for $j =1,2,3$. We first analyze $T_1$. To this end, we define $Z_1 = | S_1 \cap S_2 \cap \mathcal{C} |$ and $Z_2 = | S_1 \cap S_2 \cap \mathcal{C}^c|$. Using independence of the edges, we have, \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_1] = \prod_{i < j : \mathcal{C}(i) = \mathcal{C}(j) = \mathcal{C}} \mathbb{E}_{\mathbf{1},a,b}[ T_{ij}]. \nonumber \end{align} We will encounter the following cases. \begin{enumerate} \item $i,j \in S_1 \cap S_2 \cap \mathcal{C}$. In this case, \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1 + A)^4 \frac{a}{n} + \Big( 1 - \frac{a}{n} \Big) \Big( \frac{1 - (1+A)^2 \frac{a}{n}}{1- \frac{a}{n}}\Big)^2 = 1 + \frac{\frac{a}{n}A^2}{1- \frac{a}{n} } (2+ A)^2. \nonumber \end{align} There are ${ Z_1 \choose 2 }$ such terms. \item $i, j \in S_1 \cap S_2^c \cap \mathcal{C}$ or $i,j \in S_1^c \cap S_2 \cap \mathcal{C}$ or $i \in S_1 \cap S_2 \cap \mathcal{C}$ while $j \in \mathcal{C} \cap S_1^c \cap S_2^c$ or $i \in S_1 \cap S_2^c \cap \mathcal{C}$, $j \in S_1^c \cap S_2 \cap \mathcal{C}$. In this case, \begin{align} \mathbb{E}_{\mathbf{1}, a,b} [T_{ij}] = (1+ A)^2 \frac{a}{n} + \Big( 1 - \frac{a}{n} \Big) \Big( \frac{1 - (1+ A) \frac{a}{n}}{1- \frac{a}{n}} \Big)^2 = 1 + \frac{\frac{a}{n} A^2 }{ 1 - \frac{a}{n}}. \nonumber \end{align} There are $2 { \frac{s}{2} - Z_1 \choose 2 } + Z_1 \Big( \frac{n}{2} - s + Z_1 \Big) + \Big( \frac{s}{2} - Z_1 \Big)^2$ many $(i,j)$ pairs which have this contribution. \item $i \in S_1 \cap S_2 \cap \mathcal{C}$, $j \in S_1 \cap S_2^c \cap \mathcal{C}$ or $i \in S_1 \cap S_2 \cap \mathcal{C}$, $ j \in S_1^c \cap S_2 \cap \mathcal{C}$. We have, \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1+ A)^3 \frac{a}{n} + \Big( \frac{1- (1+A)^2 \frac{a}{n}}{ 1- \frac{a}{n}} \Big) \Big(\frac{1- (1+A) \frac{a}{n} }{1- \frac{a}{n}} \Big) \Big(1 - \frac{a}{n} \Big) = 1 + \frac{\frac{a}{n} A^2 }{ 1- \frac{a}{n}}(2 + A).
\nonumber \end{align} There are $2 Z_1 \Big( \frac{s}{2} - Z_1 \Big)$ many terms with this contribution. \item For all other $(i,j)$ pairs, it is easy to check that $\mathbb{E}_{\mathbf{1}, a, b}[T_{ij}] =1$. \end{enumerate} We note that under the hypotheses of the theorem, $A \to 0$ as $n \to \infty$. Thus we have the upper bound \begin{align} &\mathbb{E}_{\mathbf{1}, a,b} [ T_1 ] \leq \Big( 1 + C \frac{\frac{a}{n} A^2 }{1 - \frac{a}{n}} \Big)^ {U_1}, \nonumber \\ &U_1 = {Z_1 \choose 2} + 2 {\frac{s}{2 } - Z_1 \choose 2} + Z_1 \Big( \frac{n}{2} - s + Z_1 \Big) + 2 Z_1 \Big( \frac{s}{2} - Z_1 \Big) + \Big( \frac{s}{2} - Z_1 \Big)^2, \nonumber \end{align} for some absolute constant $C >0$. Upon simplification, we obtain the bound \begin{align} \mathbb{E}_{\mathbf{1}, a, b} [T_1 ] \leq \Big( 1 + C \frac{\frac{a}{n}A^2 }{1 - \frac{a}{n}} \Big)^{\frac{n}{2} Z_1 + \frac{9}{8} s^2 }. \label{eq:T1bound} \end{align} A similar calculation yields an analogous bound for $T_2$: recalling that $Z_2 = | S_1 \cap S_2 \cap \mathcal{C}^c|$, we obtain \begin{align} &\mathbb{E}_{\mathbf{1},a,b} [ T_2] \leq \Big( 1 + C \frac{\frac{a}{n} A^2 }{1 - \frac{a}{n}} \Big)^{ \frac{n}{2} Z_2 + \frac{9}{8} s^2}. \label{eq:T2bound} \end{align} Finally, it remains to bound $T_3$. To this end, our analysis proceeds similarly to that of $T_1$ described above, and will thus be sketched briefly. Using independence of the edges under $H_0$, we have $\mathbb{E}_{\mathbf{1},a,b}[T_3] = \prod_{i<j: \mathcal{C}(i) \neq \mathcal{C}(j)} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}]$. We encounter the following cases: \begin{enumerate} \item $i \in S_1 \cap S_2 \cap \mathcal{C}$ and $j \in S_1 \cap S_2 \cap \mathcal{C}^c$. In this case, we have, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[T_{ij}] = (1 + A)^4 \frac{b}{n} + \Big( \frac{1 - (1+A)^2 \frac{b}{n} }{1 - \frac{b}{n} } \Big)^2 \Big( 1 - \frac{b}{n} \Big)= 1 + \frac{\frac{b}{n}A^2}{1- \frac{b}{n} } (2+ A)^2 . \nonumber \end{align} There are $Z_1 Z_2 $ terms with this contribution. \item $i \in S_1 \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2^c \cap \mathcal{C}^c$ or $ i \in S_1 \cap S_2 \cap \mathcal{C}, j \in S_1^c \cap S_2 \cap \mathcal{C}^c$, and the related pairs $i \in S_1 \cap S_2^c \cap \mathcal{C} , j \in S_1 \cap S_2 \cap \mathcal{C}^c$ and $ i \in S_1^c \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2 \cap \mathcal{C}^c$. Each pair contributes \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1+ A)^3 \frac{b}{n} + \Big( \frac{1- (1+A)^2 \frac{b}{n}}{ 1- \frac{b}{n}} \Big) \Big(\frac{1- (1+A) \frac{b}{n} }{1- \frac{b}{n}} \Big) \Big(1 - \frac{b}{n} \Big) = 1 + \frac{\frac{b}{n} A^2 }{ 1- \frac{b}{n}}(2 + A). \nonumber \end{align} There are $2 Z_1 \Big( \frac{s}{2} - Z_2 \Big) + 2 Z_2 \Big( \frac{s}{2} - Z_1 \Big)$ many terms with this contribution. \item $i \in S_1 \cap S_2 \cap \mathcal{C}, j \in S_1^c \cap S_2^c \cap \mathcal{C}^c$, or $i \in S_1^c \cap S_2^c \cap \mathcal{C}, j \in S_1 \cap S_2 \cap \mathcal{C}^c$ or $ i \in S_1 \cap S_2^c \cap \mathcal{C}, j \in S_1^c \cap S_2 \cap \mathcal{C}^c$ or $i \in S_1^c \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2^c \cap \mathcal{C}^c$. Each term contributes \begin{align} \mathbb{E}_{\mathbf{1},a,b} [T_{ij}] = (1+ A)^2 \frac{b}{n} + \Big( 1 - \frac{b}{n} \Big) \Big( \frac{1 - (1+ A) \frac{b}{n}}{1- \frac{b}{n}} \Big)^2 = 1 + \frac{\frac{b}{n} A^2 }{ 1 - \frac{b}{n}}.
\nonumber \end{align} There are $Z_1 \Big( \frac{n}{2} - s + Z_2 \Big) + Z_2 \Big( \frac{n}{2} - s + Z_1 \Big) + 2 \Big( \frac{s}{2} - Z_1 \Big) \Big( \frac{s}{2} - Z_2 \Big)$ many terms with this contribution. \item Every other pair has $\mathbb{E}_{\mathbf{1},a,b} [T_{ij}] =1$. \end{enumerate} Similar considerations as for $T_1$ above lead to the upper bound \begin{align} &\mathbb{E}_{\mathbf{1},a,b}[T_3] \leq \Big( 1 + C \frac{\frac{b}{n}A^2}{1 - \frac{b}{n}} \Big)^{V_1}, \nonumber \\ &V_1 = Z_1 Z_2 + 2 Z_1 \Big( \frac{s}{2} - Z_2 \Big) + 2 Z_2 \Big( \frac{s}{2} - Z_1 \Big)+ Z_1 \Big( \frac{n}{2} - s + Z_2 \Big) \nonumber \\ &+ Z_2 \Big( \frac{n}{2} - s + Z_1 \Big) + 2 \Big( \frac{s}{2} - Z_1 \Big) \Big( \frac{s}{2} - Z_2 \Big), \nonumber \end{align} for some absolute constant $C >0$. We note that $Z_1 , Z_2 \leq \frac{s}{2}$ and thus $V_1 \leq \frac{7s^2}{4} + \frac{n}{2} (Z_1 + Z_2)$. Finally, this yields the following upper bound on $T_3$: \begin{align} \mathbb{E}_{\mathbf{1},a,b}[T_3] \leq \Big( 1 + C \frac{\frac{b}{n}A^2}{1 - \frac{b}{n}} \Big)^{\frac{7s^2}{4} + \frac{n}{2} (Z_1 + Z_2)}. \label{eq:T3bound} \end{align} Combining \eqref{eq:T1bound}, \eqref{eq:T2bound} and \eqref{eq:T3bound}, we obtain, \begin{align} &\mathbb{E}_{\mathbf{1},a,b}[L_{S_1} L_{S_2}] \leq \Big( 1 + C \frac{\frac{a}{n}A^2 }{1 - \frac{a}{n}} \Big)^{\frac{n}{2} (Z_1 + Z_2) + \frac{9}{4} s^2 } \cdot \Big( 1 + C \frac{\frac{b}{n}A^2}{1 - \frac{b}{n}} \Big)^{\frac{7s^2}{4} + \frac{n}{2} (Z_1 + Z_2)} \nonumber \\ &\leq \exp{\Big[ \frac{9C}{4} s^2 A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } \Big) \Big] } \cdot \exp{\Big[ \frac{Cn}{2} (Z_1 + Z_2) A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } \Big)\Big]}. \nonumber \end{align} We note that under $\pi$, $Z_1, Z_2$ are independent Hypergeometric($\frac{n}{2}$, $\frac{s}{2}$, $\frac{s}{2}$ ) random variables. Therefore, each is stochastically dominated by a ${\rm{Bin}} (\frac{s}{2}, \frac{s}{n-s})$ random variable, and hence $Z_1 + Z_2 \lesssim {\rm{Bin}}\Big( s, \frac{s}{n-s} \Big)$. This implies that \begin{align} &\mathbb{E}_{S_1, S_2}\Big[ \exp{\Big[ \frac{Cn}{2} (Z_1 + Z_2) A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } \Big)\Big]} \Big] \nonumber \\ &\leq \Big( 1 - \frac{s}{n-s} + \frac{s}{n-s} \exp{\Big[ \frac{Cn}{2} A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } } \Big) \Big] \Big)^{s}\nonumber \\ &\leq \exp{\Big[ \frac{s^2}{n-s} \Big( {\rm{e}}^{ C\frac{A^2 n}{2} ( \frac{a/n}{1- a/n} + \frac{b/n}{1- b/n} ) } -1 \Big) \Big] }. \label{eq: bound_temp1} \end{align} Finally, we note that under the assumptions of this theorem, $\alpha \leq \frac{1}{2}$ implies that $A^2 a \to 0$ as $n\to \infty$. Thus, using the bound obtained in \eqref{eq: bound_temp1}, we obtain, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_{\pi}^2] &\leq \exp{\Big[ \frac{9C}{4} s^2 A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } \Big) \Big] } \cdot \exp{\Big[ \frac{s^2}{n-s} \Big( {\rm{e}}^{ C\frac{A^2 n}{2} ( \frac{a/n}{1- a/n} + \frac{b/n}{1- b/n} ) } -1 \Big) \Big] } \nonumber \\ &\leq \exp{\Big[ C_0 s^2 A^2 \Big( \frac{\frac{a}{n}}{1- \frac{a}{n}} + \frac{\frac{b}{n}}{1- \frac{b}{n} } \Big) \Big] } = 1 + o(1), \label{eq:useful_upper} \end{align} where $C_0 >0 $ is a sufficiently large absolute constant, and the final equality follows from the assumptions of the theorem.
This completes the proof.\ \vrule height4pt width4pt depth0pt \subsection{Proof of Theorem \ref{thm:sparse_lower}} We give a common proof for both the cases $\tau_a = \tau_b =0$ and $\tau_a > \tau_b >0$. The proof proceeds by an analysis of the truncated likelihood ratio under the least favorable prior. To this end, consider the prior $\pi$ which fixes the partition $\mathcal{C} = \{1, \cdots, n/2 \}$. For any $i \in \{1, \cdots , n\}$, let $\mathcal{C}(i) = \mathcal{C}$ if $i \in \mathcal{C}$ and $\mathcal{C}(i) = \mathcal{C}^c$ otherwise. Further, the prior chooses $s/2$ elements (assuming $s$ is even w.l.o.g.) randomly from $\mathcal{C}$ and $\mathcal{C}^c$ respectively to form the set $S(\mathbf{\Theta})$. Given $S(\mathbf{\Theta})$, we set $\theta_i = 1+ A$ for $i \in S(\mathbf{\Theta})$ and $\theta_i = 1$ otherwise. In the rest of the proof, we denote the set $S(\mathbf{\Theta})$ by $S$. Also, in this theorem, since all computations are under this chosen $\mathcal{C}$ we drop the notational dependence on $\mathcal{C}$ from $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}$, $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$, and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$. Now, for any such given $\mathbf{\Theta}$, the likelihood ratio is given by \begin{align} &L_S = \frac{d \mathbb{P}_{\mathbf{\Theta}, a, b} } { d \mathbb{P}_{\mathbf{1},a,b}} \nonumber \\ &= \prod_{ i < j, \mathcal{C}(i) = \mathcal{C}(j) } (\theta_i \theta_j)^{Y_{ij}} \Big(\frac{1- \theta_i \theta_j \frac{a}{n}}{1- \frac{a}{n}} \Big)^{(1- Y_{ij})} \prod_{ i < j, \mathcal{C}(i) \neq \mathcal{C}(j) } (\theta_i \theta_j)^{Y_{ij}} \Big(\frac{1- \theta_i \theta_j \frac{b}{n}}{1- \frac{b}{n}} \Big)^{(1- Y_{ij})}. \nonumber \end{align} For $i \in S$, with a slight abuse of notation, we define the out-degree to vertices in $\mathcal{C}(i) \cap S^c$ as $d_i(1) = \sum_{j \in \mathcal{C}(i)\cap S^c} Y_{ij}$, while the out-degree to vertices in the opposite block corresponds to $ d_i(2) = \sum_{j \in \mathcal{C}(i)^c \cap S^c} Y_{ij}$. Under $H_0$, we have, \begin{align} \mathbb{E}_{\mathbf{1},a,b} \Big[d_i(1) \Big] = \frac{n-s}{2} \cdot \frac{a}{n} ,\,\,\,\, \mathbb{E}_{\mathbf{1},a,b} \Big[d_i(2) \Big]= \frac{n-s}{2} \cdot \frac{b}{n}, \nonumber \\ \mathop{\rm Var}\nolimits_{\mathbf{1},a,b} [d_i(1)] = \frac{n-s}{2} \cdot \frac{a}{n} \cdot \Big(1- \frac{a}{n} \Big),\,\,\,\, \mathop{\rm Var}\nolimits_{\mathbf{1},a,b} [d_i(2)] = \frac{n-s}{2} \cdot \frac{b}{n} \cdot \Big( 1 -\frac{b}{n} \Big). \nonumber \end{align} Further, we define the constants \begin{align} \beta_1^* = \frac{1}{1- \tau_a} \frac{1}{\sqrt{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1- \tau_b}}},\,\,\,\, \beta_2^* = \frac{1}{1- \tau_b} \frac{1}{\sqrt{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1- \tau_b}}}.\nonumber \end{align} For $i \in S$, consider the ``good'' event \begin{align} \Gamma_{S,i} = \Big\{ \frac{\beta_1^*(d_i(1) - \mathbb{E}_{\mathbf{1}, a,b} [d_i(1)]) + \beta_2^* (d_i(2) - \mathbb{E}_{\mathbf{1},a,b} [d_i(2)])}{\sqrt{(\beta_1^*)^2 \mathop{\rm Var}\nolimits_{\mathbf{1},a,b}[d_i(1)] + (\beta_2^*)^2 \mathop{\rm Var}\nolimits_{\mathbf{1},a,b}[d_i(2)]}} \leq \sqrt{2 \log n}\Big\}. \nonumber \end{align} We set $\Gamma_S = \cap_{i \in S} \Gamma_{S,i}$. We define $\tilde{L}_{\pi} = \mathbb{E}_{S}[L_S \mathbf{1}_{\Gamma_S}]$, where $\mathbb{E}_{S}[\cdot]$ denotes the expectation with respect to $S\sim \pi$.
Then it suffices to establish that if $A$ is of the form \eqref{eq:signal_const} with $C < C_{\mathrm{sparse}}(\alpha)$, $\mathbb{E}_{\mathbf{1},a,b}[\tilde{L}_{\pi}] = \mathbb{E}_{\mathbf{1},a,b}[(\tilde{L}_{\pi} )^2] = 1 + o(1)$. This will complete the proof of the required lower bound. To this end, we note that by Fubini's theorem, $\mathbb{E}_{\mathbf{1},a,b}[\tilde{L}_{\pi}] = \mathbb{E}_{S} [ \mathbb{E}_{\mathbf{1},a,b}[ L_S \mathbf{1}_{\Gamma_S} ]]$. Further, we have, $\mathbb{E}_{\mathbf{1},a,b}[L_S \mathbf{1}_{\Gamma_S}] = 1 - \mathbb{E}_{\mathbf{1},a,b}[L_S \mathbf{1}_{\Gamma_S^c} ]$ and that \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_S \mathbf{1}_{\Gamma_S^c}] \leq \sum_{i \in S} \mathbb{E}_{\mathbf{1},a,b}[ L_S \mathbf{1}_{\Gamma_{S,i}^c}] = \sum_{i \in S} \mathbb{P}\Big[ \frac{\beta_1^* ( X - \frac{n-s}{2}\frac{a}{n}) + \beta_2^* (Y- \frac{n-s}{2} \frac{b}{n}) }{\sqrt{(\beta_1^* )^2 \frac{n-s}{2} \frac{a}{n} \Big( 1 - \frac{a}{n} \Big) + (\beta_2^*)^2 \frac{n-s}{2} \frac{b}{n} \Big( 1 - \frac{b}{n} \Big)}} > \sqrt{2\log n} \Big], \nonumber \end{align} using Lemma \ref{lemma:binomial_change_of_measure}, with $X \sim {\rm{Bin}}( \frac{n-s}{2}, \frac{a}{n}(1 + A) )$, $Y \sim {\rm{Bin}}(\frac{n-s}{2}, \frac{b}{n}(1+A))$. We note that \begin{align} &\mathbb{P}\Big[ \frac{\beta_1^* ( X - \frac{n-s}{2}\frac{a}{n}) + \beta_2^* (Y- \frac{n-s}{2} \frac{b}{n}) }{\sqrt{(\beta_1^* )^2 \frac{n-s}{2} \frac{a}{n} \Big( 1 - \frac{a}{n} \Big) + (\beta_2^*)^2 \frac{n-s}{2} \frac{b}{n} \Big( 1 - \frac{b}{n} \Big)}} > \sqrt{2\log n} \Big] \nonumber \\ &=\mathbb{P}\Big[ \frac{\beta_1^* (X - \mathbb{E}[X]) + \beta_2^* (Y- \mathbb{E}[Y])}{\sqrt{\mathop{\rm Var}\nolimits (\beta_1^* X + \beta_2^* Y ) }} > \sqrt{2 \log n} - \sqrt{C \log n} \sqrt{\frac{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1 - \tau_b }}{\tau_a (1 - \tau_a) + \tau_b(1-\tau_b)}} \Big]. \nonumber \\ &\leq \exp{\Big\{ - \log n \Big(1- \sqrt{\frac{C}{2} \frac{ \frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b} }{\tau_a(1-\tau_a) + \tau_b (1-\tau_b)}} \Big)^2 (1+o(1)) \Big\} } \nonumber \\ &= n^{- \Big(1- \sqrt{\frac{C}{2} \frac{ \frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b} }{\tau_a(1-\tau_a) + \tau_b (1-\tau_b)}} \Big)^2 (1+o(1))}, \nonumber \end{align} using Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail_pure}. Thus we finally have, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_S \mathbf{1}_{\Gamma_S^c}] \leq n^{1- \alpha - \Big(1- \sqrt{\frac{C}{2} \frac{ \frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b} }{\tau_a(1-\tau_a) + \tau_b (1-\tau_b)}} \Big)^2 + o(1) } = o(1) \nonumber \end{align} if $C < C_{\mathrm{sparse}}(\alpha)$. This completes the first part of the proof. To study the truncated second moment, we note that $\mathbb{E}_{\mathbf{1},a,b} [ ( \tilde{L}_{\pi})^2 ] = \mathbb{E}_{\mathbf{1}, a,b} [ \mathbb{E}_{S_1,S_2}[ L_{S_1} L_{S_2} \mathbf{1}_{\Gamma_{S_1} \cap \Gamma_{S_2}}] ]$, where $S_1, S_2$ are iid draws from the measure $\pi$. 
Now, we note that on the event $\Gamma_{S_1} \cap \Gamma_{S_2}$, for $i \in S_1 \cap S_2$, we have, \begin{align} &\beta_1^* \Big( \sum_{j \in S_1^c \cap S_2^c \cap \mathcal{C}(i)} Y_{ij} \Big) + \beta_2^* \Big( \sum_{j \in S_1^c \cap S_2^c \cap \mathcal{C}(i)^c } Y_{ij} \Big) \leq \nonumber\\ &\beta_1^* \Big(\frac{n-s}{2} \Big)\frac{a}{n} + \beta_2^* \Big( \frac{n-s}{2} \Big) \frac{b}{n} + \sqrt{2 \log n} \sqrt{\frac{n-s}{2} \Big( (\beta_1^*)^2 \frac{a}{n} \Big( 1- \frac{a}{n}\Big) + (\beta_2^*)^2 \frac{b}{n} \Big(1 - \frac{b}{n} \Big) \Big) }. \nonumber \end{align} For $i \in S_1 \cap S_2$, we denote the above event by $\mathscr{C}_{S_1, S_2, i}$. Finally, we set $\mathscr{C}_{S_1, S_2} = \cap_{ i \in S_1 \cap S_2} \mathscr{C}_{S_1, S_2, i}$. The above discussion implies that $\Gamma_{S_1} \cap \Gamma_{S_2} \subseteq \mathscr{C}_{S_1, S_2}$ and therefore $\mathbb{E}_{\mathbf{1},a,b}[L_{S_1}L_{S_2} \mathbf{1}_{\Gamma_{S_1}\cap \Gamma_{S_2}}] \leq \mathbb{E}_{\mathbf{1},a,b}[L_{S_1}L_{S_2} \mathbf{1}_{\mathscr{C}_{S_1,S_2}}]$. Setting $\mathbf{\Theta} : = \mathbf{\Theta}(S_1) = (\theta_1, \cdots, \theta_n)$ and $\overline{\mathbf{\Theta}} := \mathbf{\Theta}(S_2) = (\overline{\theta}_1, \cdots , \overline{\theta}_n)$ to denote the true parameter vectors corresponding to $S_1,S_2$ obtained under iid sampling from $\pi$, we have, \begin{align} L_{S_1}L_{S_2} = &\prod_{ i < j : \mathcal{C}(i)= \mathcal{C}(j) } (\theta_i \theta_j \overline{\theta}_i \overline{\theta}_j)^{Y_{ij}} \Big[\Big( \frac{1- \theta_i \theta_j \frac{a}{n} }{1- \frac{a}{n}}\Big) \Big(\frac{1- \overline{\theta}_i \overline{\theta}_j \frac{a}{n}}{1- \frac{a}{n} } \Big) \Big]^{(1- Y_{ij})} \times \nonumber\\ &\prod_{ i < j : \mathcal{C}(i) \neq \mathcal{C}(j) } (\theta_i \theta_j \overline{\theta}_i \overline{\theta}_j)^{Y_{ij}} \Big[\Big( \frac{1- \theta_i \theta_j \frac{b}{n} }{1- \frac{b}{n}}\Big) \Big(\frac{1- \overline{\theta}_i \overline{\theta}_j \frac{b}{n}}{1- \frac{b}{n} } \Big) \Big]^{(1- Y_{ij})} \nonumber\\ &:= \gamma_0 \prod_{i \in S_1 \cap S_2} \tilde{T}_i , \nonumber \end{align} where \begin{align} \tilde{T}_i = &\prod_{j \in S_1^c \cap S_2^c \cap \mathcal{C}(i)} (1+A)^{2 Y_{ij}} \Big[\frac{1- (1+A)\frac{a}{n}}{1- \frac{a}{n}} \Big]^{2(1-Y_{ij})} \prod_{j \in S_1^c \cap S_2^c \cap \mathcal{C}(i)^c} (1+ A)^{2 Y_{ij}} \Big[ \frac{1- (1+A)\frac{b}{n}}{1- \frac{b}{n}} \Big]^{2(1-Y_{ij})}. \nonumber \end{align} Further, it is easy to see that under $H_0$, $\gamma_0$ and $\prod_{i \in S_1 \cap S_2} \tilde{T}_i$ are independent and therefore \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_{S_1}L_{S_2} \mathbf{1}_{\mathscr{C}_{S_1,S_2}}] = \mathbb{E}_{\mathbf{1},a,b}[\gamma_0] \mathbb{E}_{\mathbf{1},a,b}\Big[\Big( \prod_{i \in S_1 \cap S_2} \tilde{T}_i \Big)\mathbf{1}_{\mathscr{C}_{S_1,S_2}} \Big]. \nonumber \end{align} We will use the following lemma. The proof is similar to the case $\alpha \leq 1/2$ and will thus be deferred to the end of the section. \begin{lemma} \label{lemma:gamma0} As $n \to \infty$, $\mathbb{E}_{\mathbf{1},a,b}[\gamma_0] = 1+o(1)$, uniformly over all $S_1, S_2 \subset \{1, \cdots, n\}$ with $|S_i| =s= n^{1- \alpha}$ and $|S_i \cap \mathcal{C} | = \frac{s}{2}$, $i=1,2$. \end{lemma} We will complete the lower bound proof assuming Lemma \ref{lemma:gamma0}.
Using Lemma \ref{lemma:binomial_change_of_measure}, we have, setting $Z_1 = |S_1 \cap S_2 \cap \mathcal{C}|$ and $Z_2 = | S_1 \cap S_2 \cap \mathcal{C}^c|$, \begin{align} &\mathbb{E}_{\mathbf{1},a,b} [\tilde{T}_i \mathbf{1}_{\mathscr{C}_{S_1, S_2,i}} ] = \Big( 1 + \frac{\frac{a}{n} A^2 }{ 1 - \frac{a}{n}} \Big)^{\frac{n}{2} - s + Z_1} \Big( 1 + \frac{\frac{b}{n} A^2}{1- \frac{b}{n}} \Big)^{\frac{n}{2} - s + Z_2} \times \nonumber \\ &\mathbb{P} \Big[ \beta_1^* X' + \beta_2^* Y' \leq \frac{n-s}{2} \Big( \beta_1^*\frac{a}{n} + \beta_2^* \frac{b}{n} \Big)+ \sqrt{2\log n} \sqrt{ \frac{n-s}{2} \Big((\beta_1^*)^2 \frac{a}{n} \Big(1 - \frac{a}{n} \Big) + (\beta_2^*)^2 \frac{b}{n} \Big(1- \frac{b}{n} \Big) \Big)} \Big], \nonumber \end{align} where $X' \sim {\rm{Bin}} \Big(\frac{n}{2} - s + Z_1, \frac{\frac{a}{n}(1+ A)^2}{1 + \frac{\frac{a}{n} A^2}{1- \frac{a}{n}}} \Big)$ and $Y' \sim {\rm{Bin}} \Big( \frac{n}{2} - s + Z_2 , \frac{\frac{b}{n}(1+ A)^2}{1 + \frac{\frac{b}{n} A^2}{1- \frac{b}{n}}} \Big)$. Upon using a Taylor approximation, we have, \begin{align} &\mathbb{P} \Big[ \beta_1^* X' + \beta_2^* Y' \leq \frac{n-s}{2} \Big( \beta_1^*\frac{a}{n} + \beta_2^* \frac{b}{n} \Big)+ \sqrt{2\log n} \sqrt{ \frac{n-s}{2} \Big((\beta_1^*)^2 \frac{a}{n} \Big(1 - \frac{a}{n} \Big) + (\beta_2^*)^2 \frac{b}{n} \Big(1- \frac{b}{n} \Big) \Big)} \Big] \nonumber\\ &= \mathbb{P}\Big[ \frac{\beta_1^* (X' - \mathbb{E}[X']) + \beta_2^* (Y' - \mathbb{E}[Y'])}{\sqrt{\mathop{\rm Var}\nolimits(\beta_1^* X' + \beta_2^* Y')}} < \sqrt{2 \log n} \Big( 1 - 2 C(\tau_a, \tau_b) (1+ o(1)) \Big) \Big], \nonumber \end{align} where we set \begin{align} C(\tau_a, \tau_b) = \sqrt{\frac{C}{2} \frac{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1- \tau_b} }{\tau_a (1 - \tau_a) + \tau_b(1-\tau_b)} } . \nonumber \end{align} We now distinguish two cases. Consider first the case $2 C(\tau_a, \tau_b) <1$. In this case, we bound the above probability by $1$. Therefore, using Lemma \ref{lemma:gamma0}, we have, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_{S_1} L_{S_2} \mathbf{1}_{\mathscr{C}_{S_1,S_2}} ] &\leq (1+o(1)) \Big[ \Big( 1 + \frac{\frac{a}{n} A^2 }{1- \frac{a}{n}}\Big)^{\frac{n}{2} - s + Z_1} \Big( 1 + \frac{\frac{b}{n} A^2 }{1- \frac{b}{n}} \Big)^{\frac{n}{2} -s + Z_2 } \Big]^{Z_1 + Z_2} \nonumber \\ &\leq (1+o(1)) \exp{\Big[ \frac{n}{2} (Z_1 + Z_2) A^2 \Big( \frac{\frac{a}{n} }{1 - \frac{a}{n}} + \frac{ \frac{b}{n} }{ 1 - \frac{b}{n}} \Big) \Big]} \nonumber \\ &\leq (1+o(1)) \exp{\Big[ (Z_1 + Z_2) \Big( \frac{ \frac{\tau_a}{1 - \tau_a} + \frac{\tau_b }{1- \tau_b}}{\tau_a ( 1 - \tau_a) + \tau_b (1 - \tau_b) } \Big) C \log n\Big]} = (1+o(1))\, n^{2 C(\tau_a, \tau_b)^2 (Z_1 + Z_2)}, \nonumber \end{align} noting that $C \Big( \frac{ \frac{\tau_a}{1 - \tau_a} + \frac{\tau_b }{1- \tau_b}}{\tau_a ( 1 - \tau_a) + \tau_b (1 - \tau_b) } \Big) = 2 C(\tau_a, \tau_b)^2$. Now, $Z_1 + Z_2$ can be dominated stochastically by $U \sim {\rm{Bin}}( s, \frac{s}{n})$. Therefore, \begin{align} \mathbb{E}_{\mathbf{1}, a,b} [ \tilde{L}_{\pi}^2]& \leq \mathbb{E}_{S_1, S_2} [ \mathbb{E}_{\mathbf{1},a,b}[ L_{S_1} L_{S_2} \mathbf{1}_{\mathscr{C}_{S_1, S_2}} ] ] \nonumber \\ &\leq (1+o(1))\, \mathbb{E}[ n^{ 2 C(\tau_a, \tau_b)^2 U }] = (1+o(1)) \Big[ \Big( 1 - \frac{s}{n} \Big) + \frac{s}{n} n^{2 C(\tau_a, \tau_b)^2 } \Big]^s \nonumber \\ &\leq (1+o(1)) \exp {\Big[ \frac{s^2}{n} n^{2 C(\tau_a, \tau_b)^2 } \Big]} = (1+o(1)) \exp{[n^{1 - 2 \alpha + 2 C(\tau_a, \tau_b)^2} ]} = 1 + o(1) \nonumber \end{align} if $C < 2 \Big( \frac{\tau_a(1-\tau_a) + \tau_b(1-\tau_b)}{\frac{\tau_a}{1- \tau_a} + \frac{\tau_b}{1-\tau_b}} \Big) \Big(\alpha - \frac{1}{2} \Big) $. This concludes the proof in this case. Next, we deal with the case $2C(\tau_a, \tau_b) >1$. It is easy to see that for $C < C_{\mathrm{sparse}}(\alpha)$, this is possible only for $\alpha > 3/4$.
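To spell out the last assertion (a routine calculation): with $R$ as in the remark following the first-moment computation, the condition $2 C(\tau_a, \tau_b) > 1$ reads $\sqrt{CR/2} > \frac{1}{2}$, while the first part of the proof already requires $\sqrt{CR/2} < 1 - \sqrt{1-\alpha}$. Combining the two, \begin{align} \frac{1}{2} < \sqrt{\frac{CR}{2}} < 1 - \sqrt{1-\alpha} \implies \sqrt{1- \alpha} < \frac{1}{2} \implies \alpha > \frac{3}{4}. \nonumber \end{align}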
In this case, using Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_equal_pure}, \begin{align} &\mathbb{P}\Big[ \frac{\beta_1^* (X' - \mathbb{E}[X']) + \beta_2^* (Y' - \mathbb{E}[Y'])}{\sqrt{\mathop{\rm Var}\nolimits(\beta_1^* X' + \beta_2^* Y')}} < \sqrt{2 \log n} \Big( 1 - 2 C(\tau_a, \tau_b) \Big) \Big] \nonumber\\ &\leq \exp{ \Big[ - \log n \Big( 1 - 2 C(\tau_a, \tau_b) \Big)^2 (1+o(1)) \Big]}= n^{- (1- 2C(\tau_a,\tau_b) )^2 (1 +o(1)) }. \nonumber \end{align} In this case, upon repeating the calculation above, we obtain, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[ \tilde{L}_{\pi}^2] &\leq \mathbb{E}_{U} \Big[ \exp{\{ U \log n f(\tau_a, \tau_b) \} }\Big]\leq \exp{ \{ n^{1- 2 \alpha + f(\tau_a, \tau_b)} \} }, \nonumber \\ f(\tau_a, \tau_b) &= 2 C(\tau_a, \tau_b) ^2 - (1 - 2 C(\tau_a, \tau_b))^2 . \nonumber \end{align} It is easy to see by direct computation that $1- 2\alpha + f(\tau_a, \tau_b) <0$ when $C < C_{\mathrm{sparse}}(\alpha)$. The proof will thus be complete once we establish Lemma \ref{lemma:gamma0}.\ \vrule height4pt width4pt depth0pt\\ \begin{proof}[Proof of Lemma \ref{lemma:gamma0}:] The proof borrows heavily from that of Theorem \ref{thm:dense}\ref{thm:dense_lower}. Upon using the same notation as in the proof of Theorem \ref{thm:dense}\ref{thm:dense_lower}, we have, $\gamma_0 = \prod_{\{i,j\} \in \mathscr{A}} T_{ij}$, where \begin{align} \mathscr{A} = \{ \{i,j \} : i \in S_1 \cap S_2 , j \in S_1^c \cap S_2^c \}^c. \nonumber \end{align} As in the proof of Theorem \ref{thm:dense}\ref{thm:dense_lower}, we decompose $ \gamma_0 = T_1 T_2 T_3$, with $T_l = \prod_{\{i, j\} \in \mathscr{A}_l} T_{ij}$, $l=1,2,3$, where we set \begin{align} \mathscr{A}_1 &= \{ \{i,j\} \in \mathscr{A} : i, j \in \mathcal{C} \}, \nonumber \\ \mathscr{A}_2 &= \{ \{i,j\} \in \mathscr{A} : i,j \in \mathcal{C}^c \}, \nonumber\\ \mathscr{A}_3 &= \{ \{i,j\} \in \mathscr{A} : i \in \mathcal{C}, j \in \mathcal{C}^c \}. \nonumber \end{align} We note that under $\mathbb{P}_{\mathbf{1},a,b}[\cdot]$, $T_1$, $T_2$, and $T_3$ are independent; we will bound each expectation in turn. Further, using independence of the edges, we have, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[T_1] = \prod_{\{i, j\} \in \mathscr{A}_1} \mathbb{E}_{\mathbf{1},a,b}[T_{ij}]. \nonumber \end{align} We will encounter the following cases. \begin{enumerate} \item $i,j \in S_1 \cap S_2 \cap \mathcal{C}$. In this case, \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1 + A)^4 \frac{a}{n} + \Big( 1 - \frac{a}{n} \Big) \Big( \frac{1 - (1+A)^2 \frac{a}{n}}{1- \frac{a}{n}}\Big)^2 = 1 + \frac{\frac{a}{n}A^2}{1- \frac{a}{n} } (2+ A)^2. \nonumber \end{align} There are ${ Z_1 \choose 2 }$ such terms. \item $i, j \in S_1 \cap S_2^c \cap \mathcal{C}$ or $i,j \in S_1^c \cap S_2 \cap \mathcal{C}$ or $i \in S_1 \cap S_2^c \cap \mathcal{C}$, $j \in S_1^c \cap S_2 \cap \mathcal{C}$. In this case, \begin{align} \mathbb{E}_{\mathbf{1}, a,b} [T_{ij}] = (1+ A)^2 \frac{a}{n} + \Big( 1 - \frac{a}{n} \Big) \Big( \frac{1 - (1+ A) \frac{a}{n}}{1- \frac{a}{n}} \Big)^2 = 1 + \frac{\frac{a}{n} A^2 }{ 1 - \frac{a}{n}}. \nonumber \end{align} There are $2 { \frac{s}{2} - Z_1 \choose 2 } + \Big( \frac{s}{2} - Z_1 \Big)^2$ pairs $(i,j)$ with this contribution. \item $i \in S_1 \cap S_2 \cap \mathcal{C}$, $j \in S_1 \cap S_2^c \cap \mathcal{C}$ or $i \in S_1 \cap S_2 \cap \mathcal{C}$, $ j \in S_1^c \cap S_2 \cap \mathcal{C}$.
We have, \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1+ A)^3 \frac{a}{n} + \Big( \frac{1- (1+A)^2 \frac{a}{n}}{ 1- \frac{a}{n}} \Big) \Big(\frac{1- (1+A) \frac{a}{n} }{1- \frac{a}{n}} \Big) \Big(1 - \frac{a}{n} \Big) = 1 + \frac{\frac{a}{n} A^2 }{ 1- \frac{a}{n}}(2 + A). \nonumber \end{align} There are $2 Z_1 \Big( \frac{s}{2} - Z_1 \Big)$ terms with this contribution. \item For all other $(i,j)$ pairs, it is easy to check that $\mathbb{E}_{\mathbf{1}, a, b}[T_{ij}] =1$. \end{enumerate} We note that under the hypotheses of the theorem, $A \to 0$ as $n \to \infty$. Thus we have the upper bound \begin{align} &\mathbb{E}_{\mathbf{1}, a,b} [ T_1 ] \leq \Big( 1 + K \frac{\frac{a}{n} A^2 }{1 - \frac{a}{n}} \Big)^ {U_1}, \nonumber \\ &U_1 = {Z_1 \choose 2} + 2 {\frac{s}{2 } - Z_1 \choose 2} + 2 Z_1 \Big( \frac{s}{2} - Z_1 \Big) + \Big( \frac{s}{2} - Z_1 \Big)^2, \nonumber \end{align} for some absolute constant $K >0$. Upon simplification (using $Z_1 \leq \frac{s}{2}$, so that ${Z_1 \choose 2} \leq \frac{s^2}{8}$, $2 {\frac{s}{2} - Z_1 \choose 2} \leq \frac{s^2}{4}$, $2 Z_1 \big( \frac{s}{2} - Z_1 \big) \leq \frac{s^2}{8}$, and $\big( \frac{s}{2} - Z_1 \big)^2 \leq \frac{s^2}{4}$), we obtain the bound \begin{align} \mathbb{E}_{\mathbf{1}, a, b} [T_1 ] \leq \Big( 1 + K \frac{\frac{a}{n}A^2 }{1 - \frac{a}{n}} \Big)^{\frac{3}{4} s^2 }. \nonumber \end{align} A similar calculation yields an analogous bound for $T_2$: recalling that $Z_2 = | S_1 \cap S_2 \cap \mathcal{C}^c|$, we obtain \begin{align} &\mathbb{E}_{\mathbf{1},a,b} [ T_2] \leq \Big( 1 + K \frac{\frac{a}{n} A^2 }{1 - \frac{a}{n}} \Big)^{ \frac{3}{4} s^2}. \nonumber \end{align} Finally, it remains to bound $T_3$. We follow the same argument, and encounter the following cases. \begin{enumerate} \item $i \in S_1 \cap S_2 \cap \mathcal{C}$ and $j \in S_1 \cap S_2 \cap \mathcal{C}^c$. In this case, we have, \begin{align} \mathbb{E}_{\mathbf{1},a,b}[T_{ij}] = (1 + A)^4 \frac{b}{n} + \Big( \frac{1 - (1+A)^2 \frac{b}{n} }{1 - \frac{b}{n} } \Big)^2 \Big( 1 - \frac{b}{n} \Big)= 1 + \frac{\frac{b}{n}A^2}{1- \frac{b}{n} } (2+ A)^2 . \nonumber \end{align} There are $Z_1 Z_2 $ terms with this contribution. \item $i \in S_1 \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2^c \cap \mathcal{C}^c$ or $ i \in S_1 \cap S_2 \cap \mathcal{C}, j \in S_1^c \cap S_2 \cap \mathcal{C}^c$ and the related pairs $i \in S_1 \cap S_2^c \cap \mathcal{C} , j \in S_1 \cap S_2 \cap \mathcal{C}^c$ and $ i \in S_1^c \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2 \cap \mathcal{C}^c$. Each pair contributes \begin{align} \mathbb{E}_{\mathbf{1}, a,b}[T_{ij}] = (1+ A)^3 \frac{b}{n} + \Big( \frac{1- (1+A)^2 \frac{b}{n}}{ 1- \frac{b}{n}} \Big) \Big(\frac{1- (1+A) \frac{b}{n} }{1- \frac{b}{n}} \Big) \Big(1 - \frac{b}{n} \Big) = 1 + \frac{\frac{b}{n} A^2 }{ 1- \frac{b}{n}}(2 + A). \nonumber \end{align} There are $2 Z_1 \Big( \frac{s}{2} - Z_2 \Big) + 2 Z_2 \Big( \frac{s}{2} - Z_1 \Big)$ terms with this contribution. \item $ i \in S_1 \cap S_2^c \cap \mathcal{C}, j \in S_1^c \cap S_2 \cap \mathcal{C}^c$ or $i \in S_1^c \cap S_2 \cap \mathcal{C}, j \in S_1 \cap S_2^c \cap \mathcal{C}^c$. Each term contributes \begin{align} \mathbb{E}_{\mathbf{1},a,b} [T_{ij}] = (1+ A)^2 \frac{b}{n} + \Big( 1 - \frac{b}{n} \Big) \Big( \frac{1 - (1+ A) \frac{b}{n}}{1- \frac{b}{n}} \Big)^2 = 1 + \frac{\frac{b}{n} A^2 }{ 1 - \frac{b}{n}}. \nonumber \end{align} There are $ 2 \Big( \frac{s}{2} - Z_1 \Big) \Big( \frac{s}{2} - Z_2 \Big)$ terms with this contribution. \item Every other pair has $\mathbb{E}_{\mathbf{1},a,b} [T_{ij}] =1$.
\end{enumerate} Similar considerations as for $T_1$ above lead to the upper bound \begin{align} &\mathbb{E}_{\mathbf{1},a,b}[T_3] \leq \Big( 1 + K \frac{\frac{b}{n}A^2}{1 - \frac{b}{n}} \Big)^{V_1}, \nonumber \\ &V_1 = Z_1 Z_2 + 2 Z_1 \Big( \frac{s}{2} - Z_2 \Big) + 2 Z_2 \Big( \frac{s}{2} - Z_1 \Big)+ 2 \Big( \frac{s}{2} - Z_1 \Big) \Big( \frac{s}{2} - Z_2 \Big), \nonumber \end{align} for some absolute constant $K >0$. We note that $Z_1 , Z_2 \leq \frac{s}{2}$, so that the four terms above are at most $\frac{s^2}{4}$, $\frac{s^2}{2}$, $\frac{s^2}{2}$, and $\frac{s^2}{2}$, respectively, and thus $V_1 \leq \frac{7s^2}{4}$. Finally, this yields the following upper bound on $\mathbb{E}_{\mathbf{1},a,b}[T_3]$: \begin{align} \mathbb{E}_{\mathbf{1},a,b}[T_3] \leq \Big( 1 + K \frac{\frac{b}{n}A^2}{1 - \frac{b}{n}} \Big)^{\frac{7s^2}{4}}. \nonumber \end{align} The rest of the proof can be completed following the same argument as in that of Theorem \ref{thm:dense}\ref{thm:dense_lower}. \end{proof} \subsection{Proof of Theorem \ref{thm:sparsesignal_vanilla_upper_gen}} We prove each part of the theorem in separate subsections below.\\ \textit{Proof of Theorem \ref{thm:sparsesignal_vanilla_upper_gen} \ref{thm:sparsesignal_vanilla_hc_gen}} Throughout $\mathcal{C}$ denotes the underlying community assignment and all results are uniform in this $\mathcal{C}$. By virtue of the centering and scaling of the individual $GHC(\C,\beta_1,\beta_2;t)$ under the null, we have, by a union bound and Chebyshev's inequality, \begin{equs} \ &\mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(HC(\C,\beta_1,\beta_2)\geq \sqrt{\log{n}}\right)\\ &\leq \sum_t \mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})} \left(GHC(\C,\beta_1,\beta_2;t) > \sqrt{\log {n}} \right)\leq \frac{\sqrt{10\log{n}}}{\log{n}}\rightarrow 0 \quad \text{as}\ n\rightarrow \infty. \end{equs} This controls the Type I error of this test. It remains to control the Type II error. We will establish, as usual, that the non-centrality parameter under the alternative dominates both the null and the alternative standard deviations of the statistic. We consider alternatives as follows. Let $\mathbb{P}_{\boldsymbol{\Theta},a,b}$ be such that $\theta_i=1+A$ for $i\in S$ and $\theta_i=1$ otherwise, where $A=\sqrt{\frac{C^*\log{n}}{\sigma_{n0}^2}}$ with $2\rho(\beta_1,\beta_2)\geq C^*>C_{\mathrm{HC}}(\beta_1,\beta_2,\alpha)$, $|S|=s=n^{1-\alpha}$, $\alpha \in (1/2,1)$. The case of higher signals can be handled by standard monotonicity arguments and is therefore omitted. Also, let $S_1=\mathcal{C}\cap S,\ S_1^c=\mathcal{C}\cap S^c,\ s_1=|S_1|$ and $S_2=\mathcal{C}^c\cap S,\ S_2^c=\mathcal{C}^c\cap S^c$, $s_2=|S_2|$. Finally, let $$\overline{\rho}(\beta_1,\beta_2):=1/\rho(\beta_1,\beta_2).$$ The following lemma studies the behavior of this statistic under this class of alternatives. \begin{lemma}\label{lemma:power_hcnew} Let $t=\sqrt{2r\log{n}}$ with $r=\min\left\{1,2C^*\overline{\rho}(\beta_1,\beta_2)\right\}$. Then \begin{enumerate} \item[(a)] $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(GHC(\C,\beta_1,\beta_2;t)\right)\gg \sqrt{\log{n}}.$ \label{lemma:power_hcnew_a} \item[(b)] $\left(\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(GHC(\C,\beta_1,\beta_2;t)\right)\right)^2\gg \mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(GHC(\C,\beta_1,\beta_2;t)\right).$ \label{lemma:power_hcnew_b} \end{enumerate} \end{lemma} The Type II error of the HC statistic may be controlled immediately using Lemma \ref{lemma:power_hcnew}. This is straightforward; however, we include a proof for the sake of completeness.
For any alternative considered above, we have, using Chebyshev's inequality and Lemma \ref{lemma:power_hcnew}, \begin{equs} \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}[HC(\C,\beta_1,\beta_2) > \sqrt{\log n}] &\geq \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}[GHC(\C,\beta_1,\beta_2;t) \geq \sqrt{\log n}] \\&\geq 1 - \frac{ \mathrm{Var}^{(\mathcal{C})}_{\boldsymbol{\Theta},a,b}\left(GHC(\C,\beta_1,\beta_2;t)\right)}{(\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(GHC(\C,\beta_1,\beta_2;t)\right) - \sqrt{\log n})^2} \to 1 \end{equs} as $n \to \infty$. This completes the proof, modulo that of Lemma \ref{lemma:power_hcnew}. \ \vrule height4pt width4pt depth0pt \begin{proof}[Proof of Lemma \ref{lemma:power_hcnew}] The proof requires a detailed understanding of the mean and variance of the $GHC(\C,\beta_1,\beta_2;t)$ statistics. Due to centering, $GHC(\C,\beta_1,\beta_2;t)$ has mean $0$ under the null hypothesis. Our next proposition estimates the variances of the $GHC(\C,\beta_1,\beta_2;t)$ statistics under the null and the class of alternatives introduced above. We also lower bound the expectation of the $GHC(\C,\beta_1,\beta_2;t)$ statistics under the alternative. \begin{prop} \label{lemma:hcnew_main} For $t = \sqrt{2 r \log n}$ with $r > \frac{C^*\overline{\rho}(\beta_1,\beta_2)}{2}$, we have, \begin{equs} \lim_{n\to \infty} \frac{\log \mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta}=\boldsymbol{1},a,b}\left(GHC(\C,\beta_1,\beta_2;t)\right) }{\log n} &= 1- r, \label{eq:null_varnew} \\ \lim_{n \to \infty} \frac{\log \mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(GHC(\C,\beta_1,\beta_2;t)\right)}{\log n} &\geq 1- \alpha -\frac{1}{2} \left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2 , \label{eq:alt_expnew}\\ \lim_{n \to \infty} \frac{ \log \mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(GHC(\C,\beta_1,\beta_2;t)\right) }{\log n}&= \max\left\{ 1-\alpha -\frac{1}{2} \left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2, 1- r \right\}. \label{eq:alt_varnew} \end{equs} \end{prop} We defer the proof of Proposition \ref{lemma:hcnew_main} to Section \ref{section:technical_lemmas}. The rest of the proof follows along the lines of the proof of Lemma 6.4 in \cite{mms2016} by noting that the constant $C^*/8(1-\theta)$ in Proposition 6.4 of \cite{mms2016} can be mapped to the constant $C^*\overline{\rho}(\beta_1,\beta_2)$ in Lemma \ref{lemma:power_hcnew} and Proposition \ref{lemma:hcnew_main}. \end{proof} \textit{Proof of Theorem \ref{thm:sparsesignal_vanilla_upper_gen} \ref{thm:sparsesignal_vanilla_max_upper_gen}} Throughout $\mathcal{C}$ denotes the underlying community assignment and all results are uniform in this $\mathcal{C}$. We set $\mu_{n0}(\mathcal{C},\beta_1,\beta_2)=\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}$ and recall the definition of $\sigma_{n0}(\mathcal{C},\beta_1,\beta_2)$ from Section \ref{section:tests}. First, we control the Type I error of $\phi(\beta_1,\beta_2,\delta)$ for any $\delta >0$. Indeed, we have, using Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail_pure} and a union bound, \begin{align} \mathbb{E}^{(\mathcal{C})}_{\mathbf{1},a,b}[T_{d_{\max}}(\mathcal{C},\beta_1,\beta_2,\delta)] \leq n\cdot n^{- (1 + \delta)^2 + o (1) } \to 0.
\nonumber \end{align} Using stochastic monotonicity of the test statistic in $A$, it suffices to analyze the Type II error for $A = \sqrt{\frac{C\log n}{\sigma_{n0}^2}}$ with \begin{align} C_{\mathrm{max}}(\beta_1,\beta_2,\alpha)<C \leq 2 \frac{(\beta_1^2 \tau_a (1 - \tau_a) + \beta_2^2 \tau_b (1 - \tau_b)) ( \tau_a (1 - \tau_a ) + \tau_b (1 - \tau_b) )}{(\beta_1 \tau_a + \beta_2 \tau_b )^2 }. \nonumber \end{align} Now, we note that \begin{align} \mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta}, a,b} \Big[ \max_{i \in [n]} D_i (\mathcal{C},\beta_1 , \beta_2) > \sqrt{2 (1 + \delta) \log n} \Big] \geq \mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{ i \in S(\mathbf{\Theta})} D_i (\mathcal{C},\beta_1 ,\beta_2) > \sqrt{2 (1 + \delta) \log n} \Big]. \nonumber \end{align} Further, for $i \in S(\mathbf{\Theta})$, we set $d_i'(1) = \sum_{j \in \mathcal{C}(i) \cap S(\mathbf{\Theta})^c} Y_{ij}$ and $d_i'(2) = \sum_{j \in \mathcal{C}(i)^c \cap S(\mathbf{\Theta})^c} Y_{ij}$. We observe that $d_i(1) \geq d_i'(1)$ and $d_i(2) \geq d_i'(2)$, and thus \begin{align} &\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{ i \in S(\mathbf{\Theta})} D_i (\mathcal{C},\beta_1 ,\beta_2) > \sqrt{2 (1 + \delta) \log n} \Big] \nonumber \\ & \geq \mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})} (\beta_1 d_i'(1) + \beta_2 d_i'(2) ) > \mu_{n0}(\mathcal{C},\beta_1,\beta_2) + \sigma_{n0}(\mathcal{C},\beta_1,\beta_2) \sqrt{2 (1 + \delta) \log n } \Big]. \label{eq:max_upper_int1} \end{align} Finally, we note that for $j \in \mathcal{C}(i)$, $\mathbb{E}^{(\mathcal{C})}_{\mathbf{1},a,b}[Y_{ij}] = \frac{a}{n}$, and for $j \in \mathcal{C}(i) \cap S(\mathbf{\Theta})^c$, $\mathbb{E}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}[Y_{ij}] \geq (1+ A) \frac{a}{n}$. Similarly, for $j \in \mathcal{C}(i)^c$, $\mathbb{E}^{(\mathcal{C})}_{\mathbf{1},a,b}[Y_{ij}] = \frac{b}{n}$, while for $j \in \mathcal{C}(i)^c \cap S(\mathbf{\Theta})^c$, $\mathbb{E}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}[Y_{ij}] \geq (1 + A) \frac{b}{n}$. Plugging these into \eqref{eq:max_upper_int1} and simplifying, we get \begin{align} &\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})} (\beta_1 d_i'(1) + \beta_2 d_i'(2) ) > \mu_{n0}(\mathcal{C},\beta_1,\beta_2) + \sigma_{n0}(\mathcal{C},\beta_1,\beta_2) \sqrt{2 (1 + \delta) \log n } \Big] \geq \nonumber \\ &\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})} \frac{\beta_1 d_i'(1) + \beta_2 d_i'(2) - \mathbb{E}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}[\beta_1 d_i'(1) + \beta_2 d_i'(2)] }{\sqrt{\mathop{\rm Var}\nolimits_{\mathbf{\Theta},a,b}[\beta_1 d_i'(1) + \beta_2 d_i'(2)] }} > C' \sqrt{\log n} \Big],\nonumber \end{align} where $C'$ is given as \begin{align} C'= \sqrt{2 (1 + \delta)} - \frac{\beta_1 \tau_a + \beta_2 \tau_b }{ \sqrt{( \beta_1^2 \tau_a(1 - \tau_a) + \beta_2^2 \tau_b (1 - \tau_b) )(\tau_a (1- \tau_a) + \tau_b(1 -\tau_b) )}} \sqrt{C}. \nonumber \end{align} We note that under the assumptions introduced above, $C'>0$. We note that $\{ \beta_1 d_i'(1) + \beta_2 d_i'(2): i \in S(\mathbf{\Theta})\}$ are independent and, for any fixed $i \in S(\mathbf{\Theta})$, $d_i'(1)$ stochastically dominates $X \sim {\rm{Bin}}\Big( \frac{n}{2}-s, \frac{a}{n} \Big)$, while $d_i'(2)$ stochastically dominates $Y \sim {\rm{Bin}}\Big(\frac{n}{2} -s, \frac{b}{n} \Big)$.
Thus we have, \begin{align} &\mathbb{P}^{(\mathcal{C})}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})} \frac{\beta_1 d_i'(1) + \beta_2 d_i'(2) - \mathbb{E}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}[\beta_1 d_i'(1) + \beta_2 d_i'(2)] }{\sqrt{\mathop{\rm Var}\nolimits_{\mathbf{\Theta},a,b}[\beta_1 d_i'(1) + \beta_2 d_i'(2)] }} \leq C' \sqrt{\log n} \Big] \nonumber\\ &\leq \Big( \mathbb{P}\Big[ \frac{\beta_1 X + \beta_2 Y - \mathbb{E}[\beta_1 X + \beta_2 Y ]}{\sqrt{\mathop{\rm Var}\nolimits(\beta_1 X + \beta_2 Y)}} \leq C' \sqrt{\log n} \Big] \Big)^s \nonumber\\ &= \Big(1- \mathbb{P}\Big[ \frac{\beta_1 X + \beta_2 Y - \mathbb{E}[\beta_1 X + \beta_2 Y ]}{\sqrt{\mathop{\rm Var}\nolimits(\beta_1 X + \beta_2 Y)}} > C' \sqrt{\log n} \Big] \Big)^s \nonumber\\ &\leq \exp\Big[ -s \mathbb{P}\Big[\frac{\beta_1 X + \beta_2 Y - \mathbb{E}[\beta_1 X + \beta_2 Y ]}{\sqrt{\mathop{\rm Var}\nolimits(\beta_1 X + \beta_2 Y)}} > C' \sqrt{\log n} \Big] \Big] \nonumber\\ &= \exp\Big[ - n^{1- \alpha - \frac{(C')^2}{2} + o(1) } \Big] \to 0\nonumber \end{align} as $n \to \infty$, by the choice of $C'$. This controls the Type II error and completes the proof.\ \vrule height4pt width4pt depth0pt \subsection{Proof of Theorem \ref{thm:sparsesignal_vanilla_upper}} The proofs of Theorem \ref{thm:sparsesignal_vanilla_upper}\ref{thm:sparsesignal_vanilla_hc} and \ref{thm:sparsesignal_vanilla_max_upper} follow from Theorem \ref{thm:sparsesignal_vanilla_upper_gen}\ref{thm:sparsesignal_vanilla_hc_gen} and \ref{thm:sparsesignal_vanilla_max_upper_gen}. Hence, here we only prove Theorem \ref{thm:sparsesignal_vanilla_upper} \ref{thm:sparsesignal_vanilla_max_lower}.\\ \textit{Proof of Theorem \ref{thm:sparsesignal_vanilla_upper} \ref{thm:sparsesignal_vanilla_max_lower}} In this theorem, since all computations are under the true underlying $\mathcal{C}$ and the test based on $d_{\max}(1,1)$ does not depend on it, we drop the notational dependence on $\mathcal{C}$ from $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}$, $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$, and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$. Consider any alternative $\mathbf{\Theta} \in \Xi(s,A)$ with $A$ as in \eqref{eq:alt1}, such that $\theta_i = 1+A$ for $i \in S(\mathbf{\Theta})$ and $\theta_i = 1$ otherwise. Recall that we have set $\mu_{n0} = \mathbb{E}_{\mathbf{1},a,b}[d_1]$ and $\sigma_{n0}^2 = \Var_{\mathbf{1},a,b}[d_1]$. If possible, suppose there exists a consistent sequence of tests based on the max degree with asymptotically zero risk against the alternative sequence under consideration. In this case, there exists a sequence of cut-offs $\{k_n\}$ such that \begin{align} \mathbb{P}_{\mathbf{1},a,b}[\max_i d_i < k_n ] \to 1,\,\,\,\, \mathbb{P}_{\mathbf{\Theta},a,b}[\max_i d_i > k_n ] \to 1, \nonumber \end{align} as $n \to \infty$. Without loss of generality, we set \begin{align} k_n = \mu_{n0} + \sigma_{n0}\sqrt{2 \log n} \Big( 1 - \frac{\log \log n + \log (4 \pi)}{4 \log n} + \frac{y_n}{2 \log n} \Big). \nonumber \end{align} We first observe that for any such sequence $\{k_n\}$, $\mathbb{P}_{\mathbf{1},a,b}[\max_i d_i < k_n] \to 1 $ as $n \to \infty$ implies that $y_n \to \infty$ as $n \to \infty$. To see this, suppose $y_n \leq M$ along some subsequence. Then along this subsequence, using Theorem \ref{thm:max_deg_null}, we have $\limsup_n \mathbb{P}_{\mathbf{1},a,b}[\max_i d_i < k_n ] \leq \exp[ -\textrm{e}^{-M} ] <1$, a contradiction. Thus $y_n \to \infty$ as $n \to \infty$.
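We also note that the reduction to the displayed form of $k_n$ is indeed without loss of generality; this is elementary algebra: given any sequence of cut-offs $\{k_n\}$, one may simply define \begin{align} y_n := 2 \log n \Big( \frac{k_n - \mu_{n0}}{\sigma_{n0} \sqrt{2 \log n}} - 1 + \frac{\log \log n + \log (4 \pi)}{4 \log n} \Big), \nonumber \end{align} so that $k_n$ takes the stated form with this choice of $y_n$.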
The rest of the proof establishes that the Type II error does not converge to $0$ as $n \to \infty$ for any such sequence of cutoffs $k_n$, and alternatives $\mathbf{\Theta}$ outlined above. To this end, note that \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \max_i d_i > k_n \Big] &\leq \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})} d_i > k_n \Big] + \mathbb{P}_{\mathbf{\Theta},a,b} \Big[\max_{i \in S(\mathbf{\Theta})^c } d_i > k_n \Big] \nonumber \\ &\leq n^{1- \alpha} \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ d_{i_0} > k_n \Big] + \mathbb{P}_{\mathbf{\Theta},a,b} \Big[\max_{i \in S(\mathbf{\Theta})^c } d_i > k_n \Big] \nonumber, \end{align} for some $i_0 \in S(\mathbf{\Theta})$. We will establish that each of these terms converges to zero as $n \to \infty$. To this end, first, we note that $y_n \to \infty$ implies that $y_n \geq 0$ eventually. Thus we have, \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b}[ d_{i_0} > k_n ] &\leq \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ d_{i_0} > \mu_{n0}+ \sigma_{n0} \sqrt{2\log n} \Big( 1 - \frac{\log \log n + \log (4\pi)}{4\log n} \Big) \Big] \nonumber \\ &= \mathbb{P}_{\mathbf{\Theta},a,b} \Big[d_{i_0} > \mu_{n0} + \sigma_{n0} \sqrt{2 \log n} ( 1 + o(1) ) \Big]. \nonumber \end{align} We note that under $\mathbb{P}_{\mathbf{\Theta},a,b}[\cdot]$, $d_{i_0} \sim U + V$, with $U,V$ independent, $U \sim \textrm{Bin}\Big( \frac{n}{2} , (1 + A) \frac{a}{n} \Big) $ and $V \sim \textrm{Bin}\Big(\frac{n}{2}, ( 1 + A) \frac{b}{n} \Big)$. Thus $\mathbb{E}_{\mathbf{\Theta},a,b} [d_{i_0}] = \frac{n}{2} (1+ A) \Big( \frac{a}{n} + \frac{b}{n} \Big)$ and $\Var_{\mathbf{\Theta},a,b}[d_{i_0}] = \frac{n}{2} (1+A) \Big( \frac{a}{n} (1 - (1+A) \frac{a}{n} ) + \frac{b}{n} ( 1- (1 +A) \frac{b}{n} ) \Big)$. Thus $\mathbb{E}_{\mathbf{\Theta},a,b} [d_{i_0} ] - \mu_{n0} = \frac{n}{2} A \Big( \frac{a}{n} + \frac{b}{n} \Big)$. Further, since $A \to 0$ as $n \to \infty$, we conclude that $\mathop{\rm Var}\nolimits_{\mathbf{\Theta},a,b}[d_{i_0}] / \mathop{\rm Var}\nolimits_{\mathbf{1},a,b}[d_{i_0}] \to 1$ as $n \to \infty$. Thus we have, \begin{align} & \mathbb{P}_{\mathbf{\Theta},a,b} \Big[d_{i_0} > \mu_{n0} + \sigma_{n0} \sqrt{2 \log n} ( 1 + o(1) ) \Big] \nonumber \\ & = \mathbb{P}_{\mathbf{\Theta},a,b}\Big[ \frac{d_{i_0} - \mathbb{E}_{\mathbf{\Theta},a,b}[d_{i_0}] }{\sqrt{\mathop{\rm Var}\nolimits_{\mathbf{\Theta},a,b}[d_{i_0}]} } > \sqrt{2 \log n} (1 + o (1)) - \sqrt{C \log n} \frac{\frac{n}{2} \Big( \frac{a}{n} + \frac{b}{n} \Big)}{\sigma_{n0}^2} \Big] \nonumber \\ &= \mathbb{P}_{\mathbf{\Theta},a,b}\Big[ \frac{d_{i_0} - \mathbb{E}_{\mathbf{\Theta},a,b}[d_{i_0}] }{\sqrt{\mathop{\rm Var}\nolimits_{\mathbf{\Theta},a,b}[d_{i_0}]} } > \sqrt{2 \log n} \Big( 1 - \frac{\tau_a + \tau_b }{\tau_a (1 - \tau_a) + \tau_b (1 - \tau_b) } \sqrt{\frac{C}{2}} \Big) (1 + o(1)) \Big] \nonumber \\ &= n^{ - \Big( 1 - \frac{\tau_a + \tau_b }{ \tau_a (1 - \tau_a) + \tau_b (1 - \tau_b) } \sqrt{\frac{C}{2}} \Big)^2 + o(1) }, \nonumber \end{align} using Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail}. As a result, we have, \begin{align} n^{1- \alpha} \mathbb{P}_{\mathbf{\Theta},a,b} [d_{i_0} > k_n ] = o(1) , \nonumber \end{align} since $ 1 - \alpha - \Big(1 - \frac{\tau_a + \tau_b }{ \tau_a (1 - \tau_a) + \tau_b (1 - \tau_b) } \sqrt{\frac{C}{2}} \Big)^2 <0 $ in this case. This controls the first term. The control of the second term is similar to the control of the Type I error.
However, we have to carefully control the contribution of the contamination edges incident to the non-null vertices. To this end, note that for $i \in S(\mathbf{\Theta})^c$, $d_i = Z_{1i} + Z_{2i}$, where $Z_{1i} = \sum_{j \neq i , j \in S(\mathbf{\Theta})^c} Y_{ij}$, $Z_{2i} = \sum_{j \neq i, j \in S(\mathbf{\Theta})} Y_{ij}$. Thus we have, \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})^c} d_i > k_n \Big] &\leq \mathbb{P}_{\mathbf{\Theta},a,b} \Big[\max_{i \in S(\mathbf{\Theta})^c} Z_{1i} + \max_{i \in S(\mathbf{\Theta})^c } Z_{2i} > k_n \Big] \nonumber \\ &\leq \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})^c } Z_{1i} > k_n - k_n' \Big] + \mathbb{P}_{\mathbf{\Theta},a,b} \Big[ \max_{i \in S(\mathbf{\Theta})^c} Z_{2i} > k_n' \Big], \nonumber \end{align} for some sequence $k_n'$ to be chosen appropriately. For each $i \in S(\mathbf{\Theta})^c$, we note that $Z_{2i}$ is stochastically dominated by a $\textrm{Bin}\Big(s, \frac{a}{n}(1 + A) \Big)$ random variable. We choose $k_n' = \frac{\sigma_{n0} \zeta_n'}{\sqrt{2 \log n}}$, for some sequence $\zeta_n' \to \infty$ to be chosen appropriately. We note that $\alpha \in (\frac{1}{2},1)$ implies that $k_n' \gg s \frac{a}{n} (1 + A)$ and thus, by Bernstein's inequality, for $i \in S(\mathbf{\Theta})^c$, \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b}[Z_{2i} > k_n' ] &\leq \exp\Big[ - \frac{\Big(k_n' - s\frac{a}{n}(1+A) \Big)^2}{2\Big[ s \frac{a}{n} (1+ A) \Big( 1 - \frac{a}{n}(1+A) \Big) + \frac{1}{3} \Big(k_n'- s \frac{a}{n} (1+A) \Big) \Big]} \Big] \leq \exp\Big[-c_0 k_n' \Big] \nonumber \end{align} for some constant $c_0 >0$. Further, $a,b \gg (\log n)^3$ implies that $k_n' \gg \log n$ and thus \begin{align} \mathbb{P}_{\mathbf{\Theta},a,b}\Big[ \max_{i \in S(\mathbf{\Theta})^c} Z_{2i} > k_n' \Big] \leq n \exp[ - c_0 k_n' ] = o(1). \nonumber \end{align} Finally, we have, for $i \in S(\mathbf{\Theta})^c$, $Z_{1i}$ is stochastically dominated by $M_1 + M_2$, where $M_1, M_2$ are independent random variables with $M_1 \sim \textrm{Bin}\Big( \frac{n}{2}, \frac{a}{n} \Big) $ and $M_2 \sim \textrm{Bin} \Big( \frac{n}{2} , \frac{b}{n} \Big)$. This implies \begin{align} & \mathbb{P}_{\mathbf{\Theta},a,b } \Big[ \max_{i \in S(\mathbf{\Theta})^c } Z_{1i} > k_n - k_n' \Big] \leq \mathbb{P}_{\mathbf{1},a,b} \Big[ \max_i d_i > k_n - k_n' \Big] \nonumber \\ &= \mathbb{P}_{\mathbf{1},a,b}\Big[ \max_i d_i > \mu_{n0} + \sigma_{n0} \sqrt{2 \log n} \Big( 1 - \frac{\log \log n + \log (4 \pi) }{4 \log n } + \frac{y_n - \zeta_n'}{2 \log n} \Big) \Big]. \nonumber \end{align} We note that for any sequence $y_n \to \infty$, we can choose a sequence $\zeta_n' \to \infty$ growing sufficiently slowly that $y_n - \zeta_n' \to \infty$. Under such a choice of $\zeta_n'$, $\mathbb{P}_{\mathbf{\Theta},a,b } \Big[ \max_{i \in S(\mathbf{\Theta})^c } Z_{1i} > k_n - k_n' \Big] = o(1) $ as $n \to \infty$. This establishes that no such test can control the Type I and Type II errors simultaneously, and thus completes the proof. \ \vrule height4pt width4pt depth0pt \subsection{Proof of Theorem \ref{thm:max_deg_null}} In this theorem, since all computations are under the true underlying $\mathcal{C}$ and the test based on $d_{\max}(1,1)$ does not depend on it, we drop the notational dependence on $\mathcal{C}$ from $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}$, $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}$, and $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}$.
For $y \in \mathbb{R}$, we define $x := x(n,y)$ as the solution of the equation $\frac{1}{\sqrt{2 \pi}} \frac{n}{x} \exp[ - x^2 /2] = \exp[-y]$. We will establish that \begin{align} \mathbb{P}_{\mathbf{1},a,b}\Big[ \frac{\max_i d_i - \mu_{n0} }{\sigma_{n0}} \leq x\Big] \to \exp[ -\textrm{e}^{-y} ] \label{eq:max_toprove} \end{align} as $n \to \infty$, where as usual we set $\mu_{n0} = \mathbb{E}_{\mathbf{1},a,b}[d_1]$ and $\sigma_{n0}^2 = \Var_{\mathbf{1},a,b}[d_1]$. Upon direct computation, we obtain \begin{align} x(n,y) = \sqrt{2 \log n} \Big( 1 - \frac{\log \log n + \log (4 \pi)}{4 \log n} + \frac{y}{2 \log n} \Big) + o(1) \nonumber \end{align} as $n \to \infty$, thus immediately implying the desired result. Thus it remains to establish \eqref{eq:max_toprove}. To this end, we define $Z= \sum_{i} \mathbf{1}(d_i > \mu_{n0} + x \sigma_{n0})$. We claim that as $n \to \infty$, $Z$ converges in distribution to a Poisson($\exp[-y]$) random variable. This immediately implies \begin{align} \mathbb{P}_{\mathbf{1},a,b}\Big[ \frac{\max_i d_i - \mu_{n0} }{\sigma_{n0}} \leq x\Big] = \mathbb{P}_{\mathbf{1},a,b}[Z=0] \to \exp[-\textrm{e}^{-y}] \nonumber \end{align} as $n \to \infty$. This yields \eqref{eq:max_toprove}. Finally, it remains to establish the Poisson approximation for $Z$ as $n \to \infty$. To this end, we use the following version of Stein's method for Poisson approximation \citep[Theorem 2.C and Corollary 2.C.4]{barbour1992poisson}. We define a sequence of Bernoulli random variables $\{X_i : i \in I\}$ to be \textit{positively related} if for every $i \in I$, we can construct $\{Y_j^{(i)}: j \neq i \}$, coupled with $\{X_i : i \in I\}$, such that $\{Y_j^{(i)} : j \neq i \}$ is distributed as $\{X_j : j \neq i \} | X_i =1$ and $ \forall j \neq i$, $ Y_{j}^{(i)} \geq X_j$. We set $W= \sum_i X_i$, with $X_i \sim \textrm{Ber}(p_i)$, and $\lambda = \sum_i p_i$. The following theorem \citep[Corollary 2.C.4]{barbour1992poisson} bounds the TV distance between $W$ and a Poisson random variable with mean $\lambda$. \begin{theorem*}\citep[Corollary 2.C.4]{barbour1992poisson}\\ $$d_{\textrm{TV}}(W, \textrm{Poi}(\lambda) ) \leq \min\Big\{ 1, \frac{1}{\lambda} \Big\} \Big( \Var(W) -\lambda + 2 \sum_i p_i^2\Big).$$ \end{theorem*} The desired Poisson approximation result follows immediately from the lemma below. \begin{lemma} \label{lem:suff_poisson} The Bernoulli variables $\{\mathbf{1}(d_i > \mu_{n0} + x \sigma_{n0} ): 1 \leq i \leq n \}$ are \textit{positively related}. We set $\lambda =\lambda_n := n \mathbb{P}_{\mathbf{1},a,b}[d_1 > \mu_{n0} + x \sigma_{n0}]$. Then, if $b \gg (\log n)^3$, we have $\lambda \to \exp[-y]$ and $\Var(Z) \to \exp[-y]$ as $n \to \infty$. \end{lemma} An application of the Poisson approximation theorem above concludes the proof modulo the proof of Lemma \ref{lem:suff_poisson}. \ \vrule height4pt width4pt depth0pt \begin{proof}[Proof of Lemma \ref{lem:suff_poisson}] First, we establish that $X_i = \mathbf{1}(d_i > \mu_{n0} + x \sigma_{n0})$ are \textit{positively related}. We note that the $X_i$ are increasing functions of the independent random variables $\mathbf{Y}$ and thus the $\{X_i: 1 \leq i \leq n \}$ are positively related \citep[Theorem 2.G]{barbour1992poisson}. Next, we check that $\lambda \to \exp[-y]$. We have, \begin{align} \lambda = n \mathbb{P}_{\mathbf{1},a,b} [ d_1 > \mu_{n0} + x \sigma_{n0}] = n (1 - \Phi(x))(1 + o(1)), \nonumber \end{align} where the last equality follows from Lemma \ref{lemma:binomial_tail_exp_scale}.
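The Mills-ratio step can be spelled out as follows (a routine calculation, with $\phi$ denoting the standard normal density): since $x = x(n,y) \to \infty$ as $n \to \infty$, \begin{align} n (1 - \Phi(x)) = \frac{n \phi(x)}{x} (1 + o(1)) = \frac{1}{\sqrt{2\pi}} \frac{n}{x} \exp[-x^2/2]\, (1+o(1)) = \exp[-y] (1 + o(1)), \nonumber \end{align} where the last equality uses the defining equation of $x(n,y)$.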
This verifies that $\lambda \to \exp[-y]$. Finally, we check the variance condition. \begin{align} \Var(Z) = n \mathbb{P}_{\mathbf{1},a,b}[d_1 > \mu_{n0} + x \sigma_{n0}] ( 1 - \mathbb{P}_{\mathbf{1},a,b}[d_1 > \mu_{n0} + x \sigma_{n0}]) + {n \choose 2} \mathop{\rm cov}\nolimits(X_1, X_2). \nonumber \end{align} By computations similar to those involved in the control of term $T_4$ of Lemma \ref{lemma:bounds_five_sparse_graphs} proved in Section \ref{section:technical_lemmas}, ${n \choose 2} \mathop{\rm cov}\nolimits(X_1, X_2)=n^{-(1+o(1))}$ for any fixed $y\in \mathbb{R}$. This completes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm:sparsesignal_optimal}} The claim of the theorem follows from Theorem \ref{thm:sparsesignal_vanilla_upper_gen} since $\mathrm{Risk}_n(\hat{\mathcal{C}},\Xi(s, A))\rightarrow 0$. To see this note that $\mathrm{Risk}_n(\hat{\mathcal{C}},\Xi(s, A))\rightarrow 0$ implies $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(d(\hat{\mathcal{C}},\mathcal{C})\geq 1)\rightarrow 0$ uniformly over all $\mathcal{C}\subset[n]$ with $|\mathcal{C}|=n/2$ and over $\boldsymbol{\Theta} \in \Xi(s, A)$. Since $d(\hat{\mathcal{C}},\mathcal{C})$ is an $\mathbb{N}$-valued random variable, we immediately have $\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(d(\hat{\mathcal{C}},\mathcal{C})=0)\rightarrow 1$ uniformly over all $\mathcal{C}\subset[n]$ with $|\mathcal{C}|=n/2$ and over $\boldsymbol{\Theta} \in \Xi(s, A)$. The proof therefore follows from Theorem \ref{thm:sparsesignal_vanilla_upper_gen} by working on the event $d(\hat{\mathcal{C}},\mathcal{C})=0$. \subsection{Proof of Theorem \ref{theorem:less_than_logn}} We proceed exactly along the lines of the proof of Theorem \ref{thm:dense}\ref{thm:dense_lower}. Indeed, we consider the same prior on the parameter space and, as earlier, denote the marginal likelihood by $L_{\pi}$. We recall that $\mathbb{E}_{\mathbf{1},a,b}[L_{\pi}] = 1$. It remains to study the second moment. Again, we follow the arguments in the proof of Theorem \ref{thm:dense}\ref{thm:dense_lower} and consider the upper bound \eqref{eq:useful_upper} \begin{align} \mathbb{E}_{\mathbf{1},a,b}[L_{\pi}^2] \leq \exp\Big[ \frac{9C}{4} s^2 A^2 \Big( \frac{a/n}{1- a/n} + \frac{b/n}{1- b/n} \Big) \Big] \exp\Big[ \frac{s^2}{n} \Big( {\rm{e}}^{C\frac{A^2 n}{2} ( \frac{a/n}{1- a/n} + \frac{b/n}{1- b/n} ) } -1 \Big) \Big], \nonumber \end{align} for some universal constant $C>0$. First, we note that for $\alpha > \frac{1}{2}$ and $a \ll \log n$, \begin{align} s^2 \Big( \frac{a/n}{1- a/n} + \frac{b/n}{1 - b/n} \Big) \leq 2 s^2 \frac{a/n}{1- a/n} \leq 2 \frac{\log n}{n^{2 \alpha -1}\Big( 1- \frac{a}{n} \Big)} \to 0 \nonumber \end{align} as $n \to \infty$. Finally, we note that for $a \ll \log n$, \begin{align} \frac{s^2}{n} \Big( {\rm{e}}^{C\frac{A^2 n}{2} ( \frac{a/n}{1- a/n} + \frac{b/n}{1- b/n} ) } -1 \Big) \leq \frac{s^2}{n} {\rm{e}}^{C' a} \nonumber \end{align} for some universal constant $C'>0$. We note that $\alpha > \frac{1}{2}$ and $a \ll \log n$ imply that \begin{align} \frac{s^2}{n} {\rm{e}}^{C' a} \leq n^{1 - 2 \alpha + c} \nonumber \end{align} for any constant $c>0$. This concludes the proof, upon choosing $c>0$ sufficiently small so that $ 2 \alpha > 1 +c $. \section{Proofs of Binomial Deviation Bounds} Throughout we let $\tau_{a'}=\lim a'/n$ and $\tau_{b'}=\lim b'/n$ and let $M=\sup_{n\geq 1}\max\{|\tau_{a'}-a'/n|,|\tau_{b'}-b'/n|,|C_n-C|\}$.
\subsection{Proof of Lemma \ref{lemma:binomial_master}} We prove each part of the lemma separately below.\\ \textit{Proof of Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_equal_pure}:} Let $$h^*=\frac{h(\beta_1\sigma_{n1})^2}{\sigma_n(\beta_1,\beta_2)^2},$$ and \begin{equs} \mathcal{A}:=\{x:\mu_{n1}+x/\beta_1\in\mathbb{N}\cap [0,n/2]\}. \end{equs} Then we have for any $C^*>0$ \begin{equs} \ & \sup_{|t|\leq \xi_n}\mathbb{P}\left(d(\beta_1,\beta_2)=h+t\right)\\ &=\sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A}}\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2)\\ &=\sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)^c }\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2)\\&+\sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)}\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2), \end{equs} where $I(C^*,h^*):=[h^*-C^*\sigma_{n1}\sqrt{\log{n}},h^*+C^*\sigma_{n1}\sqrt{\log{n}}]$ and $I(C^*,h^*)^c$ denotes its complement. Now \begin{equs} \ & \sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)^c }\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2)\\ &\leq \sup_{|t|\leq \xi_n}\mathbb{P}\left(Y>\mu_{n2}+(h-h^*+C^*\sigma_{n1}\sqrt{\log{n}}+t)/\beta_2\right)\\ &+\sup_{|t|\leq \xi_n}\mathbb{P}\left(X>\mu_{n1}+(h^*+C^*\sigma_{n1}\sqrt{\log{n}})/\beta_1\right). \end{equs} Now by Lemma 6.2 of \cite{mms2016} Part (a, ii) \begin{equs} \sup_{|t|\leq \xi_n}\mathbb{P}\left(X>\mu_{n1}+(h^*+C^*\sigma_{n1}\sqrt{\log{n}})/\beta_1\right)&\leq n^{-\frac{\kappa_1^2(C^*)}{2}+o(1)}, \end{equs} where $\kappa_1(C^*)=\frac{C^*+c\beta_1^2\sqrt{\frac{\tau_{a'}(1-\tau_{a'})}{\beta_1^2\tau_{a'}(1-\tau_{a'})+\beta_2^2\tau_{b'}(1-\tau_{b'})}}}{\beta_1}$ since $c<\liminf{\frac{h}{\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}}}$. Again by Lemma 6.2 of \cite{mms2016} Part (a, ii) we have for $|t|\leq \xi_n \ll \log{n}$ \begin{equs} \sup_{|t|\leq \xi_n}\mathbb{P}\left(Y>\mu_{n2}+(h-h^*+C^*\sigma_{n1}\sqrt{\log{n}}+t)/\beta_2\right)&\leq n^{-\frac{\kappa_2^2(C^*)}{2}+o(1)}, \end{equs} where $\kappa_2(C^*)=\frac{C^*\sqrt{\frac{\tau_{a'}(1-\tau_{a'})}{\tau_{b'}(1-\tau_{b'})}}+c\beta_2^2\sqrt{\frac{\tau_{b'}(1-\tau_{b'})}{\beta_1^2\tau_{a'}(1-\tau_{a'})+\beta_2^2\tau_{b'}(1-\tau_{b'})}}}{\beta_2}$.\\ Now by Theorem 1.2 of \cite{bollobas}, whenever $|t|\leq \xi_n \ll \log{n} $, one has for any fixed $\varepsilon\in (0,1)$ and $n$ large enough (depending on $\varepsilon,\beta_1,\beta_2,C^*,c,c',M$) \begin{equs} \ & \sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)}\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2)\\ &\leq \sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)}\frac{1}{\sqrt{2\pi \sigma_{n1}^2}}\exp\left(-\frac{h_1^2}{2(\beta_1\sigma_{n1})^2}(1-\varepsilon)\right)\times \frac{1}{\sqrt{2\pi \sigma_{n2}^2}}\exp\left(-\frac{(h-h_1)^2}{2(\beta_2\sigma_{n2})^2}(1-\varepsilon)\right).
\end{equs} Since the function $f(h_1)=\frac{h_1^2}{2(\beta_1\sigma_{n1})^2}+\frac{(h-h_1)^2}{2(\beta_2\sigma_{n2})^2}$ is minimized at $h_1=h^*$, we have \begin{equs} \ & \sup_{|t|\leq \xi_n}\sum\limits_{h_1 \in \mathcal{A} \cap I(C^*,h^*)}\mathbb{P}(X=\mu_{n1}+h_1/\beta_1)\mathbb{P}(Y=\mu_{n2}+(h-h_1+t)/\beta_2)\\ &\leq |\mathcal{A} \cap I(C^*,h^*)|\frac{1}{2\pi \sigma_{n1}\sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right) \label{eqn:lclt_intermsof_h} \end{equs} Therefore for any given sequence $\{\xi_n\}$ such that $|\xi_n|\ll \log{n}$ \begin{equs} \ & \sup\limits_{|t|\leq \xi_n }\mathbb{P}\left(d(\beta_1,\beta_2)=h+t\right)\\ &\leq |\mathcal{A} \cap I(C^*,h^*)|\frac{1}{2\pi \sigma_{n1}\sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right)+ n^{-\frac{\kappa_1^2(C^*)}{2}+o(1)}+n^{-\frac{\kappa_2^2(C^*)}{2}+o(1)}. \label{eqn:dbeta_control} \end{equs} Now note that $|\mathcal{A} \cap I(C^*,h^*)|\leq \mathrm{const}\cdot \sigma_{n1}\sqrt{\log{n}}$ for a constant depending on $C^*,c,c',\beta_1,\beta_2,M$ only, and that $\kappa_1^2(C^*)$ and $\kappa_2^2(C^*)$ are increasing functions of $C^*$. The proof is therefore completed by choosing $C^*$ to be a large enough constant (depending on $c,c',\beta_1,\beta_2,M$).\\ \textit{Proof of Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_equal_contam}:} For any sequence $\{\delta_n\}$ let $t_n(\delta_n)=\delta_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$. We make use of the following lemma, whose proof is simple and therefore omitted. \begin{lemma}\label{lemma:delta_choice} There exists a positive sequence $\delta_n^*\rightarrow 0$ such that the following hold. \begin{enumerate} \item[(i)]$ t_n(\delta_n^*)\wedge {\delta_n^*}\left(\beta_1\wedge\beta_2\right)\times \sqrt{a' \wedge b'}\times \sqrt{\log{n}}\gg \log{n}$. \item [(ii)]$ \frac{\sqrt{\log{n}}}{\delta_n^*}\frac{(s_1\vee s_2)\times (a''\vee b'')}{n\sqrt{a'\wedge b'}}\rightarrow 0$. \end{enumerate} \end{lemma} Fix a sequence $\delta_n^*$ satisfying $(i)$ and $(ii)$ of Lemma \ref{lemma:delta_choice}. Then \begin{equs} \ & \sup\limits_{|t|\leq \xi_n}\mathbb{P}\left(d'(\beta_1,\beta_2)=h+t\right)\\ & \leq \sup\limits_{|t|\leq \xi_n}\left\{\begin{array}{c}\mathbb{P}\left(\beta_1 X'+\beta_2Y'>t_n(\delta_n^*)\right)\\+\sup\limits_{r\in [0,t_n(\delta_n^*)]}\mathbb{P}\left(d(\beta_1,\beta_2)=h+t-r\right)\end{array}\right\}\\ &=\mathbb{P}\left(\beta_1 X'+\beta_2Y'>t_n(\delta_n^*)\right)+\sup\limits_{|t|\leq \xi_n,\atop r\in [0,t_n(\delta_n^*)]}\mathbb{P}\left(d(\beta_1,\beta_2)=h+t-r\right). \end{equs} For the first term we have \begin{equs} \ & \mathbb{P}\left(\beta_1 X'+\beta_2Y'>t_n(\delta_n^*)\right)\\ &=\mathbb{P}\left(\beta_1 (X'-s_1a''/n)+\beta_2(Y'-s_2b''/n)>t_n(\delta_n^*)-(\beta_1s_1a''/n+\beta_2s_2b''/n)\right). \end{equs} Now by Lemma \ref{lemma:delta_choice} \begin{equs} \ & t_n(\delta_n^*)-(\beta_1s_1a''/n+\beta_2s_2b''/n)\\ &\geq \frac{\delta_n^*}{2}\left(\beta_1\wedge\beta_2\right)\times \sqrt{a' \wedge b'}\times \sqrt{\log{n}}-\frac{2}{n}(\beta_1 \vee\beta_2)\times (s_1\vee s_2)\times (a''\vee b'')\\ &=\frac{\delta_n^*}{2}\left(\beta_1\wedge\beta_2\right)\times \sqrt{a' \wedge b'}\times \sqrt{\log{n}}\left(1-2\frac{\sqrt{\log{n}}}{\delta_n^*}\frac{(s_1\vee s_2)\times (a''\vee b'')}{n\sqrt{a'\wedge b'}}\right)\\ &\gg \log{n}.
\end{equs} Therefore, by Bernstein's inequality, for any $\theta>0$ one has for $n$ large enough (depending on $\theta,\beta_1,\beta_2$) \begin{equs} \mathbb{P}\left(\beta_1 X'+\beta_2Y'>t_n(\delta_n^*)\right)&\leq n^{-\theta}. \end{equs} Finally by Lemma \ref{lemma:delta_choice} and \eqref{eqn:dbeta_control} \begin{equs} \ &\sup\limits_{|t|\leq \xi_n,\atop r\in [0,t_n(\delta_n^*)]}\mathbb{P}\left(d(\beta_1,\beta_2)=h+t-r\right)\\ &\leq |\mathcal{A} \cap I(C^*,h^*)|\frac{1}{2\pi \sigma_{n1}\sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right)+ n^{-\frac{\kappa_1^2(C^*)}{2}+o(1)}+n^{-\frac{\kappa_2^2(C^*)}{2}+o(1)}, \end{equs} for any $C^*>0$ and $\varepsilon \in (0,1)$, whenever $|t|\leq \xi_n\ll \log{n}$. Therefore for any given sequence $\{\xi_n\}$ such that $|\xi_n|\ll \log{n}$, any $C^*,\theta>0$, and $\varepsilon \in (0,1)$ we have for $n$ large enough (depending on $\varepsilon,c,c',\beta_1,\beta_2,C^*,\theta,M$) \begin{equs} \ & \sup\limits_{|t|\leq \xi_n }\mathbb{P}\left(d'(\beta_1,\beta_2)=h+t\right)\\ &\leq n^{-\theta}+|\mathcal{A} \cap I(C^*,h^*)|\frac{1}{2\pi \sigma_{n1}\sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)\right)+ n^{-\frac{\kappa_1^2(C^*)}{2}+o(1)}+n^{-\frac{\kappa_2^2(C^*)}{2}+o(1)}. \label{eqn:dbetaprime_control} \end{equs} Now note that $|\mathcal{A} \cap I(C^*,h^*)|\leq \mathrm{const}\cdot \sigma_{n1}\sqrt{\log{n}}$ for a constant depending on $C^*,c,c',\beta_1,\beta_2,M$ only, and that $\kappa_1^2(C^*)$ and $\kappa_2^2(C^*)$ are increasing functions of $C^*$. The proof is therefore completed by choosing $C^*$ and $\theta$ to be large enough constants (depending on $c,c',\beta_1,\beta_2,M$).\\ \textit{Proof of Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail_pure}:} The proof proceeds by producing upper and lower bounds on the desired moderate deviation probability. \subsection*{\textbf{Upper Bound}} For $h=C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$, any $\varepsilon>0$, and $\Delta_n>0$ one has \begin{equs} \mathbb{P}(d(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})&= \sum_{k=0}^{M_n-1}\mathbb{P}(d(\beta_1,\beta_2)\in (h+k\Delta_n,h+(k+1)\Delta_n)), \end{equs} where $M_n=\left(\frac{n(\beta_1+\beta_2)}{2}-\mu_n(\beta_1,\beta_2)-h\right)/\Delta_n$. Fix $B>\limsup C_n$ to be chosen later and let $m_n=B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}/\Delta_n$. Then \begin{equs} \ & \sum_{k=0}^{m_n-1}\mathbb{P}(d(\beta_1,\beta_2)\in (h+k\Delta_n,h+(k+1)\Delta_n))\\ &\leq \sum_{k=0}^{m_n-1}|\mathscr{H}\cap (h+k\Delta_n,h+(k+1)\Delta_n)|\sup_{t \in [0,\Delta_n]}\mathbb{P}(d(\beta_1,\beta_2)=h+k\Delta_n+t) \end{equs} where $\mathscr{H}=\{x:\mu_n(\beta_1,\beta_2)+x\in \beta_1\mathbb{N}+\beta_2\mathbb{N}\}$. Now it is easy to see that $|\mathscr{H}\cap (h+k\Delta_n,h+(k+1)\Delta_n)|\leq \beta_1\beta_2\Delta_n^2$. Also by the choice of $m_n$, for any $\varepsilon>0$ \begin{equs} \sup_{t \in [0,\Delta_n]}\mathbb{P}(d(\beta_1,\beta_2)=h+k\Delta_n+t)&\leq \frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{(h+k\Delta_n)^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)^{1/2}\right).
\end{equs} Therefore as long as $\Delta_n$ is bounded we have by arguments similar to the proof of part (a, i) \begin{equs} \ & \sum_{k=0}^{m_n-1}\mathbb{P}(d(\beta_1,\beta_2)\in (h+k\Delta_n,h+(k+1)\Delta_n))\\ &\leq \sum_{k=0}^{m_n-1}|\mathscr{H}\cap (h+k\Delta_n,h+(k+1)\Delta_n)|\sup_{t \in [0,\Delta_n]}\mathbb{P}(d(\beta_1,\beta_2)=h+k\Delta_n+t)\\ &\leq \beta_1\beta_2\Delta_n^2\sum_{k=0}^{m_n-1}\frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{(h+k\Delta_n)^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)^{1/2}\right)\\ &\leq \beta_1\beta_2\Delta_n^2\int_{0}^{m_n}\frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{(h+x\Delta_n)^2}{2\sigma_n(\beta_1,\beta_2)^2}(1-\varepsilon)^{1/2}\right)dx\\ &=\beta_1\beta_2\Delta_n^2\left(\bar{\Phi}\left(\frac{h(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)-\bar{\Phi}\left(\frac{(h+m_n\Delta_n)(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)\right)\\ &=\beta_1\beta_2\Delta_n^2\left(\bar{\Phi}\left(\frac{h(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)-\bar{\Phi}\left(\frac{(h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)\right). \end{equs} Therefore if $\Delta_n$ is bounded \begin{equs} \ & \mathbb{P}(d(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})\\ &= \sum_{k=0}^{m_n-1}\mathbb{P}(d(\beta_1,\beta_2)\in (h+k\Delta_n,h+(k+1)\Delta_n))\\&+\sum_{k=m_n}^{M_n}\mathbb{P}(d(\beta_1,\beta_2)\in (h+k\Delta_n,h+(k+1)\Delta_n))\\ &\leq \beta_1\beta_2\Delta_n^2\left(\bar{\Phi}\left(\frac{h(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)-\bar{\Phi}\left(\frac{(h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})(1-\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)\right)\\&+\mathbb{P}(d(\beta_1,\beta_2)>h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}). \end{equs} It remains to control $\mathbb{P}(d(\beta_1,\beta_2)>h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})$ which we will do using a naive Bernstein bound. In particular we have by Bernstein's inequality \begin{equs} \ & \mathbb{P}(d(\beta_1,\beta_2)>h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})\\&\leq \exp\left(-\frac{1}{2}\frac{(h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})^2}{\sigma_n(\beta_1,\beta_2)^2+\frac{1}{3}(\beta_1\vee\beta_2)(h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})}\right). \end{equs} Now note that $\sigma_n(\beta_1,\beta_2)^2\gg h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$. Therefore for sufficiently large $n$ \begin{equs} \sigma_n(\beta_1,\beta_2)^2+\frac{1}{3}(\beta_1\vee\beta_2)(h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})\leq 2\sigma_n(\beta_1,\beta_2)^2. \end{equs} As a consequence for sufficiently large $n$ \begin{equs} \mathbb{P}(d(\beta_1,\beta_2)>h+B\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})&\leq \exp\left(-\frac{1}{4}(C_n+B)^2\log{n}\right). \end{equs} The desired control of the upper bound is thereby complete by choosing $B$ large enough depending on $\varepsilon>0$. \subsection*{\textbf{Lower Bound}} We first claim that for any $C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\leq h\ll b'$ \begin{equs} \mathbb{P}(d(\beta_1,\beta_2)\in (h,h+3\beta_2))\geq \frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1+\varepsilon)^{1/2}\right).\label{eqn:claim_binomial_tail_pure} \end{equs} Deferring the proof of \eqref{eqn:claim_binomial_tail_pure}, we first finish the proof of the lower bound.
In view of the claim, for $t=C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$ and any $M_n\ll b'$ one has for any $\varepsilon>0$ \begin{equs} \ & \mathbb{P}(d(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}})\\&\geq \sum_{k=0}^{M_n}\mathbb{P}(d(\beta_1,\beta_2)\in (t+3k\beta_2,t+3(k+1)\beta_2))\\ &\geq \sum_{k=0}^{M_n}\frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{(t+3k\beta_2)^2}{2\sigma_n(\beta_1,\beta_2)^2}(1+\varepsilon)^{1/2}\right)\\ &\geq \int_{0}^{M_n}\frac{1}{\sqrt{2\pi}\sigma_n(\beta_1,\beta_2)}\exp\left(-\frac{(t+3x\beta_2)^2}{2\sigma_n(\beta_1,\beta_2)^2}(1+\varepsilon)^{1/2}\right)dx\\ &=\bar{\Phi}\left(\frac{t(1+\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right)-\bar{\Phi}\left(\frac{(t+3M_n\beta_2)(1+\varepsilon)^{1/2}}{\sigma_n(\beta_1,\beta_2)}\right). \end{equs} Using Mills' ratio the proof of the lower bound is therefore complete by choosing $C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\ll M_n\ll b'$. We now complete the proof of the claim in \eqref{eqn:claim_binomial_tail_pure}. The main idea of the proof is simple and relies on finding $O(\sigma_n(\beta_1,\beta_2))$ distinct pairs $(h_1,h_2)$ such that $\beta_1h_1+\beta_2h_2-\mu_n(\beta_1,\beta_2)\in (h,h+3\beta_2)$ and $\mathbb{P}(X=h_1)\mathbb{P}(Y=h_2)\geq \frac{1}{2\pi \sigma_{n1}\sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1+\varepsilon)\right)$. The proof can thereby be completed by adding over these contributing $O(\sigma_n(\beta_1,\beta_2))$ distinct pairs $(h_1,h_2)$. For a fixed $\bar{h}> C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$ (to be chosen later), consider any $$C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\leq h\leq \bar{h} ,\quad h^*=\frac{h(\beta_1\sigma_{n1})^2}{\sigma_n(\beta_1,\beta_2)^2},$$ and let for any $h_1>0$ $$h_1^*(h_1)=\left\lceil \mu_{n1}+\frac{h^*+h_1}{\beta_1}\right\rceil,$$ and \begin{equs} t^*(h_1)=\beta_2\left\{\left\lceil \mu_{n2}+\frac{h+\beta_1\left(\mu_{n1}-h_1^*(h_1)\right)}{\beta_2}\right\rceil-\left(\mu_{n2}+\frac{h+\beta_1\left(\mu_{n1}-h_1^*(h_1)\right)}{\beta_2}\right)\right\}. \label{eqn:tstar_alt_difficult} \end{equs} We will need the fact that $\left\lceil \mu_{n2}+\frac{h+\beta_1\left(\mu_{n1}-h_1^*(h_1)\right)}{\beta_2}\right\rceil\geq 0$, which is guaranteed by the following elementary lemma, whose proof is omitted. \begin{lemma} For $n$ sufficiently large (depending on $\beta_1,\beta_2$) $\mu_{n2}+\frac{h+\beta_1\left(\mu_{n1}-h_1^*(h_1)\right)}{\beta_2}\geq 0$. \end{lemma} We can now proceed as follows. Note that with $\delta_n(h_1)=\left\lceil \mu_{n1}+\frac{h^*+h_1}{\beta_1}\right\rceil-\left(\mu_{n1}+\frac{h^*+h_1}{\beta_1}\right)$ and $\tilde{h}(h_1)=\frac{h^*+h_1}{\beta_1}\left(1+\frac{\beta_1\delta_n(h_1)}{h^*+h_1}\right)$ one has by Theorem 1.5 of \cite{bollobas} \begin{equs} \ & \mathbb{P}\left(X=h_1^*(h_1)\right)\\ &=\mathbb{P}\left(X= \mu_{n1}+\frac{h^*+h_1}{\beta_1}+\delta_n(h_1)\right)\\ &=\mathbb{P}\left(X=\mu_{n1}+\tilde{h}(h_1)\right)\\ &\geq \frac{1}{\sqrt{2\pi p_1q_1n_1}}\exp\left(-\frac{\tilde{h}(h_1)^2}{2p_1q_1n_1}\left(\begin{array}{c}1+\frac{\tilde{h}(h_1)p_1}{q_1 n_1}+\frac{2q_1\tilde{h}(h_1)^2}{3p_1^2n_1^2}+\frac{q_1}{\tilde{h}(h_1)}\\+\left(\frac{1}{h_1^*(h_1)}+\frac{1}{n_1-h_1^*(h_1)}\right)\frac{n_1p_1q_1}{6\tilde{h}(h_1)^2}\end{array}\right)\right), \end{equs} where $n_1=n/2$, $p_1=a'/n$ and $q_1=1-a'/n$.
Now it is easy to see that \begin{equs} \frac{\tilde{h}(h_1)p_1}{q_1 n_1}=O\left(\frac{\tilde{h}(h_1)a'}{n^2}\right),\ \frac{2q_1\tilde{h}(h_1)^2}{3p_1^2n_1^2}=O\left(\frac{\tilde{h}(h_1)^2}{(a')^2}\right),\ \frac{q_1}{\tilde{h}(h_1)}=O\left(\frac{1}{\tilde{h}(h_1)}\right),\\ \left(\frac{1}{h_1^*(h_1)}+\frac{1}{n_1-h_1^*(h_1)}\right)\frac{n_1p_1q_1}{6\tilde{h}(h_1)^2}=O\left(\frac{a'}{h_1^*(h_1)\tilde{h}(h_1)^2}\right), \end{equs} where the $O$-notations involve universal constants free from $\beta_1,\beta_2,C_n$. If $\bar{h}$ and $h_1$ are such that \begin{equs} \bar{h}\ll a', \quad h_1\leq \beta_1\sqrt{2\pi}\frac{\sigma_{n1}\sigma_{n2}}{\sigma_n(\beta_1,\beta_2)} \label{eqn:condition_hbar_and_h_1}, \end{equs} then since $b' \gg \log{n}$, we have for any $\varepsilon>0$, sufficiently large $n$ (depending on $M$ and $\varepsilon>0$) \begin{equs} \mathbb{P}\left(X=h_1^*(h_1)\right)\geq \frac{1}{\sqrt{2\pi \sigma_{n1}^2}}\exp\left(-\frac{(h^*)^2}{2(\beta_1 \sigma_{n1})^2}\left(1+\varepsilon\right)^{1/2}\right).\label{eqn:lowerbound_x_difficult} \end{equs} Similarly for $n_2=n/2$, $p_2=b'/n$ and $q_2=1-b'/n$ and any $m \in \mathbb{N}$ one has \begin{equs} \ &\mathbb{P}\left(Y=\mu_{n2}+\frac{h+t^*(h_1)+\beta_2m+\beta_1\left(\mu_{n1}-h_1^*(h_1)\right)}{\beta_2}\right)\\ &=\mathbb{P}\left(Y=\left\lceil \mu_{n2}+\frac{h-(h^*+h_1)-\beta_1\delta_n(h_1)}{\beta_2}\right\rceil+m\right)\\ &=\mathbb{P}\left(Y=\mu_{n2}+{\tilde{h}}(m)\right), \end{equs} where $\tilde{h}(m)=\frac{h-(h^*+h_1)}{\beta_2}\left(1-\frac{\frac{\beta_1}{\beta_2}\delta_n(h_1)-\beta_2m-\beta_2\delta_n'(h_1)}{h-(h^*+h_1)}\right)$ with $\delta_n'(h_1)=\left\lceil \mu_{n2}+\frac{h-(h^*+h_1)-\beta_1\delta_n(h_1)}{\beta_2}\right\rceil-\left(\mu_{n2}+\frac{h-(h^*+h_1)-\beta_1\delta_n(h_1)}{\beta_2}\right)$. Therefore, once again by Theorem 1.5 of \cite{bollobas} \begin{equs} \ & \mathbb{P}\left(Y=\mu_{n2}+{\tilde{h}}(m)\right)\\ &\geq \frac{1}{\sqrt{2\pi p_2q_2n_2}}\exp\left(-\frac{\tilde{h}(m)^2}{2p_2q_2n_2}\left(\begin{array}{c}1+\frac{\tilde{h}(m)p_2}{q_2 n_2}+\frac{2q_2\tilde{h}(m)^2}{3p_2^2n_2^2}+\frac{q_2}{\tilde{h}(m)}\\+\left(\frac{1}{\mu_{n2}+\tilde{h}(m)}+\frac{1}{n_1-\mu_{n2}-\tilde{h}(m)}\right)\frac{n_2p_2q_2}{6\tilde{h}(m)^2}\end{array}\right)\right) \end{equs} Now it is easy to see that \begin{equs} \frac{\tilde{h}(m)p_2}{q_2 n_2}=O\left(\frac{\tilde{h}(m)b'}{n^2}\right),\ \frac{2q_2\tilde{h}(m)^2}{3p_2^2n_2^2}=O\left(\frac{\tilde{h}(m)^2}{(b')^2}\right),\ \frac{q_2}{\tilde{h}(m)}=O\left(\frac{1}{\tilde{h}(m)}\right),\\ \left(\frac{1}{\mu_{n2}+\tilde{h}(m)}+\frac{1}{n_2-\mu_{n2}-\tilde{h}(m)}\right)\frac{n_2p_2q_2}{6\tilde{h}(m)^2}=O\left(\frac{b'}{(\mu_{n2}+\tilde{h}(m))\tilde{h}(m)^2}\right), \end{equs} where the $O$-notations involve universal constants free from $\beta_1,\beta_2,C_n$.
Returning to the argument: if $\bar{h}$, $h_1$ and $m$ are such that \begin{equs} \bar{h}\ll a', \quad h_1\leq \beta_1\sqrt{2\pi}\frac{\sigma_{n1}\sigma_{n2}}{\sigma_n(\beta_1,\beta_2)}, \quad m \leq \sigma_{n2},\label{eqn:condition_hbar_and_h_2} \end{equs} then, since $b' \gg \log{n}$, we have for any $\varepsilon>0$ and sufficiently large $n$ (depending on $M$ and $\varepsilon>0$) \begin{equs} \mathbb{P}\left(Y=\mu_{n2}+{\tilde{h}}(m)\right)\geq \frac{1}{\sqrt{2\pi \sigma_{n2}^2}}\exp\left(-\frac{(h-h^*)^2}{2(\beta_2\sigma_{n2})^2}\left(1+\varepsilon\right)^{1/2}\right).\label{eqn:lowerbound_y_difficult} \end{equs} Combining \eqref{eqn:lowerbound_x_difficult} and \eqref{eqn:lowerbound_y_difficult}, we have that under the common conditions \eqref{eqn:condition_hbar_and_h_1}, \eqref{eqn:condition_hbar_and_h_2} \begin{equs} \mathbb{P}\left(X=h_1^*(h_1)\right)\mathbb{P}\left(Y=\mu_{n2}+{\tilde{h}}(m)\right)&\geq \frac{1}{2\pi \sigma_{n1} \sigma_{n2}}\exp\left(-\frac{h^2}{2\sigma_n(\beta_1,\beta_2)^2}(1+\varepsilon)^{1/2}\right). \end{equs} Now for each $h_1 \in \left[0, \beta_1\sqrt{2\pi}\frac{\sigma_{n1}\sigma_{n2}}{\sigma_n(\beta_1,\beta_2)}\right]\cap \beta_1\mathbb{N}$ the values $h_1^*(h_1)$ are distinct and \begin{equs} \beta_1 h_1^*(h_1)+\beta_2\left(\mu_{n2}+{\tilde{h}}(m)\right)-\mu_n(\beta_1,\beta_2)\in (h,h+3m\beta_2).\quad \end{equs} Therefore we can choose $m=1$ to complete the proof of claim \eqref{eqn:claim_binomial_tail_pure}.\\ \textit{Proof of Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail_contam}:} Recall the proof of Part (a, ii) and fix a sequence $\delta_n^*$ satisfying $(i)$ and $(ii)$ of Lemma \ref{lemma:delta_choice}. Then \begin{equs} \ & \mathbb{P}\left(d'(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\right)\\ &\leq \mathbb{P}(\beta_1X'+\beta_2Y'>t_n(\delta_n^*))+\mathbb{P}(d(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}-t_n(\delta_n^*)) \end{equs} where $t_n(\delta_n^*)=\delta_n^*\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$. Now by our choice of $\delta_n^*$ we have $C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}-t_n(\delta_n^*)=C_n(1+o(1))\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}$. Moreover, similar to the proof of Part (a, ii), by Bernstein's inequality, for $\theta>0$ one has for $n$ large enough (depending on $\theta,\beta_1,\beta_2,\tau_a,\tau_b$) \begin{equs} \mathbb{P}\left(\beta_1 X'+\beta_2Y'>t_n(\delta_n^*)\right)&\leq n^{-\theta}. \end{equs} Therefore, by Part (b, ii) and choosing $\theta>2C$, we have \begin{equs} \mathbb{P}\left(d'(\beta_1,\beta_2)>C_n\sigma_n(\beta_1,\beta_2)\sqrt{\log{n}}\right)&\leq n^{-\frac{C^2}{2}+o(1)}. \end{equs} The lower bound is trivial from Part (b, ii) since $d'(\beta_1,\beta_2)\geq d(\beta_1,\beta_2)$. \ \vrule height4pt width4pt depth0pt \subsection{Proof of Lemma \ref{lemma:binomial_tail_exp_scale}} Recall the definitions $\mu_{n1} = \mathbb{E}[X]$, $\mu_{n2} = \mathbb{E}[Y]$, $\sigma_{n1}^2 = \Var(X)$, $\sigma_{n2}^2 = \Var(Y)$, $\mu_n(\beta_1,\beta_2)=\mathbb{E}(\beta_1X+\beta_2Y)$ and $\sigma_n^2(\beta_1,\beta_2)=\mathop{\rm Var}\nolimits(\beta_1X+\beta_2Y)$. For brevity, we let $\mu_n=\mu_n(1,1)$ and $\sigma_n=\sigma_n(1,1)$. Let $\mathscr{H} = \{ h>0: \mu_n + h \in \mathbb{N} \}$. Thus $\mathbb{P}[X+Y > \mu_n + \sigma_n x_n] = \sum_{ h \in \mathscr{H}: h > \sigma_n x_n} \mathbb{P}[X+ Y = \mu_n + h]$. Let $\mathscr{H}_1 = \{ h : \mu_{n1} + h \in \mathbb{N} \}$ and set $h_1^* = \inf\{ h \in \mathscr{H}_1 : h > \frac{\sigma_{n1}^2 }{\sigma_n} x_n\}$.
Thus we have, for $h \in \mathscr{H}$, \begin{align} \mathbb{P}[X+Y = \mu_n + h] &= \sum_{h_1 \in \mathscr{H}_1} \mathbb{P}[X = \mu_{n1} + h_1 ] \mathbb{P}[ Y = \mu_{n2} + (h-h_1)] \nonumber \\ &\geq \sum_{h_1 = h_1^*}^{h_1^* + m^* } \mathbb{P}[X = \mu_{n1} + h_1 ] \mathbb{P}[ Y = \mu_{n2} + (h-h_1)], \label{eq:int1} \end{align} for some $m^*$ to be chosen appropriately. Using \cite[Theorem 1.5]{bollobas}, we have, \begin{align} \mathbb{P}[X = \mu_{n1} + h_1 ] \geq \frac{1}{\sqrt{2\pi} \sigma_{n1}} \exp\Big[ - \frac{h_1^2}{ 2 \sigma_{n1}^2 } - \xi_{1n} \Big], \nonumber \end{align} for an explicit sequence $\xi_{1n}(h_1)$, depending on $h_1$. Upon using the facts that $a', b' \gg (\log n )^3$ and $h_1 = O( \sigma_n \sqrt{2 \log n})$, for any $m^* \ll h_1^*$ it is immediate that $\xi_{1n}(h_1) = o(1)$, uniformly over $ h_1^* < h_1 < h_1^* + m^*$. Thus we have, \begin{align} \mathbb{P}[X = \mu_{n1} + h_1 ] \geq (1 + o(1)) \frac{1}{\sqrt{2\pi} \sigma_{n1}} \exp\Big[ - \frac{h_1^2}{ 2 \sigma_{n1}^2 } \Big] . \nonumber \end{align} Similar arguments immediately imply that for $h_1^* < h_1 < h_1^* + m^*$, \begin{align} \mathbb{P}[ Y = \mu_{n2} + (h-h_1) ] \geq (1 + o(1)) \frac{1}{\sqrt{2 \pi} \sigma_{n2}} \exp\Big[ - \frac{(h- h_1)^2}{2 \sigma_{n2}^2} \Big]. \nonumber \end{align} Using these bounds in \eqref{eq:int1}, for $h = O(\sigma_n \sqrt{2 \log n})$, we obtain the lower bound \begin{align} \mathbb{P}[ X+ Y = \mu_n + h] &\geq (1 + o(1)) \frac{m^*}{2\pi \sigma_{n1} \sigma_{n2}} \exp\Big[ - \frac{h^2 }{2\sigma_{n}^2} \Big] \nonumber\\ &= (1 + o(1)) \frac{1}{\sqrt{2 \pi} \sigma_n } \exp\Big[ - \frac{h^2 }{2\sigma_{n}^2} \Big] , \nonumber \end{align} where we choose $m^* = \sqrt{2\pi} \sigma_{n1} \sigma_{n2} / \sigma_{n} \ll h_1^*$. Finally, we have, setting $M_n = \sigma_n \sqrt{C\log n}$ for some constant $C$ sufficiently large, \begin{align} &\mathbb{P}[X+ Y > \mu_n + \sigma_n x_n ] \geq \sum_{h \in \mathscr{H} : \sigma_n x_n < h < M_n } \mathbb{P}[X+ Y = \mu_n + h] \nonumber\\ &\geq (1 + o (1)) \frac{1}{\sqrt{2\pi}\sigma_n} \sum_{h \in \mathscr{H} : \sigma_n x_n < h < M_n } \exp\Big[ - \frac{h^2}{2 \sigma_n^2} \Big] \geq (1 + o(1)) ( 1 - \Phi(x_n)), \nonumber \end{align} if $C$ is chosen sufficiently large. This establishes the required lower bound. Next, we turn to the upper bound. We have, \begin{align} &\mathbb{P}[X+Y > \mu_n + \sigma_n x_n] = \sum_{h \in \mathscr{H}: h > \sigma_n x_n } \mathbb{P}[X+Y = \mu_n + h]\nonumber\\ &= \sum_{h \in \mathscr{H} : \sigma_n x_n < h < \sigma_n \sqrt{C \log n}} \mathbb{P}[X+Y = \mu_n +h] + \sum_{h \in \mathscr{H}: h > \sigma_n \sqrt{C \log n}} \mathbb{P}[X+Y = \mu_n + h], \label{eq:tail_upper_exact} \end{align} for some constant $C>0$ sufficiently large, to be chosen later. Using Lemma \ref{lemma:binomial_master} Part \ref{lemma:binomial_tail}, we have, \begin{align} \sum_{h \in \mathscr{H}: h > \sigma_n \sqrt{C \log n}} \mathbb{P}[X+Y = \mu_n + h] \leq n^{- \frac{C^2}{2} (1 + o(1)) }. \label{eq:temp_veryhightail} \end{align} Finally, we will use the following ``local limit'' lemma. \begin{lemma} \label{lemma:eq_upper} Let $X \sim \textrm{Bin}\Big(n , \frac{a'}{n} \Big)$ and $Y \sim \textrm{Bin}\Big(n, \frac{b'}{n} \Big)$ be independent random variables with $a' \geq b'$, $\liminf b'/a' >0$. Assume $b' \gg (\log n)^3$ and set $\mu_n = \mathbb{E}[X+Y]$, $\sigma_n^2 = \Var(X+Y)$.
Then for any constant $C>2$ and $ \sigma_n \sqrt{2 \log n} < h < \sigma_n \sqrt{C \log n}$, we have, for $h \in \mathscr{H}$, \begin{align} \mathbb{P}[X+Y = \mu_n + h ] \leq (1+ o(1))\frac{1}{\sqrt{2 \pi} \sigma_n} \exp\Big[ - \frac{h^2}{2 \sigma_n^2} \Big]. \nonumber \end{align} \end{lemma} We defer the proof of Lemma \ref{lemma:eq_upper} and first complete the proof of the upper bound. Lemma \ref{lemma:eq_upper} immediately yields \begin{align} \sum_{h \in \mathscr{H} : \sigma_n x_n < h < \sigma_n \sqrt{C \log n}} \mathbb{P}[X+Y = \mu_n +h] &\leq (1 + o(1)) \sum_{h \in \mathscr{H} : \sigma_n x_n < h < \sigma_n \sqrt{C \log n}} \frac{1}{\sqrt{2\pi} \sigma_n} \exp\Big[ - \frac{h^2}{2 \sigma_n^2} \Big] \nonumber\\ &\leq (1+o(1)) \int_{x_n}^{\sqrt{C \log n}} \phi(x) \textrm{d}x, \nonumber \end{align} where $\phi(\cdot)$ is the density of the standard Gaussian distribution. We know that $(1 - \Phi(\sqrt{C\log n})) \leq n^{-\frac{C}{2}}$. Thus for $C$ sufficiently large, $\frac{\Phi(\sqrt{C \log n} ) - \Phi(x_n)}{1 - \Phi(x_n)} \to 1$ as $n \to \infty$. For any such choice of $C$, we immediately have, using \eqref{eq:tail_upper_exact} and \eqref{eq:temp_veryhightail}, \begin{align} \mathbb{P}[X+Y > \mu_n + \sigma_n x_n] \leq (1 + o(1)) (1 - \Phi(x_n)) + n^{-\frac{C^2}{2} (1 + o(1))} = (1 + o(1)) (1 - \Phi(x_n)). \nonumber \end{align} This completes the proof modulo the proof of Lemma \ref{lemma:eq_upper}. \ \vrule height4pt width4pt depth0pt \begin{proof}[Proof of Lemma \ref{lemma:eq_upper}] We have, for $h \in \mathscr{H}$, setting $h_1^* = h \sigma_{n1}^2/ \sigma_{n}^2$ and $m^* = \sqrt{2\pi}\frac{{\sigma_{n1} \sigma_{n2}}}{2\sigma_n}$, \begin{align} &\mathbb{P}[X+ Y = \mu_n + h ] = \sum_{h_1 \in \mathscr{H}_1} \mathbb{P}[X= \mu_{n1} + h_1 , Y = \mu_{n2} + h- h_1] =T_1 + T_2 + T_3, \nonumber \\ &T_1 = \sum_{h_1 \in \mathscr{H}_1: h_1 < h_1^* - m^* } \mathbb{P}[X= \mu_{n1} + h_1 , Y = \mu_{n2} + h- h_1] , \nonumber \\ &T_2 = \sum_{h_1 \in \mathscr{H}_1: h_1^* - m^* < h_1 < h_1^* + m^* } \mathbb{P}[X= \mu_{n1} + h_1 , Y = \mu_{n2} + h- h_1] , \nonumber\\ &T_3 = \sum_{h_1 \in \mathscr{H}_1: h_1 > h_1^* + m^* } \mathbb{P}[X= \mu_{n1} + h_1 , Y = \mu_{n2} + h- h_1] . \nonumber \end{align} First, we analyze the term $T_2$. \cite[Theorem 1.2]{bollobas} implies that for $h_1 = O( \sigma_{n1} \sqrt{\log n} )$, \begin{align} \mathbb{P}[ X = \mu_{n1} + h_1 ] < \frac{1}{\sqrt{2\pi} \sigma_{n1}} \exp\Big[- \frac{h_1^2}{2 \sigma_{n1}^2} + \xi_n(1) \Big] , \nonumber \end{align} for an explicit sequence $\xi_n(1)$. Using $a'\geq b' \gg (\log n)^3$ and $h = O(\sigma_{n} \sqrt{\log n})$, it immediately follows that $\xi_n(1) = o(1)$. Using similar arguments for $\mathbb{P}[ Y = \mu_{n2} + h - h_1]$, we obtain that \begin{align} T_2 &\leq (1 + o(1)) \sum_{h_1 \in \mathscr{H}_1: h_1^* - m^* < h_1 < h_1^* + m^* } \frac{1}{2 \pi \sigma_{n1} \sigma_{n2} } \exp\Big[ - \frac{h_1^2}{2 \sigma_{n1}^2} - \frac{(h- h_1)^2}{2 \sigma_{n2}^2} \Big] \nonumber \\ &\leq (1 + o(1)) \frac{1}{\sqrt{2 \pi} \sigma_{n}}\exp\Big[ - \frac{h^2}{2 \sigma_n^2} \Big] =: (1 + o(1)) z_n, \nonumber \end{align} using the definition of $h_1^*$. We will be done once we establish $T_1, T_3 = o(z_n)$. We will sketch this proof for $T_3$; the argument for $T_1$ is analogous and will be omitted.
We note that \begin{align} T_3 &= \sum_{h_1 \in \mathscr{H}_1: h_1^* + m^* < h_1 < x_n \sigma_{n1} (1 + \tau_n)} \mathbb{P}[ X = \mu_{n1} + h_1, Y = \mu_{n2} + h-h_1] \nonumber\\ &+ \sum_{h_1 \in \mathscr{H}_1: h_1 > x_n \sigma_{n1}(1 + \tau_n) } \mathbb{P}[X = \mu_{n1} + h_1, Y = \mu_{n2} + (h - h_1)], \label{eq:int2} \end{align} for some sequence $\tau_n>0$ to be chosen appropriately. We will establish that each of these terms is $o(z_n)$. To this end, we note that \begin{align} \sum_{h_1 \in \mathscr{H}_1: h_1 > x_n \sigma_{n1} (1 + \tau_n)} \mathbb{P}[X = \mu_{n1} + h_1, Y = \mu_{n2} + (h - h_1)] \leq \mathbb{P}[X > \mu_{n1} + \sigma_{n1} x_n ( 1 + \tau_n) ] \sup_x \mathbb{P}[Y = x]. \nonumber \end{align} By direct computation, it is easy to see that $\sup_x \mathbb{P}[Y= x] = O(\frac{1}{\sqrt{b'}})$. Using the results in \cite[Lemma 6.2]{mms2016}, we have, \begin{align} \mathbb{P}[X> \mu_{n1} + \sigma_{n1} x_n (1 + \tau_n) ] \leq n^{- (1 + \tau_n)^2 (1 + o(1))}. \nonumber \end{align} Thus for any sequence $\tau_n >0 $ such that $\liminf \tau_n >0$, the second term in \eqref{eq:int2} is $o(z_n)$. Next, we study the first term in the RHS of \eqref{eq:int2}. We note that $\sigma_{n} \geq \sigma_{n1}$, $h< \sigma_n \sqrt{C\log n}$ implies $h - x_n \sigma_{n1} (1 + \tau_n) \geq x_n \sigma_{n} \Big( 1 - \frac{\sigma_{n1}}{\sigma_n} ( 1 + \tau_n) \Big)$ and $h- x_n \sigma_{n1} (1+ \tau_n) \leq \sqrt{\log n} \sigma_n \Big(\sqrt{C} - \sqrt{2} \frac{\sigma_{n1}}{\sigma_n} (1 + \tau_n) \Big)$. This implies that $h - x_n \sigma_{n1} (1 + \tau_n) = O( \sqrt{\log n} \sigma_n )$ for some sequence $\tau_n$ sufficiently small. We will fix any such sequence in the rest of the proof. For any such $ h_1^* + m^* < h_1 < x_n \sigma_{n1} (1 + \tau_n)$, \cite[Theorem 1.2]{bollobas} implies that \begin{align} \mathbb{P}[ X = \mu_{n1} + h_1] \leq \frac{1}{\sqrt{2 \pi} \sigma_{n1}} \exp\Big[ - \frac{h_1^2}{2 \sigma_{n1}^2 } + \xi_n(1, h_1) \Big], \nonumber \end{align} for some explicit sequence $\xi_n(1, h_1)$, depending on $h_1$. Further, $a',b' \gg (\log n )^3$ and $ h_1^* + m^* < h_1 < x_n \sigma_{n1} (1 + \tau_n)$ imply that \begin{align} \mathbb{P}[ X = \mu_{n1} + h_1 ] \leq (1 + o(1)) \frac{1} { \sqrt{2 \pi} \sigma_{n1}} \exp\Big[ - \frac{h_1^2}{2 \sigma_{n1}^2} \Big], \nonumber \end{align} where the $o(1)$ term is uniformly controlled for all $ h_1^* + m^* < h_1 < x_n \sigma_{n1} (1 + \tau_n)$. Exactly analogous considerations imply that \begin{align} \mathbb{P}[Y = \mu_{n2} + h - h_1] \leq (1 + o(1)) \frac{1}{ \sqrt{2 \pi} \sigma_{n2}} \exp\Big[ - \frac{ (h- h_1)^2}{2 \sigma_{n2}^2} \Big]. \nonumber \end{align} Thus we have, \begin{align} &\sum_{h_1 \in \mathscr{H}_1: h_1^* + m^* < h_1 < x_n \sigma_{n1} (1 + \tau_n)} \mathbb{P}[ X = \mu_{n1} + h_1, Y = \mu_{n2} + h-h_1] \nonumber \\ &\leq (1 + o(1)) \sum_{h_1 \in \mathscr{H}_1 : h_1^* + m^* < h_1 < x_n \sigma_{n1}( 1 + \tau_n)} \frac{1}{2 \pi \sigma_{n1} \sigma_{n2}} \exp\Big[ - \frac{h_1^2}{2 \sigma_{n1}^2} - \frac{(h- h_1)^2}{2 \sigma_{n2}^2} \Big] \nonumber \\ &\leq (1 + o(1)) \frac{1}{\sqrt{2\pi} \sigma_n } \exp\Big[ - \frac{h^2 \sigma_{n1}^2}{2 \sigma_n^2} \Big] \int_{h_1^* + m^*}^{x_n \sigma_{n1} (1+\tau_n)} \frac{1}{\sqrt{2\pi} \sigma_0} \exp\Big[ -\frac{(x- \frac{h \sigma_0^2 }{\sigma_{n2}^2})^2}{2 \sigma_0^2} \Big]\textrm{d} x \nonumber \\ &\leq (1 + o(1)) \frac{1}{\sqrt{2 \pi}\sigma_n } \exp\Big[ - \frac{h^2 \sigma_{n1}^2}{2 \sigma_n^2} \Big] = o(z_n), \nonumber \end{align} where we set $\sigma_0^2 = \frac{\sigma_{n1}^2 \sigma_{n2}^2}{\sigma_n^2}$.
This completes the proof. \end{proof} \section{Proof of Technical Lemmas}\label{section:technical_lemmas} \subsection{Proof of Lemma \ref{lemma:binomial_change_of_measure}} Let $\mathcal{A}=\{(t_1,t_2)\in \mathbb{N}^2: 0\leq t_1\leq n_1,\ 0\leq t_2\leq n_2,\ \beta_1t_1+\beta_2t_2\in B\}$. Then \begin{equs} \ & \mathbb{E}\left(\alpha_1^X\alpha_2^Y\mathbf{1}\left(\beta_1X+\beta_2Y\in B\right)\right)\\ &=\sum_{(t_1,t_2)\in \mathcal{A}}{n_1\choose t_1}{n_2 \choose t_2}\alpha_1^{t_1}\alpha_2^{t_2}p_1^{t_1}p_2^{t_2}(1-p_1)^{n_1-t_1}(1-p_2)^{n_2-t_2} \\ &=\sum_{(t_1,t_2)\in \mathcal{A}}{n_1\choose t_1}{n_2 \choose t_2}(\alpha_1p_1)^{t_1}(\alpha_2p_2)^{t_2}(1-p_1)^{n_1-t_1}(1-p_2)^{n_2-t_2} \\ &=(1-p_1+\alpha_1 p_1)^{n_1}(1-p_2+\alpha_2 p_2)^{n_2} \sum_{(t_1,t_2)\in \mathcal{A}}\left\{\begin{array}{c}{n_1\choose t_1}{n_2 \choose t_2}\left(\frac{\alpha_1p_1}{1-p_1+\alpha_1p_1}\right)^{t_1}\left(\frac{\alpha_2p_2}{1-p_2+\alpha_2p_2}\right)^{t_2}\\ \times\left(1-\frac{\alpha_1p_1}{1-p_1+\alpha_1p_1}\right)^{n_1-t_1}\left(1-\frac{\alpha_2p_2}{1-p_2+\alpha_2p_2}\right)^{n_2-t_2}\end{array}\right\}\\ &=(1-p_1+\alpha_1 p_1)^{n_1}(1-p_2+\alpha_2 p_2)^{n_2}\mathbb{P}(\beta_1X'+\beta_2Y'\in B). \end{equs} \subsection{Proof of Proposition \ref{lemma:hcnew_main}} We analyze each term in turn. In the analysis, since $\mu^0_{n1}(\mathcal{C})$, $\mu^0_{n2}(\mathcal{C})$, and $\sigma_{n0}(\C,\beta_1,\beta_2)$ do not depend on $\mathcal{C}$, we simply refer to them as $\mu^0_{n1}$, $\mu^0_{n2}$, and $\sigma_{n0}(\beta_1,\beta_2)$. {\textbf{Analysis of $\mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}(HC(\C,\beta_1,\beta_2;t))$}}\\ We have, for $i_1 \in S_1,\ i_2\in S_2,\ j_1 \in S_1^c$, and $j_2\in S_2^c$, \begin{equs} \ & \mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(HC(\C,\beta_1,\beta_2;t)\right) \\ &= s_1\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_1}(\C,\beta_1,\beta_2)>t\right)+s_2\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_2}(\C,\beta_1,\beta_2)>t\right)\\&+(n/2-s_1)\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{j_1}(\C,\beta_1,\beta_2)>t\right)+(n/2-s_2)\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{j_2}(\C,\beta_1,\beta_2)>t\right)-n\mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_1(\C,\beta_1,\beta_2)>t\right) \\ &\geq s_1 \left(\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_1}(\C,\beta_1,\beta_2)>t\right) - \mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_1(\C,\beta_1,\beta_2)>t\right) \right)\\&+s_2 \left(\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_2}(\C,\beta_1,\beta_2)>t\right) - \mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_1(\C,\beta_1,\beta_2)>t\right) \right).
\end{equs} Now note that \begin{equs} \ & \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_1}(\C,\beta_1,\beta_2)>t\right)\\ &=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1 d_{i_1}(1)+\beta_2d_{i_1}(2)-\beta_1\mu^0_{n1}-\beta_2\mu^0_{n2}>\sqrt{2r\log{n}}\sigmazero(\beta_1,\beta_2)\right)\\ &\geq \mathbb{P}\left(\beta_1 X+\beta_2Y-\beta_1\mu^0_{n1}-\beta_2\mu^0_{n2}>\sqrt{2r\log{n}}\sigmazero(\beta_1,\beta_2)\right)\\ &=\mathbb{P}\left(\frac{\beta_1 X+\beta_2Y-\mu_n(\beta_1,\beta_2)}{\sigma_n(\beta_1,\beta_2)}>\frac{\sqrt{2r\log{n}}\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-\mu_n(\beta_1,\beta_2)}{\sigma_n(\beta_1,\beta_2)}\right), \end{equs} where $X\sim \mathrm{Bin}\left(n/2-s_1,(1+A)\frac{a}{n}\right)\perp Y\sim \mathrm{Bin}\left(n/2-s_2,(1+A)\frac{b}{n}\right)$, $\mu_n(\beta_1,\beta_2)=\mathbb{E}\left(\beta_1X+\beta_2Y\right)$ and $\sigma_n^2(\beta_1,\beta_2)=\mathrm{Var}\left(\beta_1X+\beta_2Y\right)$. Since \begin{equs} \frac{\sqrt{2r\log{n}}\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-\mu_n(\beta_1,\beta_2)}{\sigma_n(\beta_1,\beta_2)}\sim (\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)})\sqrt{\log{n}}, \end{equs} and $r > \frac{C^*\overline{\rho}(\beta_1,\beta_2)}{2}$, we have by an application of Lemma \ref{lemma:binomial_master} Part (b, i) \begin{equs} \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_1}(\C,\beta_1,\beta_2)>t\right)=n^{-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2/2+o(1)}. \end{equs} By exactly similar arguments the following also hold: \begin{equs} \mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_{i_2}(\C,\beta_1,\beta_2)>t\right)&=n^{-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2/2+o(1)},\\ \mathbb{P}_{\mathbf{1},a,b}^{(\mathcal{C})}\left(D_{1}(\C,\beta_1,\beta_2)>t\right)&=n^{-r+o(1)}. \end{equs} Therefore \begin{equs} \mathbb{E}_{\mathbf{\Theta},a,b}^{(\mathcal{C})}\left(HC(\C,\beta_1,\beta_2;t)\right) &\geq \max\{s_1,s_2\} \left(n^{-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2/2+o(1)}-n^{-r+o(1)}\right) \\ &\geq n^{1-\alpha-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2/2+o(1)}. \end{equs} This completes the proof of \eqref{eq:alt_expnew}. {\textbf{Analysis of $\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(HC(\C,\beta_1,\beta_2;t)\right)$}}\\ We begin with the following basic decomposition of the variance into diagonal and off-diagonal terms.
\begin{equs} \ &\mathrm{Var}^{(\mathcal{C})}_{\mathbf{\Theta},a,b}\left(HC(\C,\beta_1,\beta_2;t)\right) := \sum_{i=1}^{5} T_i, \label{eq:var_decomp_five}\\ T_1 &= (s_1-1)a^{(s_1)}(t)(1-a^{(s_1)}(t))+(s_2-1)a^{(s_2)}(t)(1-a^{(s_2)}(t)),\\ T_2 &= (n/2-s_1)a^{(n/2-s_1)}(t)(1-a^{(n/2-s_1)}(t))+(n/2-s_2)a^{(n/2-s_2)}(t)(1-a^{(n/2-s_2)}(t)), \\ T_3 &= s_1(s_1-1)(b^{(s_1)}(t)-(a^{(s_1)}(t))^2)+s_2(s_2-1)(b^{(s_2)}(t)-(a^{(s_2)}(t))^2)\\ &+s_1s_2(b^{(s_1,s_2)}(t)-a^{(s_1)}(t)a^{(s_2)}(t)),\\ T_4 &= (n/2-s_1)(n/2-s_1-1)(b^{(n/2-s_1)}(t)-(a^{(n/2-s_1)}(t))^2)\\&+(n/2-s_2)(n/2-s_2-1)(b^{(n/2-s_2)}(t)-(a^{(n/2-s_2)}(t))^2)\\ &+(n/2-s_1)(n/2-s_2)(b^{(n/2-s_1,n/2-s_2)}(t)-a^{(n/2-s_1)}(t)a^{(n/2-s_2)}(t)), \\ T_5 &= s_1(n/2-s_1)(b^{(s_1,n/2-s_1)}(t)-a^{(s_1)}(t)a^{(n/2-s_1)}(t))\\ &+s_2(n/2-s_2)(b^{(s_2,n/2-s_2)}(t)-a^{(s_2)}(t)a^{(n/2-s_2)}(t))\\ &+s_1(n/2-s_2)(b^{(s_1,n/2-s_2)}(t)-a^{(s_1)}(t)a^{(n/2-s_2)}(t))\\ &+s_2(n/2-s_1)(b^{(s_2,n/2-s_1)}(t)-a^{(s_2)}(t)a^{(n/2-s_1)}(t)), \end{equs} where for $l,l_1\neq l_2 \in \{1,2\}$, \begin{equs} a^{(s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_i(\C,\beta_1,\beta_2)>t\right), \quad i \in S_l,\\ a^{(n/2-s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(D_i(\C,\beta_1,\beta_2)>t\right), \quad i \in S_l^c, \\ b^{(s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(D_i(\C,\beta_1,\beta_2)>t,D_j(\C,\beta_1,\beta_2)>t), \quad (i,j)\in S_l\times S_l,\\ b^{(n/2-s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(D_i(\C,\beta_1,\beta_2)>t,D_j(\C,\beta_1,\beta_2)>t), \quad (i,j)\in S_l^c\times S_l^c,\\ b^{(s_{l_1},s_{l_2})}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(D_i(\C,\beta_1,\beta_2)>t,D_j(\C,\beta_1,\beta_2)>t), \quad (i,j)\in S_{l_1}\times S_{l_2},\\ b^{(s_{l_1},n/2-s_{l_2})}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(D_i(\C,\beta_1,\beta_2)>t,D_j(\C,\beta_1,\beta_2)>t), \quad (i,j)\in S_{l_1}\times S_{l_2}^c,\\ b^{(n/2-s_{l_1},n/2-s_{l_2})}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}(D_i(\C,\beta_1,\beta_2)>t,D_j(\C,\beta_1,\beta_2)>t), \quad (i,j)\in S_{l_1}^c\times S_{l_2}^c. \end{equs} The control of the various terms above is achieved by the following lemma. \begin{lemma} \label{lemma:bounds_five_sparse_graphs} Let $\tau_a=\tau_b=0$. For $t = \sqrt{2 r \log n}$ with $r > \frac{C^*\overline{\rho}(\beta_1,\beta_2)}{2}$, we have, for any $\varepsilon>0$, \begin{equs} \lim_{n\to \infty} \frac{\log T_1}{\log n} &= 1-\alpha -\frac{1}{2} \left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2, \quad \lim_{n \to \infty} \frac{\log T_2}{\log n} = 1-r, \\ \lim_{n \to \infty} \frac{\log T_3}{\log n} &\leq 1-2\alpha-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)} \right)^2(1-\varepsilon), \quad \lim_{n\to \infty} \frac{\log T_4}{\log n} \leq 1-2r(1-\varepsilon), \\ \lim_{n \to \infty}& \frac{\log T_5}{\log n} \leq 1-\alpha-\left(\frac{1}{2}\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)} \right)^2+ r\right)(1-\varepsilon). \end{equs} \end{lemma} We note that Lemma \ref{lemma:bounds_five_sparse_graphs} indeed verifies \eqref{eq:alt_varnew} by taking $\varepsilon>0$ small enough. Also, this verifies \eqref{eq:null_varnew} by taking $C^*=0$ in Lemma \ref{lemma:bounds_five_sparse_graphs}. \ \vrule height4pt width4pt depth0pt \begin{proof}[Proof of Lemma \ref{lemma:bounds_five_sparse_graphs}] We will repeatedly use the following simple identity.
\begin{lemma}\label{lemma_simple_identity} For real numbers $p,x_1,x_2,y_1,y_2$, $$px_1x_2+(1-p)y_1y_2-(px_1+(1-p)y_1)(px_2+(1-p)y_2)=p(1-p)(x_1-y_1)(x_2-y_2).$$ \end{lemma} \paragraph*{\textbf{Control of $T_1$}} Note that for $l \in \{1,2\}$ and $i \in S_l$ \begin{equs} a^{(s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1 \mu^0_{n1}+\beta_2\mu^0_{n2}\right), \end{equs} where $\beta_1d_i(1)+\beta_2d_i(2)\sim \beta_1(Z_{11}+Z_{12})+\beta_2(Z_{21}+Z_{22})$ with the following independent components \begin{equs} Z_{11}\sim \mathrm{Bin}\left(s_l-1,(1+A)^2\frac{a}{n}\right),\\ Z_{12}\sim \mathrm{Bin}\left(n/2-s_l,(1+A)\frac{a}{n}\right),\\ Z_{21}\sim \mathrm{Bin}\left(n/2-s_{l'},(1+A)\frac{b}{n}\right),\\ Z_{22}\sim \mathrm{Bin}\left(s_{l'},(1+A)^2\frac{b}{n}\right), \end{equs} for $l'\neq l\in \{1,2\}$. To operationalize Lemma \ref{lemma:binomial_master}, note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_l)(1+A)\frac{a}{n}-(n/2-s_{l'})(1+A)\frac{b}{n}}{\sqrt{(n/2-s_l)(1+A)\frac{a}{n}\left(1-(1+A)\frac{a}{n}\right)+(n/2-s_{l'})(1+A)\frac{b}{n}\left(1-(1+A)\frac{b}{n}\right)}}\\ &=\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)(1+o(1))\sqrt{\log{n}}. \end{equs} Therefore, applying Lemma \ref{lemma:binomial_master} Part (b, ii), \begin{equs} \lim\limits_{n \rightarrow \infty}\frac{\log{T_1}}{\log{n}}= 1-\alpha-\frac{1}{2}\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2. \end{equs} \paragraph*{\textbf{Control of $T_2$}} Note that for $l \in \{1,2\}$ and $i \in S_l^c$ \begin{equs} a^{(n/2-s_l)}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1 \mu^0_{n1}+\beta_2\mu^0_{n2}\right), \end{equs} where $\beta_1d_i(1)+\beta_2d_i(2)\sim \beta_1(Z_{11}+Z_{12})+\beta_2(Z_{21}+Z_{22})$ with the following independent components \begin{equs} Z_{11}\sim \mathrm{Bin}\left(s_l,(1+A)\frac{a}{n}\right),\\ Z_{12}\sim \mathrm{Bin}\left(n/2-s_l-1,\frac{a}{n}\right),\\ Z_{21}\sim \mathrm{Bin}\left(n/2-s_{l'},\frac{b}{n}\right),\\ Z_{22}\sim \mathrm{Bin}\left(s_{l'},(1+A)\frac{b}{n}\right), \end{equs} for $l'\neq l\in \{1,2\}$. To operationalize Lemma \ref{lemma:binomial_master}, note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_l-1)\frac{a}{n}-(n/2-s_{l'})\frac{b}{n}}{\sqrt{(n/2-s_l-1)\frac{a}{n}\left(1-\frac{a}{n}\right)+(n/2-s_{l'})\frac{b}{n}\left(1-\frac{b}{n}\right)}}\\ &=\sqrt{2r}(1+o(1))\sqrt{\log{n}}. \end{equs} Therefore, applying Lemma \ref{lemma:binomial_master} Part (b, ii), \begin{equs} \lim\limits_{n \rightarrow \infty}\frac{\log{T_2}}{\log{n}}= 1-r. \end{equs} \paragraph*{\textbf{Control of $T_3$}} Similar to \cite{mms2016}, we begin by noting the following simple identities followed by local central limit theorem type estimates. However, in order to deal with arbitrary linear combinations, one needs more detailed computations and uniform control of the local central limit type estimates. Fix $(i,j)\in S_l\times S_l$ for $l \in \{1,2\}$.
Then we have \begin{equs} b^{(s_l)}(t)&=\frac{a}{n}(1+A)^2 (a^{(s_l)'}(t) )^2+\left(1-\frac{a}{n}(1+A)^2\right)(a^{(s_l)''}(t))^2,\\ a^{(s_l)}(t)&=\frac{a}{n}(1+A)^2 (a^{(s_l)'}(t) )+\left(1-\frac{a}{n}(1+A)^2\right)(a^{(s_l)''}(t)), \end{equs} where \begin{equs} a^{(s_l)'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_1\right),\\ a^{(s_l)''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right), \label{eqn:aprimes_signal} \end{equs} where $d_i'(1)=\sum\limits_{t\in \mathcal{C}(i):\atop t\neq j}Y_{it}$. Therefore, using Lemma \ref{lemma_simple_identity}, for $l \in \{1,2\}$ we have \begin{equs} s_l(s_l-1)(b^{(s_l)}(t)-(a^{(s_l)}(t))^2)&=s_l(s_l-1)(1+A)^2\frac{a}{n}\left(1-(1+A)^2\frac{a}{n}\right)\left(a^{(s_l)'}(t)-a^{(s_l)''}(t)\right)^2.\\ \label{eqn:signal_cov_within_block} \end{equs} Now note that for $l \in \{1,2\}$ \begin{equs} \left(a^{(s_l)'}(t)-a^{(s_l)''}(t)\right)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_1\right)\\ &-\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right)\\ &\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right),\\ \label{eqn:signal_lclt} \end{equs} where $const$ depends only on $\beta_1,\beta_2$. Now $\beta_1d_i'(1)+\beta_2d_i(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_l-2,(1+A)^2\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_l,(1+A)\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{l'},(1+A)\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{l'},(1+A)^2\frac{b}{n}\right),\label{eqn:signal_bin_sim} \end{equs} for $l'\neq l\in \{1,2\}$. Further note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_l)(1+A)\frac{a}{n}-(n/2-s_{l'})(1+A)\frac{b}{n}}{\sqrt{(n/2-s_l)(1+A)\frac{a}{n}\left(1-(1+A)\frac{a}{n}\right)+(n/2-s_{l'})(1+A)\frac{b}{n}\left(1-(1+A)\frac{b}{n}\right)}}\\ &=\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)(1+o(1))\sqrt{\log{n}}.
\label{eqn:signal_cov_exponent} \end{equs} Similarly, for $(i,j)\in S_{l_1}\times S_{l_2}$ with $l_1\neq l_2 \in \{1,2\}$, \begin{equs} b^{(s_{l_1},s_{l_2})}(t)&=\frac{b}{n}(1+A)^2 (a^{(s_{l_1},s_{l_2})'}(t)a^{(s_{l_2},s_{l_1})'}(t) )+\left(1-\frac{b}{n}(1+A)^2\right)(a^{(s_{l_1},s_{l_2})''}(t)a^{(s_{l_2},s_{l_1})''}(t) ), \end{equs} where \begin{equs} a^{(s_{l_1},s_{l_2})'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_2\right),\\ a^{(s_{l_1},s_{l_2})''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right),\\ a^{(s_{l_2},s_{l_1})'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_2\right),\\ a^{(s_{l_2},s_{l_1})''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right), \label{eqn:aprimes_cross_signal} \end{equs} where we define $d_i'(2)=\sum\limits_{t\in \mathcal{C}^c(i):\atop t\neq j}Y_{it}$ and $d_j'(2)=\sum\limits_{t\in \mathcal{C}^c(j):\atop t\neq i}Y_{jt}$. Therefore, using Lemma \ref{lemma_simple_identity}, \begin{equs} \ & s_1s_2(b^{(s_1,s_2)}(t)-a^{(s_1)}(t)a^{(s_2)}(t))\\&=s_1s_2(1+A)^2\frac{b}{n}\left(1-(1+A)^2\frac{b}{n}\right)\left(a^{(s_1,s_2)'}(t)-a^{(s_1,s_2)''}(t)\right)\left(a^{(s_2,s_1)'}(t)-a^{(s_2,s_1)''}(t)\right).\\\label{eqn:signal_cov_across_block} \end{equs} As before, \begin{equs} \left(a^{(s_1,s_2)'}(t)-a^{(s_1,s_2)''}(t)\right) &\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right),\\ \left(a^{(s_2,s_1)'}(t)-a^{(s_2,s_1)''}(t)\right)&\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right), \\ \label{eqn:across_signal_lclt} \end{equs} where $const$ depends only on $\beta_1,\beta_2$. Now $\beta_1d_i(1)+\beta_2d_i'(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_1-1,(1+A)^2\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_1,(1+A)\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{2},(1+A)\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{2},(1+A)^2\frac{b}{n}\right),\label{eqn:acrosssignal_bin_sim_s1} \end{equs} and note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_1)(1+A)\frac{a}{n}-(n/2-s_{2})(1+A)\frac{b}{n}}{\sqrt{(n/2-s_1)(1+A)\frac{a}{n}\left(1-(1+A)\frac{a}{n}\right)+(n/2-s_{2})(1+A)\frac{b}{n}\left(1-(1+A)\frac{b}{n}\right)}}\\ &=\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)(1+o(1))\sqrt{\log{n}}.
\label{eqn:across_signal_cov_exponent_s1} \end{equs} Similarly $\beta_1d_j(1)+\beta_2d_j'(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_2-1,(1+A)^2\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_2,(1+A)\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{1},(1+A)\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{1},(1+A)^2\frac{b}{n}\right),\label{eqn:acrosssignal_bin_sim_s2} \end{equs} and note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_2)(1+A)\frac{a}{n}-(n/2-s_{1})(1+A)\frac{b}{n}}{\sqrt{(n/2-s_2)(1+A)\frac{a}{n}\left(1-(1+A)\frac{a}{n}\right)+(n/2-s_{1})(1+A)\frac{b}{n}\left(1-(1+A)\frac{b}{n}\right)}}\\ &=\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)(1+o(1))\sqrt{\log{n}}. \label{eqn:across_signal_cov_exponent_s2} \end{equs} Therefore, applying Lemma \ref{lemma:binomial_master} Part (a, ii) along with \eqref{eqn:signal_cov_within_block}, \eqref{eqn:signal_lclt}, \eqref{eqn:signal_bin_sim}, \eqref{eqn:signal_cov_exponent}, \eqref{eqn:signal_cov_across_block}, \eqref{eqn:across_signal_lclt}, \eqref{eqn:acrosssignal_bin_sim_s1}, \eqref{eqn:across_signal_cov_exponent_s1}, \eqref{eqn:acrosssignal_bin_sim_s2}, \eqref{eqn:across_signal_cov_exponent_s2}, we have for any fixed $\varepsilon>0$ \begin{equs} \lim\limits_{n \rightarrow \infty}\frac{\log{T_3}}{\log{n}}\leq 1-2\alpha-\left(\sqrt{2r}-\sqrt{C^*\overline{\rho}(\beta_1,\beta_2)}\right)^2(1-\varepsilon). \end{equs} \paragraph*{\textbf{Control of $T_4$}} The analysis of $T_4$ is similar in philosophy to that of $T_3$ and goes through a reduction to suprema of local central limit theorem type probability estimates for linear combinations of independent binomial random variables. However, since we need to control a similar term in the proof of Theorem \ref{thm:max_deg_null}, we present the control of $T_4$ below. Fix $(i,j)\in S_l^c\times S_l^c$ for $l \in \{1,2\}$. Then we have \begin{equs} b^{(n/2-s_l)}(t)&=\frac{a}{n} (a^{(n/2-s_l)'}(t) )^2+\left(1-\frac{a}{n}\right)(a^{(n/2-s_l)''}(t))^2,\\ a^{(n/2-s_l)}(t)&=\frac{a}{n}(a^{(n/2-s_l)'}(t) )+\left(1-\frac{a}{n}\right)(a^{(n/2-s_l)''}(t)), \end{equs} where for $(i,j)\in S_l^c\times S_l^c$ \begin{equs} a^{(n/2-s_l)'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_1\right),\\ a^{(n/2-s_l)''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right).
\label{eqn:aprimes_nonsignal} \end{equs} Therefore, using Lemma \ref{lemma_simple_identity}, for $l \in \{1,2\}$ we have \begin{equs} \ & (n/2-s_l)(n/2-s_l-1)(b^{(n/2-s_l)}(t)-(a^{(n/2-s_l)}(t))^2)\\ &=(n/2-s_l)(n/2-s_l-1)\frac{a}{n}\left(1-\frac{a}{n}\right)\left(a^{(n/2-s_l)'}(t)-a^{(n/2-s_l)''}(t)\right)^2.\\ \label{eqn:nonsignal_cov_within_block} \end{equs} Now note that for $l \in \{1,2\}$ and $(i,j)\in S_l^c\times S_l^c$ \begin{equs} \left(a^{(n/2-s_l)'}(t)-a^{(n/2-s_l)''}(t)\right)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_1\right)\\ &-\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right)\\ &\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i'(1)+\beta_2d_i(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right),\\ \label{eqn:nonsignal_lclt} \end{equs} where $const$ depends only on $\beta_1,\beta_2$. Now $\beta_1d_i'(1)+\beta_2d_i(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_l,(1+A)\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_l-2,\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{l'},\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{l'},(1+A)\frac{b}{n}\right),\label{eqn:nonsignal_bin_sim} \end{equs} for $l'\neq l\in \{1,2\}$. Further note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_l-2)\frac{a}{n}-(n/2-s_{l'})\frac{b}{n}}{\sqrt{(n/2-s_l-2)\frac{a}{n}\left(1-\frac{a}{n}\right)+(n/2-s_{l'})\frac{b}{n}\left(1-\frac{b}{n}\right)}}\\ &=\sqrt{2r}(1+o(1))\sqrt{\log{n}}. \label{eqn:nonsignal_cov_exponent} \end{equs} Similarly, for $(i,j)\in S_{l_1}^c\times S_{l_2}^c$ with $l_1\neq l_2 \in \{1,2\}$, \begin{equs} b^{(n/2-s_{l_1},n/2-s_{l_2})}(t)&=\frac{b}{n} (a^{(n/2-s_{l_1},n/2-s_{l_2})'}(t)a^{(n/2-s_{l_2},n/2-s_{l_1})'}(t) )\\&+\left(1-\frac{b}{n}\right)(a^{(n/2-s_{l_1},n/2-s_{l_2})''}(t)a^{(n/2-s_{l_2},n/2-s_{l_1})''}(t) ), \end{equs} where \begin{equs} a^{(n/2-s_{l_1},n/2-s_{l_2})'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_2\right),\\ a^{(n/2-s_{l_1},n/2-s_{l_2})''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right),\\ a^{(n/2-s_{l_2},n/2-s_{l_1})'}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull-\beta_2\right),\\ a^{(n/2-s_{l_2},n/2-s_{l_1})''}(t)&=\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)>t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull\right), \label{eqn:aprimes_cross_nonsignal} \end{equs} where we define $d_i'(2)=\sum\limits_{t\in \mathcal{C}^c(i):\atop t\neq j}Y_{it}$ and $d_j'(2)=\sum\limits_{t\in \mathcal{C}^c(j):\atop t\neq i}Y_{jt}$. Therefore, using Lemma \ref{lemma_simple_identity}, for $l_1 \neq l_2 \in \{1,2\}$ we have \begin{equs} \ & (n/2-s_{l_1})(n/2-s_{l_2})(b^{(n/2-s_{l_1},n/2-s_{l_2})}(t)-a^{(n/2-s_{l_1})}(t)a^{(n/2-s_{l_2})}(t))\\ &=(n/2-s_{l_1})(n/2-s_{l_2})\frac{b}{n}\left(1-\frac{b}{n}\right)\left\{\begin{array}{c}\left(a^{(n/2-s_{l_1},n/2-s_{l_2})'}(t)-a^{(n/2-s_{l_1},n/2-s_{l_2})''}(t)\right)\\ \times \left(a^{(n/2-s_{l_2},n/2-s_{l_1})'}(t)-a^{(n/2-s_{l_2},n/2-s_{l_1})''}(t)\right)\end{array}\right\} .\\\label{eqn:nonsignal_cov_across_block} \end{equs} As before, \begin{equs} \ & \left(a^{(n/2-s_1,n/2-s_2)'}(t)-a^{(n/2-s_1,n/2-s_2)''}(t)\right) \\&\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_i(1)+\beta_2d_i'(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right),\\ \ & \left(a^{(n/2-s_2,n/2-s_1)'}(t)-a^{(n/2-s_2,n/2-s_1)''}(t)\right)\\&\leq const\cdot \sup_{|\xi|\leq \beta_1}\mathbb{P}_{\boldsymbol{\Theta},a,b}^{(\mathcal{C})}\left(\beta_1d_j(1)+\beta_2d_j'(2)=t\sigmazero(\beta_1,\beta_2)+\beta_1\muonenull+\beta_2\mutwonull+\xi\right), \\ \label{eqn:across_nonsignal_lclt} \end{equs} where $const$ depends only on $\beta_1,\beta_2$. Now $\beta_1d_i(1)+\beta_2d_i'(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_1,(1+A)\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_1-1,\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{2},\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{2},(1+A)\frac{b}{n}\right),\label{eqn:acrossnonsignal_bin_sim_s1} \end{equs} and note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_1-1)\frac{a}{n}-(n/2-s_{2})\frac{b}{n}}{\sqrt{(n/2-s_1-1)\frac{a}{n}\left(1-\frac{a}{n}\right)+(n/2-s_{2})\frac{b}{n}\left(1-\frac{b}{n}\right)}}\\ &=\sqrt{2r}(1+o(1))\sqrt{\log{n}}.
\label{eqn:across_nonsignal_cov_exponent_s1} \end{equs} Similarly $\beta_1d_j(1)+\beta_2d_j'(2)\sim \sum\limits_{k=1}^4 Z_k$ with independent components \begin{equs} Z_1\sim \mathrm{Bin}\left(s_2,(1+A)\frac{a}{n}\right),\\ Z_2\sim \mathrm{Bin}\left(n/2-s_2-1,\frac{a}{n}\right),\\ Z_3\sim \mathrm{Bin}\left(n/2-s_{1},\frac{b}{n}\right),\\ Z_4\sim \mathrm{Bin}\left(s_{1},(1+A)\frac{b}{n}\right),\label{eqn:acrossnonsignal_bin_sim_s2} \end{equs} and note that \begin{equs} \ & \frac{t\sigmazero(\beta_1,\beta_2)+\beta_1\mu^0_{n1}+\beta_2\mu^0_{n2}-(n/2-s_2-1)\frac{a}{n}-(n/2-s_{1})\frac{b}{n}}{\sqrt{(n/2-s_2-1)\frac{a}{n}\left(1-\frac{a}{n}\right)+(n/2-s_{1})\frac{b}{n}\left(1-\frac{b}{n}\right)}}\\ &=\sqrt{2r}(1+o(1))\sqrt{\log{n}}. \label{eqn:across_nonsignal_cov_exponent_s2} \end{equs} Therefore, applying Lemma \ref{lemma:binomial_master} Part (a, ii) along with \eqref{eqn:nonsignal_cov_within_block}, \eqref{eqn:nonsignal_lclt}, \eqref{eqn:nonsignal_bin_sim}, \eqref{eqn:nonsignal_cov_exponent}, \eqref{eqn:nonsignal_cov_across_block}, \eqref{eqn:across_nonsignal_lclt}, \eqref{eqn:acrossnonsignal_bin_sim_s1}, \eqref{eqn:across_nonsignal_cov_exponent_s1}, \eqref{eqn:acrossnonsignal_bin_sim_s2}, \eqref{eqn:across_nonsignal_cov_exponent_s2}, we have for any fixed $\varepsilon>0$ \begin{equs} \lim\limits_{n \rightarrow \infty}\frac{\log{T_4}}{\log{n}}\leq 1-2r(1-\varepsilon). \end{equs} \paragraph*{\textbf{Control of $T_5$}} The analysis of $T_5$ is similar in philosophy to those of $T_3$ and $T_4$, and goes through a reduction to suprema of local central limit theorem type probability estimates for linear combinations of independent binomial random variables. We therefore omit the details. \end{proof} \bibliographystyle{imsart-nameyear}
{ "attr-fineweb-edu": 1.947266, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUf9w241xiMs_hTsVo
\section{Introduction} It has been realized in recent years that the quantum features of the correlations exhibited by multipartite quantum systems are manifold, entanglement being only one of the possible non-classical manifestations~\cite{OZ01,HV01,Vedral03,LC10,Luo08,Luo08b,LL08,MV12,GPA11,LLZ07,DVB10,RRA2011,MPP12,SRN08,RS10,WPM09,AD10,OHHH02,LF11,RCC10,SAPB12}. Even separable mixed states (that are not entangled) can have correlations with subtle non-classical properties~\cite{OZ01}. Several quantitative measures have been proposed to study the different non-classical aspects (besides entanglement) of the correlations appearing in composite quantum systems. Among these measures we can mention quantum discord~\cite{OZ01} and the measures of correlations based on the disturbances of quantum states arising from local measurements. The latter ones were advanced by Luo~\cite{Luo08,Luo08b,LL08} and by SaiToh and collaborators~\cite{SRN08,RS10}. In the case of pure states all these measures coincide with quantum entanglement. However, in the case of mixed states these quantities correspond to physical properties of quantum systems that differ from entanglement. It is generally agreed that the states of a bipartite system that are to be regarded as being only classically correlated (that is, having no quantum correlations) are those described by density matrices that are diagonal in a product basis ${\{\ket{i}\ket{j},\,i=1,\ldots,N_1;\, j=1,\ldots, N_2\}}$, where ${\{\ket{i},\,i=1,\ldots,N_1\}}$ and ${\{\ket{j},\,j=1,\ldots,N_2\}}$ are orthonormal bases associated with the two subsystems, ${N_{1,2}}$ being the dimensions of the concomitant Hilbert spaces. It is worth stressing that the set of classical states is different from the set of separable (that is, non-entangled) states. Indeed, there are important differences between these two sets. For instance, the set of separable states is convex, while the set of classical states is not~\cite{LL08}. Also, measures of non-classicality such as discord do not satisfy monogamy relations~\cite{SAPB12}, which constitutes a basic property of quantum entanglement. It is usually assumed that classical states do not provide resources for information processing or information transmission based on quantum correlations. Consider two parties $A$ and $B$ that share a quantum state of a bipartite system consisting of two subsystems $a$ and $b$ ($A$ is in possession of subsystem $a$ while $B$ is in possession of subsystem $b$). Now, assume that one or both subsystems ($a$ and $b$) are themselves composite systems. Then, as was shown by Li and Luo, by tracing over part of one (or both) subsystems $a$ and $b$ it is possible to obtain a state with quantum correlations, even if the original joint state of the composite $ab$ was classical. This is an interesting effect, because it indicates that the aforementioned classical state of the composite system $AB$ may have some ``hidden'' quantum correlations. The aim of the present contribution is to study in detail this effect for some families of states of systems of three qubits. The approach to quantum correlations proposed by Luo~\cite{Luo08} on the basis of measurement induced disturbances has two desirable features. First, it has a direct and intuitive interpretation in terms of the basic notion that in classical settings one can perform a measurement on a system without disturbing it. In quantum scenarios, on the contrary, measurements usually lead to disturbances on the systems being measured.
Luo applies these ideas to the study of correlations in bipartite systems. According to this approach, a bipartite system has only classical correlations if it is possible to conduct local measurements on both subsystems that do not disturb the global state of the composite system. If this cannot be done, the (minimum) size of the disturbance due to local measurements constitutes a quantitative measure of the quantumness of the correlations exhibited by the system under consideration. Another advantage of Luo's proposal is that the concomitant measure of the quantum character of correlations is computationally more tractable than other measures, such as quantum discord. It is important to emphasize that both quantum discord and the notion of quantum correlations based upon measurement induced disturbances determine the same family of classical states of a quantum bipartite system. As already mentioned, these states are those described by density matrices that are diagonal in a product basis. Indeed, it is shown in~\cite{Luo08} that a quantum state $\rho$ of a bipartite system is undisturbed by appropriate (un-read) local measurements if and only if $\rho$ is diagonal in a product basis. This suggests a natural way of assessing the ``amount of quantumness'' exhibited by the correlations present in a quantum state $\rho$, by recourse to the minimum possible ``distance'' between $\rho$ and the disturbed state ${\Pi(\rho)}$ resulting from a local measurement~\cite{Luo08}. \section{Non-classicality indicators based on measurement induced disturbances} Given a bipartite system's density matrix ${\rho^{ab}}$, with ${\rho^a:=\tr_b{\rho^{ab}}}$ and ${\rho^b:=\tr_a{\rho^{ab}}}$ the pertinent reduced densities, one defines the measure \begin{equation} \label{eq:mid} \midi(a,b) := \mathcal{I}(a,b) - I_C(a,b) \,, \end{equation} where ${\mathcal{I}(a,b):=S(a)+S(b)-S(a,b)}$ is the mutual quantum information between the two parties $a$-$b$ of $\rho^{ab}$ and ${I_C(a,b)}$ is the classical mutual information ascribed to the post-measurement state ${\Pi[\rho^{ab}]:=\sum_{m,n}{\Pi_{mn}\rho^{ab} \Pi_{mn}}}$, such that ${\{\Pi_{mn}\}=\{\Pi_m^a\otimes\Pi_n^b\}}$ is a projective measurement (complete and bi-local) on ${\rho^{ab}}$. Here $S(\rho)=-\tr(\rho \log \rho)$ is the von Neumann entropy of the state $\rho$. We are particularly interested in Measurement Induced Disturbances (MIDs). In this case the set ${\{\Pi_m^a\}}$ (${\{\Pi_n^b\}}$) corresponds to the eigen-projectors of the spectral decomposition of the state ${\rho^a}$ (${\rho^b}$). Since MIDs do not involve any kind of optimization, they may sometimes overestimate quantum correlations. This problem has been dealt with in several ways. One of them is the symmetric discord \begin{equation} \label{eq:sym_discord} \mathcal{M_S}(a,b) := \inf_{\{E_a\otimes E_b\}} \{ \mathcal{I}(a,b) - I'(a,b) \} \,, \end{equation} where ${I'(a,b)}$ corresponds to the post-measurement state resulting from the general local measurement ${\{E_a\otimes E_b\}}$~\cite{WPM09,MV12,GPA11}. Our main goal here is to detect non-classicality. As a consequence, overestimation does not constitute an important problem for us. Thus, we focus attention on MIDs, given the tractability of the associated computational problem, both from the analytical and the numerical viewpoints.
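Since Eq.~\eqref{eq:mid} involves no optimization, the MID of a given state can be evaluated directly. Below we give a minimal Python sketch of such an evaluation (our own illustration, not taken from the cited references). It assumes non-degenerate marginal spectra, so that the local eigen-projectors are unambiguous, and it uses the fact that the bi-local measurement $\Pi$ leaves the marginals of ${\rho^{ab}}$ unchanged, whence ${\midi(a,b)=S(\Pi[\rho^{ab}])-S(\rho^{ab})}$.
\begin{verbatim}
# Minimal sketch of the MID of Eq. (1) for a state rho acting on
# C^da (x) C^db; assumes non-degenerate marginal spectra.
import numpy as np
from numpy.linalg import eigh

def vn_entropy(rho):
    """von Neumann entropy, in bits."""
    w = eigh(rho)[0]
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def mid(rho, da, db):
    r4 = rho.reshape(da, db, da, db)
    rho_a = np.trace(r4, axis1=1, axis2=3)     # reduced state of a
    rho_b = np.trace(r4, axis1=0, axis2=2)     # reduced state of b
    _, Ua = eigh(rho_a)                        # local eigenbases
    _, Ub = eigh(rho_b)
    post = np.zeros_like(rho)                  # accumulates Pi[rho]
    for m in range(da):
        for nn in range(db):
            P = np.kron(np.outer(Ua[:, m], Ua[:, m].conj()),
                        np.outer(Ub[:, nn], Ub[:, nn].conj()))
            post += P @ rho @ P
    # Pi preserves the marginals, so MID = S(Pi[rho]) - S(rho)
    return vn_entropy(post) - vn_entropy(rho)
\end{verbatim}
For a state that is already diagonal in a product basis the measurement leaves $\rho$ invariant and the routine returns zero; computations of the kind reported in the following sections can be carried out with routines of this type.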
As mentioned before, a bipartite state is \textit{classical} if and only if it is diagonal in one special basis, that of the local eigen-projectors, and can thus be expressed as \begin{equation} \label{eq:classicalstate} \rho^{ab} = \sum_{m,n}{ p_{mn} \Pi_{m}^a \otimes \Pi_n^b } \,, \end{equation} with ${\{p_{mn}\}}$ a bivariate probability distribution, ${p_{mn}\geq0}$ and ${\sum_{m,n}p_{mn}=1}$. \section{Quantum Correlations from Classical States} Li and Luo~\cite{LL08} demonstrated that any separable state (classical or not) can be regarded as embedded in a classical state of a system of larger dimensionality. Reciprocally, given a classical state ${\rho^{ab}}$, any reduction would lead to a separable state in a space of lesser dimension. Thus, a classical state's reduction might in general be a non-classical one. This is the fact on which we will concentrate our efforts, i.e., \emph{the possibility of finding quantum-correlated reduced states, starting from a classical state}. We consider a classical state ${\rho^{ab}}$ and analyze the possibility of encountering non-classical reductions. We begin by enumerating some MID properties that apply to an arbitrary reduced state of ${\rho^{ab}}$ whenever this classical state is defined via Eq.~\eqref{eq:classicalstate}. Assume that both parties are amenable to further decomposition such that their associated Hilbert spaces can be cast as tensor products ${\hilb^a=\bigotimes_m\hilb^{a_m}}$ and ${\hilb^b=\bigotimes_n\hilb^{b_n}}$. The joint state of the parties $a_i$ and $b_j$ is \begin{equation} \label{eq:reduction} \rho^{a_ib_j} = \sum_{m,n}{ p_{mn} \rho_{m}^{a_i} \otimes \rho_n^{b_j} } \,, \end{equation} where ${\rho_{m}^{a_i}:=\tr_{\{a_k,k\neq i\}}{\Pi_m^a}}$ and ${\rho_{n}^{b_j}:=\tr_{\{b_k,k\neq j\}}{\Pi_n^b}}$. Thus, we can apply our measure of quantum correlations to these two components: \begin{equation} \label{eq:midcomp} \midi(a_i,b_j) := \mathcal{I}(a_i,b_j) - I_C(a_i,b_j) \,, \end{equation} so as to compute quantum correlations using Eq.~\eqref{eq:reduction}. Firstly, we note that for any state ${\rho^{ab}}$ (classical or not), given the positivity of ${I_C(a,b)}$, one has \begin{equation} \label{eq:prop1} \midi(a,b) \leq \mathcal{I}(a,b) \,, \end{equation} with equality ${\midi(a,b)=\mathcal{I}(a,b)}$ iff the post-measurement classical state is a product state, so that ${\Pi[\rho^{ab}]\equiv\rho^a\otimes\rho^b}$, with ${\rho^a}$ and ${\rho^b}$ coinciding with the states of the parties before the measurement (which does not modify the reduced states of $a$ and $b$). \subsection{Some interesting bounds} We now introduce, with regard to the reduction ${\rho^{a_ib_j}}$, a series of bounds that improve on Eq.~\eqref{eq:prop1}. \begin{itemize} \item Given the strong subadditivity of von Neumann's entropy, one can verify that \begin{align} \label{eq:prop2} \midi(a_i,b_j) & \leq \mathcal{I}(a_i,b_j) \notag \\ & \leq \text{min}\{\mathcal{I}(a_i,b),\mathcal{I}(a,b_j)\} \notag \\ & \leq \mathcal{I}(a,b). \end{align} In particular, if ${\rho^{ab}}$ is classical, then ${\mathcal{I}(a,b)}$ measures its classical correlations. Accordingly, Eq.~\eqref{eq:prop2} implies that \emph{quantum correlations between $a_i$ and $b_j$ have as an upper bound the classical correlations between $a$ and $b$ for the composite system}. Equality ${\midi(a_i,b_j)=\mathcal{I}(a_i,b_j)}$ holds iff ${\Pi[\rho^{a_ib_j}]\equiv\rho^{a_i}\otimes\rho^{b_j}}$.
Moreover, if $\rho^{ab}$ is classical, then ${\mathcal{I}(a,b)\leq\min\{H(a),H(b)\}}$, with $H(a)$ ($H(b)$) the Shannon entropies of the marginal distributions $p_i^a:=\sum_j{p_{ij}}$ and $p_j^b:=\sum_i{p_{ij}}$, associated respectively with $\rho^a$ and $\rho^b$. Thus, given $\rho^{ab}$ classical, one has \begin{equation} \label{eq:prop2b} \midi(a_i,b_j) \leq \mathcal{I}(a,b) \leq \min\{H(a),H(b)\} \,. \end{equation} \item Also, ${\midi(a_i,b_j)}$ has as an upper bound the entropies of the pertinent parties, i.e. \begin{align} \label{eq:prop3} \midi(a_i,b_j) & \leq \min\{S(a_i),S(b_j)\} , \end{align} where $S(a_i)$ ($S(b_j)$) stands for von Neumann's entropy of $\rho^{a_i}$ ($\rho^{b_j}$). To prove \eqref{eq:prop3} it is enough to point out that ${S(a_i,b_j)-S(a_i)}$ and ${S(a_i,b_j)-S(b_j)}$ are nonnegative because ${\rho^{a_ib_j}}$ is separable~\cite{NK01}. Accordingly, ${S(a_i,b_j)}$ is greater than or equal to both $S(a_i)$ and $S(b_j)$, so that ${\mathcal{I}(a_i,b_j)\leq\min\{S(a_i),S(b_j)\}}$. In view of \eqref{eq:prop2}, the bound \eqref{eq:prop3} follows immediately. This inequality reveals how the quantum correlations between two of the system's components are conditioned by the dimensionality of the parties, with ${S(u)\leq\log[\dim(u)]}$. \item Lastly, note that a sufficient condition for the reduction ${\rho^{a_ib_j}}$ (of the classical state $\rho^{ab}$) to be classical as well, i.e., that ${\midi(a_i,b_j)=0}$, is that ${\{\rho_m^{a_i}\}}$ and ${\{\rho_n^{b_j}\}}$ be sets of mutually commuting operators. In such a case, there exist common bases of eigen-projectors for each party, ${\{\Pi_u^{a_i}\}}$ and ${\{\Pi_v^{b_j}\}}$, so that it is possible to express the composite state in the fashion \begin{equation} \label{eq:conmut} \rho^{a_ib_j} = \sum_{u,v}{ p_{uv} \Pi_u^{a_i} \otimes \Pi_v^{b_j} } \,. \end{equation} It is worth mentioning, however, that the commutativity of the sets ${\{\rho_m^{a_i}\}}$ and ${\{\rho_n^{b_j}\}}$ is not a necessary condition for the classicality of ${\rho^{a_ib_j}}$. It is still possible to encounter classical states even if this commutativity is not verified~\cite{LL08}. \end{itemize} We pass now to a single example in which to observe the phenomenon we are interested in: a three-qubit bipartite system. \section{Nonzero MID in classical bipartite states of three qubits} Consider a bipartite state $\rho^{ab}$ with the following features. Party $a$ comprises two qubits, while party $b$ consists of a single qubit. The state is classical, being of the form (cf. Eq.~\eqref{eq:classicalstate}) \begin{equation} \label{eq:3qclassical} \rho^{ab} = \sum_{m=1}^{4}\sum_{n=1}^{2}{ p_{mn} \Pi_{m}^{a} \otimes \Pi_n^{b} } \,, \end{equation} with ${\{\Pi_{m}^{a}\equiv\Pi_{m}^{12}\}}$ the set of eigen-projectors of ${\rho^a\equiv\rho^{12}=\tr_b\rho^{ab}}$ and ${\{\Pi_{n}^{b}\equiv\Pi_{n}^{3}\}}$ that of $\rho^3$. So as to encounter quantum correlations in the reduced state ${\rho^{13}=\tr_2{\rho^{ab}}}$, we need that some of the members of the set ${\{\rho_m^1=\tr_2\Pi_m^a\}}$ do not commute amongst themselves.
For instance, define the operators ${\Pi_m^a:=\ket{a_m}\bra{a_m}}$ with \begin{equation} \label{eq:a_base} \begin{cases} \ket{a_1} = \ket{00}\,, & \ket{a_2} = \ket{10} \,, \\ \ket{a_3} = \ket{+1}\,, & \ket{a_4} = \ket{-1} \,, \end{cases} \end{equation} with the states given by ${\ket{+}=(1/\sqrt{2})(\ket{0}+\ket{1})}$ and ${\ket{-}=(1/\sqrt{2})(\ket{0}-\ket{1})}$, for the basis of $a$, and the operators ${\Pi_n^b:=\ket{b_n}\bra{b_n}}$ with ${\ket{b_1}=\ket{0}}$ and ${\ket{b_2}=\ket{1}}$ for subsystem $b$. Using these bases, we numerically compute the measure $\midi(1,3)$ for a sample of $10^4$ states with randomly generated distributions ${\{p_{mn}\}}$. In the graph of $\midi(1,3)$ vs. ${\mathcal{I}(a,b)}$ (Fig.~\ref{fig:MvsI_rand}) we see that the ensuing states almost completely fill the region lying under the straight line of unit slope, ${\midi(1,3)=\mathcal{I}(a,b)}$. This result agrees with the upper bounds anticipated by Eqs.~\eqref{eq:prop2} and \eqref{eq:prop3}. Note that all classical states of the composite system ${a-b}$ (not only those belonging to the family \eqref{eq:3qclassical}-\eqref{eq:a_base}) correspond to points that must lie within the above triangular region of the ${\mathcal{I}(a,b)-\midi(1,3)}$ plane. We consider now states lying on the border of the region depicted in Fig.~\ref{fig:MvsI_rand}. For the lower border we have ${\{\midi(1,3)=0;\,0\leq\mathcal{I}(a,b)\leq1\}}$, for the right-side one ${\{\mathcal{I}(a,b)=1;\,0\leq\midi(1,3)\leq1\}}$, and for the upper border ${\{\midi(1,3)=\max\midi(1,3)|_{\mathcal{I}(a,b)}\}}$. We shall provide parameterized families of states that correspond to the above borders, in order to illustrate the fact that these frontiers can actually be reached by classical states of the ${a-b}$ composite system. Unless explicitly stated otherwise, we consider states belonging to the family \eqref{eq:3qclassical}-\eqref{eq:a_base}. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{Bellomo_Fig_1.png} \caption{MID corresponding to subsystem $(1,3)$ vs. mutual information for the classical state $ab$, evaluated for ${\sim10^5}$ randomly selected states. The dotted line (red) depicts the bound ${\midi(1,3)\leq\mathcal{I}(a,b)}$.} \label{fig:MvsI_rand} \end{figure} \begin{enumerate} \item \emph{${\midi(1,3)=0}$ Border}. It is easy to construct classical states $\rho^{ab}$ such that $\rho^{13}$ is classical as well. Any state defined using the basis ${\{\Pi_m^a\}}$ with commuting operators from the set ${\{\rho_m^1=\tr_2\Pi_m^a\}}$ will do. It is convenient to have a free parameter at our disposal so as to let the classical mutual information traverse the interval ${0\leq\mathcal{I}(a,b)\leq1}$. For example, we have the one-parameter family ${\rho^{ab}_\alpha=\alpha\ket{000}\bra{000}+(1-\alpha)\ket{101}\bra{101}}$, with ${0\leq\alpha\leq1}$. Then, for this family, we have ${\mathcal{I}^\alpha(a,b)=-\alpha\log\alpha-(1-\alpha)\log(1-\alpha)}$ and ${\midi^\alpha(1,3)=0}$, which yields the whole lower border of the region we are interested in (see Fig.~\ref{fig:M0border}). \begin{figure} \centering \includegraphics[width=.8\columnwidth]{Bellomo_Fig_2.png} \caption{Mutual information as a function of the parameter $\alpha$ for the family ${\rho^{ab}_\alpha}$, which reproduces the lower border of the region, where ${\midi(1,3)=0}$.} \label{fig:M0border} \end{figure} \item \emph{${\mathcal{I}(a,b)=1}$ Border}.
Here we need a family of states such that i) the reduction $\rho^{13}$ exhibits quantum correlations and ii) the mutual information is maximized. For the second condition to hold we need a strong correlation, like that given by ${p_{ij}=p_i^a\delta_{ij}}$. Accordingly, ${\mathcal{I}(a,b)=S(a)=S(b)=S(a,b)}$, which is maximized by uniform marginal distributions. The family of states \begin{equation} \label{landafam} \rho^{ab}_\gamma=\frac{1}{2}\ket{000}\bra{000}+\frac{1}{2}\ket{\psi_\gamma 1 1}\bra{\psi_\gamma 1 1} \,, \end{equation} with ${\ket{\psi_\gamma}=\cos\gamma\ket{0}+\sin\gamma\ket{1}}$ and ${0\leq\gamma\leq\pi}$, fits the bill and also satisfies the condition that ${\rho_1^1:=\tr_2\Pi_1^a=\ket{0}\bra{0}}$ and ${\rho_2^1:=\tr_2\Pi_2^a=\ket{\psi_\gamma}\bra{\psi_\gamma}}$ do not in general commute. This raises hopes of ending up with ${\midi(1,3)\neq0}$. Indeed, one can find the MID for this family analytically. The eigenvalues of ${\Pi[\rho^{13}_\gamma]}$ are ${(1/4)(1\pm\cos\gamma)}$, both exhibiting a twofold degeneracy. We see in Fig.~\ref{fig:rightborder_MvsAng} that this family yields the right-side border that concerns us here, with ${0\leq\midi(1,3)\leq1}$. The states ${\rho^{ab}_\gamma}$ do not belong to the family \eqref{eq:3qclassical}-\eqref{eq:a_base}, but they are nevertheless classical states of the composite system ${a-b}$, illustrating that the frontier ${\mathcal{I}(a,b)=1}$ (with values of $\midi(1,3)$ covering the full range ${[0,1]}$) can be reached by this kind of states. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{Bellomo_Fig_3.png} \caption{MID for the subsystem ${(1,3)}$, as a function of the angle $\gamma$, for the family ${\rho^{ab}_\gamma}$, which reproduces the right-side border of the triangular region of Fig.~\ref{fig:MvsI_rand}.} \label{fig:rightborder_MvsAng} \end{figure} \item \emph{Upper border.} We were unable to find a family of states maximizing MID for all values of ${\mathcal{I}(a,b)}$ within the interval that interests us. We did encounter a one-parameter class of states that reaches the maximum possible value ${\midi^{max}_{(1,3)}=1}$. Fixing ${p_{11}=1-2\lambda}$ and ${p_{31}=\lambda=p_{42}}$ in the above mentioned basis (cf. Eq.~\eqref{eq:a_base}) we obtain the family \begin{align} \label{eq:M13max} \rho^{ab}_\lambda & = (1-2\lambda)\ket{000}\bra{000} \notag\\ & + \lambda(\ket{+10}\bra{+10}+\ket{-11}\bra{-11}). \end{align} We shall consider the range of $\lambda$-values ${[0,1/2)}$. Within this range the mutual information is ${\mathcal{I}_\lambda(a,b)=-\lambda\log\lambda-(1-\lambda)\log(1-\lambda)}$. The eigenvalues of ${\rho^{13}_\lambda}$ are ${\{0,\lambda,\frac{1}{2}(1-\lambda+c_\lambda),\frac{1}{2}(1-\lambda-c_\lambda)\}}$, with ${c_\lambda:=\sqrt{1-4\lambda+5\lambda^2}}$. Those for the post-measurement state ${\Pi[\rho^{13}_\lambda]}$ are ${\{1-\frac{3}{2}\lambda,\frac{\lambda}{2},\frac{\lambda}{2},\frac{\lambda}{2}\}}$. Accordingly, the associated entropies become \begin{align} \label{eq:S13} S(1&,3) = -\lambda\log\lambda \notag\\ & - \frac{1}{2}(1-\lambda+c_\lambda)\log\left(\frac{1}{2}(1-\lambda+c_\lambda)\right) \notag\\ & - \frac{1}{2}(1-\lambda-c_\lambda)\log\left(\frac{1}{2}(1-\lambda-c_\lambda)\right) , \end{align} and \begin{equation} \label{eq:S13m} S'(1,3) = 1 -\frac{2-3\lambda}{2}\log(2-3\lambda) -\frac{3}{2}\lambda\log\lambda \,. \end{equation} Finally, from \eqref{eq:S13}-\eqref{eq:S13m} and setting ${\midi^\lambda(1,3)=S'(1,3)-S(1,3)}$, one obtains the MID for this one-parameter class of states (these spectra are verified numerically in the sketch following this list).
Fig.~\ref{fig:MyAvsI_f4} depicts the pertinent results for the family ${\rho^{ab}_\lambda}$. For verification purposes, we also evaluated numerically the optimized measure ${\mathcal{M_S}(1,3)}$. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{Bellomo_Fig_4.png} \caption{$\midi(1,3)$ (cyan) and ${\mathcal{M_S}(1,3)}$ (red) for the subsystem ${(1,3)}$ vs. the mutual information of the classical state $ab$ corresponding to the family ${\rho^{ab}_\lambda}$.} \label{fig:MyAvsI_f4} \end{figure} A measure optimized over all local projective measurements differs significantly from the MID starting from ${\mathcal{I}(a,b)\approx0.5}$ (Fig.~\ref{fig:MyAvsI_f4}). This evidences MID's overestimation of quantum correlations. However, it is clear that the two measures agree on {\it which} are the states that exhibit quantum correlations. \vfill \end{enumerate}
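The following minimal \texttt{numpy} sketch, which is ours and not part of the original computations, cross-checks the families of items 1 and 3 above (base-2 logarithms are assumed throughout, consistent with Eq.~\eqref{eq:S13m}): it verifies that ${\midi^\alpha(1,3)=0}$ for ${\rho^{ab}_\alpha}$ and reproduces the spectra quoted for ${\rho^{13}_\lambda}$ and ${\Pi[\rho^{13}_\lambda]}$.
\begin{verbatim}
import numpy as np

def dm(v):                                # projector |v><v|
    v = np.asarray(v, complex).reshape(-1, 1)
    return v @ v.conj().T

def ent(rho):                             # von Neumann entropy (base 2)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def tr2(rho):                             # trace out qubit 2 of 3 qubits
    r = rho.reshape(2, 2, 2, 2, 2, 2)
    return np.einsum('abcdbf->acdf', r).reshape(4, 4)

def mid(rho):                             # MID of a two-qubit state
    r = rho.reshape(2, 2, 2, 2)
    r1 = np.einsum('abcb->ac', r)         # marginal of qubit 1
    r3 = np.einsum('abad->bd', r)         # marginal of qubit 3
    _, u1 = np.linalg.eigh(r1)            # marginal eigenbases fix Pi[.]
    _, u3 = np.linalg.eigh(r3)
    proj = [np.kron(dm(u1[:, i]), dm(u3[:, j]))
            for i in range(2) for j in range(2)]
    post = sum(p @ rho @ p for p in proj)
    return ent(post) - ent(rho)           # S'(1,3) - S(1,3)

q0, q1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (q0 + q1) / np.sqrt(2), (q0 - q1) / np.sqrt(2)
ket = lambda a, b, c: np.kron(np.kron(a, b), c)

alpha = 0.3                               # item 1: lower border
rho_a = alpha * dm(ket(q0, q0, q0)) + (1 - alpha) * dm(ket(q1, q0, q1))
print(mid(tr2(rho_a)))                    # ~0, while I(a,b) = H(alpha)

lam = 0.2                                 # item 3: the family rho_lambda
rho_l = ((1 - 2 * lam) * dm(ket(q0, q0, q0))
         + lam * dm(ket(plus, q1, q0)) + lam * dm(ket(minus, q1, q1)))
r13 = tr2(rho_l)
c = np.sqrt(1 - 4 * lam + 5 * lam ** 2)
print(np.sort(np.linalg.eigvalsh(r13)))   # {0, lam, (1-lam+-c)/2}
print(sorted([0, lam, (1 - lam + c) / 2, (1 - lam - c) / 2]))
print(mid(r13))                           # equals S'(1,3) - S(1,3)
\end{verbatim}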
{ "attr-fineweb-edu": 1.959961, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Mean-Field Hamiltonian} The mean-field Hamiltonian for the DDW state is \begin{equation} \label{eqn:mean-field-Ham} H_{\rm DDW} = \sum_{{\bf k},\sigma} [( \epsilon_{\bf k}-\mu ) c^\dagger_{{\bf k} \sigma} c_{{\bf k} \sigma} + (i\Delta_{\bf k} c^\dagger_{{\bf k} \sigma}c_{{\bf k+Q} \sigma} + \text{ h.c.})] \end{equation} where $c_{\bf k}$ is the annihilation operator of an electron in a state with momentum ${\bf k}$ and spin $\sigma$, $\mu$ is the chemical potential, $\Delta_{\bf k} = \langle c^\dagger_{\bf{k} \sigma} c_{\bf {k+Q} \sigma} \rangle = \Delta_0 (\cos k_x - \cos k_y) /2 $ is the DDW order parameter, and ${\bf Q} = (\pi,\pi)$ is the DDW ordering wave vector, with the lattice spacing set to unity. The tight-binding band structure is given by $\epsilon_{{\bf k}} = \epsilon_{1 {\bf k}} + \epsilon_{2 {\bf k}}$, with \begin{equation} \epsilon_{1{\bf k}} = -2t (\cos k_x + \cos k_y)\:, \hskip 0.4 cm \epsilon_{2{\bf k}} = 4t' \cos k_x \cos k_y, \end{equation} where $t$ and $t'$ are the nearest-neighbor and next-neighbor hopping parameters. Introducing the two-component field operator $\chi^\dagger_{\bf k \sigma} = \left( \begin {array} {cc} c^\dagger_{\bf k \sigma} & -i c^\dagger_{\bf k+Q \sigma} \end{array} \right) $, the mean-field Hamiltonian can be written as \begin{equation} H_{DDW} = \sum_{{\bf k}, \sigma} \chi^\dagger_{\bf k \sigma} B_{\bf k} \chi_{\bf k \sigma} \end{equation} with \begin{equation} B_{\bf k} = \left( \begin{array}{ccc} \epsilon_{\bf k}-\mu & \Delta_{\bf k} \\ \Delta_{\bf k} & \epsilon_{\bf k+Q}-\mu \end{array} \right) \end{equation} or \begin{equation} B_{\bf k} = (\epsilon_{2 \bf k} - \mu) + \epsilon_{1 \bf k} \sigma^3 + \Delta_{\bf k} \sigma^1 \end{equation} where $\sigma^i$ are the Pauli matrices and the sum is over half the original Brillouin zone (the reduced Brillouin zone, RBZ), {\em i.e.}, $|k_x| + |k_y| \leq \pi$. Diagonalizing the Hamiltonian then gives the DDW quasiparticle energy bands \begin{equation} E_{1,2}= (\epsilon_{2 \bf k} - \mu) \pm \sqrt{\epsilon^2_{1 \bf k} + \Delta^2_{\bf k}} \end{equation} as depicted in Fig.~\ref{Fig:EnSpect}. \begin{figure} \centerline{\includegraphics[scale=0.75]{EnSpect.EPS}} \caption{ DDW quasiparticle energy bands along the line $k_y = k_x/2$ in the Brillouin zone for $t=0.3$, $t'=0.09$, $\Delta_0=0.2$ and $\mu =-0.3$ eV. } \label{Fig:EnSpect} \end{figure} \section{Green Function} The ``non-interacting'' Nambu Green's function can then be obtained by inverting the matrix $B_{\bf k}$. (For now, the only relevant interactions are the electron-electron interactions which generate the DDW coupling. All other interactions, including quasiparticle-quasiparticle and quasiparticle-impurity interactions, will be taken into account later by assuming a non-zero self-energy.)
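As a quick numerical illustration of these bands, here is a short \texttt{numpy} sketch of ours (not part of the original calculation) that evaluates $E_{1,2}$ along the cut $k_y=k_x/2$ used in Fig.~\ref{Fig:EnSpect}, with the parameter values quoted in its caption:
\begin{verbatim}
import numpy as np

t, tp, D0, mu = 0.3, 0.09, 0.2, -0.3        # eV, as in Fig. 1

kx = np.linspace(-np.pi, np.pi, 400)
ky = kx / 2                                  # cut used in Fig. 1
e1 = -2 * t * (np.cos(kx) + np.cos(ky))      # nearest-neighbor part
e2 = 4 * tp * np.cos(kx) * np.cos(ky)        # next-neighbor part
Dk = 0.5 * D0 * (np.cos(kx) - np.cos(ky))    # DDW order parameter
E1 = (e2 - mu) + np.sqrt(e1**2 + Dk**2)      # upper DDW band
E2 = (e2 - mu) - np.sqrt(e1**2 + Dk**2)      # lower DDW band
\end{verbatim}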
The $2 \times 2$ Nambu Green function is defined by \begin{multline} G_0({\bf k},t) = \langle T \chi_{\bf k}(t) \chi^\dagger_{\bf k}(0)\rangle \\ =\left( \begin{array}{ccc} \langle T c_{\bf k }(t) c^\dagger_{\bf k}(0)\rangle & i\langle T c_{\bf k }(t) c^\dagger_{\bf k+Q}(0)\rangle \\ -i\langle T c_{\bf k+Q}(t) c^\dagger_{\bf k}(0)\rangle & \langle T c_{\bf k+Q}(t) c^\dagger_{\bf k+Q}(0)\rangle \\ \end{array} \right) \end{multline} The Fourier transform of the Green function matrix is then \begin{multline} G_0({\bf k},\omega) = \int dt e^{i \omega t} G_0({\bf k}, t) \\ = \frac{1}{(\omega + i\delta)-(\epsilon_{2 \bf k} - \mu) - \epsilon_{1 \bf k} \sigma^3 - \Delta_{\bf k} \sigma^1} \\ = \frac {\omega - (\epsilon_{2 \bf k} - \mu) + \epsilon_{1 \bf k} \sigma^3 + \Delta_{\bf k} \sigma^1} {(\omega -(\epsilon_{2 \bf k} - \mu)+ i\delta)^2 - \epsilon_{1 \bf k}^2 - \Delta_{\bf k}^2} \end{multline} We now consider the effects of impurity scattering and `residual' electron-electron interactions. Here, we will capture their combined effect by introducing a non-zero single-particle self-energy $\Sigma ({\bf k}, \omega) = \Sigma_1 ({\bf k}, \omega) + i \Sigma_2 ({\bf k}, \omega) $, where the real and imaginary parts give the energy renormalization and quasiparticle lifetime, respectively. Neglecting the shift in the excitation energy due to the real part $\Sigma_1 ({\bf k}, \omega)$, the self-energy can be written in terms of the quasiparticle lifetime as \begin{equation} \Sigma ({\bf k},\omega) = -\frac{i}{\tau ({\bf k},\omega)} \end{equation} where $1/\tau ({\bf k},\omega)$ is the quasiparticle scattering rate (inverse lifetime). Including the self-energy, the full Green's function $G({\bf k}, \omega)$ is then given, according to Dyson's equation, by \begin{equation} G({\bf k},\omega) = \left(G_0({\bf k},\omega)^{-1} - \Sigma({\bf k},\omega)\right)^{-1} \end{equation} The spectral function $A({\bf k},\omega)$ can then be calculated by taking the imaginary part of the Green function \begin{widetext} \begin{equation} A({\bf k},\omega)= -\frac{1}{\pi} \text{Im } G({\bf k}, \omega) = \frac{1}{\pi \tau} \frac{(\omega -\epsilon_{2 \bf k} + \mu)^2 + (\epsilon_{1 \bf k}^2 + \Delta_{\bf k}^2)+(1/\tau)^2 + 2(\omega -\epsilon_{2 \bf k} + \mu)(\epsilon_{1 \bf k} \sigma^3+ \Delta_{\bf k} \sigma^1)} {[(\omega - E_{1 \bf k}) (\omega - E_{2 \bf k}) - (1/\tau)^2]^2 + (2(\omega -\epsilon_{2 \bf k} + \mu) / \tau)^2} \end{equation} \end{widetext} \begin{figure} \center \centerline{\includegraphics[scale=0.65]{A11.eps}} \centerline{\includegraphics[scale=0.65]{A22.eps}} \caption{Elements of the DDW spectral function matrix for $1/\tau=0.005$ eV: (a)$A_{11}(\omega, k_x, k_y=0)$ and (b)$A_{22}(\omega, k_x, k_y=0)$. Other parameters are the same as in figure \ref{Fig:EnSpect}.} \label{Fig:A} \end{figure} In Figure \ref{Fig:A}, the diagonal elements of the Nambu spectral function matrix $A(\omega, k_x, k_y=0)$ are plotted against $k_x$ and $\omega$. For demonstration purposes, the scattering rate here is assumed to be a constant $1/\tau = 0.02$ eV (independent of momentum and frequency). The diagonal entries $A_{11}$ and $A_{22}$ have peaks centered mostly at energies corresponding to the upper and lower energy bands ($E_1$ and $E_2$), respectively.
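A compact \texttt{numpy} sketch of ours that simply transcribes the boxed spectral function above, for a constant scattering rate $G = 1/\tau$ and the parameters quoted in the text:
\begin{verbatim}
import numpy as np

t, tp, D0, mu, G = 0.3, 0.09, 0.2, -0.3, 0.02   # eV; G = 1/tau

def spectral(kx, ky, w):
    e1 = -2 * t * (np.cos(kx) + np.cos(ky))
    e2 = 4 * tp * np.cos(kx) * np.cos(ky)
    Dk = 0.5 * D0 * (np.cos(kx) - np.cos(ky))
    x = w - (e2 - mu)                  # frequency measured from e2 - mu
    E2sq = e1**2 + Dk**2
    den = (x**2 - E2sq - G**2)**2 + (2 * x * G)**2
    common = (G / np.pi) * (x**2 + E2sq + G**2) / den
    off = (G / np.pi) * 2 * x / den
    A11 = common + off * e1            # sigma^3 adds +e1 to the (1,1) entry
    A22 = common - off * e1            # and -e1 to the (2,2) entry
    A12 = off * Dk                     # sigma^1 gives the off-diagonal
    return A11, A12, A22
\end{verbatim}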
\section {Optical conductivity} Having found the spectral function $A(\omega, \bf{k})$, the real part of the AC conductivity can now be calculated by using the Kubo formula \begin{equation} \text{Re } \sigma_{xx}(\omega) = \frac{1}{\omega} \text{Im } \Pi_{xx}(i \omega_n \rightarrow \omega + i \delta ) \end{equation} where $\Pi(i \omega_n)$ is the Fourier transform of the current-current correlation function in the Matsubara formalism \begin{equation} \Pi_{xx} ( i \omega_n) = \int_0^\beta d\tau e^{i \omega_n \tau} \Pi_{xx} ( \tau) \end{equation} with \begin{equation} \Pi_{xx} ( \tau) = \langle T_\tau j_x ( \tau) j_x (0) \rangle \end{equation} The current operator for the DDW quasiparticles can be obtained by minimally coupling the mean-field Hamiltonian (\ref{eqn:mean-field-Ham}) to the electromagnetic field, ${\bf A}$, and differentiating once with respect to ${\bf A}$: \begin{multline} {\bf j} = \sum _{RBZ} \biggl[{{\bf v}_{F2}}({\bf k}) \left(\chi_{\bf k}^\dagger \chi_{\bf k}\right) + {{\bf v}_{F1}} ({\bf k})\left(\chi_{\bf k}^\dagger \sigma^3 \chi_{\bf k} \right)\\ + {{\bf v}_\Delta}({\bf k})\left(\chi_{\bf k}^\dagger \sigma^1 \chi_{\bf k} \right)\biggr] \end{multline} where ${{\bf v}_{F1}}({\bf k}) = {\bf \nabla_k} \epsilon_1({\bf k})$, ${{\bf v}_{F2}}({\bf k}) = {\bf \nabla_k} \epsilon_2({\bf k})$ and ${{\bf v}_\Delta}({\bf k})={\bf \nabla_k}\Delta({\bf k})$. Using this form of the current operator, the current-current correlation function can be written in terms of the elements of the imaginary-time Nambu Green's function ${\cal G}_{ij}$ (the non-interacting Green's function $ G_{ij}$ is now promoted to the interacting one ${\cal G}_{ij}$, to take the effect of the residual interactions into account). We will have \begin{multline} \langle T_\tau j(\tau) j(0) \rangle = -\sum _{RBZ}\biggl( {[{{\bf v}_{F2}}({\bf k})]^2} {\rm tr}\left( {\cal G} (-\tau) {\cal G} (\tau)\right)\\ + {[{{\bf v}_{F1}}({\bf k})]^2} {\rm tr}\left({\sigma^3} {\cal G} (-\tau) {\sigma^3}{\cal G} (\tau)\right)\\ + {[{{\bf v}_\Delta}({\bf k})]^2} {\rm tr}\left({\sigma^1}{\cal G}(-\tau) {\sigma^1} {\cal G}(\tau)\right)\\ + ({{\bf v}_{F1}}\cdot{{\bf v}_{F2}}) {\rm tr}\left( ({\sigma^3}{\cal G}(-\tau) + {\cal G}(-\tau){\sigma^3}) {\cal G}(\tau) \right)\\ + ({{\bf v}_{F2}}\cdot{{\bf v}_{\Delta}}) {\rm tr}\left( ({\sigma^1}{\cal G}(-\tau) + {\cal G}(-\tau){\sigma^1}) {\cal G}(\tau) \right)\\ + ({{\bf v}_{F1}}\cdot{{\bf v}_{\Delta}}) {\rm tr}\left( ({\sigma^1}{\cal G}(-\tau){\sigma^3} + {\sigma^3}{\cal G}(-\tau){\sigma^1}) {\cal G}(\tau) \right)\biggr) \end{multline} In this equation, we have ignored vertex corrections. These are important when the scattering rate is strongly angle-dependent, as they distinguish the transport and quasiparticle lifetimes (for instance, through a $(1-\cos\theta)$ factor) and also distinguish umklapp scattering from momentum-conserving scattering. In what follows, we will assume that the replacement $\tau\rightarrow\tau_{\rm tr}$ is made ({\em i.e.}, we ignore vertex corrections). Reference~\cite{vertex} has considered the vertex correction for the DDW conductivity.
Writing ${\cal G}$ in terms of the spectral function $A(\bf k, \omega)$, evaluating the Matsubara sum, and performing the analytic continuation $(i \omega_n \rightarrow \omega + i\delta)$, we find the optical conductivity to be \cite{Mahan} \begin{widetext} \begin{multline} \sigma(\omega)\sim \frac{1}{\omega} \sum_{RBZ} \int_{-\infty} ^\infty \frac{d\varepsilon}{2\pi} \left[n_F(\varepsilon)-n_F(\varepsilon+\omega) \right]\times\\ \biggl\{ {\left[{{\bf v}_{F2}}({\bf k})\right]^2} \Bigl[ A_{11}(k,\varepsilon) A_{11}(k,\varepsilon+\omega) +2 A_{12}(k,\varepsilon) A_{12}(k,\varepsilon+\omega) + A_{22}(k,\varepsilon) A_{22}(k,\varepsilon+\omega) \Bigr] \\ + {\left[{{\bf v}_{F1}}({\bf k})\right]^2} \Bigl[ A_{11}(k,\varepsilon) A_{11}(k,\varepsilon+\omega) -2 A_{12}(k,\varepsilon) A_{12}(k,\varepsilon+\omega) + A_{22}(k,\varepsilon) A_{22}(k,\varepsilon+\omega) \Bigr] \\ + {\left[{{\bf v}_{\Delta}}({\bf k})\right]^2} \Bigl[ A_{22}(k,\varepsilon) A_{11}(k,\varepsilon+\omega) +2 A_{12}(k,\varepsilon) A_{12}(k,\varepsilon+\omega) + A_{11}(k,\varepsilon) A_{22}(k,\varepsilon+\omega) \Bigr] \\ +2 {{\bf v}_{F1}}({\bf k})\cdot {{\bf v}_{F2}}({\bf k}) \Bigl[A_{11}(k,\varepsilon) A_{11}(k,\varepsilon+\omega) - A_{22}(k,\varepsilon) A_{22}(k,\varepsilon+\omega) \Bigr] \\ +2 {{\bf v}_{F2}}({\bf k})\cdot {{\bf v}_\Delta}({\bf k}) \Bigl[A_{12}(k,\varepsilon) ( A_{11}(k,\varepsilon+\omega)+ A_{22}(k,\varepsilon+\omega)) + ( A_{11}(k,\varepsilon)+A_{22}(k,\varepsilon)) A_{12}(k,\varepsilon+\omega) \Bigr]\\ +2 {{\bf v}_{F1}}({\bf k})\cdot {{\bf v}_\Delta}({\bf k}) \Bigl[A_{12}(k,\varepsilon) ( A_{11}(k,\varepsilon+\omega)- A_{22}(k,\varepsilon+\omega)) + ( A_{11}(k,\varepsilon)- A_{22}(k,\varepsilon)) A_{12}(k,\varepsilon+\omega) \Bigr] \biggr\} \end{multline} \end{widetext} where ${n_F}(\epsilon)$ is the Fermi distribution function. For demonstration purposes, in Fig.~\ref{Fig:BigGap} the real part of the optical conductivity has been plotted against $\omega$ for two different temperatures (one above $T^*$, depicted with a solid line, one below $T^*$, depicted with a dashed-dotted line), assuming that the quasiparticle lifetime is a constant (temperature and momentum independent) and the gap is unrealistically big ($\Delta_0 = 0.25$ eV). As expected, an upward shift of the spectral weight (SW) occurs when the gap opens. Similar calculations have also been done in \cite{Valenzuela}. \begin{figure} \centerline{\includegraphics[scale=0.80]{BigGap.EPS}} \caption{ Real part of the optical conductivity for a constant scattering rate $\gamma=\gamma_0 = 0.02$, $\Delta_0 = 0.25$ and $T^*=0.030$. } \label{Fig:BigGap} \end{figure} \section {Quasiparticle lifetime} Applying the formulas of the preceding section to the underdoped cuprates presupposes that the quasiparticle picture makes sense there. This is questionable, particularly in the anti-nodal regions, where even lowest-order perturbation theory around the DDW mean-field Hamiltonian \cite{DDWARPES} predicts short lifetimes which may indicate a breakdown of quasiparticles. However, we will compute the conductivity in the quasiparticle approximation to show that, even at this level, an upward shift of spectral weight is not expected. If there are no quasiparticles at the anti-nodes, then the situation may be even better. In order to proceed with this strategy, we need one final ingredient, an {\it ansatz} for the quasiparticle scattering rate ${1}/{\tau({\bf k},\omega; T)}= \gamma({\bf k},\omega; T)$, as a function of momentum, frequency, and temperature in the underdoped cuprates.
A number of angle-resolved photoemission experiments have measured the inverse lifetime (imaginary part of the self-energy) as a function of these different parameters. In these experiments, the widths of the quasiparticle peaks in the energy distribution curves (EDCs) or momentum distribution curves (MDCs) are measured as functions of momentum, energy, and temperature \cite{VallaPRL00, VallaScience99, Kaminski0404385,KaminskiPRL00}. While the transport lifetimes are not necessarily identical to the quasiparticle lifetimes (or, equivalently, the vertex corrections are not necessarily small), we expect that they will have a similar anisotropic behavior. For the purposes of this model calculation, we will simply take them to be the same. We emphasize that the lifetimes which we adopt below are for illustrative purposes, since our main goal is to show that an upward movement of spectral weight is not a necessary concomitant of the emergence of DDW order at finite temperature. We are not making any claims here about the correctness of these lifetimes. We take the imaginary part of the self-energy (the quasiparticle scattering rate) to have the form \begin{equation} \label{eqn:lifetime-split} \Sigma_2 ({\bf k},\omega; T) = \Sigma_2(\omega; T) + \Gamma ({\bf k}), \end{equation} where $\Sigma_2(\omega; T)$ is temperature and frequency dependent with no momentum dependence, and $\Gamma ({\bf k})$ is strongly momentum dependent. Such a form has been motivated by marginal Fermi liquid (MFL) phenomenology \cite{MFL,PNAS}. In this way of analyzing the data, it is assumed that the quasiparticle lifetime which comes from electron-electron scattering is linear in energy and independent of temperature at low temperatures, and linear in temperature and independent of the binding energy at higher temperatures: \begin{equation} \Sigma^{MFL}_2(\omega; T) = \lambda \text{Max}(|\omega|, T). \end{equation} In this {\it ansatz}, all of the angular dependence comes from {\it elastic} electron-electron scattering. This is a convenient form, but we will show that our results hold even for some others. For instance, we repeat our calculations with the standard Fermi liquid (FL) quasiparticle lifetime \begin{equation} \Sigma^{FL}_2(\omega; T) = \lambda \text{Max}(\omega^2, T^2), \end{equation} again assuming that the angular dependence comes from $\Gamma ({\bf k})$. In order to show that these particular forms of $\omega$-dependence do not play a big role in the shifts of SW, we have also tried $\omega$-independent forms of these scattering rates: $\Sigma^{T-Linear}_2(\omega; T) = \lambda T$ and $\Sigma^{T^2}_2(\omega; T) = \lambda T^2$. While this form of the quasiparticle lifetime does not give the correct DC conductivity, it is still useful as a check, because our goal is to emphasize the role of anisotropy and show that it can lead to a downward shift of spectral weight regardless of its detailed frequency and temperature dependence. The quasiparticle lifetime is strongly anisotropic \cite{Kaminski0404385}: excitations in the antinodal region are more strongly scattered than the ones in the nodal region, by up to a factor of 5. Hence, the assumed form (\ref{eqn:lifetime-split}) necessitates that $\Gamma ({\bf k})$ be a strongly anisotropic function of $\bf k$. We take the form \begin{equation} \Gamma_{\rm Aniso} ({\bf k}) = \gamma_0 (1+ (\cos k_x - \cos k_y)^2), \end{equation} where $\gamma_0$ is the scattering strength in the nodal region.
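The pieces of this ansatz are simple enough to state as code. The following sketch of ours collects the frequency/temperature parts and the anisotropic elastic part; the values of $\lambda$ and $\gamma_0$ are purely illustrative, and the isotropic control case defined next is recovered by dropping the momentum-dependent factor:
\begin{verbatim}
import numpy as np

def sigma2_mfl(w, T, lam=1.0):       # marginal Fermi liquid part
    return lam * np.maximum(np.abs(w), T)

def sigma2_fl(w, T, lam=1.0):        # standard Fermi liquid part
    return lam * np.maximum(w**2, T**2)

def gamma_aniso(kx, ky, g0):         # elastic part; antinodal rate is
    return g0 * (1.0 + (np.cos(kx) - np.cos(ky))**2)   # up to 5x nodal

def inv_tau(kx, ky, w, T, g0, lam=1.0, kind='MFL'):
    s = sigma2_mfl(w, T, lam) if kind == 'MFL' else sigma2_fl(w, T, lam)
    return s + gamma_aniso(kx, ky, g0)   # Eq. (lifetime-split)
\end{verbatim}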
In order to check the role of anisotropy in the eventual shifting of the SW, we also consider the case in which $\Gamma ({\bf k})$ is (unrealistically) isotropic: \begin{equation} \Gamma_{\rm Iso} ({\bf k}) = \gamma_0 . \end{equation} \section {Results} In Figures \ref{Fig:Linear} to \ref{Fig:FL}, we have plotted the real part of the optical conductivity vs $\omega$, with the momentum-independent part of the lifetime given by the four different forms listed above. In each figure the isotropic case is compared with the anisotropic one with the same set of parameters (listed in the figure captions), except for $\gamma_{0}$, which is smaller in the anisotropic case because of its extra (larger than 1) prefactor. In all cases we have $t=0.3$, $t'=0.09$, $\mu = -0.3$ and $T^*= 0.03$. It is clear that in figures 4a, 5a, 6a, 7a, in which the scattering rate is isotropic, there is an upward movement of spectral weight (though it is small in some cases). However, in figures 4b, 5b, 6b, 7b, in which the scattering rate is anisotropic, there is a clear downward movement of spectral weight. (Broadening of the Drude peak has also been seen in \cite{Valenzuela} and \cite{vertex}, where the scattering is isotropic.) \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.90]{IL.EPS}} \centerline{\includegraphics[scale=0.90]{AL.EPS}} \caption{ Real part of the optical conductivity for (a) isotropic scattering rate with $\gamma_0=0.020$ and (b) anisotropic scattering rate with $\gamma_0=0.008$. The quasiparticle lifetime is taken to be linear in temperature and independent of $\omega$.} \label{Fig:Linear} \end{center} \end{figure} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.90]{IMFL.EPS}} \centerline{\includegraphics[scale=0.90]{AMFL.EPS}} \caption{Real part of the optical conductivity for (a) isotropic scattering rate with $\gamma_0=0.020$ and (b) anisotropic scattering rate with $\gamma_0=0.008$. The quasiparticle lifetime's temperature and frequency dependence is given by the marginal Fermi liquid theory.} \label{Fig:MFL} \end{center} \end{figure} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.90]{IT2.EPS}} \centerline{\includegraphics[scale=0.90]{AT2.EPS}} \caption{Real part of the optical conductivity for (a) isotropic scattering rate with $\gamma_0=0.010$ and (b) anisotropic scattering rate with $\gamma_0=0.002$. The quasiparticle lifetime is taken to be independent of $\omega$ and to have a $T^2$ temperature dependence.} \label{Fig:T2} \end{center} \end{figure} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.90]{IFL.EPS}} \centerline{\includegraphics[scale=0.90]{AFL.EPS}} \caption{ Real part of the optical conductivity for (a) isotropic scattering rate with $\gamma_0=0.020$ and (b) anisotropic scattering rate with $\gamma_0=0.008$. The quasiparticle lifetime's temperature and frequency dependence is given by the Fermi liquid theory.} \label{Fig:FL} \end{center} \end{figure} \section {Conclusions} Recent data by Santander-Syro {\em et al} \cite{BontempsPRL} have questioned the previous belief that the opening of the pseudogap in the underdoped cuprates can be detected by in-plane optical conductivity measurements. The unexpected result, namely that the spectral weight is still transferred to lower frequencies as the temperature is reduced, even though a gap opens in the antinodal region of the Brillouin zone, was interpreted as the absence of any pseudogap signature in the optical data.
In this paper we showed that this effect is consistent with the DDW theory of the pseudogap state of the underdoped cuprates. In the four sets of graphs of the previous section it can clearly be seen that the key factor deciding which way the SW is transferred is the isotropy or anisotropy of the scattering rate. We showed that, regardless of the form of the temperature and frequency dependence of the quasiparticle lifetime ({\em e.g.} Fermi liquid or non-Fermi liquid), the SW is shifted upward for the isotropic scattering rate, while it is shifted downward in the anisotropic case. (We have not tried to find a form for the quasiparticle lifetime which correctly reproduces all of the transport data, but have focussed on the role of anisotropy and have shown that similar results are obtained for several different forms of frequency and temperature dependence.) We believe the explanation to be as follows: the pseudogap opens in the antinodal region, where the carriers are more strongly scattered than the ones in the nodal region, where there is no gap. Therefore, we have lost those excitations which already contributed relatively little to the transport properties of the normal state. Furthermore, as the temperature is lowered, the scattering rate of all the excitations is reduced. Our results clearly show that in the (more realistic) anisotropic case, the effect of the excitations lost due to the gap opening can be more than canceled by the temperature-dependent reduction of the scattering rate for the remaining excitations, hence producing a downward shift of the optical spectral weight. It is obvious that anisotropy is the key here, since for similar parameters with an isotropic scattering rate, an upward transfer is observed. We note that similar considerations should apply to the case of 2H-TaSe$_2$\cite{Forro}. \acknowledgements We would like to thank Sudip Chakravarty and Dmitri Basov for discussions. This work has been supported by the NSF under Grant No. DMR-0411800.
{ "attr-fineweb-edu": 1.503906, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfGPxK7IAGCrm8hj2
\section{Methodology} We first introduce two sampling methods to explore the complement. Then, we illustrate how to utilize and modify the Laplacian regularization to build CLAR on GNNs. The overall architecture of CLAR is shown in Fig.~\ref{fig:clar}. \begin{figure*}[t] \centering \includegraphics[trim=2cm 0cm 6cm 0cm,clip=true,width=\textwidth]{figures/icassp2.png} \caption{The architecture of CLAR and GNNs. The orange lines denote CLAR components and the blue ones are GNNs.} \label{fig:clar} \end{figure*} \subsection{Sampling on the Complement} The spectral character is hard to obtain directly from the complement, because the complement is extremely dense. To this end, we present two fast sampling strategies on the complement, as below. \textit{Node-based Sampling.} For each node in $\mathcal{V}$, we sample a fixed number of edges $\mathcal{E}_{node} \subset \bar{\mathcal{E}}$ to construct the sampled graph $\mathcal{G}_{node}=\left(\mathcal{V},\mathcal{E}_{node}\right)$; $\mathcal{G}_{node}$ is a spanning subgraph of the complement $\bar{\mathcal{G}}$. Specifically, given a base sampling number $S$, for each node $u \in \mathcal{V}$, we sample $S$ nodes $ \{u_1,u_2,u_3,\cdots,u_S\}$ without replacement, if and only if $\left(u,u_i\right) \in \bar{\mathcal{E}}$, where $i=1,2,3,\cdots,S$. \textit{Edge-based Sampling.} Besides, we further consider the influence of node degrees. This is based on the idea that if a node is connected to more nodes, its distinctiveness can be more easily erased by the relatively dense lower-order information; in that case, more higher-order information should be introduced. Node degrees are fully expressed in $\mathcal{E}$. Specifically, we first take the head nodes of all edges, $\mathcal{V}_{head} = \left(v_i,i=1,2,3,\cdots,|\mathcal{E}|\right)$, where $\mathcal{E} = \{\left(v_i, v_j\right)\}$. Then we sample the tail nodes $\mathcal{V}_{tail} = \left(\bar{v}_j, \bar{v}_j \in \mathcal{V}\right)$, if and only if $\exists v_i \in \mathcal{V}_{head}, \left(v_i, \bar{v}_j\right) \in \bar{\mathcal{E}}$. Here, $|\mathcal{V}_{head}|$ and $|\mathcal{V}_{tail}|$ are not restricted to be the same; similar to $S$ in node-based sampling, we introduce $S$ and repeat $\mathcal{V}_{head}$ $S$ times to match $|\mathcal{V}_{tail}|$. Finally, by pairing $\mathcal{V}_{head}$ and $\mathcal{V}_{tail}$, we get $\mathcal{E}_{edge}$, and $\mathcal{G}_{edge} = \left(\mathcal{V}, \mathcal{E}_{edge}\right)$ as the sampled graph. For simplicity, we denote the sampled graph obtained via either method as $\mathcal{G}_{s}=(\mathcal{V}_{s},\mathcal{E}_{s})$. Please refer to Algorithm~\ref{alg:sample} in Appendix~\ref{app:sampling} for the sampling strategy $\mathtt{SAMPLE}$. \subsection{Complement Laplacian Regularization} We follow the traditional graph-based semi-supervised node classification formulation to derive CLAR. To start with, the objective function is~\citep{kipf_deep_2020} \begin{equation} \mathcal{L}=\mathcal{L}_{sup}+\gamma\mathcal{L}_{reg}, \label{equ:semi} \end{equation} where $\mathcal{L}_{sup}$ is the supervised classification loss and $\gamma$ is the weighting factor of the Laplacian regularizer $\mathcal{L}_{reg}$. This regularizer constrains the similarity of the connected nodes' representations (Eq.~(\ref{equ:reg})): \begin{equation} \mathcal{L}_{reg}=\sum_{i,j}A_{ij}||f\left(x_i\right)-f\left(x_j\right)||^2 =\tr\left(f\left(X\right)^TLf\left(X\right)\right). \label{equ:reg} \end{equation}
Note that the topological similarity is entirely captured by $\mathcal{L}_{reg}$, since $f\left(x\right)$ is a classifier applied to each node independently. However, the Laplacian regularization (i.e., $\mathcal{L}_{reg}$ in Eq.~(\ref{equ:semi})) is redundant on GNNs~\citep{DBLP:conf/aaai/0002MC21}, because GNNs (Eq.~(\ref{equ:gnn_layer})) already cover this regularization \citep{kipf_deep_2020}; both are low-pass filters. We provide a spectral analysis in Sec.~\ref{sec:math}. In this paper, we instead construct a Laplacian regularization on the extracted complement $\mathcal{G}_{s}$, building a high-pass filter from the negation of the low-pass ones of GNNs: \begin{equation} \mathcal{L}_{com}=\tr \left(f(X,A)^T\mathrm{L}(\tilde{A_s})f(X,A)\right), \label{equ:com} \end{equation} where $\tilde{A_s}$ is the adjacency matrix reduced on $\mathcal{G}_{s}$ with normalization. Since the output of $\mathtt{SAMPLE}$ is random and uncontrollable, we append the Laplacian regularization of Eq.~(\ref{equ:ori}) as a counterbalance: \begin{equation} \mathcal{L}_{ori}=\tr \left(f(X,A)^T \mathrm{L}(\tilde{A}) f(X,A)\right) \label{equ:ori} \end{equation} Therefore, CLAR is composed of the two regularization components, with hyper-parameters $\alpha$ and $\beta$ to adjust them: \begin{equation} \mathcal{L}_{CLAR}=\alpha \mathcal{L}_{ori}+ \beta \mathcal{L}_{com}, \end{equation} In general, GNNs enhanced with the high-pass filters supplied by CLAR can be trained with the objective function \begin{equation} \mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_{CLAR}, \end{equation} where $\mathcal{L}_{cls}$ is the classification loss of GNNs (e.g., Eq.~(\ref{equ:semi})).
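As a minimal sketch of how these two regularization terms could be computed in practice (ours, not the paper's reference implementation; \texttt{edge\_index} follows the usual two-row edge-list convention of common GNN libraries and is assumed to list both directions of every undirected edge, and symmetric normalization is one reasonable choice):
\begin{verbatim}
import torch

def lap_quadratic(h, edge_index):
    # tr(H^T L H) for the symmetrically normalized Laplacian of the
    # graph given by edge_index, via the equivalent edge sum
    # 0.5 * sum_{(u,v)} || h_u/sqrt(d_u) - h_v/sqrt(d_v) ||^2.
    row, col = edge_index
    deg = torch.zeros(h.size(0), dtype=h.dtype, device=h.device)
    deg.index_add_(0, row, torch.ones_like(row, dtype=h.dtype))
    hn = h / deg.clamp(min=1).sqrt().unsqueeze(-1)
    return ((hn[row] - hn[col]) ** 2).sum() / 2

def clar_loss(h, edge_index, comp_edge_index, alpha, beta):
    l_ori = lap_quadratic(h, edge_index)        # low-pass term, Eq. (5)
    l_com = lap_quadratic(h, comp_edge_index)   # high-pass term, Eq. (4)
    return alpha * l_ori + beta * l_com

# usage: h = gnn(X, A); loss = cls_loss + clar_loss(h, ei, comp_ei, a, b)
\end{verbatim}
Here \texttt{comp\_edge\_index} would be resampled from the complement at every training step, in line with the sampling strategies above.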
\section{Related Work} \label{sec:relate} Under the message-passing mechanism, GNNs aggregate the neighbors' information and update the central nodes (formulated in Eq.~(\ref{equ:gnn_layer})); examples include GCN~\cite{DBLP:conf/iclr/KipfW17}, GAT~\cite{DBLP:conf/iclr/VelickovicCCRLB18}, which uses the attention mechanism, and SAGE~\cite{DBLP:conf/nips/HamiltonYL17}, which samples the neighboring nodes. \textbf{Spectral explanations for GNNs.}\quad GNNs apply (graph) filters on the full-scale eigenvalues/frequency components obtained from the graph Laplacian eigendecomposition~\cite{DBLP:conf/icml/WuSZFYW19,DBLP:conf/iclr/BalcilarRHGAH21}. The frequency components split into low-frequency and high-frequency parts, as shown in Fig.~\ref{fig:freq}. Accordingly, filters defined on these frequency components can be low-pass or high-pass. For instance, in the right panel of Fig.~\ref{fig:freq}, the curve denoted $\mathtt{Chameleon-Cheby-GCN}$ indicates a high-pass filter, since it specifically lets high-frequency components pass. Taking a deeper view of the properties GNNs hold as filters, \cite{DBLP:conf/aaai/LiHW18} prove that most GNNs focus on the low-frequency components and act as low-pass filters; they quickly become more low-pass after stacking layers, leading to over-smoothing. Furthermore, the low-frequency components encourage the connected nodes to become similar, which is orthogonal to heterophilic graphs, where the connected nodes tend to be disparate~\cite{DBLP:conf/cikm/DongDJJL21,DBLP:conf/aaai/BoWSS21}. \textbf{Architecture-based solutions.}\quad Self-loops~\cite{DBLP:conf/iclr/XuHLJ19} and skip-connections~\cite{DBLP:conf/icml/XuLTSKJ18, DBLP:conf/icml/ChenWHDL20, DBLP:conf/iccv/Li0TG19} help to retain the initial node features and thereby alleviate over-smoothing in GNNs. Dealing with heterophily, CPGNN~\cite{DBLP:conf/aaai/ZhuR0MLAK21} sieves the compatible neighbors in every propagation via learning a compatibility matrix for each label. H2GCN~\cite{DBLP:conf/nips/ZhuYZHAK20} concatenates all layers to restore representations from the former aggregations. Another line of solutions analyzes the cause of these problems, i.e., the low-pass nature of GNNs, and addresses it directly. Such new architectures often incorporate high-pass filters to alleviate these issues. FAGCN~\cite{DBLP:conf/aaai/BoWSS21} arranges low-pass and high-pass aggregations in each layer. AdaGNN~\cite{DBLP:conf/cikm/DongDJJL21} applies several filters in each aggregation, and \cite{DBLP:conf/pkdd/LiKW21} further adapt the attention mechanism. \cite{chang2021spectral} leverages the attention mechanism to achieve high-pass filtering. Besides, polynomial-based graph filters, e.g., Cheby-GCN~\cite{DBLP:conf/nips/DefferrardBV16}, BernNet~\cite{DBLP:journals/corr/abs-2106-10994}, and GPRGNN~\cite{DBLP:conf/iclr/ChienP0M21}, have the potential to approximate arbitrary filters by utilizing high-order polynomials. \textbf{Regularization-based solutions.}\quad Some recent work develops regularizers for existing GNN models, a practical strategy as it avoids designing from scratch. P-reg~\cite{DBLP:conf/aaai/0002MC21} promotes the consistency of adjacent layers, pushing GNNs toward the infinite-layer limit. MADReg penalizes the smoothness of hidden representations, and AdaGraph enhances the graph topology via self-training~\cite{DBLP:conf/aaai/ChenLLLZS20}. DropEdge~\cite{DBLP:conf/iclr/RongHXH20} randomly drops a specific ratio of edges, a particular case of AdaGraph. These regularizers align with the semi-supervised paradigm (Eq.~(\ref{equ:semi_loss})); nevertheless, they inherit the low-frequency components from GNNs, and thus heterophily remains unsolved. \textbf{Our proposal versus related work.}\quad Our proposal, CLAR, is the first regularization method derived from the spectral domain to enhance high-frequency components for GNNs. Unlike architecture-based approaches, we incorporate high-pass filters in a plug-in manner, avoiding the work of implementing a new architecture and thus maintaining the properties of the backbone GNN. Compared with existing regularization methods, ours is the first to put the high-pass property into practice. \section{Introduction} Graph neural networks (GNNs) are of high practical value in today's research and have advanced progress in various domains~\cite{2021GNNs}, e.g., forecasting publication classes on citation networks and predicting protein functions based on protein-protein interaction networks. In essence, from a spectral perspective, GNNs learn a function (i.e., a filter) that takes as arguments the eigenvalues (i.e., frequency components) obtained from the eigendecomposition of the graph Laplacian matrix \citep{DBLP:conf/icml/WuSZFYW19,DBLP:conf/iclr/BalcilarRHGAH21}. To avoid the eigendecomposition, early work approximated spectral filters with polynomials (e.g., Cheby-GCN~\citep{DBLP:conf/nips/DefferrardBV16}). \citet{DBLP:conf/iclr/KipfW17} simplify the first-order polynomial term and propose GCN, where the connected neighbors are aggregated to update the central nodes. Because it endows spectral filters with a spatial meaning, this paradigm formalizes modern graph neural networks (GNNs).
\begin{figure}[t] \centering \includegraphics[trim=2.5cm 0cm 1cm 0cm,clip=true,width=0.5\textwidth]{figures/frequency.pdf} \caption{We investigate the frequency components on Chameleon and CiteSeer. On the left, we retain 1) the frequency components from $0$ up to the current $\lambda$ (dashed lines), and 2) windows of $50$ successive sorted frequency components (solid lines), to train and evaluate an MLP. The high-frequency part (the red area) shows an importance equal to that of the low-frequency part, especially on Chameleon (heterophily). On the right, we train Cheby-GCN and BernNet with order $10$; the high-frequency responses are also positively captured, particularly on the heterophilic Chameleon.} \label{fig:freq} \end{figure} However, recent studies prove that GNNs act as a type of Laplacian smoothing, i.e., a low-pass filter~\citep{DBLP:conf/aaai/LiHW18}. When reaching out for higher-order neighbors via stacking layers, GNNs become more strictly low-pass, and the connected nodes converge to similar values, a phenomenon called over-smoothing~\citep{DBLP:conf/icml/ChenWHDL20,DBLP:conf/icml/WuSZFYW19}. This dissolves the characteristics of individual nodes and leads to poor performance. On the other hand, the low-pass behavior is suitable for homophilic graphs, where neighbors resemble each other~\citep{kipf_deep_2020}, but defective for heterophilic graphs, where neighbors are disparate~\citep{DBLP:conf/aaai/BoWSS21}. These facts and drawbacks motivate us to ask: \textit{are only the low-frequency components required?} We investigate the frequency components for homophilic and heterophilic graphs. In Figure~\ref{fig:freq}, the high-frequency components (the red area) show a classification capability comparable to that of the low-frequency components (left panel), especially on Chameleon, a heterophilic graph. Apart from this, Cheby-GCN and the adaptive spectral filter BernNet~\citep{DBLP:journals/corr/abs-2106-10994} also capture the high-frequency components, and Chameleon again returns a more significant high-frequency response. These observations suggest that high-frequency components are also practical. We then attempt to figure out: \textit{how can we enhance high-frequency components for GNNs?} In this paper, we start from the spectral view to argue that the complement of the original graph incorporates a high-pass filter, and we propose Complement LAplacian Regularization (CLAR) to enhance high-frequency components for GNNs. In CLAR, we adopt 1) random sampling strategies to better capture the frequency components on the complement and 2) the original Laplacian regularization to balance the noisy connections from the sampling. Remarkably, among regularization methods, our proposal is the only one that enhances high-frequency components. Our contributions are as follows: \begin{itemize} \item We investigate that the high-frequency components are also effective for graph learning and argue that the complement of the original graph incorporates a high-pass filter of the frequency components. \item Based on our argument, we propose a model-agnostic plugin (CLAR) that enhances the high-frequency components for GNNs via regularizing the complement. \item Experiments on homophilic and heterophilic graphs show CLAR can alleviate over-smoothing, better depict heterophilic graphs, and improve robustness against topological noise.
\end{itemize} \section{Our Proposal: Complementary LAplacian Regularization} \label{sec:prop} \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \newtheorem{claim}{Claim} \newtheorem{theorem}{Theorem} We propose CLAR to replenish the high-frequency components for GNNs via a complementary Laplacian regularization. In the following, we first introduce our main theory in Theorem~\ref{theorem}, showing that a complementary Laplacian regularization incorporates a high-pass filter. Next, we present the implementation details of CLAR, including the sampling strategies and the construction of the regularization. \subsection{Main theorem} Suppose $W^*$ collects the optimal feature transformation parameters and initialize $Z^{(0)}=XW^*$ for aggregation. \cite{DBLP:conf/www/ZhuWSJ021} propose: \begin{lemma} \label{lemma2} Given an adjacency matrix $A$, a GNN model can be used to approximate a regularization objective that is defined on the corresponding Laplacian matrix $\mathnormal{L}(A)$. \end{lemma} Besides,~\cite{DBLP:conf/iclr/BalcilarRHGAH21} link the aggregation functions to the frequency response of spectral filters. \begin{lemma} \label{lemma1} Given an adjacency matrix $A$, the frequency response $\Phi_{A}$ of a GNN defined on $A$ is negatively related to the frequency components of the Laplacian matrix of $A$, i.e., $\Phi_{A}(\lambda) \approx 1-\lambda$ is a low-pass filter. \end{lemma} So far, we have bridged the GNN regularization objective to the following: i) the GNN aggregation function, and ii) the frequency response. We take GCN as an example to instantiate this relationship. Using Lemma~\ref{lemma2}, the regularization objective for GCN is \begin{equation} \mathcal{L}_{GCN}=\min_{Z} \tr(Z^T\tilde{\mathrm{L}}(A)Z). \label{equ:gcn_optim} \end{equation} With this formulation, we can view the mean aggregation of GCN (using $\tilde{A}$) as a regularization objective in terms of the normalized Laplacian matrix $\tilde{\mathrm{L}}(A)$. Next, suppose the frequency components of $\tilde{\mathnormal{L}}(A)$ are $\lambda$, where we remove the subscript to denote the variable of frequency components. From Lemma~\ref{lemma1}, GCN acts as a low-pass filter with $\Phi_{A}(\lambda) \approx 1-\lambda$, aligning with the results of \cite{DBLP:conf/www/ZhuWSJ021,DBLP:conf/aaai/LiHW18}. Along this line of thought, \textit{if a GNN regularization negates $\mathcal{L}_{GCN}$, it incorporates a high-pass filter}, and we propose: \begin{theorem} \label{theorem} Suppose a complement graph $\bar{\mathcal{G}}=(\mathcal{V},\bar{\mathcal{E}})$, and let $A_s$ be the adjacency matrix constructed from an arbitrary $\mathcal{E}_s \subset \bar{\mathcal{E}}$. The Laplacian regularization built on $A_s$, i.e., $\mathcal{L}_{s} = \min_{Z} \tr \left( Z^T \tilde{\mathrm{L}}(A_s) Z \right)$, produces a high-pass filtering effect on the original graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$'s signal $Z$. \end{theorem} \begin{proof} Consider the edge set $\mathcal{E}^*$ composed of the arbitrary subset of the complement edges and the original edge set, i.e., $\mathcal{E}^* = \mathcal{E}_s \cup \mathcal{E}$. Then we can rewrite $A_s = - A + A^* $, where $A^*$ is the adjacency matrix of $\mathcal{E}^*$ and $A_s$ is that of $\mathcal{E}_s$. The objective of the Laplacian regularization with respect to $A_s$ is formulated as follows: \begin{align} \mathcal{L}_{s} &= \min_Z \tr\left(Z^T\tilde{\mathrm{L}}\left(-A + A^*\right)Z\right) \\ &= \min_Z -\tr\left(Z^T\tilde{\mathrm{L}}(A)Z\right) + \tr\left(Z^T\tilde{\mathrm{L}}\left(A^*\right)Z\right). \label{equ:prop} \end{align}
Using Lemma~\ref{lemma2} and Lemma~\ref{lemma1}, $\tr(Z^T\tilde{\mathrm{L}}(A)Z)$ is a low-pass filter with $\Phi_A(\lambda) \approx 1-\lambda$. The former half of Eq.~(\ref{equ:prop}) negates this low-pass filter, and thus constructs a high-pass filter. \end{proof} Any given subset of complementary edges equals the result of subtracting the original edge set from the union of that subset and the original edge set, making the Laplacian regularization built on it consist of a high-pass filter on the original graph and a low-pass filter on the union graph $A^*$. Note that this theorem demands no restriction on the subset, making its realization straightforward on any graph. We want to emphasize that Eq.~(\ref{equ:prop}) may collapse into the unexpected low-pass term $\tr\left(Z^T\tilde{\mathrm{L}}\left(A^*\right)Z\right)$ when the subset is fixed during training. Instead, one can apply random sampling at each training epoch to avoid this; we therefore develop the following sampling methods on $\bar{\mathcal{E}}$. \subsection{Sampling from the Complement Graph} Constructing a high-pass filter from a complement graph is not trivial. According to Theorem~\ref{theorem}, it is unnecessary, and impractical, to include the whole complement graph. Instead, we only need the high-pass character from some subset of the complement. Therefore, we propose two efficient sampling strategies for the complement graph, from the node level and the edge level. \textit{Node-based sampling strategy} traverses the node set. Given a fixed sampling multiple $S$, for each node $v_i$ in $\mathcal{V}$, we sample $S$ nodes from $\mathcal{V}$, if and only if they are not connected to $v_i$ in the graph. Then, we obtain a sampled complement with $|\mathcal{V}|$ nodes and a maximum of $S\times|\mathcal{V}|$ edges. \textit{Edge-based sampling strategy} further considers the influence of node degrees. From a local perspective, suppose a node connects to more neighbors than other nodes. We argue it could be more influenced by the original low-pass filter, and more edges from the complement graph should be helpful to make the high-pass filter work. This strategy iterates over the endpoints of every edge, in which the node degrees are embedded. Given a fixed sampling multiple $S$, for each edge $(u,v)$, we sample $S$ nodes from $\mathcal{V}$ for each of $u$ and $v$, if and only if the respective endpoint is not connected to them. By this means, we sample the complement with $|\mathcal{V}|$ nodes (since the graph is connected) and $2S\times|\mathcal{E}|$ edges. In the following, we denote the sampled graph obtained from either method as $\mathcal{G}_{s}=(\mathcal{V}_{s},\mathcal{E}_{s})$. Please refer to the following Algorithm~\ref{alg:sample} for the sampling details. \begin{algorithm}[h] \caption{Sampling Strategy $\mathtt{SAMPLE}$} \label{alg:sample} \textbf{Input}: the raw graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ and sampling hyper-parameter $S$. \begin{algorithmic}[1] \STATE Initialize $\mathcal{E}_{s} \leftarrow \emptyset$.
\IF{sampling based on nodes} \FOR{each node $u$ in $\mathcal{V}$} \STATE sample $S$ edges $\mathcal{E}_{u}=\{(u,v_i)\}$ from $\bar{\mathcal{E}}$ \STATE $\mathcal{E}_{s} \leftarrow \mathcal{E}_{s} \cup \mathcal{E}_{u}$ \ENDFOR \ENDIF \IF{sampling based on edges} \FOR{each edge $(u,v)$ in $\mathcal{E}$} \STATE sample $S$ edges $\mathcal{E}_{u}=\{(u,v_i)\}$ from $\bar{\mathcal{E}}$ \STATE sample $S$ edges $\mathcal{E}_{v}=\{(u_i,v)\}$ from $\bar{\mathcal{E}}$ \STATE $\mathcal{E}_{s} \leftarrow \mathcal{E}_{s} \cup \mathcal{E}_{u} \cup \mathcal{E}_{v}$ \ENDFOR \ENDIF \RETURN Sampled graph $\mathcal{G}_{s}=(\mathcal{V},\mathcal{E}_{s})$ \end{algorithmic} \end{algorithm} \subsection{Constructing Regularization on the Sampled Complement} \label{sec:const_clar} Based on the sampled complement graph, we propose Complement LAplacian Regularization~(CLAR) as a plugin layer to enhance GNNs. CLAR is composed of two regularization components, covering both low-pass and high-pass functions. Formally, we use $\mathcal{L}_{CLAR}$ to represent the CLAR regularization; then, \begin{equation} \mathcal{L}_{CLAR} = \beta \mathcal{L}_{com} + \alpha \mathcal{L}_{ori}, \label{equ:clar} \end{equation} where $\mathcal{L}_{com}$ and $\mathcal{L}_{ori}$ are the high-pass and low-pass regularizers, respectively, and $\alpha$ and $\beta$ are hyper-parameters. The high-pass regularizer $\mathcal{L}_{com}$ is a Laplacian regularization built on the extracted complement $\mathcal{G}_{s}$, \begin{equation} \mathcal{L}_{com}=\tr \left(f(X,A)^T\mathrm{L}(\tilde{A_s})f(X,A)\right), \label{equ:com} \end{equation} where $\tilde{A_s}$ is the normalized adjacency matrix reduced from $\mathcal{G}_{s}$. In Theorem~\ref{theorem}, we show that constructing $A_s$ with randomness can prevent $\mathcal{L}_{com}$ from converging to the unexpected low-pass filter from $A^*=A+A_s$. Our proposed sampling strategy $\mathtt{SAMPLE}$ aligns well with the needs of this theorem. However, the frequency components of $\tilde{\mathrm{L}}(A^*)$, $\lambda^*$, are incalculable at each training iteration. We therefore introduce a low-pass filter on $\tilde{\mathrm{L}}(A)$ to amend this. Specifically, we append a modified Laplacian regularization, where we replace the classifier in Eq.~(\ref{equ:reg}) with the GNN output $f(X,A)$: \begin{equation} \mathcal{L}_{ori}=\tr \left(f(X,A)^T \mathrm{L}(\tilde{A}) f(X,A)\right). \label{equ:ori} \end{equation} Here, $\mathcal{L}_{ori}$ (Eq.~(\ref{equ:ori})) relies on the hidden representations (i.e., $f(X,A)$), as opposed to operating on the fixed structure as in traditional GNNs. In general, incorporating the high-pass filter provided by CLAR into GNNs is achieved by the objective function \begin{equation} \mathcal{L} = \mathcal{L}_{GNN} + \mathcal{L}_{CLAR}, \end{equation} where $\mathcal{L}_{GNN}$ is the classification loss of GNNs, e.g., \begin{equation} \mathcal{L}_{GNN}=\mathtt{CrossEntropy}\left(f(X,A)_{train},Y_{train}\right). \label{equ:semi_loss} \end{equation} This formulation is in line with the semi-supervised node classification paradigm, i.e., Eq.~(\ref{equ:semi}). Our method is purely a plugin for GNNs and can be applied to any GNN backbone. The overall optimization process of deploying CLAR is listed in Algorithm~\ref{alg:clar}.
\begin{algorithm}[h] \caption{Optimization process of GNNs with CLAR} \label{alg:clar} \textbf{Input}: the raw graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ with adjacency matrix $A$ and node features $X$, GNN function $f_{\Theta}$ with trainable parameters $\Theta$, sampling strategy $\mathtt{SAMPLE}$, sampling hyper-parameter $S$, adjustment hyper-parameters $\alpha, \beta$, learning step $\eta$, and the maximum training epoch $E$. \begin{algorithmic}[1] \STATE Initialize the parameters of the GNN function: $\Theta \leftarrow \Theta_0$ \FOR{each epoch $e$ in $0,1,\cdots E$} \STATE get the hidden representations: $H \leftarrow f_{\Theta_e}(X,A)$ \STATE compute the GNN loss function $\mathcal{L}_{GNN}\leftarrow\mathtt{CrossEntropy}\left(H_{train},Y_{train}\right)$ \STATE get the sampled complement: $\mathcal{G}_{s} \leftarrow \mathtt{SAMPLE}(\mathcal{G},S)$ \STATE calculate the two regularization terms: $\mathcal{L}_{com} \leftarrow \tr(H^T\tilde{\mathrm{L}}(A_s)H)$ and $\mathcal{L}_{ori}\leftarrow \tr(H^T\tilde{\mathrm{L}}(A)H)$ \STATE $\mathcal{L}_{CLAR} \leftarrow \beta \mathcal{L}_{com} + \alpha \mathcal{L}_{ori}$, $\mathcal{L}\leftarrow\mathcal{L}_{CLAR}+\mathcal{L}_{GNN}$ \STATE calculate the gradient $\frac{\partial \mathcal{L}}{\partial \Theta_{e}}$ \STATE update $\Theta_{e+1}\leftarrow\Theta_{e} - \eta \frac{\partial \mathcal{L}}{\partial \Theta_{e}}$ \IF{the stopping criterion is satisfied} \STATE get the optimized parameters $\Theta^* \leftarrow \Theta_{e+1}$ \ENDIF \ENDFOR \RETURN the optimized GNN function $f_{\Theta^*}$ \end{algorithmic} \end{algorithm} \textbf{Remark.}\quad The hyper-parameters $\alpha$ and $\beta$ can shape the filter. For example, increasing $\alpha$ enhances the low-frequency components, behaving the same as the Laplacian regularization (i.e., Eq.~(\ref{equ:reg})). On the contrary, increasing $\beta$ enhances the high-frequency components. Additionally, since the sampling multiple $S$ is not directly related to $\{\lambda^*\}$, CLAR is not sensitive to it, which is consistent with our experiments in Table~\ref{tab:spearman}. \input{tab/tab_data} \subsection{Spectral Analysis for Other Regularization Methods} We further provide a spectral analysis of CLAR and other regularization methods to show that only our proposal obtains a high-pass filter. We omit the hyper-parameters in front of the regularization terms, e.g., $\gamma$ in Eq.~(\ref{equ:semi}). \textit{Network Lasso~(NL)}, i.e., Eq.~(\ref{gl_optim})~\cite{nlasso}, is identical to the graph Laplacian regularization of Eq.~(\ref{equ:reg})~\cite{kipf_deep_2020}, a low-pass filter covered in GCN (Eq.~(\ref{equ:gcn_optim})) and CLAR (Eq.~(\ref{equ:ori})). Therefore, its improvement remains trivial. \begin{equation} \mathcal{L}_{NL}=\min_{Z} \tr(Z^T\tilde{\hat{L}}Z). \label{gl_optim} \end{equation} \textit{P-reg~(P)}, i.e., Eq.~(\ref{p_optim}), penalizes the difference between $Z$ and the GCN-smoothed representations $\tilde{\hat{A}}Z$, which is equivalent to a squared Laplacian regularization using the squared error~\cite{DBLP:conf/aaai/0002MC21}. \begin{equation} \mathcal{L}_{P}=\min_{Z} \tr(Z^T\tilde{\hat{L}}^T\tilde{\hat{L}}Z). \label{p_optim} \end{equation} The convolution matrix of P-reg corresponds to $\tilde{\hat{L}}^2$. According to Lemma~\ref{lemma1}, its frequency response is $\Phi_{P-reg}(\lambda)\approx1-\lambda^2$, as the frequency component of $\tilde{\hat{L}}^2$ is $\lambda^2$. Therefore, P-reg is still a low-pass filter, with more low-frequency components than NL. \textit{MADReg~(MR)}, i.e.,
Eq.~(\ref{mad_optim}), proposes a penalty on the difference between higher-order neighbors and lower-order neighbors to alleviate over-smoothing. Specifically, MADReg obtains the $k$-order neighbors by stacking $k$ layers (e.g., GCN layers). Practically, MADReg takes $k\geq8$ for the higher-order and $k\leq3$ for the lower-order neighbors. \begin{equation} \mathcal{L}_{MR}=\min_{Z} \tr(Z^T((Q-\tilde{\mathrm{L}}(\tilde{A^7}))-\tilde{\mathrm{L}}(\tilde{A^3}))Z) \label{mad_optim} \end{equation} $Q$ is a matrix of ones, and $(Q-\tilde{\mathrm{L}}(\tilde{A^7}))$ signifies the higher-order neighbors with $k\geq8$. We approximate the Laplacian of the powered adjacency matrix by the power of the Laplacian matrix, $\tilde{\mathrm{L}}(\tilde{A^k})\approx\tilde{\mathrm{L}}(\tilde{A})^k$. Then we have \begin{align*} \mathcal{L}_{MR}&\approx\min_{Z} \tr(Z^TQZ) -\tr(Z^T\tilde{\mathrm{L}}(\tilde{A})^7Z)\\ &-\tr(Z^T\tilde{\mathrm{L}}(\tilde{A})^3Z). \end{align*} Setting aside the first term, we derive the frequency response of MADReg, $-\Phi_{MR}(\lambda)\approx(1-\lambda)^3+(1-\lambda)^7$, which is a negation of two more extreme low-pass filters; this leaves the high-frequency components indistinctive. \textit{AdaGraph~(AG)}, i.e., Eq.~(\ref{ag_optim}), modifies Network Lasso (Eq.~(\ref{gl_optim})) by adding edges to and dropping edges from $A$. \textit{DropEdge} is a special case of AG with $A_{add}=[0]_{N\times N}$. \begin{equation} \mathcal{L}_{AG}=\min_{Z} \tr(Z^T\tilde{\mathrm{L}}(A-A_{drop}+A_{add})Z) \label{ag_optim} \end{equation} Practically, $\ell_1$-norm regularization is imposed on $A_{add}$ and $A_{drop}$, making them sparse~\cite{DBLP:conf/aaai/ChenLLLZS20}. Consequently, the low-pass filter of $\tilde{\mathrm{L}}(A)$ dominates the optimization problem. In total, CLAR is unique among these regularization methods in incorporating a high-pass filter. \textbf{Remark.}\quad We want to emphasize that adding CLAR to GNNs does not equal adding a regularization on the complete graph. Direct usage of the complete graph may fail to capture the desired high-pass information because it is often too dense. Therefore, a sampling process is required. Towards this, we randomly sample complement edges in each training epoch, resulting in high-pass filtering according to Theorem~\ref{theorem}. Besides, such random sampling ensures that the overall optimization does not depend on any particular edge set. \section{Mathematical Justifications} \label{sec:math} In this section, we present the spectral examination of CLAR and other regularization methods. We argue that our proposal uniquely obtains high-pass filters. \subsection{Spectral Examination of CLAR} As a special note, we separate the optimization problem of the feature transformation (e.g., $\mathcal{L}_{sup}$ in Eq.~(\ref{equ:semi})) from that of the propagation (e.g., $\mathcal{L}_{reg}$ in Eq.~(\ref{equ:semi})). The following discussion is in the context of the propagation objective. On this basis, GNNs are bridged to their spectral explanations. Let us first look at the propagation of GCN~\citep{DBLP:conf/iclr/KipfW17} as an example of GNNs, where the re-normalization trick with self-loops is utilized. Suppose that $Z^{(l+1)}=\tilde{\hat{A}}Z^{(l)}$, where $Z^{(0)}=X\Theta^*$ and $\Theta^*$ are the optimal parameters of the feature transformation (e.g., fully-connected layers). GCN optimizes the following regularization objective~\citep{DBLP:conf/www/ZhuWSJ021}, extended to Lemma~\ref{lemma2}.
\begin{equation} \mathcal{L}_{GCN}=\min_{Z} \tr(Z^T\tilde{\hat{L}}Z), \label{equ:gcn_optim} \end{equation} where $\tr(\cdot)$ calculates the trace of a matrix. \newtheorem{lemma}{Lemma} \newtheorem{remark}{Remark} \newtheorem{claim}{Claim} \newtheorem{prop}{Proposition} \begin{lemma} \label{lemma2} The explicit optimization problem defined on a Laplacian matrix (e.g., Eq.~\ref{equ:gcn_optim}) can be approximated by a layer-wise propagation network built on the corresponding adjacency matrix (e.g., GCN), and vice versa. \end{lemma} In Lemma~\ref{lemma2}, the ``corresponding adjacency matrix'' of the propagation denotes the convolution matrix~\citep{DBLP:conf/iclr/BalcilarRHGAH21}. Besides, we introduce the frequency response, a continuous function defined on the graph spectrum, e.g., $\Phi(\lambda)$. The propagation layer and the frequency response can then be bridged following \cite{DBLP:conf/iclr/BalcilarRHGAH21}. We again take GCN as the example, whose convolution matrix is $\tilde{\hat{A}}$. The spectral components (the spectrum) of $\tilde{\hat{L}}$ are $\lambda=\{\lambda_i,i=1,2,\cdots,n\}$, and the frequency response of GCN is $\Phi_{GCN}(\lambda_i) \approx 1-\lambda_i$, leading to Lemma~\ref{lemma1}. \begin{lemma} \label{lemma1} For any convolution matrix directly defined on an adjacency matrix $A$, its frequency response $\Phi_{A}$ is negatively and linearly correlated with the spectrum of the Laplacian matrix of $A$. Specifically, $\Phi_{A}(\lambda_i) \approx 1-\lambda_i$ is a low-pass filter. \end{lemma} \begin{prop} \label{prop} A Laplacian regularization constructed on the complement incorporates a high-pass filter. \end{prop} \textbf{Proof.} Consider a larger edge set $\mathcal{E}^*$ with $\mathcal{E} \subset \mathcal{E}^*$. A sampled complement $\mathcal{E}_s \subset \mathcal{V}\times \mathcal{V}\setminus\mathcal{E}$ can then, without loss of generality, be denoted by $\mathcal{E}^* \setminus \mathcal{E}$. The corresponding adjacency matrix is $A_s=A^*-A$, and we obtain the Laplacian regularization objective (the same as Eq.~\ref{equ:com}): \begin{align} \mathcal{L}_{com} &= \min_Z \gamma \tr\left(Z^T\tilde{\mathrm{L}}\left(A^*-A\right)Z\right) \\ &= \min_Z \gamma \left(\tr\left(Z^T\tilde{\mathrm{L}}\left(A^*\right)Z\right)-\tr\left(Z^T\tilde{\hat{L}}Z\right)\right). \label{equ:prop} \end{align} Omitting the hyper-parameter $\gamma$, the term $\tr(Z^T\tilde{\hat{L}}Z)$ is equivalent, by Lemma~\ref{lemma2}, to a GNN with the convolution matrix $\tilde{\hat{A}}$. By Lemma~\ref{lemma1}, this convolution has the approximate frequency response $\left(1-\lambda\right)$, a low-pass filter on $\tilde{\mathrm{L}}(A)$. However, Eq.~\ref{equ:prop} contains a negation of $\tr(Z^T\tilde{\hat{L}}Z)$, so that $\Phi_{A}(\lambda)\approx-\left(1-\lambda\right)$: negating the low-pass filter yields a high-pass one. Our main claim, that CLAR obtains a \textit{high-pass} filter, is thus evidenced by Proposition~\ref{prop}. In CLAR, however, the spectrum $\{\lambda^*\}$ of $\tilde{\mathrm{L}}(A^*)$ is not fixed, due to the random sampling; the remaining term $\Phi_{A^*}(\lambda^*)\approx1-\lambda^*$ is a low-pass filter on $\tilde{\mathrm{L}}(A^*)$. To balance this uncertainty, we deploy an additional low-pass filter on $\tilde{\mathrm{L}}(A)$ (i.e., Eq.~\ref{equ:ori}).
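As a small self-contained check of Proposition~\ref{prop} (our own illustration, not part of the paper's experiments), one may verify numerically that the complement Laplacian charges precisely the low frequencies of the original graph; on a $d$-regular graph one even has $u_k^T\mathrm{L}(A_c)u_k = n-\lambda_k$ for every non-constant eigenvector $u_k$ of $\mathrm{L}(A)$ with frequency $\lambda_k$:
\begin{verbatim}
# Numerical illustration on a ring graph: the complement Laplacian
# penalises the LOW frequencies of the original graph most, so its
# negation (Eq. (equ:prop)) amounts to a high-pass preference.
import numpy as np

n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A_c = np.ones((n, n)) - np.eye(n) - A          # complement graph

L   = np.diag(A.sum(1))   - A                  # (unnormalized) Laplacians
L_c = np.diag(A_c.sum(1)) - A_c

lam, U = np.linalg.eigh(L)
penalty = np.einsum('ik,ij,jk->k', U, L_c, U)  # u_k^T L_c u_k for all k
print(np.round(lam, 3))
print(np.round(penalty, 3))                    # 0, then n - lambda_k
\end{verbatim}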
\subsection{Comparisons to Other Regularization Methods} To certify that only CLAR obtains a high-pass filter, which makes it superior, we discuss five other regularization methods: Network Lasso (Laplacian regularization)~\cite{nlasso}, P-reg~\citep{DBLP:conf/aaai/0002MC21}, MADReg~\citep{DBLP:conf/aaai/ChenLLLZS20}, AdaGraph~\citep{DBLP:conf/aaai/ChenLLLZS20} and DropEdge~\citep{DBLP:conf/iclr/RongHXH20}. For simplicity, we omit the hyper-parameter in front of every regularization term (e.g., $\gamma$ in Eq.~\ref{equ:semi}). \subsubsection{Network Lasso~(NL)} Network Lasso (Eq.~\ref{gl_optim})~\citep{nlasso} is identical to the Laplacian regularization in Eq.~\ref{equ:semi}~\citep{kipf_deep_2020}, a low-pass filter that is already covered by GCN (Eq.~\ref{equ:gcn_optim}) and CLAR (Eq.~\ref{equ:ori}). Therefore, its improvement remains marginal. \begin{equation} \mathcal{L}_{NL}=\min_{Z} \tr(Z^T\tilde{\hat{L}}Z). \label{gl_optim} \end{equation} \subsubsection{P-reg~(P)} P-reg (Eq.~\ref{p_optim}) penalizes the difference between $Z$ and the GCN-smoothed representations $\tilde{\hat{A}}Z$, which is equivalent to squared Laplacian regularization under the squared error~\citep{DBLP:conf/aaai/0002MC21}. \begin{equation} \mathcal{L}_{P}=\min_{Z} \tr(Z^T\tilde{\hat{L}}^T\tilde{\hat{L}}Z). \label{p_optim} \end{equation} The convolution matrix of P-reg is the one corresponding to the Laplacian $\tilde{\hat{L}}^2$. According to Lemma~\ref{lemma1}, its frequency response is $\Phi_{P-reg}(\lambda)\approx1-\lambda^2$, as the spectral component of $\tilde{\hat{L}}^2$ is $\lambda^2$. Therefore, P-reg is still a low-pass filter, retaining even more low-frequency components than NL. \subsubsection{MADReg~(MR)} MADReg (Eq.~\ref{mad_optim}) penalizes the difference between higher-order and lower-order neighbors to alleviate over-smoothing. Specifically, the $k$-order neighbors are obtained by stacking $k$ layers (e.g., GCN layers). In practice, MADReg takes $k\geq 8$ for the higher-order and $k\leq 3$ for the lower-order neighbors. \begin{equation} \mathcal{L}_{MR}=\min_{Z} \tr(Z^T((Q-\tilde{\mathrm{L}}(\tilde{A^7}))-\tilde{\mathrm{L}}(\tilde{A^3}))Z) \label{mad_optim} \end{equation} Here $Q$ is a matrix of ones, and $(Q-\tilde{\mathrm{L}}(\tilde{A^7}))$ signifies the higher-order neighbors with $k\geq8$. Approximating the Laplacian of the powered adjacency matrix by the power of the Laplacian matrix, $\tilde{\mathrm{L}}(\tilde{A^k})\approx\tilde{\mathrm{L}}(\tilde{A})^k$, we have \begin{align*} \mathcal{L}_{MR}&\approx\min_{Z} \tr(Z^TQZ) -\tr(Z^T\tilde{\mathrm{L}}(\tilde{A})^7Z)\\ &-\tr(Z^T\tilde{\mathrm{L}}(\tilde{A})^3Z). \end{align*} Leaving out the first term, which is independent of the spectrum, we derive the frequency response of MADReg as $-\Phi_{MR}(\lambda)\approx(1-\lambda)^3+(1-\lambda)^7$, the negation of two even more extreme low-pass filters; this does not make the high-frequency components distinctive. \subsubsection{AdaGraph~(AG)} AG (Eq.~\ref{ag_optim}) modifies Network Lasso (Eq.~\ref{gl_optim}) by adding edges to and dropping edges from $A$.
\textbf{DropEdge~(DE)} is a special case of AG with $A_{add}=[0]_{N\times N}$. \begin{equation} \mathcal{R}_{AG}=\min_{Z} \tr(Z^T\tilde{\mathrm{L}}(A-A_{drop}+A_{add})Z) \label{ag_optim} \end{equation} In practice, $\ell_1$-norm regularization is imposed on $A_{add}$ and $A_{drop}$, making them sparse~\citep{DBLP:conf/aaai/ChenLLLZS20}. Consequently, the low-pass filter of $\tilde{\mathrm{L}}(A)$ dominates the optimization problem. In total, CLAR is unparalleled among these regularization methods in incorporating a high-pass filter. \input{tab_filter} \section{Introduction} Recent years have witnessed the prosperity of graph neural networks (GNNs) in various domains, as summarized in~\cite{2021GNNs}. Early GNN models often directly applied full-scale spectral filters built from the Laplacian eigenvectors, such as Cheby-GCN~\cite{DBLP:conf/nips/DefferrardBV16}. Although theoretically sound, these methods have prohibitive computational costs due to the eigendecomposition. To alleviate this issue, researchers proposed to approximate the filters, leading to the modern GCN~\cite{DBLP:conf/iclr/KipfW17}, which aggregates neighboring nodes' messages to update the central nodes, the so-called message-passing (MP) mechanism. This mechanism is often framed as a regularization that integrates the graph information~\cite{DBLP:conf/iclr/KipfW17}. Despite its success, such a regularization inherits an issue from the message-passing mechanism: it only focuses on the low-frequency information (i.e., the low-valued eigenvalues of the graph Laplacian matrix; related definitions are provided in the next section) in graphs~\cite{DBLP:conf/aaai/LiHW18}. This often leads to severely degraded performance when stacking multiple layers, i.e., the over-smoothing phenomenon~\cite{DBLP:conf/icml/ChenWHDL20,DBLP:conf/icml/WuSZFYW19}. Another vital bottleneck in applying existing GNNs is that graphs with heterophily are orthogonal to the underlying assumption of MP: directly connected nodes in such graphs are not necessarily similar~\cite{DBLP:conf/aaai/BoWSS21}. In the context of regularization, a line of research attempts to address these limitations, e.g., by promoting consistency between layers~\cite{DBLP:conf/aaai/0002MC21} or by randomly dropping edges~\cite{DBLP:conf/iclr/RongHXH20}. However, these solutions build on low-frequency filtering from the outset; their expressive ability is therefore constrained by the above issues. \begin{figure}[t] \centering \includegraphics[trim=2.5cm 0cm 1cm 0cm,clip=true,width=0.48\textwidth]{figures/frequency.pdf} \caption{We illustrate the importance of high-frequency information in graphs by experimenting with the Chameleon (heterophilic) and CiteSeer (homophilic) datasets. On the left, we show the node classification accuracy under different levels of frequencies. The performance from the high-frequency part alone is on par with that of the low-frequency part, suggesting discriminative power in the high frequencies. On the right, GNNs with wide-band filters receive non-negligible responses from high frequencies, which are often discarded in existing GNNs.} \label{fig:freq} \end{figure} \textit{Could we instead expand the range of frequencies in this setting?} Research shows that graph data often include a wide range of frequencies~\cite{ortega2018graph}, and high-frequency components have advantages in certain tasks~\cite{fagcn2021}. In particular, the aforementioned issues may not be as much of a concern at high frequencies.
Consider, as an example, the high-frequency and low-frequency components of two different types of graphs, homophilic and heterophilic, as shown in Fig.~\ref{fig:freq}. The classification performance varies from the low-frequency to the high-frequency zones, suggesting the importance of high frequencies for this task. Further, we observe comparable frequency responses at the high-frequency end when using wide-band frequency-sensitive filters, such as GPRGNN~\cite{DBLP:conf/iclr/ChienP0M21} and Cheby-GCN~\cite{DBLP:conf/nips/DefferrardBV16}. This further strengthens the role of the high-frequency components, which should be considered in the message-passing procedure. In recent years, there have also been efforts to explore high frequencies in GNNs, e.g.,~\cite{DBLP:conf/aaai/BoWSS21,DBLP:conf/cikm/DongDJJL21}; these approaches are comprehensively reviewed in the next section. \textit{However, they all implement new GNN architectures and fail to adapt to existing models.} This is undesirable in many mature production scenarios, where the established properties of existing models cannot be abandoned. Our goal in this work is to efficiently exploit the high-frequency components to boost the expressive ability of GNNs while maintaining the merits of traditional GNNs. We develop Complement LAplacian Regularization (CLAR), which introduces the high frequencies as a plugin added to existing GNNs. More specifically, CLAR adopts 1) random sampling strategies to better capture the high-frequency components from the complement and 2) the original Laplacian regularization to balance the noisy connections from the sampling. Our contributions are as follows: \begin{itemize} \item \noindent{\bf Problem Formulation:} We show the necessity of applying high-frequency components and define the problem of integrating high frequencies in graph learning. \item \noindent{\bf Effective Algorithm:} We develop CLAR, a model-agnostic plugin that enhances the high-frequency components of GNNs, with theoretical justification. \item \noindent{\bf Effectiveness:} Extensive experiments show that our solution strengthens GNNs against over-smoothing, enhances their expressivity on heterophilic graphs, and boosts their robustness against topological noise. It also outperforms other regularization methods by using the high-frequency information. \end{itemize} \section{Experiments} \label{sec:exp} In this section, we conduct experiments on homophilic and heterophilic graphs.\footnote{Our implementation is available at \url{https://drive.google.com/drive/folders/1RAU0hiI66-8o6bk8OGuIwKy0iTPj7G9-?usp=sharing}.} Table~\ref{tab:graph_stat} shows the statistics of the two types of graphs. The experiments aim to answer the following research questions: \begin{itemize} \item[Q1.] Does CLAR better express high-frequency components? \item[Q2.] Does CLAR help to express graphs with heterophily? \item[Q3.] Is CLAR capable of alleviating over-smoothing? \item[Q4.] How does CLAR affect topological robustness? \end{itemize} \textit{Homophilic graphs.} (1) Citation networks: Cora, CiteSeer, and PubMed~\cite{DBLP:conf/icml/YangCS16}; (2) Amazon networks: Computers and Photo~\cite{DBLP:journals/corr/abs-1811-05868}, networks of goods that are frequently bought together; and (3) Coauthor networks: Physics and CS~\cite{DBLP:conf/aaai/ChenLLLZS20}. \textit{Heterophilic graphs.} (1) Actor, the actor-only induced subgraph of a film-director-actor-writer network~\cite{DBLP:conf/iclr/PeiWCLY20}.
(2) Cornell and Texas from WebKB~\cite{DBLP:conf/iclr/PeiWCLY20}, networks of web pages and the hyperlinks between them. And (3) Squirrel and Chameleon from WikipediaNetwork~\cite{DBLP:journals/compnet/RozemberczkiAS21}, consisting of Wikipedia pages and their hyperlinks. \textbf{Remark on computational complexity.}\quad Among the GNN models, the training time complexity of GCN, GPRGNN, and Cheby-GCN is linear in the propagation step due to their iterative formulations, whereas BernNet is quadratic. In each propagation layer, $\mathcal{O}(|\mathcal{E}||\mathcal{V}|)$ is expected when utilizing sparse matrix multiplication, and $\mathcal{O}(|\mathcal{V}|^3)$ otherwise. CLAR takes $\mathcal{O}(d|\mathcal{V}|)$ time and no additional space, where $d$ is the hidden dimension. During inference, CLAR and the other regularization methods retain the same time and space complexity as the backbone GNNs. Therefore, we include the regularization methods as well as Cheby-GCN and GPRGNN with two layers in our experiments. For a comprehensive examination of CLAR, we further consider GAT and SAGE. \subsection{Fitting on Artificial Graph Signals} \label{exp:art} For Q1, we construct artificial low-pass, high-pass, band-pass, and band-reject filters on Chameleon and Squirrel and examine how well different approaches fit them. In detail, we eigendecompose the Laplacian matrix and apply the filters $h(\cdot)$ to the graph signals, i.e., $U\,\mathrm{diag}(h(\lambda))\,U^T X$ (a minimal sketch of this construction is given at the end of this subsection). Afterward, we take the original graph signals as input and fit the filtered signals using the mean squared error (MSE). The experiments adopt an early-stopping mechanism with a maximum of $1000$ epochs and a patience of $50$ epochs, and optimize using Adam with a fixed learning rate of $0.1$. The compared approaches are set as follows. \textit{Baselines (vanilla).} We adopt two-layer GCN, GAT, and SAGE as the baselines. The numbers of attention heads for GAT are 8 and 1 in the two layers. We accept all neighbors in SAGE to avoid the influence of sampling. Since the parameters in SAGE are doubled, separating the central nodes from the aggregated neighbors, SAGE can be seen as an enhanced GCN. We append CLAR to the final output of the baselines. The hidden dimension is 32 throughout. \textit{Ours (+~CLAR).} CLAR is applied to the same eigendecomposed signals as the baselines for a fair comparison. We resort to an empirical search, since $\alpha$ and $\beta$ are too expensive to tune exhaustively. In particular, we clamp the loss value of CLAR within $[0.0, 1.0]$ for numerical stability and find that $\alpha,\beta \in \{0,1,2\}$ suffices for CLAR to assist GNNs. Table~\ref{tab:filter} shows the remaining MSE after training (lower is better). All baselines with CLAR outperform their vanilla versions, as CLAR expresses more frequency components. Among the four filters, the high-pass and band-reject filters contain more high-frequency components than the others, and there the baselines benefit more from CLAR. Besides, we observe that GAT has the weakest spectral expressiveness and accordingly improves the most when supplied with additional frequency components. SAGE benefits slightly less than GCN, due to its doubled parameters and the separation of central and neighboring nodes.
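To make the construction above concrete, here is a minimal NumPy sketch (ours; the filter shapes are merely illustrative) of producing a filtered target signal:
\begin{verbatim}
# Build a filtered target U diag(h(lambda)) U^T X from the normalized
# Laplacian; a model is then trained to map X to this target under MSE.
import numpy as np

def filtered_target(L_norm, X, h):
    lam, U = np.linalg.eigh(L_norm)      # graph spectrum, in [0, 2]
    return U @ np.diag(h(lam)) @ U.T @ X

high_pass   = lambda lam: lam / 2.0          # example filter shapes
band_reject = lambda lam: np.abs(1.0 - lam)
\end{verbatim}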
\subsection{Expressing Real-World Graph Signals} \label{sec:exp_real} To answer Q2, we conduct semi-supervised node classification on real-world homophilic and heterophilic graphs. We randomly split all the datasets into a 60\% training set, a 20\% validation set, and a 20\% test set, and report the average accuracy~(micro F1 score) over $50$ runs. We now introduce the implementation of the baselines and of CLAR (+~CLAR), and present the other compared methods in the following. \input{tab/tab_node_classification} \textit{Spectral filters.} Cheby-GCN (Cheby)~\footnote{https://github.com/pyg-team/pytorch\_geometric.git} and GPRGNN (GPR)~\footnote{https://github.com/jianhao2016/GPRGNN/} are set to order $2$ for comparable computation. \textit{Regularization methods.} We consider all the regularization methods of Sec.~\ref{sec:relate} and append them to GCN. We implement DropEdge~(DE)~\footnote{https://github.com/pyg-team/pytorch\_geometric.git} with a $0.5$ dropout rate. P-reg~(P)~\footnote{https://github.com/yang-han/P-reg/} is set to $\lambda=0.1$ for the citation networks and $0.5$ for the rest. Network Lasso~(NL) is identical to CLAR with $\alpha=1, \beta=0$. We realize MADReg~(MR) according to the original paper, owing to the absence of source code, and apply a clamp to the regularization values for numerical stability, the same as for CLAR. Note that we report our experimental results for AdaGraph~(AG) using the settings from the original papers, due to the lack of a public implementation. Following these, Planetoid splits are used for the citation networks, where the training/validation/test sets contain 20 nodes per class, 500 nodes, and 1000 nodes, respectively. In the co-author networks, per class, 20 nodes are randomly sampled for training, 30 nodes for validation, and the rest for testing. We summarize the results in Table~\ref{tab:ag_acc}, where CLAR outperforms AG consistently. \begin{table}[h] \caption{Comparisons to AG on Node Classification (Accuracy \%)} \centering \begin{tabular}{cccccc} \toprule & CiteSeer & Cora & CS & Physics & PubMed\\ \midrule GCN & 70.3 & 81.5 & 89.8 & 92.8 & 76.3 \\ w/ AG & 69.7 & 82.3 & 90.3 & 93.0 & 77.4\\ w/ CLAR & \textbf{71.5} & \textbf{82.6} & \textbf{91.4} & \textbf{93.9} & \textbf{78.5}\\ \midrule GAT & 72.5 & 83.0 & 85.5 & 91.1 & 75.9 \\ w/ AG & 69.1 & 77.9 & 86.6 & 91.4 & 76.6\\ w/ CLAR & \textbf{72.7} & \textbf{83.3} & \textbf{87.3} & \textbf{92.0} & \textbf{77.1}\\ \midrule SAGE & 67.4 & 78.9 & 90.1 & 93.0 & 75.2 \\ w/ AG & \textbf{69.4} & 80.2 & 90.3 & 92.7 & 77.2\\ w/ CLAR & 68.7 & \textbf{80.2} & \textbf{91.7} & \textbf{93.7} & \textbf{77.7}\\ \bottomrule \end{tabular} \label{tab:ag_acc} \end{table} In Table~\ref{node_classification}, all the baselines equipped with CLAR outperform their original versions. Heterophilic graphs improve significantly, thanks to the added high-frequency components. Since the GNN baselines already capture the low-frequency components, the homophilic graphs improve less. Cheby-GCN and GPRGNN of order $2$ are not sufficient to express the filters beyond the low-frequency components (homophilic graphs), particularly on Computers and Photo. Even though all the other regularization methods enhance GCN to some extent, their gains are marginal compared to CLAR, because only CLAR contains a high-pass filter. The tuned hyper-parameters indicate that a greater $\alpha$, i.e., more low-pass, is beneficial for homophilic graphs, while a larger $\beta$, i.e., more high-pass, helps with heterophily. \textit{Sensitivity of the sampling hyper-parameter $S$.} Recall that we assume in Sec.~\ref{sec:const_clar} that the classification performance is robust to $S$. We verify this with a Spearman correlation analysis, shown in Table~\ref{tab:spearman}, setting the range of $S$ to $\{1,2,4,8,16,32\}$. The relation between the performance and $S$ is clearly negligible, demonstrating robustness to $S$.
\begin{table}[h] \centering \caption{Spearman Correlation Coefficients of $S$ and Accuracy (\%)} \begin{tabular}{cccccc} \toprule & CiteSeer & Cora & CS & Physics & PubMed \\ \midrule GCN & -0.078 & -0.059 & -0.034 & 0.166 & 0.182\\ GAT & -0.084 & 0.200 & -0.104 & -0.042 & -0.063\\ SAGE & 0.009 & -0.002 & 0.081 & 0.023 & -0.004\\ \bottomrule \end{tabular} \label{tab:spearman} \end{table} \subsection{Tackling Over-smoothing} For Q3, we apply CLAR to GCN with an increasing number of stacked layers and test on the more challenging Planetoid split, where only $20$ fixed samples per category are used for training, while $500$ and $1000$ samples are used for validation and testing~\cite{DBLP:conf/icml/YangCS16}. Over-smoothing can be more severe in this setting due to the lack of supervised signals. As introduced earlier, GNNs quickly become more low-pass with increasing layers. To tackle this, we introduce more high-frequency components through larger loss values: we loosen the clamp to $[0.0, 10.0]$ and restrict $\alpha,\beta \in \{0.0, 0.1, 0.2, 0.5, 0.8, 1.0\}$ to retain numerical stability. CLAR~(10) denotes this modified CLAR, and CLAR~(1) the version introduced earlier. Besides, we include other regularization methods that claim to counter over-smoothing: DropEdge with a dropout ratio of $0.2$, and MADReg with the same setting as $\alpha$ in CLAR~(10). We report the test accuracy of GCN with these regularization methods on Cora for increasing numbers of layers in Fig.~\ref{fig:over_smoothing}. According to the results, CLAR lifts GCN above its original version and outperforms DropEdge at both the shallow and the deeper layers; MADReg performs slightly worse. Importantly, CLAR stays superior to MLP even at deep layers, certifying the effect of its adjustable filters. \begin{figure}[h] \centering \includegraphics[trim=0.3cm 0.4cm 0.3cm 0.3cm,clip,width=0.48\textwidth]{figures/over_smoothing_workshop.pdf} \caption{Test accuracy (\%) of GCN with various regularizers. Remarkably, only CLAR always stays above MLP.} \label{fig:over_smoothing} \end{figure} \subsection{Boosting Topological Robustness} For Q4, we evaluate CLAR against topological noise, i.e., missing edges. This situation is common in practice, where often only partial connections are observed. Owing to its random sampling strategies, CLAR is naturally robust to it. We successively drop edges in steps of $10\%$ to imitate this situation and evaluate CLAR on a two-layer GCN. The results, using Planetoid splits, are summarized in Figure~\ref{fig:drop}. With increasing drop ratios, CLAR helps to maintain performance; particularly after dropping more edges, CLAR and CLAR* benefit significantly. Finally, we provide a spectral explanation for this empirical observation: when the low-frequency components become sparse, e.g., after dropping edges, utilizing the high-pass filters helps to express the remaining spectral components. \begin{figure}[h] \centering \includegraphics[trim=0.3cm 0.35cm 0.3cm 0.35cm,clip,width=0.48\textwidth]{figures/drop_workshop.pdf} \caption{Test accuracy (\%) of GCN+CLAR with an increasing edge-dropping rate. * denotes magnifying CLAR by a factor of $10$.} \label{fig:drop} \end{figure} \subsection{Evaluation of the Sampling Strategies} We proposed two types of sampling strategies, from two perspectives: node-based sampling and edge-based sampling (a minimal sketch of both is given below). We argue that the two have similar behavior w.r.t.\ the overall performance.
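The exact sampling routines are deferred to our code repository; the following simplified sketches (our own reading of the two ideas, not the released implementation) convey how a complement sample can be drawn.
\begin{verbatim}
# Simplified sketches of the two complement-sampling ideas; `edges` is a
# set of pairs (u, v) with u < v over nodes 0..n-1 (assumes a sparse graph).
import random

def sample_edge_based(edges, n, m):
    # draw node pairs uniformly and keep those absent from the graph
    out = set()
    while len(out) < m:
        u, v = sorted(random.sample(range(n), 2))
        if (u, v) not in edges:
            out.add((u, v))
    return out

def sample_node_based(edges, n, k):
    # for every node, draw k candidate partners among its non-neighbours
    out = set()
    for u in range(n):
        for v in random.sample(range(n), k):
            a, b = min(u, v), max(u, v)
            if a != b and (a, b) not in edges:
                out.add((a, b))
    return out
\end{verbatim}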
To verify this, we deploy CLAR on a two-layer GCN on two homophilic and two heterophilic datasets. We evaluate the two strategies with the following hyper-parameter settings: $\alpha, \beta \in \{0, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2\}$ and $S \in \{1, 2\}$. We use the Pearson product-moment correlation coefficient to measure the difference between their performances on the test set, as shown in Table~\ref{tab:hyper_sample}. Note that the Pearson coefficient lies in $[-1,1]$, its sign signifying a positive or a negative relation; a coefficient greater than $0.5$ is normally regarded as a strong positive correlation~\cite{schober2018correlation}. In Table~\ref{tab:hyper_sample}, all coefficients are greater than $0.8$, indicating that the difference between the two sampling strategies is negligible. One possible explanation is that our target graphs are insensitive to the two sampling methods because of their sparseness. In practice, we suggest using the edge-based sampling strategy on dense graphs and the node-based one on sparse graphs. \begin{table}[h] \centering \caption{Pearson product-moment correlation coefficients of the two sampling strategies} \begin{tabular}{c|cccc} \toprule Dataset & Cora & CiteSeer & Chameleon & Squirrel \\ \midrule Coefficients & 0.8067 & 0.9903 & 0.9382 & 0.8431 \\ \bottomrule \end{tabular} \label{tab:hyper_sample} \end{table} \subsection{Reproducibility} \label{sec:reprod} We now report some implementation details for reproducibility; the full details are available in our code repository. First, the sampling hyper-parameter $S$ is restricted to $\{1,2\}$; its insensitivity is studied in Sec.~\ref{sec:exp_real}. $\alpha$ and $\beta$ adjust the proportions of the high- and low-frequency components in CLAR. In the random splits, they lie in $\{0.0,1.0,2.0\}$, as discussed in Sec.~\ref{exp:art}. In the Planetoid splits, they lie in $\{0.0,\pm 0.001,\pm 0.005\}$ to avoid over-fitting the regularization under the stronger sparsity; the small negative values expand the hyper-parameter space. \section{Preliminaries} \label{sec:pre} \subsection{Graph Concepts} \textbf{Notations.}\quad $\mathcal{G}=\left(\mathcal{V},\mathcal{E}\right)$ is a connected graph with node features $\mathit{X} = \{x_1,\cdots,x_{\mathnormal{N}}\}$ and labels $Y = \{y_1, \cdots, y_{N}\}$ with $c$ categories. $\mathit{A} \in \mathbb{R}^{\mathnormal{N}\times\mathnormal{N}}$ is the adjacency matrix, whose degree matrix $D$ is diagonal with $\mathit{D}_{ii}=\sum_{j=1}^{\mathnormal{N}}\mathit{A}_{ij}$.
The Laplacian matrix is $\mathit{L}=\mathit{D}-\mathit{A}$. $\mathrm{N}(\cdot)$ returns the neighbors of a node, and $\mathrm{L}(\cdot)$ returns the Laplacian matrix of an adjacency matrix. $\hat{\cdot}$ denotes the add-self-loop operator, and $\tilde{\cdot}$ denotes normalization. A normalized Laplacian matrix can be factorized as $\tilde{L}=U \Lambda U^T$, where $\Lambda$ is diagonal with entries $\Lambda_{ii} = \lambda_i$, the frequency components (spectrum) of $\mathcal{G}$. \textbf{The complement graph.}\quad The complement of $\mathcal{G}$ is denoted by $\bar{\mathcal{G}}=\left(\mathcal{V},\bar{\mathcal{E}}\right)$, where $\bar{\mathcal{E}}=\mathcal{V}\times\mathcal{V}-\mathcal{E}$, i.e., for any distinct nodes $v_i, v_j \in \mathcal{V}$, $\left(v_i, v_j\right) \in \bar{\mathcal{E}}$ if and only if $\left(v_i, v_j\right) \notin \mathcal{E}$~\cite{1977Graph}. \textbf{Homophily and heterophily.}\quad Homophily means that connected nodes share the same label, while on heterophilic graphs the labels of neighboring nodes may differ. \cite{DBLP:conf/nips/ZhuYZHAK20} propose the homophily ratio $h$ to measure homophily, \begin{equation} h=\frac{|\{(v_i,v_j)|(v_i,v_j)\in\mathcal{E}\land y_i =y_j\}|}{|\mathcal{E}|}. \label{equ:homo} \end{equation} $h$ is the fraction of edges whose endpoints carry the same label, so $h\in[0,1]$. Usually, graphs with $h>0.7$ are considered homophilic, and those with $h<0.3$ heterophilic~\cite{DBLP:journals/corr/abs-2106-10994,DBLP:conf/aaai/BoWSS21}. \subsection{Graph Neural Network Layers} GNNs utilize both the node features and the graph structure to learn representations. In each layer, a GNN aggregates the neighboring nodes and updates the central nodes. Given the representation $h_i^{(k)}$ of node $v_i$ at the $k$-th layer, and the neighbors ${N}(v_i)$ of $v_i$, a GNN computes the hidden representation of $v_i$ at layer $k+1$ as \begin{equation} h_i^{(k+1)}=\sigma\left(\mathtt{UPD}\left(\mathtt{AGG}\left(\{h_j^{(k)}\}\right), h_i^{(k)} \right)W^{(k+1)}\right), \label{equ:gnn_layer} \end{equation} where $\sigma$ is the non-linear activation, $\mathtt{AGG}$ and $\mathtt{UPD}$ are the aggregation and updating functions, $\{h_j^{(k)}\}$ are the hidden representations of the neighbors of node $v_i$, i.e., of $\{v_j|v_j\in \mathrm{N}(v_i)\}$, and $W^{(k+1)}$ is the feature-transformation parameter. The input is initialized with the raw node features, $h_i^{(0)}=x_i$. Functions like $\mathtt{UPD}$ and $\mathtt{AGG}$ process the node information together with the adjacency matrix $A$, resulting in the recursive update $H^{(k+1)}=f^{(k+1)}(H^{(k)}, A)$, where $f^{(k+1)}(\cdot)$ denotes the learnable transformation at layer $k+1$. The overall GNN then stacks $K$ layers and returns $H^{(K)}$, \begin{equation} H^{(K)} = f^{(K-1)}( \cdots f^{(1)}(f^{(0)}(X,A),A) \cdots ,A). \label{equ:gnn_fx} \end{equation}
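As an illustration of Eq.~(\ref{equ:gnn_layer}), a minimal layer with mean aggregation could look as follows (our own sketch; concrete GNNs differ in their choices of $\mathtt{AGG}$ and $\mathtt{UPD}$):
\begin{verbatim}
# One message-passing layer with mean AGG and additive UPD (a sketch).
import torch

class SimpleGNNLayer(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = torch.nn.Linear(d_in, d_out, bias=False)

    def forward(self, H, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = (A @ H) / deg                  # AGG: mean over neighbours
        return torch.relu(self.W(agg + H))   # UPD, then transform by W
\end{verbatim}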
\subsection{Semi-supervised Node Classification} Recall that a standard semi-supervised node classification objective is often formulated as \begin{equation} \mathcal{L}=\mathcal{L}_{cls}+\gamma\mathcal{L}_{reg}, \label{equ:semi} \end{equation} where $\mathcal{L}_{cls}$ optimizes the classification objective, e.g., through a multi-layer perceptron (MLP), and $\mathcal{L}_{reg}$ promotes the similarity of connected nodes' representations, written as \begin{equation} \mathcal{L}_{reg}=\tr\left(g\left(X\right)^TLg\left(X\right)\right), \label{equ:reg} \end{equation} where $\tr(\cdot)$ calculates the trace of a matrix and $g(\cdot)$ can be any transformation of the initial features. \section{Conclusion and Future Work} Modern GNNs normally behave as low-pass filters and neglect the high-frequency components. In the context of regularization methods for GNNs, this paper proposes CLAR to enhance the high-frequency components efficiently. Based on our main theorem, that the complement of the original graph incorporates a high-pass filter, we explore the complement and obtain a complementary Laplacian regularization term. Experiments verify that our proposal helps GNNs deal with over-smoothing and better express heterophilic graphs. We extensively discuss the spectral interpretability of existing regularization methods and point out that only CLAR possesses a high-pass filter. In future work, the complement is worth exploring further, to better model the graph spectrum when the low-frequency components are sparse but high-frequency components might help. A high-frequency-enhanced GNN architecture built via the complement, without additional parameters, is also promising.
{ "attr-fineweb-edu": 1.733398, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfKg5qhDCTJN_B3sP
\section{Introduction} We present a study of the interplay between the spin variables and the chiral variables (chiralities) in the $\pm J$ $XY$ spin glass. The former correspond to the continuous rotational symmetry of this model, and the latter to its discrete chiral symmetry (i.e., the invariance of the model Hamiltonian under reflection of all the spins with respect to a reference axis), first pointed out by Villain \cite{Vf,Vspg,Vd3}. Below the lower critical dimension, $d_\ell$, which is believed to be greater than $2$ \cite{lcd}, the correlation lengths associated with the chiralities and with the spin variables diverge as $T^{-\nu_c}$ and $T^{-\nu_s}$, respectively, at the zero-temperature ($T$) critical point. The question of the relation between the two types of variables has become of interest following speculations by Kawamura and Tanemura \cite{KaT}, by Ray and Moore \cite{RM} and by Kawamura \cite{Ka}, prompted by Monte Carlo simulations, that below $d_\ell$ the two correlation lengths are different, with $\nu_c>\nu_s$. This suggests that the chiralities will order more easily than the spins in higher dimensions. Consequently, above $d_\ell$ there would be a regime of dimensions with long-range chiral glass order, but without conventional spin glass order. This possibility receives intuitive support from the idea that a discrete symmetry leads to long-range order more easily than a continuous symmetry does. Two recent publications \cite{NHM,NH} address this issue analytically. Both these studies, just like the Monte Carlo work of \cite{KaT,RM}, consider the finite-size scaling of the ground state energy differences between periodic (P), antiperiodic (AP), and reflecting (R) boundary conditions. In one of them, Ney-Nifle and Hilhorst \cite{NH} transform the two-dimensional $XY$ $\pm J$ spin glass on a finite $N\times M$ square lattice into a grand-canonical Coulomb gas problem of which, as is well known, the logarithmically interacting charges represent the chiral variables. The charges must take half-integer values on the frustrated plaquettes and therefore cannot vanish even in the ground state. In the case of uncorrelated disorder, the plaquettes are randomly and independently frustrated with probability $\frac{1}{2}$, and it is not possible to find the ground state explicitly. For that reason, the subsequent analytic treatment of \cite{NH} remains restricted to the example of a rectangular array of frustrated plaquettes with randomly distributed intercolumn distances. In this example, the authors find no evidence for a chiral correlation length diverging faster than the spin correlation length. By a heuristic argument they extend this conclusion to the case of uncorrelated $\pm J$ disorder. In an earlier investigation, Ney-Nifle {\em et al}\/ \cite{NHM} considered the random $\pm J$ $XY$ model on a one-dimensional ladder lattice, again in the Coulomb gas representation. This problem is exactly solvable, or nearly so, for general disorder, and the conclusions drawn from it are fully coherent with those from the two-dimensional model \cite{NH}. However, this model suffers from the drawback that, in the Coulomb gas language, it has only exponentially decaying electrostatic interactions (for reasons explained in that work), so that one may wonder if an essential ingredient of the difficult two-dimensional problem has not been lost.
In the present work, we reconcile the requirements of exact solvability and truly long-range interactions between the chiralities by studying the $\pm J$ $XY$ spin glass on an $N\times 2$ lattice which is periodic both in the longitudinal and in the transverse direction. We work again in the Coulomb gas representation, and apply different boundary conditions. In section 2, we show that, on this two-dimensional lattice with one compactified dimension, the electrostatic interaction decomposes into two components. The first one is a ``strong'' or charge-charge interaction; it is nothing but the one-dimensional Coulomb potential, which increases linearly with distance. The second one is a ``weak'' interaction: it acts between transversely oriented ``dipoles'' and decays exponentially with distance. We shall refer to them as the Coulomb and the dipolar interaction, respectively. The Coulomb gas representation of the $XY$ model Hamiltonian involves, in addition to these two interactions, two supplementary ``global'' terms that couple the system's total electric dipole moment to the boundary conditions imposed on the Hamiltonian. These extra terms have drawn a certain attention in the recent literature \cite{FTY,NHM,NH}, and they play again an important r\^ole here. We are not able to solve the ground state problem for the complete Hamiltonian. However, we are able to conclude that in the large $N$ limit, whatever the boundary conditions, the ground state is one of the infinitely many ground states of the ``strong'' Coulomb interaction combined with one of the global terms. The details of the proof (of largely technical nature) of this fact are given in Appendix A. This set of ground states consists, roughly speaking, of charge configurations in which the long-range Coulomb interaction is screened away as much as possible by the formation of longitudinally oriented dipoles, as exhibited in section 3. The degeneracy within this set is lifted by the weak interaction and by the second global term, which are therefore responsible for the selection of the ground state of the full Hamiltonian and for the energy differences between P, AP, and R boundary conditions. Even though we remain unable to say which member of this set is selected as the true ground state, we are able to describe (in section 4) the domain walls and domain wall energies involved in passing from one boundary condition to another. Using the relation between the correlation lengths and the finite-size scaling exponent of the ground state energy differences, we conclude in the final section 5, for the first time within an $XY$ spin glass with random $\pm J$ disorder and having a nontrivial long-range interaction between its chiralities, that the spin and the chiral correlation lengths diverge, for $T \to 0$, with the {\em same}\/ exponent $\nu$. We determine its exact value, $\nu=\log^{-1}_{\frac{8}{3}}(3+2\sqrt{2})=0.5564\ldots$, in Appendix B. \section{The tube: a compactified two-dimensional lattice} In this section we shall exhibit the Hamiltonians of a random $XY$ model on an $N\times 2$ square lattice with periodic boundary conditions (PBC) in the transverse direction and, successively, periodic, antiperiodic, and reflecting boundary conditions in the other direction. We call this lattice a {\em tube}\/ (see figure 1). It can be viewed as a two-dimensional lattice of which one dimension has been compactified. First we shall recall the same model on the more general $N\times M$ lattice and then specialise to the tube.
The effect of the compactification on the interaction will thereby become clear. We consider a random $\pm J$ $XY$ model where the spins are two-component unit vectors whose angles ${\phi_i}$ (with a reference axis) take values in $(-\pi, \pi]$. Two nearest-neighbour spins, $\phi_i$ and $\phi_j$, have an interaction energy $-J\cos(\phi_i-\phi_j-\pi_{ij})$, where $J$ is a constant, and the $\pi_{ij}$ are quenched random variables that take the values \begin{equation} \pi_{ij} = \left\{ \begin{array}{r@{\quad \quad}l} 0 \, , & \mbox{\small with probability} \, \, {1 \over 2} \\ \pi \, , & \mbox{\small with probability} \, \, {1 \over 2} \end{array} \right. . \end{equation} The partition function is \begin{equation}\label{ZXY} Z_{XY} = \int\limits^\pi_{-\pi} \prod_{i} d\phi_{i} \,\, \mbox{e}^{\beta J \sum\limits_{<i,j>} \cos(\phi_i - \phi_j - \pi_{ij})}\,\, . \end{equation} The sum in the exponential in (\ref{ZXY}) runs over all nearest-neighbour bonds of the periodic lattice with the convention that in $<i,j>$ the site $j$ is to the right of $i$ (for a horizontal bond) or above $i$ (for a vertical bond). In our notation the site vectors $i=(i_x,i_y)$ have half-integer components $i_x=\frac{1}{2},\frac{3}{2},\ldots,\frac{2N-1}{2}$ and $i_y=\frac{1}{2},\frac{3}{2},\ldots,\frac{2M-1}{2}$. \begin{figure} \setlength{\unitlength}{1cm} \begin{picture}(13,6) \thicklines \put(1.5,1){\line(1,0){11}} \put(1.5,3){\line(1,0){11}} \put(1.5,5){\line(1,0){0.125}} \put(12.375,5){\line(1,0){0.125}} \multiput(1.875,5)(0.5,0){21}{\line(1,0){0.25}} \multiput(2,1)(2,0){6}{\line(0,1){4}} \thinlines \put(1.9,2){\line(1,0){0.2}} \put(1.9,4){\line(1,0){0.2}} \put(3,0.9){\line(0,1){0.2}} \put(5,0.9){\line(0,1){0.2}} \put(7,0.9){\line(0,1){0.2}} \put(9,0.9){\line(0,1){0.2}} \put(11,0.9){\line(0,1){0.2}} \put(0.75,1.25){\vector(0,1){1.5}} \put(0.75,1){\makebox(0,0){$y$}} \put(12,0.4){\vector(1,0){1.5}} \put(11.75,0.4){\makebox(0,0){$x$}} \put(1.5,2){\makebox(0,0){$1$}} \put(1.5,4){\makebox(0,0){$2$}} \put(9,0.7){\makebox(0,0){$N$}} \put(11,0.7){\makebox(0,0){$1$}} \put(3.5,2){\makebox(0,0){$\pi_{x_0}^{12}$}} \put(6.5,2){\makebox(0,0){$\pi_{x_0+1}^{12}$}} \put(5,3.25){\makebox(0,0){$\pi_{(x_0,2)}$}} \put(5,1.25){\makebox(0,0){$\pi_{(x_0,1)}$}} \put(3.5,4){\makebox(0,0){$\pi_{x_0}^{21}$}} \put(6.5,4){\makebox(0,0){$\pi_{x_0+1}^{21}$}} \put(5,0.6){\makebox(0,0){$x_0$}} \thicklines \put(5,2){\makebox(0,0){\boldmath $p_{(x_0,1)}$}} \put(5,4){\makebox(0,0){\boldmath $p_{(x_0,2)}$}} \end{picture} {\footnotesize {\noindent\bf\footnotesize Figure 1:} {\sl\footnotesize Chiralities on a tube lattice.} The bonds on the dashed line are identical with the ones on the lower solid line. The bonds have quenched disorder variables $\pi$ and the plaquette centres frustration variables $p$ associated with them. According to the definition (2.4), one has, for example, $p_{(x_0,1)}=(\pi_{(x_0,1)} + \pi_{x_0+1}^{12} - \pi_{(x_0,2)} - \pi_{x_0}^{12})/(2\pi)$.} \end{figure} Since we are interested in the ground state properties of the model, we shall replace (\ref{ZXY}) by the corresponding Villain expression, which is believed to be equivalent to (\ref{ZXY}) in the large-$\beta$ limit \cite{Vf,Vspg}, and is easier to analyse. The Villain partition function is \begin{equation}\label{ZV} Z_{\mbox{\tiny V}} = \int^\pi_{-\pi} \prod_i d\phi_i \sum_{\{n_{ij}\}} \mbox{e}^{-\frac{\beta J}{2} \sum\limits_{<i,j>}(\phi_i - \phi_j - \pi_{ij} - 2\pi n_{ij})^2}\,\, , \end{equation} where the $n_{ij}$ are additional dynamical variables. 
These $n_{ij}$ are integers, and the sum over them ensures that the integrand has period $2 \pi$ in $\phi_i - \phi_j$. In the following, we set $J=2$. For each plaquette of the lattice, we define a frustration variable $p_r$, with $r = (x,y)$ a vector with integer components $x=1,\ldots,N$ and $y=1,\ldots,M$ that labels the centres of the plaquettes, \begin{equation}\label{pr} p_r \equiv {\sum\limits_{<i,j>}\!}^{(r)}\, \epsilon_{ij}^r\,\, \frac{\pi_{ij}}{2\pi}\,\, , \end{equation} where the sum is restricted to the bonds that define the plaquette $r$. In (\ref{pr}), $\epsilon_{ij}^r = -1$ or $1$ depending on whether one runs through the corners of the triangle $(ijr)$ clockwise or counterclockwise. The frustration variable is {\em integer}\/ for {\em nonfrustrated}\/ plaquettes and {\em half-integer}\/ otherwise. In (\ref{ZV}) one can integrate over the continuous degrees of freedom. The algebra (cf.~\cite{Vf,Vspg,NHM,NH}) includes the transformation from the variables $n_{ij}$ to the new discrete variables $q_r$, called the ``chiralities'' of the plaquettes. The chirality $q_r$ runs through all integers (half-integers) when $p_r$ is integer (half-integer). One shows that the chiralities interact via a Coulomb interaction (which is why they are also called ``charges'') and that they satisfy the neutrality condition \begin{equation} \sum\limits_r q_r = 0\,\, . \end{equation} Recently, Ney-Nifle and Hilhorst \cite{NH} (see also \cite{NHM}) extended the mapping of the $XY$ Hamiltonian onto a Coulomb gas Hamiltonian by including all the finite-size corrections on an $N\times M$ lattice with various boundary conditions. We will now adapt their results to the tube lattice, for which a simplified notation is defined in figure 1. \subsection{Periodic boundary conditions} We shall first consider the $N\times 2$ system with periodic boundary conditions (PBC) in the longitudinal direction. We denote its partition function by $Z_{\mbox{\tiny P}}$. Starting from the more general model $Z_{\mbox{\tiny V}}$ \cite{NH}, see equation (\ref{ZV}), we change variables from $n_{ij}$ to the chiralities $q_r$, which allows us to perform the Gaussian integration over the first set of variables, $\phi_i$. Including all numerical prefactors in $Z_0^{\mbox{\tiny P}}$, one gets \cite{NH} \begin{equation}\label{ZP} Z_{\mbox{\tiny P}} = Z_0^{\mbox{\tiny P}}\, \sum_{\{ q_r \} }\, \sum_{n,m}\, e^{-\beta {\cal H}_{\mbox{\tiny P}}}\, \delta \left(\sum_r q_r, 0 \right) \,\, , \end{equation} where $\delta(\cdot,\cdot)$ denotes the Kronecker delta. The additional dynamical variables $n$ and $m$ run over all integers, and the $q_r$ take integer or half-integer values, as mentioned above. The Hamiltonian ${\cal H}_{\mbox{\tiny P}}$, which will be the starting point of our considerations, reads explicitly \cite{NH} \begin{equation}\label{HPBC} \begin{array}{lcl} {\cal H}_{\mbox{\tiny P}} &=& \hspace{0.5cm}\frac{8\pi^2}{N} \left(n+ \frac{1}{2} \sum\limits_{x=1}^N q_{(x,1)} + \sum\limits_{x=1}^N \frac{\pi_{(x,1)}}{2\pi} \right)^2 \\[2mm] {} &{}& + \,\, 2\pi^2N \left(m+{1 \over N}\sum\limits_{x=1}^N x(q_{(x,1)}+q_{(x,2)}) + \frac{\pi_N^{12} + \pi_N^{21}}{2\pi} \right)^2 \\[2mm] {} &{}& + \,\, \pi^2 \sum\limits_{r,r'} q_rq_{r'}U_{N,2}(r-r') \,\, . \end{array} \end{equation} We will briefly discuss its meaning. The first two terms are due to the finite system size. They represent a coupling of the horizontal and the vertical component of the total electric dipole moment, respectively, to the quenched disorder.
In the third term, $U_{N,M}(R)$ is the interaction between two charges \begin{equation}\label{U} U_{N,M}(R) = \frac{1}{2N}\,{\sum\limits_{k_x,k_y}\!}^*\, \frac{e^{i(X k_x + Y k_y)}-1} {\sin^2(\frac{k_x}{2}) + \sin^2(\frac{k_y}{2})}\,\, , \end{equation} with $R=(X,Y)$, $k_x = 0, \frac{2\pi}{N},\ldots , \frac{2\pi(N-1)}{N}$ and $k_y = 0, \frac{2\pi}{M},\ldots , \frac{2\pi(M-1)}{M}\, $. The asterisk indicates that the term $(k_x,k_y)=(0,0)$ is left out of the summation. In $d=2$, $U_{N,M}$ ($N,M\to\infty$) is the two-dimensional Coulomb interaction which varies as a logarithm at large distances \cite{Vf,Vspg}. For the tube, we will see in what follows that the compactification leads to a decomposition of $U_{N,2}$ into two parts: a one-dimensional Coulomb interaction that increases linearly with distance and an exponentially decreasing interaction, which is a remnant of a two-dimensional dipole-dipole interaction. The appearance of the linear Coulomb interaction and its competition with the dipolar interaction makes the model interesting. To separate these two interactions in $U_{N,2}$, we combine the two chiralities of a column $x$ as \begin{equation}\label{q} \begin{array}{lcc} q_x^+ &\equiv& q_{(x,1)} + q_{(x,2)} \,\, , \\ q_x^- &\equiv& q_{(x,1)} - q_{(x,2)} \,\, . \end{array} \end{equation} Introducing $q_x^+$ and $q_x^-$ in (\ref{HPBC}) and evaluating (\ref{U}) for $N\to\infty$ in these new variables, we get \begin{equation}\label{HPBC1} \begin{array}{lcl} {\cal H}_{\mbox{\tiny P}} &=& \hspace{0.5cm}\frac{8\pi^2}{N} \left(n+\frac{1}{4}\sum\limits_{x=1}^N q_x^- + \sum\limits_{x=1}^N \frac{\pi_{(x,1)}}{2\pi} \right) ^2 + \,\, \pi^2 \sum\limits_{x,x'=1}^N q_x^-q_{x'}^-U_{\mbox{\tiny P}}^-(x-x') \\[2mm] {} &{}& +\,\, 2\pi^2N \left(m+{1 \over N}\sum\limits_{x=1}^N xq_x^+ + \frac{\pi_N^{12} + \pi_N^{21}}{2\pi}\right)^2 + \,\, \pi^2 \sum\limits_{x,x'=1}^N q_x^+q_{x'}^+U_{\mbox{\tiny P}}^+(x-x') \,\, . \end{array} \end{equation} We find that the charges $q_x^+$ interact via the long-range periodised Coulomb potential \begin{equation} \begin{array}{lcl}\label{UP+} U_{\mbox{\tiny P}}^+(X) &=& \hspace{0.5cm} \frac{1}{2N}\,{\sum\limits_{k_x\not= 0}}\, \frac{e^{i k_x X}-1} {\sin^2(\frac{k_x}{2})}\\[2mm] {} &\simeq& -\,|X|(1-{{|X|} \over N})\, , \qquad N\to\infty, \, \frac{|X|}{N}\,\, \mbox{fixed}, \, 0\laq{{|X|} \over N}\laq 1\,\, . \end{array} \end{equation} If $|X|$ is negligible with respect to $N$, $U^+_{\mbox{\tiny P}}(X)$ is the usual one-dimensional Coulomb interaction, linear in $X$. If not, the term $1-\frac{|X|}{N}$ becomes important and reflects the symmetry and periodicity of the lattice. The charges $q_x^-$ interact via a short-range (dipolar) potential \begin{equation}\label{U-P} \begin{array}{lcl} U_{\mbox{\tiny P}}^-(x-x') &=& \hspace{0.5cm} \frac{1}{2N}\,{\sum\limits_{k_x}}\, \frac{e^{i k_x (x-x')}} {1 + \sin^2(\frac{k_x}{2})}\\[2mm] {} &\simeq& {\sqrt{2} \over 8} (3-2\sqrt{2})^{d(x,x')}\,\, , \qquad N\to\infty ,\,\, |x-x'|\,\, \mbox{fixed}, \end{array} \end{equation} where $d(x,x')$ is the length of the shortest path between $x$ and $x'$, taking into account the periodic geometry. Furthermore, one obtains from the calculation that both potentials have the symmetry properties \begin{equation} \begin{array}{lcl} U^{\pm}_{\mbox{\tiny P}}(X) &=& U^{\pm}_{\mbox{\tiny P}}(X+N) \,\, , \\ U^{\pm}_{\mbox{\tiny P}}(X) &=& U^{\pm}_{\mbox{\tiny P}}(-X) \,\, .
\end{array} \end{equation} Because of the range of the interactions, we will also call the long-range Coulomb interaction between the charges $q_x^+$ the {\em strong}\/ interaction and the short-range (dipolar) interaction between the charges $q_x^-$ the {\em weak}\/ interaction. In the large $N$ limit, for convenience, we rewrite the Coulomb interaction term in (\ref{HPBC1}), using (\ref{UP+}) and charge neutrality, as \begin{equation}\label{UP+1} \pi^2 \sum\limits_{x,x'=1}^N q_x^+q_{x'}^+U_{\mbox{\tiny P}}^+(x-x')\,\, =\,\, - \pi^2 \sum\limits_{x,x'=1}^N |x-x'|q_x^+q_{x'}^+ - \frac{2\pi^2}{N} \left( \sum\limits_{x=1}^N xq_x^+ \right)^2\,\, . \end{equation} Inserting this expression in (\ref{HPBC1}) and writing out also the interaction potentials $U_{\mbox{\tiny P}}^-$ explicitly, we obtain eventually \begin{equation}\label{HPBCf} \begin{array}{rcl} {\cal H}_{\mbox{\tiny P}} &=& \hspace{0.5cm}\frac{8\pi^2}{N} \left(n+ \frac{1}{4} \sum\limits_{x=1}^N q_x^- + \sum\limits_{x=1}^N \frac{\pi_{(x,1)}}{2\pi}\right)^2 + \,\, {\sqrt{2} \over 8}\pi^2 \sum\limits_{x,x'=1}^N (3-2\sqrt{2})^{d(x,x')}q_x^-q_{x'}^- \\ {} &{}& +\,\, 2\pi^2N \left(m+\frac{\pi_N^{12} + \pi_N^{21}}{2\pi} \right)^2 - 4\pi^2\left(m+\frac{\pi_N^{12} + \pi_N^{21}}{2\pi}\right)\sum\limits_{x=1}^N xq_x^+ - \,\, \pi^2 \sum\limits_{x,x'=1}^N |x-x'|q_x^+q_{x'}^+ \,\, , \end{array} \end{equation} valid in the large $N$ limit. The task will now be to minimise ${\cal H}_{\mbox{\tiny P}}$ with respect to the four variables $q_x^+$, $q_x^-$, $m$, and $n$ in order to find its ground state energy. This will be done in section 3. \subsection {Antiperiodic boundary conditions} Passing from PBC to antiperiodic boundary conditions (APBC) means changing the sign of the two horizontal bonds that belong to the plaquettes $(N,1)$ and $(N,2)$. Under this change frustrated (unfrustrated) plaquettes remain frustrated (unfrustrated). Thus the only modification needed to get the Hamiltonian ${\cal H}_{\mbox{\tiny AP}}$ for APBC is to replace $\pi_{(N,1)}$ by $\pi_{(N,1)} + \pi$ in the first term in equation (\ref{HPBCf}), i.e., to add $\frac{1}{2}$ in the expression between parentheses in that term. \subsection {Reflecting boundary conditions} One obtains the Hamiltonian ${\cal H}_{\mbox{\tiny R}}$ for the $XY$ spin glass on an $N\times M$ lattice with reflecting boundary conditions (RBC) in the horizontal direction and PBC in the vertical direction by replacing the horizontal interactions in one single, but arbitrary, column, say $N$, by \begin{equation} (\phi_i + \phi_j - 2\pi n_{ij} - \pi_{ij} )^2 \,\, . \end{equation} This amounts to reflecting the spins on one side of this column with respect to the reference axis. The ensuing modifications in passing to the Coulomb representation result in \cite{NH} \begin{equation}\label{ZR} Z_{\mbox{\tiny R}} = Z_0^{\mbox{\tiny R}}\, \sum_{\{q_r\}}\, \sum_{{n,m}}\, e^{-\beta {\cal H}_{\mbox{\tiny R}}}\, \delta\left(\left[\sum_r q_r - {{\pi_N^{12} + \pi_N^{21}} \over \pi}\right] \bmod 2, 0\right) \end{equation} with \begin{equation} {\cal H}_{\mbox{\tiny R}} = \pi^2\,\sum_{{r,r'}} q_rq_{r'}U_{\mbox{\tiny R}}(r-r')\,\, , \end{equation} and $r=(x,y)$ labels the centres of the plaquettes on the $N\times M$ lattice as before.
We do not recall the explicit general form of the potential $U_{\mbox{\tiny R}}$ here, but rather use the variables $q_x^+$ and $q_x^-$ defined in (\ref{q}) and give the explicit expression of ${\cal H}_{\mbox{\tiny R}}$ in the case of the tube lattice: \begin{equation}\label{HRBC} {\cal H}_{\mbox{\tiny R}} = \pi^2\sum\limits_{x,x'=1}^N q_x^+q_{x'}^+\, U_{\mbox{\tiny R}}^+(x-x') + \pi^2\sum\limits_{x,x'=1}^N q_x^-q_{x'}^-\, U_{\mbox{\tiny R}}^-(x-x') \end{equation} where \begin{equation}\label{UR} \begin{array}{ccll} U_{\mbox{\tiny R}}^+(X) &=& {N \over 2} (1-{{2|X|} \over N})\, ,& \quad N\to\infty, \, \frac{|X|}{N}\,\, \mbox{fixed, $0\laq{|X| \over N}\laq 1$}\,\, , \\[2mm] U_{\mbox{\tiny R}}^-(x-x') &=& {\sqrt{2} \over 8} (3-2\sqrt{2})^{d(x,x')} \sigma (x,x')\,\, , & \quad N\to\infty \,\, , \,\, |x-x'|\,\, \mbox{fixed}\,\, , \end{array} \end{equation} where $\sigma (x,x') = -1$ if the shortest path between $x$ and $x'$ crosses the bond $\pi_N^{12}$ (or $\pi_N^{21}$), and $\sigma (x,x') = 1$ otherwise. The interactions $U_{\mbox{\tiny R}}^+$ and $U_{\mbox{\tiny R}}^-$ differ furthermore in their symmetry properties from those of PBC in that one has now \begin{equation}\label{URS} U^{\pm}_{\mbox{\tiny R}}(X) = - U^{\pm}_{\mbox{\tiny R}}(X+N) \,\, , \end{equation} i.e., antiperiodicity of the interaction potentials. \subsection{The Hamiltonians in terms of electric field energy} In this subsection, we rewrite the strong interaction part of the Hamiltonians (\ref{HPBCf}) and (\ref{HRBC}) in terms of an electric field $E_x$: \begin{equation}\label{E1} E_x = E_0 + \sum_{x'=1}^x q_{x'}^+\,\, . \end{equation} $E_x$ is the electric field between $x$ and $x+1$, and $E_0$ is a constant background field whose value will be set later in such a way that the volume sum of the energy density $E_x^2$ gives the Coulomb energy of the Hamiltonians. The advantage of this rewriting is clearly seen in Appendix A, where properties of the ground states of the Hamiltonians (for large $N$) are proven: $E_x$ is a local variable, whereas $\sum\limits_{x'=1}^N q_{x'}^+ U_{\mbox{\tiny P,R}}^+(x-x')$ involves all columns of the lattice. Determining the effect on the system's energy when a charge $q_{x_0}^+$ is changed in a given configuration is much easier in terms of the local variable $E_{x_0}$, as is manifest in Appendix A. Squaring (\ref{E1}) and summing over $x$, we get \begin{equation} \begin{array}{lcl}\label{E2} \sum\limits_{x=1}^N E_x^2 &=& NE_0^2+ 2E_0\sum\limits_{x=1}^N \sum\limits_{x'=1}^xq_{x'}^+ + \sum\limits_{x=1}^N \sum\limits_{x'=1}^x \sum\limits_{x''=1}^xq_{x'}^+q_{x''}^+ \\[0.2cm] {} &=& NE_0^2+ 2E_0\sum\limits_{x=1}^N (N-x+1) q_{x}^+ + \sum\limits_{x'=1}^N \sum\limits_{x''=1}^N [N+1-\max(x',x'')]q_{x'}^+q_{x''}^+ \,\, . \end{array} \end{equation} Using the identity \begin{equation} \max(x',x'') = {|x'-x''| \over 2} + {{x' + x''} \over 2} \end{equation} and rearranging terms leads to \begin{equation}\label{E3} \begin{array}{lcl} \sum\limits_{x=1}^N E_x^2 &=& \,\, N\left[E_0^2+ 2E_0 \sum\limits_{x=1}^N q_{x}^+ + (\sum\limits_{x=1}^N q_{x}^+)^2\right] - \left(2E_0 + \sum\limits_{x=1}^N q_{x}^+\right)\sum\limits_{x=1}^N (x-1)q_{x}^+ \\[3mm] {}&{}& \,\,\,\, -\,\, \sum\limits_{x'=1}^N \sum\limits_{x''=1}^N {|x'-x''| \over 2} q_{x'}^+q_{x''}^+ \,\, . \end{array} \end{equation} The value of the constant background field $E_0$ is obtained by setting \begin{equation} 2\pi^2 \sum_{x=1}^N E_x^2 = \pi^2 \sum_{x,x'=1}^N q_x^+ q_{x'}^+ U_{\mbox{\tiny P,R}}^+(x-x')\,\, .
\end{equation} In the large $N$ limit, this amounts to comparing the expression in (\ref{E3}) with the Coulomb interaction part of (\ref{HPBCf}) and (\ref{HRBC}) [after insertion of (\ref{UR})]. This gives \begin{equation}\label{E0} E_0 = \left\{ \begin{array}{r@{\quad \quad}l} m + \frac{\pi_N^{12} + \pi_N^{21}}{2\pi}\,\, , & \mbox{for} \, \, {\cal H}_{\mbox{\tiny P}}\,\,\mbox{and}\,\, {\cal H}_{\mbox{\tiny AP}}\,\, , \\ -{1 \over 2} \sum\limits_{x=1}^N q_x^+\,\, , & \mbox{for} \, \, {\cal H}_{\mbox{\tiny R}}\,\, , \end{array} \right. \end{equation} and hence the Hamiltonians read \begin{equation}\label{HE} \begin{array}{rcl} {\cal H}_{\mbox{\tiny P,AP}} &=& \hspace{0.5cm} 2\pi^2\sum\limits_{x=1}^N E_x^2 + {\sqrt{2} \over 8} \pi^2 \sum\limits_{x,x'=1}^N (3-2\sqrt{2})^{d(x,x')} q_x^-q_{x'}^- \\[2mm] {}&{}& +\,\, {{8\pi^2} \over N} (n+{1 \over 4} \sum\limits_{x=1}^N q_x^- + \sum\limits_{x=1}^N {\pi_{(x,1)}\over {2\pi}} + \frac{1}{2}\delta_{\mbox{\tiny AP}})^2 \\[3mm] {\cal H}_{\mbox{\tiny R}} &=& \hspace{0.5cm} 2\pi^2\sum\limits_{x=1}^N E_x^2 + {\sqrt{2} \over 8} \pi^2\, \sum\limits_{x,x'=1}^N\, (3-2\sqrt{2})^{d(x,x')} \,\sigma (x,x')\, q_x^-q_{x'}^- \end{array} \end{equation} where \begin{equation} \delta_{\mbox{\tiny AP}} = \left\{ \begin{array}{r@{\quad \quad}l} 0\, , & \hspace{0.7cm}\mbox{for} \,\,\mbox{PBC} \,\, , \\ 1 \, , & \hspace{0.7cm} \mbox{for} \,\,\mbox{APBC} \,\, , \end{array} \right. \end{equation} with, from (\ref{ZP}) and (\ref{ZR}), \begin{equation}\label{constr} \begin{array}{lcll} \sum\limits_{x=1}^N q_x^+ &=& \hspace{0.5cm} 0 & \hspace{0.7cm}\mbox{for} \,\,{\cal H}_{\mbox{\tiny P}}\,\,\mbox{and}\,\, {\cal H}_{\mbox{\tiny AP}}\,\, , \\ \sum\limits_{x=1}^N q_x^+ \bmod 2 &=& \frac{\pi_N^{12} + \pi_N^{21}}{\pi} \bmod 2 & \hspace{0.7cm}\mbox{for} \,\,{\cal H}_{\mbox{\tiny R}} \,\, . \end{array} \end{equation} Having established the Hamiltonians for the different boundary conditions, we are now ready to determine those properties of their ground state configurations that are sufficient to calculate the typical difference between the ground state energies for $N\to \infty$. \setcounter{equation}{0} \section {The ground states for the different boundary conditions} We summarise the problem to which the preceding sections have led. Each of the three expressions (\ref{HE}) should now be minimised with respect to the variables $n$, $\{E_x\}$, and $\{q_x^-\}$. The $\{E_x\}$ are defined in terms of $m$ and $\{q_x^+\}$ by (\ref{E1}) and (\ref{E0}), and the $\{q_x^+\}$ and $\{q_x^-\}$ are defined in terms of the original charges $\{q_r\}$ by (\ref{q}). The variables $m$ and $n$ are integers, and the $\{q_r\}$ are half-integer or integer according to whether the plaquette $r$ is or is not frustrated.\\[2mm] The ground states of the Hamiltonians (\ref{HE}) possess, in the large $N$ limit, the following properties: {\em \begin{enumerate} \item The electric field satisfies $|E_x|\laq \frac{1}{2}$ for $x=$ 1,\ldots,N. The constant background field takes values $|E_0|\laq \frac{1}{2}$. \item The charges $q_{(x,1)}$, $q_{(x,2)}$ take the values $0$, $\pm\frac{1}{2}$. \item On doubly frustrated columns, the charges $q_{(x,1)}$ and $q_{(x,2)}$ are equal if and only if $|E_{x-1}|=\frac{1}{2}$. \end{enumerate} } These properties of the ground states, for $N\to \infty$, are proved in Appendix A. Property {\em 1}\/ indicates that the system's ground state is within the set of states that minimise the electric field (i.e., the Coulomb) energy.
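These properties are easily checked numerically on small systems. The following brute-force sketch (our own illustration; the bond indexing is our own convention, and it is not a substitute for the proof of Appendix A) minimises ${\cal H}_{\mbox{\tiny P}}/\pi^2$ in its electric-field form on an $N=4$ tube, so that properties {\em 1}\/ and {\em 2}\/ can be read off from the minimiser.
\begin{verbatim}
# Brute-force ground state of H_P / pi^2 (electric-field form, eq. (HE))
# on a small tube, N = 4; charges beyond |q| = 1/2 (resp. 1) only raise
# the energy (cf. Appendix A), so the search is restricted accordingly.
import itertools, random

N = 4
rnd = lambda: random.choice([0.0, 0.5])        # pi_ij / (2 pi)
h1  = [rnd() for _ in range(N)]                # horizontal bonds, row 1
h2  = [rnd() for _ in range(N)]                # horizontal bonds, row 2
v12 = [rnd() for _ in range(N)]                # vertical bonds 1 -> 2
v21 = [rnd() for _ in range(N)]                # vertical bonds 2 -> 1

def frustrated(x, row):                        # is p_(x,row) half-integer?
    p = (h1[x] + v12[(x+1) % N] - h2[x] - v12[x]) if row == 1 \
        else (h2[x] + v21[(x+1) % N] - h1[x] - v21[x])
    return abs(p - round(p)) > 1e-9

vals = [(-0.5, 0.5) if frustrated(x, r) else (-1.0, 0.0, 1.0)
        for x in range(N) for r in (1, 2)]
kappa = 3.0 - 2.0 * 2.0 ** 0.5                 # dipolar decay constant

def energy(q, n, m):
    qp = [q[2*x] + q[2*x+1] for x in range(N)]
    qm = [q[2*x] - q[2*x+1] for x in range(N)]
    if abs(sum(qp)) > 1e-9:                    # enforce charge neutrality
        return None
    E, field = m + v12[-1] + v21[-1], []       # E_0, then the E_x
    for x in range(N):
        E += qp[x]; field.append(E)
    dip = sum(kappa ** min(abs(x-y), N - abs(x-y)) * qm[x] * qm[y]
              for x in range(N) for y in range(N))
    glb = (n + 0.25 * sum(qm) + sum(h1)) ** 2
    return (2*sum(e*e for e in field) + (2**0.5/8)*dip + (8.0/N)*glb,
            field, q)

best = min((e for e in (energy(q, n, m)
                        for q in itertools.product(*vals)
                        for n in range(-3, 4) for m in range(-3, 4)) if e),
           key=lambda t: t[0])
print(best)   # minimiser: |E_x| <= 1/2, charges in {0, +-1/2}
\end{verbatim}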
For PBC and APBC charge neutrality implies $E_N=E_0$ (cf.~equation (\ref{E1})), whereas for RBC antiperiodicity of the strong interaction potential leads to $E_N=-E_0=\frac{1}{2}\sum\limits_{x=1}^N q_x^+$ (from equations (\ref{E1}) and (\ref{E0})). Let us just note here that it was conjectured in \cite{Vspg} that, in the ground state of frustrated $XY$ spin systems, the chiralities (charges) $q_r$ are likely to be zero on nonfrustrated plaquettes and take the values $\pm\frac{1}{2}$ on frustrated plaquettes. The above property {\em 2}\/ shows that, for large $N$, this is {\em indeed}\/ the case for the tube lattice.\\[2mm] We will now construct the set of states that have properties {\em 1}\/ and {\em 2}\/: We start placing charges $q_{(x,y)}=0,\pm\frac{1}{2}$ successively from $x=1$ to $x=N$, minimising the local electric field energy density $E_x^2$ at each step and knowing that it changes by a half-integer amount (with $q_x^+=\pm\frac{1}{2}$) on a column with one frustrated plaquette and by an integer amount (with $q_x^+=0$ for $E_{x-1}=0$ and $q_x^+=0,\pm 1$ otherwise) on a column with both plaquettes frustrated (see figure 2). Having thus obtained a state that has properties {\em 1}\/ and {\em 2}\/, we see that it is always possible to partition the nonzero charges $q_{(x,y)}$ into dipoles as in figure 2, grouping together two successive charges of opposite sign along the $x$-axis such that outside the dipoles the electric field is zero. [There is one exceptional case (see figure 3): for ${\cal H}_{\mbox{\tiny R}}$ and $E_0 = {1 \over 2}$ the last and the first charge placed are not part of any dipole but are of the same sign, to take account of the antiperiodicity of the potential $U^+_{\mbox{\tiny R}}(X) = - U^+_{\mbox{\tiny R}}(X+N)$, or equivalently $E_N=-E_0$, as mentioned above.] As one sees from figure 2, e.g.~from the dipole containing charges on the columns $x_1$ and $x_1'$, dipole reversals do not change the Coulomb (i.e., electric field) energy of the system. So, the ground state of the system is found within a set consisting of chains of dipoles, satisfying properties {\em 1}\/~and {\em 2}\/, degenerate in Coulomb energy. The possibility of columns with two identical charges leads to the partition into dipoles not being unique. Property {\em 3}\/, however, introduces a further constraint on the set among which one finds the ground state. Furthermore, one can easily convince oneself that this latter property implies that one can reach {\em every}\/ state that satisfies properties {\em 1}\/~to {\em 3}\/ from {\em any}\/ other such state, by reversals of dipoles, for {\em any}\/ given partition. In particular, the ground state of the system differs, for large $N$, from a state as constructed above (with the additional constraint from property {\em 3}\/) by a reversal of dipoles.
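The construction just described is a simple greedy algorithm and can be made concrete in a few lines of code. The following Python sketch (ours, for illustration; the frustration pattern is drawn at random, as for $\pm J$ disorder, and we start from $E_0=0$ for simplicity) places the charges column by column, keeping $|E_x|$ minimal and resolving doubly frustrated columns according to property {\em 3}\/; it then checks property {\em 1}\/:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 20
# each of the two plaquettes of a column is frustrated with prob. 1/2
n_frust = rng.integers(0, 2, size=(N, 2)).sum(axis=1)

E, field = 0.0, []                   # start from E_0 = 0
for n in n_frust:
    if n == 1:                       # q_x^+ = +-1/2; keep |E| minimal
        dq = -0.5 if E > 0 else 0.5  # the choice at E = 0 is the
                                     # dipole-direction freedom
    elif n == 2 and abs(E) == 0.5:   # property 3: equal charges,
        dq = -np.sign(E)             # i.e. q_x^+ = -+1
    else:                            # unfrustrated column, or a doubly
        dq = 0.0                     # frustrated one with E = 0
    E += dq
    field.append(E)

assert max(abs(e) for e in field) <= 0.5   # property 1 holds
\end{verbatim}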
\begin{figure} \setlength{\unitlength}{1cm} \begin{picture}(14,7) \thicklines \put(1.5,4){\line(1,0){12}} \put(1.5,5){\line(1,0){12}} \put(1.5,6){\line(1,0){0.06175}} \put(13.43825,6){\line(1,0){0.06175}} \multiput(1.81175,6)(0.25,0){47}{\line(1,0){0.125}} \multiput(2,4)(1,0){12}{\line(0,1){2}} \put(4.3,5.4){\vector(-2,-1){1.6}} \put(6.5,4.65){\vector(0,1){0.65}} \put(7.65,4.65){\vector(1,1){0.65}} \put(10.3,5.4){\vector(-2,-1){1.6}} \put(12.35,4.65){\vector(-1,1){0.65}} \put(13.5435,5.152){\vector(-3,1){0.8}} \thinlines \put(0.75,4.25){\vector(0,1){0.5}} \put(0.75,4){\makebox(0,0){$y$}} \put(11.25,3.5){\vector(1,0){1.5}} \put(11,3.5){\makebox(0,0){$x$}} \put(2.5,4.5){\makebox(0,0){+}} \put(4.5,5.5){\makebox(0,0){-}} \put(6.5,4.5){\makebox(0,0){-}} \put(6.5,5.5){\makebox(0,0){+}} \put(7.5,4.5){\makebox(0,0){-}} \put(8.5,4.5){\makebox(0,0){+}} \put(8.5,5.5){\makebox(0,0){+}} \put(10.5,5.5){\makebox(0,0){-}} \put(11.5,5.5){\makebox(0,0){+}} \put(12.5,4.5){\makebox(0,0){-}} \put(12.5,5.5){\makebox(0,0){+}} \thicklines \put(1.45,1.5){\line(1,0){1.05}} \put(2.5,2){\line(1,0){2}} \put(4.5,1.5){\line(1,0){3}} \put(7.5,1){\line(1,0){1}} \put(8.5,2){\line(1,0){2}} \put(10.5,1.5){\line(1,0){1}} \put(11.5,2){\line(1,0){2}} \put(2.5,1.5){\line(0,1){0.5}} \put(4.5,1.5){\line(0,1){0.5}} \put(7.5,1){\line(0,1){0.5}} \put(8.5,1){\line(0,1){1}} \put(10.5,1.5){\line(0,1){0.5}} \put(11.5,1.5){\line(0,1){0.5}} \put(2.5,1.25){\makebox(0,0){$x_1$}} \put(4.5,1.25){\makebox(0,0){$x_1'$}} \put(1,2){\makebox(0,0){$+\frac{1}{2}$}} \put(1,1){\makebox(0,0){$-\frac{1}{2}$}} \thinlines \put(1.45,1){\line(1,0){0.1}} \put(1.45,2){\line(1,0){0.1}} \put(1.5,0.5){\vector(0,1){2}} \put(1.45,1.5){\vector(1,0){12.1}} \put(1.3,2.8){\makebox(0,0){$E_x$}} \put(13.75,1.25){\makebox(0,0){$x$}} \multiput(2.5,1.45)(1,0){11}{\line(0,1){0.1}} \end{picture} {\footnotesize {\noindent\bf\footnotesize Figure 2:} {\sl\footnotesize Construction of a state that possesses properties 1 and 2.} We start placing charges $q_{(x,y)}$ from the left to the right. We proceed in a way that the local electric field stays as close to $0$ as possible. The signs indicate plaquettes with charges $\pm\frac{1}{2}$. 
Plaquettes with no sign are charge-free (unfrustrated).} \end{figure} \begin{figure} \setlength{\unitlength}{1cm} \begin{picture}(14,7) \thicklines \put(1.5,4){\line(1,0){12}} \put(1.5,5){\line(1,0){12}} \put(1.5,6){\line(1,0){0.06175}} \put(13.43825,6){\line(1,0){0.06175}} \multiput(1.81175,6)(0.25,0){47}{\line(1,0){0.125}} \multiput(2,4)(1,0){12}{\line(0,1){2}} \thinlines \put(0.75,4.25){\vector(0,1){0.5}} \put(0.75,4){\makebox(0,0){$y$}} \put(11.25,3.5){\vector(1,0){1.5}} \put(11,3.5){\makebox(0,0){$x$}} \put(2.5,4.5){\makebox(0,0){+}} \put(3.5,5.5){\makebox(0,0){-}} \put(5.5,4.5){\makebox(0,0){+}} \put(5.5,5.5){\makebox(0,0){-}} \put(6.5,4.5){\makebox(0,0){-}} \put(9.5,5.5){\makebox(0,0){-}} \put(10.5,4.5){\makebox(0,0){+}} \put(11.5,5.5){\makebox(0,0){-}} \put(12.5,4.5){\makebox(0,0){-}} \put(12.5,5.5){\makebox(0,0){+}} \multiput(8.5,3.775)(0,0.1){24}{\line(0,1){0.05}} \put(8.5,3.6){\makebox(0,0){$N$}} \thicklines \put(1.45,1.5){\line(1,0){1.05}} \put(2.5,2){\line(1,0){1}} \put(3.5,1.5){\line(1,0){3}} \put(6.5,1){\line(1,0){2}} \put(8.5,2){\line(1,0){1}} \put(9.5,1.5){\line(1,0){1}} \put(10.5,2){\line(1,0){1}} \put(11.5,1.5){\line(1,0){2}} \put(2.5,1.5){\line(0,1){0.5}} \put(3.5,1.5){\line(0,1){0.5}} \put(6.5,1){\line(0,1){0.5}} \put(9.5,1.5){\line(0,1){0.5}} \put(10.5,1.5){\line(0,1){0.5}} \put(11.5,1.5){\line(0,1){0.5}} \put(8.5,0.75){\makebox(0,0){$N$}} \put(1,2){\makebox(0,0){$+\frac{1}{2}$}} \put(1,1){\makebox(0,0){$-\frac{1}{2}$}} \put(3.35,5.35){\vector(-1,-1){0.65}} \put(5.5,5.35){\vector(0,-1){0.65}} \put(11.35,5.35){\vector(-1,-1){0.65}} \put(12.5,4.65){\vector(0,1){0.65}} \thinlines \put(1.45,1){\line(1,0){0.1}} \put(1.45,2){\line(1,0){0.1}} \put(1.5,0.5){\vector(0,1){2}} \put(1.45,1.5){\vector(1,0){12.1}} \put(1.3,2.8){\makebox(0,0){$E_x$}} \put(13.75,1.25){\makebox(0,0){$x$}} \multiput(2.5,1.45)(1,0){11}{\line(0,1){0.1}} \multiput(8.5,1.025)(0,0.1){9}{\line(0,1){0.05}} \end{picture} {\footnotesize {\noindent\bf\footnotesize Figure 3:} {\sl\footnotesize Example of a state with properties 1~and 2~for RBC, while $E_0=\frac{1}{2}$.} For RBC, proceeding in the construction as described in the text, the last charge to be placed is of the same sign as the first. This is obvious from equation (\ref{E0}), $E_0 = - \frac{1}{2}\sum\limits_{x=1}^N\, q_x^+$, which indicates that, in general, there is a surplus of two charges $q_{(x,y)}$ with opposite sign to $E_0=\pm\frac{1}{2}$ (here $E_0=\frac{1}{2}$), to take account of the fact that for RBC the potentials $U^{\pm}_{\mbox{\tiny R}}$ are antiperiodic (see (\ref{URS})). But still all charges, but the first and the last one, can be grouped into dipoles as announced in the text.} \end{figure} The exact ground state configuration remains unknown, but we know that it minimises the Coulomb energy independently of the other energies involved and have characterised the set of Coulomb energy ground states. [Let us just point out here that, when passing from PBC to RBC, one conserves by virtue of (\ref{E0}) and (\ref{constr}) the property $E_0=0$ or $E_0\ne 0$ for the ground states at both boundary conditions, due to the fact that $\frac{\pi^{12}_N+\pi^{21}_N}{2\pi}$ stays the same [see also Appendix A, especially equation (\ref{E0g})]. So the Coulomb energy stays indeed the same.] The only remaining degrees of freedom in this set are the directions of single dipoles: The degeneracy is lifted by the other terms in the Hamiltonians of equation (\ref{HE}), which fix these directions. 
The Coulomb energy of the system being the same for the different boundary conditions, it is the effect of these {\em other}\/ terms that gives rise to the difference between ground state energies when one varies the boundary conditions. We address this issue in detail in the following section. Although we do not know the exact ground state configurations, the above properties suffice to analyse and determine the ground state energy differences for $N\to \infty$. \setcounter{equation}{0} \section {Boundary conditions and ground state energy differences} \subsection {Generalities} There is a general relation between the finite size scaling of the energy difference, $\Delta E^{(N)} \stackrel{N\to \infty}{\simeq} J N^{-y}$ (where $J$ is the energy scale), of the ground states of a system for different boundary conditions and the corresponding correlation length, $\xi(T)$, at a finite temperature $T$: the correlation length $\xi(T)$ is set by $\Delta E^{(\xi)} \sim k_{\mbox{\tiny B}}T$, hence \begin{equation}\label{xi} \xi (T) \sim \left( {{k_{\mbox{\tiny B}}T} \over J} \right) ^{-{1 \over y}}\,\, , \qquad T\to 0\,\, . \end{equation} [Let us just note here that the energy difference may be either concentrated in a domain wall or associated with a continuous variation of the order parameter.] In the Villain model, we may study the spin-spin correlation and the chirality-chirality correlation by applying APBC and RBC \cite{KaT,NHM,NH}. So we have to calculate \begin{equation}\label{Ediff} \begin{array}{lcc} \Delta E_{\mbox{\tiny AP}}^{(N)} &=& E_{\mbox{\tiny AP}}^{(N)} - E_{\mbox{\tiny P}}^{(N)} \,\, ,\qquad N\to \infty\,\, ,\\ \Delta E_{\mbox{\tiny R}}^{(N)} &=& E_{\mbox{\tiny R}}^{(N)} - E_{\mbox{\tiny P}}^{(N)}\,\, ,\qquad N\to \infty\,\, , \end{array} \end{equation} where $E_{\mbox{\tiny P}}^{(N)}$, $E_{\mbox{\tiny AP}}^{(N)}$, and $E_{\mbox{\tiny R}}^{(N)}$ are the ground state energies under P, AP, and R boundary conditions, respectively. In this section and in Appendix B, we will call a dipole with charges $q_{(x,y)}$ and $q_{(x',y')}$ {\em slanted}\/ if $y \ne y'$, and {\em horizontal}\/ if $y=y'$. [With this definition, the slanted ones include the vertical dipoles.] As the probability for a plaquette to be frustrated is the same for all plaquettes, a dipole is as likely to be slanted as horizontal. Furthermore, writing $U^-$ for $U^-_{\mbox{\tiny P}}$ at PBC (\ref{U-P}) and for $U^-_{\mbox{\tiny R}}$ at RBC (\ref{UR}), the weak (dipolar) interaction $U^-(\ell)$ has the property \begin{equation} U^-(\ell ) > \sum_{{\ell ' = \ell +1}}^\infty U^- (\ell ') \end{equation} for all boundary conditions, so that we may approximate its effect by restricting the interactions of each nonzero charge $q_x^-$ to those with its two nonvanishing neighbouring charges. Upon renumbering the nonzero charges $q_x^-$ on the frustrated columns by a new index $s = 1,2,\ldots,N_c$ (where $N_c$ is the total number of the frustrated columns with nonzero $q_x^-$), we can finally rewrite the effective weak (dipolar) interaction as \begin{equation}\label{Ueff} \sum\limits_{x,x'}\, q_x^-q_{x'}^-\, U^-(x-x') = \sum_{{s=1}}^{N_c} U_s q_s^-q_{s+1}^- + U(0) \sum_{{s=1}}^{N_c} (q_s^-)^2 \,\, , \end{equation} where $q_{N_c+1} \equiv q_1$ for PBC/APBC and $q_{N_c+1} \equiv -q_1$ for RBC. The charges $q_s^-$ take the values $\pm {1\over 2}$ and $\pm 1$, and the $U_s$ are independent quenched random interaction constants.
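To make this property explicit, note that $|U^-(\ell)|$ decays as $(3-2\sqrt{2})^{\ell}$ (cf.~(\ref{UR})), so that, up to boundary corrections and the sign factors $\sigma$ (which can only reduce the sum), a short worked check with the geometric series gives \[ \frac{1}{|U^-(\ell)|}\sum_{\ell'=\ell+1}^{\infty}|U^-(\ell')| = \sum_{s=1}^{\infty}(3-2\sqrt{2})^{s} = \frac{3-2\sqrt{2}}{1-(3-2\sqrt{2})} = \frac{(\sqrt{2}-1)^{2}}{2(\sqrt{2}-1)} = \frac{\sqrt{2}-1}{2}\approx 0.21 < 1\,\, , \] i.e., each bond dominates the combined effect of all longer ones, which justifies keeping only the nearest nonvanishing neighbours in (\ref{Ueff}).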
Since in the set of states that we consider the Coulomb energy is boundary condition independent, we will deduce the energy differences, $\Delta E_{\mbox{\tiny AP}}^{(N)}$ and $\Delta E_{\mbox{\tiny R}}^{(N)}$ ($N \to \infty$), from (\ref{Ueff}) and from the global spin wave term in (\ref{HE}). The large $N$ limit is understood in what follows. \subsection {Antiperiodic boundary conditions}\label{APB} The actual ground state minimises the second and third term in ${\cal H}_{\mbox{\tiny P}}$ and ${\cal H}_{\mbox{\tiny AP}}$, equation (\ref{HE}), within the space of degenerate ground states of the Coulomb energy, characterised in the preceding section. The second term, rewritten in equation (\ref{Ueff}), is the weak interaction between the $q_x^-$. In Appendix B we show that its lowest-lying excitation lies typically an energy amount $\sim JN^{-y_c}$ above the ground state, and determine the exponent $y_c=\log_{\frac{8}{3}}(3+2\sqrt{2}) = 1.7972\ldots$ . The third term is \begin{equation}\label{cond0} \begin{array}{lcl} \frac{8\pi^2}{N}\left(n+\frac{1}{2}\sum\limits_x q_{(x,1)} + \sum\limits_x \frac{\pi_{(x,1)}}{2\pi}\right)^2 &{}&\mbox{in}\,\, {\cal H}_{\mbox{\tiny P}}\,\, , \\[3mm] \frac{8\pi^2}{N}\left(n+\frac{1}{2}\sum\limits_x q_{(x,1)} + \sum\limits_x \frac{\pi_{(x,1)}}{2\pi} + \frac{1}{2}\right)^2 &{}&\mbox{in}\,\, {\cal H}_{\mbox{\tiny AP}}\,\, , \end{array} \end{equation} where we have used the neutrality condition to write $\sum\limits_x q_x^- = 2\, \sum\limits_x q_{(x,1)}$. The terms in (\ref{cond0}), of order $N^{-1}$, have their origin in a global spin wave, i.e., of wavelength $> N$, which helps the system to adjust to its boundary conditions when there is a rotational mismatch (cf.~\cite{NHM,NH}). One might wonder whether it is always possible, by choosing $n$ properly, to make the terms in (\ref{cond0}) vanish in the ground state in ${\cal H}_{\mbox{\tiny P}}$ and/or ${\cal H}_{\mbox{\tiny AP}}$. As $n+ \sum\limits_{x=1}^N {\pi_{(x,1)} \over {2\pi}} \in \{\,0, \pm {1 \over 2}, \pm 1,\ldots\,\}$, this obviously depends on the number of nonzero charges $q_{(x,1)}$. Hence we have two cases: \begin{itemize} \item[{\em (i)}] the number of frustrated plaquettes $(x,1)$ is even; \item[{\em (ii)}] the number of frustrated plaquettes $(x,1)$ is odd. \end{itemize} Correspondingly: \begin{equation}\label{cond} \begin{array}{lcl} {1 \over 2} \sum\limits_{x=1}^N q_{(x,1)} \in \{\,0, \pm {1 \over 2}, \pm 1, \ldots\,\}&{}& \quad\mbox{case {\em (i)}\/}\,\, ,\\[2mm] {1 \over 2} \sum\limits_{x=1}^N q_{(x,1)} \in \{\, \pm {1 \over 4}, \pm\frac{3}{4}, \ldots\,\}&{}& \quad\mbox{case {\em (ii)}}\,\, . \end{array} \end{equation} We investigate these cases further. \vspace{0.3cm} \noindent {\em (i) Even number of frustrated plaquettes $(x,1)$}\/: Given the set of $\pi_{(x,1)}$ and possibly reversing a sequence of dipoles as in Appendix B to get\, $\frac{1}{2}\sum\limits_x q_{(x,1)} + \sum\limits_x \frac{\pi_{(x,1)}}{2\pi}$\, integer for PBC and half-integer for APBC or vice versa, the terms in (\ref{cond0}) vanish for both Hamiltonians by a proper choice of $n$. As there is a difference of $\frac{1}{2}$ in the term in parentheses in (\ref{cond0}), the ground states of ${\cal H}_{\mbox{\tiny P}}$ and ${\cal H}_{\mbox{\tiny AP}}$ differ by a reversal of a sequence of dipoles containing an odd number of $q_{(x,1)}\ne 0$, i.e., a sequence of dipoles among which an odd number is slanted.
Hence we obtain, reinserting $J$, \begin{equation}\label{APi} \Delta E_{\mbox{\tiny AP}}^{(N)} \sim \pm JN^{-y_c} , \quad \qquad \mbox{case {\em (i)}\/} \,\, , \end{equation} where the sign indicates that either state, at PBC or APBC, has the lower ground state energy. \vspace{0.2cm} \noindent {\em (ii) Odd number of frustrated plaquettes $(x,1)$}\/: Here, from (\ref{cond}), the terms (\ref{cond0}) in ${\cal H}_{\mbox{\tiny P}}$ and ${\cal H}_{\mbox{\tiny AP}}$ are always nonzero and the optimal value of $n$ will give an energy $J\frac{\pi^2}{4N}$ irrespective of the directions of the dipoles. These will thus be determined by the weak interaction only and be the same for both boundary conditions. The energy difference is hence \begin{equation}\label{APii} \Delta E_{\mbox{\tiny AP}}^{(N)} = 0, \quad \qquad \mbox{case {\em (ii)}} \,\/ , \end{equation} in this case. \subsection {Reflecting boundary conditions} The ground states at PBC and RBC minimise the second and the third term in ${\cal H}_{\mbox{\tiny P}}$ and the second term in ${\cal H}_{\mbox{\tiny R}}$, equation (\ref{HE}), within the set of states characterised in the preceding section. So, again, one has to distinguish between an even and odd number of $q_{(x,1)}\ne 0$, when calculating the typical ground state energy difference. \vspace{3mm} \noindent {\em (i) Even number of frustrated plaquettes $(x,1)$}\/: We saw in section \ref{APB} that the term for PBC in (\ref{cond0}) vanishes in the ground state of ${\cal H}_{\mbox{\tiny P}}$ and that, for half of the samples, one finds the ground state at PBC by minimising the weak (dipolar) interaction. If one changes to RBC for {\em this}\/ half of the samples, while keeping the PBC ground state configuration, one will almost always be able to lower the energy [Note that the sign of the bond that passes column $N$, which is almost never the weakest one, has changed (!)]: one just has to reverse a sequence of dipoles starting at column $N$ in such a way that only one of the weakest bonds is broken. For the {\em other}\/ half of the samples, the cancellation of the global spin wave term implies a reversal of a sequence of dipoles in the configuration {\em after}\/ minimisation of the dipolar interaction. For these latter samples, when one changes to RBC, one has to reverse again the same sequence that was reversed to get the ground state at PBC. In {\em both}\/ cases, this leads to \begin{equation}\label{Ri} \Delta E_{\mbox{\tiny R}}^{(N)} \sim \pm J N^{-y_c} , \quad \qquad \mbox{case {\em (i)}\/} \,\, , \end{equation} for the difference in ground state energies. The minus sign applies for the first half of the samples, the plus for the second half. \vspace{2mm} \noindent (ii) {\em Odd number of frustrated plaquettes $(x,1)$}\/: The global spin wave term never vanishes in PBC, so that \begin{equation}\label{Rii} \Delta E_{\mbox{\tiny R}}^{(N)} \simeq \frac{\pi^2}{4} J N^{-1} \, , \quad \qquad \mbox{case {\em (ii)}}\, \, , \end{equation} neglecting a possible contribution of order $J N^{-y_c}$. \setcounter{equation}{0} \section{Conclusion} We have studied the $XY$ spin glass with $\pm J$ bonds on a tube lattice. This system has both a continuous (spin) and a discrete (chiral) symmetry, and hence two order parameters play a r\^ole. Our purpose was to determine the divergence, for $T\to 0$, of the chiral and the spin correlation lengths, via the finite size scaling of the ground state energy differences under different boundary conditions. 
In the presence of two symmetries, the usual single-symmetry relation between the finite size scaling exponents of the ground state energy difference and the correlation length has to be extended in a nontrivial way. Nevertheless, the spin correlation length exponent $y_s$ (see equation (\ref{xi})) is given by the energy difference when one passes from periodic to antiperiodic boundary conditions, namely $\overline{(\Delta E_{\mbox{\tiny AP}}^{(N)})^2}^\frac{1}{2} \sim J N^{-y_s}$. New boundary condi\-tions, reflecting ones, were introduced \cite{KaT} to determine the chirality correlation length expo\-nent $y_c$. The difficulty in performing such an analysis on a general $N\times M$ lattice is that one does not know how to construct the ground states of the disordered systems. The tube lattice of this work, however, just as the ladder lattice studied earlier \cite{NHM}, allows for a precise theoretical analysis of this relation. In contrast to the ladder lattice, the tube lattice still has long-range interactions between its chiralities, and is therefore closer to a two-dimensional system. We first apply the well-known transformation \cite{Vspg,NH} of the $XY$ spin glass into a Coulomb gas, a system of chiral variables (also called charges). The resulting effective Hamiltonian can be cast in the form (\ref{HE}) where it is the sum of three terms: {\em \begin{itemize} \item[(i)] A one-dimensional Coulomb interaction, linearly increasing with distance, bet\-ween charges $q_1^+,$ $q_2^+, \ldots, q_N^+$; in {\rm (\ref{HE})}, this term has been expressed as the volume sum of the energy density of the electric field $E_x$. \item[(ii)] A ``dipolar'' interaction that decreases exponentially with distance between the $q_1^-,q_2^-,\ldots,q_N^-$. \item[(iii)] The energy of a spin wave needed to match PBC or APBC (but absent under RBC), and whose wavenumber depends on the total electric dipole moment. \end{itemize}} The third term disappears in the thermodynamic limit. Its relevance for a finite size scaling analysis was first pointed out by Fisher, Tokuyasu and Young \cite{FTY}. Moreover, the three terms are, on the one hand, coupled by {\em local}\/ constraints that link the allowed values of $q_x^+$ and $q_x^-$ with the fixed values of the ferromagnetic or antiferromagnetic bonds $\pi_{ij}$ between the spins on the lattice, and, on the other hand, by a {\em global}\/ constraint on the total charge (zero for PBC and APBC, and even or odd for RBC). Taking these constraints into account, we identify the low-lying excitations of the three terms, respectively: {\em \begin{itemize} \item[(i)] Coulomb excitations that cost an energy of order $J$. \item[(ii)] Chiral excitations, obtained by reversing a sequence of chiral variables, that cost an energy $\sim J N^{-y_c}$ with $y_c=1.7972\ldots$ . \item[(iii)] Global spin waves that cost an energy $\sim J N^{-1}$. \end{itemize}} The $\pm J$ $XY$ spin glass on the ladder lattice \cite{NHM} consists of both interactions {\em (ii)}\/ and {\em (iii)}\/. Due to the additional long-range interaction {\em (i)}\/, the tube is closer to the two-dimensional model. In spite of the number of interactions in competition, we were able to characterise and delimit the set of charge configurations, within which lies the ground state. In the configurations contained in this set, the charges take the values $\pm {1 \over 2}$ on the frustrated plaquettes and zero on the others, and form a chain of dipoles.
We now give a summary of our results, and recall numerical results for comparison. When changing boundary conditions from PBC to APBC, or RBC, it is the excitations {\em (ii)}\/ and {\em (iii)}\/ that give {\em both}\/ energy differences, $\Delta E_{\mbox{\tiny AP}}^{(N)}$ and $\Delta E_{\mbox{\tiny R}}^{(N)}$. This implies the same conclusions as in \cite{NHM}: Firstly, the ground state obtained with P boundary conditions can adjust to AP boundary conditions {\em via a chiral excitation}\/, so that \begin{equation}\label{EAPres} \overline{(\Delta E_{\mbox{\tiny AP}}^{(N)})^2}^\frac{1}{2} \sim J N^{-y_c}\,\, , \qquad y_c=1.7972\ldots\,\, . \end{equation} The last equation contains {\em no reference to spin waves}\/ and means that $y_s=y_c$. Secondly, passing from P to R boundary conditions releases a global spin wave (as was first observed by Kawamura and Tanemura \cite{KaT} in $d=2$) in half of the samples, but does not do so in the other half. In $d=2$, Kawamura and Tanemura performed a numerical analysis of the different ground state energies of the cosine $XY$ model. They find, as $N\rightarrow \infty$, \begin{equation}\label{KaTc} \begin{array}{lcllcl} \overline{(\Delta E_{\mbox{\tiny AP}}^{(N)})^2}^\frac{1}{2} &\simeq& a\, N^{-y_s}\,\, ,&y_s &\approx& 0.84\, , \\ \overline{(\epsilon^{(N)}_{\mbox{\tiny R}} - \overline{\epsilon^{(N)}_{\mbox{\tiny R}}})^2}^\frac{1}{2} &\simeq& b\,N^{-y_c} + {\cal O}(N^{-y_s})\,\, , &y_c &\stackrel{<}{\approx}& 0.38\, , \end{array} \end{equation} where $a$ and $b$ are constants, and a new quantity, namely \begin{equation} \epsilon^{(N)}_{\mbox{\tiny R}} \equiv \Delta E_{\mbox{\tiny R}}^{(N)} - \min(0,\Delta E_{\mbox{\tiny AP}}^{(N)})\,\, , \end{equation} has been introduced. Thus, they get two distinct exponents $y_s$ and $y_c$, with $y_s > y_c$, and conclude that the chiralities order on a longer scale than the spin variables. For the tube, upon collecting our results [equations (\ref{APi}),(\ref{APii}),(\ref{Ri}) and (\ref{Rii})], we get \begin{equation}\label{ERres} \overline{(\epsilon^{(N)}_{\mbox{\tiny R}} - \overline{\epsilon^{(N)}_{\mbox{\tiny R}}})^2}^\frac{1}{2} = \frac{\pi^2}{8} J N^{-1} + {\cal O}(N^{-y_c})\,\, , \end{equation} i.e., the R boundary conditions probe a global spin wave term proportional to $N^{-1}$. If we now speculate on the extrapolation of our results to $d=2$, we expect for the quantities of equation (\ref{KaTc}) that $\Delta E_{\mbox{\tiny AP}}$ would yield a chiral exponent $y_c$ as in (\ref{EAPres}) but with a smaller value (since $y_c$ should vanish at some, still higher, lower critical dimension); and that $\epsilon_{\mbox{\tiny R}}$ would yield the spin wave exponent $d-2=0$. Kawamura and Tanemura, in contrast, interpret their simulation according to (\ref{KaTc}). We expect that simulations on larger $2d$ systems will confirm our scenario. \vspace{4.5cm} \setcounter{equation}{0} \renewcommand{\theequation}{\mbox{A.\arabic{equation}}} \begin{appendix} \noindent {\Large\bf Appendix A} \vskip 0,5cm In this appendix, we prove that in the large $N$ limit the ground states of the system for the different boundary conditions possess the properties {\em 1}\/ to {\em 3}\/ announced in section III. In the calculations, we write the expression of the weak (dipolar) interaction in its form at PBC/APBC (equation (\ref{U-P})), with again $J=2$. The arguments are nevertheless readily rewritten for RBC, including the appropriate factors of $\sigma(x,x')$ (equation (\ref{UR})).
Furthermore, we neglect the global spin wave term ${\cal O}(\frac{1}{N})$ that appears for PBC/APBC. Upon proper choice of $n$, this term contributes at most $\frac{\pi^2}{2N}$ to the ground state energies at PBC/APBC, which is small for $N\rightarrow \infty$, in comparison to the other energies involved.\\[5mm] In preparation for the proofs of the ground state properties {\em 1}\/ to {\em 3}\/, we show, as a first step, that \vspace{-0.3cm} {\em \begin{itemize} \item[(i)] in the ground state, the charges $q_x^-$ take values $|q_x^-|\laq \frac{3}{2}$, \end{itemize}}\vspace{-0.3cm} \noindent and, using {\em (i)}\/, as a second step, that \vspace{-0.3cm} {\em \begin{itemize} \item[(ii)] in the ground state, the charges $q_x^-$ take values $|q_x^-|\laq 1$. \end{itemize}\vspace{-0.3cm} } ${}$ \noindent {\bf Proofs of {\em (i)} and {\em (ii)}:} \noindent {\em (i) In the ground state, the charges $q_x^-$ take values such that $|q_x^-|\laq \frac{3}{2}$.\\[0.1cm]} Let us look at some state with charges $q_x^{-,0}$ such that \begin{equation} q \equiv \max\{|q_x^{-,0}|\} \gaq 2\,\, . \end{equation} Let $q_{x_0}^{-,0}$ be a charge with $|q_{x_0}^{-,0}|=q$. By charge reversal symmetry, we may take $q_{x_0}^{-,0}$ positive without loss of generality. Consider now the state in which $q_{x_0}^{-,0}$ is changed into $q_{x_0}^{-,0}-2$, while all other charges $q_x^{-,0}$ and all charges $q_x^{+,0}$ are kept unchanged. [Note that, by an appropriate change of $q_{(x,1)}$ and $q_{(x,2)}$, one can add an arbitrary multiple of 2 to some charge $q_x^-$ without changing $q_x^+$ (see (\ref{q})).] The difference in energy $\Delta E$ between the (final) state, with $q_{x_0}^{-,0}$ changed, and the initial state can readily be calculated. As the charges $q_x^+$ are unchanged, it comes from the difference in weak interaction energy only. For $q_{x_0}^{-,0}\gaq 2$, one finds \begin{equation} \begin{array}{lcl} \Delta E &=& \frac{\sqrt{2}}{8}\pi^2\left[(q-2)^2-q^2\right] + \frac{\sqrt{2}}{8}\pi^2\left[q-(q-2)\right]2\sum\limits_{s>0}(q_{x_0+s}^{-,0} + q_{x_0-s}^{-,0})(3-2\sqrt{2})^s \\[2mm] {} &=& \frac{\sqrt{2}}{8}\pi^2 \left[-4(q-1) + 4\sum\limits_{s>0}(q_{x_0+s}^{-,0}+q_{x_0-s}^{-,0})(3-2\sqrt{2})^s\right]\,\, . \end{array} \end{equation} We have \begin{equation} \left|4\sum\limits_{s>0}(q_{x_0+s}^{-,0}+q_{x_0-s}^{-,0})(3-2\sqrt{2})^s\right| \laq 8q\left|\sum\limits_{s>0}(3-2\sqrt{2})^s\right| \,\, , \end{equation} and thus, summing the geometric series and using $q\gaq 2$, \begin{equation} \Delta E \laq \frac{\sqrt{2}}{8}\pi^2 \left[-4(q-1) + 4(\sqrt{2}-1)q\right] < 0\,\, . \end{equation} So the final state is lower in energy than the initial state. Hence a state in which $q\gaq 2$ is not the ground state. \vspace{0.1cm} \noindent {\em (ii) In the ground state, the charges $q_x^-$ take values such that $|q_x^-|\laq 1$.\\[0.1cm]} Let us take some state with charges $q_x^{-,0}=0,\pm\frac{1}{2},\pm 1, \pm\frac{3}{2}$. Let $n$ be the number of charges $\pm\frac{3}{2}$, with $n>0$. Suppose for the moment that $n<N$. There then exists a sequence of charges $q_{x_0}^{-,0}$, $q_{x_0+1}^{-,0}$,$\cdots$, $q_{x_0+n'-1}^{-,0}=\pm\frac{3}{2}$ ($1\laq n'\laq n$) that is enclosed between charges $q_x^{-,0}$ of absolute value $\laq 1$ (i.e., $|q_{x_0-1}^{-,0}|$,$|q_{x_0+n'}^{-,0}|\laq 1$). Consider now the state in which all charges $q_x^{-,0}=\pm\frac{3}{2}$ in this sequence are replaced by $\mp\frac{1}{2}$, while all other charges $q_x^{-,0}$ and all charges $q_x^{+,0}$ are kept unchanged.
As in {\em (i)}\/, the difference in energy $\Delta E$ between the (final) state, with the changes, and the initial state is due to the difference in weak (dipolar) interaction energy only. The difference $\Delta E_0$ in weak interaction energy, coming from the self-interaction terms in the sum, is \begin{equation} \Delta E_0 = -2\frac{\sqrt{2}}{8}\pi^2 n'\,\, , \end{equation} the one from the nearest-neighbour interaction terms is of absolute value \begin{equation} |\Delta E_1| \laq 2\frac{\sqrt{2}}{8}\pi^2(3-2\sqrt{2}) \left[(n'-1)\frac{9}{4}+2\cdot\frac{3}{2}\cdot 1\right] + 2\frac{\sqrt{2}}{8}\pi^2(3-2\sqrt{2}) \left[(n'-1)\frac{1}{4}+2\cdot\frac{1}{2}\cdot 1\right]\,\, , \end{equation} while the energy difference of all other terms is of absolute value \begin{equation} |\Delta E_2| \laq 4\frac{\sqrt{2}}{8}\pi^2\frac{(3-2\sqrt{2})^2} {1-(3-2\sqrt{2})}\frac{9}{4}n' + 4\frac{\sqrt{2}}{8}\pi^2\frac{(3-2\sqrt{2})^2} {1-(3-2\sqrt{2})}\frac{3}{4}n'\,\, . \end{equation} To obtain the last inequality, we have substituted for all charges $q_x^-$, but the ones in the sequence, the maximal possible absolute value $\frac{3}{2}$ and taken all terms in the sum to be negative in the initial state and positive in the final state [which is obviously an upper bound for the energy change, but impossible to realise]. The overall energy difference is thus \begin{equation}\label{dE2} \begin{array}{lcl} \Delta E &\laq& -2\frac{\sqrt{2}}{8}\pi^2n' + \frac{\sqrt{2}} {8}\pi^2(3-2\sqrt{2})\left[5(n'-1)+8\right] + 12\frac{\sqrt{2}} {8}\pi^2\frac{(3-2\sqrt{2})^2}{1-(3-2\sqrt{2})}n'\\ {}&=& - \frac{\sqrt{2}}{8}\pi^2[n'(29-20\sqrt{2}) - 9 + 6\sqrt{2}] < 0\,\, . \end{array} \end{equation} Again, the energy of the final state is lower than the energy of the initial state. If $n=N$, a similar reasoning (with $n'=n=N$) leads to the same conclusion. This proves {\em (ii)}\/.\\[0.7mm] \noindent {\bf We are now prepared to show that the ground state has the properties {\em 1}\/ to {\em 3}\/, announced in section III.}\\[0.5mm] \noindent {\em 1. In the ground state, the electric field satisfies $|E_x|\laq \frac{1}{2}$. The constant background field takes values $|E_0|\laq \frac{1}{2}$.} \noindent We first note that the electric field is constrained to be of absolute value $\laq 1$ for the ground state configuration. This can be seen as follows. {}From (\ref{HE}), we see that minimisation of the Coulomb interaction part of the Hamiltonians means minimising the mean square value of the local electric field $E_x$. This leads for all Hamiltonians to \begin{equation}\label{E0g} E_0 = \left\{ \begin{array}{r@{\quad \quad}l} 0\, , & \mbox{if} \, \, \frac{\pi_N^{12} + \pi_N^{21}}{\pi} \bmod 2 = 0 \,\, , \\ \pm{1 \over 2}\, , & \mbox{if} \, \, \frac{\pi_N^{12} + \pi_N^{21}}{\pi} \bmod 2 = 1 \,\, , \end{array} \right. \end{equation} for the constant background field $E_0$. The electric field, $E_x$, should be locally optimal, that is stay as close to $0$ as possible. For any given state with given sets of charges $\{q_x^+\}$ and $\{q_x^-\}$, one can go successively through the system, from $x=1$ to $x=N$, changing, whenever necessary, the charges $q_x^+$ in such a way that the value of the electric field is bounded in absolute value by $1$ in the final state: At every column with no frustrated or two frustrated plaquettes, the change in electric field can be bounded to be $0,\pm 1$ by an appropriate change of $q_x^+$, if necessary, and to be $\pm\frac{1}{2}$ on every column with exactly one frustrated plaquette. 
[By the definition of $q_x^+$ and $q_x^-$, equation (\ref{q}), one can add an arbitrary multiple of 2 to some charge $q_x^+$ without changing $q_x^-$, by an appropriate change of $q_{(x,1)}$ and $q_{(x,2)}$.] Whenever one encounters a column $x_0$, during the above procedure, where the electric field jumps to a value $|E_{x_0}|\gaq\frac{3}{2}$, one changes $q_{x_0}^+$ by the appropriate amount, as well as the next nonzero charge $q_x^+$, say at $x_0'$, by the opposite amount (to conserve charge neutrality). By the definition of the electric field, cf.~equation (\ref{E1}), every time one changes the charges at $x_0$ and $x_0'$ only, the electric field remains unchanged on columns $x<x_0$ and $x\gaq x_0'$. So in the end, while keeping the set of charges $\{q_x^-\}$ fixed, the value of the electric field is bounded in absolute value by $1$. In particular, the final state is lower in energy than the initial state. \vspace{2mm} Let us now take a charge configuration of the system (with charges $q_x^{+,0}$, $q_x^{-,0}$) such that the electric field jumps, say at column $x_0$, to a value $|E_{x_0}|=1$, and stays at this value until column $x_0'$ ($>x_0$), where one finds hence the next nonzero charge $q_x^{+,0}$. Consider the state in which $q_{x_0}^{+,0}$ is changed by an amount of $1$ to some value $|E'_{x_0}|\laq\frac{1}{2}$ and $q_{x_0'}^{+,0}$ by the opposite amount (to conserve charge neutrality), while keeping the charges $q_x^{+,0}$ (and $q_x^{-,0}$) on all other columns fixed. We observe that, from (\ref{E1}), the electric field is unchanged for $x<x_0$ and $x\gaq x_0'$ and that by the definition of $q_x^+$ and $q_x^-$, the charges $q_{x_0}^{-,0}$ and $q_{x_0'}^{-,0}$ changed. Anyhow, from {\em (ii)}\/, the absolute value of the charges $q_x^-$ can be taken to be bounded by $1$ in both states, as otherwise the energy of the state can still be lowered. The difference $\Delta E^F$ in strong interaction energy between the final state and the initial state is \begin{equation} \Delta E^F \laq -2\pi^2\frac{3}{4}\,|x_0-x'_0|\,\, , \end{equation} while the difference $\Delta E^f$ in weak interaction energy is bounded by \begin{equation} \begin{array}{lcl} |\Delta E^f| &\laq& \,\,\,\,\, 2\frac{\sqrt{2}}{8}\pi^2 \left[1+2\left|\sum\limits_{s>0}q_{x_0}^{-,0}(q_{x_0+s}^{-,0} +q_{x_0-s}^{-,0})(3-2\sqrt{2})^s\right|\right]\\[4mm] {}&{}&+\,\, 2\frac{\sqrt{2}}{8}\pi^2\left[1+2 \left|\sum\limits_{s>0}q_{x_0'}^{-,0}(q_{x_0'+s}^{-,0} +q_{x_0'-s}^{-,0})(3-2\sqrt{2})^s\right|\right]\,\, . \end{array} \end{equation} [The factors $2$ in the last inequality stem from the fact that both the energy of the initial state and the one of the final state enter in the difference.] Hence we get for the total energy difference \begin{equation} \Delta E \laq -2\pi^2\frac{3}{4} + 4\frac{\sqrt{2}}{8}\pi^2 [1+2(\sqrt{2}-1)] < -\frac{\sqrt{2}-1}{2}\pi^2\,\, , \end{equation} i.e., the energy of the final state is lower than the one of the initial state. Thus $|E_x|\laq\frac{1}{2}$ for the ground state configuration. \noindent {\em 2. In the ground state, the charges $q_{(x,1)}$, $q_{(x,2)}$ take the values $0,\pm\frac{1}{2}$.}\/ \noindent From {\em (ii)}\/ and {\em 1}\/, the absolute values of the charges on the plaquettes in the ground state are bounded by 1. In the case that $E_{x-1}=0$ in the ground state, it is easy to see from {\em (ii)}\/ and {\em 1}\/ that the charges $q_{(x,1)}$, $q_{(x,2)}$ on the plaquettes of column $x$ are of value $0,\pm\frac{1}{2}$.\\[-0.5cm] Let us thus consider the case $|E_{x-1}|=\frac{1}{2}$. 
If there is at least one frustrated plaquette on column $x$, it is again obvious from {\em (ii)}\/ and {\em 1}\/ that $q_{(x,1)}$ and $q_{(x,2)}$ are of absolute value $\laq\frac{1}{2}$. Suppose now that there are two nonfrustrated plaquettes on a certain column $x_0$. From what we have stated at the beginning of this paragraph, it could be that there is a charge $\pm 1$ on one of the plaquettes. Let us furthermore suppose that there is a sequence of columns $x_0$, $x_0+1$,$\cdots$, $x_0+n-1$ that carry charges $q_x^-$ of absolute value $1$.\\[-0.5cm] If the number $n$ of charges $q_x^-$ in the sequence is even, consider the state in which all the charges $q_{x_0}^-$,$q_{x_0+1}^-$, $\cdots$,$q_{x_0+n-1}^-$ are changed to $0$ [this is possible as it conserves charge neutrality, as one convinces oneself with the help of {\em 1}\/]. The absolute value of the electric field is the same in both the initial and the final state, so that the difference in energy is again due to the weak interaction only: \begin{equation} \Delta E \laq -\frac{\sqrt{2}}{8}\pi^2n+ 4\frac{\sqrt{2}}{8}\pi^2n \frac{3-2\sqrt{2}}{1-(3-2\sqrt{2})} = \frac{\sqrt{2}}{8}\pi^2n [2\sqrt{2}-3] < 0\,\, , \end{equation} and the energy of the final state is lower than the one of the initial state.\\[-0.3cm] In the case that $n$ is odd, one can take $n=1$ and $|q_{x-1}^-|$,$|q_{x+1}^-|\laq\frac{1}{2}$ without loss of generality, in view of the preceding paragraph. Suppose first that one of the charges $q_{x-1}^-$,$q_{x+1}^-$ is of absolute value $\frac{1}{2}$ in the ground state, say $q_{x+1}^-$ and say $|q_{(x+1,1)}|=\frac{1}{2}$. Let us compare the energy of this state with the one in which $q_x^-$ is changed to $0$ and $q_{(x+1,1)}$ replaced by $-q_{(x+1,1)}$ [note that $q_x^-+q_{(x+1,1)}=-q_{(x+1,1)}$ from {\em 1}\/, so that the absolute value of the electric field remains unchanged and charge neutrality is conserved]. Using {\em (ii)}\/, the difference in energy is \begin{equation} \begin{array}{lcl} \Delta E &\laq& \frac{\sqrt{2}}{8}\pi^2\left[-1+ 2(|q_{x-1}^{-,0}q_{x}^{-,0}| +|q_{x}^{-,0}q_{x+1}^{-,0}|+|q_{x+1}^{-,0}q_{x+2}^{-,0}|) (3-2\sqrt{2})\right.\\[2mm] {}&{}& \hspace{1.5cm}+\, 2|-q_{x+1}^{-,0}q_{x+2}^{-,0}|(3-2\sqrt{2}) +2\sum\limits_{s>1}|q_{x}^{-,0}||q_{x-s}^{-,0}+ q_{x+s}^{-,0}|(3-2\sqrt{2})^s\\[2mm] {}&{}& \hspace{1.5cm}+\, 2\cdot 2\sum\limits_{s>1}|q_{x+1}^{-,0}||q_{x+1-s}^{-,0} +q_{x+1+s}^{-,0}| (3-2\sqrt{2})^s\Big]\\[3mm] {}&\laq& \frac{\sqrt{2}}{8}\pi^2\left[-1+8(3-2\sqrt{2}) \frac{1}{2}+8\frac{(3-2\sqrt{2})^2}{2\sqrt{2}-2}\right] = \frac{\sqrt{2}}{8}\pi^2\, (-17 + 12\sqrt{2}) < 0\,\, . \end{array} \end{equation} Again the final state is lower in energy than the initial state. Secondly, if $q_{x-1}^-=q_{x+1}^-=0$, there are again two possibilities. Either there are only nonfrustrated plaquettes on the columns $x-1$ and $x+1$, or, on at least one of them, both plaquettes are frustrated, say at $x+1$. In the last case, one has $q_x^+=-2q_{(x+1,1)}=-2q_{(x+1,2)}$ from {\em 1}\/; hence changing $q_x^-$ to $0$ and $q_{(x+1,1)},q_{(x+1,2)}$ to $-q_{(x+1,1)},-q_{(x+1,2)}$ conserves charge neutrality, the absolute value of the electric field and does not change $q_{x-1}^-=q_{x+1}^-=0$. 
Using {\em (ii)}\/, the energy difference between the final and the initial state is then \begin{equation} \begin{array}{lcl} \Delta E &\laq& \frac{\sqrt{2}}{8}\pi^2\left[-1+ 2\sum\limits_{s>1}|q_{x}^{-,0}||q_{x-s}^{-,0} +q_{x+s}^{-,0}|(3-2\sqrt{2})^s\right]\\[2mm] {} &\laq& \frac{\sqrt{2}}{8}\pi^2\left[-1+ 4\frac{(3-2\sqrt{2})^2}{2\sqrt{2}-2}\right] = \frac{\sqrt{2}}{8}\pi^2\, (-15 + 10\sqrt{2}) < 0\,\, . \end{array} \end{equation} In the case that $q_{(x-1,1)}=q_{(x-1,2)}= q_{(x+1,1)}=q_{(x+1,2)}=0$, there is a column $x-s$ or $x+s$ such that $q_{(x-\sigma,1)}=q_{(x-\sigma,2)}= q_{(x+\sigma,1)}=q_{(x+\sigma,2)}=0$ for all $1<\sigma<s$ and that one of the charges $q_{(x-s,1)}$,$q_{(x-s,2)}$,$q_{(x+s,1)}$,$q_{(x+s,2)}$ is nonzero. Considerations analogous to the ones earlier in this paragraph lead again to the conclusion that a state in which there is a nonzero charge on a column with two nonfrustrated plaquettes is not the ground state. So in the ground state, the charges $q_{(x,1)}$, $q_{(x,2)}$ take the values $0,\pm\frac{1}{2}$ only.\\[0.3cm] \noindent {\em 3. In the ground state, the charges $q_{(x,1)}$ and $q_{(x,2)}$ are equal on doubly frustrated columns if and only if $|E_{x-1}|=\frac{1}{2}$.} \noindent Following exactly the same lines as in {\em 2}\/, one shows that a state in which $q_{(x_0,1)}=-q_{(x_0,2)}$, i.e., $|q_{x_0}^-|=1$, on some doubly frustrated column $x_0$, while $|E_{x_0-1}|=\frac{1}{2}$, is higher in energy than the one in which $q_{(x_0,1)}=q_{(x_0,2)}$, i.e., $q_{x_0}^-=0$ [the appropriate change of some other charge, as in {\em 2}\/, to conserve charge neutrality, is tacitly understood]. It is evident from {\em 1}\/ that, in the ground state, $q_{(x_0,1)}=-q_{(x_0,2)}$ on some doubly frustrated column $x_0$, if $E_{x_0-1}=0$. \vspace{2cm} \setcounter{equation}{0} \renewcommand{\theequation}{\mbox{B.\arabic{equation}}} \noindent {\Large\bf Appendix B} \vskip 0,5cm We calculate the typical energy change in a tube of length $N$, in the limit $N \rightarrow \infty$, when a sequence of dipoles is reversed to adjust to APBC or RBC from PBC, as in section IV. This amount of energy is related to the typical length of the longest interval that contains no nonzero charges $q_x^-$ between two such (nonzero) charges. The reversals that are carried out in section IV do not necessarily involve {\it the} longest of these intervals because of the constraints: one must not break up a dipole, and in certain cases one has to reverse not just {\em any}\/ sequence, but one that contains an {\em odd}\/ number of slanted dipoles. Nevertheless, these reversals will still typically involve intervals that are of the same order as the longest interval. Since each plaquette is frustrated with probability $1 \over 2$ independently of the others, a tube of length $N$ will typically contain $N \over 4$ nonfrustrated columns, $\frac{N}{2}$ with one frustrated plaquette, and $N \over 4$ where both plaquettes are frustrated. From property {\em 3}\/ of the ground state configuration, half of the doubly frustrated columns will typically contain charges that belong to the same dipole and the other half charges that belong to two different dipoles. This is due to the fact that at a given column $E_x = 0$ or $E_x = \pm {1 \over 2}$ (i.e., $q_x^+$ integer or half-integer) with equal probability; in the first case, one has $q_x^- = \pm 1$, and in the other, $q_x^- = 0$. So, there are more charges $q_x^- = 0$ than there are nonfrustrated columns.
Since it is typically half of the doubly frustrated columns that give $q_x^- = 0$, we see that, again typically, ${5 \over 8}N$ of the columns carry a nonzero charge $q_x^-$, while ${3 \over 8}N$ of the columns are neutral in $q_x^-$. So the number of intervals between two nonzero charges $q_x^-$, that contain no other such (nonzero) charge, is ${5 \over 8}N$. The probability $p(\ell)$ for the two subsequent nonvanishing charges $q_x^-$ to be at a distance $\ell$ (i.e., to be separated by an interval of $\ell -1$ columns containing no nonzero charge $q_x^-$) is \begin{equation} p(\ell ) = \left( {3 \over 8} \right) ^{\ell -1} {5 \over 8}\, , \hspace{2cm} \ell = 1,2,\ldots\,\, . \end{equation} Let $P_N(m)$ be the probability distribution for the length $m$ of the longest one of these distances. Obviously, $P_N(m)$ equals the probability that all ${5 \over 8}N$ intervals have length $\ell \laq m$, minus the probability that they all have length $\ell \laq m - 1$; explicitly \begin{equation}\label{Pm} P_N(m) = \left[ \sum_{{\ell = 1}}^m p(\ell ) \right] ^{{5 \over 8}N} - \left[ \sum_{{\ell = 1}}^{m-1} p(\ell ) \right] ^{{5 \over 8}N} = \left[ 1 - \left({3 \over 8} \right) ^m \right] ^{{5 \over 8}N} - \left[ 1 - \left({3 \over 8} \right) ^{m-1} \right] ^{{5 \over 8}N} . \end{equation} When $N$ is large, $P_N(m)$ will be peaked around some large value of $m$. Its scaling form can be obtained if one transforms from $m$ to $m'$ according to \begin{equation}\label{m} m = \gamma \log N + m' \,\, , \end{equation} with $\gamma$ to be determined. Indeed, upon using (\ref{m}) and (\ref{Pm}), one finds \begin{equation} P_N(m) \,\,\stackrel{N\to\infty}{\simeq}\,\, x(1-x^{\frac{5}{3}})\,\, ,\qquad \mbox{with}\quad x\,=\,e^{-\frac{5}{8}\left(\frac{3}{8}\right)^m\,N}\,\, . \end{equation} This shows that $P_N(m)$ is effectively nonzero only for argument values \begin{equation} m = \log_{{8 \over 3}}N + {\cal O} (1) \,\, , \end{equation} and that the appropriate scaling limit reads \begin{equation} \begin{array}{llcl} {}&N &\rightarrow& \infty\, , \\ {}&m' &{}& \mbox{finite, fixed}\, , \\ \mbox{and}&\gamma &=& \frac{1}{\log\frac{8}{3}}\, . \end{array} \end{equation} Having thus obtained the typical length \begin{equation} m = \frac{\log N}{\log\frac{8}{3}} \end{equation} of the longest interval, we are able to determine the typical energy change of the mentioned dipole reversals. From the effective weak interaction (\ref{Ueff}) we deduce that the typical energy change due to such a reversal is of order $c\, U(m)$, where \begin{equation} U(m) = (3 - 2 \sqrt{2})^m \end{equation} is the energy change when breaking up a bond at distance $m$ and the constant $c$ is of order unity. The exponent $y_c$ is then obtained from the defining equation \begin{equation} c\, U(0)\, N^{-y_c} = c\, U(m) \end{equation} which gives \begin{equation} y_c = \log_{8 \over 3}(3+2\sqrt{2}) = 1.7972\ldots \end{equation} for the exponent of the chirality-chirality correlation length. \end{appendix}
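The extreme-value estimate above is easily checked by simulation. In the following Python sketch (ours, not part of the original derivation) the ${5 \over 8}N$ distances are drawn from $p(\ell)$, the typical longest one is compared with $\log_{8/3}N$, and the exponent $y_c$ is evaluated:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def longest_gap(N):
    # distances between consecutive nonzero charges q_x^-:
    # P(l) = (3/8)**(l-1) * (5/8), i.e. geometric with p = 5/8
    gaps = rng.geometric(p=5/8, size=int(5 * N / 8))
    return gaps.max()

for N in (10**3, 10**4, 10**5):
    m_typ = np.mean([longest_gap(N) for _ in range(200)])
    print(N, m_typ, np.log(N) / np.log(8/3))  # m = log_{8/3} N + O(1)

# chirality exponent from c U(0) N**(-y_c) = c U(m):
print(np.log(3 + 2*np.sqrt(2)) / np.log(8/3))  # y_c = 1.7972...
\end{verbatim}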
{ "attr-fineweb-edu": 1.314453, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfLE5qhLBuWLdFije
\section{Introduction}\label{intro} The density-matrix renormalization method (DMRG) introduced by White \cite{white} has opened new possibilities in the study of one-dimensional quantum systems. With this approach not only the energies of large systems can be calculated very accurately, the method also yields correlation functions. \cite{gehring} It was tested e.g. on Heisenberg \cite{white} and transverse \mbox{Ising \cite{legeza}} chains and has been applied to a variety of systems like higher-spin chains \cite{hallberg}, \mbox{ladders \cite{white1}}, Hubbard chains \cite{qin}, impurity models \cite{wang}, phonon systems \cite{caron} or transfer matrices \cite{nishino,wang1}. It is therefore natural to try it in still other situations and to see how it performs there. The system which we have studied is the $U_q[SU(2)]$-symmetric Heisenberg chain with Hamiltonian \cite{alcaraz3} \begin{equation} \label{ham} H=\mp\sum_{i=1}^{L-1} e_i \end{equation} with open boundary conditions and \begin{eqnarray} \nonumber e_i=-\frac{1}{2} \left( \sigma_i^x \sigma_{i+1}^x + \sigma_i^y \sigma_{i+1}^y + \frac{q+q^{-1}}{2} \left( \sigma_i^z \sigma_{i+1}^z -1 \right) \right. \\ \left. + \frac{q-q^{-1}}{2} \left( \sigma_i^z -\sigma_{i+1}^z \right) \right) \label{e_op} \end{eqnarray} where the $\sigma_i^\alpha$ are Pauli matrices. Some basic features of the $q$-symmetry are sketched briefly in an appendix. The motivation for considering this problem was two-fold. Firstly, the model comprises various other physical systems and situations which thereby can be treated in a unified way. In particular, we were interested in calculating correlation functions and their exponents. Secondly, the Hamiltonian is non-hermi\-tian for complex $q$ and offers a possibility to investigate this situation. In the antiferromagnetic case (sign $-$) and for complex $q=\exp(i\pi/(r+1)), \,r$ integer, the chain is related to the critical models with central charge \mbox{$c=1-6/(r(r+1))$}, in particular the Ising ($r=3$) and the three-state Potts model ($r=5$) \cite{alcaraz,pasquier}. For certain real values of $q$ it corresponds to Potts models with more than three states \cite{alcaraz2}. The ferromagnetic chain with real $q$, on the other hand, describes a non-equilibrium problem, namely the hopping of classical hard-core particles on discrete sites, with a bias in one direction \cite{alcaraz1}. For $q=1$, finally, one recovers the simple isotropic Heisenberg model. The Hamiltonian (\ref{ham}) has real eigenvalues and can be treated by Bethe ansatz. In this way, finite-size spectra have been obtained in connection with conformal invariance \cite{alcaraz3}. Using the Temperley-Lieb properties of the operators $e_i$, one can also find a representation by hermitian IRF operators, for which some DMRG calculations have already been done \cite{sierra}. However, as mentioned above, we are interested in non-hermitian effects. These can be varied in this model by changing the parameter $q$. The general arguments for the DMRG procedure \cite{white} do not involve the Hamiltonian and the approach is therefore applicable here. In fact, non-hermi\-tian operators have been treated occasionally \cite{wang1,kondev,hieida}. There is some freedom, however, which density matrix to choose then. In our case we always used a hermitian one, constructed from the right eigenvectors of (\ref{ham}). One then finds that its spectrum shows a marked and systematic variation with $r$. The eigenvalues decrease more slowly than for the simple Heisenberg chain. 
Nevertheless, there is no difficulty in obtaining the ground-state energy of $H$ to high accuracy. This is discussed in section \ref{dmrg}. Our main effort involved correlations. The functions which we studied were $q$-sym\-metric generalizations of the quantity $<\!\mbox{\boldmath$\sigma$}_{l}\mbox{\boldmath$\sigma$}_{m}\!\!>$ and correspond to fermionic and para\-fermionic correlators in the Ising and Potts case, respectively \cite{hinr}. In a previous study with exact diagonalizations of short systems no conclusive results could be obtained for the Potts model \cite{arndt}. From our calculations up to 100 sites we were able to obtain functions with clear scaling behaviour from which bulk and surface exponents could be extracted. The results are presented in section \ref{correlation}. In the Ising case, where one has the exact solution as a test, they are quite good. In the Potts case, there are still some uncertainties, since the bulk exponents of the four functions which we considered differ while their spinor character suggests that they are equal. The diffusion problem, treated in section \ref{hopping}, has a very different but also interesting nature. Here one finds that the eigenvalues of the density matrix drop to exactly zero very rapidly, so that the DMRG procedure terminates after a certain number of states. This can be understood from the nature of the ground state which is known here. From it, the density matrix can be derived and thus one has a non-trivial example of a system where the DMRG procedure can be carried out analytically. A summary and some additional remarks can be found in the concluding section \ref{conclusion}. \section{DMRG procedure and ground-state energies}\label{dmrg} In the DMRG approach, a selection of relevant states is made, not for isolated, but for already interacting blocks which form the total system. In our calculations we always used the infinite-size variant of the algorithm \cite{white}. As usual the chain was divided in two parts labeled by the indices 1 and 2, respectively. Then the right ground-state vector $|\Phi_r>=\sum\Phi_{ij}|i>_1|j>_2$, where $|i>_1$ and $|j>_2$ denote the basis states in the two parts, and the corresponding eigenvalue $E$ were calculated by using a combination of a vector-iteration procedure (power method) and a Lanczos procedure with modified scalar product. The whole calculation was done in the $S^z=0$ subspace which reduced the numerical effort significantly. From the ground-state vector the hermitian density matrix \begin{equation} \label{rho_sb} \rho=|\Phi_r><\Phi_r| \end{equation} of the chain was constructed where \begin{equation}\label{eigenvec} <\Phi_r|=|\Phi_r>^\dagger \end{equation} is the usual hermitian conjugate. From (\ref{rho_sb}) the reduced density matrix $\rho_1$ of part 1 of the chain follows as \begin{equation} \label{rho} \rho_1 = {\displaystyle \sum \limits_{i,i',j} } \Phi_{ij}\Phi_{i'j}^\ast |i>_{1\;1}<i'| \end{equation} and similarly for $\rho_2$. Since the boundary terms in (\ref{ham}) break reflection symmetry one has to work with both matrices here. The use of hermitian density matrices is already suggested by the considerations in \cite{white} where $\rho_1$ enters via an optimization procedure for a vector like $|\Phi_r>$. Non-symmetric $\rho$'s have been used in some cases \cite{nishino,wang1,hieida} but lead to obvious problems if their eigenvalues become complex. 
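In matrix form, the construction (\ref{rho_sb})--(\ref{rho}) amounts to a few lines of linear algebra. A minimal Python sketch (our own notation; \texttt{phi} is the right ground-state vector with components $\Phi_{ij}$, flattened over the product basis -- in the actual $S^z=0$ calculation the reshaping has to respect the block structure) reads:
\begin{verbatim}
import numpy as np

def reduced_density_matrices(phi, dim1, dim2):
    """Hermitian reduced density matrices of eq. (5) from the
    right ground-state vector, viewed as a matrix Phi_{ij}."""
    Phi = phi.reshape(dim1, dim2)
    Phi = Phi / np.linalg.norm(Phi)   # <Phi_r|Phi_r> = 1, cf. eq. (4)
    rho1 = Phi @ Phi.conj().T         # hermitian by construction
    rho2 = Phi.T @ Phi.conj()
    return rho1, rho2

def truncated_basis(rho, m):
    """The m eigenvectors of rho with the largest eigenvalues."""
    w, v = np.linalg.eigh(rho)        # real eigenvalues, ascending
    return w[::-1][:m], v[:, ::-1][:, :m]
\end{verbatim}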
We tested the choice $\rho=|\Phi_r><\Phi_l|$, where $<\Phi_l|$ is the left eigenvector of $H$, on the XY-analogue of (\ref{ham}) but it gave unsatisfactory results, in particular large errors which did not decrease properly with $m$. The $m$ eigenvectors $|\lambda_n>_1, \, |\lambda_n>_2$ of $\rho_1$ and $\rho_2$ corresponding to the largest eigenvalues $\lambda_n$ are used as new basis states for the two parts of the chain in terms of which all necessary operators are expressed. After inserting two sites between part 1 and part 2 the procedure is restarted. For the calculation of expectation values \begin{equation} \label{exp} <\hat O> = < \Phi_l | \hat O | \Phi_r > \end{equation} one also needs the left eigenvector. In the present case this is just the transpose of $|\Phi_r>$ since $H$ is complex symmetric. For the efficiency of the DMRG procedure it is important that one can calculate accurate results with a relatively small number of states of the Hilbert space. Therefore the eigenvalues of the reduced density matrix should decay exponentially. However, in general it is not a priori clear if the spectrum exhibits such a behaviour and one has to check it numerically. Figure \ref{fig_spec_rho} shows such spectra for \begin{figure}[ht] \epsfxsize=80mm \epsffile{spec-rho-lang.ps} \caption{\label{fig_spec_rho} Eigenvalue spectrum of the reduced density matrix $\rho_1$ for different values of the parameter $r$ and chain length $L=30$, calculated with $64$ kept states.} \end{figure} different values of the parameter $r$. One can see roughly exponential behaviour with a fast initial and a slower final decay. However, the results for small $r$ lie considerably above those for the isotropic Heisenberg model. The $q$-symmetric chain is therefore numerically a less favourable case. This feature is essentially due to the imaginary \linebreak[4] boundary terms. When added to an XX-chain they shift the density-matrix spectrum in the same way. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline L & r=3 & r=5 \\ \hline\hline 4 & -0.6604497504 & -0.7393376252 \\ 10 & -0.7443482698 & -0.8012448364 \\ 20 & -0.7736274980 & -0.8237779981 \\ 40 & -0.7885618957 & -0.8354921842 \\ 60 & -0.7935870982 & -0.8394707437 \\ 80 & -0.7961088422 & -0.8414746652 \\ \hline \hline L & r=7 & r=$\infty$ \\ \hline \hline 4 & -0.7687664240 & -0.8080127018 \\ 10 & -0.8227289432 & -0.8516070414 \\ 20 & -0.8427349544 & -0.8682473334 \\ 40 & -0.8532195604 & -0.8770736642 \\ 60 & -0.8567946053 & -0.8801004992 \\ 80 & -0.8585981414 & -0.8816309100 \\ \hline \end{tabular} \caption{\label{tab_ew_l} Ground state energies per spin for different $r$, calculated with $m=128$ for $r=3,5,7$ and $m=64$ for $r=\infty$.} \end{center} \end{table} We have calculated the ground state energies for different values of the parameter $r$ and for different numbers of kept states $m$. Table \ref{tab_ew_l} shows some results for $m=128$. (We note that the results given here are calculated without the constant $-(L-1)(q+q^{-1})/4$ in (\ref{ham}).) The accuracy can be checked directly for $r=3$ by comparing with the analytic result \cite{burkhardt}. In this case the deviations are less than $10^{-9}$ which is in accord with a truncation error of the order of $10^{-9}$ in the density matrix. One can also compare with the values obtained by Sierra et al. \cite{sierra} in a different DMRG calculation with $m=160$. We could reproduce their data up to 9 digits. 
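For small chains, the behaviour shown in figure \ref{fig_spec_rho} can be reproduced by brute-force diagonalization. The following self-contained Python sketch (ours; it works in the full $2^L$-dimensional space rather than in the $S^z=0$ subspace, and it is of course not the DMRG algorithm itself) builds $H$ from the operators (\ref{e_op}) with $q=e^{i\pi/(r+1)}$, obtains the right ground state of the non-hermitian matrix, and prints the per-site energy in the convention of table \ref{tab_ew_l} as well as the leading eigenvalues of $\rho_1$:
\begin{verbatim}
import numpy as np

def e_op(q):
    """Two-site Temperley-Lieb generator e_i of eq. (2)."""
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    I2 = np.eye(2)
    return -0.5 * (np.kron(sx, sx) + np.kron(sy, sy)
                   + (q + 1/q) / 2 * (np.kron(sz, sz) - np.eye(4))
                   + (q - 1/q) / 2 * (np.kron(sz, I2) - np.kron(I2, sz)))

def hamiltonian(L, q):
    """Antiferromagnetic case of eq. (1): H = -sum_i e_i."""
    H = np.zeros((2**L, 2**L), complex)
    for i in range(L - 1):
        H -= np.kron(np.eye(2**i),
                     np.kron(e_op(q), np.eye(2**(L - 2 - i))))
    return H

r, L = 5, 10                                # 'Potts' point, 10 sites
q = np.exp(1j * np.pi / (r + 1))
w, v = np.linalg.eig(hamiltonian(L, q))     # H is non-hermitian
i0 = np.argmin(w.real)                      # ground state; E is real
# add back the dropped constant, cf. the remark before table 1
print(((w[i0] + (L - 1) * (q + 1/q) / 4) / L).real)

phi = v[:, i0] / np.linalg.norm(v[:, i0])   # right eigenvector
Phi = phi.reshape(2**(L // 2), -1)
rho1 = Phi @ Phi.conj().T
print(np.sort(np.linalg.eigvalsh(rho1))[::-1][:10])
\end{verbatim}
The per-site energy printed in this way can be compared with the $L=10$, $r=5$ entry of table \ref{tab_ew_l}, and the eigenvalues of $\rho_1$ show the roughly exponential decay of figure \ref{fig_spec_rho}.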
\begin{figure}[ht] \epsfxsize=80mm \epsffile{ew-L.ps} \caption{\label{fig_ew_l} Ground state energies per site for different values of $r$ calculated with $m=128$ (for $r=\infty$ with $m=64$). The symbols denote data from Sierra et al. \protect\cite{sierra}.} \end{figure} Plotting the ground-state energies per spin vs. $1/L$, one obtains the well-known asymptotic behaviour \begin{equation}\label{ew_ising} \frac{E}{L}=\epsilon_\infty+\frac{B}{L}-\frac{C}{L^2} \end{equation} where $B$ is the boundary energy and $C$ measures the Casimir effect. This is shown in figure \ref{fig_ew_l} for several values of the parameter $r$. One expects that for a non-hermitian operator the procedure does not necessarily give an upper bound for $E$. This is in fact the case. For $r=3$, the deviation from the exact result, although quite small, changes sign for a certain $L$. Similarly, the approach of $E$ towards its finite-size limit as a function of $m$ is not monotonic. This is shown in \mbox{figure \ref{fig_ew_m}.} One notes, however, that the behaviour becomes monotonic for large $r$. \begin{figure}[ht] \vspace{20mm} \epsfxsize=80mm \epsffile{ew-m.ps} \caption{\label{fig_ew_m} Ground-state energies per site for $r=3$ and $r=5$ vs. the number of kept states $m$ of the density matrix for chain length $L=50$.} \end{figure} \section{Correlation functions}\label{correlation} For the $q$-symmetric chain with $q$ a root of unity, the usual spin correlation functions are not the appropriate quantities. Instead, we consider operators which are adapted to the $U_q[SU(2)]$ symmetry. In this way one is led to generalizations of the scalar product $\mbox{\boldmath$\sigma$}_{l}\mbox{\boldmath$\sigma$}_{m}$ of two spin operators given by \cite{hinr} \begin{equation} \label{g_op} g_{l,m}^{\pm}=c_{l,m}^{\pm}-\frac{1}{q+q^{-1}} \end{equation} with \begin{equation} \label{c_op} \begin{array}{lll} c_{l,m}^{\pm} & = & b_{m-1}^{\pm} b_{m-2}^{\pm} \ldots b_{l+1}^{\pm} e_l b_{l+1}^{\mp} \ldots b_{m-2}^{\mp} b_{m-1}^{\mp} \\ \\ b_j^{\pm} & = & q^{\pm 1} - e_j \end{array} \end{equation} where the $e_j$ are defined in equation (\ref{e_op}). Written out explicitly, these are non-local objects containing strings of spin operators. However, they are constructed solely in terms of the $e_j$ from which the Hamiltonian (\ref{ham}) is built. Therefore they can be translated into any other representation of the Temperley-Lieb algebra which the $e_j$ fulfill \cite{hinr}. In the cases $r=3$ and $5$ such representations are given by the critical transverse Ising model and the Potts model with $L/2$ sites and open boundary conditions. In these cases $<g^+_{l,m}>=<g^-_{l,m}>$, so in the following we work with only one operator and drop the superscript. In the Ising case one finds that the string operators cancel the Jordan-Wigner factors in a fermion picture and the $<g_{l,m}>$ become simple fermionic two-point functions \cite{hinr}. For example \begin{eqnarray} <g_{2j-1,2k}> & = & -\frac{1}{\sqrt{2}}(-1)^{j+k}\nonumber \\ & & \times \left<\left(d^\dagger_j+d_j\right) \left(d_k^\dagger-d_k\right)\right> \label{g_op_ising_1} \\ <g_{2j,2k-1}> & = & -\frac{1}{\sqrt{2}}(-1)^{j+k} \nonumber \\ & & \times \left<\left(d^\dagger_j-d_j\right) \left(d_k^\dagger+d_k\right)\right> \label{g_op_ising_2} \end{eqnarray} where the $d_j,\,d_j^\dagger$ are fermion operators. These functions can be calculated exactly and are given as sums for finite $L$. The two quantities $<g_{2j,2k}>$ and $<g_{2j-1,2k-1}>$ vanish identically.
The other two correspond to different behaviour at the boundaries where (\ref{g_op_ising_1}) remains finite and (\ref{g_op_ising_2}) vanishes for fixed $k$ and finite chain lengths. Due to the open ends, one is dealing with surface critical behaviour here. After performing the continuum limit one can determine the critical exponents, and one finds the bulk exponent $x=1/2$ for both correlation functions and the surface exponents $x_s=1/2$ and $x_s=3/2$ for (\ref{g_op_ising_1}) and (\ref{g_op_ising_2}), respectively \cite{hinr}. For the Ising case one can also construct the conventional spin correlation function $<\sigma^x_l\sigma^x_m>$ from the quantities $e_j$ appearing in $H$, using the property \mbox{$\left(\sigma^x\right)^2=1$.} However, this is not possible for the Potts model. Also order parameter profiles, as determined by a direct DMRG calculation in \cite{igloi}, cannot be obtained in the present formulation. However, the $g$'s are also interesting quantities since they become parafermion operators in this case \cite{hinr,mittag}. Their exponents will be studied in the following. We considered the correlation function $<g_{i,L/2}>$ ($i=1,\ldots,L/2-1$) with one point fixed in the middle of the chain while varying the position of the other one. In this way profiles were computed for chains with up to 100 sites. The correlations were significantly less accurate than the energy, so that in general $m=200$ states were kept. This was a practical limit with respect to storage and time. A complete run for fixed $r$ took several weeks of CPU time on middle-performance DEC-Alpha workstations. The amount of memory was about 250-300 MB RAM and up to 500 MB hard-disk space. An example of the resulting correlation function is shown in figure \ref{fig_gp_i}. The oscillations are due to the antiferromagnetic \begin{figure}[ht] \epsfxsize=80mm \epsffile{gp-i.ps} \caption{\label{fig_gp_i}Correlation function $<g_{i,L/2}>$ for $r=5$ and chain length $L=82$, calculated with $200$ kept states.} \end{figure} character of the chain. Except for the case $r=3$, where only the two functions (\ref{g_op_ising_1}) and (\ref{g_op_ising_2}) are non-zero, one can distinguish four different functions given by the maxima and minima for even and odd $L/2$. As a check on the accuracy, one can compare with exact results for $L=24$ \cite{heinzel}. One then finds deviations less than $10^{-4}$ in the Ising case and less than $10^{-5}$ in the Potts case while the truncation error is only $10^{-14}-10^{-15}$. The correlations are therefore much less precise than the ground-state energy. One also finds that the values can be above or below the exact result. The errors increase, as usual in the DMRG, as one moves towards the boundary. This effect can also be seen in figure \ref{fig_gp_i_log}, \begin{figure}[ht] \epsfxsize=80mm \epsffile{gp-i-log.ps} \caption{\label{fig_gp_i_log}Log-log plot of the correlation function $<g_{i,L/2}>$ for even $i$ and chain length $L=82 \; (r=5)$ and $L=90 \;(r=3)$, calculated with 200 kept states. The curve denotes analytic results for $r=3$.} \end{figure} where two correlation functions are shown for large systems. Up to $R\simeq 30$, where $R=L/2-i$ is the distance from the center, the numerical values for $r=3$ reproduce the analytical ones quite well. Near the boundary, where the function is very small, however, there are considerable deviations and the expected power-law behaviour $g\sim R_s^{\,x_s-x}$, with $R_s=i$ denoting the distance from the surface, breaks down.
This is different for $r=5$, where the curve remains linear also in this region. One therefore expects that the results for larger $r$ are more accurate. In the middle of the chain, the situation is more favour\-able since the quantities have undergone fewer iterations here. For small $R \; (R\ll L/2)$ one finds the expected bulk behaviour $g\sim R^{-2x}$, as shown in the log-log plot of figure \ref{fig_gp_rbulk_log}. However, \begin{figure}[ht] \epsfxsize=80mm \epsffile{gp-rbulk-log.ps} \caption{\label{fig_gp_rbulk_log}Log-log plot of the correlation function \mbox{$<g_{R,L/2}>$,} $R=L/2-i$, for odd $R$ and chain length $L=82 \; (r=5)$ and $L=90 \; (r=3)$, calculated with 200 kept states. The curve denotes analytic results for $r=3$.} \end{figure} if one determines the exponent $x$ from this data for $r=3$, one still has a relative error of about 5\%. In order to improve the accuracy, we used the scaling form for $g$ \begin{equation}\label{scaling-bulk} g(R,L)=\frac{1}{R^{\,2x}}F\left(\frac{R}{L}\right) \end{equation} where $F(R/L)\rightarrow const$ for $R/L\rightarrow 0$. Collecting results for different $L$ we constructed a scaling plot $R^{\, 2x}g$ vs. $R/L$ and varied $x$ until the data fell onto a curve as well as possible. The result of such a procedure for $r=5$ is shown in figure \ref{fig_fss_rbulk}. It proves that scaling is indeed fulfilled, in \begin{figure}[ht] \epsfxsize=80mm \epsffile{fss-rbulk.ps} \caption{\label{fig_fss_rbulk}Finite-size scaling behaviour of the correlation function \protect\mbox{$<g_{R,L/2}>,$} $R=L/2-i$, for $r=5$ and five chain lengths from $L=66$ to $L=82$, calculated with $200$ kept states.} \end{figure} contrast to \cite{arndt} where it could not be seen for chains up to $L=28$. The exponent $x$ for $r=3$ then is only about 2\% off the exact result $x=1/2$. A further improvement could be achieved by extrapolating the bulk values of $g$ to the infinite-chain limit. Then a linear fit of the data in the corresponding plot gave better Ising exponents. In this way, the results in table \ref{tab_exp} were determined. The confidence interval of the fit parameters would give rise to an absolute error in the exponents less than $10^{-3}$, but the error in the data points is not known. Thus we cannot give a reasonable error bar but we can use the results for $r=3$ as a clue. \begin{table}[ht] \begin{tabular}{|c|c|c|c|c|} \hline \rule[-3mm]{0mm}{8mm} r &$\!\!<g_{2j,2k}>\!\!$& $\!\!<g_{2j,2k-1}>\!\!$& $\!\!<g_{2j-1,2k}>\!\!$& $\!\!<g_{2j-1,2k-1}>\!\!\!$\\ \hline\hline 3 & - & 0.499 & 0.502 & - \\ \hline 4 & 0.386 & 0.521 & 0.513 & 0.406 \\ \hline 5 & 0.397 & 0.520 & 0.508 & 0.409 \\ \hline 6 & 0.392 & 0.516 & 0.502 & 0.417 \\ \hline 7 & 0.412 & 0.499 & 0.484 & 0.422 \\ \hline \end{tabular} \caption{\label{tab_exp} Critical exponents $x$ of the four correlation functions, calculated with $m=200$. Exact results are known for $r=3$ and $r=\infty$ where $x=1/2$.} \end{table} The final Ising exponents differ only by $10^{-3}$ from the exact value and thus are very good. The situation is more complicated, though, for other cases. The Kac table \cite{christe} for $r=5$ contains 10 different conformal dimensions and allows for a large number of combinations $x=\Delta + \bar{\Delta}$. However, if one fixes the spin $s$ of the operators to simple values, only $x=11/20 \; (s=1/2)$, $x=7/15 \; (s=1/3)$ and $x=2/3 \; (s=2/3)$ remain. These values were also obtained directly by relating the three-state Potts model to a Gaussian model \cite{nienhuis}. 
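The step of varying $x$ until the data fall onto a single curve can be automated. The Python sketch below is purely illustrative: the datasets are synthetic, generated from the scaling form (\ref{scaling-bulk}) itself with $x=1/2$ and an arbitrary choice of $F$, so that the scan can be seen to recover the input exponent. For real data one would substitute the measured $<g_{R,L/2}>$ profiles.
\begin{verbatim}
import numpy as np

def collapse_cost(x, datasets):
    # spread of R^{2x} g(R,L) around a common curve in u = R/L,
    # measured by the variance within bins of u; smaller = better collapse
    pts = []
    for L, R, g in datasets:
        pts += list(zip(R / L, R**(2 * x) * g))
    pts.sort()
    u, y = np.array(pts).T
    edges = np.linspace(u.min(), u.max(), 15)
    idx = np.digitize(u, edges)
    return sum(y[idx == b].var() for b in np.unique(idx) if (idx == b).sum() > 1)

# synthetic data from g = R^(-2x) F(R/L) with x = 0.5 and F(u) = 1 + u^2
datasets = []
for L in (66, 70, 74, 78, 82):
    R = np.arange(1.0, L // 2, 2.0)
    datasets.append((L, R, R**(-1.0) * (1 + (R / L)**2)))

xs = np.linspace(0.3, 0.7, 81)
best = min(xs, key=lambda x: collapse_cost(x, datasets))
print("estimated exponent x =", best)   # recovers x = 0.5
\end{verbatim}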
From the explicit form of the parafermion operators (products of local and string operators) \cite{hinr}, it can be seen that they have spinor properties as discussed in \cite{nienhuis} with $s=1/3$. Thus one would expect a common exponent $x=0.467$ for all four $g$'s, whereas the numerical values lie about 10\% higher and lower. In principle, they would also allow other $x$-values, connected with unusual values of the spin. A similar situation is found for $r=4$ (tricritical Ising model) where $x=7/16 \; (0.438)$, $x=19/40 \;(0.475)$ and $x=43/80 \; (0.537)$ are close to the measured values. The reason for the discrepancies is not clear. As noted above, one would expect better accuracy for the Potts model. If the $x$-values actually differ, it could perhaps be due to a cancellation of leading singularities in the $g$'s which, when written in the Potts representation, are sums of two contributions. The other possibility is that in spite of the satisfactory scaling behaviour the numerical results are not yet good enough. We also determined the surface exponents $x_s$. This was done with the help of another scaling plot. For $i=R_s$, the scaling form of $g$ can be written as \begin{equation}\label{scaling-surf} R_s^{\,2x}\,g(R_s,L)= \left(\frac{R_s}{L}\right)^{x_s+x}G\left(\frac{R_s}{L}\right) \end{equation} where $G\rightarrow const$ for $R_s/L\rightarrow 0$. Using the bulk exponents as determined above, one can now find the surface exponent by tuning $x_s$ until the scaling behaviour (\ref{scaling-surf}) is fulfilled. In this procedure the first few sites near the boundary, for which the errors are large, were left out. For the Ising case we found $x_s=1.58$ for the function (\ref{g_op_ising_1}) and $x_s=0.56$ for the function (\ref{g_op_ising_2}). One notes that these results differ more from the exact values than the bulk results. For the Potts case the following exponents were found: $x_s=0.81$ for $<g_{2j,2k}>$, $x_s=1.05$ for \mbox{$<g_{2j,2k-1}>$,} $x_s=0.63$ for $<g_{2j-1,2k}>$ and $x_s=0.55$ for $<g_{2j-1,2k-1}>$. We have also determined the exponents for higher $r$. The values tend to their Heisenberg limits $x=1/2$ and $x_s=1$ for $r\rightarrow \infty$. However, while the exponents of the function $<g_{2j,2k-1}>$ approach this limit from above, the exponents of the other functions always lie below it. Furthermore, the limiting values are not easy to obtain due to the logarithmic corrections which occur for $q=1$ \cite{hallberg1}.
\section{$q$-symmetric driven diffusion}\label{hopping}
It is well-known that the isotropic ferromagnetic Heisenberg model also describes (in all dimensions) the hopping of classical particles on a lattice with the exclusion of double occupancy \cite{alex}. This follows by expressing the master equation for the probability vector $|P>$ describing the system \begin{equation}\label{master} \frac{\partial}{\partial t}|P> = -H|P> \end{equation} in a spin one-half language where a spin up (down) corresponds to an occupied (empty) site. The stationary state is therefore the ferromagnetic ground state in the corresponding sector of fixed $S^z$. A similar result holds if there is a bias in the hopping. In one dimension and after a canonical transformation the time-evolution operator then takes exactly the form (\ref{ham}) with $q$ given by $q=(\alpha_-/\alpha_+)^{1/2}$ where $\alpha_+ \, (\alpha_-)$ is the hopping rate to the right (left) \cite{alcaraz1}. The stationary properties of this diffusion problem have already been studied \cite{sandow}.
Due to the bias, the particles accumulate at one end, thus producing a non-trivial density profile. In the magnetic language, one is dealing with a uniaxial ferromagnet to which opposite boundary fields are applied at the two ends. These are real and $H$ is hermitian in this case. The numerical treatment leads to a density-matrix \linebreak[4] spectrum as shown in figure \ref{fig_spec_rho_hop}. The eigenvalues decrease faster than \begin{figure}[ht] \epsfxsize=80mm \epsffile{spec-rho-hop.ps} \caption{\label{fig_spec_rho_hop}Eigenvalue spectrum of the reduced density matrix $\rho_1$ of the $q$-symmetric hopping model at half-filling for different values of the parameter $q$, calculated with $16$ kept states and chain length $L=30$.} \end{figure} exponentially and become effectively zero beyond a certain point depending on the size of the system. The decrease also depends on the asymmetry parameter $q$. To understand this, it is useful to consider the case $q=1$ first. Then the ground state of the chain is a spin state $|S,M>$ where $S=L/2$ and $M=S^z$ can be chosen. Taking $M=0$ and dividing the chain into two halves, it can be written as \begin{equation}\label{gs_hop} |\Phi>=|S,0>=\sum_{m=-s}^{+s}c_m |s,m>_1 |s,-m>_2 \end{equation} where $|s,m>_{1,2}$ are the corresponding spin states of the two subsystems with $s=S/2=L/4$ and the $c_m$ are the appropriate Clebsch-Gordan coefficients. Thus $|\Phi>$ is the superposition of $(S+1)$ terms and the density matrix $\rho_1$ of part 1 of the chain has only that many non-zero eigenvalues. It reads explicitly \begin{equation}\label{rho_hop} \rho_1=\sum_m |s,m>_1c^2_{m\;\,1}\!\!<s,m| \end{equation} so that these eigenvalues are given by $c^2_m$. For $q\ne 1$ the situation does not change, because there are analogous decomposition formulae for the $q$-deformed angular-momentum states \cite{kirillov}. In this case one finds \begin{equation}\label{clebsch} c_m=\frac{1}{\sqrt{[4s]!}}\frac{[2s]!^{\,2}}{[s-m]!\,[s+m]!}\,q^{2sm} \end{equation} with the notation \begin{equation}\label{q-def} [n]=\frac{q^n-q^{-n}}{q-q^{-1}} \end{equation} and \begin{equation}\label{q-fac} [n]!=[1]\cdot[2]\cdot\ldots\cdot[n]. \end{equation} For $q\rightarrow 1$, $[n]$ reduces to $n$ and (\ref{clebsch}) becomes the normal Clebsch-Gordan coefficient. One can check that the $c^2_m$ given by (\ref{clebsch}) are precisely the density-matrix eigenvalues obtained in the numerical procedure. Due to the $q$-factor in (\ref{clebsch}) there is no symmetry between $m$ and $-m$ for $q\ne 1$ and the degeneracy of the eigenvalues is lifted as seen in figure \ref{fig_spec_rho_hop}. Specifically for large $q$ one has \begin{equation}\label{clebsch_lim} c_m\simeq q^{-\left(s-m\right)^2} \end{equation} so that the coefficient $c_s\simeq 1$ dominates and $|\Phi>$ becomes approximately a product state \begin{equation}\label{gs_hop_prod} |\Phi>\simeq |s,s>_1|s,-s>_2. \end{equation} Physically this means that one half of the chain is almost filled with particles and the other one is almost empty, as expected for large bias. Expressions for the density profile were given in \cite{sandow} and are easily reproduced by the DMRG calculations. Similarly, the treatment can be extended to $M\ne 0$, i.e. to systems which are not half-filled. Then the number of terms in $|\Phi>$ is even smaller, namely $(S+1-|M|)$, and becomes one for a completely full or empty system. The DMRG procedure gives exact results once $m$ exceeds this value, which increases at most linearly with the system size.
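These formulae are easily checked. The Python sketch below is our own illustration (integer $s$ is assumed, so that only integer $q$-factorials appear); it evaluates (\ref{clebsch}) and confirms both that the $c_m^2$ sum to one, as density-matrix eigenvalues must, and that they are sharply dominated by $m=s$ at larger $q$, as in (\ref{clebsch_lim}):
\begin{verbatim}
import numpy as np

def qnum(n, q):
    # q-number [n], equation (q-def); reduces to n for q -> 1
    return float(n) if q == 1 else (q**n - q**(-n)) / (q - 1.0 / q)

def qfac(n, q):
    # q-factorial [n]!, equation (q-fac)
    out = 1.0
    for k in range(1, n + 1):
        out *= qnum(k, q)
    return out

def c(s, m, q):
    # q-deformed Clebsch-Gordan coefficient, equation (clebsch)
    return (qfac(2 * s, q)**2 * q**(2 * s * m)
            / (np.sqrt(qfac(4 * s, q)) * qfac(s - m, q) * qfac(s + m, q)))

q, s = 1.5, 5                       # e.g. L = 20, so s = L/4 = 5
w = np.array([c(s, m, q)**2 for m in range(-s, s + 1)])
print(w.sum())                      # the rho_1 eigenvalues sum to 1
print(np.sort(w)[::-1][:3])         # dominated by the m = s coefficient
\end{verbatim}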
\section{Conclusion}\label{conclusion}
We have studied the $q$-symmetric Heisenberg chain for various cases of physical interest. For the case of complex $q$ we showed that the DMRG procedure works well even though the density-matrix spectra are less favourable due to the non-hermitian boundary terms. We found the ground-state energies to high accuracy and also calculated generalized correlation functions, for which we determined the critical exponents. The study of various different models via the spin one-half operator $H$ was also motivated numerically, since the dimension of the matrices in the DMRG procedure is then lower. The results show, however, that to some extent this advantage is compensated for by the sensitivity of the correlations in this formulation. One should also note that the equivalent Potts chain has only half the length of the $q$-symmetric chain. Thus it seems that one cannot really circumvent certain features of the corresponding problems. To improve the results further, one would have to increase the numerical effort, by using larger values of $m$ or the sweeping procedures in the finite-size algorithm \cite{white}. The ferromagnetic chain for real $q$ was seen to have very different features and is interesting in other respects. Firstly, as an example where the density matrix can be found analytically and the DMRG procedure automatically gives the exact result. There are only a few other cases of this kind, for example two coupled oscillators \cite{han} or finite-dimensional matrix-product states \cite{klumper}. Secondly, it describes not only a magnetic problem, but also a non-equilibrium system. The use of the DMRG in this field, where the time evolution operators are in general non-hermitian, has only started \cite{hieida} and further interesting applications can be expected.
\section*{Appendix}
\setcounter{equation}{0} \renewcommand{\theequation}{\mbox{A.\arabic{equation}}} The $q$- or quantum group symmetry (see e.g. \cite{pasquier}) is a generalization of rotational symmetry and is characterized by a modified commutator \begin{equation} \left[ S^+,S^- \right]=\left[ 2S^z \right] \end{equation} between the generators. Here the bracket is defined in (\ref{q-def}) so that a $q$-dependent function of $S^z$ appears on the right. For $q=1$, the usual angular momentum algebra is recovered. For a chain of spins, $S^z=\sum \sigma^z_n/2$ has the usual form, but $S^\pm$ are given by \begin{equation} S^\pm=\frac{1}{2}\sum_{n=1}^{L} q^{\,\sum\limits_{l=1}^{n-1}\sigma^z_l/2} \sigma_n^\pm \, q^{-\!\!\!\sum\limits_{l=n+1}^{L}\sigma^z_l/2}. \end{equation} The property $\left[ H,S^\pm \right]=0$ of the Hamiltonian (\ref{ham}) can be used, as in the case $q=1$, to obtain all ferromagnetic ground states from the one with all spins up via repeated application of $S^-$ \cite{sandow}. In this way one can also derive the Clebsch-Gordan coefficients (\ref{clebsch}).
\section*{Acknowledgments}
We would like to thank H. Hinrichsen, V. Rittenberg, F. Igl\'oi, M. Henkel and H. Niggemann for useful discussions and P. Arndt and T. Heinzel for correspondence. We furthermore thank X. Wang for his advice on the DMRG method, A. Honecker for help with the numerics, the Universit\'e Henri Poincar\'e, Nancy, for hospitality and the \linebreak[4] Max-Planck-Institut f\"ur Physik komplexer Systeme in \linebreak[4] Dresden for substantial computer time.
{ "attr-fineweb-edu": 1.639648, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfQnxK7FjYCv2T1B-
\section{Introduction} In a quantum theory of gravity it is expected that spacetime itself will be quantised, giving rise to indefinite, or `quantum', causal structures \cite{Butterfield2001, Hardy2005}. The process matrix formalism was developed to describe these causal structures \cite{oreshkov:2012}. In fact, it describes the most general causal relations between a finite set of regions, or `parties', compatible with the local validity of quantum mechanics in each region. However, the framework relies on an \emph{a priori} labelling of the parties, which tacitly presupposes the existence of a background reference frame. This is in conflict with the background independence of general relativity, which associates no absolute meaning to individual spacetime points or regions \cite{rovelli:1991, sep-hole}. Incorporating background independence into the quantum formalism is in fact one of the main challenges in the development of a theory of quantum gravity \cite{Ashtekar2004, Smolin2006}. In order to represent a viable approach to quantum gravity, the process matrix formalism should be able to describe indefinite causal structures without reference to a fixed background. Here, we show how this can be done. We treat a process matrix as a particular configuration of a discretised spacetime, with laboratories that correspond to the discrete units of that spacetime. A process matrix will be background independent if it is invariant under any arbitrary permutation of `laboratories' or volumes of spacetime. In this paper, we introduce background independent processes and describe some of their properties. First, we note that non-classical causal structures still arise in permutation-invariant processes. We show that imposing permutation invariance results in the loss of a distinction between spacetime points. As in general relativity, one recovers a distinction between spacetime points by using a material reference frame (a reference frame made up of physically observable systems, the `rods and clocks' picture). Finally, we discuss the symmetry properties of permutation-invariant processes. We expect permutation-invariant processes to obey a superselection rule (no coherence between different `charges') but observe, surprisingly, that not all processes obeying the superselection rule are physically valid. We show explicitly why this occurs in the case of a bipartite qubit process (where a `qubit' process is just one with two-dimensional local Hilbert spaces). We also present a partial proof generalising that result to any process matrix dwelling in the symmetric or antisymmetric subspaces of the symmetric group. Our results suggest that no invariant processes with a definite charge may exist, although more work will be needed to substantiate this conjecture. The breakdown in the association between symmetries and superselection rules indicates that background independence in quantum mechanics cannot be interpreted analogously to other known symmetries of nature, and that a new interpretation may be necessary. \section{The process matrix formalism} The process matrix formalism is a framework for quantum mechanics that does not assume any global background causal structure, just that quantum mechanics is obeyed locally. Conceptually, it extends quantum mechanics in a way similar to that in which, by relaxing global Lorentz invariance, one extends special relativity to general relativity.
Relaxing the assumption of causal structure allows one to obtain new, `indefinite' causal relations that are incompatible with a fixed ordering of events. Relationships of this type have been observed in the laboratory \cite{procopio:2015, rubino2017, rubino2017experimental, goswami:2018, goswami2018communicating, wei2019experimental, Taddei2020, guo2020experimental, Rubino2020}, where the lack of causal order arises from temporally delocalised events, rather than from a quantum spacetime. Much of the experimental interest derives from the applications of indefinite causal relations to computation and communication \cite{chiribella09, chiribella12, colnagh:2012, araujo14, feixquantum2015, Guerin2016, Ebler2018, Salek2018, Gupta2019}. Here, we briefly describe the aspects of the process formalism that are relevant to this work. For more details, see references \cite{oreshkov:2012,shrapnel:2018,araujo:2015}. The simplest way to think of process matrices is as follows. Consider a system of $N$ laboratories. Each laboratory is occupied by an experimenter capable of performing all of the preparations, operations, and measurements compatible with the standard measurement formalism of quantum mechanics. Formally, this means that each experimenter has the ability to perform a \textit{quantum instrument}---a set $\mathcal{I}^X=\{\mathcal{M}_i^X\}_{i=1}^n$ of completely positive (CP) maps that sum to a completely positive and trace preserving (CPTP) map. The superscript $X$ denotes that the maps $\mathcal{M}^X:\mathcal{L}\left(\mathcal{H}^{X_I}\right)\to \mathcal{L}\left(\mathcal{H}^{X_O}\right)$ act on laboratory $X$. The Hilbert spaces $\mathcal{H}^{X_I}$, $\mathcal{H}^{X_O}$, respectively represent the incoming and outgoing state-spaces of laboratory $X$, with $\mathcal{L}\left(\mathcal{H}\right)$ denoting the linear operators on $\mathcal{H}$. Consider the case where we have two parties, Alice and Bob, who respectively have access to instruments $\mathcal{I}^{A}=\{\mathcal{M}^A_i\}$ and $\mathcal{I}^B=\{\mathcal{N}^B_j\}$. The probability that Alice and Bob realise a particular combination of operations $\mathcal{M}^{A}_{i},\mathcal{N}^{B}_{j}$ is given by some probability distribution $P(\mathcal{M}^{A}_{i},\mathcal{N}^{B}_{j})$. To be consistent with quantum mechanics, $P$ must be a \textit{multilinear map} \cite{shrapnel:2018}. The \textit{Choi-Jamio{\l}kowski isomorphism} \cite{jamio72, Choi1975} allows us to represent these operations by sending CP maps $\mathcal{M}^{X}$ to positive semidefinite linear operators $M^{X_IX_O}_i:=[\mathcal{I}\otimes\mathcal{M}^X_i(\ketbra{\phi^+}{\phi^+})]^T\in \mathcal{L}(\mathcal{H}^{X_I}\otimes \mathcal{H}^{X_O})$, where $\ket{\phi^+}=\sum_i\ket{i}^{X_I}\otimes\ket{i}^{X_I}$ is a non-normalised maximally entangled state and $^T$ denotes transposition in the computational basis. These operators act over an input Hilbert space $X_I$ and an output Hilbert space $X_O$. In this representation, the trace preserving condition reads $\Tr_{X_O}M^{X_IX_O}=\mathbbm{1}^{X_I}$; this means that, for a set of maps that form an instrument, we must have $\Tr_{X_O}[\sum_iM^{X_IX_O}_i]=\mathbbm{1}^{X_I}$. Our complete list of probabilities $P$ now becomes a multilinear map over linear operators.
This map is equivalent to \cite[prop.~2.38]{heinosaari} \begin{align} &P(M^{A_IA_O}_{i}\otimes N^{B_IB_O}_{j})\nonumber\\ &=\Tr[W^{A_IA_OB_IB_O}\cdot(M^{A_IA_O}_{i}\otimes N^{B_IB_O}_{j})] \label{eq:born-rule} ,\end{align} for some linear operator $W^{A_IA_OB_IB_O}\in \mathcal{L}(\mathcal{H}^{A_IA_OB_IB_O})$. $W^{A_IA_OB_IB_O}$ is called a \textit{process matrix}, and is the generalisation of a joint quantum state (from the point of view of a probability measure) to correlations that can be spacelike, timelike, or neither---those with indefinite causal structure. Process matrices must satisfy the constraints \begin{gather} W^{A_IA_OB_IB_O}\geq 0 , \label{eq:pm-positive}\\ \Tr[W^{A_IA_OB_IB_O}\cdot(M^{A_IA_O}N^{B_IB_O})]=1, \\\nonumber \forall M,N\geq0 , \ \Tr_{A_O}[M^{A_IA_O}]=\mathbbm{1}^{A_I}, \Tr_{B_O}[N^{B_IB_O}]=\mathbbm{1}^{B_I} \label{eq:pm-normalised} ,\end{gather} which ensure that probabilities are nonnegative and sum to one. We have left out tensor product symbols for convenience, and will continue to do so where it is clear. In a \textit{Hilbert-Schmidt} basis, i.e. a basis $\{\sigma^X_i\}$ of $\mathcal{L}\left(\mathcal{H}\right)$ satisfying $\sigma_0^X=\mathbbm{1}^X$, $\Tr[\sigma^X_i \sigma^X_j]=d_X \delta_{ij}$, ($d_X:=\dim(\mathcal{H}^X)$) and $\Tr[\sigma^X_i]=0$ for $i>0$, a process matrix can be represented as \begin{equation} W^{A_IA_OB_IB_O}=\sum_{ijkl}w_{ijkl}\sigma_i^{A_I}\sigma_j^{A_O} \sigma_k^{B_I} \sigma_l^{B_O} ,\end{equation} where $w_{ijkl} \in \mathbb{R}$ since $W^{A_IA_OB_IB_O}$ is hermitian. The probability normalisation requirement forbids certain Hilbert-Schmidt terms from appearing in the decomposition of an allowed process. We call terms with identity on all outputs except $X$ type $X$ process terms, all outputs except $X,Y$ terms of type $XY$ etc. Forbidden bipartite process terms are terms like $A_O$, $B_O$, $A_OB_O$, $A_IA_OB_O$, $A_OB_IB_O$, and $A_IA_OB_IB_O$. We require that $\Tr[W \sigma]=0$ for any such terms $\sigma$. Thus, we effectively have linear constraints on the matrix elements of allowed processes. One consequence of not making \emph{a priori} assumptions about the causal structure is the appearance of novel types of causal order that cannot be expressed in the standard formalism of quantum mechanics. Process matrices can be causally ordered, which corresponds to the familiar situation where A comes before B comes before C, or they can be causally \textit{separable}, convex combinations of processes that have different causal orders such as `A before B' and `B before A', representing classical ignorance of causal order. One novel aspect of the process matrix formalism is that one can also have indefinite causal order, where it does not make sense to say that `A is before B' or vice versa: there are signalling correlations from A to B and also from B to A, which cannot be interpreted as classical ignorance. Throughout this section we have only discussed bipartite processes, for simplicity. Everything we have discussed generalises straightforwardly to an arbitrary number of parties. We refer the reader to references \cite{oreshkov:2012,shrapnel:2018,araujo:2015, abbott:2016,Abbott2017genuinely,wechs:2019} for a more complete discussion. Finally, note that although we refer to the local regions in process matrices as `laboratories', this is not an essential interpretation. 
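To make these conventions concrete, the following minimal Python sketch (our own illustration, not part of the formalism) constructs the Choi-Jamio{\l}kowski operator of a single-qubit unitary channel and verifies the trace-preserving condition $\Tr_{X_O}M^{X_IX_O}=\mathbbm{1}^{X_I}$ together with positivity:
\begin{verbatim}
import numpy as np

def choi(kraus_ops, d):
    # M = [I (x) M(|phi+><phi+|)]^T with |phi+> = sum_i |i>|i>
    phi = np.zeros((d * d, 1))
    for i in range(d):
        phi[i * d + i] = 1.0
    rho = phi @ phi.T
    M = np.zeros((d * d, d * d), dtype=complex)
    for K in kraus_ops:
        IK = np.kron(np.eye(d), K)
        M += IK @ rho @ IK.conj().T
    return M.T                      # transpose in the computational basis

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard channel
M = choi([H], 2)

# trace over the output space: Tr_{X_O} M should equal the identity on X_I
Tr_out = M.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.allclose(Tr_out, np.eye(2)))          # True: trace preserving
print(np.all(np.linalg.eigvalsh(M) > -1e-12))  # True: M >= 0, i.e. CP
\end{verbatim}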
In what follows, rather than emphasising the laboratory picture, we will treat the process matrix as a discretised model of spacetime, with local regions corresponding to points in spacetime and no \emph{a priori} global causal structure. \section{Why permutation invariance?} By thinking of a process matrix as representing a particular configuration of a discrete spacetime, we can make an analogy between background independence in the process matrix formalism and background independence in general relativity. In general relativity, background independence is a consequence of the fact that observable quantities must be invariant under any arbitrary coordinate transformation. Formally, these transformations are smooth, invertible mappings from a manifold to itself, and are called \textit{diffeomorphisms}. In the process matrix formalism, the statistical properties of observables are given by e.g.~eq.~\eqref{eq:born-rule}, the Born-rule generalisation for processes. In general, eq.~\eqref{eq:born-rule} generates a multipartite probability distribution $P(i_1,...,i_n)$ given a particular process $W$. Although $P$ does not assume any causal structure, it does in general assume that it is possible to distinguish between and label the different laboratories. Operationally, this implies the existence of some background reference frame, which allows one to determine that outcome $i_1$ corresponds to party 1, outcome $i_2$ to party 2, and so on. In a background-independent theory, such a labelling should not be possible. As a consequence, the probability distribution $P$ must be invariant under permutation of the parties, i.e.~$P(i_1,i_2,...,i_n)=P(i_{\sigma(1)},i_{\sigma(2)},...,i_{\sigma(n)}),$ for all permutations $\sigma$. Permutation invariance as a discrete analogue of diffeomorphism invariance is also discussed, for example, in Ref.~\cite{arrighi:2020}. Invariance under permutations is a particular case of invariance under an arbitrary symmetry group. A general framework for dealing with this has been developed in Ref.~\cite{bartlett:2007}. Although this framework deals with Lie groups rather than finite groups (such as permutations), the main results, which we will use below, also hold for finite groups. First, we must introduce a mathematical representation for permutations, which we will use throughout the paper. Just as coordinate transformations in general relativity form the (continuous) group of diffeomorphisms, the \textit{permutations} of a set of $n$ elements form a finite group, known as the \textit{symmetric group} and denoted $S_n$. We define the \textit{representation} of the symmetric group $S_n$ on the space of $n$-party process matrices as the map from elements $g\in S_n$ to operators $U_g$ such that $U_gWU^\dag_g$ performs a permutation on the laboratories. For example, the action of the `swap' permutation $U_{AB}$ on a bipartite process in the Hilbert-Schmidt basis is \begin{multline} U_{AB}\left(\sum_{ijkl}\alpha_{ijkl}\sigma^{A_I}_i\sigma^{A_O}_j\sigma^{B_I}_k\sigma^{B_O}_l\right)U^\dag_{AB} \\ =\sum_{ijkl}\alpha_{ijkl}\sigma^{A_I}_k\sigma^{A_O}_l\sigma^{B_I}_i\sigma^{B_O}_j. \label{eq:U-def} \end{multline} Note that input and output spaces are always swapped together. In order to make permutations well-defined, we assume that the input spaces of all laboratories have equal dimensions, and similarly all output spaces. It is not difficult to generalise, but we will not do so here \cite{thesis}.
We say that a linear operator $A$ (which can be a process matrix or a measurement, or more generally even a quantum state or POVM element) is \textit{permutation invariant} if it is unchanged by the action of any permutation, i.e.~$U_gA{U_g}^\dag=A,\ \forall g\in S_n$. Equivalently, $A$ is permutation invariant if $\mathcal{G}[A]=A$, where $\mathcal{G}$ is the \textit{twirl operation}, \begin{equation} \mathcal{G}[W]:= \frac{1}{n!}\sum_{g\in S_n}U_gW{U_g}^\dag. \end{equation} \section{Process matrices without spacetime events.} Now we can formalise the ideas we introduced in the previous section. Our overarching goal is to develop a framework for processes in which measurement statistics are permutation-invariant, so that the processes are background-independent. We find that there are different ways to achieve this. Permutation-invariant measurement statistics are implied by permutation-invariant measurements: If $M=\mathcal{G}[M]$ (i.e.~the joint measurement operator $M\equiv M_1^AM_2^B...$ is permutation-invariant), then $\Tr[W M] = \Tr[W U_g M U_g^\dag]$ $\forall g\in S_n$. However, $\Tr[W U_g M U_g^\dag]=\Tr[U_g^\dag W U_g M]$, so $W=\mathcal{G}[W]$ also implies that measurement statistics are invariant, even if $M\neq \mathcal{G}[M]$. Finally, the measurement statistics will be permutation-invariant if both $W$ and $M$ are permutation-invariant. We mention this because each choice corresponds to a different interpretation or philosophy of background independence: \textit{If measurement operators are permutation-invariant}, but not the processes themselves, then we can think of processes as being described relative to some fixed background that we cannot access, so that we are restricted to using permutation-invariant measurements. \textit{If process matrices are permutation-invariant}, but not the measurement operators, then we can say that we are making measurements relative to some background reference frame, but that what we observe is permutation-invariant---choosing a different reference frame will give us the same statistics. \textit{If both process and measurements are permutation-invariant}, then we have totally abandoned the concept of a reference frame as a physically meaningful idea. It is sometimes convenient to focus on one of these three mathematically equivalent points of view. For example, by requiring that process matrices $W$ are permutation-invariant, $W=\mathcal{G}[W]$, we can see that even with as strict a constraint as permutation-invariance one still obtains nontrivial behaviour of processes. Consider the process matrix \begin{multline} W = \frac{1}{4}\big[\mathbbm{1}^{\otimes4} + a'_0\sigma_z \mathbbm{1}\sigma_z\mathbbm{1} - a'_1(\sigma_z\mathbbm{111} + \mathbbm{11}\sigma_z\mathbbm{1}) \\ -a'_2(\sigma_z\mathbbm{11}\sigma_z+\mathbbm{1}\sigma_z\sigma_z\mathbbm{1}) +a'_3(\sigma_z\mathbbm{1}\sigma_z\sigma_z+\sigma_z\sigma_z\sigma_z\mathbbm{1}) \\ +a'_4(\sigma_z\mathbbm{1}\sigma_x\sigma_x-\sigma_z\mathbbm{1}\sigma_y\sigma_y+\sigma_x\sigma_x\sigma_z\mathbbm{1}-\sigma_y\sigma_y\sigma_z\mathbbm{1})\big], \label{eq:bipartite_causal_indefinite} \end{multline} which was presented in reference \cite{branciard:2015}, with coefficients $\vb{a'} = (0.0390, 0.3355, 0.2451, 0.4291, 0.2097)$. In eq.~\eqref{eq:bipartite_causal_indefinite} we have omitted labels, so that $ABCD=A^{A_I}B^{A_O}C^{B_I}D^{B_O}$. As shown in \cite{branciard:2015}, this process can violate a causal inequality---a device-independent test for indefinite causal order similar to a Bell inequality.
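This process is easy to verify numerically. The Python sketch below is our own cross-check: it reconstructs $W$ from the Pauli expansion above, implements the swap of eq.~\eqref{eq:U-def} as a direct permutation of tensor factors, and tests invariance, positivity and normalisation. (Because the published coefficients are rounded to four decimals, the smallest eigenvalue may come out marginally negative at machine precision.)
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

a = [0.0390, 0.3355, 0.2451, 0.4291, 0.2097]
# factor order A_I, A_O, B_I, B_O, as in eq. (bipartite_causal_indefinite)
W = 0.25 * (kron(I2, I2, I2, I2)
            + a[0] * kron(Z, I2, Z, I2)
            - a[1] * (kron(Z, I2, I2, I2) + kron(I2, I2, Z, I2))
            - a[2] * (kron(Z, I2, I2, Z) + kron(I2, Z, Z, I2))
            + a[3] * (kron(Z, I2, Z, Z) + kron(Z, Z, Z, I2))
            + a[4] * (kron(Z, I2, X, X) - kron(Z, I2, Y, Y)
                      + kron(X, X, Z, I2) - kron(Y, Y, Z, I2)))

def swap_parties(W):
    # permute the four qubit factors (A_I,A_O,B_I,B_O) -> (B_I,B_O,A_I,A_O)
    T = W.reshape([2] * 8).transpose(2, 3, 0, 1, 6, 7, 4, 5)
    return T.reshape(16, 16)

print(np.allclose(swap_parties(W), W))   # permutation invariant
print(np.linalg.eigvalsh(W).min())       # >= 0 up to coefficient rounding
print(np.trace(W).real)                  # 4 = d_{A_O} d_{B_O}
\end{verbatim}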
This process therefore represents a minimal example of a permutation-invariant process with no definite causal order (in the sense that it involves only two parties, each with a single-qubit system). There is another significant consequence that arises from imposing permutation invariance on process matrices. Consider, for simplicity, the framework in which we impose permutation invariance on processes but not on measurements. Take a process representing a state $\varrho$ prepared in laboratory $A$ and sent to laboratory $B$ through a channel $T$, $W=\varrho^{A_I}T^{A_OB_I}\mathbbm{1}^{B_O}$. This is not permutation-invariant: $A$ can signal to $B$, but $B$ cannot signal to $A$. We can make the process invariant by taking the mixture $W^{\mathrm{inv}}=(\varrho^{A_I}T^{A_OB_I}\mathbbm{1}^{B_O}+\varrho^{B_I}T^{B_OA_I}\mathbbm{1}^{A_O})/2$, noting the change in superscripts in the second term. $W^{\mathrm{inv}}$ may be permutation-invariant, but we have lost the ability to determine whether the state $\varrho$ was prepared in laboratory $A$ and then sent to laboratory $B$, or the reverse: we cannot perform a measurement that will tell us \textit{where} the state preparation occurred. We have lost a way to label laboratories or, equivalently, a definition of spacetime points---we no longer have a reference frame for spacetime. This appears to be a general feature of background independence, as it is also found in general relativity. A striking consequence is that permutation-invariant processes cannot be causally ordered, except for the trivial case of non-signalling processes. A related phenomenon is that one cannot have an instrument where all operations are (a) products of local operations, and (b) permutation-invariant (aside from the degenerate case where each of the measurement operators $M_i=N_i N_i...N_i$ acts identically on every laboratory). This arises because any permutation invariant product of local measurement operators $M_{i_1}M_{i_2}...M_{i_n}$ must satisfy $M=U_gMU_g^\dag$ for all $U_g$ and therefore $M_i=M_j$ for all $i, j$. This might appear alarming: one of the fundamental tenets of the process matrix formalism is that measurements can be performed locally. However, in the next section we will discuss how a definition of locality can be recovered. \section{Invariance with a reference frame.} Although in the previous section we found that permutation invariance removes the distinction between points in spacetime, it is possible to recover a definition of spacetime points using a \textit{material reference frame}---a `rods and clocks' reference frame constructed out of physical systems. This builds upon results from the theory of quantum communication without a (shared) reference frame \cite{bartlett:2007}. The idea of a material reference frame is to take the original process matrix, which identifies the Hilbert spaces (e.g.~$A_IA_O$, $B_IB_O$ etc.) with local regions labelled by spacetime events, and add to each laboratory a physical \textit{reference} system whose quantum state encodes a label uniquely specifying that laboratory. Then, a local observer can measure this reference system to obtain information about which region of space they occupy. In this way, we have encoded the information from the old abstract reference frame into a physical, observable reference frame.
This being done, the invariant process is simply the sum of all possible permutations acting on the extended process (consisting of the system and reference frame), which gives \begin{equation} W^\text{inv} = \frac{1}{n!}\mathcal{L}(W), \end{equation} where \begin{equation} \mathcal{L}(A) := \sum_{g\in S_n}U^{SR}_gA^S[01...(n-1)]^{R_I}\mathbbm{1}^{R_O}{U^{\dag}_g}^{SR} \label{eq:process-inv} \end{equation} is a superoperator that applies to arbitrary operators (not necessarily process matrices). In eq.~\eqref{eq:process-inv}, the superscript on $A^S$ denotes that it is a part of the \textit{system} space $S=S^1_IS^1_O...S^n_IS^n_O$, while the superscripts $R=R_IR_O=R^1_IR^1_O...R^n_IR^n_O$ denote the \textit{reference frame} space, which contains inputs and outputs. We have used the notation $[\psi]=\ketbra{\psi}{\psi}$, so that $[01...(n-1)]^{R_I}=\ketbra{0}{0}^{R^1_I}...\ketbra{n-1}{n-1}^{R^n_I}$. Finally, $U^{SR}_g=U^S_gU^R_g$, which acts separately on the system and reference frame spaces. $U_g$ is a representation of the permutation $g$. This means that a given permutation will act on the input and output spaces of both the system and the reference frame together, so that each reference system remains associated with its corresponding laboratory, and each output space remains associated with its corresponding input space. Essentially, we have moved from a particular process $W$ to an \textit{equivalence class} of processes (the terms in eq.~\eqref{eq:process-inv}, related by permutations) described by $\mathcal{L}(W)$, which we treat as the fundamentally meaningful physical object, just as we consider equivalence classes of diffeomorphism-related spacetimes as the meaningful physical system in relativity. Using eq.~\eqref{eq:process-inv}, we can construct permutation-invariant processes that reproduce the statistics of arbitrary, non-invariant processes. However, as a consequence of adding a locally observable reference system to each laboratory, instruments now need to be extended so that the probability of \textit{some} measurement occurring is one. This completion turns out to be somewhat arbitrary, suggesting that there exist physically distinct instruments that are \textit{indistinguishable} when using any reference frame. To obtain permutation-invariant processes and measurements, we use the following maps: \begin{align} W\to W^\text{inv}&\equiv \frac{1}{n!}\mathcal{L}(W),\label{eq:pm-inv} \\ M_{i_1...i_n}^S&\equiv M^{S^1}_{i_1}...M^{S^n}_{i_n} \nonumber\\ \to M^\text{inv}_{i_1...i_n}&\equiv\mathcal{L}(M^{S^1}_{i_1}...M^{S^n}_{i_n})+\frac{1}{Nd_{O}}\left(\mathbbm{1}^{SR}-\mathcal{L}\left(\mathbbm{1}^{S}\right)\right). \label{eq:ins-inv} \end{align} $W^\text{inv}$ and $M^\text{inv}_{i_1...i_n}$ are now invariant under the action of $S_n$. In the appendix, we prove that $W^\text{inv}$ are valid processes, that the $M^{\mathrm{inv}}_{i_1,...,i_n}$ are each valid elements of instruments, and that $\sum_{\{i\}}M^\mathrm{inv}_{i_1,...,i_n}$ is a CPTP map. In addition, we can show that the Born rule is maintained by the permutation invariance. Using Lemma \ref{multiplication} from the appendix, \begin{align} \begin{split} &\Tr[W^\text{inv}M_i^\text{inv}]\\ &=\Tr[\frac{1}{n!}\mathcal{L}(W)(\mathcal{L}(M_i)+\frac{1}{N}(\mathbbm{1}^{SR}-\mathcal{L}(\mathbbm{1}^S)))] \\ &= \Tr[\frac{1}{n!}\mathcal{L}(WM_i)]+\Tr[\frac{1}{n!}(\mathcal{L}(W)-\mathcal{L}(W))] \\ &= \Tr[\frac{1}{n!}\mathcal{L}(WM_i)]=\Tr[WM_i].
\end{split} \end{align} In the previous section, we mentioned that it is impossible to have an instrument in which all elements are both permutation-invariant and decompose into a product of local measurements. Since one of the central ideas of the process matrix formalism is locality, this was surprising. Here, we see that, \textit{conditionally on measuring in a particular reference frame}, the elements of a permutation-invariant instrument once more decompose into local measurements. Thus, we recover our definition of locality and see that it is only meaningful relative to a physical reference frame. \section{Symmetry and superselection.} Usually, symmetry constraints in quantum mechanics give rise to superselection rules on allowed states. That is, states have some `definite property' and coherences between different `types' of that property cannot exist. The archetypal example of a superselection rule is a $U(1)$ gauge symmetry. For example, electromagnetism obeys a global $U(1)$ symmetry. This symmetry is associated with a superselection rule for electric charge: states can have any integer value of charge, but they cannot be in a superposition of two different charge values. However, it is possible to have a classical statistical mixture of positive and negative charge, such as for example when there is some classical uncertainty as to the nature of the particle being prepared such as an electron or positron. The reason superselection rules arise can be seen by decomposing the Hilbert space of states in terms of copies of irreducible representations of the symmetry group (in our case the group of permutations $S_n$). A Hilbert space with a representation $U_g$ of a group $G$ can be decomposed into a direct sum of \textit{charge sectors} $\mathcal{H}_q$, each containing an inequivalent representation of $G$. Each charge sector can in turn be decomposed into a tensor product of a \textit{gauge space} $\mathcal{M}_q$, carrying an irreducible representation of $G$, and a \textit{multiplicity space} $\mathcal{N}_q$, carrying an identity representation of $G$. The entire space decomposes as \begin{equation} \mathcal{H}=\bigoplus_q \mathcal{M}_q\otimes \mathcal{N}_q ,\end{equation} so that each charge sector contains a number of copies of a particular irreducible representation. Each inequivalent representation corresponds to a different `type' of charge (in the $U(1)$ example, {the number of elementary electric charges}). In this decomposition, the \textit{twirl} - which, we recall, is the `average over all transformations' superoperator - can be expressed as \cite{bartlett:2007} \begin{equation} \mathcal{G}=\sum_q(\mathcal{D}_{\mathcal{M}_q}\otimes \mathcal{I}_{\mathcal{N}_q})\circ \mathcal{P}_q \label{eq:rep-twirl} ,\end{equation} where $\mathcal{D}$ is the completely depolarising map that sends each state to the maximally mixed state, $\mathcal{I}$ is the identity map, and $\mathcal{P}_q$ is the projector onto $\mathcal{H}_q$. Eq.~\eqref{eq:rep-twirl} tells us that linear operators that are $G$-invariant (and therefore twirl-invariant) must decompose as \begin{equation} A=\sum_q \frac{1}{d_{\mathcal{M}_q}}\mathbbm{1}_{\mathcal{M}_q} \otimes A_{\mathcal{N}_q} \label{eq:twirl-inv-process} ,\end{equation} where $d_{\mathcal{M}_q}=\text{dim}(\mathcal{M}_q)$. This restriction is a \textit{superselection rule}: requiring that allowed operators are block-diagonal in the different inequivalent representations is the same as saying that there can be no coherences between different charges. 
Additionally, eq.~\eqref{eq:twirl-inv-process} gives us information about the physical degrees of freedom associated with each `type' of charge: for a charge $q$, the physical state space reduces to the invariant subspace $\mathcal{N}_q$. So goes the typical interpretation of a superselection rule: physical objects have some well-defined charge that can be subject to classical uncertainty but not quantum indeterminacy. It turns out that, for processes, this standard interpretation fails. The reason it fails is that some inequivalent representations do not contain \textit{any} valid processes (whether there are \textit{no} representations that contain valid processes is an open question). Here, we will show that for any $n$-partite process with two-dimensional (qubit) local Hilbert spaces, the symmetric and antisymmetric representations never contain valid processes. First, we will consider the base case of a bipartite process, and then prove by induction that this will hold for a process with any number of laboratories, as long as each laboratory carries a single qubit. A bipartite process with two-dimensional input and output spaces gives rise to a 16-dimensional Hilbert space spanned by $\ket{i}^{A_I}\ket{j}^{A_O}\ket{k}^{B_I}\ket{l}^{B_O}\equiv \ket{ijkl},\ i,j,k,l\in \{0,1\}$. Permutations of the two laboratories are obtained by the action of $S_2$, which has two elements: the identity element and the swap element $U_{AB}\ket{ijkl}=\ket{klij}$. There are two inequivalent representations, the symmetric and antisymmetric representations (denoted $\mathcal{W}^+$ and $\mathcal{W}^-$), which are respectively spanned by \begin{align} \ket{\psi_S^{(1)}}&=\ket{ijij},\ i,j\in\{0,1\}, \label{eq:sym-basis-1}\\ \ket{\psi_S^{(2)}}&=\frac{1}{\sqrt{2}}(\ket{ijkl}+\ket{klij}),\ i,j\neq k,l \label{eq:sym-basis-2} ,\end{align} for the `symmetric representation', and \begin{align} \ket{\psi_A}=\frac{1}{\sqrt{2}}(\ket{ijkl}-\ket{klij}),\ i,j\neq k,l \label{eq:antisym-basis} ,\end{align} for the `anti-symmetric'. In all, the symmetric representation is 10-dimensional, and the antisymmetric is 6-dimensional. The superselection rule tells us that any physically realisable (permutation invariant) process must have the form \begin{equation} W^{A_IA_OB_IB_O}=W^++W^- \label{eq:sym-process} ,\end{equation} where $W^+=\sum_{i,j}\alpha_{ij}\ketbra{w^{+}_i}{w^{+}_j}$ and $W^-=\sum_{ij}\beta_{ij}\ketbra{w^{-}_i}{w^{-}_j}$, with $w^{\pm}_i$ denoting basis elements of the symmetric $(+)$ or antisymmetric $(-)$ representations given in eqs.~\eqref{eq:sym-basis-1}-\eqref{eq:antisym-basis}. Matrices of the form of eq.~\eqref{eq:sym-process} will not in general be valid processes. Process matrices must also satisfy eqs.~\eqref{eq:pm-positive} and \eqref{eq:pm-normalised}. Solving algebraically for a closed-form constraint on the diagonals (which can be done with a computer algebra program such as Mathematica, or by hand) reveals that the trace of any $W^+$ or $W^-$ must be zero. This violates the normalisation constraint of eq.~\eqref{eq:pm-normalised}. We can use the result for bipartite qubit processes as the base case to show that for any number of qubit laboratories, there will be no valid processes living in the symmetric or antisymmetric representations. There are two essential parts to this argument. The first is that, given a process matrix $W$, tracing out any number of laboratories must result in a valid process.
In particular, for a process $W^{S^1...S^n}$ with $n$ laboratories, $\frac{1}{d_{S^1...S^n\setminus S^iS^j}}\Tr_{S^1...S^n\setminus S^iS^j}[W^{S^1...S^n}]=\overset{\sim}{W}{}^{S^iS^j}$, where $\Tr_{S^1...S^n\setminus S^iS^j}$ denotes the trace over all laboratories except $S^i$ and $S^j$ and $d_{S^1...S^n\setminus S^iS^j}$ is the dimension of all the spaces except $S^i, S^j$, must be a valid process. $\overset{\sim}{W}{}^{S^iS^j}$ is known as a reduced process. The second part of the argument is that for any state living in the symmetric (antisymmetric) representation of $S_n$, any $(n-1)$-party subsystem of that state will be in the symmetric (antisymmetric) representation of $S_{n-1}$. To see this, observe that we can write any $n$-partite state $\ket{\psi}$ as \begin{equation} \ket{\psi}=\sum_j c_j \ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n} ,\end{equation} where $\ket{\psi_j}$ is an $(n-1)$-partite state, $c_j$ are some coefficients, and $\ket{j}$, $j=1,...,n$ is a basis state of $S^n$. Then, we have \begin{align} \bra{k}^{S^n}\ket{\psi} &= \bra{k}^{S^n}\sum_j c_j\ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n} \nonumber\\ &= c_k\ket{\psi_k}^{S^1...S^{n-1}} .\end{align} If $\ket{\psi}$ lives in the symmetric representation, then $U_g\ket{\psi}=\ket{\psi}\forall g\in G$. In particular, this holds for all $g$ that leave the state $\ket{j}^{S^n}$ in system $S^n$ unchanged. From this, we can see that, for $g\in S_{n-1}$ and $U_g$ acting on the first $n-1$ subsystems, \begin{align} \begin{split} &\qquad U_g\ket{\psi} = \ket{\psi} \\ &\Rightarrow \sum_j c_j U_g\ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n} \\ &\qquad= \sum_j c_j\ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n}\\ &\Rightarrow \bra{k}^{S^n}\sum_j c_j U_g\ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n} \\ &\qquad= \bra{k}\sum_j c_j\ket{\psi_j}^{S^1...S^{n-1}}\ket{j}^{S^n} \\ &\Rightarrow c_k U_g\ket{\psi_k}^{S^1...S^{n-1}} = c_k\ket{\psi_k}^{S^1...S^{n-1}} \\ &\Rightarrow U_g\ket{\psi_k}^{S^1...S^{n-1}} = \ket{\psi_k}^{S^1...S^{n-1}}. \end{split} \end{align} Therefore, the $\ket{\psi_j}^{S^1...S^{n-1}}$ will all be in the symmetric subspace, and $\Tr_{S^{n}}[\ketbra{\psi}{\psi}]/d_{S^{n}}$ will be a linear combination of operators on the symmetric subspace. This holds analogously for the antisymmetric subspace, where $U_g\ket{\psi}=\text{sgn}(g)\ket{\psi}$. The same result holds if we `project out' any number of subspaces. Taking the partial trace of a matrix in the (anti)symmetric subspace will therefore result in a matrix that is still in the (anti)symmetric subspace, where we define the (anti)symmetric subspace for matrices as the space of matrices that act on the (anti)symmetric subspace for states. We will equivalently say that these matrices belong to the (anti)symmetric representation. Combining these two arguments, we see that for an $n$-partite process $W$ living in the symmetric (antisymmetric) representation of $S_n$, $\overset{\sim}{W}{}^{S^iS^j}$ must be a valid bipartite process and live in the symmetric (antisymmetric) representation for all $i,j=1,\ldots,n$, $i\neq j$. But, we saw that there are no valid symmetric or antisymmetric bipartite processes, so this is a contradiction. This tells us that there are no symmetric or antisymmetric $n$-partite qubit processes. This proof generalises to any local Hilbert space dimension once one has proved the base case.
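The representation-theoretic bookkeeping above is simple to confirm numerically; a minimal Python check (our own) builds the swap operator on the 16-dimensional bipartite qubit space and recovers the dimensions 10 and 6 of the symmetric and antisymmetric representations:
\begin{verbatim}
import numpy as np

d = 4                                  # dim of A_I A_O = dim of B_I B_O
U = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        U[j * d + i, i * d + j] = 1.0  # U |i>_A |j>_B = |j>_A |i>_B

P_sym = (np.eye(d * d) + U) / 2
P_asym = (np.eye(d * d) - U) / 2
assert np.allclose(U @ U, np.eye(d * d))    # the swap is an involution
print(np.trace(P_sym), np.trace(P_asym))    # 10.0  6.0
\end{verbatim}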
\section{Conclusion} In this paper, we have used the process matrix formalism to show that it is possible to describe quantum causal order with background independence built in, under the assumption of a discretised spacetime. We have also seen that some properties of background independent processes have counterparts in general-relativistic background independence, e.g.~the `washing out' of spacetime and the need to construct a material reference frame to recover a definition of spacetime points. Our results show that background independence is consistent with the principles of the process matrix formalism, including, with some reinterpretation, locality---which must be defined \textit{relative} to a reference frame. This follows from our discussion on local vs.~background independent measurements. We also investigated the general symmetry constraints imposed on processes by permutation invariance, and discovered that the constraint is stronger than the typical superselection rule: the standard interpretation is simply that physical systems must have a well-defined `charge', but for permutation-invariance not all charges correspond to physically realisable processes. Instead, valid processes can be block-diagonal combinations of subprocesses that are not themselves physically realisable. This implies that background independence in quantum mechanics cannot be interpreted analogously to other known symmetries of nature, and that a new interpretation may be necessary. Whether or not this `charge' can be taken seriously as a physical quantity is, for the moment, an open question. Finally, our attention has focused on permutations---namely relabellings of laboratories. These can be understood as ``classical'' coordinate transformations, which do not change, for example, whether a particle is localised at a point or in a superposition. It has been proposed that combining quantum mechanics and general relativity requires considering more general, ``quantum'' coordinate transformations \cite{hardy2016, zych:2020}. It is an interesting open question whether it is possible to extend our treatment to include such ``quantum relabellings''. \begin{acknowledgments} This work was partially supported through an Australian Research Council Discovery Early Career Researcher Award (DE170100712). We acknowledge the traditional owners of the land on which the University of Queensland is situated, the Turrbal and Jagera people. \end{acknowledgments}
{ "attr-fineweb-edu": 1.868164, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfR04uzlh9r2JtqSb
\section{Introduction} An important problem of long standing in quantum mechanics and nanoscale photonics concerns the behaviour of resonant modes with complex propagation constants. We will be concerned here with the fundamental problem in which those resonant modes arise in the context of scattering of electromagnetic waves by a sphere \cite{Mie1908}, but of course there has been a strong overlap between this vector problem and the scalar problem arising in quantum mechanical scattering \cite{CalderonPeierls1976,zeld61,perezeld,Calderon2010}. In both contexts, investigators have wished to employ analyses based on resonant modes, but have been worried by difficulties, real or apparent, arising from the behaviour of these modes at large distances from the scatterer, where the complex wave numbers give rise to divergent but oscillating wave functions which create problems when calculating energies or inner products. Some references to the difficulties attached to the problems in electromagnetic scattering, and to the applications which motivate their resolution, may be found in \cite{Dubovik1990}-\cite{Mulj16}. The approach we follow here responds to a comment, or challenge, in the paper by Zel'dovich \cite{zeld61}. He notes that the absolute value of integrals associated with complex modes may be exponentially divergent at large distances, and that this cannot be remedied by exponential "kill" functions which go to zero at large distances, and which are turned off after a convergent answer is obtained. He then suggests the use of Gaussian "kill" functions, which can be turned off gradually by letting their width increase, but which for any finite width generate a well-defined answer. He then prefers to follow a different approach, based on analytic continuation and perturbation theory. We propose to follow through the Gaussian approach here in all its details for the problem of Mie scattering, a procedure for which the authors propose the shorthand "Killing Mie softly". The approach used relies extensively on analytic results obtained by McPhedran, Dawes and Scott \cite{MDS92} (hereafter referred to as MDS), which are summarised in Section 2. Section 3 discusses the details necessary for treating the limit as the Gaussian width tends to infinity, and derives a key formula giving delta-function terms which occur in some integrals. Section 4 further extends the results of McPhedran, Dawes and Scott \cite{MDS92}, giving more general results for certain Bessel products needed in later sections. Section 5 contains a discussion of spherical Bessel function integrals needed for the treatment of complex Mie scattering modes. It contains a numerical illustration of the effectiveness of the Gaussian "soft killing": see Fig. \ref{figkill} and its discussion. Section 6 assembles the expressions and integrals relating to Mie theory for electric and magnetic type modes, and also establishes analytic results relevant to inner products and normalisation factors of modes. \section{The Results of McPhedran, Dawes and Scott} The basic integral of MDS \cite{MDS92} contains a product of Bessel functions $J$ and $Y$: \begin{equation} {\cal I}_{JY}(b,K,k,\eta)=\int_0^\infty x \exp(-\eta x^2) J_b(K x) Y_b(k x) d x, \label{mds1} \end{equation} where $b$ is real and non-negative and $\eta$ is real and positive. The case of integer $b$ is useful for scattering problems involving cylinders, while half-integer $b$ is useful for scattering by spheres.
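Before setting up the Bessel-function machinery, the mechanism is easily seen in a toy one-dimensional example. In the Python sketch below the integrand is a hypothetical one of our own choosing, standing in for an exponentially growing resonant mode; the Gaussian factor makes the integral finite for every width parameter $\eta>0$, and the values settle towards a limit as $\eta$ decreases, exactly as Zel'dovich's argument requires:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# toy "resonant mode": grows like exp(0.1 x) while oscillating; an
# exponential cutoff exp(-eps x) with eps < 0.1 could not tame it,
# but a Gaussian kill function exp(-eta x^2) always can
f = lambda x: np.exp(0.1 * x) * np.cos(x)

for eta in [0.3, 0.1, 0.03, 0.01, 0.003]:
    val, _ = quad(lambda x: f(x) * np.exp(-eta * x**2), 0, np.inf, limit=400)
    print(eta, val)
\end{verbatim}
For this toy case the limiting value can also be found in closed form, $\mathrm{Re}\,[-1/(0.1+\mathrm{i})]\approx -0.099$, which is the value an analytic continuation treatment would assign.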
The integral (\ref{mds1}) can be paired with an integral whose result is known (for example from Gradshteyn and Ryzhik \cite{GR}): \begin{equation} {\cal I}_{JJ}(b,K,k,\eta)=\int_0^\infty x \exp(-\eta x^2) J_b(K x) J_b(k x) d x=\frac{\exp [-(K^2+k^2)/4 \eta] I_b (K k/2 \eta)}{2\eta}. \label{mds2} \end{equation} MDS derive the following reduction formula for ${\cal I}_{JY}(b,K,k,\eta)$: \begin{eqnarray} {\cal I}_{JY}(b,K,k,\eta&)=&\frac{-1}{2\pi \eta} \exp[-(K^2+k^2)/4 \eta]\left\{{\cal H}(b,K,k,\eta)\right.\nonumber\\ &&\left. -2\pi\eta \exp\left[\frac{K k}{2\eta}\right]\int_0^\infty \exp(-\eta x^2) J_b(\sqrt{K k} x) Y_b(\sqrt{K k} x) d x\right\},\nonumber\\ && \label{mds3} \end{eqnarray} or, scaling the integration variable, \begin{eqnarray} {\cal I}_{JY}(b,K,k,\eta&)=&\frac{-1}{2\pi \eta} \exp[-(K^2+k^2)/4 \eta]\left\{{\cal H}(b,K,k,\eta)\right.\nonumber\\ &&\left. -\frac{2\pi\eta}{Kk} \exp\left[\frac{K k}{2\eta}\right]\int_0^\infty \exp\left(-\frac{\eta}{Kk} x^2\right) J_b( x) Y_b(x) d x\right\}.\nonumber\\ && \label{mds3a} \end{eqnarray} In (\ref{mds3}) the following finite-range integral has been introduced: \begin{equation} {\cal H}(b,k,K,\eta)=\int_1^{K/k} u^{(b - 1)} \exp[\frac{K k}{4\eta} (u + 1/u)] du. \label{mds4} \end{equation} This integral may be expressed in terms of Shkarofsky functions \cite{Peter86}, but is easily evaluated numerically. Making the substitution $v=1/u$ in the integral, we find the symmetry relation \begin{equation} {\cal H}(b,K,k,\eta)=-{\cal H}(-b,k,K,\eta). \label{mds5} \end{equation} The second integral occurring in (\ref{mds3}) can be evaluated in closed form using properties of Meijer $G$ functions, with \begin{equation} \int_0^\infty x \exp(-a x^2) J_b(x) Y_b(x) d x=\frac{1}{2\pi a} \exp[-1/(2 a)]\left[\pi \cot(\pi b) I_b\left(\frac{1}{2 a}\right)+b h_{-1,b}\left( \frac{-1}{2 a}\right)\right]. \label{mds6} \end{equation} Here $a=\eta/(K k)$, and $h_{-1,b}$ denotes an associated Bessel function~\cite{Luke62}, with the expansion: \begin{equation} h_{-1,b}\left( \frac{-1}{2 a}\right)=-\frac{\exp[1/(2 a)]\sqrt{\pi}}{b\sin(\pi b)}\sum_{l=0}^\infty \frac{(-1/a)^l \Gamma (1/2+l)}{\Gamma(l-b+1)\Gamma (l+b+1)}. \label{mds7} \end{equation} Note the symmetry relation \begin{equation} h_{-1,b}\left( a\right)=h_{-1,-b}\left(a \right). \label{mds8} \end{equation} Putting these elements together, the result of MDS is \begin{eqnarray} {\cal I}_{JY}(b,K,k,\eta)&=&-\frac{ \exp[-(K^2+k^2)/4 \eta]}{2\pi\eta} \left\{{\cal H}(b,k,K,\eta) \right.\nonumber\\ & & \left. -\left[\pi \cot (b \pi) I_b\left(\frac{ K k}{2\eta}\right) +b h_{-1,b}\left( \frac{-k K}{2\eta}\right)\right]\right\}, \nonumber\\ && \label{mds9} \end{eqnarray} or \begin{eqnarray} {\cal I}_{JY}(b,K,k,\eta)&=&-\frac{ \exp[-(K^2+k^2)/4 \eta]}{2\pi\eta} {\cal H}(b,k,K,\eta) \nonumber\\ & & + \frac{\exp\left[\frac{-(K^2+k^2)}{4\eta}\right]}{2\pi \eta} \left[\pi \cot (b \pi) I_b\left(\frac{ K k}{2\eta}\right) +b h_{-1,b}\left( \frac{-k K}{2\eta}\right)\right]. \nonumber\\ && \label{mds10} \end{eqnarray} \subsection{The Case of Integer Index $b$} It will be noted that a difficulty arises with the result (\ref{mds10}) if $b$ is an integer, since the multiplying factor in $h_{-1,b}$ involves $\sin(\pi b)$ in its denominator. This can be overcome by the use of another associated Bessel function \cite{Luke62}: \begin{equation} H_{\mu, \nu}(z)=h_{\mu, \nu}(z)-\frac{\sqrt{\pi}\Gamma(\mu-\nu+1)\Gamma(\mu+\nu+1)}{2^{\nu+1} \Gamma(\mu+3/2)} \left[I_\nu(z)+\frac{K_\nu(z) \sin (\nu-\mu)\pi}{\pi \cos (\mu \pi)} \right].
\label{mds11} \end{equation} We are interested in the case $\mu=-1$, for which (\ref{mds11}) simplifies to \begin{equation} H_{-1, \nu}(z)=h_{-1, \nu}(z)+\frac{1}{\nu} \left[\frac{ \pi I_\nu(z)}{\sin (\pi \nu)}+K_\nu(z) \right]. \label{mds12} \end{equation} Now \begin{eqnarray} & b h_{-1,b}\left(\frac{-1}{2 a}\right)+\pi \cot(\pi b)I_b\left(\frac{1}{2 a}\right)=&b H_{-1,b}\left(\frac{-1}{2 a}\right) -\frac{\pi}{\sin (\pi b)} I_b\left(\frac{-1}{2 a}\right) \nonumber \\ && -K_b\left(\frac{-1}{2 a}\right) +\pi \cot (\pi b) I_b\left(\frac{1}{2 a}\right). \label{mds131} \end{eqnarray} In (\ref{mds131}) we use the analytic continuation expressions \cite{NIST} (10.34): \begin{equation} I_b\left(\frac{-1}{2 a}\right)= e^{\pi b i} I_b\left(\frac{1}{2 a}\right), ~~ K_b\left(\frac{-1}{2 a}\right)= e^{-\pi b i} K_b\left(\frac{1}{2 a}\right) -\pi i I_b\left(\frac{1}{2 a}\right). \label{mds132} \end{equation} The latter expression in (\ref{mds132}) makes evident the branch-cut behaviour of the Macdonald function near the negative real axis. We collect all the terms in $I_b(1/2 a)$ on the right-hand side of (\ref{mds131}), which cancel, leaving \begin{equation} b h_{-1,b}\left(\frac{-1}{2 a}\right)+\pi \cot(\pi b)I_b\left(\frac{1}{2 a}\right)=b H_{-1,b}\left(\frac{-1}{2 a}\right) -e^{-i\pi b} K_b\left(\frac{1}{2 a}\right). \label{mds13} \end{equation} Hence, (\ref{mds6}) becomes for $n$ an integer: \begin{equation} \int_0^\infty x \exp(-a x^2) J_n(x) Y_n(x) d x=\frac{1}{2\pi a} \exp[-1/(2 a)]\left[ n H_{-1,n}\left(\frac{-1}{2 a}\right)+(-1)^{n+1} K_n\left(\frac{1}{2a}\right)\right]. \label{mds14} \end{equation} The following terminating series can be used for $H_{-1,n}$: \begin{align} & H_{-1,n}\left(\frac{-1}{2 a}\right)=-2 a \exp \left(\frac{1}{2 a}\right) \left[ 1+\frac{2 a}{3} (1-n^2)+\frac{4 a^2}{15} (1-n^2)(4-n^2)\right. \label{mds15} \\ &\left. +\frac{8 a^3}{105} (1-n^2)(4-n^2)(9-n^2)+\frac{16 a^4}{945} (1-n^2)(4-n^2)(9-n^2)(16-n^2)+\ldots \right].\nonumber \end{align} Particular cases of equation (\ref{mds14}) are: \begin{equation} \int_0^\infty x \exp(-a x^2) J_0(x) Y_0(x) d x=\frac{1}{2\pi a} \exp[-1/(2 a)]\left[-K_0\left( \frac{1}{2 a}\right)\right], \label{mds16} \end{equation} \begin{equation} \int_0^\infty x \exp(-a x^2) J_1(x) Y_1(x) d x=\frac{-1}{\pi}+\frac{1}{2\pi a} \exp[-1/(2 a)]\left[K_1\left( \frac{1}{2 a}\right)\right], \label{mds17} \end{equation} \begin{equation} \int_0^\infty x \exp(-a x^2) J_2(x) Y_2(x) d x=\frac{-2}{\pi}(1-2 a)+\frac{1}{2\pi a} \exp[-1/(2 a)]\left[-K_2\left( \frac{1}{2 a}\right)\right] \label{mds18} \end{equation} and \begin{equation} \int_0^\infty x \exp(-a x^2) J_3(x) Y_3(x) d x=\frac{-3}{\pi}(1-\frac{16}{3} a+\frac{32}{3} a^2)+\frac{1}{2\pi a} \exp[-1/(2 a)]\left[ K_3\left( \frac{1}{2 a}\right)\right].
\label{mds18bis} \end{equation} As $a\downarrow 0$, we have the following limiting behaviour of these: \begin{equation} \int_0^\infty x \exp(-a x^2) J_0(x) Y_0(x) d x\rightarrow \frac{-1}{2\sqrt{\pi a}} \exp \left( \frac{-1}{a}\right), \label{mds16a} \end{equation} \begin{equation} \int_0^\infty x \exp(-a x^2) J_1(x) Y_1(x) d x\rightarrow \frac{-1}{\pi}+ \frac{1}{2\sqrt{\pi a}} \exp \left( \frac{-1}{a}\right), \label{mds17a} \end{equation} \begin{equation} \int_0^\infty x \exp(-a x^2) J_2(x) Y_2(x) d x\rightarrow \frac{-2}{\pi}(1-2 a)- \frac{1}{2\sqrt{\pi a}} \exp \left( \frac{-1}{a}\right), \label{mds18a} \end{equation} and \begin{equation} \int_0^\infty x \exp(-a x^2) J_3(x) Y_3(x) d x\rightarrow \frac{-3}{\pi}(1-\frac{16}{3} a+\frac{32}{3} a^2)+\frac{1}{2\sqrt{\pi a}} \exp \left( \frac{-1}{a}\right). \label{mds18bisa} \end{equation} The first of these tends to zero as the exponential form $\exp(-1/a)$, while succeeding expressions tend to $-n/\pi$. \section{Some Limits of Integrals} Let us first consider the limit as $\eta\rightarrow 0$ of ${\cal I}_{JJ}(b,K,k,\eta)$, in the sense of generalised functions. This evaluation is done in a way similar to that followed by Lekner, Appendix B, Chapter 4 \cite{Lekner18}. We use the large argument expansion of $I_{b}$: \begin{equation} I_{b}(x)\sim \frac{\exp (x)}{\sqrt{2 \pi x}} \left(1-\frac{(b-1/2) (b+1/2)}{2 x}\right)+\exp(i(b+1/2)\pi)\frac{\exp (-x)}{\sqrt{2 \pi x}} \left(1+\frac{(b-1/2) (b+1/2)}{2 x}\right), \label{nr16} \end{equation} where the first omitted term is of order $1/x^{5/2}$. With $a=1/\sqrt{\eta}$ tending to infinity, we have in (\ref{mds2}) the term \begin{eqnarray} &&\frac{1}{2} a^2 \exp[-(K^2+k^2) a^2/4] I_{b}\left(\frac{K k a^2}{2 }\right) \sim \left\{\frac{1}{2 } a^2 \exp[-(K^2+k^2) a^2/4] \right\} \nonumber \\ && \left[ \frac{\exp (K k a^2/2)}{\sqrt{ \pi K k a^2}} \left(1-\frac{(b-1/2) (b+1/2)}{K k a^2}\right)+\exp(i(b+1/2)\pi) \frac{\exp (-K k a^2/2)}{\sqrt{ \pi K k a^2}} \left(1+\frac{(b-1/2) (b+1/2)}{K k a^2}\right) \right] . \nonumber \\ & & \label{nr17} \end{eqnarray} We complete the squares of the exponential arguments in (\ref{nr17}) to obtain: \begin{eqnarray} &&\frac{1}{2} a^2 \exp[-(K^2+k^2) a^2/4] I_{b}\left(\frac{K k a^2}{2 }\right) \sim \left\{\frac{1}{2 } a^2 \right\} \nonumber \\ && \left[ \frac{\exp (-(K-k)^2 a^2/4)}{\sqrt{ \pi K k a^2}} \left(1-\frac{(b-1/2) (b+1/2)}{K k a^2}\right)+\right.\nonumber \\ & & \left.\exp(i(b+1/2)\pi) \frac{\exp (-(K+ k)^2 a^2/4)}{\sqrt{ \pi K k a^2}} \left(1+\frac{(b-1/2) (b+1/2)}{K k a^2}\right) \right] . \nonumber \\ & & \label{nr18} \end{eqnarray} We recognise the terms in (\ref{nr18}) corresponding to $\delta$ functions: \begin{equation} \frac{a}{2 \sqrt{\pi}}\exp (-a^2 x^2/4)\rightarrow \delta (x), ~{\rm for}~a\rightarrow\infty. \label{nr19} \end{equation} Hence \begin{equation} \lim_{\eta\rightarrow 0} {\cal I}_{JJ}(b,K,k,\eta)=\frac{1}{ \sqrt{K k}} \delta (K-k). \label{nr20} \end{equation} A term $\exp(i(b+1/2)\pi) \delta (k+K)$ in (\ref{nr20}) has been omitted under the assumption that $K+k=0$ can be excluded. Note that the asymptotic expansion of the integrand in (\ref{mds2}) has, with $\eta=0$, the leading term \begin{equation} \frac{1}{\pi \sqrt{K k}} (\cos [(K-k) x] \pm \cos [(K+k) x]). \label{mds2a} \end{equation} This then makes the origin of the full expression (\ref{nr20}) (with both $\delta$ functions included) evident: it is associated with the behaviour of the integrand near the upper limit of integration.
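The limit (\ref{nr20}) is easy to illustrate numerically. The following Python sketch (our own illustration, assuming SciPy) uses the closed form (\ref{mds2}), rewritten in terms of the exponentially scaled Bessel function for numerical stability, and checks that the area of the peak at $K=k$, weighted by $\sqrt{K k}$, tends to unity:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import ive

# Eq. (mds2) rewritten with the scaled Bessel function ive(b, z) = iv(b, z) e^{-z}:
# I_JJ = ive(b, K k/(2 eta)) * exp(-(K - k)^2/(4 eta)) / (2 eta).
b, k = 0.5, 2.0

def I_JJ(K, eta):
    return ive(b, K * k / (2 * eta)) * np.exp(-(K - k) ** 2 / (4 * eta)) / (2 * eta)

# Delta-function check, Eq. (nr20): the weighted area across the peak at K = k
# should tend to 1 as eta -> 0.
for eta in (0.1, 0.01, 0.001):
    area, _ = quad(lambda K: np.sqrt(K * k) * I_JJ(K, eta), 0.5, 3.5,
                   points=[k], limit=200)
    print(eta, area)
\end{verbatim}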
We thus expect similar delta function terms to appear whenever the asymptotic development of the integral generates either a $\cos(K x) \cos(k x)$ or a $\sin(K x) \sin(k x)$ variation. Both these products oscillate ever more rapidly about mean zero as $x\rightarrow \infty$ unless $k=\pm K$ (in which cases they have mean 1/2 or -1/2). We now turn our attention to the behaviour as $\eta\rightarrow 0$ of \begin{equation} {\hat {\cal H}}(b,k,K,\eta)=-\frac{\exp[-(K^2+k^2)/4 \eta]}{2\pi \eta} \int_1^{K/k} u^{(b - 1)} \exp[\frac{K k}{4\eta} (u + 1/u)] du. \label{nr21} \end{equation} The integrand in (\ref{nr21}) increases monotonically as $u$ increases towards the upper limit. We analyse the asymptotic behaviour of the function ${\hat {\cal H}}(b,k,K,\eta)$ by defining $v=K/k-u$, and bringing all terms on the right-hand side of (\ref{nr21}) into a single exponential term. We then expand the argument of the exponential in powers of $v$. Retaining the zeroth, first and second powers of $v$ we find the approximation: \begin{eqnarray} {\hat {\cal H}}(b,k,K,\eta)&\approx &-\frac{1}{2\pi \eta}\int_0^{K/k-1} dv \exp\left\{(b-1) \log\left[\frac{K}{k}\right] +\right. \nonumber\\ && \left. v\left[\frac{ k^3 - k K^2 + 4 k \eta - 4 b k \eta}{4 K \eta} \right]+ v^2 \left[\frac{k^4 + 2 k^2 \eta - 2 b k^2 \eta}{4 K^2 \eta} \right]\right\} .\nonumber \\ && \label{nr22} \end{eqnarray} Mathematica gives an expression for the integral in (\ref{nr22}) involving a combination of two imaginary error functions: \begin{eqnarray} &&{\hat {\cal H}}(b,k,K,\eta)\approx -\frac{1}{2\pi \eta} \times \nonumber \\ && \hspace{-1cm} \frac{\sqrt{\pi } \sqrt{\eta } \left(\frac{K}{k}\right)^b \left(\text{erfi}\left(\frac{4 (b-1) \eta k-K \left(8 (b-1) \eta +K^2\right)-2 k^3+3 k^2 K}{4 \sqrt{\eta } K \sqrt{k^2-2 (b-1) \eta }}\right)-\text{erfi}\left(\frac{-4 b \eta +4 \eta +k^2-K^2}{4 \sqrt{\eta } \sqrt{k^2-2 (b-1) \eta }}\right)\right) \exp \left(\frac{\left(4 (b-1) \eta -k^2+K^2\right)^2}{16 \eta \left(2 (b-1) \eta -k^2\right)}\right)}{\sqrt{k^2-2 (b-1) \eta }} \nonumber \\ && \label{nr23} \end{eqnarray} Of these two terms, the simpler expression is exponentially larger than the more complicated term. Neglecting the latter, and expanding the result as a power series in $\eta$, the first two terms give \begin{equation} {\hat {\cal H}}(b,k,K,\eta)\approx \frac{2 \left(\frac{K}{k}\right)^b}{\pi \left(k^2-K^2\right)}+\frac{8 \eta \left(b k^2-b K^2+k^2+K^2\right) \left(\frac{K}{k}\right)^b}{\pi \left(k^2-K^2\right)^3} +O(\eta^2). \label{nr24} \end{equation} This result, for $\eta \rightarrow 0$, is consistent with the result which comes from applying Watson's expression from p.134 of "A Treatise on the Theory of Bessel Functions" \cite{Watson80}: \begin{equation} \int_0^\infty x J_b(K x) Y_b(k x) d x=\frac{2 \left(\frac{K}{k}\right)^b}{\pi \left(k^2-K^2\right)}, \label{nr25} \end{equation} where the contribution is solely from the lower limit of the integral, with the oscillating contribution from the upper limit being set (or averaged) to zero. This last comment is made in the context of the theory of distributions, not well established at the time Watson wrote his treatise. \subsection{The Series $h_{-1,b}$} We begin with the expansion (\ref{mds7}): \begin{equation} h_{-1,b}\left( \frac{-1}{2 a}\right)=-\frac{\exp[1/(2 a)]\sqrt{\pi}}{b\sin(\pi b)}\sum_{l=0}^\infty \frac{(-1/a)^l \Gamma (1/2+l)}{\Gamma(l-b+1) \Gamma (l+b+1)}.
\label{mds7bis} \end{equation} We expand the general term of this series for large $l$: \begin{eqnarray} \label{hmexp1} &&-\frac{\sqrt{\pi}}{b\sin(\pi b)}\frac{(-1/a)^l \Gamma (1/2+l)}{\Gamma(l-b+1) \Gamma (l+b+1)} = -\frac{\sqrt{\pi}}{b\sin(\pi b)} \nonumber\\ && \left(\frac{-1}{a}\right)^l \frac{1}{l! \sqrt{l+1}} \left(1+\frac{(3-8 b^2)}{8 l}+\frac{(64 b^4+16 b^2-23)}{128 l^2}+O\left(\frac{1}{l^3}\right)\right). \end{eqnarray} In the region of slow convergence of the sum (\ref{mds7bis}), i.e.\ when $a\ll 1$, it is then dominated by the following sum, for which an exact integral form is available [Prudnikov, Vol.~1, 5.2.8.10]: \begin{equation} \sum_{k=0}^\infty \frac{x^k}{k!\sqrt{k+1}}=\frac{2}{\sqrt{\pi}} \int_0^\infty \exp (x e^{-t^2}-t^2) dt. \label{hmexp2} \end{equation} Note that here we have corrected a typographical error in the upper limit of the integral in (\ref{hmexp2}). It is possible to vary the expansion (\ref{hmexp1}), for example by replacing the term $\sqrt{l+1}$ by $\sqrt{l+\alpha}$, where $\alpha$ is a free parameter. One way of choosing $\alpha$ is to make the coefficient of $1/l$ in (\ref{hmexp1}) go to zero. To achieve this, the choice of $\alpha$ required is \begin{equation} \alpha=2 b^2 + 1/4 . \label{alphachoice} \end{equation} We can generalise the sum and integral in (\ref{hmexp2}) as follows: \begin{equation} \sum_{k=0}^\infty \frac{x^k}{k!\sqrt{k+\alpha}}=\frac{2}{\sqrt{\pi}} \int_0^\infty \exp (x e^{-t^2}-\alpha t^2) dt. \label{hmexp3} \end{equation} The derivation of (\ref{hmexp3}) is simple: \begin{eqnarray} \frac{2}{\sqrt{\pi}} \int_0^\infty \exp (x e^{-t^2}-\alpha t^2) dt &=& \frac{2}{\sqrt{\pi}} \int_0^\infty \sum_{k=0}^\infty \frac{x^k \exp (-k t^2)}{k!} \exp (-\alpha t^2) dt \nonumber \\ & &= \sum_{k=0}^\infty \frac{x^k}{k!\sqrt{k+\alpha}}, \label{hmexp4} \end{eqnarray} where to get the final result the two Gaussian terms have been combined and integrated over. Note that this equality holds for all $x>0$, but in the case of interest to us ($x<0$) a problem can arise. To see where the problem comes from, we look for the vanishing of the derivative of the integrand with respect to $t$ in (\ref{hmexp4}): \begin{equation} 2 t (x e^{-t^2}+\alpha )=0 ~{\rm or} ~t=0,~ x e^{-t^2}=-\alpha. \label{hmexp5} \end{equation} The second possibility in (\ref{hmexp5}) has solutions on the real axis when $x<0$ and $t^2=-\log (\alpha/|x|)$. Thus, if the positive quantity $\alpha <-x$, there will be two maxima, with a minimum at $t=0$. In that case, the two peaks correspond to truncated Gaussians, and the equality (\ref{hmexp4}) breaks down. Somewhat counterintuitively, there is a simple way of overcoming this difficulty. We simply split the integrand into two parts, one even and the other odd in the variable $x$. It is easily shown that the integrand in each part has a single derivative zero at $t=0$, with the following results then holding for all $x$, positive or negative: \begin{equation} \sum_{k=0}^\infty \frac{x^{2 k}}{(2 k)!\sqrt{2 k+\alpha}}=\frac{2}{\sqrt{\pi}} \int_0^\infty \cosh (x e^{-t^2})\exp (-\alpha t^2) dt, \label{hmexp6} \end{equation} and \begin{equation} \sum_{k=0}^\infty \frac{x^{2 k+1}}{(2 k+1)!\sqrt{2 k+1+\alpha}}=\frac{2}{\sqrt{\pi}} \int_0^\infty \sinh (x e^{-t^2}) \exp (-\alpha t^2) dt. \label{hmexp7} \end{equation} These results have been confirmed numerically, a task easily carried out for $\alpha$ not small.
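As an explicit version of this numerical confirmation, the following short Python check (our own sketch, assuming SciPy) compares the sums and integral representations (\ref{hmexp6}) and (\ref{hmexp7}) for a negative argument:
\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad

# Compare the even/odd sums (hmexp6)-(hmexp7) with their integral forms.
alpha, x = 1.5, -4.0

even_sum = sum(x ** (2 * j) / (math.factorial(2 * j) * math.sqrt(2 * j + alpha))
               for j in range(60))
odd_sum = sum(x ** (2 * j + 1) / (math.factorial(2 * j + 1) * math.sqrt(2 * j + 1 + alpha))
              for j in range(60))

even_int, _ = quad(lambda t: np.cosh(x * np.exp(-t * t)) * np.exp(-alpha * t * t), 0, np.inf)
odd_int, _ = quad(lambda t: np.sinh(x * np.exp(-t * t)) * np.exp(-alpha * t * t), 0, np.inf)

print(even_sum, 2 / math.sqrt(math.pi) * even_int)   # these two should agree
print(odd_sum, 2 / math.sqrt(math.pi) * odd_int)     # and these two
\end{verbatim}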
\begin{figure}[tbh] \includegraphics[width=6cm]{plt3Dprudrat.pdf}~~\includegraphics[width=6cm]{plt3Dpruddiff.pdf} \caption{The ratio (left) and the difference (right) of the functions (\ref{hmexp6}) and (\ref{hmexp7}) as a function of $x$ and $\alpha$.} \label{fig-limits1} \end{figure} The numerical results in Fig. \ref{fig-limits1} show that the ratio of the sums or integrals in (\ref{hmexp6}) and (\ref{hmexp7}) tends towards unity as $x$ increases, irrespective of the value of $\alpha$. This may be readily understood from the ratio of the integrands, $\tanh (x e^{-t^2})$, which increases towards unity as $x$ increases, for fixed $t$. The difference of the sums or integrals tends towards zero in magnitude as $x$ increases. \begin{figure}[tbh] \includegraphics[width=9cm]{nlfig2plt.pdf} \caption{The sum of the functions (\ref{hmexp6}) and (\ref{hmexp7}) as a function of $x$ for $\alpha=1.5$.} \label{fig-limits2} \end{figure} Fig. \ref{fig-limits2} shows the sum of the functions (\ref{hmexp6}) and (\ref{hmexp7}) as a function of $x$ for $\alpha=1.5$, for both positive and negative $x$. While the sum increases exponentially for $x$ increasing and positive, it remains small for negative $x$, and decreases towards zero as $x$ grows more negative. To obtain an asymptotic form in the region $x\ll 0$, we consider the difference function between (\ref{hmexp6}) and (\ref{hmexp7}) in $x>0$. This function is \begin{equation} \frac{2}{\sqrt{\pi}} \int_0^\infty \exp (-x e^{-t^2}-\alpha t^2) dt , \label{hmexp8} \end{equation} and the integrand has its maximum with respect to variation of $t$ when \begin{equation} t=t_m=\sqrt{\log\left( \frac{x}{\alpha}\right)}. \label{hmexp9} \end{equation} Expanding about $t=t_m$ and retaining the Gaussian components, we obtain the approximation \begin{equation} \frac{2}{\sqrt{\pi}} e^{-\alpha [1+\log(x/\alpha)]} e^{-2\alpha \log(x/\alpha)(t-t_m)^2}. \label{hmexp10} \end{equation} Integrating over the Gaussian approximation we obtain the following estimate for the difference of (\ref{hmexp6}) and (\ref{hmexp7}) in $x>0$: \begin{equation} \frac{2 \exp\left[-\alpha (1+\log (x/\alpha))\right]}{\sqrt{2\alpha \log(x/\alpha)}}. \label{hmexp11} \end{equation} Fig. \ref{fig-limits3} illustrates the accuracy of the Gaussian approximation (\ref{hmexp10}) and the resulting estimate (\ref{hmexp11}). From the estimate (\ref{hmexp11}) we see that the limit as $x\rightarrow -\infty$ of the sum of the functions (\ref{hmexp6}) and (\ref{hmexp7}) is zero, for any $\alpha>0$. This is also evident from the plot on the right in Fig. \ref{fig-limits1}. \begin{figure}[tbh] \includegraphics[width=5cm]{cfnegasym.pdf}~~\includegraphics[width=5cm]{evploddpltc.pdf} \caption{(Left) The difference of the functions (\ref{hmexp6}) and (\ref{hmexp7}) in $x>0$ as a function of $x$ for $\alpha=1.5$ (blue line) and its Gaussian approximation (\ref{hmexp10}) (red dashed line). (Right) The sum of the functions (\ref{hmexp6}) and (\ref{hmexp7}) in $x<0$ as a function of $x$ for $\alpha=1.5$ (blue line) and the asymptotic estimate (\ref{hmexp11}) (red line). } \label{fig-limits3} \end{figure} \section{Extending the Results of McPhedran, Dawes and Scott} The extended set of results comes from use of the interrelations between Bessel functions: \begin{equation} Y_b(x)=\frac{J_b(x) \cos (b \pi)-J_{-b}(x)}{\sin (b \pi)},~~J_b(x)=\frac{Y_{-b}(x)-Y_{b}(x) \cos (b \pi)}{\sin (b \pi)}, \label{snr1} \end{equation} together with the symmetry relationships (\ref{mds5}) and (\ref{mds8}).
The first of these is for: \begin{equation} {\cal I}_{JJm}(b,K,k,\eta)=\int_0^\infty x \exp(-\eta x^2) J_b(K x) J_{-b}(k x) d x, \label{snr2} \end{equation} for which \begin{equation} {\cal I}_{JJm}(b,K,k,\eta)=\frac{ \sin (b \pi) \exp[-(K^2+k^2)/4 \eta]}{2\pi \eta} \left[{\cal H}(b,k,K,\eta)-b h_{-1,b}\left( \frac{-k K}{2\eta}\right)\right]. \label{snr3} \end{equation} The second comes from replacing $b$ by $-b$ in (\ref{snr2}) and (\ref{snr3}): \begin{equation} {\cal I}_{JmJ}(b,K,k,\eta)=\int_0^\infty x \exp(-\eta x^2) J_{-b}(K x) J_{b}(k x) d x, \label{snr4} \end{equation} and \begin{equation} {\cal I}_{JmJ}(b,K,k,\eta)=\frac{ \sin (b \pi) \exp[-(K^2+k^2)/4 \eta]}{2\pi \eta} \left[{\cal H}(b,K,k,\eta)-b h_{-1,b}\left( \frac{-k K}{2\eta}\right)\right]. \label{snr5} \end{equation} Hence, \begin{eqnarray} {\cal I}_{JJm}(b,K,k,\eta)+{\cal I}_{JmJ}(b,K,k,\eta)&=&\frac{ \sin (b \pi) \exp[-(K^2+k^2)/4 \eta]}{2\pi \eta} \left[{\cal H}(b,K,k,\eta) \right.\nonumber \\ & & \left. +{\cal H}(b,k,K,\eta)-2 b h_{-1,b}\left( \frac{-k K}{2\eta}\right)\right], \label{snr6} \end{eqnarray} and \begin{equation} {\cal I}_{JJm}(b,K,k,\eta)-{\cal I}_{JmJ}(b,K,k,\eta)=\frac{ \sin (b \pi) \exp[-(K^2+k^2)/4 \eta]}{2\pi \eta} \left[{\cal H}(b,k,K,\eta)-{\cal H}(b,K,k,\eta)\right]. \label{nr7} \end{equation} The third evaluation concerns: \begin{equation} {\cal I}_{YY}(b,K,k,\eta)=\int_0^\infty x \exp(-\eta x^2) Y_b(K x) Y_{b}(k x) d x. \label{snr8} \end{equation} The right-hand side is expanded using (\ref{snr1}) twice, giving the expressions \begin{eqnarray} {\cal I}_{YY}(b,K,k,\eta)&=&\frac{1}{\sin^2(b \pi)}\left[\cos^2(b \pi) {\cal I}_{JJ}(b,k,K,\eta)+{\cal I}_{JJ}(-b,k,K,\eta)\right. \nonumber \\ & &\left. -\cos (b \pi) \left({\cal I}_{JJm}(b,K,k,\eta)+{\cal I}_{JmJ}(b,K,k,\eta)\right)\right] , \label{snr9} \end{eqnarray} and so \begin{eqnarray} {\cal I}_{YY}(b,K,k,\eta)&=&\frac{ \exp[-(K^2+k^2)/4 \eta]}{2 \eta} \left[ \cot^2 (b \pi) I_b\left(\frac{K k}{2 \eta}\right) +I_{-b}\left(\frac{K k}{2 \eta}\right)/\sin^2 (b\pi) \right. \nonumber\\ & & \left. -\frac{\cot (b \pi)}{\pi} \left( {\cal H}(b,K,k,\eta)+{\cal H}(b,k,K,\eta)-2 b h_{-1,b}\left( \frac{-k K}{2\eta}\right) \right)\right]. \label{snr10} \end{eqnarray} We can also expand the right-hand side of (\ref{snr8}) using (\ref{snr1}) only once. Solving, we find \begin{eqnarray} {\cal I}_{JmY}(b,K,k,\eta) &=& \int_0^\infty x \exp(-\eta x^2) J_{-b}(K x) Y_{b}(k x) d x \nonumber \\ &=& \cos( b\pi) {\cal I}_{JY}(b,K,k,\eta)-\sin (b \pi) {\cal I}_{YY}(b,K,k,\eta) . \label{snr11} \end{eqnarray} As well, \begin{eqnarray} {\cal I}_{JYm}(b,K,k,\eta) &=& \int_0^\infty x \exp(-\eta x^2) J_{b}(K x) Y_{-b}(k x) d x \nonumber \\ &=& \cos( b\pi) {\cal I}_{JY}(-b,K,k,\eta)+\sin (b \pi) {\cal I}_{YY}(-b,K,k,\eta). \label{snr12} \end{eqnarray} \section{Integrals over Spherical Bessel Functions} The results in the three preceding sections can be extended to spherical Bessel functions, using the substitutions: \begin{equation} j_n(z)=\sqrt{\frac{\pi}{2 z}} J_{n+1/2}(z)=(-1)^n \sqrt{\frac{\pi}{2 z}} Y_{-n-1/2}(z), \label{snr13} \end{equation} and \begin{equation} y_n(z)=\sqrt{\frac{\pi}{2 z}} Y_{n+1/2}(z)=(-1)^{n+1} \sqrt{\frac{\pi}{2 z}} J_{-n-1/2}(z). \label{snr14} \end{equation} The first integral we consider is \begin{eqnarray} {\cal I}_{j j}(n,K,k,\eta) &=& \int_0^\infty x^2 \exp(-\eta x^2) j_{n}(K x) j_{n}(k x) d x \nonumber \\ &=& \frac{\pi}{2 \sqrt{K k} }\frac{ \exp[-(K^2+k^2)/4 \eta]}{2 \eta} I_{n+1/2}\left(\frac{K k}{2 \eta}\right).
\label{snr15} \end{eqnarray} Using the argument from Section 3 expressed in equations (\ref{nr17}-\ref{nr19}), we have \begin{equation} \lim_{\eta\rightarrow 0} \left\{ \frac{1}{2 \eta}\exp\left[\frac{-(K^2+k^2)}{4\eta}\right] I_b\left( \frac{K k}{2\eta}\right) \right\} =\frac{1}{\sqrt{K k}} \delta (K-k). \label{snr16} \end{equation} Hence, \begin{equation} \lim_{\eta\rightarrow 0} {\cal I}_{j j}(n,K,k,\eta)=\frac{\pi}{2 K k} \delta (K-k). \label{snr17} \end{equation} The second integral to be considered is (\ref{mds10}), which, for $b=n+1/2$ (so that $\cot(b\pi)=0$) gives \begin{eqnarray} {\cal I}_{j y}(n,K,k,\eta) &=& \int_0^\infty x^2 \exp(-\eta x^2) j_{n}(K x) y_{n}(k x) d x \nonumber \\ &=& \frac{\pi}{2\sqrt{K k}}\left\{ \frac{\exp [-(K^2+k^2)/(4 \eta)]}{2\pi \eta} \left[ -{\cal H}(n+1/2,k,K,\eta) +(n+1/2) h_{-1,n+1/2}\left(\frac{-K k}{2\eta}\right) \right]\right\}. \nonumber\\ && \label{snr18} \end{eqnarray} Now, from (\ref{nr24}), \begin{equation} \lim_{\eta\rightarrow 0} \frac{\pi}{2\sqrt{K k}}\left\{ \frac{\exp [-(K^2+k^2)/(4 \eta)]}{2\pi \eta} \left[ -{\cal H}(n+1/2,k,K,\eta)\right]\right\} =\frac{ \left(\frac{K}{k}\right)^{n+1/2}}{\sqrt{K k}\left(k^2-K^2\right)}. \label{snr18a} \end{equation} Also, from (\ref{mds7bis}), \begin{equation} h_{-1,n+1/2}\left(\frac{-K k}{2\eta}\right)=\frac{-\exp\left(\frac{K k}{2\eta}\right)\sqrt{\pi}}{(n+1/2) \sin(\pi(n+1/2))} \sum_{l=0}^\infty\frac{\Gamma(1/2+l)}{\Gamma(l-n+1/2)\Gamma(l+n+3/2)} \left(\frac{-K k}{\eta}\right)^l. \label{snr19} \end{equation} Hence, using the asymptotic estimate (\ref{hmexp11}) with $\alpha=2 n^2+2 n+3/4$, the contribution of this term in (\ref{snr18}) goes as \begin{eqnarray} &&\frac{\pi}{2\sqrt{K k}}\left\{ \frac{\exp [-(K^2+k^2)/(4 \eta)]}{2\pi \eta} (n+1/2) h_{-1,n+1/2}\left(\frac{-K k}{2\eta}\right) \right\}\approx \nonumber \\ && (-1)^{n+1} \sqrt{\frac{\pi}{K k}} \frac{\exp [-(K-k)^2/(4 \eta)]}{2 \eta} \frac{ \exp\left[-\alpha (1+\log (K k/(\alpha\eta)))\right]}{\sqrt{2\alpha \log(K k/(\alpha \eta))}}. \label{snr20} \end{eqnarray} The contribution from this term then goes to zero as $\eta\rightarrow 0$. Hence, the entire contribution to the limit of ${\cal I}_{j y}(n,K,k,\eta)$ comes from the lower limit of integration: \begin{equation} \lim_{\eta\rightarrow 0} {\cal I}_{j y}(n,K,k,\eta)=\frac{ \left(\frac{K}{k}\right)^{n+1/2}}{\sqrt{K k}\left(k^2-K^2\right)}=\frac{K^{n}}{k^{n+1}\left(k^2-K^2\right)}. \label{snr21} \end{equation} The third integral we consider is \begin{equation} {\cal I}_{yy}(n,K,k,\eta) = \int_0^\infty x^2 \exp(-\eta x^2) y_{n}(K x) y_{n}(k x) d x. \label{snr22} \end{equation} Using equation (\ref{snr14}), this gives \begin{equation} {\cal I}_{yy}(n,K,k,\eta) = \int_0^\infty x^2 \exp(-\eta x^2) j_{-n}(K x) j_{-n}(k x) d x=\frac{\pi}{2 \sqrt{K k} }\frac{ \exp[-(K^2+k^2)/4 \eta]} {2 \eta} I_{-n-1/2}\left(\frac{K k}{2 \eta}\right). \label{snr23} \end{equation} Note however that the result (\ref{snr23}) as it stands is only valid for $n=0$, as the integrand in (\ref{snr22}) diverges at the lower limit in non-integrable fashion for $n\ge 1$. The equation (\ref{snr16}) also holds if $b$ is replaced by $-b$ (the leading term in the equations (\ref{nr17}-\ref{nr19}) being unaffected by this change). Hence, for $n=0$, \begin{equation} \lim_{\eta\rightarrow 0} {\cal I}_{y y}(0,K,k,\eta)=\frac{\pi}{2 K k} \delta (K-k). \label{snr24} \end{equation} For $n\neq 0$, we have to deal with the lower limit of the integral appropriately (see below). Other integral evaluations also follow from equations (\ref{snr13},\ref{snr14}).
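The approach to the limit (\ref{snr21}) can also be seen directly. The sketch below (our own illustration, for real $0<K<k$ where the Watson term dominates) compares the Gaussian-damped integral at decreasing $\eta$ with the limiting value:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn, spherical_yn

# Numerical illustration of the eta -> 0 limit (snr21) for real 0 < K < k.
n, K, k = 1, 1.3, 2.7
limit_value = K ** n / (k ** (n + 1) * (k ** 2 - K ** 2))

def I_jy(eta):
    f = lambda x: (x * x * np.exp(-eta * x * x)
                   * spherical_jn(n, K * x) * spherical_yn(n, k * x))
    val, _ = quad(f, 1e-10, 25.0 / np.sqrt(eta), limit=4000)
    return val

for eta in (0.05, 0.005):
    print(eta, I_jy(eta), limit_value)
\end{verbatim}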
The first of these further evaluations comes from $ {\cal I}_{j j}(n,K,k,\eta)$, and so is symmetric under interchange of $K$ and $k$: with \begin{equation} {\cal I}_{jym}(n,K,k,\eta) = \int_0^\infty x^2 \exp(-\eta x^2) j_{n}(K x) \sqrt{\frac{\pi}{2 x}} Y_{-n-1/2}(k x) d x, \label{snr25} \end{equation} then \begin{equation} {\cal I}_{jym}(n,K,k,\eta) ={\cal I}_{jym}(n,k,K,\eta) =(-1)^n {\cal I}_{j j}(n,K,k,\eta). \label{snr26} \end{equation} This integral converges for all $n>0$. A similar identity exists for the integral ${\cal I}_{jym}(n,K,k,\eta)$, but is not of interest since the integral diverges at its lower limit, even for $n=0$. Other identities start from $ {\cal I}_{j y}(n,K,k,\eta)$, and so will not be symmetric under interchange of $K$ and $k$. These identify ${\cal I}_{jym}(n,K,k,\eta)$ with $(-1)^n {\cal I}_{ym y}(n,K,k,\eta)$ and $(-1)^{n+1} {\cal I}_{j jm}(n,K,k,\eta)$. \subsection{Numerical examples} We now give some examples of spherical Bessel product integrals with Gaussian factors. In each case the analytic formulae have been compared with results obtained by numerical integration in Mathematica, with agreement to all digits quoted in Table~\ref{table1}. A strongly-localising Gaussian has been used in the examples given, so as to facilitate comparisons with other techniques if desired. Of course, more stringent tests could be achieved by diminishing the value of $\eta$; this would necessitate integrating to larger values of $x$ to achieve a satisfactory "killing" of the oscillating integrand. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline $n$& $K$& $k$&$\eta$ & ${\cal I}$ &Value\\ \hline 2 & 1.37 & 2.96 &3.58 & ${\cal I}_{jj}$ & 0.000680896 \\ 2 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{jj}$ & 0.000741033 + 0.00100379 $i$ \\ 3 & 1.37 & 2.96 &3.58 & ${\cal I}_{jj}$ & 0.000054813 \\ 3 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{jj}$ & -0.0000260529 + 0.000120958 $i$ \\ \hline 0 & 1.37 & 2.96 &3.58 & ${\cal I}_{yy}$ & 0.0639986 \\ 0 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{yy}$ & 0.00806694 - 0.0549797 $i$ \\ \hline 0& 1.37 & 2.96 &3.58 & ${\cal I}_{jy}$ & -0.00941848 \\ 0 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{jy}$ & -0.00948972 + 0.00346762 $i$ \\ 1 & 1.37 & 2.96 &3.58 & ${\cal I}_{jy}$ & -0.00851273 \\ 1 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{jy}$ & -0.00586463 + 0.00505498 $i$ \\ 3 & 1.37 & 2.96 &3.58 & ${\cal I}_{jy}$ & -0.000878441 \\ 3 & 1.37+0.457 $i$ & 2.96+1.479 $i$ &3.58 & ${\cal I}_{jy}$ &-0.000336487 + 0.000656101 $i$ \\ \hline \end{tabular} \caption{Numerical examples of spherical Bessel product integrals, evaluated both by the analytic formulae of this section and by numerical integration} \label{table1} \end{table} \begin{figure}[tbh] \includegraphics[width=6cm]{pltng.pdf}~~\includegraphics[width=6cm]{pltwg.pdf} \caption{The effect of a Gaussian "killing" function on ${\cal I}_{jy}$: $n=1$, $k=1.37$, $K=2.96 + 0.457 i$. At left: without the Gaussian factor, the integrand diverges strongly; at right, with the Gaussian factor with $\eta=0.01$ the integrand converges to zero for large $x$.} \label{figkill} \end{figure} We now give an example of the effectiveness of the Gaussian "killing" function technique: see Fig. \ref{figkill}. Even with a Gaussian with $\eta$ as small as 0.01, the divergent integrand is replaced by one which can be integrated accurately. The numerical integration of the Gaussian form with $\eta =0.01$ gives $0.0164787 - 0.0138487 i$, for integration with upper limit 80 or beyond.
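The same computation is easily reproduced outside Mathematica. The following Python sketch (our own) evaluates the damped integrand of Fig.~\ref{figkill} using explicit $n=1$ closed forms, which remain valid for complex arguments; its output can be compared with the values quoted here:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Explicit order-1 spherical Bessel functions, valid for complex argument.
def j1(z):
    return np.sin(z) / z**2 - np.cos(z) / z

def y1(z):
    return -np.cos(z) / z**2 - np.sin(z) / z

# Parameters of the figure: without the Gaussian the integrand grows like
# exp(0.457 x); with eta = 0.01 it is "killed softly" and can be integrated.
k, K, eta = 1.37, 2.96 + 0.457j, 0.01

def integrand(x, part):
    val = x * x * np.exp(-eta * x * x) * j1(K * x) * y1(k * x)
    return val.real if part == "re" else val.imag

re, _ = quad(integrand, 1e-8, 100.0, args=("re",), limit=1000)
im, _ = quad(integrand, 1e-8, 100.0, args=("im",), limit=1000)
print(complex(re, im))
\end{verbatim}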
The analytic value for the integral (\ref{watsoninteg}) with the upper limit set to infinity is $0.0163332 - 0.0135188 i$. For $\eta=0.005$, the numerical integral gives $0.0164062 - 0.0136812 i$, slightly closer to the exact answer, while for $\eta=0.001$, the numerical integration in Mathematica fails. \subsection{Integrals from $R>0$} In the modes associated with the scattering by spheres of a given radius, $R$ say, integrals over all space involve integrands differing in $r<R$ and $r>R$, with the former being in general non-singular at the origin. The integrals over $r<R$ can be evaluated using the integral already referred to from Watson, p.134: \begin{equation} \int^{z} z {\cal C}_\mu (k z) {\cal D}_\mu (l z) d z= \frac{z\left\{k {\cal C}_{\mu+1} (k z) {\cal D}_\mu (l z) -l {\cal C}_\mu (k z) {\cal D}_{\mu+1} (l z)\right\} }{k^2-l^2}, \label{watsoninteg} \end{equation} where ${\cal C}$ and ${\cal D}$ are cylinder functions. The integrals over $r>R$ can be dealt with using the same integral: \begin{equation} \int_R^\infty r^2 {\cal C}_n(K r){\cal D}_n(k r) d r=\lim_{\delta \rightarrow 0} [ \int_\delta^\infty r^2 {\cal C}_n(K r){\cal D}_n(k r) d r - \int_\delta^R r^2 {\cal C}_n(K r){\cal D}_n(k r) d r]. \label{snr27} \end{equation} Both the integrals on the right-hand side are well behaved or have the same singularity at $\delta=0$, which cancels between them. The problem arising at the upper limit of the first integral has been dealt with already using the limit of Gaussians, so that we know how to deal with both contributions to the integral appropriately. We thus arrive at the key results of this paper in relation to spherical Bessel integrals: \begin{equation} \int_{R}^\infty x^2 j_n(K x) j_n (k x)d x=\frac{\pi}{2 K k} \delta (K-k)- \frac{R^2\left\{K {j}_{n+1} (K R) {j}_n (k R) -k {j}_n (K R) {j}_{n+1} (k R) \right\} }{K^2-k^2}, \label{key1} \end{equation} \begin{equation} \int_{R}^\infty x^2 j_n(K x) y_n (k x)d x=- \frac{R^2\left\{K {j}_{n+1} (K R) {y}_n (k R) -k {j}_n (K R) {y}_{n+1} (k R)\right\} }{K^2-k^2}, \label{key2} \end{equation} and \begin{equation} \int_{R}^\infty x^2 y_n(K x) y_n (k x)d x=\frac{\pi}{2 K k} \delta (K-k)- \frac{R^2\left\{K {y}_{n+1} (K R) {y}_n (k R) -k {y}_n (K R) {y}_{n+1} (k R)\right\} }{K^2-k^2}. \label{key2a} \end{equation} \section{Spherical Bessel function product integrals for resonant state calculations} \subsection{Bessel function integrals for magnetic source E-fields} \label{BessInt} \subsubsection{Regular spherical Bessel function integrals} The integrals we want to calculate are of the type, \begin{equation} \int d^{3}r\boldsymbol{M}_{n,m}\left( K\boldsymbol{r}\right) \cdot \boldsymbol{M}_{n,m}\left( k\boldsymbol{r}\right) =\int x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) dx\;. \label{Mjint} \end{equation} One way to write this Bessel function integral with an infinite upper limit is, \begin{equation} \int_{R}^{\infty}x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) dx=\frac{\pi}{2Kk}\delta\left( K-k\right) - R^{2}\frac{ Kj_{n+1}\left( KR\right) j_{n}\left( kR\right) -kj_{n}\left( KR\right) j_{n+1} \left( kR\right) }{K^{2}-k^{2}}\;. \label{Ross1} \end{equation} Alternative analytic expressions for eq.(\ref{Mjint}) entirely in terms of Bessel functions of the same order $n$ as the integrand can be obtained by invoking Bessel function derivatives.
For finite intervals these expressions are, \begin{subequations} \begin{align} \int_{0}^{R}x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) dx &=\frac{k \psi_{n}^{\prime}\left( kR\right) \psi_{n}\left( KR\right) -K \psi_{n}^{\prime}\left( KR\right) \psi_{n}\left( kR\right) }{Kk\left( K^{2}-k^{2} \right) } \label{0toR} \\ \begin{split} \int_{R}^{L}x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) dx & = \frac{k \psi_{n}^{\prime}\left( kL\right) \psi_{n}\left( KL\right) -K \psi_{n}^{\prime}\left( KL\right)\psi_{n}\left( kL\right) }{Kk\left( K^{2}-k^{2} \right) }\\ & \qquad -\frac{k \psi_{n}^{\prime}\left( kR\right)\psi_{n}\left( KR\right) -K \psi_{n}^{\prime}\left( KR\right)\psi_{n}\left( kR\right) }{Kk\left( K^{2}-k^{2} \right) } \;,\label{RtoL} \end{split} \end{align} \end{subequations} while for the infinite interval one obtains, \begin{equation} \int_{R}^{\infty}x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) dx=\frac{\pi}{2Kk}\delta\left( K-k\right) -\frac{k \psi_{n}^{\prime}\left( kR\right)\psi_{n}\left( KR\right) -K \psi_{n}^{\prime}\left( KR\right)\psi_{n}\left( kR\right) }{Kk\left( K^{2}-k^{2} \right) }\;. \label{Br1} \end{equation}\label{Brform} Bessel function recurrence relations readily show that eqs.(\ref{Ross1}) and (\ref{Br1}) are just alternate expressions of the same quantity. Special attention needs to be paid to the case $K=k$, where eqs.(\ref{Ross1}) and (\ref{Brform}) encounter problems. The following alternative expression may be established: \begin{align} \begin{split} & \underset{K\rightarrow k}{\lim} \frac{k \psi_{n}^{\prime}\left( kR\right) \psi_{n}\left( KR\right) -K \psi_{n}^{\prime}\left( KR\right) \psi_{n}\left( kR\right) }{Kk\left( K^{2}-k^{2} \right)}\\ & =\frac{R}{2}\frac{\left[ \psi_{n}^{\prime}\left( kR\right) \right]^{2} +\psi_{n}^{2}\left( kR\right) -n\left( n+1\right) j_{n}^{2} \left( kR\right) -j_{n}\left( kR\right) \psi_{n}^{\prime}\left( kR\right) } {k^{2}}\;,\label{Ktoklim} \end{split} \end{align} where $\psi_{n}\left( x\right) \equiv xj_{n}\left( x\right) $ are the Riccati-Bessel functions. This result gives the following definite integrals, \begin{subequations} \begin{align} \int_{0}^{R}x^{2}j_{n}^{2}\left( kx\right) dx & =\frac{R}{2}\frac{\left[ \psi_{n}^{\prime}\left( kR\right) \right] ^{2}+\psi_{n}^{2}\left( kR\right) -n\left( n+1\right) j_{n}^{2}\left( kR\right) -j_{n} \left( kR\right) \psi_{n}^{\prime}\left( kR\right) }{k^{2}}\label{Min}\\ \begin{split} \int_{R}^{L}x^{2}j_{n}^{2}\left( kx\right) dx & =\left\{ \frac{L}{2} \frac{\left[ \psi_{n}^{\prime}\left( kL\right) \right] ^{2} +\psi_{n}^{2} \left( kL\right) -n\left( n+1\right) j_{n}^{2}\left( kL\right) -j_{n}\left( kL\right) \psi_{n}^{\prime}\left( kL\right) }{k^{2}}\right. \\ & \left. \qquad -\frac{R}{2}\frac{\left[ \psi_{n}^{\prime}\left( kR\right) \right] ^{2}+\psi_{n}^{2}\left( kR\right) -n\left( n+1\right) j_{n}^{2} \left( kR\right) -j_{n}\left( kR\right) \psi_{n}^{\prime} \left(kR\right) } {k^{2}}\right\} \;.\label{Mext} \end{split} \end{align} \end{subequations} A comparison of eqs.(\ref{Br1}), (\ref{Ktoklim}) and (\ref{Mext}) above allows us to conclude that, \begin{align} \frac{\pi}{2Kk}\delta\left( K-k\right) = \underset{L\rightarrow\infty}{\lim} \frac{k \psi_{n}^{\prime}\left( kL\right) \psi_{n}\left( KL\right) -K \psi_{n}^{\prime}\left( KL\right) \psi_{n}\left( kL\right) }{Kk\left( K^{2}-k^{2} \right)} \;. \label{delta_rep_J} \end{align} The veracity of eq.(\ref{delta_rep_J}) may be verified by the method of Section 3.
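As a concrete check of (\ref{0toR}), direct quadrature can be compared with the Riccati-Bessel expression (a sketch of our own, assuming SciPy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

# Check of Eq. (0toR): psi_n(z) = z j_n(z), psi_n'(z) = j_n(z) + z j_n'(z).
n, K, k, R = 2, 1.7, 2.9, 3.0

def psi(z):
    return z * spherical_jn(n, z)

def dpsi(z):
    return spherical_jn(n, z) + z * spherical_jn(n, z, derivative=True)

closed = (k * dpsi(k * R) * psi(K * R) - K * dpsi(K * R) * psi(k * R)) \
         / (K * k * (K ** 2 - k ** 2))
num, _ = quad(lambda x: x * x * spherical_jn(n, K * x) * spherical_jn(n, k * x), 0.0, R)
print(closed, num)   # the two values should agree
\end{verbatim}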
An important special case of the above integrals is the orthogonality of the free-space wave functions, \begin{equation} \int_{0}^{\infty}dx\,x^{2}j_{n}\left( Kx\right) j_{n}\left( kx\right) =\frac{\pi}{2Kk}\delta\left( K-k\right) \;. \end{equation} \subsubsection{Spherical Neumann integrals} The spherical Neumann functions diverge as their argument goes to zero, but one can still evaluate integrals over domains excluding the origin. Notably, one has, \begin{align} \int_{R}^{\infty}x^{2}y_{n}\left( Kx\right) y_{n}\left( kx\right) dx=\frac{\pi}{2Kk}\delta\left( K-k\right) -R^{2}\left[\frac{ Ky_{n+1}\left( KR\right) y_{n}\left( kR\right) -ky_{n}\left( KR\right) y_{n+1}\left( kR\right) }{K^{2}-k^{2}}\right]\;, \label{Rossyint} \end{align} while the expression for the definite integral using Neumann function derivatives is, \begin{align} \begin{split} \int_{R}^{L}x^{2}y_{n}\left( Kx\right) y_{n}\left( kx\right) dx & =\frac{k\chi_{n}^{\prime}\left( kL\right) \chi_{n}\left( KL\right) -K \chi_{n}^{\prime}\left( KL\right) \chi_{n}\left( kL\right)}{K k \left( K^{2}-k^{2} \right)}\\ & \qquad-\frac{k \chi_{n}^{\prime}\left( kR\right) \chi_{n}\left( KR\right) -K \chi_{n}^{\prime}\left( KR\right) \chi_{n}\left( kR\right) }{K k \left( K^{2}-k^{2} \right)}\;. \end{split}\label{Bryfinint} \end{align} Arguing in analogy with eq.(\ref{delta_rep_J}) that, \begin{align} \frac{\pi}{2Kk}\delta\left( K-k\right) =\underset{L\rightarrow\infty}{\lim} \frac{k \chi_{n}^{\prime}\left( kL\right) \chi_{n}\left( KL\right) -K \chi_{n}^{\prime}\left( KL\right) \chi_{n}\left( kL\right) }{K k \left( K^{2}-k^{2} \right)}\;, \end{align} the $L\rightarrow\infty$ limit of eq.(\ref{Bryfinint}) then reads, \begin{align} \int_{R}^{\infty}x^{2}y_{n}\left( Kx\right) y_{n}\left( kx\right) dx=\frac{\pi}{2Kk}\delta\left( K-k\right) - \frac{k \chi_{n} \left(KR\right) \chi_{n}^{\prime}\left( kR\right) -K \chi_{n}\left( kR\right) \chi_{n}^{\prime}\left( KR\right)}{K k \left( K^{2}-k^{2} \right)}\;, \end{align} which one can verify is just another way of writing eq.(\ref{Rossyint}). One can again evaluate the $K\rightarrow k$ limit of the second term on the right hand side of eq.(\ref{Bryfinint}) with the expression, \begin{align} \begin{split} & \underset{K\rightarrow k}{\lim}\frac{ k\chi_{n}\left( KR\right) \chi_{n}^{\prime}\left( kR\right) -K \chi_{n}\left( kR\right) \chi_{n}^{\prime} \left( KR\right) }{K k \left( K^{2}-k^{2} \right)}\\ & \qquad=\frac{R}{2}\frac{\left[ \chi_{n}^{\prime}\left( kR\right) \right]^{2} +\chi_{n}^{2}\left( kR\right) -n\left( n+1\right) y_{n}^{2}\left( kR\right) -y_{n}\left( kR\right) \chi_{n}^{\prime} \left(kR\right) }{k^{2}}\;, \end{split} \end{align} where $\chi_{n}(z)=zy_{n}(z)$ are Riccati-Neumann functions. Finite integrals of the Neumann functions squared are then, \begin{align}\begin{split} \int_{R}^{L}x^{2}y_{n}^{2}\left( kx\right) dx & =\left\{ \frac{L}{2} \frac{\left[ \chi_{n}^{\prime}\left( kL\right)\right]^{2}+\chi_{n}^{2} \left( kL\right) -n\left( n+1\right) y_{n}^{2}\left( kL\right) -y_{n}\left( kL\right) \chi_{n}^{\prime}\left( kL\right) }{k^{2}}\right. \\ & \qquad\qquad \left. -\frac{R}{2}\frac{\left[ \chi_{n}^{\prime}\left( kR\right)\right]^{2} +\chi_{n}^{2}\left( kR\right) -n\left( n+1\right) y_{n}^{2}\left( kR\right) -y_{n}\left( kR\right) \chi_{n}^{\prime} \left(kR\right) }{k^{2}}\right\} \;. \end{split}\end{align} Another remark is that all the finite domain integrals hold even if $K$ and $k$ are complex valued.
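This closing remark can be illustrated numerically; the sketch below (our own) checks the finite-interval Neumann formula (\ref{Bryfinint}) for complex $K$ and $k$, using explicit $n=1$ closed forms since SciPy's spherical Bessel routines accept only real arguments:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Finite-interval Neumann check for complex K and k, with explicit n = 1 forms.
def y1(z):
    return -np.cos(z) / z**2 - np.sin(z) / z

def chi1(z):                 # Riccati-Neumann function chi_1(z) = z y_1(z)
    return z * y1(z)

def dchi1(z):                # chi_1'(z) = y_1 + z y_1', with y_1' = y_0 - 2 y_1/z
    y0 = -np.cos(z) / z
    return y1(z) + z * y0 - 2 * y1(z)

K, k, R, L = 1.37 + 0.457j, 2.96 + 1.479j, 0.5, 6.0

def B(z):                    # antiderivative combination entering (Bryfinint)
    return (k * dchi1(k * z) * chi1(K * z) - K * dchi1(K * z) * chi1(k * z)) \
           / (K * k * (K**2 - k**2))

closed = B(L) - B(R)
f = lambda x: x * x * y1(K * x) * y1(k * x)
num_re, _ = quad(lambda x: f(x).real, R, L, limit=400)
num_im, _ = quad(lambda x: f(x).imag, R, L, limit=400)
print(closed, complex(num_re, num_im))   # the two should agree
\end{verbatim}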
\subsubsection{Mixed Bessel-Neumann integrals} For the mixed Bessel-Neumann integrals one can extend the lower bound to zero and we have, \begin{align}\begin{split} \int_{0}^{L}x^{2}j_{n}\left( Kx\right) y_{n}\left( kx\right) dx & =\frac{k \psi_{n}\left( KL\right) \chi_{n}^{\prime}\left( kL\right) -K\psi_{n}^{\prime}\left( KL\right) \chi_{n}\left( kL\right)}{K k \left( K^{2}-k^{2} \right)} -\frac{K^{n}}{k^{n+1}\left( K^{2}-k^{2}\right) } \;, \end{split}\end{align} while the infinite-interval integrals are, \begin{align}\begin{split} \int_{R}^{\infty} x^{2} j_{n}\left( Kx\right) y_{n}\left( kx\right) dx & =-\frac{k\psi_{n}\left( KR\right) \chi_{n}^{\prime}\left( kR\right)-K\psi_{n}^{\prime} \left( KR\right) \chi_{n}\left( kR\right)}{K k \left( K^{2}-k^{2} \right)} \;, \\ \int_{0}^{\infty} x^{2} j_{n}\left( Kx\right) y_{n}\left( kx\right) dx &= -\frac{K^{n}}{k^{n+1}\left( K^{2}-k^{2}\right) }\;, \end{split}\end{align} where the lack of a delta-function contribution is important. We can also obtain the results in the limit $K\rightarrow k$, \begin{align}\begin{split} &\int_{0}^{L}x^{2}j_{n}\left( kx\right) y_{n}\left( kx\right) dx= \\ & \frac{\psi_{n}^{\prime}\left( kL\right)\chi_{n}^{\prime}\left( kL\right) +\psi_{n}\left( kL\right)\chi_{n}\left( kL\right) -n\left( n+1\right) y_{n}\left( kL\right)j_{n}\left( kL\right) - \psi_{n}^{\prime}\left( kL\right) y_{n}\left( kL\right) }{2k^{2}/L} - \frac{n+1}{2k^{3}} \;, \end{split}\end{align} with a particularly simple result for the infinite-interval integral, \begin{align}\begin{split} &\int_{0}^{\infty}x^{2}j_{n}\left( kx\right) y_{n}\left( kx\right) dx= - \frac{n+1}{2k^{3}} \;. \end{split}\end{align} \subsubsection{Spherical Hankel function integrals} Recalling that the spherical Hankel functions are defined, \begin{equation} h_{n}\left( x\right) \equiv j_{n}\left( x\right) +iy_{n}\left( x\right) \;, \end{equation} this means, \begin{align} h_{n}\left( Kx\right) h_{n}\left( kx\right) =j_{n}\left( Kx\right)j_{n}\left( kx\right) -y_{n}\left( Kx\right) y_{n}\left( kx\right) + i \left\{ j_{n}\left( Kx\right)y_{n}\left( kx\right) + y_{n}\left( Kx\right)j_{n}\left( kx\right) \right\} \;. \end{align} Consequently, regardless of our interpretation of the delta functions, $\delta\left( K-k\right)$, with complex values of $K$ and $k$, they cancel for the Hankel function integrals, leaving us with, \begin{subequations} \begin{align} \int_{R}^{\infty}x^{2}h_{n}\left( Kx\right) h_{n}\left( kx\right) dx & =-\frac{k\xi_{n}\left( KR\right) \xi_{n}^{\prime}\left( kR\right) -K\xi_{n}\left( kR\right) \xi_{n}^{\prime}\left( KR\right) }{K k \left( K^{2}-k^{2} \right)}\;\\ \int_{R}^{\infty}x^{2}h_{n}^{2}\left( kx\right) dx & =-\frac{R}{2} \frac{\left[ \xi_{n}^{\prime}\left( kR\right) \right] ^{2} +\xi_{n}^{2}\left( kR\right) -n\left( n+1\right) h_{n}^{2}\left( kR\right) -h_{n}\left( kR\right) \xi_{n}^{\prime}\left( kR\right) }{k^{2}} \;, \end{align} \end{subequations} which are precisely the results needed for normalization and orthogonalization.
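For $\mathrm{Im}\,K>0$ and $\mathrm{Im}\,k>0$ the outgoing waves decay at infinity, so the first of these results can be checked by direct quadrature without any regulator (our own sketch, using explicit $n=1$ closed forms):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hankel product check: h_1(z) = -e^{iz}(z + i)/z^2, xi_1(z) = z h_1(z).
def h1(z):
    return -np.exp(1j * z) * (z + 1j) / z**2

def xi1(z):
    return z * h1(z)

def dxi1(z):                 # xi_1'(z) = h_1 + z h_1', with h_1' = h_0 - 2 h_1/z
    h0 = -1j * np.exp(1j * z) / z
    return h1(z) + z * h0 - 2 * h1(z)

K, k, R = 1.5 + 0.4j, 2.2 + 0.6j, 1.0

closed = -(k * xi1(K * R) * dxi1(k * R) - K * xi1(k * R) * dxi1(K * R)) \
         / (K * k * (K**2 - k**2))

f = lambda x: x * x * h1(K * x) * h1(k * x)
num_re, _ = quad(lambda x: f(x).real, R, 60.0, limit=400)
num_im, _ = quad(lambda x: f(x).imag, R, 60.0, limit=400)
print(closed, complex(num_re, num_im))   # the two should agree
\end{verbatim}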
\subsection{Bessel function integrals for electric type fields} \subsubsection{Bessel and Neumann product integrals} For electric type fields, the integrals that one needs to evaluate are (for regular fields), \begin{align}\begin{split} & \int d^{3}r \boldsymbol{N}_{n,m}\left( K\boldsymbol{r}\right) \cdot \boldsymbol{N}_{n,m}\left( k \boldsymbol{r}\right) \\ & \qquad =\int \frac{ n\left( n+1\right) j_{n}\left( Kx\right) j_{n}\left( kx\right) +\psi_{n}^{\prime}\left( Kx\right) \psi_{n}^{\prime} \left( kx\right) }{Kk} dx\;. \end{split}\end{align} One must also carry out the analogous integrals for outgoing partial waves, with the Bessel functions $j_n$ replaced by outgoing Hankel functions $h_n$, and $\psi_{n}(z)=zj_n(z)$ replaced by $\xi_{n}(z)=zh_n(z)$. For finite integrals with $K\neq k$ one finds, \begin{align}\begin{split} & \int_{R}^{L}\frac{ n\left( n+1\right) j_{n} \left(Kx\right) j_{n}\left( kx\right) +\psi_{n}^{\prime}\left( Kx\right) \psi_{n}^{\prime}\left( kx\right)}{Kk} dx\\ & \qquad \qquad =\frac{K\psi_{n}\left( KL\right) \psi_{n}^{\prime}\left( kL\right) -k\psi_{n}\left( kL\right) \psi_{n}^{\prime}\left( KL\right)} {Kk \left( K^{2}-k^{2} \right)}\\ & \qquad \qquad\qquad -\frac{K \psi_{n}\left( KR\right) \psi_{n}^{\prime} \left(kR\right) -k \psi_{n}\left( kR\right) \psi_{n}^{\prime}\left( KR\right)} { Kk\left( K^{2}-k^{2} \right) } \;, \label{Njfin} \end{split}\end{align} \begin{align}\begin{split} & \int_{0}^{R} \frac{ n\left( n+1\right) j_{n}\left( Kx\right) j_{n}\left( kx\right) +\psi_{n}^{\prime}\left( Kx\right) \psi_{n}^{\prime}\left( kx\right) }{Kk} dx \\ & \qquad \qquad =\frac{K \psi_{n}\left( KR\right) \psi_{n}^{\prime} \left(kR\right) -k \psi_{n}\left( kR\right) \psi_{n}^{\prime}\left( KR\right)} {K k \left( K^{2}-k^{2}\right) }\;. \end{split}\end{align} A finite integral for $K=k$ is, \begin{align}\begin{split} & \int_{R}^{L} \frac{ n\left( n+1\right) j_{n}^{2}\left( kx\right) +\left[\psi_{n}^{\prime}\left( kx\right)\right]^{2} }{k^{2}} dx\\ & \qquad \qquad=\frac{L}{2}\frac{\left[ \psi_{n}^{\prime}\left( kL\right)\right]^{2} +\psi_{n}^{2}\left( kL\right) -n\left( n+1\right) j_{n}^{2}\left(kL\right) +j_{n}\left( kL\right) \psi_{n}^{\prime}\left( kL\right)}{k^{2}}\\ &\qquad \qquad \qquad-\frac{R}{2}\frac{\left[ \psi_{n}^{\prime}\left( kR\right) \right]^{2}+\psi_{n}^{2}\left( kR\right) -n\left( n+1\right) j_{n}^{2} \left(kR\right) +j_{n}\left( kR\right) \psi_{n}^{\prime}\left( kR\right)}{k^{2}} \;. \end{split}\end{align} In the $L\rightarrow\infty$ limit, the oscillatory $L$-dependent term on the right-hand side of eq.(\ref{Njfin}) once again yields a delta function through, \begin{align} \frac{\pi}{2Kk}\delta\left( K-k\right) = \underset{L\rightarrow\infty}{\lim} \frac{K\psi_{n}\left( KL\right) \psi_{n}^{\prime}\left( kL\right) -k\psi_{n}\left( kL\right) \psi_{n}^{\prime}\left( KL\right)} {Kk \left( K^{2}-k^{2} \right)} \;, \end{align} and we arrive at the required infinite-interval integral, \begin{align}\begin{split} & \int_{R}^{\infty} \frac{ n\left( n+1\right) j_{n}\left( Kx\right) j_{n}\left( kx\right) +\psi_{n}^{\prime}\left( Kx\right) \psi_{n}^{\prime}\left( kx\right)}{Kk} dx \\ & =\frac{\pi}{2Kk}\delta\left( K-k\right) -\frac{K \psi_{n}\left( KR\right) \psi_{n}^{\prime}\left( kR\right) -k \psi_{n} \left(kR\right) \psi_{n}^{\prime}\left( KR\right)}{Kk\left( K^{2}-k^{2} \right) }\;.
\end{split}\end{align} The same procedure for $y_{n}$ product integrals yields, \begin{align}\begin{split} & \int_{R}^{\infty} \frac{ n\left( n+1\right) y_{n}\left( Kx\right) y_{n}\left( kx\right) +\chi_{n}^{\prime}\left( Kx\right) \chi_{n}^{\prime}\left( kx\right) }{Kk} dx\\ & =\frac{\pi}{2Kk}\delta\left( K-k\right) -\frac{K \chi_{n}\left( KR\right) \chi_{n}^{\prime}\left( kR\right) -k \chi_{n} \left(kR\right) \chi_{n}^{\prime}\left( KR\right)}{Kk\left( K^{2}-k^{2}\right) }\;. \end{split}\end{align} \subsubsection{Hankel product integrals} The Hankel function integrals that we need for field normalization are, \begin{subequations} \begin{align} \begin{split} & \int_{R}^{\infty}\frac{ n\left( n+1\right) h_{n} \left(Kx\right) h_{n}\left( kx\right) + \xi_{n}^{\prime}\left( Kx\right) \xi_{n}^{\prime}\left( kx\right)}{Kk} dx \\ & \qquad \qquad =-\frac{K \xi_{n}\left( KR\right) \xi_{n}^{\prime}\left( kR\right)-k \xi_{n}\left( kR\right) \xi_{n}^{\prime}\left( KR\right)}{Kk\left( K^{2}-k^{2} \right)}\;, \end{split}\\ \begin{split} &\int_{R}^{\infty}\frac{ n\left( n+1\right) h_{n}^{2}\left( kx\right) + \left[\xi_{n}^{\prime}\left( kx\right)\right]^{2} }{k^{2}} dx \\ & \qquad \qquad =-\frac{R}{2} \frac{\left[ \xi_{n}^{\prime}\left( kR\right)\right]^{2} +\xi_{n}^{2}\left( kR\right) -n\left( n+1\right) h_{n}^{2}\left( kR\right) + h_{n}\left( kR\right) \xi_{n}^{\prime}\left(kR\right)}{k^{2}} \;. \end{split}\end{align}\label{hankHinf} \end{subequations} \section{Conclusions} The analytic expressions we have given involving products of Bessel functions combined with a Gaussian term are easily verified numerically, and this has been done for the results given here. The results obtained analytically for integrals in the limit where the Gaussian factor is removed may also be tested numerically, as was exemplified in Section 5.1. Another test, which will be reported on in a future publication, is provided by residue calculus of field scattering amplitudes, which provides a second route to mode normalisation factors. In fact, these two analytic approaches agree completely. \section{Acknowledgements} Research conducted within the context of the International Associated Laboratory for Photonics between France and Australia. This work has been carried out thanks to the support of the A*MIDEX project (no. ANR-11-IDEX-0001-02) funded by the Investissements d'Avenir French Government program, managed by the French National Research Agency (ANR). The authors would like to thank Remi Colom, Nicolas Bonod and Thomas Durt for helpful discussions. \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.72168, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfTfxK6EuNBRhKLJF
\section{Introduction} \label{sec:ccp} \setcounter{equation}{0} The cosmological constant (CC) problem is one of the most puzzling issues in modern cosmology. Since Einstein first introduced it to allow for a static universe, the CC has been removed and reintroduced in the Einstein field equations, eventually as the simplest way to explain the supernovae data providing evidence for the present acceleration in the cosmological expansion. It is beyond the scope of this letter to review the history of the CC (see, {\em e.g.}~Refs.~\cite{Weinberg, Carroll, hep-th/0012253}). Here we just need to recall that there are actually two problems related to the CC: the coincidence problem, {\em i.e.}~why the CC is taking over now, and the huge discrepancy of about 120 orders of magnitude between theoretical predictions and the observed value when $\Lambda$ is considered as a purely vacuum effect. We will address this second issue in the framework of Ho$\check{\textrm{r}}$ava-Lifshitz theory with detailed balance. \par Vacuum effects are real, as shown, for instance, by experiments on the Casimir effect. The vacuum energy density and related $\Lambda_\textsc v$ should therefore give observable effects in cosmology just as they do in the laboratory. On this basis, one can slightly reformulate the CC problem as follows: taking for granted the presence of a zero-point energy, what effect is compensating for the very large $\Lambda_\textsc v$ it induces via the Einstein field equations to result in the small observed $\Lambda_\textsc{obs}$? One could easily argue that a large and negative ``bare'' CC, say $\Lambda_\textsc b$, can do this if $|\Lambda_\textsc b| \lesssim \Lambda_\textsc v$ and \begin{equation} \Lambda_\textsc b + \Lambda_\textsc v = \Lambda_\textsc{obs} \ . \label{occ} \end{equation} Of course, this simple idea is not new and can be applied to General Relativity (GR), as well as to other models~\footnote{For instance, see Ref.~\cite{Weinberg}, where the case of supersymmetry is also discussed.}, but in standard GR this choice is less natural for (at least) two well-known reasons: i) choosing $\Lambda_\textsc b$ negative and large is completely arbitrary. Since Minkowski is the most natural vacuum state of GR, $\Lambda_\textsc b$ should ``in principle'' vanish; ii) a questionable fine tuning is required for the two large quantities $\Lambda_\textsc b$ and $\Lambda_\textsc v$ to compensate and give $\Lambda_\textsc b + \Lambda_\textsc v \approx \Lambda_\textsc{obs}$. We will see that in Ho$\check{\textrm{r}}$ava's theory with detailed balance this mechanism is less artificial. This is because: i) its vacuum contains a large negative bare CC; ii) as we will show, the total CC, $\Lambda_\textsc b + \Lambda_\textsc v$, is related to the scale $\ell_\textsc{uv}$ at which Lorentz symmetry violating terms become relevant, and it is small when $\ell_\textsc{uv}$ is close to the Planck length~\footnote{Lorentz invariance is tested to quite a good level of accuracy. See Ref.~\cite{lor} for a comprehensive review of Lorentz symmetry violation and related tests.}. \par This letter is organized as follows. In Sec.~\ref{sec:hor}, we briefly review Ho$\check{\textrm{r}}$ava's theory and estimate the scale at which corrections to the Einstein-Hilbert action start affecting the dynamics. In Sec.~\ref{sec:cch}, we compute the vacuum contribution to the CC and relate it to the above-mentioned scale. Finally, our conclusions are presented in Sec.~\ref{sec:sum}.
\section{Ho$\check{\textrm{r}}$ava's theory and the GR regime} \label{sec:hor} \setcounter{equation}{0} Ho$\check{\textrm{r}}$ava's theory of gravity at a Lifshitz point $z$ is an attempt to provide an ultraviolet (UV) completion of GR at the price of breaking Lorentz invariance at a fundamental level, recovering the relativistic theory as an emergent feature at large scales. The original idea~\cite{HL, HL1, HL2} is to develop a theory of gravity with an anisotropic scaling between the space and time dimensions, \begin{equation} \vec x \rightarrow l \, \vec x \ ; \quad t \rightarrow l^z t \ , \end{equation} as was done by Lifshitz in his studies of scalar fields~\cite{Lif}. In the above, $z$ is called the dynamical critical exponent and will be fixed to $z=3$ in what follows. The resulting theory has an improved UV behaviour and is power-counting renormalizable. \par There are different versions of the theory, essentially depending on whether the detailed balance principle and/or the projectability condition hold. Detailed balance restricts the potential to the form provided in Eq.~\eqref{ha} below, while projectability is nothing but the requirement that the lapse function depends on time only, $N=N(t)$. Another crucial feature in Ho$\check{\textrm{r}}$ava's gravity is the assumption that the GR invariance under diffeomorphisms is replaced by the less restrictive group of \emph{foliated} diffeomorphisms, the difference being that the time coordinate transformation can only depend on the old time variable, $t \rightarrow \tilde t = \tilde t(t)$. More details about Ho$\check{\textrm{r}}$ava-Lifshitz gravity, the ongoing discussion of the possible conceptual flaws of the theory and the effects of Lorentz symmetry violation can be found in Refs.~\cite{svw1, svw2, Visser, nikolic, caihu, christos, lipang, blas}. In particular, several aspects of Ho$\check{\textrm{r}}$ava-Lifshitz cosmology were studied in Refs.~\cite{calcagni, lmp, kk, brand, muko1, saridakis, muko2, riotto, anzhong, muko3,roy}. \par The action for the Ho$\check{\textrm{r}}$ava\ theory satisfying the detailed balance principle is given by \begin{equation} \begin{split} S= \int \mathrm{d} t\, \mathrm{d}^3 {\bf x}\,\sqrt{g}\,N & \left[ \frac{2}{\kappa^2} \left( K_{ij}\,K^{ij}-\lambda\, K^2\right) -\frac{\kappa^2}{2\,w^4}\,C_{ij}\,C^{ij} +\frac{\kappa^2\,\mu}{2\,w^2}\,\varepsilon^{ijk}\,R_{i\ell}\,\nabla_jR_k^{\ell} \right. \\ &\left.\ -\frac{\kappa^2\mu^2}{8}\,R_{ij}\,R^{ij} +\frac{\kappa^2\,\mu^2}{8\,(1-3\lambda)} \left(\frac{1-4\,\lambda}{4}\,R^2+\Lambda_\textsc w\, R-3\,\Lambda_\textsc w^2\right)\right] \ , \label{ha} \end{split} \end{equation} where we are using the same notation as in Ref.~\cite{HL}. To recover GR in the infrared one must have $\lambda \rightarrow 1$ and \begin{equation} c=\frac{\kappa^2\,\mu}{4}\,\sqrt{-\frac{\Lambda_\textsc w}{2}} \ ; \quad G_\textsc n=\frac{\kappa^2}{32\,\pi\,c} \ ; \quad \Lambda_\textsc b=\frac{3}{2}\,\Lambda_\textsc w \ . \label{def} \end{equation} Note that the units used here match those adopted in~\cite{HL}: $[c]=[\Lambda]=L^{-2}$, $[G_\textsc n]=L^{2}$, $[\mu]=L^{-1}$ and $[\kappa]=[w]=1$. By using the first two expressions in Eq.~\eqref{def}, one also obtains \begin{equation} G_\textsc n = \frac{1}{8\, \pi\, \mu}\, \sqrt{-\frac{2}{\Lambda_\textsc w}} \ , \end{equation} so that $\Lambda_\textsc w$ (hence $\Lambda_\textsc b$) must be non-zero and negative for $G_\textsc n$ to be real.
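(As a side check, the algebra leading from Eq.~\eqref{def} to this relation is easily verified symbolically; the snippet below is our own illustration using SymPy, with $\Lambda_\textsc w=-\lambda$ and $\lambda>0$ so that all square roots are real.)
\begin{verbatim}
import sympy as sp

# Verify G_N = (1/(8 pi mu)) sqrt(-2/Lambda_w) from the definitions in Eq. (def),
# writing Lambda_w = -lam with lam > 0.
kappa, mu, lam = sp.symbols('kappa mu lam', positive=True)
c = kappa**2 * mu / 4 * sp.sqrt(lam / 2)
G_N = kappa**2 / (32 * sp.pi * c)
print(sp.simplify(G_N - sp.sqrt(2 / lam) / (8 * sp.pi * mu)))  # prints 0
\end{verbatim}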
\par In order to estimate the energy scale at which the terms in Eq.~\eqref{ha} switch on, we substitute the parameters of the theory $\kappa$, $\mu$ and $\Lambda_\textsc w$ with $c$, $G_\textsc n$ and $\Lambda_\textsc b$, and then use the relation $x^0 = c\,t$ to write the action as (we set $N=1$ henceforth) \begin{equation} \begin{split} S=\int \mathrm{d}^4 x\,\sqrt{g} & \left[ \frac{1}{16\, \pi\, c^2\, G_\textsc n}\left( K_{ij}\,K^{ij}-K^2\right) -\frac{16\, \pi\, G_\textsc n}{w^4}\,C_{ij}\,C^{ij} +\frac{2}{w^2}\,\sqrt{-\frac{3}{\Lambda_\textsc b}}\,\varepsilon^{ijk}\,R_{i\ell}\,\nabla_jR_k^{\ell} \right. \\ & \left.\ +\frac{1}{16\, \pi\, G_\textsc n} \left( \frac{3}{\Lambda_\textsc b}\, R_{ij}\,R^{ij} -\frac{9}{8\, \Lambda_\textsc b}\,R^2 + R -2\, \Lambda_\textsc b\right) \right] \ . \end{split} \label{haIR} \end{equation} This action is non-relativistic but reduces to the Einstein-Hilbert action when the last two terms dominate. In the above, $w$ is a dimensionless parameter that acts as a coupling in the three-dimensional Chern-Simons action used by Ho$\check{\textrm{r}}$ava\ to deform his four-dimensional action. In a Friedmann-Robertson-Walker (FRW) background, the operators of dimension higher than $4$ vanish identically, so that only the terms in the last line of Eq.~\eqref{haIR} are relevant for our analysis. Before proceeding to tackle the CC problem, however, we estimate the relative size of all the corrections to the GR potential. We therefore factor the dimensions out of the terms in the action~\eqref{haIR} by rescaling the spatial coordinates as \begin{equation} \tilde x_i = \mathcal{P} \, x_i \end{equation} where $\mathcal{P}$ carries the dimension of a spatial derivative ({\em i.e.}~the inverse of a length), and $[\tilde x_i] = 1$. Under this transformation, all functions of the three-metric become dimensionless, $[\tilde R] = [\tilde R_{ij}]=\ldots=[\tilde C_{ij}]=1$, and \begin{equation} \begin{split} \!\!\!\!\! S=\!\!\int\!\! \mathrm{d}^4 x\,\sqrt{g} & \left[ \frac{1}{16\, \pi\, c^2\, G_\textsc n} \left( K_{ij}\,K^{ij}-K^2\right) - \mathcal{P}^6\, \frac{16\, \pi\, G_\textsc n}{w^4}\, \tilde C_{ij}\, \tilde C^{ij} + \mathcal{P}^5\, \frac{2}{w^2}\, \sqrt{-\frac{3}{\Lambda_\textsc b}}\,\varepsilon^{ijk}\, \tilde R_{i\ell}\, \tilde \nabla_j \tilde R_k^{\ell} \right. \\ &\left.\ +\frac{1}{16\, \pi\, G_\textsc n} \left( \mathcal{P}^4\, \frac{3}{\Lambda_\textsc b}\, \tilde R_{ij}\, \tilde R^{ij} - \mathcal{P}^4\, \frac{9}{8\, \Lambda_\textsc b}\, \tilde R^2 + \mathcal{P}^2\, \tilde R -2\, \Lambda_\textsc b\right) \right] \ , \end{split} \label{haIRtilde} \end{equation} where the power of $\mathcal{P}$ is then the scaling (or inverse length) dimension of the corresponding polynomial term in the three-metric and its derivatives. A correction of derivative order $n$ to the GR potential then becomes relevant when the ratio between the coefficient in front of the term multiplied by $\mathcal{P}^n$ and the coefficient multiplying $\tilde R$ becomes of order one.
This determines a set of inverse length scales $k_n$, which are given by~\footnote{We recall that we do {\em not\/} set $c=G_\textsc n=1$.} \begin{equation} k_6 \sim \frac{w}{\sqrt{16 \pi G_\textsc n}} \ ; \quad k_5 \sim \left[ \sqrt{- \frac{\Lambda_\textsc b}{3}}\, \frac{w^2}{32 \pi G_\textsc n} \right]^{\frac{1}{3}} \label{pp} \end{equation} for the dimension 6 and 5 terms respectively, while for the two dimension 4 terms we have (in the order they appear in the action) \begin{equation} k_{4a} \sim \sqrt\frac{-\Lambda_\textsc b}{3} \,\lesssim\, k_{4b} \sim \sqrt\frac{-8\Lambda_\textsc b}{9} \ . \label{p4} \end{equation} As we mentioned before, the two scales in Eq.~\eqref{pp} are not relevant in FRW, since the corresponding corrections vanish identically there, and we are just left with those in Eq.~\eqref{p4}. The smaller of these, $k_{4a}$, then identifies the scale at which the Einstein-Hilbert terms and the first Lorentz-violating correction are equally significant. We will therefore denote it with \begin{equation} k_4 \,\equiv\, k_{4a} \,\simeq\, \sqrt\frac{-\Lambda_\textsc b}{3} \ , \label{c4} \end{equation} and use it in the next section to evaluate the vacuum energy contributions. It is important to note that, by Eq.~\eqref{c4}, $\Lambda_\textsc b$ must be quite large (besides being non-zero and negative), otherwise we would observe UV effects ({\em e.g.}~Lorentz symmetry violations) in the infrared regime. In the next section we will see that $\Lambda_\textsc b$ is actually very close to the Planck scale. \section{Vacuum contributions} \label{sec:cch} \setcounter{equation}{0} A zero-point energy contribution to the energy momentum tensor is equivalent to a correction to the bare cosmological constant $\Lambda_\textsc b$ given by \begin{equation} \Lambda_\textsc v= \frac{8\, \pi\, G_\textsc n}{c^4}\, \varepsilon_{\textsc v} \ , \label{lve} \end{equation} where $\varepsilon_{\textsc v}$ is the energy density of the vacuum given by~\footnote{We use here the standard approximation $\sum_k \approx \frac{V}{8\,\pi^3} \int \mathrm{d} \vec k$ valid for very large spatial volume $V$. Note also that $[\varepsilon_{\textsc v}]=L^{-12}$ and $[\hbar]=L^{-6}$ in our units.} \begin{equation} \varepsilon_{\textsc v} = \frac{1}{V}\, \sum_{\vec k} \frac{\hbar}{2}\, \omega(\vec k) = \frac{1}{8\,\pi^3} \int_0^{k_\textsc{max}} \frac{\hbar}{2}\, \omega(\vec k)\,\mathrm{d}^3 k = \frac{\hbar}{4\, \pi^2} \int_0^{k_\textsc{max}} \omega(\vec k)\, k^2\, \mathrm{d} k \ , \label{ev} \end{equation} where $k_\textsc{max}$ is an appropriate inverse wavelength cut-off and $\omega(\vec k)$ the dispersion relation valid in the regime considered. \par In Ho$\check{\textrm{r}}$ava's model, we can, in principle, identify three different regimes, depending on which operators in the action Eq.~\eqref{ha} (dimension 2, dimension 4 or dimension higher than 4) are relevant at a given (inverse) length scale. Assuming a sharp transition between the three regimes, we can split the integral appearing in Eq.~\eqref{ev} into three parts, \begin{equation} \int_0^{k_\textsc{max}} = \int_0^{k_4} + \int_{k_4}^{k_6} + \int_{k_6}^{k_\textsc{max}} \ . \label{ints-0} \end{equation} However, since operators with dimension larger than 4 vanish identically in an FRW universe, their contribution to the vacuum energy is negligible and, for the purposes of this work, can be ignored.
Hence, we shall only consider two regimes: the infrared (IR) regime, where the action flows to the Einstein-Hilbert action as previously mentioned, and the ultraviolet (UV) regime, in which dimension 4 terms dominate. This difference is substantial for our purposes, since the dispersion relation $\omega(\vec k)$ in Eq.~\eqref{ev} in the UV is expected to differ significantly from the one in the IR. Assuming a sharp transition between the two regimes at $k_4$ given in Eq.~\eqref{c4}, we can split the integral appearing in Eq.~\eqref{ev} into two parts, \begin{equation} \int_0^{k_\textsc{max}} = \int_0^{k_4} + \int_{k_4}^{k_\textsc{max}} \ , \label{ints} \end{equation} where for $k_\textsc{max}$ we simply choose the inverse Planck length $k_\textsc{p}=\ell_{\textsc p}^{-1}=\sqrt{c^3/G_\textsc n\hbar}$\,. We shall just evaluate the energy density for one massless scalar field in flat space in the following. \subsection{IR contribution} In the infrared, the dispersion relation takes the usual form \begin{equation} \omega_{\textsc{ir}}(\vec k) \simeq c\, k \equiv c\,|\vec k| \ . \end{equation} Eq.~\eqref{ev} then yields \begin{equation} \varepsilon_\textsc{v,ir} = \frac{\hbar}{4\,\pi^2} \int_0^{k_4} c\,k^3\,\mathrm{d} k = \frac{\hbar\, c}{16\, \pi^2}\, k^4_4 \end{equation} and the IR contribution to $\Lambda_\textsc v$ is \begin{equation} \Lambda_\textsc{v,ir} = \frac{8\, \pi\, G_\textsc n}{c^4}\, \varepsilon_\textsc{v,ir} = \frac{\hbar\, G_\textsc n}{2\, \pi\, c^3}\, k^4_4 = \frac{\hbar\, G_\textsc n}{18\, \pi\, c^3}\, \Lambda_\textsc b^2 = \frac{\ell_{\textsc p}^2}{18\, \pi}\, \Lambda_\textsc b^2 \ . \end{equation} It is then interesting to note that, in the infrared, a bare CC automatically induces a vacuum CC proportional to the square of the bare CC. \subsection{UV contribution} Let us turn our attention to the UV contribution \begin{equation} \varepsilon_\textsc{v,uv} = \frac{\hbar}{4\, \pi^2} \int_{k_4}^{k_\textsc{max}} \omega_{\textsc{uv}}(\vec k)\, k^2\, \mathrm{d} k \ , \end{equation} where $\omega_{\textsc{uv}}(\vec k)$ is the dispersion relation valid in the UV. A dimension 4 correction suggests that $\omega_{\textsc{uv}}(\vec k) \propto k^2$ so that, on purely dimensional grounds, we expect that \begin{equation} \omega_{\textsc{uv}}(\vec k) = \sqrt \frac{\hbar\, G_\textsc n}{c} \,k^2 \ , \end{equation} whence \begin{equation} \begin{split} \varepsilon_\textsc{v,uv} & = \frac{\hbar}{4\, \pi^2}\, \sqrt \frac{\hbar\, G_\textsc n}{c} \int_{k_4}^{k_\textsc{max}} k^4\, \mathrm{d} k = \frac{1}{20\, \pi^2} \sqrt \frac{\hbar^3\, G_\textsc n}{c} \left( {k^5_\textsc{max}} - {k^5_4} \right) \\ & = \frac{1}{20\, \pi^2} \left( \frac{c^7}{\hbar\, G_\textsc n^2} -\frac{1}{9} \sqrt {-\frac{\hbar^3\, G_\textsc n\, \Lambda_\textsc b^5}{3\,c}} \right) \ . \end{split} \label{evuv} \end{equation} The total $\varepsilon_{\textsc v}$ is then \begin{equation} \varepsilon_{\textsc v} = \varepsilon_\textsc{v,ir} + \varepsilon_\textsc{v,uv} = \frac{1}{144\, \pi^2} \left[ \hbar\, c\, \Lambda_\textsc b^2 + \frac{4}{5} \left( \frac{9\,c^7}{\hbar\, G_\textsc n^2} - \sqrt {-\frac{\,\hbar^3\, G_\textsc n\, \Lambda_\textsc b^5}{3\,c}}\right)\right] \ , \end{equation} which in turn induces a $\Lambda_\textsc v$ given by \begin{equation} \Lambda_\textsc v= \frac{8\, \pi\, G_\textsc n}{c^4}\, \varepsilon_{\textsc v} = \frac{1}{18\,\pi} \left[ \ell_{\textsc p}^2\, \Lambda_\textsc b^2 + \frac{4}{5} \left( \frac{9}{\ell_{\textsc p}^2} - \sqrt\frac{-\ell_{\textsc p}^6\, \Lambda_\textsc b^5}{3}\,\right)\right] \ .
\end{equation} Here it is important to note that the above expression is scale independent, since it is given in terms of the fundamental constants $\Lambda_\textsc b$ and $\ell_{\textsc p}$ which are not subject to renormalization effects. \par From Eq.~\eqref{occ}, Ho$\check{\textrm{r}}$ava's theory with detailed balance can cope with observations if the total CC matches the observed $\Lambda_\textsc{obs}$, that is if \begin{equation} \Lambda_\textsc b + \Lambda_\textsc v = \Lambda_\textsc b + \frac{1}{18\,\pi} \left[ \ell_{\textsc p}^2\, \Lambda_\textsc b^2 + \frac{4}{5} \left( \frac{9}{\ell_{\textsc p}^2} - \sqrt\frac{-\ell_{\textsc p}^6\, \Lambda_\textsc b^5}{3}\,\right)\right] = \Lambda_\textsc{obs} \ . \end{equation} Since $|\Lambda_\textsc b|$ is expected to be much larger than $\Lambda_\textsc{obs}$ [see the discussion below Eq.~\eqref{c4}], we can effectively approximate $\Lambda_\textsc{obs}\simeq 0$ and solve \begin{equation} \Lambda_\textsc b + \frac{1}{18\,\pi} \left[ \ell_{\textsc p}^2\, \Lambda_\textsc b^2 + \frac{4}{5} \left( \frac{9}{\ell_{\textsc p}^2} - \sqrt\frac{-\ell_{\textsc p}^6\, \Lambda_\textsc b^5}{3}\,\right)\right] \simeq 0 \label{bccsol} \end{equation} for $\Lambda_\textsc b$. One then obtains \begin{equation} \Lambda_\textsc b \simeq -0.13\,\ell_{\textsc p}^{-2} \ , \label{bcc} \end{equation} which implies that UV effects start contributing at~\footnote{Of course, our results must be taken as order of magnitude estimates, since, in evaluating $\Lambda_\textsc v$, we just considered one scalar field and have not properly accounted for all the degrees of freedom of existing matter fields.} \begin{equation} \ell_\textsc{uv} \equiv k_4^{-1} \equiv \sqrt \frac{-3}{\Lambda_\textsc b} \simeq 4.8\,\ell_{\textsc p} \ . \label{Luv} \end{equation} This result tells us that non-GR terms only switch on at very small length scales. The above estimate for $\ell_\textsc{uv}$ is also relatively stable with respect to the possible values of $k_4$. In fact, for $k_4 \ll k_{4a}$, the solution for $\ell_\textsc{uv}$ tends to $\sqrt{15\pi/2} \,\ell_{\textsc p}\simeq 4.9\,\ell_{\textsc p}$. For $k_{4a} < k_4 \lesssim 4.3 \, k_{4a}$, $\ell_\textsc{uv}$ slowly decreases toward its minimum, $\ell_\textsc{uv,min} \simeq 4.3\, \ell_{\textsc p}$, and then increases again for $k_4 \gtrsim 4.3 \, k_{4a}$, with the ratio $\ell_\textsc{uv}/\ell_{\textsc p}$ growing slower than the square root of $k_4/k_{4a}$. For instance, one needs $k_4$ to be (approximately) 15 times larger than its natural value $k_{4a}$ for the ratio $\ell_\textsc{uv}/\ell_{\textsc p}$ to double the value given in Eq.~\eqref{Luv}. We can therefore conclude that the result in Eq.~\eqref{Luv} is a reliable approximation and $\ell_\textsc{uv} \simeq 5\,\ell_{\textsc p}$ for any sensible choice of $k_4$.
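The root of Eq.~\eqref{bccsol} is also easily reproduced numerically. The following is a minimal Python sketch in Planck units ($\ell_{\textsc p}=1$), solving for $y = -\Lambda_\textsc b\,\ell_{\textsc p}^2$; the bracketing interval and the variable names are our own choices:
\begin{verbatim}
# Numerical check of Eq. (bccsol) in Planck units (l_P = 1).
import numpy as np
from scipy.optimize import brentq

def total_cc(y):
    # Left-hand side of Eq. (bccsol) with Lambda_b = -y.
    return -y + (y**2 + 0.8 * (9.0 - np.sqrt(y**5 / 3.0))) / (18.0 * np.pi)

y = brentq(total_cc, 0.05, 0.5)              # bracket the non-trivial root
print(f"Lambda_b ~ -{y:.3f} / l_P^2")        # ~ -0.13, cf. Eq. (bcc)
print(f"l_UV ~ {np.sqrt(3.0 / y):.1f} l_P")  # ~ 4.8, cf. Eq. (Luv)
\end{verbatim}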
\par Our analysis also reveals two intriguing features: \\ i) the UV regime ``spans'' only about five Planck lengths, and one might argue that its existence is negligible in a first approximation. A closer inspection, however, reveals that it is essential for the consistency of the present study. Indeed, without the UV contribution given in Eq.~\eqref{evuv}, Eq.~\eqref{bccsol} would yield two solutions, $\ell_\textsc{uv} = \infty$ and $\ell_\textsc{uv} = \ell_{\textsc p}/\sqrt{6 \pi} < \ell_{\textsc p}$, both of which are clearly unacceptable. \\ ii) no choice of $k_4$ in Eq.~\eqref{ints} can lead to $\ell_\textsc{uv}<\ell_{\textsc p}$; that is, the scale at which Lorentz-violating UV terms become relevant is larger than the Planck scale, and Ho$\check{\textrm{r}}$ava's theory cannot be considered equivalent to GR all the way down to the Planck length. \par Finally, Eq.~\eqref{bcc} allows us to close the system~\eqref{def}, providing a first estimate for the IR values of the parameters of the theory, namely \begin{equation} \mu_\textsc{ir} \approx \frac{5}{4\,\pi} \sqrt\frac{\hbar}{G_\textsc n\, c^3} = \frac{5\,\ell_{\textsc p}}{4\,\pi\,G_\textsc n} \ ; \quad \Lambda_\textsc{w,ir} \approx -\frac{1}{60\, \ell_{\textsc p}^2} \ , \end{equation} while $\kappa$ is already determined by the second of Eqs.~\eqref{def}. \section{Conclusions} \label{sec:sum} \setcounter{equation}{0} We have shown that the huge discrepancy between the observed value of the cosmological constant and standard predictions from Quantum Field Theory can be addressed in the framework of the Ho$\check{\textrm{r}}$ava-Lifshitz theory of gravity with detailed balance. In fact, this theory contains a negative and large bare cosmological constant which compensates for the large and positive vacuum energy of matter fields. In so doing, we have established a relation between the smallness of the total cosmological constant and the scale of ultraviolet effects given in Eq.~\eqref{Luv}. One can take the view that ultraviolet effects do not show up at lower energies because the bare cosmological constant is almost totally compensating for the zero-point energy density or, conversely, that the almost perfect compensation between the bare and the vacuum cosmological constant prevents Lorentz-violating ultraviolet effects from interfering with large-scale physics. \par Details of the interplay between these apparently different sectors of Ho$\check{\textrm{r}}$ava's cosmology are left open for future investigations. \section*{Acknowledgments} The authors wish to thank A.~Papazoglou, R.~Maartens and A.~Wang for useful discussions about Ho$\check{\textrm{r}}$ava's theory and comments on the manuscript. S.~S.~is supported by the Marie-Curie Incoming International Grant IIF-2006-039205.
{ "attr-fineweb-edu": 1.832031, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfUg5qhLB3UNjqRgB
\section{Introduction} Monte Carlo (MC) methods are an important computational approach in many fields of science and technology. One common problem solved through MC methods is the numerical integration of a function. Here, the integrand is evaluated at randomly selected points and the values are averaged to obtain an approximation to the integral. This procedure takes a number of evaluations of the integrand that scales inversely with the square of the desired accuracy. Quantum algorithms have the potential to improve this error scaling, e.g., such that a shorter runtime is required to achieve the same error as classical MC. A quantum algorithm for database search was first presented by Grover \cite{Grover1996}, obtaining a quadratic speedup in the number of queries to an unstructured database for finding a particular element. It was generalized to amplitude amplification in \cite{Brassard2002} and extended to amplitude estimation in the same reference. Amplitude estimation provides a useful starting point for quantum versions of MC. In the qubit setting, quantum MC algorithms were discussed, e.g.,~in \cite{Montanaro2015,Xu2018}. Ref.~\cite{Pati2000} adapts Grover's search algorithm to the quantum continuous-variable (CV) context. To our knowledge, a CV adaptation of amplitude estimation and its extension to Monte Carlo has not so far been considered. In this work, we introduce a CV version of quantum Monte Carlo (QMC)\footnote{We note that ``quantum Monte Carlo'' is commonly used as a descriptor for the study of quantum systems with classical Monte Carlo techniques, for example in quantum chemistry.} for the evaluation of multi-dimensional integrals. As an intermediate step, we also adapt the amplitude estimation algorithm to the CV setting. We discuss the steps of our algorithm and highlight the CV versions of familiar qubit-based transformations, including the controlled rotation and reflection operations. Our analysis contains an account of errors, including inaccuracies due to finite squeezing. The resultant algorithm can give quadratic speedups for evaluating integrals, and we discuss under which conditions a speedup is realized. By focusing on the CV paradigm of quantum computing, our approach confers a number of advantages in comparison to existing results using qubits for speedups in MC~\cite{Montanaro2015,Xu2018,Rebentrost2018finance}. Fundamentally, the mechanics of CV naturally accommodates the language of integration and does not require any discretization of the integration space, as is the case for qubits. The number of modes used for $n$-dimensional CV QMC is $n + 3$, a quantity that is independent of the desired accuracy of integration. However, the ability to squeeze and apply the required cubic phase gates presents a challenge for physical implementations of CV QMC. We summarize the setting of the present work in Sec.~\ref{sectionSetting}. We detail each stage of our algorithm for integration over a single dimension in Secs.~\ref{sectionFirstStage} and~\ref{sectionCVMC}, while discussing errors and speedups from the algorithm in Sec.~\ref{sectionErrors}. The extension to multiple dimensions is discussed in Sec.~\ref{secMulti} and an example numerical implementation of phase estimation is given in Sec.~\ref{Sec:Numerics}. We then conclude in Sec.~\ref{sectionDiscussion}. \section{Setting} \label{sectionSetting} Consider a real-valued $n$-dimensional function $g(\vec x): \mathbbm R^n \to \mathbbm R$.
Its integral over a region $R \subseteq \mathbbm R^n$ is written as \begin{equation} \label{eqMainInt} \mathcal{I} := \int_{R} d \vec x \,\, g(\vec x), \end{equation} where $\vec x \in \mathbbm R^n$. For many choices of $g(\vec{x})$, the explicit evaluation of $\mathcal{I}$ is hard, and one often resorts to MC sampling to find an approximate solution. For many applications, a representation of the integral as an \textit{expectation value} appears naturally, and we focus on this setting in the following. Then \begin{equation}\label{eqExpValMain} \mathcal{I} = \int_{\mathbbm R^n} d\vec x \,\, p(\vec x) f(\vec x), \end{equation} where $f(\vec x): \mathbbm R^n \to \mathbbm R$ is another real-valued function and $p(\vec x)$ is a multidimensional probability distribution $p(\vec x): \mathbbm R^n \to \mathbbm R$ with $\int d\vec x \,\, p(\vec x) = 1$, for example the Gaussian distribution. Here, $f(\vec x)$ is an arbitrary function describing a random variable over outcomes distributed according to $p(\vec x)$, so that $\mathcal{I}$ can be approximated through MC using $N_{C}$ samples as \begin{equation} \mathcal{I} \approx \tilde{\mathcal{I}} := \frac{1}{N_{C}}\sum_{\substack{i=1 \\ \vec x_{i} \sim p(\vec x)}}^{N_{C}} f(\vec x_{i}). \end{equation} The probability that this approximation deviates from $\mathcal{I}$ by more than an error $\epsilon$ is bounded by Chebyshev's inequality, \begin{equation}\label{eqCheb} {\rm Pr} \left (|\mathcal{I} - \tilde{\mathcal{I}}| \geq \epsilon \right) \leq \frac{\sigma^{2}}{N_{C} \epsilon^{2}}, \end{equation} where $\sigma^{2}$ is the variance of the random variable $f(\vec x)$ with respect to $p(\vec x)$. Hence, for a constant error probability it suffices to pick $N_{C} = \mathcal{O}(\sigma^{2} / \epsilon^{2})$. We introduce a CV quantum algorithm for MC integration and show that it can provide speedups in comparison to the classical approach. Our approach can yield a close to quadratic speedup in processing time, with a number of steps $N_Q = \mathcal{O}(1 / \epsilon)$. Our algorithm consists of two stages. The first stage represents computing the integral. Here, an optical mode is prepared following the probability distribution $p(\vec x)$ and a ``controlled rotation'' is then enacted with other modes to imprint the random variable $f(\vec x)$. The second stage is to perform a CV version of amplitude estimation that we introduce here, which is achieved by combining with a squeezed resource mode for phase estimation. This stage allows the integral to be extracted with a quadratic speedup in runtime. In this work, the function $f(\vec x)$ is bounded as $0 \leq f(\vec x) \leq 1$ for all $\vec x$. This also means that the desired integral is $\mathcal{I} \leq 1$, since $p(\vec x)$ is a probability density. An extension to more general functions was given in the qubit context in Ref.~\cite{Montanaro2015} and the corresponding CV version will be the subject of future work. We now proceed to explain CV QMC and then discuss the speedups and errors. The following focuses on the case of one-dimensional integration, and we extend to multiple dimensions in Sec.~\ref{secMulti}. Figure~\ref{Fig:Diagram} shows the quantum circuit diagram for one-dimensional integration using CV QMC, which requires four modes and is split into the two stages.
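For reference, the classical MC estimation of Eq.~\eqref{eqExpValMain} described above is straightforward to implement. The following minimal Python sketch uses the example integrand of Sec.~\ref{Sec:Numerics}, a Gaussian $p(x)$ with $x_0=0$, $\sigma=1/2$ and $f(x)=1/(1+x^2)^2$, for which $\mathcal I \approx 0.74$; the sample sizes and the random seed are our own choices:
\begin{verbatim}
# Classical MC baseline for Eq. (eqExpValMain), illustrating the
# N_C = O(sigma^2/eps^2) sampling cost implied by Eq. (eqCheb).
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 1.0 / (1.0 + x**2)**2

for n_c in [10**2, 10**4, 10**6]:
    samples = f(rng.normal(0.0, 0.5, size=n_c))  # x_i ~ p(x)
    est = samples.mean()                         # estimate of I
    err = samples.std() / np.sqrt(n_c)           # standard error ~ 1/sqrt(N_C)
    print(f"N_C = {n_c:>7}: I ~ {est:.4f} +/- {err:.4f}")
\end{verbatim}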
\begin{figure} $$ \,\,\,\, \Qcircuit @C=1em @R=2em { & & \,\,\,\, \,\,\,\, \,\, \mbox{\bf First Stage} & & & & \mbox{\bf Second Stage} \\ & \lstick{\ket{\rm vac}} & \gate{\mathcal{G}} & \multigate{2}{\mathcal{H}} & \qw & \multigate{3}{\mathcal{Q}_{c}} & \multigate{3}{\mathcal{Q}_{c}} & \qw & \ldots & & \multigate{3}{\mathcal{Q}_{c}} & \qw \\ & \lstick{\ket{\rm vac}} & \gate{S} & \ghost{\mathcal{H}} & \qw & \ghost{\mathcal{Q}_{c}} & \ghost{\mathcal{Q}_{c}} & \qw & \ldots & & \ghost{\mathcal{Q}_{c}} & \qw \\ & \lstick{\ket{\rm vac}} & \gate{S} & \ghost{\mathcal{H}} & \qw & \ghost{\mathcal{Q}_{c}} & \ghost{\mathcal{Q}_{c}} & \qw & \ldots & & \ghost{\mathcal{Q}_{c}} & \qw \\ & \lstick{\ket{\rm vac}} & \qw & \qw & \gate{S} & \ghost{\mathcal{Q}_{c}} & \ghost{\mathcal{Q}_{c}} & \qw & \ldots & & \ghost{\mathcal{Q}_{c}} & \meterB{\ket{x}} \gategroup{2}{2}{4}{4}{0.7em}{.} \gategroup{2}{5}{5}{11}{0.7em}{--} } $$ \caption{Quantum circuit diagram for one-dimensional integration using CV QMC. Here, $\mathcal{I}=\int dx \,\, p(x) f(x)$ is approximated in two stages. The first stage (Sec.~\ref{sectionFirstStage}) begins by preparing the first mode using $\mathcal{G}$ so that its position wavefunction matches $\sqrt{p(x)}$. The second and third modes are squeezed in the position eigenbasis as much as possible and then $\mathcal{H}$ imprints $f(x)$ using the CV analogue of a controlled rotation. Projecting onto regions in the position eigenbasis of the second and third mode then gives $\mathcal{I}$ through the success probability. The second stage (Sec.~\ref{sectionCVMC}) is CV amplitude estimation. Here, multiple applications of the three-mode unitary $\mathcal{Q}$ amplifies the amplitude corresponding to the projection. Adding a squeezed ancilla mode and instead applying the controlled unitary $\mathcal{Q}_{c}$ imprints $\mathcal{I}$ into the final mode. Measuring the final mode in the position eigenbasis then gives the integral as an expectation value. The total number of applications of $\mathcal{Q}_{c}$ can be $\mathcal{O}(1/\epsilon)$ for an approximation error~$\epsilon$. The errors in CV QMC are accounted for in Sec.~\ref{sectionErrors}.} \label{Fig:Diagram} \end{figure} \section{First Stage: Encoding the integral} \label{sectionFirstStage} The objective of the first stage of CV QMC is to encode the integral $\mathcal{I}$. This stage uses three modes: the first mode is prepared dependent on $p(x)$ and the other two modes are used to imprint $f(x)$. The probability of successfully postselecting on these two modes then gives the integral $\mathcal{I}$. This method uses a projective measurement so that the CV version of amplitude estimation can be enacted in the second stage of our algorithm. \subsection{Initial states} \label{sectionInitial} In most algorithms of CV quantum computation the initial states are prepared in the vacuum, $\ket{\rm vac}$. As is the case here, it can also be useful to theoretically work with infinitely squeezed $\hat{q}$ (position) eigenstates $\ket {x_{0}}_q $. See Appendix \ref{appendixCVBasics} for an introduction to some elements of continuous-variable quantum computing. Using the squeezing and displacement gates, these states are prepared from the vacuum by infinitely squeezing and then displacing by ${x_{0}}$, i.e., \begin{equation} \ket {x_{0}}_q = \lim_{r\to \infty} D\left (x_{0} \right)S(r) \ket{\rm vac}. \end{equation} Realistically, there is a maximum squeezing factor $r_{\max}$ achievable in physical implementations, which introduces errors.
For an approximation to $\ket {x_{0}}_q $ we can use a finitely squeezed and displaced coherent state \begin{eqnarray} \ket{G_{{x_{0}},s} } &=& D\left (x_{0} \right)S\left (r \right) \ket{\rm vac} \nonumber \\ &=& \int dx \,\, G_{x_{0}, s}(x) \ket{x}_q, \end{eqnarray} with a squeezing factor of $r \rightarrow r_{\max}$ and the squeezing $s:=\frac{1}{\sqrt 2} e^{-r}\to s_{\min}$. For these states the wavefunction $G_{{x_{0}}, s}(x)$ is proportional to a Gaussian with a standard deviation proportional to $s$ and a mean ${x_{0}}$. For some of the discussion, we assume availability of position eigenstates $\ket{{x_{0}}}_{q}$ using infinite squeezing and then account for the error effects of using finitely squeezed and displaced coherent states $\ket{G_{{x_{0}},s} }$. \subsection{Preparing $p(x)$} \label{sectionG} The first stage of CV QMC begins with preparing a mode according to the probability distribution $p(x)$. Precisely, we assume availability of a unitary $\mathcal G$ such that \begin{equation}\label{Eq:Prep} \mathcal G \ket{\rm vac} = \int dx \sqrt{p(x)} \ket x_q. \end{equation} As discussed in Appendix~\ref{appendixDecomposition}, $\mathcal G$ can be implemented by approximating it through a decomposition into a universal set of elementary CV gates, such as Gaussian unitaries and the cubic phase gate. For CV QMC to provide speedups over conventional MC, the decomposition of $\mathcal{G}$ into elementary gates must be efficient, see Sec.~\ref{sectionErrors}. To highlight an important case, many problems involve the Gaussian probability density \begin{equation} p(x) =G_{x_0,\sqrt{2} \sigma}^2(x), \end{equation} with standard deviation $\sigma$ and mean $x_{0}$. The square root of this density satisfies $\sqrt{p(x)} \propto G_{x_0,\sqrt 2 \sigma}(x)$. We can then prepare a mode in the state $\ket{G_{x_0, \sqrt 2\sigma}}$ by applying \begin{equation} \mathcal{G} = D\left(x_{0} \right) S\left(- \log \sqrt{2} \sigma \right) \end{equation} to the vacuum. \subsection{Applying $f(x)$} \label{sectionF} The random variable function $f(x)$ is imprinted by interacting with an additional two ancillary modes. The interaction is given by the three-mode gate \begin{equation}\label{eqGateHid} \mathcal H^{\rm id} := e^{-i \left (1/\sqrt{f(\hat q_1)} \right) \otimes \hat p_2 \otimes \hat p_3}. \end{equation} Here, we use the $^{\rm id}$ superscript to denote the ``ideal'' version of the gate. This gate acts with the function $1/\sqrt{f(\hat q_1)}$ on the first mode, where $\hat q_1$ is the position operator of the first mode, and with the momentum operators $\hat p_2$ and $\hat p_3$ on the ancilla modes. To understand the action of $\mathcal{H}^{\rm id}$, we can apply it to $\ket{x}_{q_{1}}\ket{0}_{q_{2}}\ket{0}_{q_{3}}$. Using $\ket{0}_{q_2} = \int dp' \ket{p'}_{p_2} $, the result is \begin{eqnarray} \mathcal H^{\rm id} \ket x_{q_1} \ket{0}_{q_2} \ket 0_{q_3} &=& \ket x_{q_1} \int dp' \ket{p'}_{p_2} \ket { \frac{ p'}{\sqrt{f(x)}} }_{q_3}.\label{eqApplyH} \end{eqnarray} Hence the interaction can be interpreted as the CV analogue of a controlled rotation, i.e.,~displacing the position of the last mode by an amount proportional to $\frac{1}{\sqrt{f(x)}}$, dependent upon the position eigenstate $\ket{x}_{q_{1}}$ of the first mode. In the qubit version, an ancilla qubit is rotated by an amount determined by another register of qubits, see Appendix \ref{appendixH} for more discussion on this analogy.
Similarly to the qubit case, we can perform a measurement on the ancillary modes of $\mathcal H^{\rm id} \ket x_{q_1} \ket{0}_{q_2} \ket 0_{q_3}$ to obtain an amplitude encoding of the function $\sqrt{f(x)}$, or the function itself $f(x)$ through the probability of success. We measure the second mode postselected in the state $\ket{0}_{q_2}$ and the third mode in the state $\ket{x_{\rm off}}_{q_3}$. The offset value $x_{\rm off}$ is arbitrary and may be chosen according to experimental convenience. Applying $\bra{0}_{q_2}\otimes \bra{x_{\rm off}}_{q_3}$ results in \begin{eqnarray} &&\left (\bra{0}_{q_2}\otimes \bra{x_{\rm off}}_{q_3}\right ) \mathcal H^{\rm id} \ket x_{q_1} \ket{0}_{q_2} \ket 0_{q_3} = \frac{\sqrt{f(x)} }{2\sqrt{\pi}} \ket x_{q_1}, \nonumber\\ \label{eqHPostSelect} \end{eqnarray} see Appendix \ref{appendixH} for the intermediate steps. This means that the resultant state is $\ket{x}_{q_{1}}$ with a probability proportional to $f(x)$. For a physical implementation of CV QMC, the function $1/\sqrt{f(\hat{q}_{1})}$ can be implemented via a polynomial approximation $h(\hat{q}_{1})$. We can in principle implement the exponentiated polynomial $e^{-i h(\hat q_1)}$ by decomposing it into a sequence of Gaussian single-mode gates and cubic phase gates. Such decompositions have been studied previously, for example in \cite{Sefi2011}, and are also discussed in Appendix \ref{appendixDecomposition}. This decomposition must be efficient to provide useful speedups through CV QMC, see Sec.~\ref{sectionErrors}. The three-mode interaction using $h(x)$ is given by \begin{equation}\label{eqGateHimpl} \mathcal H \equiv \mathcal H^{\rm impl} := e^{-i h(\hat q_1)\otimes \hat p_2 \otimes \hat p_3}. \end{equation} Here, we use the $^{\rm impl}$ superscript to denote the ``implementation'' version of the gate. This gate is generated by a higher-order polynomial in the position and momentum operators of the three modes. Like the single-mode gate $e^{-i h(\hat q_1)}$, it can be decomposed into a sequence of single- and two-mode Gaussian operations and single-mode cubic phase gates. If the decomposition of $e^{-i h(\hat q_1)}$ is efficient, then so is the decomposition of $\mathcal H^{\rm impl}$. This approach is a generalization of the technique used by Lau \textit{et al.} \cite{Lau2016} for performing a quantum matrix inversion \cite{Harrow2009} in the CV setting. It achieves an encoding of the function $1/\vert h(x)\vert \approx \sqrt{f(x)}$ as an amplitude of the position eigenstate $\ket x_{q_1}$. The interaction can of course be performed on a superposition state in the position eigenbasis, which is the route now used to obtain our desired integral.
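For concreteness, a suitable polynomial can often be obtained by a least-squares fit of $1/\sqrt{f(x)}$ over the relevant region; for the example integrand of Sec.~\ref{Sec:Numerics}, $f(x)=1/(1+x^2)^2$, the degree-two choice $h(x) = 1+x^2$ is in fact exact. A minimal sketch for a generic $f$, where the illustrative $f$, the fitting region, and the degree are our own choices:
\begin{verbatim}
# Sketch of fitting a polynomial h(x) ~ 1/sqrt(f(x)) for the gate of
# Eq. (eqGateHimpl), checking the pointwise error |f - 1/|h|^2|.
import numpy as np

f = lambda x: np.exp(-x**2 / 2)      # illustrative f with 0 <= f <= 1
x = np.linspace(-2.0, 2.0, 401)      # non-trivial region of the integral
coeffs = np.polyfit(x, 1.0 / np.sqrt(f(x)), deg=6)
h = np.polyval(coeffs, x)
print(f"max |f - 1/h^2| ~ {np.max(np.abs(f(x) - 1.0 / h**2)):.1e}")
\end{verbatim}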
\end{equation} This ``ideal" operator consists of preparing the superposition over all position eigenkets in the first mode with amplitudes $\sqrt{p(x)}$ via $\mathcal G$, infinitely squeezing the ancilla modes via $S(\infty)$, and finally applying the three-mode controlled rotation gate $\mathcal{H}^{\rm id}$ that encodes $f(x)$. Finitely squeezed initial states are accounted for in Appendix \ref{App:FiniteSqueezing}. We define the resulting state as \begin{equation} \ket {\chi^{\rm id}} := \mathcal K^{\rm id} \ket{\psi _{\rm in}}. \end{equation} After preparing the first mode and squeezing the ancillas, we have as an intermediate state \begin{equation} \mathcal G \otimes S(\infty)\otimes S(\infty)\ket{\psi_{\rm in}} = \int dx \sqrt{p(x)} \ket x_{q_1} \ket{0}_{q_2} \ket 0_{q_3}. \end{equation} Then applying $\mathcal H$, see Eq.~(\ref{eqApplyH}), obtains \begin{equation} \ket {\chi^{\rm id}} = \int dx \sqrt{p(x)} \ket x_{q_1} \int dp' \ket{p'}_{p_2} \ket {\frac{ p'}{\sqrt{f(x)}}}_{q_3}. \end{equation} Postselecting on the resource modes in the infinitely squeezed states $\ket{0}_{q_2}\otimes \ket{x_{\rm off}}_{q_3}$, using Eq.~(\ref{eqHPostSelect}), arrives at \begin{equation}\label{eqHPostSelectSuperposition} \frac{1}{2\sqrt{\pi} }\int dx \,\, \sqrt{p(x) f(x)} \ket x_{q_1} . \end{equation} The postselection success probability is \begin{equation}\label{eqHPostSelectProb} \frac{1}{4\pi} \int_{-\infty}^{\infty} dx \,\, p(x)f(x) = \frac{\mathcal I}{4\pi}, \end{equation} which is proportional to the desired integral $\mathcal I$. For the results in Eqs.~(\ref{eqHPostSelect}), (\ref{eqHPostSelectSuperposition}), and (\ref{eqHPostSelectProb}), postselection is performed on infinitely squeezed states. In other words, the operator \begin{equation}\label{Eq:M} \mathcal{M} := \mathbbm I\otimes \ket{0}_{q_2} \bra{0}_{q_2} \otimes \ket{x_{\rm off}}_{q_3} \bra{x_{\rm off}}_{q_3} \end{equation} is measured, where $\mathbbm I$ is the identity operator. However, it is unphysical to be able to measure such an operator as one can only measure the position in a finite interval with spread $\Delta x$. The physically realizable $\Delta x$ is larger than the spread required for an ideal operation $\Delta x' \to 0$. We now account for this physical setting by introducing a suitable \textit{projector}. The following CV amplitude estimation algorithm also requires a projector, while here $ \mathcal{M}^2 \neq \mathcal{M}$ since the infinitely squeezed states are not normalizable. \footnote{In the qubit setting, this requirement can be relaxed, see e.g.~\cite{Xu2018}, and the measurement can be a Hermitian operator. We leave this case for future discussion.} There are multiple ways to define a projector that obtains the integral to a certain approximation. An alternative projector to the present work is the projector discussed in \cite{Pati2000}, see also Appendix \ref{appendixPBL}. In this work, we focus on a projector into squeezed coherent states, defined as \begin{equation} \label{eqProj} P_{x_0,\Delta x} := \ket{G_{x_0,\Delta x} }\bra{G_{x_0,\Delta x} }. \end{equation} Note that $P_{x_0,\Delta x}^2 = P_{x_0,\Delta x}$. Such a projector can be measured via the application of (anti-) squeezing and subsequent heterodyne measurement \cite{Weedbrook2012}. Let the maximum squeezing factor be $r_{\max}$ and the associated squeezing be $s_{\min}$. 
In contrast to Eq.~(\ref{Eq:M}), the projector that approximately yields the desired success probability is given by \begin{equation} \mathcal P:= \mathbbm I\otimes P_{0,s_{\min}} \otimes P_{x_{\rm off},s_{\min}}. \end{equation} Note again that $\mathcal P^2 =\mathcal P$. This operator projects into the respective squeezed coherent states with spread $s_{\min}$. Before we apply this projector, consider the state $\ket{\chi^{\rm id}}$. Realistically, we can only apply $\mathcal H^{\rm impl}$ and squeeze with $r_{\max}$, thus applying the operator \begin{equation} \label{eqOpKimpl} \mathcal K^{\rm impl} := \mathcal H^{\rm impl} \left (\mathcal G\otimes S(r_{\max}) \otimes S(r_{\max}) \right ), \end{equation} which leads to the state \begin{equation} \ket {\chi^{\rm impl}} := \mathcal K^{\rm impl} \ket{\psi _{\rm in}}. \end{equation} If we measure this projector on the state $ \ket {\chi^{\rm impl}}$, we obtain \begin{eqnarray} \bra {\chi^{\rm impl}} \mathcal P \ket {\chi^{\rm impl}} &\approx&\frac{s_{\rm min}^4}{ \pi^2 } \mathcal I , \label{eqchiPchi} \end{eqnarray} see Appendix \ref{appendixFiniteSqueezing}. The measurement returns the integral as before, scaled by the measurement spread and the squeezing $s_{\min}$, in the limit of strong squeezing ($s_{\min}\to 0$). The error scales as $\Ord{s_{\min}^6}$, see also Appendix \ref{appendixFiniteSqueezing}. The additional error due to the polynomial approximation is discussed in Appendix \ref{appendixErrorPolynomial}. To summarize, the integral can be obtained by using a CV analogue of a controlled rotation and then performing a projective measurement (similar to the qubit setting). However, this does not yet provide a speedup in comparison to classical methods, since finding $\bra {\chi^{\rm impl}} \mathcal P \ket {\chi^{\rm impl}}$ is achieved experimentally through a simple Bernoulli trial. We now show how a CV version of amplitude estimation can be applied to provide speedups through QMC. \section{Second stage: Amplitude Estimation and Speedup} \label{sectionCVMC} The second stage of CV QMC is to provide a speedup by adding an additional mode and repeatedly performing a four-mode interaction that encodes the integral as the result of a position measurement on the final mode. This stage represents a CV version of amplitude amplification and estimation. As discussed in Refs.~\cite{Brassard2002,Knill2007}, amplitude estimation for qubits is a combination of amplitude amplification with quantum phase estimation. We consider both elements in the CV setting. For CV amplitude amplification, we define a continuous-variable operator $\mathcal Q$ in analogy to the qubit case (also called the Grover operator in the search context). This operator encodes the desired expectation value in its eigenvalues. CV phase estimation using a single squeezed mode is then performed with a ``controlled'' operator $\mathcal Q_c$ to resolve the corresponding eigenvalue. \subsection{Amplitude amplification} First, along the lines of \cite{Brassard2002}, we would like to turn the measurement operator into a unitary operator. Consider the idealized operator \begin{equation}\label{eqV} \mathcal V^{\rm id} := \mathbbm I^{\otimes 3} - 2 \mathcal M. \end{equation} This operator is not unitary as $\mathcal M$ is not a projector. Nevertheless, a measurement of $\mathcal V^{\rm id} $ on $\ket {\chi^{\rm id}}$ extracts the desired integral via $\bra {\chi^{\rm id}} \mathcal V^{\rm id} \ket {\chi^{\rm id}} =1- \mathcal I/2\pi$.
To obtain a unitary operator, we use the previously defined projector $\mathcal P$ and set \begin{equation} \mathcal V^{\rm impl} := \mathbbm I^{\otimes 3} - 2 \mathcal P \ , \end{equation} which leads to $\bra {\chi^{\rm impl}} \mathcal V^{\rm impl} \ket {\chi^{\rm impl}} \approx 1 - 2 \frac{s_{\min}^4}{ \pi^2 } \mathcal I$. Recall again the errors due to finite squeezing and polynomial approximations, as discussed further in Sec.~\ref{sectionErrors}. Using the ideal states for the moment, we can formally express $\mathcal V^{\rm id} \ket {\chi^{\rm id}}$ as a linear combination of $\ket {\chi^{\rm id}}$ and a particular orthogonal complement $\ket {\chi^{{\rm id},\perp}}$, i.e., \begin{equation} \mathcal V^{\rm id} \ket {\chi^{\rm id}} = \cos (\theta/2) \ket {\chi^{\rm id}} + e^{i\phi} \sin (\theta/2) \ket {\chi^{{\rm id},\perp}}. \end{equation} It follows that \begin{equation} \label{eqTheta} \cos \left(\frac{\theta}{2}\right)= 1 - \frac{\mathcal I}{2\pi}, \end{equation} and we can equivalently think of $\theta$ as containing the desired integral. Next we use the fact that $\mathbbm{I} - 2 \ket{\psi}\bra{\psi}$ defines a unitary reflection around any state $\ket{\psi}$, such that $\ket{\psi} \rightarrow - \ket{\psi}$ and $\ket{\psi^{\perp}} \rightarrow \ket{\psi^{\perp}}$ for any orthogonal states. From this, define $\mathcal Q^{\rm id}$ as the ideal operator for amplitude amplification, given by a sequence of a reflection of $\mathcal V^{\rm id} \ket {\chi^{\rm id}}$ followed by a reflection of $\ket {\chi^{\rm id}}$, \begin{equation}\label{eqQ} \mathcal Q^{\rm id} := \left (\mathbbm I -2 \ket {\chi^{\rm id}} \bra {\chi^{\rm id}}\right )\left(\mathbbm I -2 \mathcal V^{\rm id} \ket {\chi^{\rm id}} \bra {\chi^{\rm id}} \mathcal V^{\rm id}\right ). \end{equation} This operator performs a rotation by an angle of $\theta$ in the two-dimensional Hilbert space spanned by $\ket{{\chi^{\rm id}}}$ and $\mathcal V^{\rm id}\ket{{\chi^{\rm id}}}$ ~\cite{Brassard2002,Knill2007,Rebentrost2018finance}. We can hence diagonalize $\mathcal{Q}^{\rm id}$ in this subspace, see Appendix \ref{appendixQ}. This leads to the eigenstates $\ket{ \psi_\pm}$ with corresponding eigenvalues $e^{\pm i \theta}$. Now, the state $\ket {\chi^{\rm id}}$ output from the first stage of CV QMC is the initial state for amplitude estimation. We can express \cite{Xu2018} \begin{eqnarray} \ket {\chi^{\rm id}} = \frac{1}{\sqrt 2} \left ( \ket {\psi_+} + \ket {\psi_-} \right). \end{eqnarray} Applying $\mathcal{Q}^{\rm id}$ to $\ket {\chi^{\rm id}}$ will thus add a phase based on the eigenvalues $e^{\pm i \theta}$. Amplitude amplification consists of repeatedly applying $\mathcal{Q}^{\rm id}$ to $\ket {\chi^{\rm id}}$. The extension to amplitude \textit{estimation} is to combine with CV phase estimation to imprint both values $\pm \theta$ onto another ancilla mode. The eigenvalues can then be extracted via homodyne measurement statistics. Before proceeding to phase estimation, we first discuss how $\mathcal{Q}^{\rm id}$ can be implemented. Using the definition of $\ket {\chi^{\rm id}} = \mathcal K^{\rm id} \ket{\psi_{\rm in}}$, we can expand the operator into the following sequence of operations \begin{equation}\label{Eq:Q} \mathcal Q^{\rm id} = \mathcal K \mathcal Z \mathcal K^\dagger \mathcal V \mathcal K \mathcal Z \mathcal K^\dagger \mathcal V, \end{equation} omitting the superscript $^{\rm id}$ for clarity.
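The two-dimensional geometry described above is simple to verify numerically. A minimal sketch (using the example value $\mathcal I \approx 0.74$ of Sec.~\ref{Sec:Numerics}; all variable names are ours) confirms that the product of the two reflections has eigenvalues $e^{\pm i\theta}$ with $\cos(\theta/2)=1-\mathcal I/2\pi$:
\begin{verbatim}
# 2D check of the reflection geometry of Q in the span
# of |chi> and V|chi>.
import numpy as np

I_val = 0.74                                 # example integral
ct = 1.0 - I_val / (2.0 * np.pi)             # cos(theta/2), Eq. (eqTheta)
theta = 2.0 * np.arccos(ct)
chi = np.array([1.0, 0.0])
Vchi = np.array([ct, np.sqrt(1.0 - ct**2)])  # V|chi> in this basis
R1 = np.eye(2) - 2.0 * np.outer(chi, chi)
R2 = np.eye(2) - 2.0 * np.outer(Vchi, Vchi)
print(np.angle(np.linalg.eigvals(R1 @ R2)))  # +/- theta
print(theta)                                 # ~ 0.98
\end{verbatim}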
The operator $\mathcal Z$ is defined as the reflection of the computational zero state \begin{equation}\label{eqZ} \mathcal Z := \mathbbm I^{\otimes 3} - 2 \ket {\psi_{\rm in }} \bra{\psi_{\rm in}}. \end{equation} Implementation of the reflection operators $\mathcal Z$ and $\mathcal V$ is discussed in Appendix~\ref{SecReflec}. This requires invoking a reflection gate of the vacuum state, which is described in Appendix \ref{appendixVacuum}. The explicit gate sequence of $\mathcal Q^{\rm id}$ is given in Appendix \ref{appendixQ}. We also note that $\mathcal Q^{\rm id}$ is the idealized unitary based on ideal rotations. When non-ideal, the action of $\mathcal{Q}$ no longer remains in the two-dimensional subspace of $\ket{{\chi^{\rm id}}}$ and $\mathcal{V}\ket{{\chi^{\rm id}}}$. We discuss the effect of erroneous applications of the gates in Sec.~\ref{sectionErrors}. \subsection{Adding a control mode} In the qubit algorithm, for phase estimation we apply the controlled version $\mathcal Q_c$ of an operator $\mathcal{Q}$ that transforms as \begin{equation} \mathcal Q_c \ket j \ket \psi = \ket j \mathcal Q^j \ket \psi, \end{equation} where $\ket j$ is a label state composed of multiple qubits. Such an operation can be built up from the controlled unitary \begin{equation} \ket 0\bra 0 \otimes \mathbbm I + \ket 1\bra 1 \otimes \mathcal Q, \end{equation} where we apply $\mathcal Q$ if a single control qubit is in the state $\ket 1$ and do nothing otherwise. In the CV setting, we modify this approach by replacing a register of control qubits by a single resource mode denoted by $\phi$, which is prepared in a position eigenstate $\ket {x}_{q_\phi}$, see, e.g., Ref.~\cite{Liu2016}. We attach $\phi$ to the current three modes by performing a four-mode interaction that can be seen as a controlled version of $\mathcal{Q}^{\rm id}$. In particular, $\phi$ is attached via the operator $\hat p_\phi$, leading to the ideal phase estimation operator \begin{eqnarray} \mathcal Q^{\rm id}_c &=& e^{-i 1/\sqrt{f(\hat q_1)}\otimes \hat p_2 \otimes \hat p_3 \otimes \hat p_\phi} (\times {\rm more\ terms}). \end{eqnarray} The full expression for $\mathcal Q^{\rm id}_c$ is given in Eq.~\eqref{Eq:BigQcid}. This interaction requires $\mathcal G_c$ as the controlled version of $\mathcal G$. More details are given in Appendix \ref{appendixQ}. We note that the addition of control through $\mathcal Q^{\rm id}_c$ must be efficient for speedups in CV QMC, see Sec.~\ref{sectionErrors} for further discussion. \subsection{Phase estimation} \label{sectionPhase} The operator $\mathcal Q_c$ is used to perform phase estimation to extract the eigenvalues of $\mathcal Q$, which are related to the desired integral. We briefly show this schematically, before providing a more precise analysis. Let $\ket {\psi_+}$ be the eigenstate of $\mathcal Q^{\rm id}$ corresponding to the eigenvalue $e^{i\theta}$, with $\theta$ as given by Eq.~(\ref{eqTheta}). Then, with the phase estimation mode in the initial state $\ket 0_{q_\phi}$, \begin{equation} \mathcal Q_c^{\rm id} \ket {\psi_+} \ket 0_{q_\phi} = \ket {\psi_+} e^{-i \theta \hat p_\phi} \ket 0_{q_\phi} = \ket {\psi_+} \ket \theta_{q_\phi}, \end{equation} where the phase estimation mode is shifted in position by an amount equal to $\theta$. A position measurement of the $\ket \theta_{q_\phi}$ mode then obtains the result. Performing phase estimation with a single infinitely squeezed mode allows for $\theta$ to be measured exactly with a single measurement.
However, this is clearly unphysical, and we now perform an analysis of phase estimation using a finitely squeezed state centered around $0$, i.e., the state $\ket{G_{0,s}}$. Let again $\ket {\psi_+} $ be the eigenstate of $\mathcal Q^{\rm id}$ with eigenvalue $e^{i\theta}$. Define $\epsilon_\theta^{\rm target}>0$ to be the desired final accuracy for the $\theta$ value. Note that the final error for approximating $\mathcal{I}$ is then also $\mathcal{O}(\epsilon^{\rm target}_\theta)$~\cite{Rebentrost2018finance}. Let $M$ be an integer. We apply $\mathcal Q^{\rm id}_c$ $M$ times, leading to \begin{eqnarray}\label{eqPhaseEstimationState} \left(\mathcal Q^{\rm id}_c \right )^M \ket {\psi_+} \ket {G_{0,s}} &=& \ket {\psi_+} e^{-i M \theta \hat p_\phi} \ket {G_{0,s}}\\ &=& \frac{\ket {\psi_+}}{\sqrt{s}{\pi^{1/4}}} \int dx e^{-\frac{x^2}{2s^2}} \ket {x+ M \theta}_{q_\phi}. \nonumber \end{eqnarray} Measuring the position of the resource mode obtains a result sampled from the error-free probability distribution \begin{eqnarray} P_\theta(q) = \frac{1}{s \sqrt{\pi}} e^{-\frac{(M\theta-q)^2}{s^2}}. \end{eqnarray} See Appendix \ref{appendixPE} for the expression for $P_\theta(q)$ including the error due to the erroneous simulation of $\mathcal Q_c$, which is quantified by $\epsilon_{Q}$. Let the samples obtained from independent runs be given by $Y_j$. The success probability of a single measurement $Y_j/M$ being inside a range $\epsilon^{\rm target}_\theta$ around the expectation value $\theta$, i.e., $\vert Y_j/M - \theta \vert \leq \epsilon^{\rm target}_\theta$, is given by \begin{equation}\label{eqSuccessProb} p_{\rm success} = {\rm erf}\left( \frac{ M\epsilon^{\rm target}_\theta}{\sqrt{(M\epsilon_Q)^2 + s^2}}\right), \end{equation} see Appendix \ref{appendixPE}. Using $M \geq 1/\epsilon^{\rm target}_\theta$ \cite{Xu2018}, the vacuum squeezing $s=1/\sqrt{2}$, and no gate errors $\epsilon_Q=0$, we can lower bound the success probability to be $p_{\rm success} \geq {\rm erf}(\sqrt 2) > 0.95$. In the presence of gate errors $\epsilon_{Q}$, if we pick $\epsilon_{Q} = \epsilon^{\rm target}_\theta$ and vacuum squeezing again, we obtain a lower bound of $p_{\rm success} \geq {\rm erf}(\sqrt{2/3}) > 0.75$. A single-shot success probability greater than $1/2$ is a requirement for boosting the success probability via multiple independent runs. We repeat the measurement $L\geq 1$ times and take the median of the obtained values $Y_j/M$. Let the desired success probability for $\vert {\rm Median}(Y_j/M) - \theta \vert \leq \epsilon^{\rm target}_\theta$, or ``confidence'', be given by $c$. It can be achieved by using $L \leq \vert \log(1-c) \vert/ \vert \log(2\sqrt{p_{\rm success}(1-p_{\rm success})}) \vert$ repetitions \cite{Nagaj2009,Rebentrost2018finance}. Concretely, if we have $p_{\rm success}={\rm erf}(\sqrt{2/3})$ and would like to boost it to $c=0.995$, then $L \approx 37$ will be sufficient. Furthermore, we can leverage squeezing in the phase estimation mode as a resource. Indeed, choose $s$ smaller than the vacuum squeezing, $s < 1/\sqrt{2}$, $\epsilon_{Q} = \epsilon^{\rm target}_\theta$, but leave $M$, the number of required applications of $\mathcal Q_c$, as a variable. To achieve the same success probability ${\rm erf}(\sqrt{2/3})$, the argument of the error function has to be the same, which leads to the relation $(M \epsilon^{\rm target}_\theta)^2 = 2s^2$.
Thus we can take $M \geq \sqrt{2} s /\epsilon^{\rm target}_\theta$, which lowers the requirement on $M$ at the cost of more squeezing, $\sqrt{2}s<1$. The feasibility of implementing the corresponding squeezing factors will have to be determined for each application and for different experimental setups. \subsection{Query complexity speedup}\label{Sec:Speedup} In QMC, we conventionally use the number of applications of the relevant unitary to quantify the speedup. Here, we count applications of $\mathcal{K}$, since it is the unitary used to encode the integral in the first stage of the algorithm. In physical implementations, $\mathcal{Q}_{c}$ and its constituent elements have a runtime which must be accounted for if any real speedup is to occur. Runtimes are discussed in the following section on efficiency and errors. Classical algorithms typically require $N_{C}=\Ord{1/\epsilon_\theta^2}$ evaluations of the integrand, see Eq.~(\ref{eqCheb}). In the quantum algorithm, each application of $\mathcal{Q}_{c}$ involves a constant number of applications of $\mathcal K$, see Eq.~(\ref{Eq:Q}). The total number of applications of $\mathcal K$ is thus \begin{equation}\label{eqNq} N_{Q} = \Ord{M \times L}. \end{equation} To obtain a quantum speedup, we take $M \geq \sqrt 2 s/\epsilon_{\theta}^{\rm target}$ and the gate error $\epsilon_Q = \epsilon_\theta^{\rm target}$. We also take $L$ to be constant (e.g. $\approx 37$), as discussed. This means that \begin{equation}\label{Eq:Speedup} N_{Q} = \Ord{\frac{s}{\epsilon_\theta^{\rm target}}}. \end{equation} Thus, a quadratic speedup is obtained over the classical runtime. The achieved error is $\epsilon_\theta^{\rm target}$ and the confidence is high, say $99.5\%$. Surprisingly, further speedups may be possible by concurrently decreasing $M$ and $s$. We note, however, general lower bounds for the search and amplitude amplification problems \cite{Bennett1997}. \subsection{Gate complexity} We now summarize the gate complexity of the algorithm. The classical complexity is $\tOrd{1/\left( \epsilon_\theta^{\rm target}\right)^2}$ for the common situation when the integrand can be evaluated classically in $\Ord{{\rm poly} \log 1/\epsilon}$. Here, $\tOrd{\cdot}$ omits polylogarithmic factors. For the quantum algorithm, let a single application of $\mathcal Q$ be achieved to error $\epsilon_Q$ at a runtime cost $T_Q\left(\epsilon_Q\right)$. Using the requirement $\epsilon_Q = \epsilon_\theta^{\rm target}$ obtains a runtime $T_Q\left( \epsilon_\theta^{\rm target} \right)$. Together with the number of calls to the unitary $\mathcal Q$, the total gate complexity scales as \begin{equation} N_Q T_Q\left ( \epsilon_\theta^{\rm target} \right) = \Ord{ \frac{s T_Q\left( \epsilon_\theta^{\rm target} \right)}{ \epsilon_\theta^{\rm target}}}. \end{equation} Thus in terms of the gate complexity, the possibility of a speedup crucially depends on $T_Q( \epsilon_\theta^{\rm target} )$. For a quadratic speedup we require \begin{equation} T_Q\left( \epsilon\right) = \Ord{ {\rm poly} \log \left(1/ \epsilon\right) }, \end{equation} which then results in \begin{equation} N_Q T_Q\left ( \epsilon_\theta^{\rm target} \right) = \tOrd{\frac{s}{\epsilon_\theta^{\rm target} }}. \end{equation} We can still achieve speedups below quadratic if we assume a constant $0 < \delta<1$ and \begin{equation} T_Q( \epsilon ) = \Ord{1/ \epsilon^\delta } \ ,
\end{equation} which then results in \begin{equation} N_Q T_Q \left( \epsilon_\theta^{\rm target} \right) = \Ord{\frac{s}{\left(\epsilon_\theta^{\rm target}\right)^{1+\delta} }}. \end{equation} With these assumptions on the gate complexity of $\mathcal Q$, the overall quantum gate complexity consequently scales better than the classical complexity. The next section discusses the assumptions on the individual quantum operations required for such a runtime, and methods to achieve it. \section{Errors and gate complexity} \label{sectionErrors} Here we account for the errors that arise due to various approximations in a physical implementation of CV QMC. These errors can be split into different categories: (A) due to preparing the first mode, (B) due to finite squeezing, (C) from the polynomial approximation of $f(x)$, and (D) through implementing various gates/unitaries. We also discuss here the effect of gate runtimes on any speedups through CV QMC, using as a proxy the number of required Gaussian unitaries and cubic phase gates. The referenced appendices support this section. \subsection{Errors in preparing $p(x)$} The first mode is prepared according to $\sqrt{p(x)}$ by applying the unitary $\mathcal{G}$ to the vacuum. We suppose that $\mathcal G$ can be applied to accuracy $\epsilon_G$, and moreover that this can be achieved using $T_G$ continuous-variable Gaussian and cubic phase gates. For a quadratic speedup in CV QMC we require that \begin{equation} T_G = \Ord{{\rm poly} \log 1/\epsilon_G}. \end{equation} We further assume that controlling $\mathcal G$ adds at most a logarithmic overhead to the runtime. For sub-quadratic speedups, these requirements can be relaxed. \subsection{Squeezing errors} Generally there will be a maximum required squeezing and a maximum achievable squeezing in any given mode. Assume that there exists a squeezing factor $r_{\Delta x'}$ such that larger squeezing leads to computationally equivalent results. Equivalently, there is a scale $\Delta x'$ such that two CV position states $\ket x$ and $\ket {x+\Delta x'}$ are computationally equivalent. Moreover, there is a physically achievable squeezing factor given by $r_{\max}$. The interesting case is $r_{\max} \leq r_{\Delta x'}$, when the achievable squeezing is below the computationally required squeezing, requiring an error analysis. The present CV QMC algorithm and other continuous-variable algorithms schematically use the infinite squeezing gate \begin{equation} S(\infty), \end{equation} here for the gate $\mathcal G\otimes S(\infty) \otimes S(\infty)$ in Eq.~(\ref{eqOpK}). By assumption, \textit{without} changing the effect of the computation, we can replace \begin{equation} S(\infty) \to S(r_{\Delta x'}), \end{equation} where $r_{\Delta x'}$ is the squeezing factor corresponding to $\Delta x'$ according to Eq.~(\ref{eqSqueezingFactorsRelation}). In a physical implementation, squeezing factors of $r_{\Delta x'}$ may not be reached. Instead, we reach $r_{\max}$. The gate error due to only reaching $r_{\max}$ can be quantified via \begin{eqnarray} \label{eqSError} \epsilon_S := \left \Vert S(r_{\max}) - S(r_{\Delta x'})\right \Vert &=& \Ord{ \vert r_{\Delta x'}-r_{\max}\vert}. \end{eqnarray} There are four modes in CV QMC which require various degrees of squeezing: (1) the first mode may require squeezing for state preparation, (2) the second and third modes require squeezing for performing the controlled rotation and also to implement the projector $\mathcal P$.
Finally, (3) the fourth mode is squeezed according to $s$ for phase estimation. Case (1) is accounted for in $\epsilon_{{G}}$, case (2) is handled by Eq.~(\ref{eqSError}) and also in Appendix~\ref{App:FiniteSqueezing}, while case (3) determines the final error $\epsilon_{\theta}$ in estimating the integral. \subsection{Polynomial approximation errors} \begin{figure} \includegraphics[width=\columnwidth]{hpolynomial.pdf} \caption{Example polynomial approximation of $f(x)$. Here, $p(x)$ (dotted green line) is a Gaussian distribution and $f(x)$ is an arbitrary function (solid red line). The set $\mathcal{X}_{h}$ selects the non-trivial region of the integral, and we find a polynomial $h(x)$ such that $1/|h(x)|^{2}$ (dashed red line) approximates $f(x)$ within $\mathcal{X}_{h}$ up to some error $\epsilon_{h}$.} \label{Fig:PolyApprox} \end{figure} Recall that $0 \leq f(x) \leq 1$ and suppose that there exists a polynomial $h(x)$ such that $1/|h(x)|^{2}$ approximates $f(x)$ in the following way. Define a compact set $\mathcal X_h$ which denotes the non-trivial region of the integral. First, we require that $h(x)$ satisfies the point-wise error condition \begin{equation}\label{eqErrorPointwise} \left \vert f(x) - \frac{1}{\vert h(x) \vert^2} \right \vert \leq \epsilon_h, \end{equation} for all $x \in \mathcal X_h$, where $\epsilon_h>0$, see Fig.~\ref{Fig:PolyApprox}. Outside the set, we require that \begin{equation}\label{Eq:TrivialRegion} \int_{ \bar{\mathcal X_h}} dx \,\, p(x) \left \vert \frac{1}{\vert h(x) \vert^2} - f(x) \right \vert \leq \eta \end{equation} for a small $\eta > 0$, where $\bar{\mathcal X_h}$ is the complement of $\mathcal X_h$. The total error $ \left \vert \int dx \frac{p(x)}{\vert h(x) \vert^2} - \int dx f(x)p(x) \right \vert$ is then $\mathcal{O}(\epsilon_{h} + \eta)$, see Appendix \ref{appendixErrorPolynomial}. Thus, under these assumptions and with $\eta=\Ord{\epsilon_h}$, the polynomial approximation error is no larger than $\Ord{\epsilon_h}$. \subsection{Controlled rotation error} Regarding the gate $\mathcal H$, the pointwise error property Eq.~(\ref{eqErrorPointwise}) leads to the gate error \begin{equation} \Vert \mathcal H^{\rm id} - \mathcal H^{\rm impl} \Vert =\Ord{\epsilon_h}, \end{equation} when the polynomial argument is inside $\mathcal X_h$. Appendix~\ref{appendixCVBasics} defines the operator norm used in this work. This is shown via $\left \Vert \mathcal H^{\rm id} - \mathcal H^{\rm impl} \right \Vert = \Ord{\left \vert h(x) - \frac{1}{\sqrt{f(x)} } \right \vert }= \Ord{f(x)^{-3/2} \epsilon_h} = \Ord{ \epsilon_h}$, using that $f(x)$ is bounded away from zero on $\mathcal X_h$. In addition, the gate $\mathcal H$ can in principle be decomposed into elementary Gaussian operations and cubic phase gates, denoted by an operator $\mathcal H^{\rm dec}$. The cost shall be given by $T_H(d_{h},\epsilon_{h}^{\rm dec})$ to error $\epsilon^{\rm dec}_h := \Vert \mathcal H^{\rm impl} - \mathcal H^{\rm dec}\Vert$, where $d_{h}$ is the polynomial degree of $h(x)$. Regarding this cost, if the decomposition requires a number of gates $T_H=\Ord{1/\epsilon_{h}^{\rm dec}}$, then the possibility of a quantum speedup is lost. Such runtime costs appear for example when lowest-order Suzuki-Trotter methods are used \cite{Sefi2011}. With higher-order Suzuki-Trotter methods, one can in principle achieve a runtime of $T_H=\Ord{1/(\epsilon_{h}^{\rm dec})^\delta}$ with a constant $0<\delta <1$ \cite{Childs2017} that can be made arbitrarily close to $0$. Such methods are expected to translate to the CV context, but a proper analysis will be left for future work.
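As a toy illustration of the first-order product-formula error underlying such decompositions, the following sketch uses small random Hermitian matrices as stand-ins for the actual CV generators (all choices are our own):
\begin{verbatim}
# First-order Trotter error for exp(-i(A+B)), decaying as O(1/n).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)); A = (A + A.T) / 2
B = rng.normal(size=(8, 8)); B = (B + B.T) / 2
exact = expm(-1j * (A + B))
for n in [10, 100, 1000]:
    step = expm(-1j * A / n) @ expm(-1j * B / n)
    err = np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)
    print(f"n = {n:>4}: error ~ {err:.1e}")
\end{verbatim}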
Additionally, we note that exponentially precise Hamiltonian simulation methods exist for qubits \cite{berry2017exponential,Gilyen2018}, which may also be translated into the CV framework. In such cases, we may even achieve $T_H=\Ord{\log 1/\epsilon_{h}^{\rm dec}}$. The dependence of $T_H$ on $d_h$ is usually $\Ord{2^{d_h}}$, with a potential improvement to $\Ord{{d_h}}$ discussed in Appendix \ref{appendixDecomposition}. In summary, $\mathcal H$ can be implemented to accuracy $\epsilon_H := \Vert \mathcal H^{\rm id} - \mathcal H^{\rm dec} \Vert = \Ord{\epsilon_h + \epsilon^{\rm dec}_h}$ in runtime $T_H$. We further assume that controlling $\mathcal H$ adds at most a logarithmic overhead to the runtime. \subsection{Reflections} The reflection gates $\mathcal V$ and $\mathcal Z$, defined in Eqns.~(\ref{eqV}) and (\ref{eqZ}), respectively, are also performed only to within some error. Possible methods for implementing them are shown in Appendix \ref{SecReflec}. The gate $\mathcal Z$ is implemented with squeezing and application of the Pati-Braunstein-Lloyd (PBL) gate, see Appendix \ref{appendixZ}. The accuracy $\epsilon_Z$ depends on the parameter $\Delta x$ of the PBL gate and the squeezing error, yielding $\epsilon_Z = \Ord{\Delta x^2 + \epsilon_S}$. For the runtime, we obtain a constant number of applications of the PBL gate. This suggests that potentially $T_Z = \Ord{{\rm poly} \log 1/\epsilon_Z}$, but further research has to be devoted to the efficient implementation of the PBL gate. The gate $\mathcal V$ is implemented via squeezing, displacements, and the PBL gate, see Appendix \ref{appendixV}, to accuracy $\epsilon_V = \Ord{\Delta x^2 + \epsilon_S}$. The runtime is potentially $T_V=\Ord{{\rm poly} \log 1/\epsilon_V}$ since a constant number of PBL gates are required. \subsection{The operator $\mathcal Q$} \label{sectionErrorQ} The first stage of the algorithm is to apply $\mathcal{K}$ in Eq.~\eqref{eqOpK}, while the second stage involves multiple repetitions of $\mathcal{Q}$ using the reflections $\mathcal{V}$ and $\mathcal{Z}$ for phase estimation, along with $\mathcal{K}$ (see Eq.~\eqref{Eq:Q}). Each of these gates has an error and corresponding runtime. The operator $\mathcal K$ is in turn composed of the state preparation $\mathcal{G}$, squeezing of the ancilla modes, and the controlled rotation $\mathcal{H}$. The sequence of operations is shown in Eq.~(\ref{Eq:BigQid}). Thus, implementing $\mathcal Q$ achieves an accuracy \begin{equation} \label{eqErrorQ} \epsilon_Q = \Ord{\epsilon_G + \epsilon_S + \epsilon_H + \epsilon_Z + \epsilon_V}, \end{equation} and requires runtime \begin{eqnarray} T_Q(\epsilon_G, \epsilon_H, \epsilon_Z, \epsilon_V) &=& 4T_G(\epsilon_G) + 4T_H( \epsilon_H) \nonumber \\ &&+ 2 T_Z( \epsilon_Z) + 2 T_V( \epsilon_V). \end{eqnarray} To simplify the analysis, we allot each individual operation an error budget proportional to $\epsilon_Q$, i.e., we take the required errors to be $\epsilon_i = \Ord{\epsilon_Q}$ such that they add up to $\epsilon_Q$. For the runtime, if each of the individual $T_i( \epsilon_i) = \Ord{{\rm poly} \log 1/\epsilon_i}$, either by assumption or by the employed methods, then because of Eq.~(\ref{eqErrorQ}) we can find the overall bound $T_Q(\epsilon_Q) = \Ord{{\rm poly} \log 1/\epsilon_Q}$. Otherwise, if one of the individual $T_i( \epsilon_i) = \Ord{1/\epsilon_i^\delta}$ with $0<\delta<1$, we find the overall bound $T_Q(\epsilon_Q) = \Ord{ 1/\epsilon_Q^\delta}$.
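As a simple numerical illustration of this error-budget argument (again ours, not part of the original analysis), suppose each of four sub-operations is allotted the share $\epsilon_i = \epsilon_Q/4$. If every $T_i$ is polylogarithmic in $1/\epsilon_i$, the total cost stays polylogarithmic in $1/\epsilon_Q$, while a single power-law component dominates the total:
\begin{verbatim}
import math

def total_cost(eps_Q, costs):
    """Total runtime with an equal error share for each sub-operation."""
    eps_i = eps_Q / len(costs)
    return sum(T(eps_i) for T in costs)

polylog = lambda eps: math.log(1 / eps) ** 2   # T_i = polylog(1/eps_i)
power = lambda eps: (1 / eps) ** 0.5           # T_i = 1/eps_i^delta, delta = 1/2

for eps_Q in [1e-2, 1e-4, 1e-6]:
    print(eps_Q,
          total_cost(eps_Q, [polylog] * 4),            # ~ log^2(1/eps_Q)
          total_cost(eps_Q, [polylog] * 3 + [power]))  # ~ 1/eps_Q^(1/2)
\end{verbatim}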
We further assume that controlling $\mathcal Q$ via controlling all the gates adds at most a logarithmic overhead to the runtime. \section{Multidimensional integration} \label{secMulti} The generalization of this algorithm for $n$-dimensional integration, i.e., Eq.~\eqref{eqExpValMain}, is straightforward. We begin by preparing $n$ modes according to $p(\vec{x})$ by applying the operator $\mathcal G$ to obtain \begin{equation} \mathcal{G} \ket{\rm vac}^{\otimes n} = \int d \vec{x} \,\, \sqrt{p(\vec{x})} \ket{\vec{x}}_{q}, \end{equation} where $\ket{\vec{x}}_{q}$ is the product of position eigenstates corresponding to $\vec{x}$. Let $h(\vec x) = h(x_1,\dots,x_n)$ be a polynomial that suitably approximates $ f(\vec x)$ via $\frac{1}{\vert h(\vec x)\vert^2}$. Define the gate acting on the $n$ modes plus two more ancilla modes as \begin{equation}\label{eqGateHMult} \mathcal H := e^{-i h(\hat q_1,\dots,\hat q_n)\otimes \hat p_{n+1} \otimes \hat p_{n+2}}. \end{equation} Applying the gate to $\mathcal{G}\ket{\rm vac}^{\otimes n}\ket{0}_{q_{n+1}}\ket{0}_{q_{n+2}}$ gives \begin{equation} \int d \vec{x} \,\, \sqrt{p(\vec{x})} \ket{\vec{x}}_{q} \int d p' \,\, \ket{p'}_{p_{n+1}}\ket{h(\vec{x}) p'}_{q_{n+2}}. \end{equation} The remaining analysis is analogous to the single-variable case discussed above. Define the operator \begin{equation} \mathcal{M} := \mathbbm I^{\otimes n} \otimes \ket{0}_{q_{n+1}} \bra{0}_{q_{n+1}} \otimes \ket{x_{\rm off}}_{q_{n+2}} \bra{x_{\rm off}}_{q_{n+2}}, \end{equation} i.e., the extension to $n+2$ modes of the operator $\mathcal{M}$ in Eq.~\eqref{Eq:M} used to extract the integral. Then the expectation value of this measurement on the state above is given by \begin{equation} \langle \mathcal{M} \rangle = \frac{1}{4\pi} \int d\vec x \,\, \frac{p(\vec x)}{\vert h(\vec x) \vert^2} \propto \mathcal{I}. \end{equation} The error analysis and extension to finite squeezing are analogous to the one-dimensional case. Phase estimation proceeds similarly by adding another mode and using the phase estimation operator \begin{equation} \mathcal{Q}_{c}^{\rm id} = e^{-i h(\hat q_1,\dots,\hat q_n)\otimes \hat p_{n+1} \otimes \hat p_{n+2} \otimes \hat p_\phi} \,\, (\times \text{ more terms}), \end{equation} which is the generalization of Eq.~\eqref{Eq:BigQcid}. \section{Numerics}\label{Sec:Numerics} As discussed previously, one needs $n+3$ optical modes with appropriate squeezing in order to evaluate an $n$-dimensional integral using the CV QMC algorithm. There are two main ingredients: (i) encoding of the integrand and (ii) amplitude estimation using amplitude amplification and phase estimation. The first $n+2$ modes encode the underlying integrand while an ancilla mode is added for amplitude estimation, so that repeated measurements of its position lead to the expected value of the integral. While the encoding of the integrand and amplitude amplification can be hard to simulate, we can showcase the quadratic speedup of single-mode phase estimation, where the integral-dependent phase $\theta$ is predetermined classically, following the approach taken in Ref. \cite{Rebentrost2018finance}. In this section, we consider a simple example with $f(x) = 1/(1+x^2)^2$ subject to $p(x)$ taken to be a Gaussian probability distribution $G_{x_{0},\sigma}$ with $x_{0} = 0$ and $\sigma = 1/2$. With these choices, Eq. (\ref{eqExpValMain}) can be integrated analytically giving $\mathcal I \approx 0.74$.
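Before turning to the phase estimation itself, the classical ingredients of this example can be checked directly. The following sketch (ours, purely illustrative) notes that for this particular $f$ the polynomial $h(x) = 1 + x^2$ satisfies $f(x) = 1/\vert h(x)\vert^2$ exactly, so that $\epsilon_h = 0$; it then estimates $\mathcal I$ by classical Monte Carlo and evaluates the phase $\theta$ through the relation stated below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

f = lambda x: 1.0 / (1.0 + x**2) ** 2   # integrand, 0 <= f <= 1
x0, sigma = 0.0, 0.5                    # p(x) = G_{x0,sigma}

# For this f, the polynomial h(x) = 1 + x^2 gives f = 1/|h|^2 exactly,
# i.e., the polynomial approximation error eps_h vanishes.
h = lambda x: 1.0 + x**2
x_test = np.linspace(-3.0, 3.0, 1001)
assert np.allclose(f(x_test), 1.0 / np.abs(h(x_test)) ** 2)

# Classical MC estimate of I = E_p[f(x)].
samples = rng.normal(x0, sigma, size=1_000_000)
I = f(samples).mean()
print(I)                                  # ~0.74

# Phase fed to single-mode phase estimation (relation given below).
theta = 2 * np.arccos(1 - I / (2 * np.pi))
print(theta)                              # ~0.98
\end{verbatim}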
We take $\theta \approx 0.98$ to be the predetermined phase for the single-mode phase estimation on the ancilla mode, i.e., the solution to \begin{equation} \theta = 2 \arccos\left(1 - \frac{\mathcal{I}}{2\pi}\right). \end{equation} Single-mode phase estimation is carried out using the Strawberry Fields software suite~\cite{Killoran2018}. We also estimate $\mathcal I$ using standard classical Monte Carlo integration techniques and, at the end, compare the scaling of the errors with the number of MC steps used in the two algorithms. Let $\hat \theta$ and $\theta$ (which in the particular example considered here is 0.98) be the estimated and predetermined values of the phases, respectively. We define the corresponding estimation error as the fractional difference between $\hat \theta$ and $\theta$: \begin{equation}\label{error} {\rm Error} := |\hat\theta-\theta|/\theta. \end{equation} In the following, the subscripts $Q$ and $C$ denote the quantum and classical estimations. The fractional error defined above follows a power-law behavior with the number of MC steps: \begin{equation}\label{powerlaw} {\rm Error} = b~N^\zeta, \end{equation} where $N$ is the number of MC steps and $\zeta$ denotes the scaling exponent. For the standard classical MC approach the scaling exponent is $\zeta_{C} = -\frac{1}{2}$. \begin{figure} \includegraphics[width=\columnwidth]{QuantumSpeedup_nMC.pdf} \caption{Comparison of classical and quantum MC. Fractional error (defined in Eq. \eqref{error}) in the classical and quantum estimations is plotted against the number of MC steps ($N$) and fitted to a power-law function. The squeezing parameter is fixed to $r=10$. The quantum error scales approximately quadratically better than its classical counterpart, i.e., $\zeta_{Q}/\zeta_{C}\approx 2$. } \label{fignumerics} \end{figure} Figure \ref{fignumerics} compares the classical and quantum error scalings, plotting the behavior of the error as a function of the number of MC steps. The dotted-dashed curves show the data points and the solid lines are fits to the power-law behavior of Eq. \eqref{powerlaw}; the squeezing parameter is fixed to $r=10$. The fits give $\zeta_{Q} \approx -1.063$ for the quantum algorithm, an advantage over the classical method by a factor of $\zeta_{Q}/\zeta_{C}\approx 2$. This indicates an approximately quadratic speedup in the error scaling in terms of the number of MC steps. For the quantum case we consider $N_{Q} = ML$, where we have fixed $L=100$ and varied $M$. \section{Discussion and Conclusion} \label{sectionDiscussion} We have discussed a CV quantum algorithm for MC integration. To summarize, the main assumptions for the algorithm to work are the following. First, there are certain direct requirements on the integrand. \begin{enumerate*}[label=(\roman*)] \item The function $f(\vec x)$ is bounded as $0 \leq f(\vec x) \leq 1$ for all $\vec x$. This also means that the desired integral is $\int p(\vec x) f(\vec x) d\vec x \leq 1$. An extension to more general functions was given in the qubit context in Ref.~\cite{Montanaro2015} and will be the subject of future work. \item There exists a polynomial $h(\vec x)$ which relates to the function $f(\vec x)$ via $f(\vec x) = 1/ \vert h(\vec x)\vert^2$ with point-wise error at most $\epsilon_h$ on a compact set. Outside of the set, the integral is vanishingly small.
\end{enumerate*} Next, there are requirements on implementations of the algorithm to provide a speedup over classical methods. \begin{enumerate*}[label=(\roman*)] \setcounter{enumi}{2} \item There exists a unitary $\mathcal G$ to efficiently prepare a quantum state which encodes $\sqrt{p(\vec x)}$ in its amplitudes. \item We assume that an efficient continuous-variable gate sequence related to the polynomial function $h(\vec x)$ can be constructed. \item Lastly, a reflection around the computational initial state and the state defining the projective measurement can be efficiently implemented, i.e., the gates $\mathcal{V}$ and $\mathcal{Z}$. \end{enumerate*} Until now, only the Grover search problem had been discussed in the CV framework \cite{Pati2000}, and the generalization to amplitude estimation and MC simulations had not been provided in the literature. The algorithm presented here can potentially achieve quadratic speedups in estimating integrals on a continuous-variable quantum computer. Moreover, the CV setting naturally accommodates the task of multidimensional integration and requires a fixed number of modes; this contrasts with the qubit setting of QMC, which requires discretization and a number of qubits that increases with the desired accuracy. For the CV amplitude estimation, we have discussed the important role of the vacuum reflection and high-quality gate decompositions. We have constructed an implementation of the reflection by expressing it in terms of a gate introduced previously \cite{Pati2000}. However, simpler implementations achieving the desired relative phase of the zero-photon state may be possible, in analogy to the qubit implementations. Gate decompositions are an important ingredient of the present work. For amplitude estimation to provide speedups, such gate decompositions have to be efficient with small errors. Higher-order Suzuki-Trotter expansions can in principle achieve a $1/\epsilon^\delta$ runtime dependency in the error $\epsilon$, where $\delta$ is a constant that can be made arbitrarily small. While there are many studies on gate decompositions for qubits, the detailed study of such expansions, and even of exponentially precise methods, is still in its relative infancy in the CV setting. The applications of Monte Carlo integration are manifold, for example in mathematical finance (the pricing problem \cite{Rebentrost2018finance}) and machine learning (Markov chain sampling \cite{Szegedy2004}). Future work will include steps toward concrete implementations of the algorithm presented here on realistic photonic hardware. It will also be of value to investigate quantum generalizations and applications of the two other main areas of MC methods: optimization and sampling. \acknowledgements We acknowledge Juan Miguel Arrazola, Timjan Kalajdzievski, and Krishna Kumar Sabapathy for insightful discussions. \bibliographystyle{apsrev}
{ "attr-fineweb-edu": 1.933594, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfWY5qhDCyOQLSUyq
\section{Introduction}\label{sec:intro} Over the last five years, the data science community has devoted significant attention to stochastic optimisation in Riemannian manifolds. This was spurred by Bonnabel, who proved the convergence of the Riemannian stochastic gradient method~\cite{bonnabel}. Later on~\cite{sra}, the rate of convergence of this method was studied in detail, under various convexity assumptions on the cost function. More recently, asymptotic efficiency of the averaged Riemannian stochastic gradient method was proved in~\cite{flamm}. Previously, for the specific problem of computing Riemannian means, several results on the convergence and asymptotic normality of Riemannian stochastic optimisation methods had been obtained~\cite{arnaudon}\cite{yang}. The present work moves in a different direction, focusing on recursive estimation in Riemannian manifolds. While recursive estimation is a special case of stochastic optimisation, it has its own geometric structure, given by the Fisher information metric. Here, several original results will be introduced, which show how this geometric structure can be exploited to design Riemannian stochastic optimisation algorithms which compute fast, asymptotically efficient, recursive estimates of a statistical parameter which belongs to a Riemannian manifold. For the first time in the literature, these results extend, from the Euclidean context to the Riemannian context, the classical results of~\cite{nev}\cite{duflo}. The mathematical problem, considered in the present work, is formulated in Section \ref{sec:problem}. This involves a parameterised statistical model $P$ of probability distributions $P_{\scriptscriptstyle \theta\,}$, where the statistical parameter $\theta$ belongs to a Riemannian manifold $\Theta$. Given independent observations, with distribution $P_{\scriptscriptstyle \theta^*}$ for some $\theta^* \in \Theta$, the aim is to estimate the unknown parameter $\theta^*$. In principle, this is done by minimising a statistical divergence function $D(\theta)$, which measures the dissimilarity between $P_{\scriptscriptstyle \theta}$ and $P_{\scriptscriptstyle \theta^*\,}$. Taking advantage of the observations, there are two approaches to minimising $D(\theta)$\,: stochastic minimisation, which leads to recursive estimation, and empirical minimisation, which leads to classical techniques, such as maximum-likelihood estimation~\cite{broniatowski2}\cite{broniatowski1}. The original results, obtained in the present work, are stated in Section \ref{sec:props}. In particular, these are Propositions \ref{prop:ratel2}, \ref{prop:normality}, and \ref{prop:information}. Overall, these propositions show that recursive estimation, which requires fewer computational resources than maximum-likelihood estimation, can still achieve the same optimal performance, characterised by asymptotic efficiency~\cite{ibrahas}\cite{vaart}. To summarise these propositions, consider a sequence of recursive estimates $\theta_{\scriptscriptstyle n\,}$, computed using a Riemannian stochastic optimisation algorithm with decreasing step sizes ($n$ is the number of observations already processed by the algorithm).
Informally, under assumptions which guarantee that $\theta^*$ is an attractive local minimum of $D(\theta)$, and that the algorithm is neither too noisy, nor too unstable, in the neighborhood of $\theta^*$,\\%[0.1cm] \indent $\bullet$ Proposition \ref{prop:ratel2} states that, with an adequate choice of step sizes, the $\theta_{\scriptscriptstyle n}$ achieve a fast non-asymptotic rate of convergence to $\theta^*$. Precisely, the expectation of the squared Riemannian distance between $\theta_{\scriptscriptstyle n}$ and $\theta^*$ is $O\left(n^{\scriptscriptstyle -1}\right)$. This is called a fast rate, because it is the best achievable, for any step sizes which are proportional to $n^{\scriptscriptstyle -q}$ with $q \in (1/2,1]$~\cite{benv}\cite{duflo}. Here, this rate is obtained without any convexity assumptions, for twice differentiable $D(\theta)$. It would still hold for non-differentiable, but strongly convex, $D(\theta)$~\cite{sra}. \\%[0.1cm] \indent $\bullet$ Proposition \ref{prop:normality} states that the distribution of the $\theta_{\scriptscriptstyle n}$ becomes asymptotically normal, centred at $\theta^*$, when $n$ grows increasingly large, and also characterises the corresponding asymptotic covariance matrix. This proposition is proved using a novel linearisation technique, which also plays a central role in~\cite{flamm}. \\%[0.1cm] \indent $\bullet$ Proposition \ref{prop:information} states that, if the Riemannian manifold $\Theta$ is equipped with the Fisher information metric of the statistical model $P$, then Riemannian gradient descent with respect to this information metric, when used to minimise $D(\theta)$, computes recursive estimates $\theta_{\scriptscriptstyle n}$ which are asymptotically efficient, achieving the optimal asymptotic rate of convergence, given by the Cram\'er-Rao lower bound. This is illustrated, with a numerical application to the recursive estimation of elliptically contoured distributions, in Section \ref{sec:mggd}. \indent The end result of Proposition \ref{prop:information} is asymptotic efficiency, achieved using the Fisher information metric. In~\cite{flamm}, an alternative route to asymptotic efficiency is proposed, using the averaged Riemannian stochastic gradient method. This method does not require any prior knowledge of the Fisher information metric, but has an additional computational cost, which comes from computing on-line Riemannian averages. The proofs of Propositions \ref{prop:ratel2}, \ref{prop:normality}, and \ref{prop:information}, are detailed in Section \ref{sec:proofs}, and Appendices \ref{sec:geometric} and \ref{sec:clt}. Necessary background, about the Fisher information metric (in short, this will be called the information metric), is recalled in Appendix \ref{sec:efficiency}. Before going on, the reader should note that the summation convention of differential geometry is used throughout the following, when working in local coordinates. \vfill \pagebreak \section{Problem statement} \label{sec:problem}Let $P = (P,\Theta,X)$ be a statistical model, with parameter space $\Theta$ and sample space $X$. To each $\theta \in \Theta$, the model $P$ associates a probability distribution $P_{\scriptscriptstyle\theta}$ on $X$. Here, $\Theta$ is a $C^r$ Riemannian manifold with $r > 3$, and $X$ is any measurable space. The Riemannian metric of $\Theta$ will be denoted $\langle\cdot,\cdot\rangle$, with its Riemannian distance $d(\cdot,\cdot)$. 
In general, the metric $\langle\cdot,\cdot\rangle$ is not the information metric of the model $P$. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space, and $(x_{\scriptscriptstyle n}\,;n=1,2,\ldots)$ be i.i.d. random variables on $\Omega$, with values in $X$. While the distribution of $x_{\scriptscriptstyle n}$ is unknown, it is assumed to belong to the model $P$. That is, $\mathbb{P}\circ x^{\scriptscriptstyle -1}_{\scriptscriptstyle n} = P_{\scriptscriptstyle \theta^*}$ for some $\theta^* \in \Theta$, to be called the true parameter. Consider the following problem\,: how to obtain fast, asymptotically efficient, recursive estimates $\theta_{\scriptscriptstyle n}$ of the true parameter $\theta^*$, based on observations of the random variables $x_{\scriptscriptstyle n}$? The present work proposes to solve this problem through a detailed study of the decreasing-step-size algorithm, which computes \begin{subequations} \label{subeq:algorithm} \begin{equation} \label{eq:algorithm} \theta_{\scriptscriptstyle n+1} = \mathrm{Exp}_{\scriptscriptstyle \theta_{\scriptscriptstyle n}}\!\left(\gamma_{\scriptscriptstyle n+1}u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\right) \hspace{1cm} n = 0,1,\ldots \end{equation} starting from an initial guess $\theta_{\scriptscriptstyle 0}\,$. This algorithm has three ingredients. First, $\mathrm{Exp}$ denotes the Riemannian exponential map of the metric $\langle\cdot,\cdot\rangle$ of $\Theta$~\cite{petersen}. Second, the step sizes $\gamma_{\scriptscriptstyle n}$ are strictly positive, decreasing, and verify the usual conditions for stochastic approximation~\cite{nev}\cite{kushner} \begin{equation} \label{eq:stepsize} \sum\,\gamma_{\scriptscriptstyle n} \,=\, \infty \hspace{1cm} \sum\,\gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n} \,<\, \infty \end{equation} Third, $u(\theta,x)$ is a continuous vector field on $\Theta$ for each $x \in X$, which generalises the classical concept of score statistic~\cite{ibrahas}\cite{heyde}. It will become clear, from the results given in Section \ref{sec:props}, that the solution of the above-stated problem depends on the choice of each one of these three ingredients. A priori knowledge about the model $P$ is injected into Algorithm (\ref{eq:algorithm}) using a divergence function $D(\theta) = D(P_{\scriptscriptstyle \theta^*},P_{\scriptscriptstyle \theta})$. As defined in~\cite{amari}, this is a positive function, equal to zero if and only if $P_{\scriptscriptstyle \theta} = P_{\scriptscriptstyle \theta^*\,}$, and with positive definite Hessian at $\theta = \theta^*$. Since one expects that minimising $D(\theta)$ will lead to estimating $\theta^*$, it is natural to require that \begin{equation} \label{eq:gradient} E_{\scriptscriptstyle\theta^*\,}u(\theta,x) \,=\, - \nabla D(\theta) \end{equation} In other words, that $u(\theta,x)$ is an unbiased estimator of minus the Riemannian gradient of $D(\theta)$. With $u(\theta,x)$ given by (\ref{eq:gradient}), Algorithm (\ref{eq:algorithm}) is a Riemannian stochastic gradient descent, of the form considered in~\cite{bonnabel}\cite{sra}\cite{flamm}. However, as explained in Remark \ref{rk:gradient}, (\ref{eq:gradient}) may be replaced by the weaker condition (\ref{eq:weakgrad1}), without affecting the results in Section \ref{sec:props}. In this sense, Algorithm (\ref{eq:algorithm}) is more general than Riemannian stochastic gradient descent. 
\end{subequations} In practice, a suitable choice of $D(\theta)$ is often the Kullback-Leibler divergence~\cite{shiryayev}, \begin{subequations} \label{subeq:kl} \begin{equation} \label{eq:kl} D(\theta) \,=\, -\,E_{\scriptscriptstyle\theta^*}\log L(\theta) \hspace{1cm} L(\theta) \,=\, \frac{dP_{\scriptscriptstyle \theta}}{dP_{\scriptscriptstyle \theta^*}} \end{equation} where $P_{\scriptscriptstyle \theta}$ is absolutely continuous with respect to $P_{\scriptscriptstyle \theta^*}$ with Radon-Nikodym derivative $L(\theta)$. Indeed, if $D(\theta)$ is chosen to be the Kullback-Leibler divergence, then (\ref{eq:gradient}) is satisfied by \begin{equation} \label{eq:score} u(\theta,x) = \nabla \log L(\theta) \end{equation} which, in many practical situations, can be evaluated directly, without any knowledge of $\theta^*\,$. \end{subequations} \section{Main results} \label{sec:props}The motivation of the following Propositions \ref{prop:as} to \ref{prop:information} is to provide general conditions, which guarantee that Algorithm (\ref{eq:algorithm}) computes fast, asymptotically efficient, recursive estimates $\theta_{\scriptscriptstyle n}$ of the true parameter $\theta^*$. In the statement of these propositions, it is implicitly assumed that conditions (\ref{eq:stepsize}) and (\ref{eq:gradient}) are verified. Moreover, the following assumptions are considered. \\[0.1cm] \indent \textbf{(d1)} the divergence function $D(\theta)$ has an isolated stationary point at $\theta = \theta^*$, and Lipschitz gradient in a neighborhood of this point. \textbf{(d2)} this stationary point is moreover attractive\,: $D(\theta)$ is twice differentiable at $\theta = \theta^*$, with positive definite Hessian at this point. \textbf{(u1)} in a neighborhood of $\theta = \theta^*$, the function $V(\theta) = E_{\scriptscriptstyle \theta^*}\Vert u(\theta,x)\Vert^2$ is uniformly bounded. \textbf{(u2)} in a neighborhood of $\theta = \theta^*$, the function $R(\theta) = E_{\scriptscriptstyle \theta^*}\Vert u(\theta,x)\Vert^4$ is uniformly bounded. \\[0.1cm] For Assumption (d1), the definition of a Lipschitz vector field on a Riemannian manifold may be found in~\cite{meunier}. For Assumptions (u1) and (u2), $\Vert\cdot\Vert$ denotes the Riemannian norm.\\[0.1cm] \indent Let $\Theta^*$ be a neighborhood of $\theta^*$ which verifies (d1), (u1), and (u2). Without loss of generality, it is assumed that $\Theta^*$ is compact and convex (see the definition of convexity in~\cite{petersen}\cite{udriste}). Then, $\Theta^*$ admits a system of normal coordinates $(\theta^{\scriptscriptstyle\,\alpha}\,;\alpha = 1\,,\ldots,\,d\,)$ with origin at $\theta^*$. With respect to these coordinates, denote the components of $u(\theta^*,x)$ by $u^{\scriptscriptstyle \alpha}(\theta^*)$ and let $\Sigma^* = (\Sigma^*_{\scriptscriptstyle\alpha\beta})$,\begin{subequations} \begin{equation} \label{eq:cov} \Sigma^*_{\scriptscriptstyle\alpha\beta} \,=\, E_{\scriptscriptstyle \theta^*}\! 
\left[u^{\scriptscriptstyle\alpha}(\theta^*)\,u^{\scriptscriptstyle\beta}(\theta^*)\right] \end{equation} When (d2) is verified, denote the components of the Hessian of $D(\theta)$ at $\theta = \theta^*$ by $H = \left(H_{\scriptscriptstyle \alpha \beta}\right)$, \begin{equation} \label{eq:hess} H_{\scriptscriptstyle \alpha \beta} \,=\, \left.\frac{\partial^{\scriptscriptstyle\, 2}\!\,D}{\mathstrut\partial\theta^{\scriptscriptstyle \alpha}\partial\theta^{\scriptscriptstyle \beta}}\right|_{\scriptscriptstyle \theta^{\scriptscriptstyle \alpha} = 0} \end{equation} Then, the matrix $H = \left(H_{\scriptscriptstyle \alpha \beta}\right)$ is positive definite~\cite{absil}. Denote by $\lambda > 0$ its smallest eigenvalue. \end{subequations} Propositions \ref{prop:as} to \ref{prop:information} require the condition that the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable, which means that all the $\theta_n$ lie in $\Theta^*$, almost surely. The need for this condition is discussed in Remark \ref{rk:stable}. Note that, if $\theta_{\scriptscriptstyle n}$ lies in $\Theta^*$, then $\theta_{\scriptscriptstyle n}$ is determined by its normal coordinates $\theta^{\scriptscriptstyle\, \alpha}_{\scriptscriptstyle n\,}$. \begin{proposition}[consistency]\label{prop:as}assume (d1) and (u1) are verified, and the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable. Then, $\lim\theta_{\scriptscriptstyle n} = \theta^*$ almost surely. \end{proposition} \begin{proposition}[mean-square rate]\label{prop:ratel2}assume (d1), (d2) and (u1) are verified, the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable, and $\gamma_{\scriptscriptstyle n} = \frac{a}{n}$ where $2\lambda a > 1$. Then \begin{equation} \label{eq:ratel2} \mathbb{E}\,d^{\scriptscriptstyle\, 2}(\theta_{\scriptscriptstyle n\,},\theta^*) \,=\, O\left(n^{\scriptscriptstyle -1}\right) \end{equation} \end{proposition} \begin{proposition}[almost-sure rate]\label{prop:rateas}assume the conditions of Proposition \ref{prop:ratel2} are verified. Then, \begin{equation} \label{eq:rateas} d^{\scriptscriptstyle\, 2}(\theta_{\scriptscriptstyle n\,},\theta^*) \,=\, o(n^{\scriptscriptstyle -p}) \text{ for } p \in (0,1) \hspace{1cm} \text{almost surely} \end{equation} \end{proposition} \begin{proposition}[asymptotic normality]\label{prop:normality}assume the conditions of Proposition \ref{prop:ratel2}, as well as (u2), are verified. Then, the distribution of the re-scaled coordinates $(n^{\scriptscriptstyle 1/2}\theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n})$ converges to a centred $d$-variate normal distribution, with covariance matrix $\Sigma$ given by Lyapunov's equation \begin{equation} \label{eq:lyapunov} A\,\Sigma\,+\Sigma\,A \,=\, -a^{\scriptscriptstyle 2}\,\Sigma^* \end{equation} where $A = \left(A_{\scriptscriptstyle \alpha\beta}\right)$ with $A_{\scriptscriptstyle \alpha\beta} = \frac{1}{2}\delta_{\scriptscriptstyle \alpha\beta} - aH_{\scriptscriptstyle \alpha \beta}$ (here, $\delta$ denotes Kronecker's delta). \end{proposition} \begin{proposition}[asymptotic efficiency]\label{prop:information}assume the Riemannian metric $\langle\cdot,\cdot\rangle$ of $\Theta$ coincides with the information metric of the model $P$, and let $D(\theta)$ be the Kullback-Leibler divergence (\ref{eq:kl}). Further, assume (d1), (d2), (u1) and (u2) are verified, the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable, and $\gamma_{\scriptscriptstyle n} = \frac{a}{n}$ where $2a > 1$. 
Then,\\[0.1cm] \begin{subequations} \label{subeq:information} \noindent \textbf{\emph{(i)}} the rates of convergence (\ref{eq:ratel2}) and (\ref{eq:rateas}) hold true. \\[0.1cm] \noindent \textbf{\emph{(ii)}} if $a = 1$, the distribution of the re-scaled coordinates $(n^{\scriptscriptstyle 1/2}\theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n})$ converges to a centred $d$-variate normal distribution, with covariance matrix $\Sigma^*$.\\[0.1cm] \noindent \textbf{\emph{(iii)}} if $a = 1$, and $u(\theta,x)$ is given by (\ref{eq:score}), then $\Sigma^*$ is the identity matrix, and the recursive estimates $\theta_{\scriptscriptstyle n}$ are asymptotically efficient. \\[0.1cm] \noindent \textbf{\emph{(iv)}} the following rates of convergence also hold \begin{eqnarray} \label{eq:information1} \mathbb{E}\,D(\theta_{\scriptscriptstyle n})\,=\, O\left(n^{\scriptscriptstyle -1}\right) \hspace{4.9cm} \\[0.1cm] \label{eq:information2} D(\theta_{\scriptscriptstyle n\,}) \,=\, o(n^{\scriptscriptstyle -p}) \text{ for } p \in (0,1) \hspace{1cm} \text{almost surely} \end{eqnarray} \end{subequations} \end{proposition} The following remarks are concerned with the scope of Assumptions (d1), (d2), (u1), and (u2), and with the applicability of Propositions \ref{prop:as} to \ref{prop:information}. \begin{remark}\label{rk:metric}(d2), (u1) and (u2) do not depend on the Riemannian metric $\langle\cdot,\cdot\rangle$ of $\Theta$. Precisely, if they are verified for one Riemannian metric on $\Theta$, then they are verified for any Riemannian metric on $\Theta$. Moreover, if the function $D(\theta)$ is $C^{\scriptscriptstyle2}$, then the same is true for (d1). In this case, Propositions \ref{prop:as} to \ref{prop:information} apply for any Riemannian metric on $\Theta$, so that the choice of the metric $\langle\cdot,\cdot\rangle$ is a purely practical matter, to be decided according to applications. \end{remark} \begin{remark}\label{rk:gradient}the conclusion of Proposition \ref{prop:as} continues to hold, if (\ref{eq:gradient}) is replaced by \begin{equation} \label{eq:weakgrad1} E_{\scriptscriptstyle \theta^*}\langle u(\theta,x),\!\nabla D(\theta)\rangle < 0 \text{ for } \theta \neq \theta^* \end{equation} Then, it is even possible to preserve Propositions \ref{prop:ratel2}, \ref{prop:rateas}, and \ref{prop:normality}, provided (d2) is replaced by the assumption that the mean vector field, $X(\theta) = E_{\scriptscriptstyle \theta^*\,} u(\theta,x)$, has an attractive stationary point at $\theta = \theta^*$. This generalisation of Propositions \ref{prop:as} to \ref{prop:normality} can be achieved following essentially the same approach as laid out in Section \ref{sec:proofs}. However, in the present work, it will not be carried out in detail. \end{remark} \begin{remark}\label{rk:stable}the condition that the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable is standard in all prior work on stochastic optimisation in manifolds~\cite{bonnabel}\cite{sra}\cite{flamm}. In practice, this condition can be enforced through replacing Algorithm (\ref{eq:algorithm}) by a so-called projected or truncated algorithm. This is identical to (\ref{eq:algorithm}), except that $\theta_{\scriptscriptstyle n}$ is projected back onto the neighborhood $\Theta^*$ of $\theta^*$, whenever it falls outside of this neighborhood~\cite{nev}\cite{kushner}. 
On the other hand, if the $\theta_{\scriptscriptstyle n}$ are not required to be stable, but (d1) and (u1) are replaced by global assumptions, \\[0.1cm] \indent \textbf{(d1')} $D(\theta)$ has compact level sets and globally Lipschitz gradient. \\[0.1cm] \indent \textbf{(u1')} $V(\theta) \leq C\,(1+D(\theta))$ for some constant $C$ and for all $\theta \in \Theta$. \\[0.1cm] then, applying the same arguments as in the proof of Proposition \ref{prop:as}, it follows that the $\theta_{\scriptscriptstyle n}$ converge to the set of stationary points of $D(\theta)$, almost surely. \end{remark} \begin{remark}\label{rk:chi2} from (ii) and (iii) of Proposition \ref{prop:information}, it follows that the distribution of $n\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n\,},\theta^*)$ converges to a $\chi^{\scriptscriptstyle 2}$-distribution with $d$ degrees of freedom. This provides a practical means of confirming the asymptotic efficiency of the recursive estimates $\theta_{\scriptscriptstyle n\,}$. \end{remark} \section{Application\,: estimation of ECD} \label{sec:mggd}Here, the conclusion of Proposition \ref{prop:information} is illustrated, by applying Algorithm (\ref{eq:algorithm}) to the estimation of elliptically contoured distributions (ECD) \cite{kotz}\cite{sraell}. Precisely, in the notation of Section \ref{sec:problem}, let $\Theta = \mathcal{P}_{\scriptscriptstyle m}$ be the space of $m \times m$ positive definite matrices, and $X = \mathbb{R}^{\scriptscriptstyle m\,}$. Moreover, let each $P_{\scriptscriptstyle \theta}$ have probability density function \begin{equation} \label{eq:ecd} p(x|\theta) \,\propto\, \exp\left[ h\left(x^{\scriptscriptstyle{\dagger}}\theta^{\scriptscriptstyle-1}x\right) - \frac{1}{2}\log\det(\theta)\right] \hspace{1cm} \theta \in \mathcal{P}_{\scriptscriptstyle m}\,,x \in \mathbb{R}^{\scriptscriptstyle m} \end{equation} where $h:\mathbb{R}\rightarrow \mathbb{R}$ is fixed, has negative values, and is decreasing, and $^{\scriptscriptstyle\dagger}$ denotes the transpose. Then, $P_{\scriptscriptstyle \theta}$ is called an ECD with scatter matrix $\theta$. To begin, let $(x_{\scriptscriptstyle n}\,;n=1,2,\ldots)$ be i.i.d. random vectors in $\mathbb{R}^{\scriptscriptstyle m\,}$, with distribution $P_{\scriptscriptstyle\theta^*}$ given by (\ref{eq:ecd}), and consider the problem of estimating the true scatter matrix $\theta^*$. The standard approach to this problem is based on maximum-likelihood estimation~\cite{pascal}\cite{sraell}. An original approach, based on recursive estimation, is now introduced using Algorithm (\ref{eq:algorithm}). As in Proposition \ref{prop:information}, the parameter space $\mathcal{P}_{\scriptscriptstyle m}$ will be equipped with the information metric of the statistical model $P$ just described. In~\cite{berkane}, it is proved that this information metric is an affine-invariant metric on $\mathcal{P}_{\scriptscriptstyle m\,}$.
In other words, it is of the general form~\cite{cyrus} \begin{subequations} \label{subeq:infometric} \begin{equation} \label{eq:affinv} \langle u,u\rangle_{\scriptscriptstyle \theta} \,=\, I_{\scriptscriptstyle 1}\,\mathrm{tr}\left(\theta^{\scriptscriptstyle -1}u\right)^{\scriptscriptstyle 2}\,+\,I_{\scriptscriptstyle 2}\,\mathrm{tr}^{\scriptscriptstyle 2}\left(\theta^{\scriptscriptstyle -1}u\right) \hspace{1cm} u \in T_{\scriptscriptstyle \theta}\mathcal{P}_{\scriptscriptstyle m} \end{equation} parameterised by constants $I_{\scriptscriptstyle 1} > 0$ and $I_{\scriptscriptstyle 2} \geq 0$, where $\mathrm{tr}$ denotes the trace and $\mathrm{tr}^{\scriptscriptstyle 2}$ the squared trace. Precisely~\cite{berkane}, for the information metric of the model $P$, \begin{equation} \label{eq:infocoeff} I_{\scriptscriptstyle 1} = \frac{\varphi}{2m^{\scriptscriptstyle 2}(m+2)} \hspace{1cm} I_{\scriptscriptstyle 2} = \frac{\varphi}{m^{\scriptscriptstyle 2}} - \frac{1}{4} \end{equation} where $\varphi$ is a further constant, given by the expectation \begin{equation} \label{eq:varphi} \varphi \,=\, E_{\scriptscriptstyle e}\left[h^\prime(x^{\scriptscriptstyle{\dagger}}x)\left(x^{\scriptscriptstyle{\dagger}}x\right)\right]^2 \end{equation} with $e \in \mathcal{P}_{\scriptscriptstyle m}$ the identity matrix, and $h^\prime$ the derivative of $h$. This expression of the information metric can now be used to specify Algorithm (\ref{eq:algorithm}). \end{subequations} First, since the information metric is affine-invariant, it is enough to recall that all affine-invariant metrics on $\mathcal{P}_{\scriptscriptstyle m}$ have the same Riemannian exponential map~\cite{pennec}\cite{sraell}, \begin{subequations} \label{subeq:algoecd} \begin{equation} \label{eq:exp} \mathrm{Exp}_{\scriptscriptstyle \theta}(u) \,=\, \theta\exp\left(\theta^{\scriptscriptstyle -1}u\right) \end{equation} where $\exp$ denotes the matrix exponential. Second, as in (ii) of Proposition \ref{prop:information}, choose the sequence of step sizes \begin{equation} \label{eq:assstep} \gamma_{\scriptscriptstyle n} = \frac{1}{n} \end{equation} Third, as in (iii) of Proposition \ref{prop:information}, let $u(\theta,x)$ be the vector field on $\mathcal{P}_{\scriptscriptstyle m}$ given by (\ref{eq:score}), \begin{equation} \label{eq:scoreecd} u(\theta,x) = \nabla^{\scriptscriptstyle (inf)} \log L(\theta) = \nabla^{\scriptscriptstyle (inf)} \log p(x|\theta) \end{equation} where $\nabla^{\scriptscriptstyle (inf)}$ denotes the gradient with respect to the information metric, and $L(\theta)$ is the likelihood ratio, equal to $p(x|\theta)$ divided by $p(x|\theta^*)$. Now, replacing (\ref{subeq:algoecd}) into (\ref{eq:algorithm}) defines an original algorithm for recursive estimation of the true scatter matrix $\theta^*$. \end{subequations} To apply this algorithm in practice, one may evaluate $u(\theta,x)$ via the following steps. Denote $g(\theta,x)$ the gradient of $\log p(x|\theta)$ with respect to the affine-invariant metric of~\cite{pennec}, which corresponds to $I_{\scriptscriptstyle 1} = 1$ and $I_{\scriptscriptstyle 2} = 0$. 
By direct calculation from (\ref{eq:ecd}), this is given by \begin{subequations} \label{subeq:infograd} \begin{equation} \label{eq:classgrad} g(\theta,x) \,=\, -\frac{1}{2}\theta - h^\prime\left(x^{\scriptscriptstyle\dagger}\theta^{\scriptscriptstyle -1}x\right)x x^{\scriptscriptstyle\dagger} \end{equation} Moreover, introduce the constants $J_{\scriptscriptstyle 1}= I_{\scriptscriptstyle 1}$ and $J_{\scriptscriptstyle 2} = I_{\scriptscriptstyle 1}+ m I_{\scriptscriptstyle 2\,}$. Then, $u(\theta,x)$ can be evaluated, \begin{equation} \label{eq:uthetax} u(\theta,x) \,=\, J^{\scriptscriptstyle -1}_{\scriptscriptstyle 1} \left(g(\theta,x)\right)^{\scriptscriptstyle \perp}\,+\,J^{\scriptscriptstyle -1}_{\scriptscriptstyle 2} \left(g(\theta,x)\right)^{\scriptscriptstyle \parallel} \end{equation} from the orthogonal decomposition of $g = g(\theta,x)$, \begin{equation} \label{eq:orthogonal} g^{\scriptscriptstyle \parallel}\,=\, \mathrm{tr}\left(\theta^{\scriptscriptstyle-1}g\right)\frac{\theta}{m} \hspace{1cm} g^{\scriptscriptstyle \perp}\,=\, g - g^{\scriptscriptstyle \parallel} \end{equation} \end{subequations} \indent Figures \ref{fig1} and \ref{fig2} below display numerical results from an application to Kotz-type distributions, which correspond to $h(t) \!=\! -\frac{t^{s}}{\mathstrut 2}$ in (\ref{eq:ecd}) and $\varphi = s^{\scriptscriptstyle 2}\frac{m}{2s}\left(\frac{m}{2s}+1\right)$ in (\ref{eq:varphi})~\cite{kotz}\cite{berkane}. These figures were generated from $10^3$ Monte Carlo runs of the algorithm defined by (\ref{eq:algorithm}) and (\ref{subeq:algoecd}), with random initialisation, for the specific values $s = 4$ and $m = 7$. Essentially the same numerical results could be observed for any $s \leq 9$ and $m \leq 50$. Figure \ref{fig1} confirms the fast non-asymptotic rate of convergence (\ref{eq:ratel2}), stated in (i) of Proposition \ref{prop:information}. On a log-log scale, it shows the empirical mean $\mathbb{E}_{\scriptscriptstyle{\mathrm{MC}}}\,d^{\scriptscriptstyle\, 2}(\theta_{\scriptscriptstyle n},\theta^*)$ over Monte Carlo runs, as a function of $n$. This decreases with a constant negative slope equal to $-1$, starting roughly at $\log n = 4$. Here, the Riemannian distance $d(\theta_{\scriptscriptstyle n},\theta^*)$ induced by the information metric (\ref{subeq:infometric}) is given by~\cite{cyrus} \begin{equation} \label{eq:infodistance} d^{\scriptscriptstyle\,2}(\theta,\theta^*) \,=\, I_{\scriptscriptstyle 1}\,\mathrm{tr}\left[\log\left(\theta^{\scriptscriptstyle -1}\theta^*\right)\right]^{\scriptscriptstyle 2}\,+\,I_{\scriptscriptstyle 2}\,\mathrm{tr}^{\scriptscriptstyle 2}\left[\log\left(\theta^{\scriptscriptstyle -1}\theta^*\right)\right] \hspace{1cm} \theta\,,\theta^* \in \Theta \end{equation} where $\log$ denotes the symmetric matrix logarithm~\cite{higham}. Figure \ref{fig2} confirms the asymptotic efficiency of the recursive estimates $\theta_{\scriptscriptstyle n\,}$, stated in (iii) of Proposition \ref{prop:information}, using Remark \ref{rk:chi2}. It shows a kernel density estimate of $n\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n\,},\theta^*)$ where $n = 10^5$ (solid blue curve). This agrees with a $\chi^{\scriptscriptstyle 2}$-distribution with $28$ degrees of freedom (dotted red curve), where $d = 28$ is indeed the dimension of the parameter space $\mathcal{P}_{\scriptscriptstyle m}$ for $m = 7$. 
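For completeness, the following sketch (ours, purely illustrative, with ad-hoc safeguards) implements the recursive update defined by (\ref{eq:algorithm}) and (\ref{subeq:algoecd})-(\ref{subeq:infograd}) for this Kotz-type model. Samples are drawn using the standard stochastic representation $x = t^{1/2s}\,\theta^{1/2}u$, where $t \sim \mathrm{Gamma}(m/2s,2)$ and $u$ is uniform on the unit sphere; the step-size offset and the step cap are crude stand-ins for the truncation of Remark \ref{rk:stable}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, sqrtm

def kotz_constants(m, s):
    """Constants J1, J2 built from I1, I2 for h(t) = -t^s/2."""
    phi = s**2 * (m / (2 * s)) * (m / (2 * s) + 1)
    I1 = phi / (2 * m**2 * (m + 2))
    I2 = phi / m**2 - 0.25
    return I1, I1 + m * I2                      # J1, J2

def sample_kotz(theta, s, n, rng):
    """x = t^(1/2s) theta^(1/2) u, t ~ Gamma(m/2s, 2), u on the sphere."""
    m = theta.shape[0]
    L = np.real(sqrtm(theta))
    u = rng.standard_normal((n, m))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    r = rng.gamma(m / (2 * s), 2.0, size=n) ** (1.0 / (2 * s))
    return r[:, None] * (u @ L.T)

def recursive_estimate(X, s, step_cap=0.5):
    m = X.shape[1]
    J1, J2 = kotz_constants(m, s)
    theta = np.eye(m)                           # initial guess
    for n, x in enumerate(X, start=1):
        inv = np.linalg.inv(theta)
        q = x @ inv @ x                         # x^T theta^{-1} x
        # g = -theta/2 - h'(q) x x^T, with h'(t) = -(s/2) t^(s-1)
        g = -0.5 * theta + 0.5 * s * q ** (s - 1) * np.outer(x, x)
        g_par = np.trace(inv @ g) * theta / m   # component parallel to theta
        u = (g - g_par) / J1 + g_par / J2       # information-metric score
        step = inv @ (u / (n + 20))             # gamma_n ~ 1/n, i.e., a = 1
        norm = np.linalg.norm(step)             # cap the step size: crude
        if norm > step_cap:                     # stand-in for the projection
            step *= step_cap / norm             # discussed in the text
        theta = theta @ expm(step)              # Exp_theta(gamma_n u)
        theta = (theta + theta.T) / 2           # remove numerical asymmetry
    return theta

rng = np.random.default_rng(0)
m, s = 7, 4
theta_star = np.eye(m) + 0.3 * np.ones((m, m))  # a true scatter matrix
X = sample_kotz(theta_star, s, 100_000, rng)
theta_hat = recursive_estimate(X, s)
print(np.linalg.norm(theta_hat - theta_star))   # shrinks as data accumulate
\end{verbatim}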
\begin{figure}[!b] \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=6cm]{pente.png} \caption{{\small fast non-asymptotic rate of convergence}} \label{fig1} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=6cm]{Chi-2.png} \caption{{\small asymptotic efficiency (optimal rate of convergence)}} \label{fig2} \end{minipage} \end{figure} \vfill \pagebreak \section{Proofs of main results} \label{sec:proofs} \subsection{Proof of Proposition \ref{prop:as}} \label{subsec:proofas} the proof is a generalisation of the original proof in~\cite{bonnabel}, itself modeled on the proof for the Euclidean case in~\cite{bottou}. Throughout the following, let $\mathcal{X}_{\scriptscriptstyle n}$ be the $\sigma$-field generated by $x_{\scriptscriptstyle 1}\,,\ldots,\,x_{\scriptscriptstyle n}$\!~\cite{shiryayev}. Recall that $(x_{\scriptscriptstyle n}\,;n=1,2,\ldots)$ are i.i.d. with distribution $P_{\scriptscriptstyle \theta^*\,}$. Therefore, by (\ref{eq:algorithm}), $\theta_{\scriptscriptstyle n}$ is $\mathcal{X}_{\scriptscriptstyle n}$-measurable and $x_{\scriptscriptstyle n+1}$ is independent from $\mathcal{X}_{\scriptscriptstyle n\,}$. Thus, using elementary properties of conditional expectation~\cite{shiryayev}, \begin{subequations} \label{eq:moments} \begin{eqnarray} \label{eq:moments1} \mathbb{E}\left[u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\middle|\mathcal{X}_{\scriptscriptstyle n}\right] = -\nabla D(\theta_{\scriptscriptstyle n}) \\[0.1cm] \label{eq:moments2} \mathbb{E}\left[\Vert u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\Vert^2 \middle|\mathcal{X}_{\scriptscriptstyle n}\right] = V(\theta_{\scriptscriptstyle n}) \end{eqnarray} \end{subequations} where (\ref{eq:moments1}) follows from (\ref{eq:gradient}), and (\ref{eq:moments2}) from (u1). Let $L$ be a Lipschitz constant for $\nabla D(\theta)$, and $C$ be an upper bound on $V(\theta)$, for $\theta \in \Theta^*$. The following inequality is now proved, for any positive integer $n$, \begin{equation} \label{eq:rsinequality} \mathbb{E}\left[D(\theta_{\scriptscriptstyle n+1}) - D(\theta_{\scriptscriptstyle n})\middle|\mathcal{X}_{\scriptscriptstyle n}\right] \,\leq \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,LC - \gamma_{\scriptscriptstyle n+1}\Vert \nabla D(\theta_{\scriptscriptstyle n})\Vert^2 \end{equation} once this is done, Proposition \ref{prop:as} is obtained by applying the Robbins-Siegmund theorem~\cite{duflo}.
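For the reader's convenience, recall the special case of the Robbins-Siegmund theorem needed here (see~\cite{duflo})\,: if $(V_{\scriptscriptstyle n})$, $(b_{\scriptscriptstyle n})$ and $(c_{\scriptscriptstyle n})$ are positive processes adapted to $(\mathcal{X}_{\scriptscriptstyle n})$, such that
$$
\mathbb{E}\left[V_{\scriptscriptstyle n+1}\middle|\mathcal{X}_{\scriptscriptstyle n}\right] \,\leq\, V_{\scriptscriptstyle n} + b_{\scriptscriptstyle n} - c_{\scriptscriptstyle n} \quad \text{and} \quad \sum\,b_{\scriptscriptstyle n} < \infty \,\text{ almost surely}
$$
then, almost surely, $V_{\scriptscriptstyle n}$ converges to a finite limit and $\sum c_{\scriptscriptstyle n} < \infty$. In the present proof, it is applied with $V_{\scriptscriptstyle n} = D(\theta_{\scriptscriptstyle n})$, $b_{\scriptscriptstyle n} = \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}LC$ and $c_{\scriptscriptstyle n} = \gamma_{\scriptscriptstyle n+1}\Vert\nabla D(\theta_{\scriptscriptstyle n})\Vert^2$, the summability of $b_{\scriptscriptstyle n}$ following from the second condition in (\ref{eq:stepsize}).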
\\[0.1cm] \textit{Proof of (\ref{eq:rsinequality})}\,: let $c(t)$ be the geodesic connecting $\theta_{\scriptscriptstyle n}$ to $\theta_{\scriptscriptstyle n+1}$ with equation \begin{subequations} \begin{equation} \label{eq:rsproof1} c(t) = \mathrm{Exp}_{\scriptscriptstyle \theta_{\scriptscriptstyle n}}\!\left(t\gamma_{\scriptscriptstyle n+1}u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\right) \end{equation} From the fundamental theorem of calculus, \begin{equation} \label{eq:rsproof2} D(\theta_{\scriptscriptstyle n+1}) - D(\theta_{\scriptscriptstyle n})\,=\, \gamma_{\scriptscriptstyle n+1}\,\langle u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1}),\nabla D(\theta_{\scriptscriptstyle n})\rangle \,+\, \gamma_{\scriptscriptstyle n+1}\,\int^1_0\left[\langle \dot{c},\nabla D\rangle_{\scriptscriptstyle c(t)} - \langle \dot{c},\nabla D\rangle_{\scriptscriptstyle c(0)}\right]\,dt \end{equation} Since the recursive estimates $\theta_{\scriptscriptstyle n}$ are stable, $\theta_{\scriptscriptstyle n}$ and $\theta_{\scriptscriptstyle n+1}$ both lie in $\Theta^*$. Since $\Theta^*$ is convex, the whole geodesic $c(t)$ lies in $\Theta^*$. Then, since $\nabla D(\theta)$ is Lipschitz on $\Theta^*$, it follows from (\ref{eq:rsproof2}), \begin{equation} \label{eq:rsproof3} D(\theta_{\scriptscriptstyle n+1}) - D(\theta_{\scriptscriptstyle n})\,\leq \gamma_{\scriptscriptstyle n+1}\,\langle u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1}),\nabla D(\theta_{\scriptscriptstyle n})\rangle \,+\, \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,L\Vert u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\Vert^2 \end{equation} Taking conditional expectations in this inequality, and using (\ref{eq:moments1}) and (\ref{eq:moments2}), \begin{equation} \label{eq:rsproof4} \mathbb{E}\left[D(\theta_{\scriptscriptstyle n+1}) - D(\theta_{\scriptscriptstyle n})\middle|\mathcal{X}_{\scriptscriptstyle n}\right] \,\leq \,- \gamma_{\scriptscriptstyle n+1}\Vert \nabla D(\theta_{\scriptscriptstyle n})\Vert^2 \,+\, \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,LV(\theta_{\scriptscriptstyle n}) \end{equation} \end{subequations} so (\ref{eq:rsinequality}) follows since (u1) guarantees $V(\theta_{\scriptscriptstyle n}) \leq C$. \hfill$\blacksquare$ \\[0.1cm] \textit{Conclusion}\,: by the Robbins-Siegmund theorem, inequality (\ref{eq:rsinequality}) implies that, almost surely, \begin{subequations} \begin{equation} \label{eq:proofas1} \lim D(\theta_{\scriptscriptstyle n}) =D_{\scriptscriptstyle \infty} \,<\infty \quad \text{and} \quad \sum^{\scriptscriptstyle\infty}_{\scriptscriptstyle n=1}\,\gamma_{\scriptscriptstyle n+1}\,\Vert \nabla D(\theta_{\scriptscriptstyle n})\Vert^2 \, < \infty \end{equation} In particular, from the first condition in (\ref{eq:stepsize}), convergence of the sum in (\ref{eq:proofas1}) implies \begin{equation} \label{eq:proofas2} \lim\,\Vert\nabla D(\theta_{\scriptscriptstyle n})\Vert \,= 0 \hspace{1cm} \text{almost surely} \end{equation} \end{subequations} Now, since the sequence of recursive estimates $\theta_{\scriptscriptstyle n}$ lies in the compact set $\Theta^*$, it has at least one point of accumulation in this set, say $\theta_{*\,}$.
If $\theta_{\scriptscriptstyle n(k)}$ is a subsequence of $\theta_{\scriptscriptstyle n\,}$, converging to $\theta_{*\,}$, $$ \Vert \nabla D(\theta_*)\Vert \,=\, \lim\,\Vert\nabla D(\theta_{\scriptscriptstyle n(k)})\Vert\,=\, \lim\,\Vert\nabla D(\theta_{\scriptscriptstyle n})\Vert \,= 0 \hspace{1cm} \text{almost surely} $$ where the third equality follows from (\ref{eq:proofas2}). This means that $\theta_*$ is a stationary point of $D(\theta)$ in $\Theta^*$. Thus, (d1) implies $\theta_* = \theta^*$ is the unique point of accumulation of $\theta_{\scriptscriptstyle n\,}$. In other words, $\lim \theta_{\scriptscriptstyle n} = \theta^*$ almost surely. \hfill $\blacksquare$ \subsection{Proof of Proposition \ref{prop:ratel2}} the proof is modeled on the proofs for the Euclidean case, given in~\cite{nev}\cite{benv}. It relies on the following geometric Lemmas \ref{lemma:grad} and \ref{lemma:trigo}. Lemma \ref{lemma:grad} will be proved in Appendix \ref{sec:geometric}. On the other hand, Lemma \ref{lemma:trigo} is the same as the trigonometric distance bound of~\cite{sra}. For Lemma \ref{lemma:grad}, recall that $\lambda > 0$ denotes the smallest eigenvalue of the matrix $H$ defined in (\ref{eq:hess}). \begin{subequations} \begin{lemma} \label{lemma:grad} for any $\mu < \lambda$, there exists a neighborhood $\bar{\Theta}^*$ of $\theta^*$, contained in $\Theta^*$, with \begin{equation} \label{eq:lemmgrad} \langle\mathrm{Exp}^{\scriptscriptstyle -1}_{\scriptscriptstyle \theta}(\theta^*),\nabla D(\theta)\rangle \,\leq\, -\mu\,d^{\scriptscriptstyle\,2}(\theta,\theta^*) \hspace{1cm}\text{for } \theta \in \bar{\Theta}^* \end{equation} \end{lemma} \begin{lemma} \label{lemma:trigo} let $-\kappa^{\scriptscriptstyle 2}$ be a lower bound on the sectional curvature of $\Theta$ in $\Theta^*$, and $C_{\scriptscriptstyle\kappa} = R\kappa\coth(R\kappa)$ where $R$ is the diameter of $\Theta^*$. For $\tau,\theta \in \Theta^*$, where $\tau = \mathrm{Exp}_{\scriptscriptstyle \theta}(u)$, \begin{equation} \label{eq:trigo} d^{\scriptscriptstyle\,2}(\tau,\theta^*) \,\leq\, d^{\scriptscriptstyle\,2}(\theta,\theta^*) - 2\,\langle \mathrm{Exp}^{\scriptscriptstyle -1}_{\scriptscriptstyle \theta}(\theta^*),u\rangle + C_{\scriptscriptstyle\kappa}\Vert u\Vert^2 \end{equation} \end{lemma} \end{subequations} \noindent \textit{Proof of (\ref{eq:ratel2})}\,: let $\gamma_{\scriptscriptstyle n} = \frac{a}{n}$ with $2\lambda a > 2\mu a > 1$ for some $\mu < \lambda$, and let $\bar{\Theta}^*$ be the neighborhood corresponding to $\mu$ in Lemma \ref{lemma:grad}. By Proposition \ref{prop:as}, the $\theta_{\scriptscriptstyle n}$ converge to $\theta^*$ almost surely. Without loss of generality, it can be assumed that all the $\theta_n$ lie in $\bar{\Theta}^*$, almost surely. 
Then, (\ref{eq:algorithm}) and Lemma \ref{lemma:trigo} imply, for any positive integer $n$, \begin{subequations} \begin{equation} \label{eq:trigalgo1} d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n+1},\theta^*) \,\leq\, d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) - 2\gamma_{\scriptscriptstyle n+1}\,\langle \mathrm{Exp}^{\scriptscriptstyle -1}_{\scriptscriptstyle \theta_{\scriptscriptstyle n}}(\theta^*),u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\rangle + \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,C_{\scriptscriptstyle\kappa}\Vert u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})\Vert^2 \end{equation} Indeed, this follows by replacing $\tau = \theta_{\scriptscriptstyle n+1}$ and $\theta = \theta_{\scriptscriptstyle n}$ in (\ref{eq:trigo}). Taking conditional expectations in (\ref{eq:trigalgo1}), and using (\ref{eq:moments1}) and (\ref{eq:moments2}), $$ \mathbb{E}\left[d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n+1},\theta^*)\middle|\mathcal{X}_{\scriptscriptstyle n}\right]\,\leq\, d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) + 2\gamma_{\scriptscriptstyle n+1}\,\langle \mathrm{Exp}^{\scriptscriptstyle -1}_{\scriptscriptstyle \theta_{\scriptscriptstyle n}}(\theta^*),\nabla D(\theta_{\scriptscriptstyle n})\rangle + \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,C_{\scriptscriptstyle\kappa}V(\theta_{\scriptscriptstyle n}) $$ Then, by (u1) and (\ref{eq:lemmgrad}) of Lemma \ref{lemma:grad}, \begin{equation} \label{eq:trigalgo3} \mathbb{E}\left[d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n+1},\theta^*)\middle|\mathcal{X}_{\scriptscriptstyle n}\right]\,\leq\, d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)(1-2\gamma_{\scriptscriptstyle n+1}\mu) + \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,C_{\scriptscriptstyle\kappa}C \end{equation} where $C$ is an upper bound on $V(\theta)$, for $\theta \in \Theta^*$. By further taking expectations \begin{equation} \label{eq:trigalgo4} \mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n+1},\theta^*)\,\leq\, \mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)(1-2\gamma_{\scriptscriptstyle n+1}\mu) + \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,C_{\scriptscriptstyle\kappa}C \end{equation} \end{subequations} Using (\ref{eq:trigalgo4}), the proof reduces to an elementary reasoning by recurrence. 
Indeed, replacing $\gamma_{\scriptscriptstyle n} = \frac{a}{n}$ into (\ref{eq:trigalgo4}), it follows that \begin{subequations} \begin{equation} \label{eq:recurrence1} \mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n+1},\theta^*)\,\leq\, \mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)\left(1-\frac{2\mu a}{ n+1}\right) + \frac{a^{\scriptscriptstyle 2}C_{\scriptscriptstyle\kappa}C}{(n+1)^{\scriptscriptstyle 2}} \end{equation} On the other hand, if $b(n) = \frac{b}{n}$ where $b > a^{\scriptscriptstyle 2}C_{\scriptscriptstyle\kappa}C\,(2\mu a -1)^{\scriptscriptstyle -1}$, then \begin{equation} \label{eq:recurrence2} b(n+1) \geq b(n) \left(1-\frac{2\mu a}{ n+1}\right) + \frac{a^{\scriptscriptstyle 2}C_{\scriptscriptstyle\kappa}C}{(n+1)^{\scriptscriptstyle 2}} \\[0.1cm] \end{equation} \end{subequations} Let $b$ be sufficiently large, so (\ref{eq:recurrence2}) is verified and $\mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n_{\scriptscriptstyle o}},\theta^*) \leq b(n_{\scriptscriptstyle o})$ for some $n_{\scriptscriptstyle o\,}$. Then, by recurrence, using (\ref{eq:recurrence1}) and (\ref{eq:recurrence2}), one also has that $\mathbb{E}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n\,},\theta^*) \leq b(n)$ for all $n \geq n_{\scriptscriptstyle o\,}$. In other words, (\ref{eq:ratel2}) holds true. \hfill $\blacksquare$ \vfill \pagebreak \subsection{Proof of Proposition \ref{prop:rateas}} the proof is modeled on the proof for the Euclidean case in~\cite{nev}. To begin, let $W_{\scriptscriptstyle n}$ be the stochastic process given by \begin{subequations} \begin{equation} \label{eq:proofrateas1} W_{\scriptscriptstyle n} \,=\, n^{\scriptscriptstyle p}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) + n^{\scriptscriptstyle -q} \hspace{1cm} \text{ where } q \in (0,1-p) \end{equation} The idea is to show that this process is a positive supermartingale, for sufficiently large $n$. By the supermartingale convergence theorem~\cite{shiryayev}, it then follows that $W_{\scriptscriptstyle n}$ converges to a finite limit, almost surely. In particular, this implies \begin{equation} \label{eq:proofrateas2} \lim n^{\scriptscriptstyle p}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \,=\, \ell_{\scriptscriptstyle p} < \infty \hspace{1cm} \text{almost surely} \end{equation} Then, $\ell_{\scriptscriptstyle p}$ must be equal to zero, since $p$ is arbitrary in the interval $(0,1)$. Precisely, for any $\varepsilon \in (0,1-p)$, $$ \ell_{\scriptscriptstyle p} \,=\, \lim n^{\scriptscriptstyle p}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \,=\, \lim n^{\scriptscriptstyle -\varepsilon }n^{\scriptscriptstyle p+\varepsilon}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \,=\,\left(\lim n^{\scriptscriptstyle -\varepsilon }\right)\,\ell_{\scriptscriptstyle p+\varepsilon} \,=\, 0 $$ \end{subequations} It remains to show that $W_{\scriptscriptstyle n}$ is a supermartingale, for sufficiently large $n$. 
To do so, note that by (\ref{eq:trigalgo3}) from the proof of Proposition \ref{prop:ratel2}, $$ \mathbb{E}\left[W_{\scriptscriptstyle n+1}-W_{\scriptscriptstyle n}\middle|\mathcal{X}_{\scriptscriptstyle n}\right] \leq\, d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)\,\frac{p-2\mu a}{(n+1)^{\scriptscriptstyle1-p}} \,+\, \frac{a^{\scriptscriptstyle 2}C_{\scriptscriptstyle\kappa}C}{(n+1)^{\scriptscriptstyle 2-p}} \,-\, \frac{q}{(n+1)^{\scriptscriptstyle q+1}} $$ Here, the first term on the right-hand side is negative, since $2\mu a > 1 > p$. Moreover, the third term dominates the second one for sufficiently large $n$, since $ q < 1- p$. Thus, for sufficiently large $n$, the right-hand side is negative, and $W_{\scriptscriptstyle n}$ is a supermartingale.\hfill $\blacksquare$ \subsection{Proof of Proposition \ref{prop:normality}} the proof relies on the following geometric Lemmas \ref{lemma:linearalgo} and \ref{lemma:linearfield}, which are used to linearise Algorithm (\ref{eq:algorithm}), in terms of the normal coordinates $\theta^{\scriptscriptstyle\,\alpha}$. This idea of linearisation in terms of local coordinates also plays a central role in~\cite{flamm}. \begin{subequations} \begin{lemma} \label{lemma:linearalgo} let $\theta_{\scriptscriptstyle n\,},\theta_{\scriptscriptstyle n+1}$ be given by (\ref{eq:algorithm}) with $\gamma_{\scriptscriptstyle n} = \frac{a}{n\,}$. Then, in a system of normal coordinates with origin at $\theta^*$, \begin{equation} \label{eq:linearalgo} \theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n+1}\,=\, \theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n}+\gamma^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle n+1}\,u^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n+1} + \gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,\pi^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n+1} \hspace{1cm} \mathbb{E}\left|\pi^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n+1}\right| = O(n^{\scriptscriptstyle -1/2}) \end{equation} where $u^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n+1}$ are the components of $u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1})$. \end{lemma} \begin{lemma} \label{lemma:linearfield} let $v_{\scriptscriptstyle n} = \nabla D(\theta_{\scriptscriptstyle n})\,$. Then, in a system of normal coordinates with origin at $\theta^*$, \begin{equation} \label{eq:linearfield} v^{\scriptscriptstyle\, \alpha}_{\scriptscriptstyle n}\,=\, H^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle\alpha\beta}\,\theta^{\scriptscriptstyle\,\beta}_{\scriptscriptstyle n}\,+\, \rho^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n} \hspace{1cm} \rho^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n} = o\left( d(\theta_{\scriptscriptstyle n},\theta^*)\right) \end{equation} where $v^{\scriptscriptstyle\, \alpha}_{\scriptscriptstyle n}$ are the components of $v_{\scriptscriptstyle n}$ and the $H_{\scriptscriptstyle\alpha\beta}$ were defined in (\ref{eq:hess}). \end{lemma} \end{subequations} \noindent \textit{Linearisation of (\ref{eq:algorithm})}\,: let $u(\theta_{\scriptscriptstyle n},x_{\scriptscriptstyle n+1}) = -v_{\scriptscriptstyle n}+w_{\scriptscriptstyle n+1\,}$. 
Then, it follows from (\ref{eq:linearalgo}) and (\ref{eq:linearfield}), \begin{subequations} \begin{equation} \label{eq:linearisation1} \theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n+1} \,=\, \theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n} \,-\, \gamma^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle n+1}\,H^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle\alpha\beta}\,\theta^{\scriptscriptstyle\,\beta}_{\scriptscriptstyle n}\,-\,\gamma^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle n+1}\,\rho^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n}\,+\,\gamma^{\phantom{\scriptscriptstyle 2}}_{\scriptscriptstyle n+1}\,w^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n+1}\,+\,\gamma^{\scriptscriptstyle 2}_{\scriptscriptstyle n+1}\,\pi^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n+1} \end{equation} Denote the re-scaled coordinates $n^{\scriptscriptstyle 1/2}\theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n}$ by $\eta^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n\,}$, and recall $\gamma_{\scriptscriptstyle n} = \frac{a}{n\,}$. Then, using the estimate $(n+1)^{\scriptscriptstyle 1/2} = n^{\scriptscriptstyle 1/2}(1+(2n)^{\scriptscriptstyle -1}+O(n^{\scriptscriptstyle -2}))$, it follows from (\ref{eq:linearisation1}) that \begin{equation} \label{eq:linearisation2} \eta^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1}\,=\, \eta^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n} + \frac{A_{\scriptscriptstyle\alpha\beta}}{n+1}\,\eta^{\scriptscriptstyle\beta}_{\scriptscriptstyle n}\,+\, \frac{a}{(n+1)^{\scriptscriptstyle 1/2}}\,\left[ B^{\phantom{\scriptscriptstyle2}}_{\scriptscriptstyle\alpha\beta}\,\theta^{\scriptscriptstyle\,\beta}_{\scriptscriptstyle n} \,- \rho^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n}\,+\,w^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n+1}\,+\,\frac{a\pi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1}}{n+1}\,\right] \\[0.1cm] \end{equation} where $A_{\scriptscriptstyle \alpha\beta} = \frac{1}{2}\delta_{\alpha\beta}-aH_{\scriptscriptstyle\alpha\beta}$ and $B_{\scriptscriptstyle\alpha\beta} = O(n^{\scriptscriptstyle-1})$. Equation (\ref{eq:linearisation2}) is a first-order, inhomogeneous, linear difference equation for the ``vector'' $\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}$ of components $\eta^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n\,}$. \hfill$\blacksquare$ \end{subequations} \vfill \pagebreak \noindent \textit{Study of equation (\ref{eq:linearisation2})}\,: switching to vector-matrix notation, equation (\ref{eq:linearisation2}) is of the general form \begin{subequations} \begin{equation} \label{eq:linearisation3} \eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n+1} \,=\, \left(I \,+\,\frac{A}{n+1}\right)\,\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n} \,+\, \frac{a\,\xi^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n+1}}{(n+1)^{\scriptscriptstyle 1/2}} \end{equation} where $I$ denotes the identity matrix, $A$ has matrix elements $A_{\scriptscriptstyle \alpha\beta\,}$, and $\left(\xi_{\scriptscriptstyle n}\right)$ is a sequence of inputs.
The general solution of this equation is~\cite{nev}\cite{kailath} \begin{equation} \label{eq:transition1} \eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}\,=\, A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,m}\,\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle m}\,+\, \sum^{\scriptscriptstyle n}_{\scriptscriptstyle k = m+1}\,A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k}\,\frac{a\,\xi^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle k}}{k^{\scriptscriptstyle 1/2}} \hspace{1cm} \text{for }\,\, n \geq m \end{equation} where the transition matrix $A_{\scriptscriptstyle n,k}$ is given by \begin{equation} \label{eq:transitions2} A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k} \,=\,\prod^{\scriptscriptstyle n}_{\scriptscriptstyle j = k+1}\,\left(I+\frac{A}{j}\right) \hspace{1cm} A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,n} = I \end{equation} Since $2\lambda a > 1$, where $\lambda$ is the smallest eigenvalue of $H$, the matrix $A$ is stable: each of its eigenvalues is at most $\frac{1}{2}-\lambda a < 0$. This can be used to show that~\cite{nev}\cite{kailath} \begin{equation} \label{eq:nev} q > \frac{1}{2} \,\text{ and }\,\mathbb{E}\left|\xi^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}\right| = O(n^{\scriptscriptstyle -q}) \,\,\Longrightarrow\,\, \lim\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n} \,=\, 0\,\text{ in probability} \end{equation} where $|\xi_{\scriptscriptstyle n}|$ denotes the Euclidean vector norm. Then, it follows from (\ref{eq:nev}) that $\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}$ converges to zero in probability, in each of the three cases \end{subequations} $$ \xi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1} \,=\, B^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle \alpha\beta}\,\theta^{\scriptscriptstyle\,\beta}_{\scriptscriptstyle n}\hspace{0.25cm};\hspace{0.25cm} \xi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1} \,=\, \rho^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n}\hspace{0.25cm};\hspace{0.25cm} \xi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1} \,=\, \frac{\pi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1}}{n+1} $$ Indeed, in the first two cases, the condition required in (\ref{eq:nev}) can be verified using (\ref{eq:ratel2}), whereas in the third case, it follows immediately from the estimate of $\mathbb{E}|\pi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1}|$ in (\ref{eq:linearalgo}). \hfill$\blacksquare$ \\[0.1cm] \noindent \textit{Conclusion}\,: by linearity of (\ref{eq:linearisation2}), it is enough to consider the case $\xi^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n+1} = w^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n+1}$ in (\ref{eq:linearisation3}). Then, according to (\ref{eq:transition1}), $\eta^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}$ has the same limit distribution as the sums \begin{equation} \label{eq:sum} \tilde{\eta}^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n} \,=\, \sum^{\scriptscriptstyle n}_{\scriptscriptstyle k=1}\, A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k}\,\frac{aw_{\scriptscriptstyle k}}{k^{\scriptscriptstyle 1/2}} \end{equation} \begin{subequations} \label{subeq:clt} By (\ref{eq:moments}), $(w_{\scriptscriptstyle k})$ is a sequence of square-integrable martingale differences.
Therefore, to conclude that the limit distribution of $\tilde{\eta}^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}$ is a centred $d$-variate normal distribution, with covariance matrix $\Sigma$ given by (\ref{eq:lyapunov}), it is enough to verify the conditions of the martingale central limit theorem~\cite{martingale}, \begin{equation} \label{eq:clt1} \lim\max_{\scriptscriptstyle k\leq n}\,\left| A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k}\,\frac{aw_{\scriptscriptstyle k}}{k^{\scriptscriptstyle 1/2}}\right| \,=\, 0 \,\text{ in probability}\hspace{0.51cm} \end{equation} \begin{equation} \label{eq:clt2} \sup\,\mathbb{E}\left|\tilde{\eta}^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n}\right|^2 \,<\,\infty \hspace{4.2cm} \end{equation} \begin{equation} \label{eq:clt3} \lim \sum^{\scriptscriptstyle n}_{\scriptscriptstyle k=1}\frac{a^{\scriptscriptstyle 2}}{k}\,A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k\,}\Sigma^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle k\,}A^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle n,k} \,=\,\Sigma \,\text{ in probability} \end{equation} \end{subequations} where $\Sigma^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle k}$ is the conditional covariance matrix \begin{equation} \label{eq:sigmak} \Sigma^{\phantom{\scriptscriptstyle\,2}}_{\scriptscriptstyle k} \,=\,\mathbb{E}\left[w^{\phantom{\scriptscriptstyle\dagger}}_{\scriptscriptstyle k}w^{\scriptscriptstyle{\dagger}}_{\scriptscriptstyle k\,}\middle|\mathcal{X}_{\scriptscriptstyle k-1}\right] \end{equation} Conditions (\ref{subeq:clt}) are verified in Appendix \ref{sec:clt}, which completes the proof. \hfill$\blacksquare$ \subsection{Proof of Proposition \ref{prop:information}} Denote by $\partial_{\scriptscriptstyle\alpha}=\frac{\partial}{\mathstrut\partial\theta^{\scriptscriptstyle\,\alpha}}$ the coordinate vector fields of the normal coordinates $\theta^{\scriptscriptstyle\,\alpha\,}$. Since $\langle\cdot,\cdot\rangle$ coincides with the information metric of the model $P$, it follows from (\ref{eq:hess}) and (\ref{eq:raofish2}), \begin{subequations} \begin{equation} \label{eq:hid1} H_{\scriptscriptstyle\alpha\beta} \,=\, \langle \partial_{\scriptscriptstyle \alpha\,},\partial_{\scriptscriptstyle \beta}\rangle_{\scriptscriptstyle \theta^*} \end{equation} However, by the definition of normal coordinates~\cite{petersen}, the $\partial_{\scriptscriptstyle \alpha}$ are orthonormal at $\theta^*$. Therefore, \begin{equation} \label{eq:hid2} H_{\scriptscriptstyle\alpha\beta} \,=\, \delta_{\scriptscriptstyle\alpha\beta} \end{equation} \end{subequations} Thus, the matrix $H$ is equal to the identity matrix, and its smallest eigenvalue is $\lambda = 1$. \\[0.1cm] \textit{Proof of \emph{(i)}}\,: this follows directly from Propositions \ref{prop:ratel2} and \ref{prop:rateas}. Indeed, since $\lambda = 1$, the conditions of these propositions are verified, as soon as $2a > 1$. Therefore, (\ref{eq:ratel2}) and (\ref{eq:rateas}) hold true. \hfill$\blacksquare$ \\[0.1cm] \textit{Proof of \emph{(ii)}}\,: this follows from Proposition \ref{prop:normality}. The conditions of this proposition are verified, as soon as $2a > 1$. Therefore, the distribution of the re-scaled coordinates $(n^{\scriptscriptstyle 1/2}\theta^{\scriptscriptstyle\,\alpha}_{\scriptscriptstyle n})$ converges to a centred $d$-variate normal distribution, with covariance matrix $\Sigma$ given by Lyapunov's equation (\ref{eq:lyapunov}).
If $a = 1$, then (\ref{eq:hid2}) implies $A_{\scriptscriptstyle\alpha\beta} = - \frac{1}{2}\delta_{\scriptscriptstyle \alpha\beta\,}$, so that Lyapunov's equation (\ref{eq:lyapunov}) reads $\Sigma = \Sigma^*$, as required. \hfill$\blacksquare$ \\[0.1cm] \indent For the following proof of (iii), the reader may wish to recall that the summation convention is used throughout the present work. That is~\cite{petersen}, summation is implicitly understood over any repeated subscript or superscript from the Greek alphabet, taking the values $1\,,\ldots,\,d\,$.\\[0.1cm] \textit{Proof of \emph{(iii)}}\,: let $\ell(\theta) = \log L(\theta)$ and assume $u(\theta,x)$ is given by (\ref{eq:score}). Then, by the definition of normal coordinates~\cite{petersen}, the following expression holds \begin{subequations} \begin{equation} \label{eq:score1} u^{\scriptscriptstyle \alpha}(\theta^*) \,=\, \left.\frac{\partial \ell}{\partial \theta^{\scriptscriptstyle\,\alpha}}\right|_{\scriptscriptstyle \theta^{\scriptscriptstyle \alpha} = 0} \end{equation} Substituting this into (\ref{eq:cov}) gives \begin{equation} \label{eq:score2} \Sigma^*_{\scriptscriptstyle\alpha\beta} \,=\, E_{\scriptscriptstyle \theta^*}\! \left[\frac{\partial \ell}{\partial \theta^{\scriptscriptstyle\,\alpha}}\frac{\partial \ell}{\partial \theta^{\scriptscriptstyle\,\beta}}\right]_{\scriptscriptstyle \theta^{\scriptscriptstyle \alpha} = 0} \,=\, - \,E_{\scriptscriptstyle \theta^*}\left.\frac{\partial^{\scriptscriptstyle\, 2}\!\,\ell}{\mathstrut\partial\theta^{\scriptscriptstyle \alpha}\partial\theta^{\scriptscriptstyle \beta}}\right|_{\scriptscriptstyle \theta^{\scriptscriptstyle \alpha} = 0} \,=\, \left.\frac{\partial^{\scriptscriptstyle\, 2}\!\,D}{\mathstrut\partial\theta^{\scriptscriptstyle \alpha}\partial\theta^{\scriptscriptstyle \beta}}\right|_{\scriptscriptstyle \theta^{\scriptscriptstyle \alpha} = 0} \end{equation} \end{subequations} where the second equality is the so-called Fisher's identity (see~\cite{amari}, page 28), and the third equality follows from (\ref{eq:kl}) by differentiating under the expectation. Now, by (\ref{eq:hess}) and (\ref{eq:hid2}), $\Sigma^*$ is the identity matrix. To show that the recursive estimates $\theta_{\scriptscriptstyle n}$ are asymptotically efficient, let $(\tau^{\scriptscriptstyle\alpha}\,;\alpha = 1,\ldots, d\,)$ be any local coordinates with origin at $\theta^*$ and let $\tau^{\scriptscriptstyle \alpha}_{\scriptscriptstyle n} = \tau^{\scriptscriptstyle \alpha}(\theta_{\scriptscriptstyle n})\,$. From the second-order Taylor expansion of each coordinate function $\tau^{\scriptscriptstyle \alpha}$, it is straightforward to show that \begin{subequations} \begin{equation} \label{eq:proofeff1} n^{\scriptscriptstyle 1/2}\tau^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n} \,=\,\left(\frac{\partial \tau^{\scriptscriptstyle\alpha}}{\partial\theta^{\scriptscriptstyle\,\gamma}}\right)_{\!\scriptscriptstyle \theta^*} \!\left(n^{\scriptscriptstyle 1/2}\theta^{\scriptscriptstyle\,\gamma}_{\scriptscriptstyle n\,}\right)\,+\, \sigma^{\scriptscriptstyle\alpha}(\theta_{\scriptscriptstyle n})\left(n^{\scriptscriptstyle 1/2}d^{\scriptscriptstyle\, 2}(\theta_{\scriptscriptstyle n\,},\theta^*)\right) \end{equation} where the subscript $\theta^*$ indicates the derivative is evaluated at $\theta^*$, and where $\sigma^{\scriptscriptstyle\alpha}$ is a continuous function in a neighborhood of $\theta^*$. By (\ref{eq:rateas}), the second term in (\ref{eq:proofeff1}) converges to zero almost surely.
Therefore, the limit distribution of the re-scaled coordinates $(n^{\scriptscriptstyle 1/2}\tau^{\scriptscriptstyle\alpha}_{\scriptscriptstyle n})$ is the same as that of the first term in (\ref{eq:proofeff1}). By (ii), this is a centred $d$-variate normal distribution with covariance matrix $\Sigma^{\scriptscriptstyle\tau}$ given by \begin{equation} \label{eq:proofeff2} \Sigma^{\scriptscriptstyle\tau}_{\scriptscriptstyle\alpha\beta} \,=\, \left(\frac{\partial \tau^{\scriptscriptstyle\alpha}}{\partial\theta^{\scriptscriptstyle\,\gamma}}\right)_{\!\scriptscriptstyle \theta^*} \Sigma^*_{\scriptscriptstyle\gamma\kappa}\, \left(\frac{\partial \tau^{\scriptscriptstyle\beta}}{\partial\theta^{\scriptscriptstyle\,\kappa}}\right)_{\!\scriptscriptstyle \theta^*} \,=\, \left(\frac{\partial \tau^{\scriptscriptstyle\alpha}}{\partial\theta^{\scriptscriptstyle\,\gamma}}\right)_{\!\scriptscriptstyle \theta^*} \left(\frac{\partial \tau^{\scriptscriptstyle\beta}}{\partial\theta^{\scriptscriptstyle\,\gamma}}\right)_{\!\scriptscriptstyle \theta^*} \\[0.1cm] \end{equation} where the second equality follows because $\Sigma^*$ is the identity matrix, so that $\Sigma^*_{\scriptscriptstyle\gamma\kappa} = \delta_{\scriptscriptstyle\gamma\kappa}$. It remains to show that $\Sigma^{\scriptscriptstyle\tau}$ is the inverse of the information matrix $I^{\scriptscriptstyle\tau}$ as in (\ref{eq:efficiency}). According to (\ref{eq:raofish2}), this is given by \begin{equation} \label{eq:proffeff3} I^{\scriptscriptstyle\tau}_{\scriptscriptstyle\alpha\beta} \,=\, \left.\frac{\partial^{\scriptscriptstyle\, 2}\!\,D}{\mathstrut\partial\tau^{\scriptscriptstyle \alpha}\partial\tau^{\scriptscriptstyle \beta}}\right|_{\scriptscriptstyle \tau^{\scriptscriptstyle \alpha} = 0} \,=\, - \,E_{\scriptscriptstyle \theta^*}\left.\frac{\partial^{\scriptscriptstyle\, 2}\!\,\ell}{\mathstrut\partial\tau^{\scriptscriptstyle \alpha}\partial\tau^{\scriptscriptstyle \beta}}\right|_{\scriptscriptstyle \tau^{\scriptscriptstyle \alpha} = 0} \,=\, E_{\scriptscriptstyle \theta^*}\! \left[\frac{\partial \ell}{\partial \tau^{\scriptscriptstyle\alpha}}\frac{\partial \ell}{\partial \tau^{\scriptscriptstyle\beta}}\right]_{\scriptscriptstyle \tau^{\scriptscriptstyle \alpha} = 0} \\[0.1cm] \end{equation} where the second equality follows from (\ref{eq:kl}), and the third equality from Fisher's identity (see~\cite{amari}, page 28). Now, a direct application of the chain rule yields the following $$ I^{\scriptscriptstyle\tau}_{\scriptscriptstyle\alpha\beta} \,=\, E_{\scriptscriptstyle \theta^*}\! \left[\frac{\partial \ell}{\partial \tau^{\scriptscriptstyle\alpha}}\frac{\partial \ell}{\partial \tau^{\scriptscriptstyle\beta}}\right]_{\scriptscriptstyle \tau^{\scriptscriptstyle \alpha} = 0} \,=\, \left(\frac{\partial \theta^{\scriptscriptstyle\,\gamma}}{\partial\tau^{\scriptscriptstyle\alpha}}\right)_{\!\scriptscriptstyle \theta^*} E_{\scriptscriptstyle \theta^*}\!
\left[\frac{\partial \ell}{\partial \theta^{\scriptscriptstyle\,\gamma}}\frac{\partial \ell}{\partial \theta^{\scriptscriptstyle\,\kappa}}\right]_{\scriptscriptstyle \theta^{\scriptscriptstyle\,\gamma} = 0} \left(\frac{\partial \theta^{\scriptscriptstyle\,\kappa}}{\partial\tau^{\scriptscriptstyle\beta}}\right)_{\!\scriptscriptstyle \theta^*} \\[0.1cm] $$ By the first equality in (\ref{eq:score2}), this is equal to \begin{equation} \label{eq:proofeff4} I^{\scriptscriptstyle\tau}_{\scriptscriptstyle\alpha\beta} \,=\, \left(\frac{\partial \theta^{\scriptscriptstyle\,\gamma}}{\partial\tau^{\scriptscriptstyle\alpha}}\right)_{\!\scriptscriptstyle \theta^*} \Sigma^*_{\scriptscriptstyle\gamma\kappa} \left(\frac{\partial \theta^{\scriptscriptstyle\,\kappa}}{\partial\tau^{\scriptscriptstyle\beta}}\right)_{\!\scriptscriptstyle \theta^*} \,=\, \left(\frac{\partial \theta^{\scriptscriptstyle\,\gamma}}{\partial\tau^{\scriptscriptstyle\alpha}}\right)_{\!\scriptscriptstyle \theta^*} \left(\frac{\partial \theta^{\scriptscriptstyle\,\gamma}}{\partial\tau^{\scriptscriptstyle\beta}}\right)_{\!\scriptscriptstyle \theta^*} \\[0.1cm] \end{equation} because $\Sigma^*$ is the identity matrix, i.e. $\Sigma^*_{\scriptscriptstyle\gamma\kappa} = \delta_{\scriptscriptstyle\gamma\kappa}$. Comparing (\ref{eq:proofeff2}) to (\ref{eq:proofeff4}), it is clear that $\Sigma^{\scriptscriptstyle\tau}$ is the inverse of the information matrix $I^{\scriptscriptstyle\tau}$ as in (\ref{eq:efficiency}). \end{subequations} \hfill$\blacksquare$ \\[0.1cm] \textit{Proof of \emph{(iv)}}\,: (\ref{eq:information1}) and (\ref{eq:information2}) follow from (\ref{eq:ratel2}) and (\ref{eq:rateas}), respectively, by using (\ref{eq:raofish1}). Precisely, it is possible to write (\ref{eq:raofish1}) in the form \begin{subequations} \begin{equation} \label{eq:proofD1} D(\theta_{\scriptscriptstyle n}) \,=\, \frac{1}{2}\,d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \,+\, \omega\!\left( \theta_{\scriptscriptstyle n}\right)d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \end{equation} where $\omega$ is a continuous function in a neighborhood of $\theta^*$, equal to zero at $\theta = \theta^*$. To obtain (\ref{eq:information1}), it is enough to take expectations in (\ref{eq:proofD1}) and note that $\omega$ is bounded above in a neighborhood of $\theta^*$. Then, (\ref{eq:information1}) follows directly from (\ref{eq:ratel2}). To obtain (\ref{eq:information2}), it is enough to multiply (\ref{eq:proofD1}) by $n^{\scriptscriptstyle p}$ where $p \in (0,1)$. This gives the following expression \begin{equation} \label{eq:proofD2} n^{\scriptscriptstyle p} D(\theta_{\scriptscriptstyle n}) \,=\, \frac{1}{2}\,n^{\scriptscriptstyle p}d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*) \left(1+ \omega\!\left( \theta_{\scriptscriptstyle n}\right)\right) \end{equation} From (\ref{eq:rateas}), $n^{\scriptscriptstyle p}d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)$ converges to zero almost surely. Moreover, by continuity of $\omega$, it follows that $\omega\!\left( \theta_{\scriptscriptstyle n}\right)$ converges to $\omega\!\left( \theta^*\right) = 0$ almost surely.
Therefore, by taking limits in (\ref{eq:proofD2}), it is readily seen that \begin{equation} \lim\, n^{\scriptscriptstyle p} D(\theta_{\scriptscriptstyle n}) \,=\, \frac{1}{2}\left(\lim\, n^{\scriptscriptstyle p}d^{\scriptscriptstyle\,2}(\theta_{\scriptscriptstyle n},\theta^*)\right) \left(1+ \lim\,\omega\!\left( \theta_{\scriptscriptstyle n}\right) \right) \,=\, 0 \end{equation} almost surely. This is precisely the statement that $D(\theta_{\scriptscriptstyle n}) = o(n^{\scriptscriptstyle -p})$ for $p \in (0,1)$, almost surely. Thus, (\ref{eq:information2}) is proved. \hfill$\blacksquare$ \end{subequations} \vfill \pagebreak
{ "attr-fineweb-edu": 1.542969, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfb05qhDC0V1enMdP
\section{Introduction} \label{introduction} Graph Neural Networks (GNNs) have opened a unique path to learning on data by leveraging the intrinsic relations between entities that can be structured as a graph. By imposing these structural constraints, additional information can be learned and used for many types of prediction tasks. With the rapid development of the field and the easy accessibility of computation and data, GNNs have been used to solve a variety of problems like node classification \cite{kipf_semi-supervised_2017,velickovic_graph_2017,abu-el-haija_mixhop_2019,chen_simple_2020}, link prediction \cite{ying_graph_2018,berg_graph_2017,chami_hyperbolic_2019}, graph classification \cite{ying_hierarchical_2018,zhang_end--end_2018}, prediction of molecular properties \cite{gilmer_neural_2017,madhawa_graphnvp_2019}, natural language processing \cite{marcheggiani_encoding_2017}, node ranking \cite{maurya_fast_2019}, and so on. In this work, we focus on the node classification task using graph neural networks. Since the success of early GNN models like GCN \cite{kipf_semi-supervised_2017}, researchers have successively proposed numerous variants \cite{wu_comprehensive_2019} to address various shortcomings in model training and to improve prediction capabilities. Some of the techniques used in these variants include neighbor sampling \cite{hamilton_inductive_2017,chen_fastgcn:_2018}, an attention mechanism to assign different weights to neighbors \cite{velickovic_graph_2017}, use of the Personalized PageRank matrix instead of the adjacency matrix \cite{klicpera_predict_2018}, and simplified model design \cite{wu_simplifying_2019}. There has also been growing interest in making the models deeper by stacking more layers and using residual connections to improve the expressiveness of the model \cite{rong_dropedge_2020,chen_simple_2020}. However, most of these models by design are more suitable for homophily datasets, where nodes linked to each other are more likely to belong to the same class. They may not perform well on heterophily datasets, which are more likely to have nodes with different labels connected together. Zhu et al. \cite{zhu_beyond_2020} highlight this problem and propose separating a node's ego-embedding from its neighbor-embedding to improve performance on heterophily datasets. In general, GNN models combine feature aggregation and transformation using a learnable weight matrix in the same layer, often referred to as a graph convolutional layer. These layers are stacked together with non-linear transformations (e.g., ReLU) and regularization (e.g., Dropout) as a learning framework on the graph data. Stacking the layers also has the effect of introducing powers of the adjacency matrix (or Laplacian matrix), which helps to generate a new set of features for a node by aggregating neighbors' features at multiple hops, thus encoding the neighborhood information. The number of these unique features depends on the number of propagation steps, or the depth of the model. The final node embeddings are the output of the stacked layers alone or, for some models, also have a skip connection or residual connection combined at the final layer. However, such a combination muddles the distinction between the importance of the features and the expressiveness of the MLP. It becomes challenging to analyze which features are essential and how much expressiveness the MLP requires for a specific task. To overcome this challenge, we provide a framework to treat feature propagation and learning separately.
With this freedom, we propose a simple GNN model with three unique design considerations: soft-selection of features using the softmax function, hop-normalization, and a unique mapping of features. With experimental results, we show that our simple 2-layer GNN outperforms other state-of-the-art GNN models (both shallow and deep) and achieves up to 64\% higher node classification accuracy. In addition, analyzing the model parameters gives us insight into which features are most responsible for classification accuracy. One interesting observation concerns the Chameleon and Squirrel datasets. These are dense graph datasets and are generally regarded as low-quality heterophily datasets. However, in our experiments with the proposed model, we find them to show strong heterophily properties, with improved classification results. Furthermore, we demonstrate that, due to its simple design, our model can scale to very large graph datasets. We run experiments on the ogbn-papers100M dataset, which is the largest publicly available node classification dataset, and achieve higher accuracy than state-of-the-art models. The rest of the paper is organized as follows: Section \ref{preliminaries} outlines the formulation of graph neural networks and details the node classification task. In Section \ref{propose_arch}, we discuss design strategies for GNNs and propose the GNN model FSGNN. In Section \ref{related_work}, we briefly introduce the relevant GNN literature. Section \ref{experiments} contains the experimental details and a comparison with other GNN models. In Section \ref{discussion}, we empirically analyze our proposed design strategies and their effect on the model's performance. Section \ref{conclusion} summarizes the paper. \section{Preliminaries} \label{preliminaries} Let $G = (V,E)$ be an undirected graph with $n$ nodes and $m$ edges. For numerical calculations, the graph is represented by its adjacency matrix, denoted $A\in \{0,1\}^{n\times n}$, with each element $A_{ij}=1$ if there exists an edge between nodes $v_i$ and $v_j$, and $A_{ij}=0$ otherwise. If self-loops are added to the graph, the resultant adjacency matrix is denoted $\Tilde{A} = A+I$. The diagonal degree matrices of $A$ and $\Tilde{A}$ are denoted $D$ and $\Tilde{D}$, respectively. Each node is associated with a $d$-dimensional feature vector, and the feature matrix for all nodes is represented as $X \in \mathbb{R} ^{n \times d}$. \subsection{Graph Neural Networks} Graph Neural Networks (GNNs) leverage a feature propagation mechanism \cite{gilmer_neural_2017} to aggregate the neighborhood information of a node and use a non-linear transformation with a trainable weight matrix to obtain the final embeddings for the nodes. Conventionally, a simple GNN layer is defined as \begin{equation} \label{eq:homophily_gnn} H^{(i+1)} = \sigma (\Tilde{A}_{sym}H^{(i)}W^{(i)}) \end{equation} where $\Tilde{A}_{sym} = \Tilde{D}^{-\frac{1}{2}} \Tilde{A}\Tilde{D}^{-\frac{1}{2}}$ is the symmetrically normalized adjacency matrix with added self-loops, $H^{(i)}$ represents the features from the previous layer, $W^{(i)}$ denotes the learnable weight matrix, and $\sigma$ is a non-linear activation function, usually ReLU in most implementations of GNNs. However, this formulation is suitable for homophily datasets, as features are cumulatively aggregated, i.e., a node's own features are added together with its neighbors' features. For heterophily datasets, we require a propagation scheme that separates the features of neighbors from a node's own features.
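In practice, both the operator of Eq.~(\ref{eq:homophily_gnn}) and its self-loop-free counterpart introduced below can be precomputed once as sparse matrices before any learning takes place. The following is a minimal sketch in Python with scipy; the function name and implementation details are ours, for illustration only:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def sym_norm_adj(A, add_self_loops=False):
    # Return D^{-1/2} A D^{-1/2} as a sparse CSR matrix.
    # add_self_loops=True : operator with self-loops (homophily-oriented)
    # add_self_loops=False: operator without self-loops (heterophily-oriented)
    if add_self_loops:
        A = A + sp.eye(A.shape[0], format="csr")
    deg = np.asarray(A.sum(axis=1)).flatten()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.power(deg, -0.5)
    d_inv_sqrt[np.isinf(d_inv_sqrt)] = 0.0  # guard against isolated nodes
    D_inv_sqrt = sp.diags(d_inv_sqrt)
    return (D_inv_sqrt @ A @ D_inv_sqrt).tocsr()
\end{verbatim}
Repeatedly applying either operator to $X$ then yields the aggregated features of successive hops.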
Accordingly, we use the following formulation for the GNN layer: \begin{equation} \label{eq:heterophily_gnn} H^{(i+1)} = \sigma (A_{sym}H^{(i)}W^{(i)}) \end{equation} where $A_{sym} = D^{-\frac{1}{2}} AD^{-\frac{1}{2}}$ is the symmetrically normalized adjacency matrix without added self-loops. To combine features from multiple hops, the concatenation operator can be used before the final layer. Following the conventional GNN formulation using $\Tilde{A}$, a simple 2-layered GNN can be represented as \cite{kipf_semi-supervised_2017} \begin{equation} \label{eq:gnn_2layer} Z = \Tilde{A}_{sym}\sigma(\Tilde{A}_{sym}XW^{(0)})W^{(1)} \end{equation} \subsection{Node Classification} Node classification is an extensively studied graph-based semi-supervised learning problem. It encompasses training the GNN to predict the labels of nodes based on the features and neighborhood structure of the nodes. The GNN model is considered as a function $f(X,A)$ conditioned on the node features $X$ and the adjacency matrix $A$. Taking the example of Eq. \ref{eq:gnn_2layer}, the GNN aggregates the features of two hops of neighbors and outputs $Z$. The softmax function is applied row-wise, and the cross-entropy error is calculated over all labeled training examples. The gradients of the loss are back-propagated through the GNN layers. Once trained, the model can be used for the prediction of the labels of nodes in the test set. \begin{figure*}[h] \centering \includegraphics[width=0.9\textwidth]{model_diagram.pdf} \caption{Model diagram of FSGNN. Input features are generated based on powers of $A$ and $\Tilde{A}$.} \label{fig:model_diagram} \end{figure*} \subsection{Homophily vs Heterophily} The node classification problem relies on the graph structure and the features of the nodes to identify the labels of the nodes. Under homophily, nodes are assumed to have neighbors with similar features and labels. Thus, the cumulative aggregation of a node's self-features with those of its neighbors reinforces the signal corresponding to the label and helps to improve the accuracy of the predictions. In the case of heterophily, by contrast, nodes are assumed to have dissimilar features and labels. In this case, the cumulative aggregation reduces the signal and adds more noise, causing the neural network to learn poorly and degrading performance. Thus, it is essential to keep a node's self-features separate from its neighbors' features. In real-world datasets, homophily and heterophily levels may vary; hence, it is optimal to have both aggregation schemes (Eq. \ref{eq:homophily_gnn} \& \ref{eq:heterophily_gnn}). \section{Proposed Architecture} \label{propose_arch} For the design of a GNN with good generalization capability and performance, there are many aspects of the data that need to be considered. The feature propagation and aggregation scheme is governed by whether the class label distribution has strong homophily, strong heterophily, or some combination of both. The number of hops for feature aggregation (and, for many GNN models, the depth of the model) depends on the graph structure and size, as well as on the label distribution among the neighbors of the nodes. Also, the type and amount of regularization during training need to be decided, for example, using dropout on input features or on graph edges. Keeping these aspects under consideration, we propose three design strategies that help to create a versatile and simple GNN model. \vspace{5mm} \subsection{Design Strategies for GNNs} \subsubsection{Decouple feature generation and representation learning}\hfill As discussed in Sec.
2.1, these features can be aggregated cumulatively (homophily-based) or non-cumulatively (heterophily-based). Moreover, the features can also be combined based on some arbitrary criteria. We assume a function, $$g(X,A,K) \mapsto \{X_1,X_2, \dotsc ,X_p\}$$ The function takes the node feature matrix $X$, the adjacency matrix $A$, and the number of hops $K$ (the highest power of the adjacency matrix used to propagate features), and outputs a set of aggregated features. These features can then be combined using a sum or concatenation operation to get the final representation of the node. However, in the node classification task, for a given label distribution, only a subset of these features is useful for predicting the label of the node. For example, features of neighbors that lie at a greater distance in the graph may not be sufficiently informative or useful for the prediction of a node's label. Conventionally, GNN models have feature propagation and transformation combined into a single layer, and the layers are stacked together. This makes it difficult to distinguish the importance of the features from the role of the MLP. To overcome this limitation, we propose to treat the feature generation step and representation learning over the features separately. This provides us with three main benefits. \renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate} \item Features generated for nodes are not constrained by the design of the GNN model. We get the freedom to choose the feature set as required by the problem, together with a neural network design that is sufficiently expressive. \item We can precompute and fix the node feature set and experiment with neural network architectures for the best performance. Precomputing the features also helps to scale the training of the model to large graphs with batchwise training. \item In the conventional GNN setting, stacking many layers also causes oversmoothing of node features \cite{chen_measuring_2019} and adversely affects the performance of the model. Recently proposed models use skip connections or residual connections to overcome this issue. However, they fail to demonstrate which features are useful. We provide an alternate scheme where the model can learn weights that identify which features are useful for the prediction task. \end{enumerate} For the model design, instead of a single input channel, we propose to have all these features as input in parallel. Please refer to Fig.~\ref{fig:model_diagram} for an illustration. Each feature is mapped to a separate linear layer; hence, the linear transformations are uniquely learned for all input features. \subsubsection{Feature Selection}\hfill As features are aggregated over many hops, some features are useful and correlate with the label distribution, while others are not very useful for learning and act more like noise for the model. As we input the feature set in parallel channels, we can design the model to learn which features are more relevant for a lower loss value, giving higher weights to those features while simultaneously reducing the weights of the others. We propose to weight these features with a single scalar value that is multiplied with each input feature matrix, and to impose a constraint on these values via the softmax function. Let $\alpha_i$ be the scalar value for the $i^{th}$ feature matrix; then $\alpha_i$ scales the magnitude of the features as $\alpha_i X_iW^{(0)}_i$.
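A minimal PyTorch-style sketch of this soft-selection is given below; the module name \texttt{SoftSelect} and all implementation details are ours, for illustration only, and hop-normalization (introduced in the next subsection) is omitted here:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftSelect(nn.Module):
    # One unique weight matrix W_l^{(0)} per input feature matrix,
    # plus one learnable scalar per matrix, normalized by a softmax.
    def __init__(self, num_feats, in_dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(num_feats)])
        self.scores = nn.Parameter(torch.ones(num_feats))  # equal init

    def forward(self, feat_list, gamma=1.0):
        alpha = F.softmax(self.scores, dim=0)  # sum_l alpha_l = 1
        outs = [gamma * alpha[l] * self.fc1[l](X_l)
                for l, X_l in enumerate(feat_list)]
        return torch.cat(outs, dim=1)          # concatenation operator
\end{verbatim}
Note that the softmax couples the scalars: increasing the weight of one feature matrix necessarily decreases the weights of the others.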
The softmax function is used in deep learning as a non-linear normalizer, and its output is often practically interpreted as probabilities. Before training, the scalar values corresponding to each feature matrix are initialized with equal values, and the softmax is applied to these values. The resultant normalized values $\alpha_i$ are then multiplied with the input features, and the concatenation operator is applied. Considering $L$ input feature matrices $X_l,\: l\in\{1\:..\:L\}$, the formulation can be described as \begin{equation} H^{(1)} = \bigparallel^{L}_{l=1} \alpha_lX_lW_{l}^{(0)} \end{equation} $$ \textrm{where } \sum_{l=1}^{L}\alpha_{l} = 1$$ During training, the scalar values of the relevant features corresponding to the labels increase towards 1, while the others decrease towards 0. The features that are not useful and represent more noise than signal have their magnitudes reduced, with a corresponding decrease in their scalar values. Since we are not using a binary selection of features, we term this selection procedure ``soft-selection'' of features. This formulation can be understood in two ways. On the one hand, GNNs have been represented with a polynomial filter, \begin{equation} g_\theta(P) = \sum_{k=0}^{K-1}\theta_kP^k \end{equation} where $\theta \in \mathbb{R}^K$ is a vector of polynomial coefficients and $P$ can be the adjacency matrix \cite{kipf_semi-supervised_2017}\cite{chen_simple_2020}, the Laplacian matrix \cite{nt_stacked_2020}, or a PageRank-based matrix \cite{berberidis_adaptive_2019}. Since the polynomial coefficients are scalar parameters, our scheme can be considered as applying regularization on these parameters using the softmax function. The other way to look at it is to simply consider it as a weighting scheme. Since the input features can be arbitrarily chosen, a more sophisticated scheme can also be used in place of scalar weighting. For practical implementation, since all weights are initialized equal, they can be set equal to 1. After normalization with the softmax function, the individual scalar values become equal to $1/L$. During training, these values change, denoting the importance of the features. In some cases, the initial value $\alpha_l = 1/L$ may be too small and may adversely affect training. In that case, a constant $\gamma$ may be multiplied in after softmax normalization to increase the initial magnitude, as $\gamma\alpha_lX_lW_{l}^{(0)}$. Since $\gamma$ remains constant during training, it does not affect the softmax regularization of the scalar parameters. As the scalar values affect the magnitude of the features, they also affect the gradients propagated back to the linear layer that transforms the input features. Hence, it is important to have a unique weight matrix for each input feature matrix. \subsubsection{Hop-Normalization}\hfill The third strategy we propose is hop-normalization. It is common practice in the deep learning field to use different types of normalization schemes, for example, batch normalization \cite{ioffe_batch_2015}, layer normalization, weight normalization, and so on. However, in graph neural network frameworks, normalization of the activations after hidden layers is not commonly used. This may be due in part to the common practice of normalizing node/edge features and the symmetric/non-symmetric normalization of the adjacency matrix. We propose to normalize all aggregated features from different hops after the linear transformation, hence the term ``hop-normalization''.
We propose to row-wise L2-normalize the hidden layer activations as \begin{equation} h_{ij} = \frac{h_{ij}}{\parallel h_{i} \parallel_2} \end{equation} where $h_{i}$ represents the $i^{th}$ row vector of activations and $h_{ij}$ represents its individual values. L2-normalization scales the node embedding vectors to lie on the ``unit sphere''. In Section \ref{discussion}, we empirically show significant improvements in the performance of the model with the use of this scheme. \subsection{Feature Selection Graph Neural Network} Combining the design strategies proposed earlier, we propose a simple and shallow (2-layered) GNN model called Feature Selection Graph Neural Network (FSGNN). Figure \ref{fig:model_diagram} shows a diagrammatic representation of our model. Input features are precomputed using $A_{sym}$ and $\Tilde{A}_{sym}$ and transformed using a linear layer unique to each feature matrix. Hop-normalization is applied to the output activations of the first layer, and the normalized activations are weighted with the scalars regularized by the softmax function. The output features are then concatenated, non-linearly transformed using ReLU, and mapped to the second linear layer. The cross-entropy loss is calculated on the output logits of the second layer. \setlength{\textfloatsep}{1mm} \begin{algorithm}[h!] \DontPrintSemicolon \caption{Pseudo Code FSGNN (Forward propagation)} \label{alg:fsgnn} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{~ Feature matrix $X$; $A_{sym}$; $\Tilde{A}_{sym}$; No. of hops $K$; weight matrices $W_{j}^{(0)},\, j=1...2K+1$, and $W^{(1)}$; $\alpha$ vector of dimension 2K+1; } \Output{~ Logits} \BlankLine $\alpha_i\leftarrow1.0, i=1...2K+1$ \; $\alpha \leftarrow SOFTMAX(\alpha)$ \; $list\_mat \leftarrow [X]$ \; $X_A\leftarrow X$ \; $X_{\Tilde{A}} \leftarrow X$ \; \For{$k=1...K$}{ $X_A \leftarrow A_{sym}X_A$ \; $X_{\Tilde{A}} \leftarrow \Tilde{A}_{sym}X_{\Tilde{A}}$ \; $list\_mat.APPEND(\:X_A\:)$ \; $list\_mat.APPEND(\:X_{\Tilde{A}}\:)$ \; } $list\_cat = LIST() $\; \For{$j=1...2K+1$}{ $X_f \leftarrow list\_mat[j]$ \; $Out \leftarrow HOPNORM(\:X_fW_{j}^{(0)}\:) $ \; $list\_cat.APPEND(\:\alpha_j \odot Out\:)$ \; } $H^{(1)} \leftarrow CONCAT(\:list\_cat\:)$ \; $Z \leftarrow ReLU(\: H^{(1)}\:)W^{(1)}$ \end{algorithm} \section{Related Work} \label{related_work} GNNs have emerged as an indispensable tool to learn from graph-centric data for many prediction tasks like node classification, link prediction, and graph classification. Early works \cite{defferrard_convolutional_2016}\cite{kipf_semi-supervised_2017} introduced a simple end-to-end training framework using approximations of spectral graph convolutions. Since then, there has been a focus in the research community on improving the performance of GNNs, and a variety of techniques have been introduced. Earlier GNN frameworks utilized a fixed propagation scheme along all edges, which is sometimes not scalable to larger graphs. GraphSAGE \cite{hamilton_inductive_2017} and FastGCN \cite{chen_fastgcn:_2018} introduce neighbor sampling approaches in graph neural networks. GAT \cite{velickovic_graph_2017} introduces the use of the attention mechanism to provide weights to the features that are aggregated from the neighbors. APPNP \cite{klicpera_predict_2018}, JK \cite{xu_representation_2018} and Geom-GCN \cite{pei_geom-gcn_2020} aim to improve the feature propagation scheme within the layers of the model. More recently, researchers have proposed making GNN models deeper.
However, deeper models suffer from oversmoothing: after stacking many GNN layers, the features of the nodes become indistinguishable from each other, and there is a drop in the performance of the model. DropEdge \cite{rong_dropedge_2020} proposes to drop a certain number of edges to reduce the speed of convergence of oversmoothing and to relieve the information loss. GCNII \cite{chen_simple_2020} uses residual connections and identity mapping in GNN layers to enable deeper networks. \section{Experiments} \label{experiments} In this section, we evaluate the empirical performance of our proposed model on the node classification task on real-world datasets and compare it with other graph neural network models. \subsection{Datasets} For the fully-supervised node classification task, we perform experiments on nine datasets commonly used in the graph neural network literature. Details of the datasets are presented in Table \ref{tab:fully_supervised_data}. The homophily ratio \cite{zhu_beyond_2020} denotes the fraction of edges that connect two nodes with the same label. A higher value (closer to 1) indicates strong homophily, while a lower value (closer to 0) indicates strong heterophily in the dataset. Cora, Citeseer, and Pubmed \cite{sen_collective_2008} are citation-network datasets and are generally considered homophily datasets. The graphs in Wisconsin, Cornell, and Texas \cite{pei_geom-gcn_2020} represent links between webpages, Actor \cite{tang_social_2009} represents actor co-occurrence in Wikipedia pages, and Chameleon and Squirrel \cite{rozemberczki_multi-scale_2020} represent web pages in Wikipedia discussing the corresponding topics. These datasets are considered heterophily datasets. To provide a fair comparison, we use the publicly available data splits taken from \cite{pei_geom-gcn_2020}\footnote{https://github.com/graphdml-uiuc-jlu/geom-gcn}. These splits have been frequently used by researchers for experiments in their publications. The results of the comparison methods presented in this paper are also based on these splits. In the analysis section, to demonstrate the scalability of the model to large graphs, we use the ogbn-papers100M dataset\footnote{https://ogb.stanford.edu/docs/nodeprop/}, which is the largest publicly available node classification dataset. Many nodes in this dataset do not have labels assigned; hence, the homophily ratio is not calculated. We use the standard split provided by \cite{hu_open_2021} to train and evaluate the model. \begin{table} \centering \caption{Statistics of the node classification datasets} \label{tab:fully_supervised_data} \resizebox{\linewidth}{!}{% \begin{tabular}{lcrrrc} \hline \multicolumn{1}{c}{ \textbf{Datasets} } & \textbf{Hom. Ratio} & \textbf{Nodes} & \textbf{Edges} & \multicolumn{1}{c}{\textbf{Features} } & \textbf{Classes} \\ \hline Cora & 0.81 & 2,708 & 5,429 & 1,433 & 7 \\ Citeseer & 0.74 & 3,327 & 4,732 & 3,703 & 6 \\ Pubmed & 0.80 & 19,717 & 44,338 & 500 & 3 \\ Chameleon & 0.23 & 2,277 & 36,101 & 2,325 & 4 \\ Wisconsin & 0.21 & 251 & 499 & 1,703 & 5 \\ Texas & 0.11 & 183 & 309 & 1,703 & 5 \\ Cornell & 0.30 & 183 & 295 & 1,703 & 5 \\ Squirrel & 0.22 & 5,201 & 198,353 & 2,089 & 5 \\ Actor & 0.22 & 7,600 & 26,659 & 932 & 5 \\ \cmidrule(r){1-6} \multicolumn{2}{l}{ogbn-papers100M} & 111,059,956 & 1,615,685,872 & 128 & 172 \\ \hline \end{tabular} } \end{table} \subsection{Preprocessing} We follow the same preprocessing steps used by \cite{pei_geom-gcn_2020} and \cite{chen_simple_2020}; the other comparison models also follow the same set of procedures.
The initial node features are row-normalized. To account for both homophily and heterophily, we use the adjacency matrix and the adjacency matrix with added self-loops for feature propagation. Both matrices are symmetrically normalized. For efficient computation, the adjacency matrices are stored and used as sparse matrices. \begin{table*} \centering \caption{Mean classification accuracy on the fully-supervised node classification task. Results for GCN, GAT, GraphSAGE, Cheby+JK, MixHop, and H2GCN-1 are taken from \cite{zhu_beyond_2020}; for GEOM-GCN and GCNII, results are taken from the respective articles. The best performance for each dataset is marked in bold and the second-best performance is underlined for comparison. } \label{tab:full_super_results} \resizebox{\linewidth}{!}{% \begin{tabular}{lccccccccc} \toprule & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} & \textbf{Chameleon} & \textbf{Wisconsin} & \textbf{Texas} & \textbf{Cornell} & \textbf{Squirrel} & \textbf{Actor} \\ \hline \textbf{GCN} & 87.28$\pm$1.26 & 76.68$\pm$1.64 & 87.38$\pm$0.66 & 59.82$\pm$2.58 & 59.80$\pm$6.99 & 59.46$\pm$5.25 & 57.03$\pm$4.67 & 36.89$\pm$1.34 & 30.26$\pm$0.79 \\ \textbf{GAT} & 82.68$\pm$1.80 & 75.46$\pm$1.72 & 84.68$\pm$0.44 & 54.69$\pm$1.95 & 55.29$\pm$8.71 & 58.38$\pm$4.45 & 58.92$\pm$3.32 & 30.62$\pm$2.11 & 26.28$\pm$1.73 \\ \textbf{GraphSAGE} & 86.90$\pm$1.04 & 76.04$\pm$1.30 & 88.45$\pm$0.50 & 58.73$\pm$1.68 & 81.18$\pm$5.56 & 82.43$\pm$6.14 & 75.95$\pm$5.01 & 41.61$\pm$0.74 & 34.23$\pm$0.99 \\ \textbf{Cheby+JK} & 85.49$\pm$1.27 & 74.98$\pm$1.18 & 89.07$\pm$0.30 & 63.79$\pm$2.27 & 82.55$\pm$4.57 & 78.38$\pm$6.37 & 74.59$\pm$7.87 & 45.03$\pm$1.73 & 35.14$\pm$1.37 \\ \textbf{MixHop} & 87.61$\pm$0.85 & 76.26$\pm$1.33 & 85.31$\pm$0.61 & 60.50$\pm$2.53 & 75.88$\pm$4.90 & 77.84$\pm$7.73 & 73.51$\pm$6.34 & 43.80$\pm$1.48 & 32.22$\pm$2.34 \\ \textbf{GEOM-GCN} & 85.27 & \textbf{77.99} & 90.05 & 60.90 & 64.12 & 67.57 & 60.81 & 38.14 & 31.63 \\ \textbf{GCNII} & \textbf{88.01$\pm$1.33} & 77.13$\pm$1.38 & \textbf{90.30$\pm$0.37} & 62.48$\pm$2.74 & 81.57$\pm$4.98 & 77.84$\pm$5.64 & 76.49$\pm$4.37 & N/A & N/A \\ \textbf{H2GCN-1} & 86.92$\pm$1.37 & 77.07$\pm$1.64 & 89.40$\pm$0.34 & 57.11$\pm$1.58 & 86.67$\pm$4.69 & \uline{84.86$\pm$6.77} & 82.16$\pm$4.80 & 36.42$\pm$1.89 & \textbf{35.86$\pm$1.03} \\ \hline \textbf{Ours(3-hop)} & 87.73$\pm$1.36 & 77.19$\pm$1.35 & 89.73$\pm$0.39 & \uline{78.14$\pm$1.25} & \textbf{88.43$\pm$3.22} & \textbf{87.30$\pm$5.55} & \uline{87.03$\pm$5.77} & \uline{73.48$\pm$2.13} & 35.67$\pm$0.69 \\ \textbf{Ours(8-hop)} & \uline{87.93$\pm$1.00} & \uline{77.40$\pm$1.93} & \uline{89.75$\pm$0.39} & \textbf{78.27$\pm$1.28} & \uline{87.84$\pm$3.37} & \textbf{87.30$\pm$5.28} & \textbf{87.84$\pm$6.19} & \textbf{74.10$\pm$1.89} & \uline{35.75$\pm$0.96} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Settings and Baselines} For the fully-supervised node classification task, each dataset is split evenly per class into 60\%, 20\%, and 20\% for training, validation, and testing. We report the performance as the mean classification accuracy over 10 random splits. We fix the embedding size to 64, set the initial learnable scalar parameter for each hop to 1, and set $\gamma$ to 1. Thus, after softmax normalization, the initial scalar value $\alpha_i$ equals $1/L$. The hyper-parameter settings of the model for the best performance are found by performing a grid search over a range of hyper-parameters. We compare our model to 8 different baselines and use the published results as the best performance of these models.
For GCNII \cite{chen_simple_2020} and H2GCN \cite{zhu_beyond_2020}, multiple variants of the models have been proposed; we have chosen the variant with the best performance on most datasets. \subsection{Results} Table \ref{tab:full_super_results} shows the comparison of the mean classification accuracy of our model with other popular GNN models. On heterophily datasets, our model shows significant improvements, most notably 64\% on Squirrel and 23\% on Chameleon. Similarly, on Wisconsin, Texas, and Cornell, the improvements are 2\%, 3\%, and 7\%, respectively. H2GCN comes closer in performance to our model than the other GNN models, as its architecture accounts for the heterophily present in the class labels and distinguishes a node's self-features from its neighbors' features. However, with our proposed model, we are able to achieve higher accuracy. The performance of the other GNN models is considerably lower, as their design is more suitable for homophily datasets. On homophily datasets, we observe that most of the models have comparable performance, with GCNII and GEOM-GCN in the lead. Our model is still comparable to the state of the art, coming second-best on various comparison measures. \section{Discussion} \label{discussion} \begin{table*} \centering \caption{Ablation study over 1080 different hyperparameter settings.} \label{tab:ablation_study} \resizebox{\linewidth}{!}{% \begin{tabular}{lccccccccc} \toprule & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} & \textbf{Chameleon} & \textbf{Wisconsin} & \textbf{Texas} & \textbf{Cornell} & \textbf{Squirrel} & \textbf{Actor} \\ \hline \textbf{Proposed} & 83.68$\pm$2.22 & 74.48$\pm$1.44 & \textbf{89.24$\pm$0.27} & \textbf{72.48$\pm$4.16} & 81.48$\pm$5.62 & \textbf{78.80$\pm$5.88} & \textbf{78.09$\pm$2.22} & \textbf{63.57$\pm$6.83} & 33.54$\pm$1.21 \\ \textbf{Without soft-selection} & \textbf{87.07$\pm$0.26} & \textbf{76.45$\pm$0.27} & 89.09$\pm$0.39 & 72.27$\pm$1.34 & 78.03$\pm$6.55 & 76.28$\pm$6.72 & 74.32$\pm$6.54 & 61.73$\pm$4.15 & 34.15$\pm$0.64 \\ \textbf{Common weight ($W^{(0)}$)} & 83.19$\pm$1.41 & 72.15$\pm$1.02 & 88.96$\pm$0.28 & 68.24$\pm$6.03 & 70.56$\pm$10.94 & 68.45$\pm$7.65 & 68.18$\pm$9.13 & 56.63$\pm$8.54 & 32.73$\pm$1.48 \\ \textbf{Without Hop-normalization} & 77.12$\pm$3.49 & 71.40$\pm$10.01 & 87.72$\pm$0.77 & 53.06$\pm$6.18 & \textbf{82.60$\pm$2.68} & 76.33$\pm$3.87 & 76.18$\pm$3.43 & 32.60$\pm$6.38 & \textbf{36.66$\pm$0.55} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Ablation Studies} In this section, we consider the effect of the various proposed design strategies on the performance of the model. In general, graph neural networks are sensitive to the hyperparameters used in training and require some amount of tuning to get the best performance. Since each dataset may have a different set of best hyperparameters, it can be difficult to judge design decisions based solely on the best performance of the model with a single hyperparameter setting. To provide a comprehensive evaluation, we compare the average accuracy of the model over 1080 combinations of the hyperparameters. The hyperparameters we tune are the learning rate and weight decay of the layers and the dropout value applied as regularization between the layers. Table \ref{tab:ablation_study} shows the average classification accuracy values under the various settings. For most datasets, our proposed design schemes lead to better average accuracy. Cora and Citeseer show better average performance without softmax regularization; however, the peak performance is only marginally lower with regularization.
Although Wisconsin shows higher average accuracy without normalization, the best performance on that dataset was achieved with the normalization layer. We found Actor to be the only dataset where performance dropped with the addition of the normalization layer: without it, our model achieves 37.63\% accuracy. However, to maintain consistency, we do not include this number in the main results. These variations also highlight the fact that a single set of design choices may not apply to all datasets/tasks, and some level of exploration is required. It is interesting to note that the performance on almost all datasets is sensitive to the choice of the hyperparameters for training the model, as there is a wide gap between the best and average performance. One exception is Pubmed, where the model's performance is relatively unperturbed under the various hyperparameter combinations. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Avg_val_heatmap_l.pdf} \caption{Heatmap of the average learned soft-selection scalars for all datasets} \label{fig:scalar_val_heat} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{hop_norm.pdf} \caption{t-SNE plots of the trained embeddings (3-hop) of the Squirrel and Chameleon datasets without (left) and with hop-normalization (right). Points represent nodes, and colors represent their respective labels. The mean classification accuracies without and with hop-normalization are 39.92\% and 73.48\% for Squirrel, and 61.38\% and 78.14\% for Chameleon, respectively. } \label{fig:hop_norm} \end{figure*} \subsection{Soft-Selection Parameter Analysis} We analyze the learned soft-selection parameters on average over different model hyperparameter combinations. We use four different settings: the proposed model setting, without softmax regularization on the scalar weight parameters, with a shared linear transformation layer on the input features, and without hop-normalization on the input feature activations. As shown in Fig.~\ref{fig:scalar_val_heat}, for homophily datasets, it is easy to see that the self-looped features are given more importance. Among the heterophily datasets, Wisconsin, Cornell, Texas, and Actor place the most weight on the node's ego features. In these datasets, the graph structure plays a limited role in the classification accuracy of the model. For the Chameleon and Squirrel datasets, we observed that the node's own features and the first-hop features (without self-loops) were more useful for classification than any other features. \subsection{Hop-Normalization} In our experimental results, we find that the Chameleon and Squirrel datasets show significant improvements. To understand these results better, we create 2-dimensional plots of the trained embeddings of both datasets using t-SNE \cite{maaten_accelerating_2014}. Figure \ref{fig:hop_norm} shows the comparison of the embeddings with and without hop-normalization. Without hop-normalization, the embeddings of the nodes are not separated clearly, resulting in lower classification performance; we observe similar behavior with other GNN models. With hop-normalization, the node embeddings are well separated into clusters corresponding to their labels, leading to the higher observed performance of our model. \subsection{Model Scalability} Many GNN models by design are not scalable to large graph datasets with millions of nodes.
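In contrast, since FSGNN's propagated features are fixed before training, learning reduces to mini-batch training of a two-layer network over precomputed inputs. The sketch below, which combines the three strategies of Section \ref{propose_arch}, is our own illustration; \texttt{feats} (the list of precomputed feature matrices), \texttt{labels}, \texttt{train\_idx}, \texttt{num\_classes}, and \texttt{epochs} are assumed to be given, and the class name is ours:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FSGNNSketch(nn.Module):
    # Per-hop linear layers + hop-normalization + softmax-weighted
    # concatenation, followed by ReLU and a final linear layer.
    def __init__(self, num_feats, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(num_feats)])
        self.scores = nn.Parameter(torch.ones(num_feats))
        self.fc2 = nn.Linear(num_feats * hidden_dim, num_classes)

    def forward(self, feat_list, gamma=1.0):
        alpha = F.softmax(self.scores, dim=0)
        outs = []
        for l, X_l in enumerate(feat_list):
            h = F.normalize(self.fc1[l](X_l), p=2, dim=1)  # hop-norm
            outs.append(gamma * alpha[l] * h)
        return self.fc2(F.relu(torch.cat(outs, dim=1)))

# Mini-batch training slices only rows of the precomputed feature
# matrices; the graph itself is never touched during training.
model = FSGNNSketch(len(feats), feats[0].shape[1], 256, num_classes)
opt = torch.optim.Adam(model.parameters())
for epoch in range(epochs):
    perm = train_idx[torch.randperm(len(train_idx))]
    for idx in torch.split(perm, 10000):
        logits = model([X_l[idx] for X_l in feats])
        loss = F.cross_entropy(logits, labels[idx])
        opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}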
To demonstrate the scalability of our model, we run experiments on the \texttt{ogbn-papers100M} dataset \cite{wang_microsoft_2020}\cite{hu_open_2021}, which is a citation graph with about 111 million nodes, 1.6 billion edges, and 172 node label classes. Similar to our previous experimental settings, we generate a set of features with $A$ and $\Tilde{A}$ for 3-hop aggregation. The dimension of the hidden layer is set to 256, and $\gamma$ is set to $L=7$ (equal to the number of input features) to provide stable training. The model is trained batchwise on the input features for 10 random initializations, and we report the mean accuracy. We compare the accuracy of our model with SGC \cite{wu_simplifying_2019}, Node2Vec \cite{grover_node2vec_2016} and SIGN \cite{frasca_sign_2020}. Similar to our method, the input features can be precomputed in SGC and SIGN, making them scalable to larger datasets. Once the features are computed, the model can be trained with small input batches of node features on the GPU. Many other GNN models cannot be trained on larger graphs, as their feature generation and model training are combined. Table \ref{tab:scalability_result} shows the mean node classification accuracy along with the published results of the other methods, taken from \cite{frasca_sign_2020}\cite{hu_open_2021}. Our model outperforms all other methods, with SIGN having the closest performance to ours. However, SIGN uses the adjacency matrices of both the directed and undirected versions of the graph for feature transformations, while our model only utilizes the adjacency matrix of the undirected graph. \begin{table}[h] \centering \caption{Mean classification accuracy on the ogbn-papers100M dataset. The SGC result is taken from \cite{hu_open_2021}, and the Node2Vec and SIGN results are taken from \cite{frasca_sign_2020}. The best performance is marked in bold and the second-best performance is underlined.} \label{tab:scalability_result} \begin{tabular}{ll} \toprule \multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Accuracy}} \\ \hline \textbf{SGC} & 63.29$\pm$0.19 \\ \textbf{Node2Vec} & 58.07$\pm$0.28 \\ \textbf{SIGN} & \uline{65.11$\pm$0.14} \\ \textbf{FSGNN} & \textbf{67.17$\pm$0.14} \\ \bottomrule \end{tabular} \end{table} \subsection{ Effect of increase in hops } In this section, we evaluate the change in the model's performance as the number of hops used for aggregation increases. We choose one homophily dataset (Cora) and one heterophily dataset (Chameleon). Experiments are run with the number of hops set to 3, 8, 16, and 32. Figure \ref{fig:accuracy_hop} shows the performance of the model for each hop setting. We observe that there is little variation in the performance of the model. This result is intuitive, as aggregated features from higher hops are not very useful, and the model can learn to place low weights on them. \begin{figure}[h] \centering \includegraphics[width=0.82\linewidth]{accuracy_hop.pdf} \caption{Effect on the classification accuracy of FSGNN of increasing the number of hops of feature aggregation on the Cora (homophily) and Chameleon (heterophily) datasets. The x-axis is in logarithmic scale. } \label{fig:accuracy_hop} \end{figure} \section{Conclusion} \label{conclusion} We discuss three GNN design strategies: separation of feature aggregation and representation learning, soft-selection of features, and hop-normalization. Using these simple and effective strategies, we propose a novel GNN model called FSGNN. Using extensive experiments, we show that FSGNN outperforms the current state-of-the-art GNN models on the node classification task.
Analysis of the learned parameters provides crucial information about feature importance. Furthermore, we show that our model can be scaled to graphs with millions of nodes and billions of edges. \vspace{5mm} \section*{Implementation Details} For reproducibility of the experimental results, we provide the details of our experimental setup and the hyperparameters of the model. We use PyTorch 1.6.0 as the deep learning framework on Python 3.8. Model training is done on an Nvidia V100 GPU with 16 GB graphics memory and CUDA version 10.2.89. For the node classification results (Table~\ref{tab:full_super_results}), we perform a grid search over the learning rate and weight decay of the layers and the dropout between the layers. Hyperparameters are set for the first layer $fc1$, the second layer $fc2$, and the scalar weight parameters $sca$. ReLU is used as the non-linear activation and Adam is used as the optimizer. Table \ref{tab:param_search} shows the details of the hyperparameter search space. Tables \ref{tab:3_hop_param} and \ref{tab:8_hop_param} show the best hyperparameters for the model in the 3-hop and 8-hop configurations, respectively. For the experiments on the ogbn-papers100M dataset, we did not perform a grid search; based on the data from earlier experiments, we manually tuned the hyperparameters to obtain the accuracy result. A batch size of 10000 was used for the training data. Table \ref{tab:ogbn_papers} shows the relevant hyperparameters for the model. \begin{table}[h] \centering \caption{Hyperparameter search space} \label{tab:param_search} \begin{tabular}{ll} \toprule \textbf{Hyperparameter} & \multicolumn{1}{c}{\textbf{Values}} \\ \hline \textbf{$WD_{sca}$} & 0.0, 0.0001, 0.001, 0.01, 0.1 \\ \textbf{$LR_{sca}$} & 0.04, 0.02, 0.01, 0.005 \\ \textbf{$WD_{fc1}$} & 0.0, 0.0001, 0.001 \\ \textbf{$WD_{fc2}$} & 0.0, 0.0001, 0.001 \\ \textbf{$LR_{fc}$} & 0.01, 0.005 \\ \textbf{$Dropout$} & 0.5, 0.6, 0.7 \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \centering \caption{Hyperparameters of the 3-hop model} \label{tab:3_hop_param} \resizebox{\linewidth}{!}{% \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{\textbf{Datasets}} & \textbf{$WD_{sca}$} & \textbf{$LR_{sca}$} & \textbf{$WD_{fc1}$} & \textbf{$WD_{fc2}$} & \textbf{$LR_{fc}$} & \textbf{$Dropout$} \\ \hline \textbf{Cora} & 0.1 & 0.01 & 0.001 & 0.0001 & 0.01 & 0.6 \\ \textbf{Citeseer} & 0.0001 & 0.005 & 0.001 & 0.0 & 0.01 & 0.5 \\ \textbf{Pubmed} & 0.01 & 0.005 & 0.0001 & 0.0001 & 0.01 & 0.7 \\ \textbf{Chameleon} & 0.1 & 0.005 & 0.0 & 0.0 & 0.005 & 0.5 \\ \textbf{Wisconsin} & 0.0001 & 0.01 & 0.001 & 0.0001 & 0.01 & 0.5 \\ \textbf{Texas} & 0.001 & 0.01 & 0.001 & 0.0 & 0.01 & 0.7 \\ \textbf{Cornell} & 0.0 & 0.01 & 0.001 & 0.001 & 0.01 & 0.5 \\ \textbf{Squirrel} & 0.1 & 0.04 & 0.0 & 0.001 & 0.01 & 0.7 \\ \textbf{Actor} & 0.0 & 0.04 & 0.001 & 0.0001 & 0.01 & 0.7 \\ \bottomrule \end{tabular} } \end{table} \begin{table}[h] \centering \caption{Hyperparameters of the 8-hop model} \label{tab:8_hop_param} \resizebox{\linewidth}{!}{% \begin{tabular}{lcccccc} \toprule \multicolumn{1}{l}{ \textbf{Datasets} } & \textbf{$WD_{sca}$} & \textbf{$LR_{sca}$} & \textbf{$WD_{fc1}$} & \textbf{$WD_{fc2}$} & \textbf{$LR_{fc}$} & \textbf{$Dropout$} \\ \hline \textbf{Cora} & 0.1 & 0.02 & 0.001 & 0.0001 & 0.01 & 0.6 \\ \textbf{Citeseer} & 0.0001 & 0.01 & 0.001 & 0.0001 & 0.01 & 0.5 \\ \textbf{Pubmed} & 0.01 & 0.02 & 0.0001 & 0.0 & 0.005 & 0.7 \\ \textbf{Chameleon} & 0.1 & 0.01 & 0.0 & 0.0 & 0.005 & 0.5 \\ \textbf{Wisconsin} & 0.001 & 0.02 & 0.001 & 0.0001 & 0.01 & 0.5 \\ \textbf{Texas} & 0.01 & 0.01 & 0.001 & 0.0 & 0.01 & 0.7 \\
\textbf{Cornell} & 0.0 & 0.01 & 0.001 & 0.0001 & 0.01 & 0.5 \\ \textbf{Squirrel} & 0.1 & 0.02 & 0.0 & 0.0001 & 0.01 & 0.5 \\ \textbf{Actor} & 0.0001 & 0.04 & 0.001 & 0.0001 & 0.01 & 0.7 \\ \bottomrule \end{tabular} } \end{table} \begin{table}[h!] \centering \caption{Hyperparameters for the ogbn-papers100M dataset} \label{tab:ogbn_papers} \resizebox{\linewidth}{!}{% \begin{tabular}{lccccccc} \toprule \textbf{Dataset} & \textbf{$WD_{sca}$} & \textbf{$LR_{sca}$} & \textbf{$WD_{fc1}$} & \textbf{$WD_{fc2}$} & \textbf{$LR_{fc1}$}& \textbf{$LR_{fc2}$} & \textbf{$Dropout$} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{ogbn-papers100M}\\ \end{tabular} & 0.1 & 0.0001 & 0.001 & 0.000001 & 0.00005 & 0.0002 & 0.5 \\ \bottomrule \end{tabular} } \end{table} \section*{Acknowledgement} This work was supported by JSPS Grants-in-Aid for Scientific Research (Grant Numbers 21K12042 and 17H01785), JST CREST (Grant Number JPMJCR1687), and the New Energy and Industrial Technology Development Organization (Grant Number JPNP20006). \printbibliography
{ "attr-fineweb-edu": 1.883789, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfeU5qhDCz6hCKk0V
\section{Introduction}\label{sec:Intro} During the past decade, control scientists have developed various tools for the regulation of large-scale systems, with the notable examples of~\cite{orosz2010controlling} for the control of biological systems,~\cite{ching2012distributed} for the regulation of brain and neural networks,~\cite{2013arXiv1309.6270P} for network protection against spreading processes, and~\cite{Chen-2012-DR-Springer} for load management in smart grid. On the other hand, the enormous size of these systems and the need for cost-effective control make the identification of a small fraction of their nodes to steer them around the state space a central problem within the control community~\cite{ 2013arXiv1304.3071O,citeulike:13239948, ramos2014np, 2014arXiv1404.7665S, bullo2014,2014arXiv1409.3289T}. This is a combinatorial task of formidable complexity; as shown in~\cite{2013arXiv1304.3071O}, identifying a small set of actuator nodes so that the resultant system is controllable alone is NP-hard. Nonetheless, a controllable system may be practically uncontrollable if the required input energy for the desired state transfers is forbidding, as when the controllability matrix is close to singularity~\cite{Chen:1998:LST:521603}. Therefore, by choosing input nodes to ensure controllability alone, one may not achieve cost-effective control for the involved state transfers. In this paper, we aim to address this important requirement by introducing a best-approximation polynomial-time algorithm to actuate a small fraction of a system's states so that controllability is ensured and a specified control energy performance is guaranteed. In particular, we consider the selection of a minimal number of actuators such that a bound on the minimum control effort for a given transfer is satisfied while controllability is ensured. Finding the appropriate choice of such a subset of nodes is a challenging task, since the search for a subset satisfying certain criteria constitutes a combinatorial optimization problem that can be computationally intensive. Indeed, it is shown in~\cite{2013arXiv1304.3071O} that identifying the minimum number of actuators for inducing controllability alone is NP-hard. Therefore, we extend this computationally hard problem by imposing an energy constraint on the choice of the actuator set, and we solve it with an efficient approximation algorithm. Specifically, we first generalize the involved energy objective to an $\epsilon$-close one, which remains well-defined even when the controllability matrix is non-invertible. Then, we make use of this metric and relax the controllability constraint of the original problem. Nonetheless, we show that for certain values of $\epsilon$ all solutions of this auxiliary program still render the system controllable. This fact, along with a supermodularity property of the generalized objective that we establish, leads to a polynomial-time algorithm that approximates up to a multiplicative factor of $O(\log n)$ any optimal actuator set that meets the specified energy bound, when the latter lies in a certain range with respect to $n$. Moreover, we show that this is the best approximation factor one can achieve in polynomial time for the worst case. Hence, with this algorithm we aim to address the open problem of actuator placement with energy performance guarantees~\cite{2013arXiv1304.3071O, 2014arXiv1404.7665S,bullo2014,PhysRevLett.108.218703,PhysRevLett.110.208701}.
To the best of our knowledge, we are the first to study the selection of a minimal number of actuators so that a bound on the minimum control effort for a given transfer is satisfied. Our results are also applicable to the case of average control energy metrics~\cite{Muller1972237} and can be extended to the cardinality-constrained actuator placement for minimum control effort, where the optimal actuator set is selected so that these metrics are minimized, while its cardinality is upper bounded by a given value. These and other relevant extensions are explored in the companion manuscript~\cite{2014arXiv1409.3289T}. The remainder of this paper is organized as follows. The formulation and model for the actuator selection problem are set forth in Section \ref{sec:Prelim}. In Section~\ref{sec:Min_N} we discuss our main results, including the intractability of this problem, as well as the supermodularity of the involved control energy objective. Then, we provide an efficient approximation algorithm for its solution. Finally, in Section~\ref{sec:examples} we illustrate our analytical findings on an integrator chain network and test their performance over large Erd\H{o}s-R\'{e}nyi random networks. Section~\ref{sec:conc} concludes the paper. All proofs can be found in the Appendix. \section{Problem Formulation} \label{sec:Prelim} \paragraph*{Notation} Denote the set of natural numbers $\{1,2,\ldots\}$ as $\mathbb{N}$, the set of real numbers as $\mathbb{R}$, and let $[n]\equiv \{1, 2, \ldots, n\}$ for all $n \in \mathbb{N}$. Also, given a set $\mathcal{X}$, denote $|\mathcal{X}|$ as its cardinality. Matrices are represented by capital letters and vectors by lower-case letters. For a matrix ${A}$, ${A}^{T}$ is its transpose, and $A_{ij}$ is its element located at the $i$-th row and $j$-th column. For a symmetric matrix ${A}$, ${A}={A}^{T}$; and if ${A}$ is positive semi-definite, or positive definite, we write ${A} \succeq {0}$ and ${A}\succ {0}$, respectively. Moreover, for $i \in [n]$, let ${I}^{(i)}$ be the $n \times n$ matrix with a single non-zero element, $I^{(i)}_{ii}=1$, while all its other entries are zero, and denote the identity matrix by ${I}$, where its dimension is inferred from the context. Additionally, for $\delta \in \mathbb{R}^n$, ${diag}(\delta)$ denotes an $n \times n$ diagonal matrix such that ${diag}(\delta)_{ii}=\delta_i$, for all $i \in [n]$. \subsection{Actuator Placement Model} Consider a linear system of $n$ states, $x_1, x_2,\ldots,x_n$, whose evolution is described by \begin{align} \dot{{x}}(t) = {A}{{x}}(t) + {B}{{u}}(t), t > t_0, \label{eq:dynamics} \end{align} where $t_0 \in \mathbb{R}$ is fixed, ${x}\equiv \{x_1,x_2,\ldots,x_n\}$, $\dot{{x}}(t)\equiv d{x}/dt$, while ${u}$ is the corresponding input vector. The matrices ${A}$ and ${B}$ are of appropriate dimensions. Without loss of generality, we also refer to~\eqref{eq:dynamics} as a network of $n$ agents, $1, 2,\ldots, n$, which we associate with the states $x_1, x_2,\ldots, x_n$, respectively. Moreover, we denote their collection as $\mathcal{V}\equiv[n]$. Henceforth, the interaction matrix ${A}$ is fixed, while a special structure is assumed for the input matrix ${B}$. \begin{myassump}\label{assump:Diag_B} ${B}={diag}(\delta)$, where $\delta\in\{0,1\}^{n}$. \end{myassump} Each choice of the binary vector $\delta$ in Assumption~\ref{assump:Diag_B} signifies a particular selection of agents as actuators. Hence, if $\delta_i=1$, state $i$ may receive an input, while if $\delta_i=0$, it receives none.
We collect the above into the next definition. \begin{mydef}[{Actuator Set, Actuator}] Given $\delta \in \{0,1\}^{n}$ and ${B} = {diag}(\delta)$, let $\Delta \subseteq \mathcal{V}$ be such that $\forall i\in \Delta$, $\delta_i=1$, while $\forall i\notin \Delta$, $\delta_i=0$; then, $\Delta$ is called an \emph{actuator set} and any agent $i \in \Delta$ is called an \emph{actuator}. \end{mydef} \subsection{Controllability and the Minimum Energy Transfer Problem} We consider the notion of controllability and relate it to the problem of selecting a minimum number of actuators for the satisfaction of a control energy constraint. Recall that~\eqref{eq:dynamics} is controllable if for any finite $t_1>t_0$ and any initial state ${x}_0\equiv {x}(t_0)$, the system can be steered to any other state ${x}_1\equiv {x}(t_1)$ by some input ${u}(t)$ defined over $[t_0, t_1]$. Moreover, for general matrices ${A}$ and ${B}$, the controllability condition is equivalent to the matrix \begin{align} {\Gamma}(t_0,t_1)\equiv \int_{t_0}^{t_1} \mathrm{e}^{{A}(t-t_0)} {B}{B}^{T} \mathrm{e}^{{A}^{T}(t-t_0)}\,\mathrm{d}{t},\label{eq:general_gramian} \end{align} being positive definite for any $t_1>t_0$~\cite{Chen:1998:LST:521603}. Therefore, we refer to ${\Gamma}(t_0,t_1)$ as the \textit{controllability matrix} of~\eqref{eq:dynamics}. The controllability of a linear system is of great interest, because it is related to the solution of the following minimum energy transfer problem \begin{equation}\label{pr:min_energy_transfer} \begin{aligned} \underset{{u}(\cdot)}{\text{minimize}} & \; \; \; \int_{t_0}^{t_1} {u}(t)^T{u}(t)\,\mathrm{d}{t}\\ \text{subject to} \\ & \dot{{x}}(t) = {A}{{x}}(t) + {B}{{u}}(t), t_0 <t \leq t_1,\\ & {x}(t_0)= {x_0}, {x}(t_1)={x_1}, \end{aligned} \end{equation} where ${A}$ and ${B}$ are any matrices of appropriate dimension. In particular, if~\eqref{eq:dynamics} is controllable for the given ${A}$ and ${B}$, the resulting minimum control energy is given by \begin{align} ({x}_1 - \mathrm{e}^{{A}\tau}{x}_0)^{T}{\Gamma}(t_0,t_1)^{-1}({x}_1 - \mathrm{e}^{{A}\tau}{x}_0),\label{exact_energy} \end{align} where $\tau=t_1-t_0$~\cite{Muller1972237}. Therefore, if ${x}_1 - \mathrm{e}^{{A}\tau}{x}_0$ lies in the span of the eigenvectors of ${\Gamma}(t_0,t_1)$ corresponding to its smallest eigenvalues, the minimum control effort~\eqref{exact_energy} may be forbiddingly high~\cite{Chen:1998:LST:521603}. Hence, when we choose the actuators of a network so that controllability is ensured and an input energy constraint for a specified state transfer is satisfied, we should take into account their effect on ${\Gamma}(t_0,t_1)^{-1}$. Moreover, controllability is an indispensable property for any linear system, and in many cases it is viewed as a structural attribute of the involved system~\cite{1100557} that holds true even with single input nodes, as in large-scale neural networks~\cite{citeulike:13239948}. This motivates further the setting of this paper, where the actuators are chosen so that a bound on the minimum control effort for a given transfer is satisfied and overall controllability is respected. Per Assumption~\ref{assump:Diag_B}, some further properties of the controllability matrix follow.
First, given an actuator set $\Delta$, associated with some $\delta$, let ${\Gamma}_\Delta \equiv {\Gamma}(t_0,t_1)$; then, \begin{align} {\Gamma}_{\Delta} = \sum_{i=1}^n \delta_i {\Gamma}_i, \label{eq:gramianTOdelta} \end{align}where for any $i \in [n]$, ${\Gamma}_i = \int_{t_0}^{t_1} \mathrm{e}^{{A}t} {I}^{(i)} \mathrm{e}^{{A}^{T} t}\,\mathrm{d}{t}$, that is, each ${\Gamma}_i$ is a constant positive semi-definite matrix determined by ${A}$, $t_0$ and $t_1$. To see why~\eqref{eq:gramianTOdelta} holds true, observe that ${B} = {diag}(\delta)$ implies ${B} = {B} {B}^{T} = \sum_{i = 1}^{n} \delta_i {I}^{(i)}$, and \eqref{eq:gramianTOdelta} follows upon replacing this in~\eqref{eq:general_gramian}. Furthermore, note that \eqref{eq:gramianTOdelta}, together with the fact that ${\Gamma}_i \succeq {0}$ for any $i \in [n]$, gives ${\Gamma}_{\Delta_1}\preceq {\Gamma}_{\Delta_2} $ whenever $ {\Delta_1}\subseteq{\Delta_2}$. \subsection{Actuator Placement Problem}\label{subsec:leader_pr} We consider the problem of actuating a small number of the states of system~\eqref{eq:dynamics} so that the minimum control energy for a given transfer meets some specified criterion and controllability is ensured. The challenge is in doing so using as few actuators as possible. This is an important improvement over the existing literature, where the goal of actuator placement problems has either been to ensure just controllability~\cite{2013arXiv1304.3071O} or the weaker property of structural controllability~\cite{jafari2011leader,Commault20133322}. Other relevant results consider the task of leader-selection~\cite{clark2014_2,clark2014_1}, where the leaders, i.e., actuated agents, are chosen so as to minimize an appropriate mean-square convergence error of the remaining agents. Our work also departs from a set of works that study average energy metrics, such as the minimum eigenvalue of the controllability Gramian or the trace of its inverse~\cite{2014arXiv1404.7665S,bullo2014,PhysRevLett.108.218703}. Instead, here we consider an exact energy objective and require it to satisfy a particular upper bound. Let $\mathcal{C}_r \equiv \{\Delta \subseteq \mathcal{V}: |\Delta| \leq r, {\Gamma}_\Delta \succ 0\}$ be the actuator sets of cardinality at most $r$ that render~\eqref{eq:dynamics} controllable. Then, for any $\Delta \subseteq \mathcal{V}$, we write $\Delta \in \mathcal{C}_{|\Delta|}$ to denote that $\Delta$ achieves controllability. Furthermore, we set \[ {v}\equiv ({x}_1 - \mathrm{e}^{{A}\tau}{x}_0)/\|{x}_1 - \mathrm{e}^{{A}\tau}{x}_0\|_2. \] We consider the problem \begin{equation}\tag{I}\label{pr:min_set} \begin{aligned} \underset{\Delta \subseteq \mathcal{V}}{\text{minimize}} & \; \; \; |\Delta|\\ \text{subject to} \\ & \Delta \in \mathcal{C}_{|\Delta|},\\ &{v}^{T}{\Gamma}_\Delta^{-1}{v} \leq E, \end{aligned} \end{equation} for some positive constant $E$. This problem is a generalized version of the minimal controllability problem considered in~\cite{2013arXiv1304.3071O}, so that its solution not only ensures controllability, but also provides a guarantee in terms of the minimum input energy required for the normalized transfer from ${x}_0$ to ${x}_1$; indeed, for $E\rightarrow\infty$, we recover the problem of~\cite{2013arXiv1304.3071O}. For some extra properties of~\eqref{pr:min_set}, note that for any $\Delta \in \mathcal{C}_{|\Delta|}$, $0 \prec {\Gamma}_\Delta \preceq {\Gamma}_\mathcal{V}$, i.e. ${v}^{T}{\Gamma}_\mathcal{V}^{-1}{v}\leq {v}^{T}{\Gamma}_\Delta^{-1}{v}$~\cite{bernstein2009matrix}.
Hence,~\eqref{pr:min_set} is feasible for any $E$ such that \begin{align} {v}^{T}{\Gamma}_\mathcal{V}^{-1}{v}\leq E.\label{lo_bound_E} \end{align} Observe that this lower bound depends only on $A$ and $v$, i.e. also on $n$, as well as on $t_0$ and $t_1$. Moreover,~\eqref{pr:min_set} is NP-hard, since it looks for a minimal solution and so it asks whether $\mathcal{C}_r \neq \emptyset$ for any $r< n$~\cite{2013arXiv1304.3071O}. Thus, we need to identify an efficient approximation algorithm for its solution, which is the subject of the next section. \section{Minimal Actuator Sets with Constrained Minimum Energy Performance}\label{sec:Min_N} We present an efficient polynomial-time approximation algorithm for~\eqref{pr:min_set}. To this end, we first generalize the involved energy objective to an $\epsilon$-close one, which remains well-defined even when the controllability matrix is non-invertible. Next, we relax~\eqref{pr:min_set} by introducing a program that makes use of this objective and ignores the controllability constraint of \eqref{pr:min_set}. Nonetheless, we show that for certain values of $\epsilon$ all solutions of this auxiliary program still render the system controllable. This fact, along with the supermodularity property of the generalized objective that we establish, leads to our proposed approximation algorithm. The discussion of its efficiency ends the analysis of~\eqref{pr:min_set}. \subsection{An $\epsilon$-close Auxiliary Problem}\label{subsubsec:Min_N_aux} Consider the following approximation to Problem~\eqref{pr:min_set} \begin{equation}\tag{I$'$}\label{pr:min_set_approx} \begin{aligned} \underset{\Delta \subseteq \mathcal{V}}{\text{minimize}} & \; \; \; |\Delta|\\ \text{subject to} \\ & \phi(\Delta) \leq E, \end{aligned} \end{equation} where $ \phi(\Delta)\equiv {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}+ \epsilon\sum_{i=1}^{n-1}\bar{{v}}_i^{T}({\Gamma}_\Delta+\epsilon^2{I})^{-1}\bar{{v}}_i$, for any $\Delta \subseteq \mathcal{V}$, while $\bar{{v}}_1,\bar{{v}}_2,\ldots,\bar{{v}}_{n-1}$ form an orthonormal basis for the null space of ${v}$, and $\epsilon$ is fixed such that $0< \epsilon \leq 1/E$, given $E$. Observe that the controllability constraint is now ignored, while the energy objective is well-defined for any actuator set $\Delta$, including the empty set, since the invertibility of ${\Gamma}_\Delta+\epsilon{I}$ and ${\Gamma}_\Delta+\epsilon^2{I}$ is always guaranteed for $\epsilon > 0$. The $\epsilon$-closeness is evident, since for any $\Delta \in \mathcal{C}_{|\Delta|}$, $ \phi(\Delta)\rightarrow {v}^{T}{\Gamma}_\Delta^{-1}{v}$ as $\epsilon\rightarrow 0$. Notice that we can take $\epsilon \rightarrow 0$, since we assume any positive $\epsilon \leq 1/E$. \subsection{Approximation Algorithm for Problem~\eqref{pr:min_set_approx}} \label{subsubsec:Min_N_alg} We first prove that all solutions of~\eqref{pr:min_set_approx} for $0<\epsilon \leq 1/E$ render the system controllable, even though no controllability constraint is imposed by this program on the choice of the actuator sets. Moreover, we show that the involved $\epsilon$-close energy objective is supermodular, and then we present our approximation algorithm, followed by a discussion of its efficiency, which ends this subsection. \begin{myproposition}\label{prop:suf_contr} Fix $\omega>0$. Then, for any $\epsilon$ with $0< \epsilon\leq 1/\omega$ and any $\Delta \subseteq \mathcal{V}$, if $\phi(\Delta) \leq \omega$, then $\Delta \in \mathcal{C}_{|\Delta|}$.
\end{myproposition} Note that $\omega$ is chosen independently of the parameters of system~\eqref{eq:dynamics}. Therefore, the absence of the controllability constraint in Problem~\eqref{pr:min_set_approx} for $0<\epsilon \leq 1/E$ is fictitious; nonetheless, it obviates the need to consider only those actuator sets that render the system controllable. The next proposition is also essential and suggests an efficient approximation algorithm for solving~\eqref{pr:min_set_approx}. \begin{myproposition}[Supermodularity]\label{prop:subm} The function ${v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}+ \epsilon\sum_{i=1}^{n-1}\bar{{v}}_i^{T}({\Gamma}_\Delta+\epsilon^2{I})^{-1}\bar{{v}}_i: \Delta \subseteq \mathcal{V} \mapsto \mathbb{R}$ is supermodular with respect to the choice of $\Delta$. \end{myproposition} Inspired by the literature on set-covering problems subject to submodular constraints~\cite{Nemhauser:1988:ICO:42805, citeulike:416650,krause2012submodular}, we have the following efficient approximation algorithm for Problem~\eqref{pr:min_set_approx}, and, as we illustrate by the end of this section, for Problem~\eqref{pr:min_set} as well. We note that a corollary of the above proposition is that ${v}^{T}{\Gamma}_{(\cdot)}^{-1}{v}$ is supermodular as well, but over the sets $\Delta\subseteq \mathcal{V}$ that render~\eqref{eq:dynamics} controllable. \begin{algorithm} \caption{Approximation Algorithm for the Problem~\eqref{pr:min_set_approx}.}\label{alg:minimal-leaders} \begin{algorithmic} \REQUIRE Upper bound $E$, approximation parameter $\epsilon \leq 1/E$, matrices ${\Gamma}_1, {\Gamma}_2, \ldots, {\Gamma}_n$, vector ${v}$. \ENSURE Actuator set $\Delta$. \STATE $\Delta\leftarrow\emptyset$ \WHILE {$\phi(\Delta) > E $} \STATE{ $a_i \in \text{argmax}_{a \in \mathcal{V}\setminus \Delta}\{ \phi(\Delta)-\phi(\Delta\cup\{a\}) \}$\\ \quad \mbox{} $\Delta \leftarrow \Delta \cup \{a_i\}$ } \ENDWHILE \end{algorithmic} \end{algorithm} For the efficiency of Algorithm~\ref{alg:minimal-leaders} the following holds. \begin{mytheorem}[A Submodular Set Coverage Optimization]\label{th:minimal} Denote as $l^\star$ the cardinality of a solution to Problem~\eqref{pr:min_set_approx} and as $\Delta$ the selected set by Algorithm~\ref{alg:minimal-leaders}. Then, \begin{align} &\Delta \in \mathcal{C}_{|\Delta|},\label{explain:th:minimal2}\\ &\phi(\Delta) \leq E,\label{explain:th:minima3}\\ &\frac{|\Delta|}{l^\star}\leq 1+\log \frac{n\epsilon^{-1}-\phi(\mathcal{V})}{E-\phi(\mathcal{V})}\equiv F, \label{explain:th:minimal1}\\ &F=O(\log n + \log \epsilon^{-1}+\log \frac{1}{E-\phi(\mathcal{V})}).\label{explain:approx_error0} \end{align} \end{mytheorem} Therefore, the polynomial-time Algorithm~\ref{alg:minimal-leaders} returns a set of actuators that meets the corresponding control energy bound of Problem~\eqref{pr:min_set_approx}, while it renders system~\eqref{eq:dynamics} controllable. Moreover, the cardinality of this set is up to a multiplicative factor of $F$ from the minimum cardinality actuator sets that meet the same control energy bound. In Section~\ref{sebsec:Quality} we elaborate further on the dependence of this multiplicative factor on $n$, $\epsilon$ and $E$, using~\eqref{explain:approx_error0}, while in Section~\ref{subsec:ApproximationAlgorithm} we finalize our treatment of Problem~\eqref{pr:min_set} by employing Algorithm~\ref{alg:minimal-leaders} to approximate its solutions.
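For concreteness, Algorithm~\ref{alg:minimal-leaders} admits the following direct implementation. This is a minimal Python/NumPy sketch of the pseudocode above (the variable names are ours and the code is not optimized), where \texttt{Gamma\_list} holds the matrices ${\Gamma}_1,\ldots,{\Gamma}_n$ and \texttt{V\_bar} contains the orthonormal basis $\bar{{v}}_1,\ldots,\bar{{v}}_{n-1}$ as columns.

\begin{verbatim}
# Minimal NumPy sketch of Algorithm 1 (greedy supermodular set cover).
# Gamma_list: the n PSD matrices Gamma_i; v: unit vector; V_bar: columns
# spanning the null space of v; eps <= 1/E.  Assumes E > phi(V) so that
# the loop terminates.  Illustrative only, not optimized.
import numpy as np

def phi(S, Gamma_list, v, V_bar, eps):
    n = len(v)
    G = sum((Gamma_list[i] for i in S), np.zeros((n, n)))
    val = v @ np.linalg.solve(G + eps * np.eye(n), v)
    for j in range(V_bar.shape[1]):
        vb = V_bar[:, j]
        val += eps * (vb @ np.linalg.solve(G + eps**2 * np.eye(n), vb))
    return val

def greedy_actuators(E, eps, Gamma_list, v, V_bar):
    S, n = set(), len(v)
    while phi(S, Gamma_list, v, V_bar, eps) > E:
        base = phi(S, Gamma_list, v, V_bar, eps)
        gains = {a: base - phi(S | {a}, Gamma_list, v, V_bar, eps)
                 for a in set(range(n)) - S}
        S.add(max(gains, key=gains.get))   # largest marginal decrease
    return S
\end{verbatim}

Each iteration merely re-evaluates $\phi$ for the remaining candidates, so the overall complexity is polynomial in $n$, consistent with the guarantees of Theorem~\ref{th:minimal}.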
\subsection{Quality of Approximation of Algorithm~\ref{alg:minimal-leaders} for Problem~\eqref{pr:min_set_approx}}\label{sebsec:Quality} The result in~\eqref{explain:approx_error0} was expected from a design perspective: increasing the network size $n$ or improving the accuracy by decreasing $\epsilon$, as well as demanding a better energy guarantee by decreasing $E$, should all push the cardinality of the selected actuator set upwards. Also, note that $\log \epsilon^{-1}$ is the design cost for circumventing the difficulty of satisfying the controllability constraint of Problem~\eqref{pr:min_set} directly~\cite{2013arXiv1304.3071O}. Furthermore, per~\eqref{explain:approx_error0} and with $E-\phi(\mathcal{V})$ and $\epsilon$ both fixed, the cardinality of the actuator set that Algorithm~\ref{alg:minimal-leaders} returns is up to a multiplicative factor of $O(\log n)$ from the minimum cardinality actuator sets that meet the same performance criterion. We note that this is the best achievable bound in polynomial time for the set covering problem in the worst case~\cite{Feige:1998:TLN:285055.285059}, while~\eqref{pr:min_set_approx} is a generalization of it (cf.~\cite{2013arXiv1304.3071O}). \subsection{Approximation Algorithm for Problem~\eqref{pr:min_set}}\label{subsec:ApproximationAlgorithm} We present an efficient approximation algorithm for Problem~\eqref{pr:min_set} that is based on Algorithm~\ref{alg:minimal-leaders}. To this end, let $\Delta$ be the actuator set returned by Algorithm~\ref{alg:minimal-leaders}, i.e. $\Delta \in \mathcal{C}_{|\Delta|}$ and $\phi(\Delta)\leq E$. Moreover, denote as $\lambda_1$, $\lambda_2$, $\ldots$, $\lambda_n$ and ${q}_1$, ${q}_2$, $\ldots$, ${q}_n$ the eigenvalues and the corresponding orthonormal eigenvectors of ${\Gamma}_{\Delta}$, respectively. Additionally, let $\lambda_m\equiv\text{min}_{i\in [n]}\lambda_i$ and ${q}_M\equiv \text{argmax}_{{q_i}, i\in[n]} {v}^T{q_i}$. Finally, consider a positive $\epsilon$ such that $n\epsilon({v}^T{q}_M)^2/\lambda_m^2\leq cE$, for some $c>0$. Then, \begin{align} \phi(\Delta)>{v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}&=\sum_{j=1}^{n}\frac{({v}^T{q_j})^2}{\lambda_j+\epsilon} \label{eq:aux_1}\\ &\geq{v}^{T}{\Gamma}_\Delta^{-1}{v}-\frac{n\epsilon({v}^T{q}_M)^2}{\lambda_m^2}\label{eq:aux_2}\\ &\geq {v}^{T}{\Gamma}_\Delta^{-1}{v}-cE, \label{eq:aux_3} \end{align} where we derived~\eqref{eq:aux_2} from~\eqref{eq:aux_1} using the fact that for any $x\geq 0$, $1/(1+x)\geq 1-x$, while the rest follow from the definition of $\lambda_m$ and ${q}_M$, as well as the assumption $n\epsilon({v}^T{q}_M)^2/\lambda_m^2\leq cE$. Moreover, it is also true that $\phi(\Delta)\leq E$ by the definition of $\Delta$, and therefore from~\eqref{eq:aux_3} we get \begin{align} {v}^{T}{\Gamma}_\Delta^{-1}{v}\leq (1+c)E. \label{eq:approx_error} \end{align} Hence, we refer to $c$ as the \textit{approximation error}. On the other hand, $\lambda_m$ and ${q}_M$ are not in general known in advance. Hence, we need to search for a sufficiently small value of $\epsilon$ so that~\eqref{eq:approx_error} holds. One way to achieve this, since $\epsilon$ is lower and upper bounded by $0$ and $1/E$, respectively, is to perform a binary search. We implement this procedure in Algorithm~\ref{alg:minimal-leaders_final}, where we denote as $[\text{Algorithm}~\ref{alg:minimal-leaders}](E,\epsilon)$ the set that Algorithm~\ref{alg:minimal-leaders} returns, for given $E$ and $\epsilon$.
\begin{algorithm} \caption{Approximation Algorithm for the Problem~\eqref{pr:min_set}.}\label{alg:minimal-leaders_final} \begin{algorithmic} \REQUIRE Upper bound $E$, approximation error $c$, bisection's accuracy level $a$, matrices ${\Gamma}_1, {\Gamma}_2, \ldots, {\Gamma}_n$, vector ${v}$. \ENSURE Actuator set $\Delta$. \STATE $l\leftarrow 0$, $u\leftarrow 1/E$, $\epsilon\leftarrow(l+u)/2$ \WHILE {$u-l>a$}\\ $\Delta \leftarrow [\text{Algorithm}~\ref{alg:minimal-leaders}](E,\epsilon)$ \IF {${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}> cE$}\STATE{$u\leftarrow \epsilon$} \ELSE \STATE{$l\leftarrow \epsilon$} \ENDIF\\ $\epsilon\leftarrow (l+u)/2$ \ENDWHILE \IF {${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}> cE$} \STATE{$u\leftarrow \epsilon$}, $\epsilon\leftarrow (l+u)/2$ \ENDIF\\ $\Delta \leftarrow [\text{Algorithm}~\ref{alg:minimal-leaders}](E,\epsilon)$\label{exit_step} \end{algorithmic} \end{algorithm} Note that in the worst case, when we first enter the \texttt{while} loop, the \texttt{if} condition is satisfied and, as a result, $\epsilon$ is set to a lower value. This process continues until the \texttt{if} condition fails for the first time, from which point on the algorithm converges, up to the accuracy level $a$, to the largest value $\bar{\epsilon}$ of $\epsilon$ such that ${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}\leq cE$; specifically, $|\epsilon-\bar{\epsilon}| \leq a/2$, due to the mechanics of the bisection. Then, Algorithm~\ref{alg:minimal-leaders_final} exits the \texttt{while} loop, and the last \texttt{if} statement ensures that $\epsilon$ is set below $\bar{\epsilon}$, so that ${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v} \leq cE$. The efficiency of this algorithm for Problem~\eqref{pr:min_set} is summarized below. \begin{mytheorem}[Approximation Efficiency of Algorithm~\ref{alg:minimal-leaders_final} for Problem~\eqref{pr:min_set}]\label{th:minimal_set_main} Denote as $l^\star$ the cardinality of a solution to Problem~\eqref{pr:min_set_approx} and as $\Delta$ the selected set by Algorithm~\ref{alg:minimal-leaders_final}. Then, \begin{align} &\Delta \in \mathcal{C}_{|\Delta|}, \nonumber\\ &{v}^{T}{\Gamma}_\Delta^{-1}{v} \leq (1+c)E, \label{state:1}\\ &\frac{|\Delta|}{l^\star}\leq F, \label{state:1.5}\\ &F = O(\log n+ \log \frac {1}{c\lambda_mE}+\log \frac{1}{E-\phi(\mathcal{V})}).\label{state:2} \end{align} \end{mytheorem} We remark that as $\epsilon \to 0$, $\phi(\cdot)\to {v}^{T}({\Gamma}_{(\cdot)}+\epsilon{I})^{-1}{v}$. Therefore, for any solution $\Delta^\circ$ to Problem~\eqref{pr:min_set} and $\epsilon$ small enough, $|\Delta^\circ| \geq l^\star$; to see this, note that as $\epsilon \to 0$ for any $\Delta^\circ$, $\phi(\Delta^\circ) \to {v}^{T}({\Gamma}_{\Delta^\circ}+\epsilon{I})^{-1}{v} < {v}^{T}{\Gamma}_{\Delta^\circ}^{-1}{v} \leq E$, since also $\epsilon>0$, i.e. any $\Delta^\circ$ is a candidate solution to Problem~\eqref{pr:min_set_approx}, which implies $|\Delta^\circ| \geq l^\star$. Therefore, for $\epsilon$ small enough we have $|\Delta|/|\Delta^\circ| \leq |\Delta|/l^\star$.
Hence,~\eqref{state:1.5} can be written as $|\Delta|/|\Delta^\circ| \leq F$, that is, the worst-case bound~\eqref{state:1.5} holds also with respect to the cardinality of any solution to~\eqref{pr:min_set}; as a result, the best-approximation properties of Algorithm~\ref{alg:minimal-leaders} are inherited by Algorithm~\ref{alg:minimal-leaders_final} as well. \section{Examples and Discussion}\label{sec:examples} We test the performance of the proposed algorithm over various systems, starting with an integrator chain in Subsection~\ref{subsec:integratorChain} and following up with Erd\H{o}s-R\'{e}nyi random networks in Subsection~\ref{subsec:randomGraphs}. \subsection{The Case of an Integrator Chain}\label{subsec:integratorChain} \begin{figure}[t] \centering \begin{tikzpicture} \tikzstyle{every node}=[draw,shape=circle]; \node (v0) at (0:0) {$1$}; \node (v1) at ( 0:1) {$2$}; \node (v2) at ( 0:2) {$3$}; \node (v3) at (0:3) {$4$}; \node (v4) at (0:4) {$5$}; \foreach \from/\to in {v0/v1, v1/v2, v2/v3, v3/v4} \draw [->] (\from) -- (\to); \draw (v0) -- (v1) (v1) -- (v2) (v2) -- (v3) (v3) -- (v4); \end{tikzpicture} \caption{A $5$-node integrator chain.} \label{fig:chain} \end{figure} We first illustrate the mechanics and efficiency of Algorithm~\ref{alg:minimal-leaders_final} using the integrator chain in Fig.~\ref{fig:chain}, where we let \begin{align} {A} = \left[\begin{array}{ccccc} -1 & 0 & 0 & 0 & 0\\ 1 & -1 & 0 & 0 & 0\\ 0 & 1 & -1 & 0 & 0\\ 0 & 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 1 & -1 \end{array}\right].\nonumber \end{align} We first run Algorithm~\ref{alg:minimal-leaders_final} with $E \leftarrow {v}^{T}{\Gamma}_{\{1,5\}}^{-1}{v}$ and $a, c \leftarrow 0.001$, and examine the transfer from ${x}(0) \leftarrow [0, 0, 0, 0, 0]^T$ to ${x}(1) \leftarrow [1, 1, 1, 1, 1]^T$. The algorithm returned the actuator set $\{1,4\}$. As expected, node $1$ is chosen, and this remains true for any other value of ${x}(1)$, since for a chain network to be controllable it is necessary and sufficient that node $1$ be actuated. Additionally, $\{1,4\}$ is the exact best actuator set for achieving this transfer. This is true because using MATLAB\textsuperscript{\textregistered{}} we can compute \begin{align*} &{v}^{T}{\Gamma}_{\{1\}}^{-1}{v}=5.2486\cdot10^6, {v}^{T}{\Gamma}_{\{1,2\}}^{-1}{v}=2.0860\cdot10^4, \\ &{v}^{T}{\Gamma}_{\{1,3\}}^{-1}{v}=159.9369, {v}^{T}{\Gamma}_{\{1,4\}}^{-1}{v}=159.1712,\\ &{v}^{T}{\Gamma}_{\{1,5\}}^{-1}{v}=2.1086\cdot10^4. \end{align*} Hence, node $1$ alone does not satisfy the upper bound $E$, while ${v}^{T}{\Gamma}_{\{1,4\}}^{-1}{v}$ not only satisfies this bound, but also takes the smallest value among all the actuator sets of cardinality two that induce controllability. Therefore, $\{1,4\}$ is the best minimal actuator set to achieve the given transfer. \begin{figure*}[th] \centering \hspace*{-30pt} \includegraphics[width=1.05\textwidth]{randomSIMv5_a_is_1_f_up_to_2_power_25.eps} \caption{Number of selected actuators by Algorithm~\ref{alg:minimal-leaders_final} in Erd\H{o}s-R\'{e}nyi networks of several sizes $n$ and for varying energy bounds $E$.
For each $n$, the values of $E$ are chosen so that the feasibility constraint~\eqref{lo_bound_E} of Problem~\eqref{pr:min_set} is satisfied: specifically, for each set of values for $n$ and $k$, Algorithm~\ref{alg:minimal-leaders_final} was executed for $E\leftarrow k{v}^{T}{G(n)}_\mathcal{V}^{-1}{v}$, where $G(n)_\mathcal{V}$ is the controllability Gramian corresponding to the generated network of size $n$ with $B$ set to be the identity matrix.} \label{fig:randomGraphs} \end{figure*} Next, we set ${x}(1) \leftarrow [0, 0, 0, 1, 0]^T$ in Algorithm~\ref{alg:minimal-leaders_final}, which again led to the selection $\{1,4\}$, as one would expect for any transfer that involves only the movement of the fourth node while controllability is desired. In other words, even though we chose $E \leftarrow {v}^{T}{\Gamma}_{\{1,5\}}^{-1}{v}$, Algorithm~\ref{alg:minimal-leaders_final} respected this energy bound with the best possible actuator set for the given transfer, which is $\{1,4\}$, as verified in the following \begin{align*} &{v}^{T}{\Gamma}_{\{1\}}^{-1}{v}=1.5425\cdot10^7, {v}^{T}{\Gamma}_{\{1,2\}}^{-1}{v}=5.8675\cdot10^4, \\ &{v}^{T}{\Gamma}_{\{1,3\}}^{-1}{v}=401.7997,{v}^{T}{\Gamma}_{\{1,4\}}^{-1}{v}=6.2889,\\ &{v}^{T}{\Gamma}_{\{1,5\}}^{-1}{v}=2.7445\cdot10^5. \end{align*} Moreover, note that although node $1$ is selected as an actuator, in this case its corresponding input signal is zero. Thus, one may choose not to implement an actuator at this node, at the expense, however, of losing the overall network controllability. This observation motivates the analysis of~\eqref{pr:min_set} when no controllability constraint is placed on the end actuator set. Finally, by setting $E$ large enough in Algorithm~\ref{alg:minimal-leaders_final}, so that any actuator set respects this energy bound, we observe that only node $1$ is selected, as expected for the satisfaction of the controllability constraint. \subsection{Erd\H{o}s-R\'{e}nyi Random Networks}\label{subsec:randomGraphs} Erd\H{o}s-R\'{e}nyi random graphs are commonly used to model real-world networked systems~\cite{newman2006structure}. According to this model, each edge is included in the generated graph with some probability $p$, independently of every other edge. We implemented this model for varying network sizes $n$, as shown in Fig.~\ref{fig:randomGraphs}, where the directed edge probabilities were set to $p = 2\log(n)/n$, following~\cite{2013arXiv1304.3071O}. In particular, we first generated the binary adjacency matrices for each network size so that every edge is present independently with probability $p$, and then we replaced every non-zero entry with an independent standard normal variable to generate a randomly weighted graph. To avoid the computational difficulties associated with the integral equation~\eqref{eq:general_gramian}, we worked with the controllability Gramian instead, which for a stable system can be efficiently calculated from the Lyapunov equation ${A}{G} + {G}{A}^{T} = -{B}{B}^{T}$ and is given in closed form by \begin{align} {G} = \int_{t_0}^{\infty} \mathrm{e}^{{A}(t-t_0)} {B} {B}^{T} \mathrm{e}^{{A}^{T}(t-t_0)}\,\mathrm{d}{t}. \nonumber \end{align} Using the controllability Gramian in~\eqref{exact_energy} corresponds to the minimum state transfer energy with no time constraints.
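Note that the Lyapunov route requires a Hurwitz (stable) $A$. As a minimal illustration of this computation, consider the following Python/SciPy sketch, which stands in for the MATLAB\textsuperscript{\textregistered{}} workflow used in our experiments (variable names are illustrative, not our experimental code):

\begin{verbatim}
# Minimal sketch: infinite-horizon controllability Gramian via the
# Lyapunov equation A G + G A^T = -B B^T (requires a Hurwitz A).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 100
p = 2 * np.log(n) / n                        # directed edge probability
A = np.random.randn(n, n) * (np.random.rand(n, n) < p)
A -= 1.1 * np.max(np.linalg.eigvals(A).real) * np.eye(n)  # stabilize

G = solve_continuous_lyapunov(A, -np.eye(n))  # B = I, i.e. Delta = V
x1 = np.ones(n)                               # x0 = 0, x1 = all ones
v = x1 / np.linalg.norm(x1)
E_min = v @ np.linalg.solve(G, v)   # feasibility bound v^T G_V^{-1} v
\end{verbatim}

Any energy bound of the form $E= k\,{v}^{T}{G}_\mathcal{V}^{-1}{v}$ with $k\geq 1$ is then feasible.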
Therefore, we stabilized each random instance of $A$ by subtracting $1.1$ times the real part of its right-most eigenvalue, and then we used the MATLAB\textsuperscript{\textregistered{}} function {\fontsize{10}{10}\selectfont\ttfamily\upshape gram} to compute the corresponding controllability Gramians. Next, we set $x_0$ to be the zero vector and $x_1$ the vector of all ones. We also set $c \leftarrow 0.1$ and $a\leftarrow1$. Finally, for each instance of $n$ we first computed the corresponding lower bound of $E$ so that~\eqref{pr:min_set} is feasible, ${v}^{T}{G}_\mathcal{V}^{-1}{v}$, and then ran Algorithm~\ref{alg:minimal-leaders_final} for $E$ equal to $k {v}^{T}{G}_\mathcal{V}^{-1}{v}$, where $k$ ranged from $2$ to $2^{25}$. The number of actuator nodes selected by Algorithm~\ref{alg:minimal-leaders_final} for each $n$ with respect to $k$ is shown in Fig.~\ref{fig:randomGraphs}. We observe that as $k$ increases the number of actuators decreases, as one would expect when the energy bound of~\eqref{pr:min_set} is relaxed. In addition, we notice that for $k$ large enough, so that~\eqref{pr:min_set} becomes equivalent to the minimal controllability problem of~\cite{2013arXiv1304.3071O}, the number of chosen actuators is one, as was generally observed in~\cite{2013arXiv1304.3071O} for a similar set of simulations. \section{Concluding Remarks}\label{sec:conc} We introduced the problem of minimal actuator placement in a linear system so that a bound on the minimum control effort for a given state transfer is satisfied while controllability is ensured. This problem was shown to be NP-hard and to have a supermodular structure. Moreover, an efficient algorithm was provided for its solution. Finally, the efficiency of this algorithm was illustrated over large Erd\H{o}s-R\'{e}nyi random networks. Our future work is focused on investigating the case where no controllability constraint is placed on the end actuator set, as well as on exploring the effects that the network topology has on this selection. \appendices \section{Proofs of the Main Results}\label{proofs} \subsection{Proposition~\ref{prop:suf_contr}} Note that since $\|{v}\|_2=1$ and $\bar{{v}}_1,\bar{{v}}_2,\ldots,\bar{{v}}_{n-1}$ form an orthonormal basis for the null space of ${v}$, for any unit vector ${q}$ of dimension equal to that of ${v}$ it holds that $({v}^T{q})^2+\sum_{i=1}^{n-1}(\bar{{v}}_i^T{q})^2=1$. Next, assume that $\Delta \notin \mathcal{C}_{|\Delta|}$ and let $k$ be the corresponding number of non-zero eigenvalues of ${\Gamma}_\Delta$. Therefore, $k \leq n-1$. Moreover, denote as $\lambda_1, \lambda_2, \ldots, \lambda_n$ and ${q_1}, {q_2}, \dots, {q_n}$ the eigenvalues and orthonormal eigenvectors of ${\Gamma}_\Delta$. We get \begin{align} \phi(\Delta)=\sum_{j=1}^{k}[\frac{({v}^T{q_j})^2}{\lambda_j+\epsilon}+\sum_{i=1}^{n-1}\frac{\epsilon(\bar{{v}}_i^T{q_j})^2}{\lambda_j+\epsilon^2}]+\frac{n-k}{\epsilon}\geq \omega. \nonumber \end{align} Since $\epsilon \leq 1/\omega$ and $k \leq n-1$ we have a contradiction. \hfill{}{\scriptsize $\blacksquare$}{\scriptsize \par} \subsection{Proposition~\ref{prop:subm}} We first prove that ${v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}$ is supermodular. With similar steps one can show that $\bar{{v}}_i^{T}({\Gamma}_\Delta+\epsilon^2{I})^{-1}\bar{{v}}_i$, for any $i\in[n-1]$, also is. Then, the proof is complete, since the class of supermodular functions is closed under non-negative linear combinations.
Recall that ${v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}$ is supermodular if and only if $-{v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}$ is submodular, and that a function $h: 2^{\mathcal{V}}\mapsto \mathbb{R}$ is submodular if and only if for any $a \in \mathcal{V}$ the function $h_a: 2^{\mathcal{V}\setminus\{a\}}\mapsto \mathbb{R}$, where $h_a(\Delta)\equiv h(\Delta\cup \{a\})-h(\Delta)$, is a non-increasing set function. In other words, if and only if for any $\Delta_1 \subseteq \Delta_2 \subseteq \mathcal{V}\setminus\{a\}$ it holds true that $h_a(\Delta_1)\geq h_a(\Delta_2)$. In our case, $h_a(\Delta)= -{v}^{T}({\Gamma}_{\Delta\cup \{a\}}+\epsilon{I})^{-1}{v}+{v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}$. Therefore, take any $\Delta_1 \subseteq \Delta_2 \subseteq \mathcal{V}\setminus\{a\}$ and denote accordingly $\mathcal{D}\equiv\Delta_2\setminus\Delta_1$. Then, we aim to prove \begin{align*} &-{v}^{T}({\Gamma}_{\Delta_1\cup \{a\}}+\epsilon{I})^{-1}{v}+{v}^{T}({\Gamma}_{\Delta_1}+\epsilon{I})^{-1}{v}\geq \\ &-{v}^{T}({\Gamma}_{\Delta_1\cup \mathcal{D}\cup \{a\}}+\epsilon{I})^{-1}{v}+{v}^{T}({\Gamma}_{\Delta_1\cup \mathcal{D}}+\epsilon{I})^{-1}{v}. \end{align*} To this end and for $z\in [0,1]$, set $f(z)={v}^{T}({\Gamma}_{\Delta_1}+z{\Gamma}_{\mathcal{D}}+{\Gamma}_a+\epsilon{I})^{-1}{v}$, and $g(z)={v}^{T}({\Gamma}_{\Delta_1}+z{\Gamma}_{\mathcal{D}}+\epsilon{I})^{-1}{v}$. After some manipulations, the above inequality can be written as $f(1)-f(0)\geq g(1)-g(0)$. To prove it, it suffices to show that $df/dz \geq dg/dz$, $\forall z\in (0,1)$. Denote ${L}_1(z)={\Gamma}_{\Delta_1}+z{\Gamma}_{\mathcal{D}}+{\Gamma}_a+\epsilon{I}$ and ${L}_2(z)={\Gamma}_{\Delta_1}+z{\Gamma}_{\mathcal{D}}+\epsilon{I}$. Then, $df/dz \geq dg/dz$ becomes \begin{align} {v}^{T}{L}_1(z)^{-1}{\Gamma}_\mathcal{D}{L}_1(z)^{-1}{v}\leq {v}^{T}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}{L}_2(z)^{-1}{v}, \label{ineq:sub} \end{align} where we used the fact that for any ${A} \succ 0$, ${B}\succeq 0$, $z \in (0,1)$, $\frac{d}{dz}({A}+z{B})^{-1}$ $=$ $-({A}+z{B})^{-1}{B}({A}+z{B})^{-1}$. To show that this holds, first observe that both ${L}_1(z)$ and ${L}_2(z)$ are full rank. Thus, $\rho({\Gamma}_\mathcal{D}^{1/2}{L}_1(z)^{-1})= \rho({\Gamma}_\mathcal{D}^{1/2}{L}_2(z)^{-1})=$ $\rho({\Gamma}_\mathcal{D}^{1/2})$ and, as a result, $\mathcal{R}({\Gamma}_\mathcal{D}^{1/2}{L}_1(z)^{-1})=\mathcal{R}({\Gamma}_\mathcal{D}^{1/2}{L}_2(z)^{-1})=\mathcal{R}({\Gamma}_\mathcal{D}^{1/2})$~\cite{bernstein2009matrix}. Hence, if ${v} \notin \mathcal{R}({\Gamma}_\mathcal{D}^{1/2})$, then~\eqref{ineq:sub} holds trivially. Otherwise, if ${v} \in \mathcal{R}({\Gamma}_\mathcal{D}^{1/2})$, then $\exists \hat{{v}}$ such that ${v}={\Gamma}_\mathcal{D}^{1/2}\hat{{v}}$ and~\eqref{ineq:sub} is written equivalently as \begin{align} \hat{{v}}^{T}{\Gamma}_\mathcal{D}^{1/2}&{L}_1(z)^{-1}{\Gamma}_\mathcal{D}{L}_1(z)^{-1}{\Gamma}_\mathcal{D}^{1/2}\hat{{v}} \nonumber \leq\\ &\hat{{v}}^{T}{\Gamma}_\mathcal{D}^{1/2}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}^{1/2}\hat{{v}}. \label{ineq:sub_new} \end{align} To prove~\eqref{ineq:sub_new}, it is sufficient to show that $\forall z\in [0,1]$ \begin{align} {\Gamma}_\mathcal{D}^{1/2}{L}_1(z)^{-1}&{\Gamma}_\mathcal{D}{L}_1(z)^{-1}{\Gamma}_\mathcal{D}^{1/2}\nonumber \preceq\\ &{\Gamma}_\mathcal{D}^{1/2}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}^{1/2}\label{ineq:sub_new_2}. \end{align} To this end, first observe that ${L}_1(z) \succeq {L}_2(z)$.
This implies ${L}_2(z)^{-1} \succeq{L}_1(z)^{-1}$~\cite{bernstein2009matrix} and, as a result, \begin{align} {\Gamma}_\mathcal{D}^{1/2}{L}_2(z)^{-1}{\Gamma}_\mathcal{D}^{1/2} \succeq {\Gamma}_\mathcal{D}^{1/2}{L}_1(z)^{-1}{\Gamma}_\mathcal{D}^{1/2}. \nonumber \end{align} Now, since for any $0 \preceq {A} \preceq {B}$, ${A}^2 \preceq {B}^2$~\cite{bernstein2009matrix}, the previous inequality gives~\eqref{ineq:sub_new_2}. \hfill{}{\scriptsize $\blacksquare$}{\scriptsize \par} \subsection{Theorem~\ref{th:minimal}} We first prove~\eqref{explain:th:minima3},~\eqref{explain:th:minimal1} and~\eqref{explain:approx_error0}, and then~\eqref{explain:th:minimal2}. First, let $\Delta_0, \Delta_1, \ldots$ be the sequence of sets selected by Algorithm~\ref{alg:minimal-leaders}, and let $l$ be the smallest index such that $\phi(\Delta_l) \leq E$. Then, $\Delta_l$ is the set that Algorithm~\ref{alg:minimal-leaders} returns, and this proves~\eqref{explain:th:minima3}. Moreover, from~\cite{citeulike:416650}, since for any $\Delta \subseteq \mathcal{V}$, $h(\Delta)\equiv-\phi(\Delta)+\phi(\emptyset)$ is a non-negative, non-decreasing, submodular function (cf.~Proposition~\ref{prop:subm}), it is guaranteed for Algorithm~\ref{alg:minimal-leaders} that \begin{align*} \frac{l}{l^\star}&\leq 1+\log \frac{h(\mathcal{V})-h(\emptyset)}{h(\mathcal{V})-h(\Delta_{l-1})}\\ &=1+\log \frac{n\epsilon^{-1}-\phi(\mathcal{V})}{\phi(\Delta_{l-1})-\phi(\mathcal{V})}. \end{align*} Now, $l$ is the first time that $\phi(\Delta_l) \leq E$, so $\phi(\Delta_{l-1}) > E$. This implies~\eqref{explain:th:minimal1}. Moreover, observe that $0<\phi(\mathcal{V})$, so that from \eqref{explain:th:minimal1} we get $F \leq 1+\log[n\epsilon^{-1}/(E-\phi(\mathcal{V}))]$, which in turn implies~\eqref{explain:approx_error0}. On the other hand, since $0<\epsilon \leq 1/E$ and $\phi(\Delta_l) \leq E$, Proposition~\ref{prop:suf_contr} is in effect, i.e.~\eqref{explain:th:minimal2} holds true. \hfill{}{\scriptsize $\blacksquare$}{\scriptsize \par} \subsection{Theorem~\ref{th:minimal_set_main}} The first and the third statements follow directly from Theorem~\ref{th:minimal}. For~\eqref{state:1}, first note that when Algorithm~\ref{alg:minimal-leaders_final} exits the \texttt{while} loop and after the following \texttt{if} statement, ${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}\leq cE$. Additionally, ${v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}< \phi(\Delta)\leq E$; and as a result,~\eqref{state:1} is implied. Finally, for~\eqref{state:2}, note that ${v}^{T}{\Gamma}_\Delta^{-1}{v}- {v}^{T}({\Gamma}_\Delta+\epsilon{I})^{-1}{v}\leq cE$ holds true when $\epsilon$ is of the same order as $c\lambda_m^2E/[n({v}^T{q}_M)^2]$. Then, $\log\epsilon^{-1}=O(\log n +\log \frac{1}{c\lambda_mE})$, since $({v}^T{q}_M)^2\leq 1$, which proves~\eqref{state:2} through~\eqref{explain:approx_error0}. \hfill{}{\scriptsize $\blacksquare$}{\scriptsize \par} \vspace*{-2pt} \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 1.81543, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUfi45qhLBqPGg5v79
\section{Introduction} \label{sec:Intro} With the increasing importance of heat-assisted magnetic recording (HAMR), high-temperature micromagnetics has become an essential topic. Solving the Landau-Lifshitz-Gilbert (LLG) equation within a finite element framework cannot satisfy the demands that arise at rapidly varying temperatures near the Curie point $T_{\mathrm{C}}$, because the magnitude of the magnetization is kept constant. At a fixed temperature below $T_{\mathrm{C}}$, one could in principle use material parameters adjusted to the specific simulation temperature in order to compute the correct magnetization dynamics. Once the temperature starts to vary, the LLG fails due to the lack of longitudinal magnetization relaxation. In such a case one must use an atomistic discretization of the magnetic particle. Then, the phase transition from the ferromagnetic to the paramagnetic state at $T_{\mathrm{C}}$ follows from averaging over the spin ensemble. This procedure is computationally expensive, and thus as an alternative strategy one can solve the Landau-Lifshitz-Bloch (LLB) equation \cite{garanin_fokker-planck_1997,garanin_thermal_2004,evans_stochastic_2012}. The LLB needs temperature dependent material functions, like the zero field equilibrium magnetization $m_{\mathrm{e}}$ and the longitudinal and perpendicular susceptibilities $\widetilde{\chi}_\parallel$ and $\widetilde{\chi}_\perp$, as input. But once these inputs are obtained, the LLB can be solved in a single-spin approach without any mesh, which is computationally cheap \cite{chubykalo-fesenko_dynamic_2006,bunce_laser-induced_2010,volger_llb}. To correctly model all finite size effects, the temperature dependent material functions must be determined for each system size or composition. Especially for HAMR simulations this is a crucial restriction if one aims to consider size or $T_{\mathrm{C}}$ distributions of the recording grains. In this work we investigate to what extent material functions that were computed or measured for a specific system can be reused for other systems, by comparing atomistic LLG and LLB simulation results. In detail, we analyze the effect of the system size and the exchange constant. We hope this study becomes an LLB modeling guideline that helps to estimate the error which occurs if one reuses temperature dependent material functions, and to minimize these errors with little effort. \section{Model} \label{sec:model} The LLB equation was designed to account for the longitudinal relaxation of the magnetization in a magnetic particle, without the need for an atomistic discretization. Many publications confirm its validity~\cite{garanin_thermal_2004,chubykalo-fesenko_dynamic_2006,atxitia_micromagnetic_2007,kazantseva_towards_2008,chubykalo-fesenko_dynamic_2006,schieback_temperature_2009,bunce_laser-induced_2010,evans_stochastic_2012,mcdaniel_application_2012,greaves_magnetization_2012,mendil_resolving_2014,volger_llb}. Our model uses the form of the LLB in which the magnetization magnitude preserves the Boltzmann distribution up to the Curie temperature.
It was formulated in Ref.~\cite{evans_stochastic_2012} per: \begin{eqnarray} \label{eq:LLB} \frac{d \boldsymbol{m}}{dt}= &-&\mu_0{\gamma'}\left( \boldsymbol{m}\times \boldsymbol{H}_{\mathrm{eff}}\right) \nonumber \\ &-&\frac{\alpha_\perp\mu_0 {\gamma'}}{m^2} \left \{ \boldsymbol{m}\times \left [ \boldsymbol{m}\times \left (\boldsymbol{H}_{\mathrm{eff}}+\boldsymbol{\xi}_{\perp} \right ) \right ] \right \}\nonumber \\ &+&\frac{\alpha_\parallel \mu_0{\gamma'}}{m^2}\boldsymbol{m}\left (\boldsymbol{m}\cdot\boldsymbol{H}_{\mathrm{eff}} \right )+\boldsymbol{\xi}_{\parallel}, \end{eqnarray} where $\gamma'$ is the reduced electron gyromagnetic ratio ($\gamma'=|\gamma_{\mathrm{e}}|/(1+\lambda^2)$ with $|\gamma_{\mathrm{e}}|=1.76086\cdot10^{11}$\,(Ts)$^{-1}$), $\mu_0$ is the vacuum permeability, and $\alpha_\parallel$ and $\alpha_\perp$ are the longitudinal and perpendicular dimensionless damping constants, respectively. With $M_0$ being the saturation magnetization at zero temperature, the reduced magnetization is $\boldsymbol{m}=\boldsymbol{M}/M_0$. Thermal fluctuations are considered via the thermal fields $\boldsymbol{\xi}_{\parallel}$ and $\boldsymbol{\xi}_{\perp}$, whose components are white-noise random numbers. The effective field $\boldsymbol{H}_{\mathrm{eff}}$ in Eq.~\ref{eq:LLB} contains the external field $\boldsymbol{H}_{\mathrm{ext}}$, the anisotropy field along the $z$ direction \begin{equation} \label{eq:Hani} \boldsymbol{H}_\mathrm{ani}=\frac{1}{\widetilde{\chi}_{\perp}(T)}\left( m_x\boldsymbol{e}_{x}+m_y\boldsymbol{e}_{y}\right), \end{equation} and the internal exchange field \begin{equation} \label{eq:blochField} \boldsymbol{H}_{\mathrm{J}}=\begin{cases} \frac{1}{2\widetilde{\chi}_{\parallel}(T)}\left( 1-\frac{m^2}{m^2_{\mathrm{e}}(T)} \right)\boldsymbol{m} & T\lesssim T_{\mathrm{C}}\\ -\frac{1}{\widetilde{\chi}_{\parallel}(T)} \left( 1+\frac{3}{5}\frac{T_{\mathrm{C}}}{T-T_{\mathrm{C}}}m^2 \right)\boldsymbol{m}& T\gtrsim T_{\mathrm{C}}.\end{cases} \end{equation} We represent each particle with a single magnetization vector in our study. Hence, the effective field does not contain an exchange field coupling neighboring magnetization vectors. In Eqs.~\ref{eq:Hani} and \ref{eq:blochField} the longitudinal and perpendicular susceptibilities $\widetilde{\chi}_{\parallel}$ and $\widetilde{\chi}_{\perp}$ and the zero field equilibrium magnetization $m_{\mathrm{e}}$ are temperature dependent material functions, which have to be precomputed in order to obtain the correct high-temperature dynamics. As already mentioned, strictly speaking these functions depend on the system size and composition. We calculate $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ from stochastic LLG simulations with an atomistic discretization by means of the code VAMPIRE~\cite{evans_atomistic_2014}. VAMPIRE solves for the time evolution of the spins $\boldsymbol{S}_k$ with constant magnitude per: \begin{eqnarray} \label{eq:atomisticLLG} \frac{d\boldsymbol{S}_k}{dt}=&-& \gamma'\left \{\boldsymbol{S}_k \times\left ( \boldsymbol{H}_{\mathrm{eff},k}+\boldsymbol{\xi}_k \right ) \right \}\nonumber \\ &-& \gamma' \lambda \left \{ \boldsymbol{S}_k \times \left [ \boldsymbol{S}_k \times \left ( \boldsymbol{H}_{\mathrm{eff},k}+\boldsymbol{\xi}_k\right ) \right ] \right \}. \end{eqnarray} Here, the effective field contains the external field, the anisotropy field and the exchange field. For more details about the models, please refer to~\cite{volger_llb}.
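To make the single-spin model concrete, the following minimal Python sketch evaluates the deterministic part of the right-hand side of Eq.~\ref{eq:LLB}, with the fields implemented according to Eqs.~\ref{eq:Hani} and \ref{eq:blochField} as written; the thermal fields are omitted and the material functions $m_{\mathrm{e}}(T)$, $\widetilde{\chi}_{\parallel}(T)$ and $\widetilde{\chi}_{\perp}(T)$ are assumed to be supplied as callables. It is an illustration only, not the solver used for the results below.

\begin{verbatim}
# Minimal sketch of the deterministic LLB right-hand side, Eq. (1), with
# the fields of Eqs. (2) and (3) as written; thermal fields are omitted.
# m_e, chi_par, chi_perp: temperature-dependent material functions
# (callables); alpha_par, alpha_perp: damping constants at temperature T.
import numpy as np

MU0 = 4e-7 * np.pi
GAMMA_RED = 1.76086e11 / (1.0 + 0.1**2)   # gamma' for lambda = 0.1

def h_exchange(m, T, Tc, m_e, chi_par):
    m2 = m @ m
    if T <= Tc:
        return (1.0 - m2 / m_e(T)**2) / (2.0 * chi_par(T)) * m
    return -(1.0 + 0.6 * Tc / (T - Tc) * m2) / chi_par(T) * m

def llb_rhs(m, T, h_ext, Tc, m_e, chi_par, chi_perp,
            alpha_par, alpha_perp):
    h_ani = np.array([m[0], m[1], 0.0]) / chi_perp(T)  # easy axis = z
    h_eff = h_ext + h_ani + h_exchange(m, T, Tc, m_e, chi_par)
    m2 = m @ m
    prec = -MU0 * GAMMA_RED * np.cross(m, h_eff)
    trans = -alpha_perp * MU0 * GAMMA_RED / m2 \
            * np.cross(m, np.cross(m, h_eff))
    longi = alpha_par * MU0 * GAMMA_RED / m2 * (m @ h_eff) * m
    return prec + trans + longi
\end{verbatim}

A full solver additionally draws the white-noise fields $\boldsymbol{\xi}_{\parallel}$ and $\boldsymbol{\xi}_{\perp}$ in every time step and integrates the resulting stochastic differential equation, e.g., with a stochastic Heun scheme.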
\section{finite size effects} \label{sec:finite_size_effects} \begin{figure} \includegraphics{llb_modeling-figure0.pdf} \caption{\small Zero field equilibrium magnetization $m_{\mathrm{e}}$ of a cylindrical particle with two different diameters and material parameters as given in Tab.~\ref{tab:mat}. Results of atomistic LLG simulations (green circles) and the corresponding infinite size fits (solid blue), as well as the $m_{\mathrm{e}}$ fit of the 5\,nm reference particle (dotted black) are plotted. The latter is scaled to the Curie temperature of the actual size (dashed red).} \label{fig:me_cylinder} \end{figure} \begin{figure} \includegraphics{llb_modeling-figure1.pdf} \caption{\small Longitudinal ($\widetilde{\chi}_{\parallel}$) and perpendicular ($\widetilde{\chi}_{\perp}$) susceptibilities of a cylindrical particle with two different diameters and material parameters given in Tab.~\ref{tab:mat}. Results of atomistic LLG simulations (green circles and crosses) and the corresponding infinite size fits (solid blue), as well as the susceptibility fits of the 5\,nm reference particle (dotted black) are plotted. The latter is scaled to the Curie temperature of the actual size (dashed red).} \label{fig:chi_cylinder} \end{figure} We investigate how the functions $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ depend on the diameter of a cylindrical particle with a constant height of 10\,nm. For each diameter, in a range of 3.5\,nm to 10\,nm, VAMPIRE simulations with a time step of 10$^{-15}$\,s are performed at various temperatures ($0-800$\,K). At each temperature 100 trajectories consisting of 20000 equilibration steps and 20000 simulation steps are computed in the absence of any external field. Averaging the magnetization components $m_\eta$ over all simulation steps yields the zero field equilibrium magnetization. Note that the magnetization components are calculated from the ensemble of $N$ spins in the particle per: \begin{equation} m_\eta=\frac{1}{N}\sum_{k=1}^{N}S_{\eta,k}. \end{equation} From the fluctuations of these components one can compute $\widetilde{\chi}_{\parallel}(T)$ and $\widetilde{\chi}_{\perp}(T)$. Finally, the three temperature dependent functions are fitted. The detailed procedure for properly extracting the fits from atomistic LLG simulations can be found in Refs.~\cite{kazantseva_towards_2008,volger_llb}. \begin{table} \centering \begin{tabular}{c c c c c} \toprule \toprule $K_1$\,[J/m$^3$] & $J_{\mathrm{S}}$\,[T] & $A_{\mathrm{ex}}$\,[pJ/m] & $a$\,[nm] & $\lambda$\\ \midrule $6.6\times10^6$ & 1.43 & 21.58 & 0.24 & 0.1\\ \bottomrule \bottomrule \end{tabular} \caption{\small Material properties of the reference grain. $K_1$ is the uniaxial anisotropy constant, $J_{\mathrm{S}}$ is the saturation polarization and $\lambda$ is the dimensionless damping constant. $A_{\mathrm{ex}}$ denotes the exchange constant and $a$ is the lattice constant in the atomistic model. All parameters are zero temperature values.} \label{tab:mat} \end{table} The choice of the size of the smallest particle was motivated by the findings of Ref.~\cite{ellis_switching_2015}, which suggest that for even smaller particles the LLB equation, which is actually derived in the bulk regime, is not valid anymore. In this section we focus on the differences originating from varying cylinder diameters, and thus particle volumes. More precisely, we define a reference particle with a cylinder diameter of 5\,nm and the material parameters of Tab.~\ref{tab:mat}.
For other system sizes the Curie temperature varies due to finite size effects. Hence, the temperature dependent material functions vary too. Since it is time consuming to extract the correct functions, reusing existing ones from the 5\,nm particle would be very helpful. As a consequence, we compare the directly fitted $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ curves with the 5\,nm curves, after scaling (or shifting) them to the new Curie temperature. For example, to analyze the difference of $m_{\mathrm{e}}(T)$ of the 5\,nm system and the 10\,nm system, we directly calculate both fits from atomistic simulations. After that, we scale the 5\,nm equilibrium magnetization curve as: \begin{equation} \label{eq:scaling} m_{\mathrm{e,sc,10\,nm}}(T)=m_{\mathrm{e,at,5\,nm}}\left(T\frac{T_{\mathrm{C,10\,nm}}}{T_{\mathrm{C,5\,nm}}}\right) \end{equation} or shift it as: \begin{equation} \label{eq:shifting} m_{\mathrm{e,sh,10\,nm}}(T)=m_{\mathrm{e,at,5\,nm}}\left(T+\Delta T_{\mathrm{C}}\right ). \end{equation} Here, ``at'' indicates the atomistic fit, ``sc'' indicates the scaled fit and ``sh'' the shifted fit. Figure~\ref{fig:me_cylinder} illustrates one example system where the scaled magnetization agrees very well with the atomistic data and one where deviations are observable. The same comparison is shown for the susceptibilities in Fig.~\ref{fig:chi_cylinder}. To quantify the agreement, we compute the mean squared displacement (MSD) which is defined as: \begin{equation} \left \langle \left(a-b\right)^2 \right\rangle = \frac{1}{N}\sum_{i=1}^{N} \left[a_i-b_i \right]^2. \end{equation} The sum is performed over all $N$ data points in a temperature range from 300\,K to 800\,K ($\Delta T=5$\,K), which is relevant to HAMR. In particular, we are interested in the following MSD ratios: \begin{itemize} \item $\mathrm{rMSD}_{\mathrm{sc}}(x)=\frac{\left \langle \left(x-x_{\mathrm{sc}}\right)^2 \right\rangle}{\left \langle \left(x-x_{\mathrm{at}}\right)^2 \right\rangle}$: ratio of the MSD between atomistic data $x$ and the scaled 5\,nm fit $x_{\mathrm{sc}}$ to the MSD between atomistic data $x$ and the corresponding fit $x_{\mathrm{at}}$ for the specific size. \item $\mathrm{rMSD}_{\mathrm{sh}}(x)=\frac{\left \langle \left(x-x_{\mathrm{sh}}\right)^2 \right\rangle}{\left \langle \left(x-x_{\mathrm{at}}\right)^2 \right\rangle}$: ratio of the MSD between atomistic data $x$ and the shifted 5\,nm fit $x_{\mathrm{sh}}$ to the MSD between atomistic data $x$ and the corresponding fit $x_{\mathrm{at}}$ for the specific size. \end{itemize} $x$ is a placeholder for $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ or $m_{\mathrm{e}}(T)$, respectively. The MSD ratios represent the quality of the scaling and shifting approach. Low ratios indicate that the error is small if material curves are scaled or shifted instead of directly fitted. In the case of $m_{\mathrm{e}}(T)$ the MSD is truncated at $T_{\mathrm{C}}$, because by definition the equilibrium magnetization fits are zero above $T_{\mathrm{C}}$. Another special case appears for $\widetilde{\chi}_{\parallel}(T)$, which diverges at $T_{\mathrm{C}}$. Hence, a temperature range from $T_{\mathrm{C}}-10$\,K to $T_{\mathrm{C}}+10$\,K is excluded in the MSD calculation.
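In code, the comparison reads as follows (a sketch; \texttt{f\_ref} is any callable fit of the 5\,nm reference curve, and the temperature mapping is oriented so that the reference Curie point is moved onto the target one, which is how we read Eqs.~\ref{eq:scaling} and \ref{eq:shifting}):
\begin{verbatim}
import numpy as np

def scaled(f_ref, T, Tc_ref, Tc_new):
    # Stretch the reference curve in T so its Curie point
    # lands on Tc_new (our reading of Eq. (scaling)).
    return f_ref(np.asarray(T) * Tc_ref / Tc_new)

def shifted(f_ref, T, Tc_ref, Tc_new):
    # Rigidly translate the reference curve by the Curie-point
    # difference (our reading of Eq. (shifting)).
    return f_ref(np.asarray(T) - (Tc_new - Tc_ref))

def rmsd(x_at_data, x_approx, x_fit):
    # MSD ratio: MSD(atomistic data, scaled/shifted curve)
    # over MSD(atomistic data, direct fit for that size).
    msd = lambda a, b: np.mean((np.asarray(a) - np.asarray(b)) ** 2)
    return msd(x_at_data, x_approx) / msd(x_at_data, x_fit)
\end{verbatim}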
\begin{figure} \includegraphics{llb_modeling-figure2.pdf} \caption{\small Mean squared displacement (MSD) ratios of the scaled ($\mathrm{rMSD}_{\mathrm{sc}}(x)$) and shifted ($\mathrm{rMSD}_{\mathrm{sh}}(x)$) temperature dependent material functions for various particle diameters. Here, $x$ is a placeholder for $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$, respectively.} \label{fig:cylinder_meanSquaredDisp} \end{figure} Figure~\ref{fig:cylinder_meanSquaredDisp} displays the MSD ratios of the three temperature dependent functions for all investigated cylinder diameters. In the case of the equilibrium magnetization it can be seen that from 3.5\,nm to 7\,nm diameter the MSD ratios for the scaled and the shifted $m_{\mathrm{e}}(T)$ fits are within one order of magnitude. In a smaller range from 4\,nm to 5.5\,nm the MSD ratios are even below 2.0. Bearing in mind that one cannot distinguish the direct and the scaled fit in Fig.~\ref{fig:me_cylinder}a, the error of the scaled and shifted equilibrium magnetizations seems to be negligible. rMSD$_{\mathrm{sc}}(\widetilde{\chi}_{\parallel})$ and rMSD$_{\mathrm{sh}}(\widetilde{\chi}_{\parallel})$ show a small error up to a diameter of 7.5\,nm. The MSD ratios are below 2.0 for all analyzed particle sizes in the case of the transverse susceptibility. The reason is that $\widetilde{\chi}_{\perp}$ is rather noisy, as Fig.~\ref{fig:chi_cylinder} shows. It has to be noted that both the scaling and the shifting of the 5\,nm functions yield small errors within the examined temperature range of 300\,K to 800\,K. For lower temperatures rMSD$_{\mathrm{sh}}(m_{\mathrm{e}})$ would become larger, because, due to the shifting according to Eq.~\ref{eq:shifting}, the reduced equilibrium magnetization at 0\,K would not equal one. But low temperatures are of little interest for HAMR. \subsection{switching probability} \begin{figure} \includegraphics{llb_modeling-figure3.pdf} \caption{\small Switching probability versus peak temperature curves for various particle diameters. Each plot compares LLB simulation results with $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ or $m_{\mathrm{e}}(T)$ input functions, obtained separately for each grain size ($p$), with probabilities computed from the scaling ($p_{\mathrm{sc}}$) and shifting ($p_{\mathrm{sh}}$) approach, respectively.} \label{fig:cylinder_prob} \end{figure} The main goal of HAMR simulations is to efficiently calculate switching probabilities and bit error rates. Hence, we test whether the scaled and shifted $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ or $m_{\mathrm{e}}(T)$ functions yield the same switching behavior in LLB simulations as separately calculated material curves. A Gaussian-shaped heat pulse is applied to the grains: \begin{equation} \label{eq:gauss_profile_PLSR} T(t)=T_{\mathrm{min}}+\left ( T_{\mathrm{peak}}-T_{\mathrm{min}} \right ) e^{-\frac{\left (t-t_0 \right )^2}{\tau^2}}, \end{equation} with $T_{\mathrm{min}}=270$\,K and $\tau=200$\,ps. Additionally, a constant external magnetic field of 0.8\,T assists the switching of the particle from its original state, with the magnetization pointing in $z$ direction, to the $-z$ direction. At each peak temperature 128 switching trajectories are simulated by means of Eq.~\ref{eq:LLB}. Afterwards the fraction of switched particles is evaluated, yielding the switching probability.
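A sketch of this protocol is given below. Here \texttt{simulate\_once} stands in for a full LLB integration of Eq.~\ref{eq:LLB} returning the final $m_z$, and the pulse center \texttt{t0} is an arbitrary choice not fixed by Eq.~\ref{eq:gauss_profile_PLSR}.
\begin{verbatim}
import numpy as np

def heat_pulse(t, T_peak, T_min=270.0, t0=0.5e-9, tau=200e-12):
    # Gaussian temperature profile, Eq. (gauss_profile_PLSR).
    return T_min + (T_peak - T_min) * np.exp(-((t - t0) / tau) ** 2)

def switching_probability(simulate_once, T_peak, n_traj=128):
    # Fraction of trajectories that end reversed (initial state +z,
    # so a particle counts as switched if the final m_z < 0).
    switched = sum(simulate_once(T_peak) < 0.0 for _ in range(n_traj))
    return switched / n_traj
\end{verbatim}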
For various particle sizes the simulations are performed with the original temperature dependent material curves, the shifted and the scaled functions. Figure~\ref{fig:cylinder_prob} illustrates the results for three example particle sizes. The smallest and the largest investigated grains clearly show significant deviations between the switching probabilities $p$, computed with the directly fitted material functions for the appropriate size, and the probabilities of the scaled and shifted functions $p_{\mathrm{sc}}$ and $p_{\mathrm{sh}}$. Although the switching probabilities at high peak temperatures agree well, the transition cannot be reproduced. In the case of a 5.5\,nm particle diameter all probability curves coincide. Note that, to facilitate comparison, the $x$ axes in Fig.~\ref{fig:cylinder_prob} have different ranges. The intention was to center the probability transitions. \begin{figure} \includegraphics{llb_modeling-figure4.pdf} \caption{\small MSD ratios of the switching probability for various grain diameters. $p$ denotes switching probabilities obtained from LLB simulations with separately fitted material functions for each size, $p_{\mathrm{sc}}$ and $p_{\mathrm{sh}}$ represent LLB simulation results with scaled and shifted $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ or $m_{\mathrm{e}}(T)$ curves (5\,nm reference grain). $p_{\mathrm{0,sc}}$ and $p_{\mathrm{0,sh}}$ indicate directly scaled and shifted probability curves of the 5\,nm grain.} \label{fig:prob_cylinder_meanSquaredDisp} \end{figure} To quantify the results we compute the MSD of the switching probabilities obtained from direct and scaled as well as direct and shifted material curves, $\left \langle \left (p-p_{\mathrm{sc}}\right)^2 \right\rangle$ and $\left \langle \left (p-p_{\mathrm{sh}}\right)^2 \right\rangle$, respectively. More precisely, the ratios of these quantities to $\left \langle \left (p_i-p_j\right)^2 \right\rangle$, the MSD between two repeated computations $p_i$ and $p_j$ of $p$, are evaluated. Since the probabilities have a stochastic nature, repeated simulation with the same input parameters yields slightly different results. Hence, the MSD of the repeated computation of $p$ is the basis of our analysis, because it is assumed to be the smallest possible. Figure~\ref{fig:prob_cylinder_meanSquaredDisp} shows that the MSD ratios of all grain sizes up to 8\,nm diameter are within one order of magnitude, for LLB simulations with both the scaled and the shifted material functions. In a wide range, from 4\,nm to 6\,nm, the ratio is clearly below 2.0, indicating excellent agreement. Notably, the scaling and shifting approaches yield the correct dynamical behavior for volume changes of up to about $\pm 40$\,\%. This also coincides well with the findings of Fig.~\ref{fig:cylinder_meanSquaredDisp}. Instead of scaling or shifting $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ and calculating switching probabilities, we could directly scale or shift the switching probability curve of the 5\,nm particle corresponding to the modified $T_{\mathrm{C}}$ value of other system sizes (equivalently to Eqs.~\ref{eq:scaling} and \ref{eq:shifting}). This procedure is computationally very cheap, but it cannot, of course, capture the finite size effects of large size variations.
Nevertheless, Fig.~\ref{fig:prob_cylinder_meanSquaredDisp} reveals that for minor changes of the cylinder diameter of $\pm 0.25$\,nm the MSD ratios $\left \langle \left (p-p_{\mathrm{0,sc}}\right)^2 \right\rangle_{\mathrm{r}}$ and $\left \langle \left (p-p_{\mathrm{0,sh}}\right)^2 \right\rangle_{\mathrm{r}}$ are as low as for the recalculated probability curves. Remarkably, this corresponds to a volume change of $\pm 10$\,\%. \subsection{modeling strategy} \begin{figure} \includegraphics{llb_modeling-figure5.pdf} \caption{\small Finite size Curie temperature $T_{\mathrm{C}}(d)$ for various particle sizes, obtained from atomistic LLG simulations. The simulated $T_{\mathrm{C,1}}(d)$ are fitted with the finite size scaling law (Eq.~\ref{eq:finite_size_scaling_law}) with $T_{\mathrm{C}}^\infty$, $\Lambda$ and $d_0$ being fit parameters. The fit agrees well with various finite size Curie temperatures $T_{\mathrm{C,2}}(d)$, which are not used for the fit.} \label{fig:fit_Tc} \end{figure} In the preceding section it was shown that the temperature dependent material functions $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ of one specific grain, which are required to integrate the LLB equation (Eq.~\ref{eq:LLB}), are sufficient to predict the dynamical behavior of particles with similar sizes. To make use of the demonstrated scaling or shifting approach one must know the Curie temperatures of the involved systems. According to Ref.~\cite{hovorka_curie_2012} $T_{\mathrm{C}}(d)$ follows the finite size scaling law: \begin{equation} \label{eq:finite_size_scaling_law} \frac{T_{\mathrm{C}}^{\infty}-T_{\mathrm{C}}(d)}{T_{\mathrm{C}}^{\infty}}=\left( \frac{d_0}{d} \right )^\Lambda, \end{equation} where $T_{\mathrm{C}}^{\infty}$ is the bulk Curie temperature and $\Lambda$ and $d_0$ are material and model dependent quantities, respectively. $T_{\mathrm{C}}^{\infty}$ and $\Lambda$ could in principle be determined from the finite size scaling analysis \cite{binder_applications_1997,hovorka_curie_2012}, but we suggest using them, together with $d_0$, as fit parameters. As Fig.~\ref{fig:fit_Tc} indicates, we propose to compute $T_{\mathrm{C}}(d)$ for a few grain sizes from atomistic LLG simulations ($T_{\mathrm{C,1}}(d)$ in Fig.~\ref{fig:fit_Tc}). Afterwards these data can be fitted with Eq.~\ref{eq:finite_size_scaling_law} and the Curie point of other particle sizes can be estimated from the fit function (see Fig.~\ref{fig:fit_Tc}). With the known value of $T_{\mathrm{C}}(d)$ the scaling or shifting approach of the previous section can be easily applied. This strategy allows one to efficiently and accurately model arbitrary grain sizes with the LLB equation, within the presented limitations. In addition to the cylindrical particle, we investigated a cube with various edge lengths (again 3.5\,nm to 10\,nm) and performed all of the calculations shown so far. The results are not explicitly given because, consistent with its volume-to-surface ratio, the cubic particle revealed the same scaling behavior as the cylindrical grain. \section{exchange interaction effects} \label{sec:exchange_interaction_effects} \begin{figure} \includegraphics{llb_modeling-figure6.pdf} \caption{\small Zero field equilibrium magnetization $m_{\mathrm{e}}$ of a cylindrical particle with 5\,nm diameter and two different exchange constants, based on the material parameters of Tab.~\ref{tab:mat}.
Results of atomistic LLG simulations (green circles) and the corresponding infinite size fits (solid blue), as well as the $m_{\mathrm{e}}$ fit of the reference particle with $A_{\mathrm{ex}}=21.58$\,pJ/m (dotted black) are plotted. The latter is scaled and shifted to the Curie temperature corresponding to the changed exchange constant (dashed red and chain-dotted pink), respectively.} \label{fig:me_AiexVariations} \end{figure} \begin{figure} \includegraphics{llb_modeling-figure7.pdf} \caption{\small Longitudinal ($\widetilde{\chi}_{\parallel}$) and perpendicular ($\widetilde{\chi}_{\perp}$) susceptibilities of a 5\,nm cylindrical particle with two different exchange constants, based on the material parameters of Tab.~\ref{tab:mat}. Results of atomistic LLG simulations (green circles and crosses) and the corresponding infinite size fits (solid blue), as well as the susceptibility fits of the reference particle with $A_{\mathrm{ex}}=21.58$\,pJ/m (dotted black) are plotted. The latter is scaled to the Curie temperature of the new exchange constant (dashed red).} \label{fig:chi_AiexVariations} \end{figure} Size variations only slightly change the particle's Curie temperature, as for example shown in Fig.~\ref{fig:fit_Tc}. In order to reliably estimate bit error rates and areal storage densities in HAMR simulations, $T_{\mathrm{C}}$ distributions must be considered. The main source of these distributions is a variation of the exchange interaction between the neighboring spins in a recording grain. In this section we investigate whether the scaling or shifting strategy also works for changes of the exchange constant $A_{\mathrm{ex}}$. For this purpose, we analyze how the temperature dependent material functions of a cylindrical particle with a diameter of 5\,nm and a height of 10\,nm depend on $A_{\mathrm{ex}}$. As a reference, an exchange constant of $A_{\mathrm{ex}}=21.58$\,pJ/m is used, which is varied by up to $\pm 20$\,\%. Similar to Sec.~\ref{sec:finite_size_effects} we compare fits of $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$, obtained from atomistic LLG simulations, with scaled and shifted curves of the system with $A_{\mathrm{ex}}=21.58$\,pJ/m. The latter two are computed equivalently to Eqs.~\ref{eq:scaling} and \ref{eq:shifting} for exchange constants instead of particle diameters. \begin{figure} \includegraphics{llb_modeling-figure8.pdf} \caption{\small MSD ratios of the scaled ($\mathrm{rMSD}_{\mathrm{sc}}(x)$) and shifted ($\mathrm{rMSD}_{\mathrm{sh}}(x)$) temperature dependent material functions for various exchange constants. Here, $x$ is a placeholder for $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$, respectively.} \label{fig:cylinder_meanSquaredDisp_AiexVar} \end{figure} For the smallest and the largest analyzed exchange constants, Figs.~\ref{fig:me_AiexVariations} and \ref{fig:chi_AiexVariations} illustrate the equilibrium magnetization and the longitudinal and perpendicular susceptibilities, respectively. Despite the significant change of the Curie temperature, the scaled material curves agree surprisingly well with the atomistic data. In contrast to Sec.~\ref{sec:finite_size_effects}, the shifted $m_{\mathrm{e}}(T)$ curves show significant discrepancies. Due to the large shift of the Curie temperature the correct slope cannot be reproduced, as Fig.~\ref{fig:me_AiexVariations} shows.
The ratios of the MSD of the atomistic data and the scaled or shifted $A_{\mathrm{ex}}=21.58$\,pJ/m fits confirm this trend, as displayed in Fig.~\ref{fig:cylinder_meanSquaredDisp_AiexVar}. The scaled material functions are almost identical to the atomistic results in the whole range of exchange constants. The MSD ratios of the shifted susceptibilities show the same agreement, but $\mathrm{rMSD}_{\mathrm{sh}}(m_{\mathrm{e}})$ is only within one order of magnitude for small deviations of the exchange constant. Nevertheless, the main finding is that the temperature dependent material functions of the scaling approach are as accurate as direct fits of atomistic data within the whole investigated range of $A_{\mathrm{ex}}$ values. \subsection{switching probability} \begin{figure} \includegraphics{llb_modeling-figure9.pdf} \caption{\small Switching probability versus peak temperature curves for various exchange constants. Each plot compares LLB simulation results with $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ or $m_{\mathrm{e}}(T)$ input functions, obtained separately for each exchange constant ($p$), with probabilities computed from the scaling ($p_{\mathrm{sc}}$) and shifting ($p_{\mathrm{sh}}$) approach, respectively.} \label{fig:prob_AiexVariations} \end{figure} \begin{figure} \includegraphics{llb_modeling-figure10.pdf} \caption{\small MSD ratios of the switching probability for various exchange constants. The same plots are shown as in Fig.~\ref{fig:prob_cylinder_meanSquaredDisp}. Here, the scaling and shifting is performed with respect to the exchange constants of the particles. The reference system is a cylindrical grain with 5\,nm diameter and 10\,nm height and an exchange constant of $A_{\mathrm{ex}}=21.58$\,pJ/m. } \label{fig:prob_cylinder_meanSquaredDisp_AiexVariations} \end{figure} To confirm the good agreement of the scaling approach, switching probabilities of the grains are computed as described in Sec.~\ref{sec:finite_size_effects}. The resulting probabilities for $A_{\mathrm{ex}}\pm 20$\,\% are shown in Fig.~\ref{fig:prob_AiexVariations}. In both cases the switching probability obtained from LLB simulations with the scaled material functions agrees better with $p$ from the directly fitted functions than $p_{\mathrm{sh}}$ does. The agreement is worse for $+20\,\% A_{\mathrm{ex}}$ than for $-20\,\% A_{\mathrm{ex}}$. Figure~\ref{fig:prob_cylinder_meanSquaredDisp_AiexVariations} compares the MSD ratios of the switching probabilities in the whole range of $A_{\mathrm{ex}}$ variations. As expected, the scaling approach performs much better than the shifting approach. All MSD ratios $\left \langle \left (p-p_{\mathrm{sc}}\right)^2 \right\rangle_\mathrm{r}$ are below 2.0, with the exception of $+20\,\% A_{\mathrm{ex}}$. In contrast, $\left \langle \left (p-p_{\mathrm{sh}}\right)^2 \right\rangle_\mathrm{r}$ is only comparable for $A_{\mathrm{ex}}$ variations up to $\pm 2.5$\,\%. Another important finding is that scaling the switching probability curve of a particle with $A_{\mathrm{ex}}=21.58$\,pJ/m to the new Curie temperature yields an excellent MSD ratio (see $\left \langle \left (p-p_{\mathrm{0,sc}}\right)^2 \right\rangle_\mathrm{r}$ in Fig.~\ref{fig:prob_cylinder_meanSquaredDisp_AiexVariations}).
This means that one only has to calculate the switching probabilities of a desired material once; a change of the exchange constant, and thus of $T_{\mathrm{C}}$, can then be considered by scaling the probability curve as: \begin{equation} \label{eq:scale_prob} \tilde{p}\left(T,T_{\mathrm{C}}\pm\Delta T_{\mathrm{C}}\right)=p\left( T\frac{T_{\mathrm{C}}\pm\Delta T_{\mathrm{C}}}{T_{\mathrm{C}}},T_{\mathrm{C}} \right). \end{equation} According to Fig.~\ref{fig:prob_cylinder_meanSquaredDisp_AiexVariations} this is valid for $A_{\mathrm{ex}}$ changes up to $\pm 10$\,\%. Typically, one assumes a distribution of the Curie temperature of 3\,\%\,$T_{\mathrm{C}}$. Hence, one can use Eq.~\ref{eq:scale_prob} to directly consider $T_{\mathrm{C}}$ distributions without the need to recalculate the switching probability for each variation of the exchange constant. \section{Conclusion} \label{sec:conclusion} To conclude, we presented an extensive study on how the material functions $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$, which are required to correctly integrate the Landau-Lifshitz-Bloch (LLB) equation (Eq.~\ref{eq:LLB}), depend on the size and the exchange constant of typical recording grains. The material functions for each system were extracted from atomistic Landau-Lifshitz-Gilbert (LLG) simulations. Further, we defined a reference particle and analyzed how scaling or shifting of its material curves, according to the changed Curie temperature, coincides with the separately computed ones. Additionally, we simulated a typical write process during heat-assisted magnetic recording (HAMR) and compared the resulting switching probabilities, based on the different input functions. We found that in the case of particle size variations the scaling and shifting approaches perform equally well within the investigated temperature range. Both approaches reproduce the correct $\widetilde{\chi}_{\parallel}(T)$, $\widetilde{\chi}_{\perp}(T)$ and $m_{\mathrm{e}}(T)$ curves as well as the correct switching probabilities for volume changes of up to $\pm 40$\,\%. The attempt to directly scale (or shift) the switching probability curve of the reference system (instead of recalculating it with scaled material functions) to the new Curie temperature yielded good results for volume changes of up to $\pm 10$\,\%. For the variation of the exchange constant the scaling approach performed better than the shifting approach. The error was negligible for differences in the exchange constant of up to $\pm 10$\,\%, which corresponds to a $T_{\mathrm{C}}$ variation of more than $\pm 50$\,K. Direct scaling of the switching probabilities turned out to have similar errors. Given that typically a 3\,\%\,$T_{\mathrm{C}}$ distribution must be considered in HAMR simulations, this finding makes it possible to significantly reduce the computation time of bit-error rates whilst maintaining accuracy. Our results suggest that switching probabilities do not need to be recalculated in HAMR studies when $T_{\mathrm{C}}$ distributions are considered; a simple scaling is sufficient. \section{Acknowledgements} The authors would like to thank the Vienna Science and Technology Fund (WWTF) under grant No. MA14-044, the Advanced Storage Technology Consortium (ASTC), and the Austrian Science Fund (FWF) under Grant Nos. F4112 SFB ViCoM and I2214-N20 for financial support.
The support from the CD-laboratory AMSEN (financed by the Austrian Federal Ministry of Economy, Family and Youth, and the National Foundation for Research, Technology and Development) is gratefully acknowledged. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC).
\section{Introduction} We consider a causal Gaussian process $X_{0:T-1}=(X_0^\mathsf{T},\dots,X_{T-1}^\mathsf{T})^\mathsf{T}$ evolving on $\mathbb{R}^d$. In this note we provide an elementary proof of the fact that the empirical covariance: \begin{align}\label{eq:empcov} \widehat \Sigma_X \triangleq \frac{1}{T}\sum_{t=0}^{T-1} X_t X_t^\mathsf{T} \end{align} is never much smaller than its (conditional) expectation. Analyzing the lower tail of \eqref{eq:empcov} has been the subject of a number of recent papers as it is crucial for characterizing the rate of convergence in linear system identification \citep{simchowitz2018learning,faradonbeh2018finite, sarkar2019near, tsiamis2019finite, oymak2021revisiting, tu2022learning, jedra2022finite, tsiamis2022statistical}. In these references, a number of elegant but rather advanced techniques can be found to control the lower tail of \eqref{eq:empcov} for various models (but mainly linear dynamical systems). Of these, perhaps the most well-known is the adaptation of the small-ball method of \cite{mendelson2014learning} by \cite{simchowitz2018learning}. Our aim with this note is to give a more accessible proof of these results in the Gaussian setup, but one which also easily extends to any causal Gaussian process (\Cref{thm:anticonc}), e.g., ARMA processes (\Cref{sec:arma}). The main idea here is based on \cite{ziemann2022learning}, which shows that one can often encode such ``small-ball behavior'', even for highly dependent processes, by a one-sided exponential inequality of Bernstein type (\Cref{thm:expineq}). The primary reason for our interest (and that of the above-mentioned references) in \eqref{eq:empcov} is the fact that in the linear regression model: \begin{align*} Y_t &= A_\star X_t + V_t &&t=0,\dots,T-1 &&(V_t\textnormal{ noise}) \end{align*} the error of the least squares estimator $\widehat A$ of the unknown parameter $A_\star$ can be expressed as: \begin{equation}\label{eq:LSEerror} \begin{aligned} \widehat A-A_\star = \left(\sum_{t=0}^{T-1} X_t X_t^\mathsf{T} \right)^{-1/2 }\left[\left(\sum_{t=0}^{T-1} X_t X_t^\mathsf{T} \right)^{-1/2 }\sum_{t=0}^{T-1} X_t V_t^\mathsf{T}\right]. \end{aligned} \end{equation} The rightmost term in \eqref{eq:LSEerror} can be shown to be (almost) time-scale invariant in many situations. For instance, if the noise $V_{0:T-1}$ is a sub-Gaussian martingale difference sequence with respect to the filtration generated by the covariates $X_{0:T-1}$, one can invoke the so-called self-normalized martingale theorem of \cite{abbasi2011improved} to show this. Whenever this is the case, the dominant term in the rate of convergence of the least squares estimator is $ \left(\sum_{t=0}^{T-1} X_t X_t^\mathsf{T} \right)^{-1/2 }$. Thus, providing control of the smallest eigenvalue of \eqref{eq:empcov} effectively yields control of the rate of convergence of the least squares estimator in many situations. Put differently, the smallest eigenvalue of \eqref{eq:empcov} quantifies the notion of persistency of excitation often encountered in system identification \cite{lennart1999system, willems2005note}. We also remark that two-sided bounds are often unsatisfactory for this purpose, and will indeed become hopeless for processes that are not stable.\footnote{While almost sharp for stable systems, the two-sided approach via the Hanson-Wright inequality becomes vacuous in the marginally stable regime \citep{jedra2022finite}.} Nevertheless, a one-sided bound is still possible.
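For concreteness, the following short Python sketch (a toy illustration, not part of the analysis) simulates a first-order vector autoregression, forms the empirical covariance \eqref{eq:empcov}, and computes the least squares estimator appearing in \eqref{eq:LSEerror}; the dynamics matrix is a hypothetical stable example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, T = 2, 5000
A_star = np.array([[0.9, 0.1],
                   [0.0, 0.7]])          # hypothetical stable dynamics

# Simulate the 1-causal process X_{t+1} = A_star X_t + W_t.
X = np.zeros((T, d))
for t in range(T - 1):
    X[t + 1] = A_star @ X[t] + rng.standard_normal(d)

Sigma_hat = X.T @ X / T                  # empirical covariance
print("lambda_min:", np.linalg.eigvalsh(Sigma_hat)[0])

# Least squares estimate of A_star from the regression of X_{t+1} on X_t.
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
print("spectral-norm error:", np.linalg.norm(A_hat - A_star, 2))
\end{verbatim}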
\section{Preliminaries} Fix two integers $T$ and $k$ such that $T/k\in \mathbb{N}$. We consider a $k$-causal Gaussian process $X_{0:T-1}=(X_0^\mathsf{T},\dots,X_{T-1}^\mathsf{T})^\mathsf{T}$ evolving on $\mathbb{R}^d$. More precisely, we assume the existence of a Gaussian white process evolving on $\mathbb{R}^p$, $W_{0:T-1}\sim N(0, I_{pT})$, and a (block-)lower triangular matrix $\mathbf{L} \in \mathbb{R}^{dT\times pT}$ such that $X_{0:T-1}=\mathbf{L}W_{0:T-1}$. We say that $X_{0:T-1}$ is $k$-causal if the matrix $\mathbf{L}$ has the form: \begin{align*} \mathbf{L} = \begin{bmatrix} \mathbf{L}_{1,1} &0 &0&0&0\\ \mathbf{L}_{2,1} & \mathbf{L}_{2,2} & 0 &0 &0\\ \mathbf{L}_{3,1} & \mathbf{L}_{3,2} & \mathbf{L}_{3,3} &0 &0\\ \vdots & \ddots & \ddots & \ddots &\vdots\\ \mathbf{L}_{T/k,1} &\dots & \dots & \dots&\dots \mathbf{L}_{T/k,T/k} \end{bmatrix} = \begin{bmatrix} \mathbf{L}_{1}\\ \mathbf{L}_{2}\\ \mathbf{L}_{3}\\ \vdots \\ \mathbf{L}_{T/k} \end{bmatrix} \end{align*} where each $\mathbf{L}_{ij} \in \mathbb{R}^{dk\times pk}, i,j \in [T/k] \triangleq \{1,2,\dots,T/k\}$. Obviously, every $1$-causal process is $k$-causal for every $k\in \mathbb{N}$ (for appropriate $T$). To every $k$-causal Gaussian process, we also associate a decoupled random process $\tilde X_{0:T-1} = \mathrm{blkdiag}(\mathbf{L}_{1,1},\dots, \mathbf{L}_{T/k,T/k})W_{0:T-1}$. This decoupled process will effectively dictate our lower bound, and we will show under relatively mild assumptions that \begin{align*} \lambda_{\min}\left (\frac{1}{T}\sum_{t=0}^{T-1} X_t X_t^\mathsf{T} \right)\gtrsim \lambda_{\min} \left(\frac{1}{T}\sum_{t=0}^{T-1}\mathbf{E} \tilde X_t \tilde X_t^\mathsf{T}\right) \end{align*} with probability that approaches $1$ at an exponential rate in the sample size $T$. Our proof will make heavy use of the following lemma. \begin{restatable}{lemma}{gausscondlem}\label{lem:gausscondlem} Fix $x\in \mathbb{R}^n$ and let $W\sim N(0,I_m)$. For any positive semidefinite $Q \in \mathbb{R}^{(n+m)\times(n+m)}$ of the form $ Q=\begin{bmatrix}Q_{11}& Q_{12}\\ Q_{21} & Q_{22} \end{bmatrix}$ and any $\lambda \geq 0$ we have that: \begin{align*} \mathbf{E} \exp \left( -\lambda \begin{bmatrix}x \\ W \end{bmatrix}^\mathsf{T} \begin{bmatrix}Q_{11}& Q_{12}\\ Q_{21} & Q_{22} \end{bmatrix} \begin{bmatrix}x \\ W \end{bmatrix}\right) \leq\exp \left(-\lambda \tr Q_{22} + \frac{\lambda^2}{2} \tr Q_{22}^2 \right). \end{align*} \end{restatable} In principle, we will use \Cref{lem:gausscondlem} to ``throw away'' the inter-block correlation in $\mathbf{L}$, thereby reducing the process $X_{0:T-1}$ to $\tilde X_{0:T-1}$, which is easier to analyze. \section{Results} Repeated application of \Cref{lem:gausscondlem} to the process $X_{0:T-1}=\mathbf{L}W_{0:T-1}$ yields our main result. \begin{theorem}\label{thm:expineq} Fix an integer $k \in \mathbb{N}$, let $T \in \mathbb{N}$ be divisible by $k$ and suppose $X_{0:T-1}$ is a $k$-causal Gaussian process. Fix also a matrix $\Delta\in \mathbb{R}^{d'\times d}$. Then for every $\lambda \geq 0$: \begin{multline*} \mathbf{E} \exp \left(-\lambda \sum_{t=0}^{T-1}\|\Delta X_t\|_2^2 \right) \\ \leq \exp \Bigg( -\lambda\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right] + \frac{\lambda^2}{2} \sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]^2 \Bigg).
\end{multline*} \end{theorem} It is worth pointing out that $$\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right] = \sum_{t=0}^{T-1}\|\Delta \tilde X_t\|_2^2 .$$ Hence \Cref{thm:expineq} effectively passes the expectation inside the exponential at the cost of working with the possibly less excited process $\tilde X_{0:T-1}$ and a quadratic correction term. Note also that the assumption that $T$ is divisible by $k$ is not particularly important. If not, let $T'$ be the largest integer such that $T'/k \in \mathbb{N}$ and $T'\leq T$ and apply the result with $T'$ in place of $T$. The significance of \Cref{thm:expineq} is demonstrated by the following simple calculation. Namely, for any fixed $\Delta \in \mathbb{R}^{d'\times d} \setminus \{0\}$ and $\lambda \geq 0$ we have that: \begin{equation} \label{eq:simplechernoff} \begin{aligned} &\mathbf{P} \left( \sum_{t=0}^{T-1} \|\Delta X_t\|^2 \leq \frac{1}{2} \sum_{t=0}^{T-1} \mathbf{E} \|\Delta\tilde X_t\|^2 \right)\\ &\leq \mathbf{E} \exp \left( \frac{\lambda}{2} \sum_{t=0}^{T-1} \mathbf{E} \|\Delta\tilde X_t\|^2 -\lambda \sum_{t=0}^{T-1} \|\Delta X_t\|^2 \right) && (\textnormal{Chernoff})\\ &\leq \exp \Bigg( -\frac{\lambda}{2}\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right] \\ &+ \frac{\lambda^2}{2} \sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]^2 \Bigg) && (\textnormal{\Cref{thm:expineq}})\\ &= \exp \left(-\frac{\left(\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]\right)^2}{8 \sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]^2 } \right) \end{aligned} \end{equation} by optimizing $\lambda$ in the last line. The point is that the bound \eqref{eq:simplechernoff} decays exponentially in $T$ as long as the blocks on the diagonal of $\mathbf{L}$ have condition numbers of constant order. In most applications, this can typically be achieved by a judicious choice of $k$. This leads us to define the following hypercontractive parameter: \begin{equation}\label{eq:hypcon} \psi_k \triangleq \inf_{\Delta\in \mathbb{R}^{d'\times d}\setminus\{0\}} \left\{\frac{\left(\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]\right)^2}{ T\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]^2 } \right\}. \end{equation} Here, hypercontractive refers to the fact that \eqref{eq:hypcon} is essentially a moment equivalence condition \citep[Cf.][Definition 4.1]{ziemann2022learning}. Note that $\psi_k$ depends implicitly on $k$ since the block-length dictates the covariance structure of $\tilde X_{0:T-1}$. We remark that if all the diagonal blocks of $\mathbf{L}$ are identical, the process $\tilde X_{0:T-1}$ has period $k$.
In this case, by Cauchy--Schwarz: \begin{equation} \begin{aligned}\label{eq:polykbound} &\frac{\left(\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]\right)^2}{ T\sum_{j=1}^{T/k} \tr\left[ \mathbf{L}_{j,j}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{j,j}\right]^2 } = \frac{\left( \tr\left[ \mathbf{L}_{1,1}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{1,1}\right]\right)^2}{ k \tr\left[ \mathbf{L}_{1,1}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{1,1}\right]^2 } \geq \frac{1}{k}. \end{aligned} \end{equation} This is, for instance, true for any linear time-invariant dynamics and thus, for these, we always have $\psi_k \geq 1/k$. Returning to our overarching goal of providing control of the smallest eigenvalue of the empirical covariance matrix \eqref{eq:empcov}, we now combine \eqref{eq:simplechernoff} (using $d'=1$) with a union bound. We arrive at the following result.\footnote{Similar results can also be obtained for restricted eigenvalues.} \begin{theorem}\label{thm:anticonc} Suppose $ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right)>0$. Under the hypotheses of \Cref{thm:expineq} we have that: \begin{multline} \mathbf{P} \left( \lambda_{\min} \left( \frac{1}{T} \sum_{t=0}^{T-1} X_t X_t^\mathsf{T}\right) \leq \lambda_{\min}\left(\frac{1}{8T} \sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T}\right) \right) \\ \leq \left(8 \sqrt{ 1+\frac{\psi_k T \lambda_{\max} (\mathbf{E} [X_{0:T-1}X_{0:T-1}^\mathsf{T}] )}{ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} [X_tX_t^\mathsf{T}] \right)} } \times \sqrt{\frac{ \lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T}\right) }{ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right)}}\right)^d \exp \left( \frac{-\psi_k T}{8} \right) . \end{multline} \end{theorem} Note that we always have (although this estimate is far from sharp): \begin{equation} \frac{\lambda_{\max} (\mathbf{E} [X_{0:T-1}X_{0:T-1}^\mathsf{T}] )} { \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} [X_tX_t^\mathsf{T}] \right)} \leq \frac{ \lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T} \right) } {\lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right) }. \end{equation} As long as $\lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T} \right) \Big/\lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right) = O(\mathrm{poly}(T)) $ and $\psi_k=\Omega(T^{-\alpha})$ for some $\alpha \in (0,1)$, \Cref{thm:anticonc} gives a nontrivial lower bound on the smallest eigenvalue of \eqref{eq:empcov} which holds with probability approaching $1$ at an exponential rate in the sample size $T$. \begin{remark} We have chosen to apply the union bound to \eqref{eq:simplechernoff} to highlight \Cref{thm:expineq} and keep the rest of the exposition as concise as possible. The union bound is improvable in many situations by the PAC-Bayesian approach \cite[cf.][Theorem 1.1]{oliveira2016lower} and we expect this to yield a somewhat sharper failure probability.
\end{remark} \section{Application to ARMA Processes}\label{sec:arma} We consider linear time-invariant dynamics of the form: \begin{align}\label{eq:ARMA} X_{t} &= \sum_{l=1}^{L}A_{l}X_{t-l}+\sum_{m=1}^M B_m W_{t-m}, &&X_{-L:-1}=0,&& W_{-M:-1}=0, && t=0,1,2,\dots \end{align} where each $A_l\in \mathbb{R}^{d\times d}$ and each $B_m \in \mathbb{R}^{d \times p}$ with $l \in [L], m \in [M]$. Let $ \kappa \triangleq \inf\left\{ k : \det \left(\sum_{t=0}^{k-1} \mathbf{E} X_t X_t^\mathsf{T}\right) \neq 0 \right\}.$ Set also $\Gamma_k = \frac{1}{k} \sum_{t=0}^{k-1}\mathbf{E} X_t X_t^\mathsf{T}$. Let us also define $A \in \mathbb{R}^{dL\times dL}$ and $B \in \mathbb{R}^{dL \times pM}$ by: \begin{align}\label{eq:defAB} A &\triangleq \begin{bmatrix} A_1 & \dots &\dots & \dots & A_L\\ I_d & 0 &\dots & \dots & 0\\ 0 & I_d & 0 & \dots & \vdots \\ \vdots & \ddots & \ddots & \ddots &\vdots\\ 0 & \dots & 0 & I_d & 0 \end{bmatrix}, & B^\mathsf{T} &\triangleq \begin{bmatrix} B_1^\mathsf{T} &\\ B_2^\mathsf{T} &\\ \vdots &0\\ \vdots & \\ B_M^\mathsf{T} & \end{bmatrix}. \end{align} With these definitions in place, we may invoke \Cref{thm:anticonc} to control the empirical covariance corresponding to \eqref{eq:ARMA}. \begin{corollary}\label{corr:ARMA} Fix an integer $k \geq \kappa$ such that $T/k \in \mathbb{N}$. If $X_{0:T-1}$ is given by \eqref{eq:ARMA}, we have that: \begin{multline}\label{eq:ARMAbound} \mathbf{P} \left( \lambda_{\min} \left( \frac{1}{T} \sum_{t=0}^{T-1} X_t X_t^\mathsf{T}\right) \leq \frac{1}{8} \lambda_{\min}\left(\Gamma_k\right) \right) \\ \leq \left(32 \frac{ e T^{5/2} \opnorm{BB^\mathsf{T}} \sum_{t=0}^{T-1}\bigopnorm{ A^{t-1} (A^{t-1})^\mathsf{T}} }{ \sqrt{k}\lambda_{\min} \left( \Gamma_k \right)}\right)^d \exp \left( \frac{- T}{8k} \right) . \end{multline} \end{corollary} The proof of the above corollary follows immediately from \Cref{thm:anticonc} and \Cref{lem:armastability}, combined with the observation that we may choose $\psi_k \geq 1/k$. A few remarks are in order. First, \eqref{eq:ARMAbound} provides nontrivial control of the smallest eigenvalue of the empirical covariance of any ARMA process that satisfies: 1. the matrix $A$ in \eqref{eq:defAB} satisfies $\rho(A) \leq 1$ (marginal stability); and 2. $\kappa < \infty$ (controllability). When specialized to first order processes, our result can be compared with \cite[Section D.1]{simchowitz2018learning}; our failure probabilities match theirs up to logarithmic factors. \section{Proofs} \paragraph{Proof of \Cref{thm:expineq}} By repeated use of the tower property we have that: \begin{equation}\label{eq:towerprop} \begin{aligned} &\mathbf{E} \exp \left(-\lambda \sum_{t=0}^{T-1}\|\Delta X_t\|_2^2 \right) \leq \mathbf{E}\exp \left(-\lambda \sum_{t=0}^{k-1}\|\Delta X_t\|_2^2\right) \times \dots \times \mathbf{E}_{T-k-1} \exp \left(-\lambda \sum_{t=T-k}^{T-1}\|\Delta X_t\|_2^2 \right). \end{aligned} \end{equation} We will bound each conditional expectation in \eqref{eq:towerprop} separately.
Observe that \begin{align*} &\sum_{t=T-k}^{T-1} \|\Delta X_t\|_2^2 = \begin{bmatrix} \Delta X_{T-k}\\ \vdots\\ \Delta X_{T-1} \end{bmatrix}^\mathsf{T} \begin{bmatrix} \Delta X_{T-k}\\ \vdots\\ \Delta X_{T-1} \end{bmatrix} = W_{0:T-1}^\mathsf{T}\mathbf{L}_{T/k}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{T/k} W_{0:T-1}. \end{align*} In light of \Cref{lem:gausscondlem} we have that: \begin{multline*} \mathbf{E}_{T-k-1} \exp \left(-\lambda \sum_{t=T-k}^{T-1}\|\Delta X_t\|_2^2\right) \\ \leq \exp \Bigg( -\lambda\tr\left[ \mathbf{L}_{T/k,T/k}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{T/k,T/k}\right] + \frac{\lambda^2}{2}\tr\left[ \mathbf{L}_{T/k,T/k}^\mathsf{T} \mathrm{blkdiag}(\Delta^\mathsf{T} \Delta) \mathbf{L}_{T/k,T/k}\right]^2 \Bigg). \end{multline*} Repeatedly applying \Cref{lem:gausscondlem} as above yields the result. \hfill $\blacksquare$ \paragraph{Proof of \Cref{thm:anticonc}} Let $\mathcal{N}_\varepsilon$ be an optimal $\varepsilon$-cover of the unit sphere $\mathbb{S}^{d-1}$ and fix a multiplier $q\in (1,\infty)$. We define the events (i.e., $\Delta = v^\mathsf{T}$): \begin{equation} \begin{aligned} \mathcal{E}_1 &= \bigcup_{v\in \mathcal{N}_\varepsilon} \left\{ \frac{1}{T} \sum_{t=0}^{T-1} v^\mathsf{T} X_tX_t^\mathsf{T} v \leq \frac{1}{2T} \sum_{t=0}^{T-1} \mathbf{E} v^\mathsf{T} \tilde X_t\tilde X_t^\mathsf{T} v \right\} \\ \mathcal{E}_2& =\bigcup_{v\in \mathcal{N}_\varepsilon} \Bigg\{ \sum_{t=0}^{T-1} (v-v_i)^\mathsf{T} X_tX_t^\mathsf{T} (v-v_i) \geq q \times \sum_{t=0}^{T-1} (v-v_i)^\mathsf{T} \mathbf{E} [X_tX_t^\mathsf{T}] (v-v_i) \Bigg\}. \end{aligned} \end{equation} For any $v$, it is true on the complement of $\mathcal{E}= \mathcal{E}_1 \cup \mathcal{E}_2$ that for every $v_i \in \mathcal{N}_\varepsilon$: \begin{equation} \begin{aligned} &\frac{1}{T} \sum_{t=0}^{T-1} v^\mathsf{T} X_tX_t^\mathsf{T} v \\ &\geq \frac{1}{2T} \sum_{t=0}^{T-1} v^\mathsf{T}_i X_tX_t^\mathsf{T} v_i - \frac{1}{2T}\sum_{t=0}^{T-1} (v-v_i)^\mathsf{T} X_tX_t^\mathsf{T} (v-v_i) \\ &\geq \frac{1}{4T}\sum_{t=0}^{T-1} v_i^\mathsf{T}\, \mathbf{E} [\tilde X_t \tilde X_t^\mathsf{T}]\, v_i-\frac{q\varepsilon^2}{2T}\sum_{t=0}^{T-1} \tilde v_i^\mathsf{T} \mathbf{E} [X_tX_t^\mathsf{T}] \tilde v_i \end{aligned} \end{equation} where $\tilde v_i = (v-v_i)/\varepsilon$ has norm at most one for some choice of $v_i$ by the covering property. For this choice we have that: \begin{align*} \frac{1}{T} \sum_{t=0}^{T-1} v^\mathsf{T} X_tX_t^\mathsf{T} v &\geq \frac{1}{8T}\sum_{t=0}^{T-1} v_i^\mathsf{T} \mathbf{E} [\tilde X_t \tilde X_t^\mathsf{T}] v_i \end{align*} as long as: \begin{align*} \varepsilon^2 \leq \frac{ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right)}{4q \lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T}\right) }. \end{align*} To finish the proof, it suffices to estimate the failure probabilities $\mathbf{P}(\mathcal{E}_1)$ and $\mathbf{P}(\mathcal{E}_2)$. By \eqref{eq:simplechernoff}, a volumetric argument \citep[see e.g.][Example 5.8]{wainwright2019high} and our particular choice of $\varepsilon$ we have: \begin{align*} \mathbf{P}(\mathcal{E}_1)& \leq \left(1+\frac{2}{\varepsilon} \right)^d \exp \left( \frac{-\psi_k T}{8} \right)\\ &\leq \left(4\sqrt{\frac{ q\lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T}\right) }{ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right)}}\right)^d \exp \left( \frac{-\psi_k T}{8} \right).
\end{align*} Another union bound combined with \eqref{eq:theuppertail} yields: \begin{align*} \mathbf{P}(\mathcal{E}_2) \leq \left(4\sqrt{\frac{ q\lambda_{\max} \left(\sum_{t=0}^{T-1} \mathbf{E} X_t X_t^\mathsf{T}\right) }{ \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} \tilde X_t \tilde X_t^\mathsf{T} \right)}}\right)^d \exp \left( \frac{-(q-1) \lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E}[X_t X_t^\mathsf{T}]\right) }{8\lambda_{\max} (\mathbf{L}^\mathsf{T} \mathbf{L} )} \right). \end{align*} By choosing $$q=1+\frac{\psi_k T \lambda_{\max} (\mathbf{L}^\mathsf{T} \mathbf{L} )}{\lambda_{\min} \left(\sum_{t=0}^{T-1} \mathbf{E} [X_tX_t^\mathsf{T}] \right)},$$ the result holds on the complement of $\mathcal{E}_1\cup \mathcal{E}_2$ and thus also holds with the desired probability. \hfill $\blacksquare$ \subsection{Proofs related to ARMA processes} \begin{lemma}\label{lem:armastability} For $X_{0:T-1}$ given by \eqref{eq:ARMA}, we have that: \begin{align*} \bigopnorm{\sum_{t=0}^{T-1}\mathbf{E} X_t X_t^\mathsf{T} } \leq 2eT^2 \opnorm{BB^\mathsf{T}} \sum_{k=0}^{T-1}\bigopnorm{ A^{T-k-1} (A^{T-k-1})^\mathsf{T}} \end{align*} \end{lemma} \begin{proof} Let $Z_t = X_{t:t-L+1}$ and $V_t = W_{t:t-M+1}$. Then $Z_{t+1} = AZ_t + BV_t$. Notice now that $ Z_t = \sum_{k=0}^{t-1} A^{t-k-1} BV_{k}$. It is straightforward to verify that:\footnote{Observe that for any two size-conforming matrices $M$ and $N$ we have: $(M+N)(M+N)^\mathsf{T} \preceq (1+1/T) MM^\mathsf{T} + (1+T)NN^\mathsf{T}$. Apply this fact repeatedly.} \begin{align*} \sum_{t=0}^{T-1} Z_t Z_t^\mathsf{T} \preceq 2eT \sum_{t=0}^{T-1} \sum_{k=0}^{t-1} A^{t-k-1} BV_{k} V_k^\mathsf{T} B^\mathsf{T} (A^{t-k-1})^\mathsf{T}. \end{align*} Hence \begin{equation} \begin{aligned} \bigopnorm{ \sum_{t=0}^{T-1} \mathbf{E} Z_t Z_t^\mathsf{T}} &\leq 2eT \bigopnorm{\sum_{t=0}^{T-1} \sum_{k=0}^{t-1} A^{t-k-1} BB^\mathsf{T} (A^{t-k-1})^\mathsf{T}}\\ &\leq 2eT^2 \opnorm{BB^\mathsf{T}} \sum_{k=0}^{T-1}\bigopnorm{ A^{T-k-1} (A^{T-k-1})^\mathsf{T}}. \end{aligned} \end{equation} It is further clear that $$\bigopnorm{\sum_{t=0}^{T-1}\mathbf{E} X_t X_t^\mathsf{T} } \leq \bigopnorm{\sum_{t=0}^{T-1}\mathbf{E} Z_t Z_t^\mathsf{T} }$$ since $X_t$ can be recovered from $Z_t$ by projection and thus the result follows. \end{proof} \subsection{Facts about the Gaussian distribution} \gausscondlem* \paragraph{Proof of \Cref{lem:gausscondlem}} A straightforward calculation gives \citep[see e.g.][Lemma B.7]{tu2022learning}: \begin{align*} \mathbf{E} \exp \left( -\lambda \begin{bmatrix}x \\ W \end{bmatrix}^\mathsf{T} \begin{bmatrix}Q_{11}& Q_{12}\\ Q_{21} & Q_{22} \end{bmatrix} \begin{bmatrix}x \\ W \end{bmatrix}\right) \leq \left(\det (I+2\lambda Q_{22})\right)^{-1/2}. \end{align*} Hence in particular, by invoking $\log(1+x) \geq x- x^2/2$ (valid for $x\geq 0$) and applying it to each eigenvalue, we have the result. \hfill $\blacksquare$ \begin{lemma} For any $\lambda \in \left[0, \frac{1}{4\lambda_{\max}(\mathbf{L}^\mathsf{T} \mathbf{L})}\right]$ and $v\in \mathbb{R}^d$ with $\|v\|_2^2\leq 1$, we have that: \begin{align*} \mathbf{E} \exp \left(\lambda \sum_{t=0}^{T-1} v^\mathsf{T} X_t X_t^\mathsf{T} v \right) \leq \exp \left( 4\lambda \sum_{t=0}^{T-1} v^\mathsf{T} \mathbf{E} [X_t X_t^\mathsf{T}] v \right). \end{align*} \begin{proof} Let $\mathbf{L}_v = ( I_{T}\otimes v^\mathsf{T} )\mathbf{L}$.
Since $(v^\mathsf{T} X)_{0:T-1} =\mathbf{L}_v W_{0:T-1}$, a standard calculation gives \begin{align*} &\mathbf{E} \exp \left(\lambda W_{0:T-1}^\mathsf{T} \mathbf{L}_v^\mathsf{T} \mathbf{L}_v W_{0:T-1} \right) = \left( \det(I-2\lambda \mathbf{L}_v^\mathsf{T} \mathbf{L}_v ) \right)^{-1/2} = \exp \left( -\sum_{i=1}^{Td} \log \left(1-2\lambda \times \lambda_i(\mathbf{L}_v^\mathsf{T} \mathbf{L}_v )\right)\right). \end{align*} The result follows by repeated application of the numerical inequality: $ -\log(1-x) \leq 2x $ (which is valid for all $x\in [0, 1/2]$). \end{proof} \end{lemma} The preceding lemma easily yields an upper tail-bound for the empirical covariance by a Chernoff argument: \begin{equation}\label{eq:theuppertail} \begin{aligned} \mathbf{P} \left( \sum_{t=0}^{T-1} v^\mathsf{T} X_t X_t^\mathsf{T} v \geq q\sum_{t=0}^{T-1} v^\mathsf{T} \mathbf{E} [ X_t X_t^\mathsf{T}] v \right) \leq \exp \left( \frac{-(q-1) \sum_{t=0}^{T-1} v^\mathsf{T} \mathbf{E}[X_t X_t^\mathsf{T}]v }{8\lambda_{\max} (\mathbf{L}^\mathsf{T} \mathbf{L} )} \right). \end{aligned} \end{equation} \input{acks} \bibliographystyle{alpha}
\section*{SUPPLEMENTARY INFORMATION} \section*{Cycling scheme and level structure} The $X^2\Sigma_{1/2}^+\!\rightarrow\!A^2\Pi_{1/2}$ electronic transition employed in this work has a lifetime $\tau_{A}=1/\Gamma=24.1$~ns. The vibrational branching for this excited state is shown in Fig. \ref{fig:firstfig}a and dictates that only four lasers are required to cycle $\gtrsim10^{6}$ photons. Uncertainties in the calculated vibrational branching fractions $b_{v'v}$ stem largely from uncertainties in the molecular constants for the $A^2\Pi_{1/2}$ state used to calculate Franck-Condon factors. Although the errors in these constants are small, the resulting fractional uncertainty in calculated values of $b_{v'v}$ may be significant for off-diagonal terms ($v\ne v'$) where Franck-Condon factors are strongly suppressed and vibrational branching ratios are small \cite{Barry2013}. The SR/HF structure for the $X^{2}\Sigma_{1/2}(v=0,N=1)$ state in the presence of a weak magnetic field is shown in Fig. \ref{fig:firstfig}b. For the three repump lasers ($\mathcal{L}_{10}$, $\mathcal{L}_{21}$, and $\mathcal{L}_{32}$), the modulation frequency $f_\text{mod}=42.5$~MHz is chosen so the first- and second-order sidebands address all four SR/HF transitions. The value of $f_\text{mod}=40.4$~MHz for the $\mathcal{L}_{00}$ light is chosen to minimize the root-mean-squared (r.m.s.) value of the detuning for the upper three SR/HF levels at $B=0$~G, while a separate laser addresses the lowest SR/HF level. The need for an additional trapping laser, with the opposite polarization, is discussed in the main text. A Breit-Rabi diagram showing the energy dependence of each sublevel vs. magnetic field is shown in Fig. \ref{fig:firstfig}c. The level crossings in the range $B=15$ to $25$~G may limit the effective trap radius for a given $B$-field gradient since, at sufficiently high fields, the trap light frequency addressing the $|J\!\!=\!\!3/2,F\!\!=\!\!1\rangle$ manifold becomes anti-trapping for the $|J\!\!=\!\!3/2,F\!\!=\!\!2\rangle$ manifold. Note that other trapping/anti-trapping level crossings are located at higher magnetic bias fields. \begin{figure*} \centering \includegraphics[width=18.3cm] {firstfig15g.eps} \caption{\textbf{a}, Vibrational branching in SrF. Solid upward lines denote transitions driven by the MOT lasers. Spontaneous decays from the A$^2\Pi_{1/2}(v'=0)$ state (solid wavy) and A$^2\Pi_{1/2}(v'=1,2)$ states (dashed wavy) are governed by the vibrational branching fractions $b_{0v}$, $b_{1v}$, and $b_{2v}$, as shown. \textbf{b} Optical addressing scheme for the SrF MOT presented and discussed in the main text. \textbf{c}, Energy levels of the $X^2\Sigma_{1/2}(v=0,N=1)$ state versus $B$. Energy levels are labeled by their $m_F$ value with $m_F=2$ {(\color{red}\LARGE\textbf{-}\normalsize\color{black})}, $m_F=1$ {(\color{orange}\LARGE\textbf{-}\normalsize\color{black})}, $m_F=0$ {(\color{green}\LARGE\textbf{-}\normalsize\color{black})}, $m_F=-1$ {(\color{blue}\LARGE\textbf{-}\normalsize\color{black})} , $m_F=-2$ {(\color{Fuchsia}\LARGE\textbf{-}\normalsize\color{black})}.} \label{fig:firstfig} \end{figure*} \section*{Radiation pressure slowing} The molecular beam is slowed by three lasers, denoted $\mathcal{L}_{00}^\text{s}$, $\mathcal{L}_{10}^\text{s}$, and $\mathcal{L}_{21}^\text{s}$ (where the ``s'' superscript indicates slowing), which have powers of 205~mW, 185~mW and 35~mW respectively.
The $\mathcal{L}_{00}^\text{s}$ and $\mathcal{L}_{21}^\text{s}$ lasers are horizontally polarized while the $\mathcal{L}_{10}^\text{s}$ laser is vertically polarized. These lasers are spatially overlapped to produce a single beam with $1/e^2$ intensity diameter d $\sim$\! 3~mm, applied counter-propagating to the molecular beam. A uniform field $B^\text{s} \! \sim \! 9$~G is applied at an angle $\theta=45^\circ$ relative to the linear polarizations of the lasers over the distance -200~mm~$\lesssim\! z'\! \lesssim$~1600~mm, where $z'=0$ marks the exit of the cell in the CBGB source and $z'$ denotes the downstream distance along the molecular beam. The magnetic field for the slowing is non-zero only when the slowing lasers are applied. The properties of the slowing (laser center frequencies and frequency extents, application time and duration of the slowing, and value of $B^\text{s}$) are optimized by imaging the MOT after the slowed molecular beam pulse has fully subsided (here from $t=80$ to $t=110$~ms). The optimized frequency detunings of the $\mathcal{L}_{00}^\text{s}$, $\mathcal{L}_{10}^\text{s}$, and $\mathcal{L}_{21}^\text{s}$ lasers are $\Delta_{00}^{\text{s}}=-161$~MHz, $\Delta_{10}^{\text{s}}= -122$~MHz, and $\Delta_{21}^{\text{s}}=-103$~MHz. The spectra are broadened to address a wide range of Doppler shifts associated with the broad velocity spread from the CBGB source. The spectral widths are $340$~MHz, $440$~MHz, and $570$~MHz for the $\mathcal{L}_{00}^\text{s}$, $\mathcal{L}_{10}^\text{s}$, and $\mathcal{L}_{21}^\text{s}$ lasers respectively (Fig. \ref{fig:v00splot}). Further details on the slowing can be found in Refs. \cite{Barry2012,Barry2013}. Fig. \ref{fig:exampleslowing} shows a sample slowed velocity profile used to load the MOT, along with the unslowed velocity profile of the source; both profiles are detected upstream of the trapping region at $z'_\text{det}=1076$~mm. \section*{Trapping region} The trapping region for the MOT is centered at $z'=1382$~mm and is separated from the beam propagation region by a differential pumping tube (127~mm long, 12.7~mm diameter) beginning at $z' \! \approx \! 900$~mm. In the trapping region, the pressure of all background gas excluding He is $P_\text{BG} \approx 4\times 10^{-10}$~Torr while the helium background pressure is $P_\text{He} \approx 2\times 10^{-9}$~Torr. \section*{MOT optimization} The optimum magnetic field gradient for the MOT is $dB_{z}/dz=15$~G/cm; the MOT is visible between 4 and 30~G/cm. The MOT is sensitive to the values of the laser detunings $\Delta_{00}$ and $\Delta_{00}^\dagger$ (and less sensitive to the value of $\Delta_{10}$) as shown in Fig. \ref{fig:LIFvsdetuning}. The MOT is insensitive to the detunings of the $\mathcal{L}_{21}$ and $\mathcal{L}_{32}$ lasers. \begin{figure} \includegraphics[width=8.9cm]{v00splot.eps} \caption{\textbf{a}, Scale diagram showing the frequency extent of the $\mathcal{L}_{00}^s$, $\mathcal{L}_{10}^\text{s}$, and $\mathcal{L}_{21}^s$ slowing lasers relative to the four SR/HF manifolds of SrF's X$^2\Sigma_{1/2}(v=0,1,2;N=1)$ states. The relative splittings of the four SR/HF levels in the X$^2\Sigma_{1/2}(N=1)$ state are the same to within $\sim \! 1$~MHz for $v=0,1,2$ \cite{Barry2013}. The dashed lines mark the centers of the $N=1$ SR/HF levels for the labelled velocity, and the level structure shown corresponds to $v=0$~m/s. \textbf{b}, Optimized spectral profiles of the three slowing lasers. 
The top scale shows velocity for a Doppler shift equivalent to the frequency labelled on the bottom scale. The $\mathcal{L}_{00}^s$ light is modulated via a fiber EOM with $f_\text{mod}=3.5$~MHz. The $\mathcal{L}_{10}^\text{s}$ and $\mathcal{L}_{21}^s$ lasers are each modulated by passing through two bulk EOMs with resonant frequencies at $\approx40$~MHz and $\approx9$~MHz.} \label{fig:v00splot} \end{figure} \begin{figure} \centering \includegraphics[width=8.9cm]{exampleslowing1.eps} \caption{Examples of slowed (\footnotesize{\color{gray}$\bullet$}\normalsize) and unslowed (\footnotesize{\color{black}$\blacksquare$}\normalsize) velocity profiles of the molecular beam detected upstream from the trapping region at $z'_\text{det}=1076$~mm. These profiles are for the optimized slowing conditions that produce the largest MOTs as discussed in the main text.} \label{fig:exampleslowing} \end{figure} \begin{figure} \includegraphics[width=8.9cm]{LIFvsdetuning.eps} \caption{LIF in the trapping region vs. detuning when $\Delta_{00}$ and $\Delta_{00}^\dagger$ are varied together (\footnotesize{\color{black}$\blacksquare$}\normalsize), when $\Delta_{00}^\dagger$ is varied alone (\small{\color{red}$\bullet$}\normalsize), and when $\Delta_{10}$ is varied (\small{\color{blue}$\blacktriangle$}\normalsize). As expected (and typically observed for atomic MOTs), the SrF MOT operates over a fairly narrow range of red detuning values for the trapping lasers but requires only that the repump laser be sufficiently near resonance.} \label{fig:LIFvsdetuning} \end{figure} As discussed in Refs. \cite{Barry2013,Tarbutt2013}, laser cooling schemes with a large number of resolved ground states can require significantly more power than those employing a two-level system with the same wavelength and electronic excited state lifetime. Briefly, an $F\!=\!1\rightarrow F'\!=\!0$-type transition will have a saturation intensity $3\times$ higher than the saturation intensity for a two-level system of the same wavelength and lifetime. Resolved ground state energy levels also increase the required intensity by dictating that total laser power be divided up among several frequencies, each driving a weaker transition. Hence, as anticipated, our molecular MOT requires substantially more laser power than standard atomic MOTs. Upon exiting the MOT fiber, the $\mathcal{L}_{00}, \mathcal{L}_{00}^\dagger, \mathcal{L}_{10}$, $\mathcal{L}_{21}$, and $\mathcal{L}_{32}$ laser powers are typically 210~mW, 50~mW, 170~mW, 5~mW, and 3~mW respectively. \section*{Spontaneous scattering rate for trapped molecules} To measure the spontaneous photon scattering rate, $R_\text{sc}$, LIF is recorded as a function of imaging start time $t_\text{im}$, which is scanned from $t_\text{im}=54$~ms to $t_\text{im}=62$~ms. The $\mathcal{L}_{21}$ repump light is blocked at $t_\text{bl}=58.6$~ms (see main text). The finite duration of the camera exposure, $t_\text{exp}=1$~ms, results in a recorded LIF signal $Y(t)$ that is a convolution of the real instantaneous LIF intensity, denoted $X(t)$, with the camera exposure window, i.e., \begin{equation} Y(t) = \int_{t_\text{im}}^{t_\text{im}+t_\text{exp}} X(t')dt'.
\end{equation} Given the comparatively long unperturbed MOT lifetime, $X(t)$ is modeled as a linear function prior to the blocking of the $\mathcal{L}_{21}$ repump light ($t=54$ to $t_{0}=t_\text{bl}-t_\text{exp}=57.6$~ms), followed by (from $t_{0}$) an exponential decay plus an additional linear background term (to account for LIF from the tail of the slowed but untrapped molecular beam); this background term is deduced from a fit to the data from $t = 59$ to $t=62$~ms. This function has the form \begin{align} X(t) = (m_\text{MOT}t+&c_\text{MOT})H(t_{0}-t)\nonumber \\ & + (D_{0}\,e^{-(t-t_{0})/\tau_\text{v=2}} + m_\text{bg}t+c_\text{bg})H(t-t_{0}), \end{align} where $m_\text{MOT}$ and $c_\text{MOT}$ ($m_\text{bg}$ and $c_\text{bg}$) are the gradient and intercept respectively of the linear fit to the LIF from the MOT (background), $D_{0}$ is the amplitude coefficient of the exponential decay term, and $H(t)$ denotes the Heaviside step function. The measured scattering rate $R_\text{sc}=4.3_{-2.2}^{+4.1}\times10^6$ s$^{-1}$ is close to the maximum scattering rate for this system, $R_\text{max}=\frac{1}{7}\times1/\tau_{A}=5.9\times10^{6}$~s$^{-1}$ \cite{Metcalf1999}. The value of $R_\text{sc}$ is similar to those measured in atomic MOTs. This observation suggests the possibility of producing strong confining and damping forces, roughly comparable to those in atomic MOTs, if the fraction of scattered photons contributing to the confining force can be greatly increased. \section*{MOT detection} The laser induced fluorescence (LIF) collection optics consist of a 150~mm focal-length spherical-singlet lens, followed by an F/0.95 camera lens, then a 650~nm-bandpass interference filter, and finally a CCD camera. The interference filter reflects all repump light at $\lambda_{10}=686.0$~nm, $\lambda_{21}=685.4$~nm, and $\lambda_{32}=684.9$~nm for any angle of incidence (AOI) and transmits $>99\%$ of the $\lambda_{00}=663.3$~nm light at normal incidence; however, transmission at $\lambda_{00}$ is reduced for AOI $\gtrsim$ 23$^\circ$. Using the MOT chamber geometry and assuming the distribution of LIF from the MOT is isotropic, we calculate the geometric collection efficiency of the LIF optics to be $\eta_\text{geo} = 1.1\%$. The amount of light reaching the CCD is further reduced by transmission losses (characterized by $\eta_\text{tra}$) and by AOI-dependent losses of the bandpass filter (characterized by $\eta_\text{fil}$). We measure $\eta_\text{tra}=0.84$ by tabulating the transmission efficiency of 663.3~nm light through each element of the collection optics at normal incidence. The value of $\eta_\text{fil}$ is measured as follows. Light emission from the MOT is simulated by back-illuminating a thick piece of white Delrin with 663.3~nm light. The front surface of the Delrin is covered except for a 5~mm hole. This creates approximately uniform emission of light over the range of angles incident on the collection optics. The total number of photons hitting the CCD is measured in the presence of all collection optics and again with only the interference filter and CCD present. In this latter configuration, reflection of $663.3$~nm light by the interference filter is negligible since all light is near normal incidence. The ratio of these two numbers is then divided by the ratio of the solid angle subtended by the collection optics versus by the CCD sensor alone.
Finally, dividing by the transmission losses through the lenses gives the filter transmission efficiency $\eta_\text{fil} = 0.82$ for this geometry. We measure the CCD gain to be $G\approx5.5$~counts/photoelectron and assume the manufacturer-specified quantum efficiency $\eta_\text{qe}=0.53$ for $663.3$~nm light. The magnification of our imaging system, $M_\text{mag}$, is measured using a grid of black squares back-illuminated with 663~nm light and placed at the appropriate distance from the collection optics. We measure $M_\text{mag}=0.45$, giving a 19.9~mm (horizontal) $\times$ 14.9~mm (vertical) field-of-view. Due to the high power of 663.3~nm laser light passing through the MOT chamber, scattered light is the primary noise source for the imaging. Several steps are taken to minimize the amount of scattered light reaching the camera. High-quality UV fused silica (10-5 scratch-dig) vacuum windows are used for all laser beams. These windows are anti-reflection-coated for 663~nm light and mounted on vacuum nipples far ($\approx\!260$~mm) from the MOT. Scattered light is further reduced by lining the vacuum system with UHV-compatible black copper(II) oxide \cite{Barry2013}. We form and blacken copper sheets in various shapes to line the nipples and the region of the trapping chamber directly in the field-of-view of the camera. Also placed in the nipples are 26-mm-diameter apertures, machined with sharp edges and blackened. At atmospheric pressure, the scattered light is dominated by Rayleigh scattering from air; after pumping down to vacuum, the scattered light signal decreases by $\sim50\times$, to a total detected value of $\approx 1.4\times10^{5}$ photons/ms across the entire field-of-view. \section*{Trapped molecule number} The number of molecules observed in the MOT is given by \begin{equation} N_\text{obs} = \frac{N_\text{c}}{G \, \eta_\text{qe} \, \eta_\text{geo} \, \eta_\text{fil} \, \eta_\text{tra} \, N_\text{per}}, \end{equation} where $N_\text{c} \approx 7\times10^{5}$ is the (background-subtracted) number of counts registered on the camera over the entire field-of-view for a single pulse of molecules, and $N_\text{per}$ is the number of photons scattered per molecule during the camera exposure. For the default imaging start time and exposure duration, this last factor is given by \begin{equation} N_\text{per} = R_\text{sc} \int_{t_\text{im}}^{t_\text{im}+t_\text{exp}} e^{-t/\tau_\text{MOT}} dt \approx 1\times 10^5 \end{equation} where the integral accounts for the decay of the trapped population (with $\tau_\text{MOT} \approx50$~ms) during the $t_\text{exp}=60$~ms camera exposure. Since MOT loading is essentially complete when the slowing phase ends at $t=40$~ms, and the camera exposure begins $\Delta t=20$~ms later, the initial trapped population is given by \begin{equation} N_\text{MOT} = e^{\Delta t/\tau_\text{MOT}} N_\text{obs} \approx 4\times 10^2 \text{ molecules.} \end{equation} \section*{Forced MOT oscillation} The confining and damping forces within the MOT are measured by observing the trapped cloud's response to a rapid displacement of the trap center. Prior to the loading/slowing phase, a shim coil applies a $\approx\!4$~G bias field to offset the trap center by $\Delta\rho\approx \!5$~mm downstream along the axis of the molecular beam.
When this bias field is switched off at $t=t_\text{off}$, the center-of-mass of the trapped molecules exhibits damped harmonic motion described by \begin{equation} m_\text{SrF}\frac{d^{2}\rho}{dt^{2}}+\alpha \frac{d\rho}{dt}+\kappa_{\rho}\rho = 0. \end{equation} Assuming that the cloud is initially at rest, $(d\rho/dt)|_{t=t_\text{off}}=0$, the center-of-mass position vs. time is given (to a good approximation for weak damping) by \begin{equation} \rho(t) =\rho_{0}e^{-\alpha t/(2m_\text{SrF})}\cos(\omega_\text{obs}t) \end{equation} where $\rho_{0}=\Delta \rho$ is the initial displacement and $\omega_\text{obs}\!=\!\sqrt{\omega_{\rho}^{2} - (\alpha/(2m_\text{SrF}))^{2}}$ is the observed angular oscillation frequency. \section*{Extracting spatial information using LIF detection} The weak confinement of the MOT is crucial in order for our LIF-based detection method to extract certain spatial information from the cloud. For the forced MOT oscillation measurement, the camera exposure duration must be short compared to the oscillation period to precisely determine the position of the cloud. We observe $2\pi/\omega_{\rho} = 58(2)$~ms and set $t_\text{exp}=5$~ms; this satisfies the short exposure condition while also allowing the camera to collect enough LIF to accurately measure the spatial distribution. Similarly, the ballistic expansion measurement uses a camera exposure duration $t_\text{exp}=5$~ms. This duration is short compared to $2\pi/\omega_{z} = 41(1)$~ms, which avoids recapture and compression of the cloud by the trap light during illumination. The maximum free expansion time used, $t_\text{fr}=7$~ms, is capped by the imaging field-of-view rather than the LIF signal-to-noise ratio (in contrast to the case for the MOT oscillation measurement). \section*{Release and recapture} An additional measurement of the MOT temperature is performed using a release-and-recapture method. In order to avoid LIF from the untrapped molecular beam, the MOT is released at a fixed release time $t=t_\text{rel}=90$~ms and the free expansion time $t_\text{fr}$ is varied from $0$ to $50$~ms. After each free expansion the MOT is recaptured at $t=t_\text{rel}+t_\text{fr}$, and imaging begins at $t=t_\text{rel}+t_\text{fr}+3$~ms using $t_\text{exp}=30$~ms. In contrast to the free-expansion measurement, this method uses a longer exposure time that gives enhanced sensitivity to the recaptured number of molecules but erases any spatial information about the cloud prior to recapture. A cloud temperature is determined by comparing the measured recaptured fraction, as a function of $t_\text{fr}$, to that of a Monte Carlo simulation. The model assumes isotropic expansion and a spherical trap volume with radius $r_\text{cap}$; molecules inside this radius are assumed to be recaptured with $100$\% efficiency and those outside to be lost. The uncertainty in $r_\text{cap}$ is a well-known limitation of the release-and-recapture method \cite{Lett1988}; we set $r_\text{cap} = d_{\lambda}/2$ to obtain an upper limit on the isotropic temperature. In the Monte Carlo simulation, initial velocities are drawn from a Boltzmann distribution and the effects of gravity are included. Assuming that the MOT is radially symmetric, the initial spatial distribution is inferred from LIF images of the MOT. This procedure gives $T_\text{iso}< 2.7(6)$~mK.
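A minimal sketch of such a release-and-recapture Monte Carlo is given below (Python/NumPy). The recapture radius, the initial cloud size, and the trial temperatures used here are illustrative assumptions only; in the actual analysis $r_\text{cap}=d_{\lambda}/2$ and the initial distribution is taken from the LIF images.

\begin{verbatim}
import numpy as np

kB = 1.380649e-23                 # J/K
m_SrF = 107 * 1.66054e-27         # kg, mass of 88Sr19F
g = 9.81                          # m/s^2

def recaptured_fraction(T, t_fr, r_cap=7e-3, sigma0=1e-3, n=200000, seed=0):
    """Fraction of molecules still inside a sphere of radius r_cap
    after ballistic expansion for a time t_fr at temperature T."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(0.0, sigma0, (n, 3))                # assumed Gaussian cloud
    vel = rng.normal(0.0, np.sqrt(kB*T/m_SrF), (n, 3))   # Boltzmann velocities
    pos = pos + vel*t_fr
    pos[:, 2] -= 0.5*g*t_fr**2                           # gravity during free flight
    return np.mean(np.linalg.norm(pos, axis=1) < r_cap)

for T_mK in (1.0, 2.7, 5.0):                             # trial temperatures
    fr = [recaptured_fraction(T_mK*1e-3, t) for t in (0.005, 0.02, 0.05)]
    print(f"T = {T_mK} mK -> recaptured fraction at 5/20/50 ms:", np.round(fr, 2))
\end{verbatim}

Comparing a family of such simulated curves with the measured recaptured fraction versus $t_\text{fr}$ yields the quoted temperature bound.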
\section*{Diffusion lifetime} We estimate the lifetime that would be measured for an SrF cloud in the presence of only optical molasses to cross-check the measured values of $\alpha$ and $T_\text{MOT}$ (given the MOT beam diameter $d_{\lambda}$) and to further verify that a trapping force (rather than simply the cooling effect of optical molasses) is necessary to explain our observations. Here the motion of the molecules is treated as Brownian motion within a viscous fluid \cite{Chu1985,Lett1989}. The position diffusion constant $\mathscr{D}_{x}$ is given by \begin{equation} \mathscr{D}_{x}=\frac{k_{B}T_\text{MOT}}{\alpha}, \end{equation} and, using our measured values of $\alpha$ and $T_\text{MOT}$, we calculate $\mathscr{D}_{x}=1.4(3)\times10^{-3}$~m$^{2}/$s. The molasses lifetime $\tau_\text{mol}$ is then given by \begin{equation} \tau_\text{mol}= \frac{d_{\lambda}^{2}}{4\pi^{2}\mathscr{D}_{x}}=10(2)\text{ ms}. \end{equation} The calculated lifetime $\tau_\text{mol}$ is in agreement with the fits to the data in the presence of optical molasses alone. This lifetime is short compared to typical atomic molasses lifetimes (where $\tau_\text{mol}\gtrsim100$~ms) due both to the small damping coefficient $\alpha$ and the relatively high MOT temperature $T_\text{MOT}$. Furthermore, we find the measured MOT lifetime $\tau_\text{MOT}\approx6\times\tau_\text{mol}$, consistent with our observations that molecules are confined in the MOT. \section*{MOT lifetime} Although the measured MOT lifetime $\tau_\text{MOT}=56(4)$~ms is short compared to those of typical atomic MOTs, the lifetime is $\sim5\times$ longer than the observed lifetimes of the molasses ($dB_{z}/dz=0$) and damping/anti-restoring ($\mathcal{L}_{00}$ and $\mathcal{L}_{00}^{\dagger}$ polarizations reversed) configurations, which have lifetimes of 11(1)~ms and 10(3)~ms respectively (see Fig.~3c in the main text). It should be noted that these latter two short lifetimes are upper limits due to the temporal and spatial extent of the slowed molecular beam. As described in the text, it is concluded that the lifetime is limited primarily by ``boil-off" of molecules with energy greater than the MOT trap depth. Before reaching this conclusion, several other possible effects that could limit the MOT lifetime were explored. For example, off-resonant excitation into the ${\rm{A}^{2}\Pi_{1/2}(v'=0,J=3/2)}$ state could lead to decay into the dark X$^2\Sigma_{1/2}(v=0,N=3)$ state. To investigate this loss mechanism, a repump laser was added to the MOT addressing the X$^2\Sigma_{1/2}(v=0,N=3)$ state. The presence of this laser did not change the measured MOT lifetime, indicating that losses due to off-resonant excitations are negligible. Collisions with He or other background gases have been verified not to be the dominant loss mechanism responsible for the small measured value of $\tau_\text{MOT}$. MOT attenuation may be caused by collisions with residual ballistic He from the buffer gas beam, with background (diffuse) He, or with other gases in the trapping region. We test for attenuation by ballistic He by increasing the flow rate $\mathcal{F}$ of He into the buffer gas beam source from the default value of $\mathcal{F}=5$~sccm to $\mathcal{F}=20$~sccm. This increases the flux of ballistic He incident on the MOT by 4$\times$. In this configuration we measure $\tau_\text{MOT}$ to decrease by only $\sim\!20\%$, suggesting that collisions with ballistic helium are not the primary loss mechanism.
With $\mathcal{F}$ still at 20~sccm, we reduce the rotation speed of the turbo-molecular pumps in the trapping region by a factor of $5$, resulting in an increase in all background gas pressures by $\sim5\times$. In this configuration we measure $\tau_\text{MOT}$ to decrease by only $\sim\!25~\%$, indicating that collisions with background gases are not the primary loss mechanism. \section*{Modelling trap loss} The measured MOT lifetime $\tau_\text{MOT}=56(4)$~ms corresponds to a total loss rate $1/\tau_\text{MOT}= 18(1)$~s$^{-1}$, $\log(1/\tau_\text{MOT}) = 1.25(3)$. The main loss mechanism is attributed to a shallow trapping potential relative to the MOT temperature, leading to molecules escaping the trap by simply being in the high energy tail of the Boltzmann distribution. Such escape rates depend exponentially on the ratio $U_\text{MOT}/(k_{B}T_\text{MOT})$ \cite{Hanggi1990}. The uncertainties in $U_\text{MOT}$, and to a lesser extent $T_\text{MOT}$, result in predicted loss rates having inherently large associated errors. We have no direct method to measure $U_\text{MOT}$. Instead, $U_\text{MOT}$ is estimated under the assumption that the spring constant $\kappa_\rho$ has a constant value all the way to the edges of the MOT beams. This method is expected to overestimate $U_\text{MOT}$, since the MOT beam intensity is smaller by a factor of $\approx200$ at $\rho = d_{\lambda}/2$ (trap edge) versus at $\rho = 0$ (trap center). We crudely model the trap loss using a simple Van't Hoff-Arrhenius rate in the low damping limit ($\alpha/(2 m_\text{SrF})\ll\omega_{\rho}$) \cite{Hanggi1990}, \begin{equation} \frac{1}{\tau_\text{MOT}} = \frac{2\omega_{\rho}}{\pi}e^{-U_\text{MOT}/(k_{B}T_\text{MOT})}. \end{equation} Here we multiply the standard 1D prefactor $\omega_{\rho}/(2\pi)$ by a factor of $4$ to account for the two trap edges visited per oscillation and the two radial dimensions; we neglect loss along the deeper axial dimension. Note that the low damping condition is only marginally satisfied, so the prefactor must also be considered as only approximate. Using $U_\text{MOT}/(k_{B}T_\text{MOT})\le4$, this yields an estimated loss rate of $\log(1/\tau_\text{MOT})\ge 0$, an order of magnitude smaller than the measured loss rate. If $U_\text{MOT}$ is instead assumed to have a smaller but realistic value, e.g. consistent with a linear restoring force only out to the measured $1/e^{2}$ radius (7~mm) of the MOT beams, then $U_\text{MOT}/(k_{B}T_\text{MOT})\approx1.6$, and $\log(1/\tau_\text{MOT}) \approx 1.1$ in fair agreement with the measured loss rate. Hence we believe that this model can plausibly account for the loss rate observed in our experiment, although the evidence is not definitive.
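The arithmetic behind these two loss-rate estimates is easily reproduced; a short sketch (Python) using the measured 58~ms radial oscillation period to fix $\omega_{\rho}$:

\begin{verbatim}
import numpy as np

omega_rho = 2*np.pi/0.058          # rad/s, from the 58 ms oscillation period

def loss_rate(U_over_kT):
    # Van't Hoff-Arrhenius escape rate in the low-damping limit, with the
    # 1D prefactor omega_rho/(2*pi) multiplied by 4 as described above
    return (2*omega_rho/np.pi) * np.exp(-U_over_kT)

for ratio in (4.0, 1.6):
    rate = loss_rate(ratio)
    print(f"U/(kT) = {ratio}: rate = {rate:5.1f}/s, log10 = {np.log10(rate):.2f}")
# U/(kT) = 4.0 gives log10(rate) ~ 0.1 and U/(kT) = 1.6 gives ~ 1.1,
# to be compared with the measured log(1/tau_MOT) = 1.25(3).
\end{verbatim}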
{ "attr-fineweb-edu": 1.503906, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} In the past few years, Bender and others \cite{bed,oth} have looked at several complex potentials with PT-symmetry and have shown that the energy eigenvalues are real when PT-symmetry is unbroken, whereas they occur in complex conjugate pairs when PT-symmetry is spontaneously broken. However, there have been relatively few papers discussing periodic potentials with PT-symmetry \cite{bdm,ks5}. Recently, we \cite{ks5} have constructed several new classes of analytically solvable, complex, PT-invariant, periodic potentials with the special feature that they possess just a finite number of band gaps. The purpose of this paper is to substantially increase this list of solvable periodic potentials. A few years ago, we obtained the band edges of the associated Lam\'e (AL) potentials \cite{ks1} \begin{eqnarray}\label{1} V(x)&&=a(a+1)m{\rm sn}^2(x,m)+b(b+1)m{{\rm sn}^2 (x+K(m),m)} \nonumber \\ &&=a(a+1)m{\rm sn}^2(x,m)+b(b+1)m\frac{{\rm cn}^2 (x,m)}{{\rm dn}^2(x,m)}\,. \end{eqnarray} Here, ${\rm sn} \,(x,m)$, ${\rm cn} \,(x,m)$, ${\rm dn} \,(x,m)$ are Jacobi elliptic functions with elliptic modulus parameter $m$ $( 0\leq m \leq 1)$. They are doubly periodic functions with periods $[4K(m), i2K'(m)]$, $[4K(m), 2K(m)+i2K'(m)]$, $[2K(m), i4K'(m)]$ respectively \cite{abr}, where $K(m) \equiv \int_0^{\pi/2} d\theta [1-m\sin^2 \theta]^{-1/2}$ denotes the complete elliptic integral of the first kind, and $K'(m)\equiv K(1-m)$. For simplicity, from now on, we will not explicitly display the modulus parameter $m$ as an argument of Jacobi elliptic functions. It was shown that the AL potentials with integral values of $a,b$ are periodic potentials with a finite number of band gaps \cite{ks2}. We also constructed and studied the PT-invariant potentials $V^{PT}(x) \equiv -V(ix + \beta)$ obtained from the AL potentials via the anti-isospectral transformation of variables $x \rightarrow ix+\beta$ \cite{ks5}. In this paper, we make a substantial generalization of our previous work. We consider the four parameter family of generalized associated Lam\'e (GAL) potentials \begin{eqnarray}\label{1a} V(x)&&=a(a+1)m{\rm sn}^2(x,m)+b(b+1)m{{\rm sn}^2 (x+K(m),m)} \nonumber \\ &&~~~~~~~~+f(f+1)m {{\rm sn}^2 (x+K(m)+iK'(m),m)}+g(g+1)m{{\rm sn}^2 (x+iK'(m),m)} \nonumber \\ &&= a(a+1)m{\rm sn}^2(x,m)+b(b+1)m\frac{{\rm cn}^2 (x,m)}{{\rm dn}^2 (x,m)} +f(f+1) \frac{{\rm dn}^2 (x,m)}{{\rm cn}^2(x,m)} +g(g+1)\frac{1}{{\rm sn}^2 (x,m)}~. \nonumber \\ \end{eqnarray} In contrast to the AL potentials of eq. (\ref{1}) where there are two parameters $a,b$ and the two terms correspond to real translations of the independent variable $x$ by $0$ and $K(m)$, the GAL potentials of eq. (\ref{1a}) have four parameters $a,b,f,g$ and the four terms correspond to complex translations of the independent variable $x$ by $0, K(m), K(m)+iK'(m), iK'(m)$. Although the GAL potentials are real, they do have singularities on the real axis coming from the zeros of the Jacobi elliptic functions ${\rm sn} \,(x)$ and ${\rm cn} \,(x)$ in the last two terms. 
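The equivalence of the two forms of the GAL potential rests on the standard half- and quarter-period translation identities of the Jacobi functions, and these are easily checked numerically. Below is a minimal sketch using Python's mpmath library (which evaluates the Jacobi functions at complex argument); the modulus and the sample point are arbitrary choices for illustration.

\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25
m = mp.mpf('0.7')                            # arbitrary modulus, 0 < m < 1
K, Kp = mp.ellipk(m), mp.ellipk(1 - m)
sn = lambda u: mp.ellipfun('sn', u, m=m)
cn = lambda u: mp.ellipfun('cn', u, m=m)
dn = lambda u: mp.ellipfun('dn', u, m=m)

x = mp.mpf('0.43')                           # arbitrary real point
# the three translations appearing in eq. (1a):
print(mp.chop(m*sn(x + K)**2         - m*cn(x)**2/dn(x)**2))  # -> 0
print(mp.chop(m*sn(x + K + 1j*Kp)**2 - dn(x)**2/cn(x)**2))    # -> 0
print(mp.chop(m*sn(x + 1j*Kp)**2     - 1/sn(x)**2))           # -> 0
\end{verbatim}

The last two identities make explicit how the $f$ and $g$ terms acquire poles at the real zeros of ${\rm cn}\,(x)$ and ${\rm sn}\,(x)$.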
Consequently, we will focus our attention on the PT-invariant versions of the GAL potentials, which are given by \begin{eqnarray}\label{2} V^{PT}(x)&&=-a(a+1)m{\rm sn}^2(y,m)-b(b+1)m{{\rm sn}^2 (y+K(m),m)} \nonumber \\ &&~~~~~~~~-f(f+1)m {{\rm sn}^2 (y+K(m)+iK'(m),m)}-g(g+1)m{{\rm sn}^2 (y+iK'(m),m)} \nonumber \\ &&= -a(a+1)m{\rm sn}^2(y,m)-b(b+1)m\frac{{\rm cn}^2 (y,m)}{{\rm dn}^2 (y,m)} -f(f+1) \frac{{\rm dn}^2 (y,m)}{{\rm cn}^2(y,m)} -g(g+1)\frac{1}{{\rm sn}^2 (y,m)} \nonumber \\ &&\equiv [a(a+1),b(b+1),f(f+1),g(g+1)]\,, \end{eqnarray} where \begin{equation}\label{2a} y=ix+\beta\,, \end{equation} with $\beta$ being an arbitrary constant. We shall frequently use the notation $[a(a+1),b(b+1),f(f+1),g(g+1)]$ to denote $V^{PT}(x)$. In this notation, PT-invariant ordinary Lam\'e potentials are denoted by $[a(a+1),0,0,0]$, and PT-invariant AL potentials are denoted by $[a(a+1),b(b+1),0,0]$. Here, the arbitrary constant $\beta$ is chosen so as to avoid the singularities of the Jacobi elliptic functions on the real axis. We show that several of these periodic potentials for specific integer values of $a,b,f,g$ have a finite number of band gaps. Looking at the symmetry of these potentials, we are in fact tempted to conjecture that many (and perhaps all) of these potentials with integral values of the parameters $a,b,f,g$ also have a finite number of band gaps. It would be nice if this conjecture could be proved. In addition, we also discover a huge class of mid-band states when at least one of the parameters $a,b,f,g$ is half integral. As a special case, we find some new mid-band eigenstates of the associated Lam\'e potentials. Further, we show that the Schr\"odinger equation for the generalized AL potential is intimately connected with the celebrated Heun's differential equation \cite{ren}. In fact, using the exact solutions obtained in this paper, one can immediately obtain the corresponding solutions of Heun's equation. In another related paper \cite{ks7}, we use this connection and discover a wide class of new quasi-periodic solutions of Heun's equation. Finally, using the exact eigenstates of the GAL potentials (\ref{2}) and the machinery of supersymmetric quantum mechanics \cite{cks}, we construct several more potentials with a finite number of band gaps. One important point deserves emphasis here, since it is what allows us to construct many more supersymmetric partner potentials corresponding to a given potential. The key point to note is that normally, in supersymmetric quantum mechanics \cite{cks}, given a potential $V_{-}(x)$, the ground state wave function $\psi_0(x)$ is used to construct the superpotential $W(x) = -\psi_0'(x)/\psi_0(x)$, which then yields the supersymmetric (SUSY) partner potential $V_{+}(x)=W^2+W'$. If one uses any excited state wave function $\psi(x)$ of $V_{-}(x)$ to construct a superpotential $W(x)$, then the original potential $V_{-}(x)$ is recovered correctly (by construction), but the corresponding partner potential $V_{+}(x)$ turns out to be singular on the real $x$-axis due to the zeros of the excited state wave function $\psi(x)$. However, as has been noticed recently \cite{rl}, if we consider PT-symmetric complex potentials, then the singularity is not on the real axis. Besides, as we have stressed previously \cite{ks5,ks6}, in the case of doubly periodic potentials composed of Jacobi elliptic functions, both $V(x)$ and $V^{PT}(x)$ can be simultaneously periodic even though their periods are different.
In this way, by starting from the analytically solvable Lam\'e and associated Lam\'e potentials and using the excited state band edges of the corresponding PT-symmetric potentials, we discover a wide range of new, analytically solvable, complex PT-invariant periodic potentials with a finite number of band gaps. As an illustration, we discuss a few of these potentials in detail. The plan of the paper is the following. In Sec. 2 we discuss the PT-invariant GAL potentials (\ref{2}) in some detail and obtain band edges as well as mid-band states of several of these potentials. As a byproduct we also obtain some new solutions of the AL potentials (which we had missed in earlier work \cite{ks1,ks2}). Further, we show that the class of potentials $[a(a+1),0,0,g(g+1)]$ has a finite number of band gaps when $a,g$ are integers. In Sec. 3 we start from the energy eigenstates obtained in Sec. 2 and, using both ground state and excited state wave functions, obtain new periodic PT-invariant potentials with a finite number of band gaps. In Sec. 4 we briefly discuss the connection between the solutions of the potentials (\ref{2}) and Heun's differential equation. \section{Solutions for the Generalized Associated Lam\'e (GAL) Potentials} A few years ago, we obtained analytic solutions of the associated Lam\'e potentials (\ref{1}) \cite{ks1,ks2} and showed that when $a,b$ are integers, then the resulting potentials all had a finite number of band gaps. The purpose of this section is to show that the complex PT-invariant GAL potentials as given by eq. (\ref{2}) are also quasi-exactly solvable. In particular, we show that the band edges or mid-band states of these problems can be obtained depending on whether $a+b+f+g$ (or $a-b-f-g$) is an integer or an arbitrary non-integer number. It should be noted that we are considering the PT-invariant potentials (\ref{2}), since the corresponding real potentials (\ref{1a}) are singular on the real axis. It may be worthwhile explaining the underlying basic idea here, even though it has been well established by us before \cite{ks5}. Note that if $\psi(x)$ is a solution of the Schr\"odinger equation for the real potential $V(x)$ with energy $E$, then $\psi(ix+\beta)$ is a solution of the Schr\"odinger equation for the complex potential $-V(ix+\beta)$ with energy $-E$, where $\beta$ is an arbitrary nonzero constant. The new potential $-V(ix+\beta)$, generated by the anti-isospectral transformation $x \rightarrow ix+\beta$ \cite{kuw}, is clearly PT-symmetric and will be denoted by $V^{PT}(x)$. Further, if $\psi(x)$ and $\psi(ix+\beta)$ satisfy appropriate boundary conditions, they are eigenfunctions of $V(x)$ and $V^{PT}(x)$ respectively. The ordering of energy levels for $V^{PT}(x)$ is the opposite of the ordering of energy levels for $V(x)$. In this paper, our main focus is on the Schr\"odinger equation ($\hbar=2m=1$) \begin{equation}\label{3.2} -\frac{d^2}{dx^2}\psi(x)+V^{PT}(x)\psi(x)=E\psi(x)\,, \end{equation} where $V^{PT}(x)$ is the potential given by eq. (\ref{2}). Eq. (\ref{3.2}) is called the generalized associated Lam\'e equation, and we are seeking its eigenstates and mid-band states. \subsection{\bf Symmetries} At this stage, it is worth pointing out the symmetries of the PT-invariant GAL potential (\ref{2}) and hence the corresponding Schr\"odinger equation (\ref{3.2}). \begin{enumerate} \item The potential (\ref{2}) and hence the Schr\"odinger eq.
(\ref{3.2}) remains unchanged when any one (or more) of the four parameters $a,b,f,g$ change to $-a-1,-b-1,-f-1,-g-1$ respectively. \item Under the translation $y \rightarrow y+K(m)$, the GAL potential $[a(a+1),b(b+1),f(f+1),g(g+1)]$ goes to the potential $[b(b+1),a(a+1),g(g+1),f(f+1)]$. Hence, both GAL potentials must have the same energy eigenvalues, and the corresponding energy eigenfunctions are simply related by the translation $y \rightarrow y+K(m)$, i.e. \begin{equation}\label{3a} E^{PT}(b,a,g,f;m) = E^{PT}(a,b,f,g;m)\,,~~\psi(y,b,a,g,f;m) \propto \psi(y+K(m),a,b,f,g;m)\,. \end{equation} \item Similarly, by considering the translations $y \rightarrow y+K(m)+iK'(m)$, and $y \rightarrow y+iK'(m)$, it is easy to show that \begin{equation}\label{1b} E^{PT}(f,g,a,b;m) = E^{PT}(a,b,f,g;m)\,,~~\psi(y,f,g,a,b;m) \propto \psi(y+K(m)+iK'(m),a,b,f,g;m)\,. \end{equation} \begin{equation}\label{1c} E^{PT}(g,f,b,a;m) = E^{PT}(a,b,f,g;m)\,,~~\psi(y,g,f,b,a;m) \propto \psi(y+iK'(m),a,b,f,g;m)\,. \end{equation} \end{enumerate} Thus, once we obtain the eigenvalues and eigenfunctions of a given GAL potential $[a(a+1),b(b+1),f(f+1),g(g+1)]$, then we immediately know the eigenvalues and eigenfunctions of three other potentials: $[b(b+1),a(a+1),g(g+1),f(f+1)]$, $[f(f+1),g(g+1),a(a+1),b(b+1)]$ and $[g(g+1),f(f+1),b(b+1),a(a+1)]$. Therefore, it suffices to present results for only one of the four potentials. \subsection{\bf Duality Relations} We shall now derive some remarkable relations relating the quasi-exactly solvable eigenvalues and eigenfunctions (corresponding either to the band edges or mid-band states) of two GAL potentials at two different values $m$ and $1-m$ of the modulus parameter. To that purpose we start from the Schr\"odinger eq. (\ref{3.2}) for the PT-invariant GAL potential (\ref{2}). On using the relations \cite{abr} \begin{eqnarray}\label{3.2a} &&\sqrt{m}\,{\rm sn}(y,m)=-{\rm dn}\,[iy+K'(m)+iK(m),1-m]\,, \nonumber \\ &&{\rm dn}(y,m)=\sqrt{1-m}\, {\rm sn}\,[iy+K'(m)+iK(m),1-m]\,, \nonumber \\ &&\sqrt{m}\,{\rm cn}(y,m)=i\sqrt{1-m}\, {\rm cn}\,[iy+K'(m)+iK(m),1-m]\,, \end{eqnarray} and defining a new variable $w=iy+K'(m)+iK(m)$, the Schr\"odinger eq. (\ref{3.2}) takes the form \begin{eqnarray}\label{3.2b} &&\psi''(w)-\Big[a(a+1)(1-m){\rm sn}^2(w,1-m)+g(g+1)(1-m)\frac{{\rm cn}^2(w,1-m)}{{\rm dn}^2(w,1-m)} +f(f+1)\frac{{\rm dn}^2(w,1-m)}{{\rm cn}^2(w,1-m)} \nonumber \\ &&+b(b+1)\frac{1}{{\rm sn}^2(w,1-m)}\Big] \psi(w)=-[a(a+1)+b(b+1)+f(f+1)+g(g+1)+E]\psi(w)\,. \end{eqnarray} On comparing eqs. (\ref{3.2}) and (\ref{3.2b}) we then have the remarkable relations \begin{eqnarray}\label{3.2c} &&E^{PT}(a,b,f,g,m)=-[a(a+1)+b(b+1)+f(f+1)+g(g+1)] -E^{PT}(a,g,f,b,1-m)\,, \nonumber \\ &&\psi(y,m) \propto \psi(iy+K'(m)+iK(m),1-m)\,, \end{eqnarray} which are valid for the QES states corresponding to either the band edges or mid-band states. Note that here, $a,b,f,g$ can be arbitrary (real) numbers and are not restricted to integer values. This is a very powerful relation which has several interesting consequences. One immediate important consequence of eq. (\ref{3.2c}) is that for arbitrary integer values of $a,g$, the potential $[a(a+1),0,0,g(g+1)]$ has only a finite number of band gaps.
This happens because, for $f=g=0$, one has \begin{equation}\label{3.2d} E^{PT}(a,b,0,0,m)=-[a(a+1)+b(b+1)]-E^{PT}(a,0,0,g=b,1-m)\,, \end{equation} so that both potentials must have the same number of band edges and band gaps, and we have already proved \cite{ks2} that the AL potentials have a finite number of band gaps when $a,b$ are integers. \subsection{\bf QES Solutions} Let us now seek solutions of the Schr\"odinger eq. (\ref{3.2}) for the PT-invariant GAL potential (\ref{2}). On making the ansatz \begin{equation}\label{3.3} \psi(x)={\rm dn}^{-b}(y) {\rm sn}^{-g}(y) {\rm cn}^{-f}(y) \phi (y)\,,~~y=ix+\beta\,, \end{equation} it is easily shown that $\phi$ satisfies the equation \begin{equation}\label{3.4} \phi''(y)+2[mb\frac{{\rm sn}(y){\rm cn}(y)}{{\rm dn}(y)}-g\frac{{\rm cn}(y){\rm dn}(y)} {{\rm sn}(y)}+f\frac{{\rm dn}(y){\rm sn}(y)}{{\rm cn}(y)}]\phi'(y) +[Qm{\rm sn}^2(y)-R]\phi(y)=0\,, \end{equation} where \begin{equation}\label{3.5} Q=(b+g+f)(b+g+f-1)-a(a+1)\,,~~R=E+(f+g)^2+m(g+b)^2\,. \end{equation} It is well known \cite{bre} that this is a quasi-exactly solvable (QES) problem. We shall now systematically consider solutions of eq. (\ref{3.4}) for several special cases and then finally consider the most general case. \subsection{\bf $b=f=g=0$} The simplest possibility is when three out of the four parameters $a,b,f,g$ are zero. For example, when $b=f=g=0$, then the problem reduces to the PT-invariant version of the well studied Lam\'e potential problem. We might add here that, instead of $a$, if any one of the other parameters $b,f,g$ is nonzero, one still has a potential which is strictly isospectral to the PT-invariant Lam\'e potential. It may be noted that while the Lam\'e potential is a periodic potential with (real) period $2K(m)$, the PT-invariant Lam\'e potential has real period $2K'(m)$. Further, the band edge eigenvalues, eigenfunctions and the discriminant $\Delta$ of $V^{PT}(x)$ are related to those of the Lam\'e potential by \cite{ks5} \begin{eqnarray}\label{6} &&E_j^{PT}(m) = -E_{2a-j}(m)\,,~~~\psi_j^{PT}(x,m)~\propto~\psi_{2a-j}(ix+\beta,m)\,, ~~~j=0,1,2,...,2a\,, \nonumber \\ &&\Delta^{PT} (E,m)=\Delta[E+a(a+1),1-m]\,. \end{eqnarray} {}From eq. (\ref{3.2c}), it follows that the PT-invariant Lam\'e band-edge eigenvalues and eigenfunctions, for integral $a$, satisfy the remarkable relations ($j=0,1,2,...,2a$) \begin{equation}\label{55a} E^{PT}_{j}(m)=-a(a+1)-E^{PT}_{2a-j}(1-m)\,,~~\psi_{j}(y,m) \propto \psi_{2a-j}(iy+K'(m)+iK(m),1-m)\,. \end{equation} We would like to add here that even the mid-band states satisfy (for half-integral $a$) relations analogous to (\ref{55a}): \begin{equation} E_{j}(m)=a(a+1)-E_{a-1/2-j}(1-m)\,,~~\psi_{j}(y,m) \propto \psi_{a-1/2-j}(iy+K'(m)+iK(m),1-m)\,, \end{equation} where $j=0,1,2,...,a-1/2$. Note the remarkable fact that for any integer $a$, all bands and band gaps exchange their role as one goes from the Lam\'e potential to its PT-invariant version $V^{PT}(x)$ \cite{ks5}. The next simple possibility is when two of the four parameters $a,b,f,g$ are zero. Here there are three distinct possibilities which we discuss one by one. \subsection{\bf $f=g=0$} In this case the problem reduces to the PT-invariant AL potential which we have already discussed at great length \cite{ks1,ks2}. Note that if either $a$ or $b$ is zero (or $-1$), then this potential reduces to the PT-invariant version of the Lam\'e potential.
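These statements are easy to verify numerically. As a quick illustration, the sketch below (Python with the mpmath library, which evaluates the Jacobi functions at complex argument) checks that ${\rm dn}(ix+\beta)$ is an eigenfunction of the PT-invariant $a=1$ Lam\'e potential $[2,0,0,0]$ with energy $-m$, the negative of the familiar Lam\'e value; the modulus, the constant $\beta$ and the test point are arbitrary (any $\beta$ avoiding the singularities will do).

\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25
m = mp.mpf('0.6')
beta = mp.ellipk(m)/2                   # keeps y = i*x + beta off all poles
sn = lambda u: mp.ellipfun('sn', u, m=m)
dn = lambda u: mp.ellipfun('dn', u, m=m)

psi = lambda x: dn(1j*x + beta)         # candidate band-edge state
V   = lambda x: -2*m*sn(1j*x + beta)**2 # PT-invariant a=1 Lame potential

x0 = mp.mpf('0.31')
E = (-mp.diff(psi, x0, 2) + V(x0)*psi(x0))/psi(x0)
print(mp.chop(E + m, tol=1e-12))        # -> 0, i.e. E = -m
\end{verbatim}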
As previously shown by us \cite{ks2}, for arbitrary integral values of $a$ and $b$, AL potentials are exactly solvable problems with a finite number of band gaps for which one can write down the form of all the band edge eigenfunctions, as we do below. We note here that when $a > b$ are both integers, then there are precisely $a$ bound bands (some of which are unusual in that both the band edges are of the same period), the same number ($a$) of band gaps, and all the $2a+1$ band edges are analytically known, beyond which there is a continuum band extending to $E= \infty$. Note that if $b > a$, then also there are $b$ bound bands and $b$ band gaps and the corresponding eigenfunctions are simply obtained from the $a>b$ case by the transformation $x \rightarrow x+K(m)$, while the $a=b$ case essentially corresponds to the Lam\'e potential $[a(a+1),0,0,0]$. Without any loss of generality, we shall only consider AL potentials with $a > b$. The form of the $2a+1$ band edge eigenfunctions of the AL potential depends on whether $a-b$ is an odd or an even integer. For example, when $b=a-2p-1$ ($p=0,1,2,...$), then there are: $p$ eigenstates of the form ${\rm sn}(x){\rm cn}(x){\rm dn}(x) F_{p-1} [{\rm sn}^2(x)]$; $p+1$ eigenstates of the form ${\rm dn}^{a-2p}(x) F_{p} [{\rm sn}^2(x)]$; $a-p$ eigenstates of the form ${\rm cn}(x){\rm dn}^{2p+1-a}(x) F_{a-p-1} [{\rm sn}^2(x)]$; and $a-p$ eigenstates of the form ${\rm sn}(x){\rm dn}^{2p+1-a}(x) F_{a-p-1} [{\rm sn}^2(x)]$. \noindent On the other hand, when $b=a-2p$, there are: $p$ eigenstates of the form ${\rm cn}(x){\rm dn}^{a-2p+1}(x) F_{p-1} [{\rm sn}^2(x)]$; $p$ eigenstates of the form ${\rm sn}(x){\rm dn}^{a-2p+1}(x) F_{p-1} [{\rm sn}^2(x)]$; $a-p$ eigenstates of the form ${\rm sn}(x){\rm cn}(x){\rm dn}^{2p-a}(x)F_{a-p-1} [{\rm sn}^2(x)]$; and $a-p+1$ eigenstates of the form ${\rm dn}^{2p-a}(x) F_{a-p} [{\rm sn}^2(x)]$. \noindent Here $F_n [{\rm sn}^2(x)]$ denotes a polynomial in ${\rm sn}^2(x)$ of order $n$. We would like to re-state here that all the eigenstates of the PT-invariant version of the AL potentials are immediately obtained from the known eigenfunctions of the associated Lam\'e problem, and the ordering of energy levels is the opposite of that of the corresponding AL problem. Hence, this is also an exactly solvable problem with a finite number ($a$) of band gaps and $2a+1$ known band edges when both $a,b$ are integers. \subsection{\bf $b=f=0$} Following our discussion for the AL case, without any loss of generality we assume here that $a > g$. In this case, one obtains $n+1$ QES solutions when $a+g=n$ (or $g-a=n+1$) with $n=0,1,2,...$. The QES solutions for $n=0,1,2,3,4$ are given in Table 1. In particular, for any choice of $a(a+1)$, Table 1 lists the eigenstates for various values of $g(g+1)$. The general form of these eigenfunctions is obtained from the corresponding AL eigenfunctions as given in Table 3 of \cite{ks1} by simply interchanging ${\rm dn}(y)$ and ${\rm sn}(y)$. A few remarks are in order. \begin{enumerate} \item Since we are considering the case ($b=f=0$), the duality relation (\ref{3.2c}) takes the form \begin{eqnarray}\label{3.2e} &&E^{PT}(a,0,0,g;m)=-[a(a+1)+g(g+1)]-E^{PT}(a,b=g,0,0;1-m)\,, \nonumber \\ && \psi(y,a,0,0,g;m) \propto \psi(iy+K'(m)+iK(m),a,b=g,0,0;1-m)\,. \end{eqnarray} Using Table 3 of ref. \cite{ks1} and this duality relation, it is straightforward to obtain all the QES eigenstates, thereby providing an independent check on the results given in Table 1.
Further, it follows that for arbitrary integer values of $a$ and $g$, $[a(a+1),0,0,g(g+1)]$ is an exactly solvable potential problem with a finite number ($a$) of band gaps. From the duality relation (\ref{3.2e}), it follows that for integer values of $a,g$ \begin{equation}\label{3.2f} E_j^{PT}(a,0,0,g;m)=-[a(a+1)+g(g+1)]+E_j (a,b=g,0,0;1-m)\,, \end{equation} and hence the corresponding discriminants $\Delta$ are related by \begin{equation}\label{3.2g} \Delta^{PT}(E,m;a,0,0,g)=\Delta[E+a(a+1)+g(g+1),1-m;a,b=g,0,0]\,. \end{equation} \item Following the structure of the eigenfunctions of the AL potentials as given above, it is now straightforward to write down the general form of the eigenfunctions for arbitrary value of $n$. However, to obtain the corresponding eigenvalues, one needs to solve cubic and higher order equations. \item Under the transformation $y \rightarrow y+iK'(m)$ followed by the interchange of $a$ and $g$ (note $b=f=0$), the Schr\"odinger eq. (\ref{3.2}) for the GAL potential (\ref{2}) remains unchanged. Thus it follows that under the interchange of $a$ with $g$, the eigenvalue spectrum must remain unaltered. Clearly, this is only possible if either the energy eigenvalues remain unchanged under this transformation, or if two of the eigenvalues go into each other. It is easy to verify from Table 1 that the eigenvalues corresponding to the eigenfunctions of period $2iK'(m)$ remain unaltered under $a \rightarrow g$ while the other eigenvalues go into each other under this transformation. \item Similarly, from Table 3 of \cite{ks1}, it is easy to check that for the AL potentials (\ref{1}), the eigenvalues corresponding to the eigenfunctions of period $2K(m)$ remain unaltered under $a \rightarrow b$ while the other eigenvalues go into each other under this transformation. This happens because the AL potentials remain unaltered under the transformation $y \rightarrow y+K(m)$ followed by the interchange of $a$ with $b$. \end{enumerate} Summarizing, we have discovered new exactly solvable potential problems with a finite number of band gaps when $a,g$ are arbitrary integers. In fact, everything about these potentials can be derived from previously known results for AL potentials. \subsection{$b=g=0$} In this case, one obtains $n+1$ QES solutions when $a+f=n$ with $n=0,1,2,...$. The solutions for $n=0,1,2,3,4$ are given in Table 2. In particular, for any choice of $a(a+1)$, Table 2 lists the eigenstates for various values of $f(f+1)$. The general form of these eigenfunctions is simply obtained from the corresponding AL eigenfunctions as given in Table 2 of \cite{ks1} by interchanging ${\rm dn}(y)$ and ${\rm cn}(y)$. Some comments are in order at this stage. \begin{enumerate} \item The form of eigenfunctions for arbitrary value of $n$ is easily written down following the structure of the AL eigenfunctions given in the last section. \item From eq. (\ref{3.2c}) it follows that the potential (\ref{2}) with $b=g=0$ is a self-dual potential, satisfying \begin{equation} E^{PT}_{j_1}(a,f,m)=-[a(a+1)+f(f+1)]-E^{PT}_{j_2}(a,f,1-m)\,. \end{equation} Using Table 2, it is easily checked that this is indeed true for any values of $a,f$. In particular, whereas $\delta_{5},\delta_{8}$ are invariant under $m \rightarrow 1-m$, $\delta_6 \leftrightarrow \delta_7$ under the same transformation. \item Under the transformation $y \rightarrow y+K(m)+iK'(m)$ followed by the interchange of $a$ and $f$ (note $b=g=0$), the Schr\"odinger eq. (\ref{3.2}) for the GAL potential (\ref{2}) remains unchanged.
Thus it follows that under the interchange of $a$ with $f$, the eigenvalue spectrum must remain unaltered. Clearly, this is only possible if either the energy eigenvalues remain unchanged under this transformation, or if two of the eigenvalues go into each other. It is easy to verify from Table 2 that the eigenvalues corresponding to the eigenfunctions of period $2K(m)+2iK'(m)$ remain unaltered under $a \rightarrow f$ while the other eigenvalues go into each other under this transformation. In particular, while $\delta_5,\delta_8$ are invariant under $a \leftrightarrow f$, $\delta_6 \leftrightarrow \delta_7$ under the same transformation. \end{enumerate} \subsection{$f=0$} Let us consider the case when only one out of the four parameters $a,b,f,g$ is zero. As an illustration, we discuss the case $f=0$. In fact, as described below, once we know the eigenstates of this problem, the eigenstates of the other three problems corresponding to either $b$ or $g$ or $a$ equal to zero are immediately obtainable, since the four potentials are related by translations of the independent variable. For the case $f=0$, one obtains $\frac{n+2}{2}$ ($\frac{n+1}{2}$) QES solutions when $n$ is even (odd). Here $a+b+g=n$ with $n=0,1,2,...$. The QES solutions for $n=0,1,2,3$ are given in Table 3. In particular, for any choice of $a(a+1)$, Table 3 lists the eigenstates for various values of $(b+g)(b+g+1)$. Some remarks are appropriate. \begin{enumerate} \item By looking at the structure of the QES eigenfunctions in Table 3, it is easy to write down the nature of the eigenfunctions for the general case. \item From Table 3, it is easily checked that the duality relation \begin{equation}\label{3.2h} E^{PT}(a,b,g,m)=-[a(a+1)+b(b+1)+g(g+1)]-E^{PT}(a,g,b,1-m) \end{equation} is indeed satisfied. In particular, both $\delta_9,\delta_{10}$ are invariant under $b \leftrightarrow g$ followed by $m \rightarrow 1-m$. \item Under the transformation $y \rightarrow y+K(m)$ followed by the interchange of $a$ and $b$, and replacing $g$ by $f$, the Schr\"odinger eq. (\ref{3.2}) for the GAL potential (\ref{2}) with $f=0$ goes over to the Schr\"odinger equation for the GAL potential (\ref{2}) with $g=0$. Hence, under the interchange of $a$ and $b$ and replacing $g$ by $f$, all the energy eigenvalues of the potential (\ref{2}) with $f=0$ must go over into those of (\ref{2}) with $g=0$, while the corresponding eigenfunctions are simply obtained from Table 3 by replacing $y$ by $y+K(m)$. \item Using similar reasoning it also follows that under the interchange of $a$ with $g$ and replacing $b$ by $f$, all the energy eigenvalues of the GAL potential (\ref{2}) with $f=0$ go over to those of potential (\ref{2}) with $b=0$ while the corresponding eigenfunctions are obtained from Table 3 by replacing $y$ by $y+iK'(m)$. And finally, under the interchange of $b$ with $g$ and replacing $a$ by $f$, all the energy eigenvalues of the GAL potential (\ref{2}) with $f=0$ go over to those of potential (\ref{2}) with $a=0$, while the corresponding eigenfunctions are easily obtained from Table 3 by replacing $y$ by $y+K(m)+iK'(m)$. \end{enumerate} \subsection{The General Case: $a,~b,~f,~g$ All Nonzero} Finally, let us discuss the most general case when all the four parameters are nonzero. In this case one obtains $n+1$ solutions when $a+b+f+g=2n$ with $n=0,1,2,...$. The QES solutions for $n=0,1$ are given in Table 4.
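The simplest case $n=0$ of this counting can be made completely explicit. Using the symmetry under $a \rightarrow -a-1$ noted earlier, the potential $[6,2,2,2]$ corresponds to $a=-3$, $b=f=g=1$, so that $a+b+f+g=0$; the single QES state is then the ansatz (\ref{3.3}) with $\phi(y)=$constant, i.e. $\psi \propto [{\rm sn}(y){\rm cn}(y){\rm dn}(y)]^{-1}$, and demanding $Q=R=0$ in eqs. (\ref{3.4}) and (\ref{3.5}) gives $E=-(f+g)^2-m(g+b)^2=-4(1+m)$. A minimal numerical confirmation using mpmath (the values of $m$, $\beta$ and the test point below are arbitrary):

\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25
m = mp.mpf('0.5')
beta = mp.ellipk(m)/2
jac = lambda kind, x: mp.ellipfun(kind, 1j*x + beta, m=m)

def V(x):                               # the potential [6,2,2,2] of eq. (2)
    s, c, d = jac('sn', x), jac('cn', x), jac('dn', x)
    return -(6*m*s**2 + 2*m*c**2/d**2 + 2*d**2/c**2 + 2/s**2)

psi = lambda x: 1/(jac('sn', x)*jac('cn', x)*jac('dn', x))

x0 = mp.mpf('0.27')
E = (-mp.diff(psi, x0, 2) + V(x0)*psi(x0))/psi(x0)
print(mp.chop(E + 4 + 4*m, tol=1e-12))  # -> 0, i.e. E = -4(1+m)
\end{verbatim}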
\begin{enumerate} \item It is easy to see that in the general case, the eigenfunction is of the form \begin{equation} \psi = {\rm sn}^{-g}(y){\rm cn}^{-f}(y){\rm dn}^{-b}(y) \sum_{k=0}^{n} A_k {\rm sn}^{2k} (y)\,, \end{equation} while the corresponding eigenvalues are solutions of an $(n+1)$th order equation. \item It can be checked from Table 4 that $\delta_{11}$ is invariant under $b \leftrightarrow g$ followed by $m \rightarrow 1-m$. \item The GAL potential (\ref{2}) and hence the corresponding Schr\"odinger eq. (\ref{3.2}) is invariant under the transformation $y \rightarrow y+K(m)$ followed by the interchange of $a$ with $b$ and $f$ with $g$. Hence, under the interchange of $a$ with $b$ and $f$ with $g$, all the eigenvalues of the GAL system must either remain invariant or go into each other. It is easily checked from Table 4 that all the eigenvalues are invariant under the interchange of $a$ with $b$ and $f$ with $g$. Extending this argument, one finds that all the eigenvalues are also invariant under $a \leftrightarrow f, b \leftrightarrow g$ as well as under $a \leftrightarrow g, b \leftrightarrow f$. \end{enumerate} \subsection{Mid-Band States} So far we have discussed the results for the PT-invariant GAL potentials, which give eigenvalues and eigenfunctions corresponding to the band edges. It may be noted that in all these cases, while $a,b,f,g$ need not be integers, either $a+b+f+g$ or $a-b-f-g$ is always integral. We now show that when at least one of $a,b,f,g$ is half-integral and $a+b+f+g$ (or $a-b-f-g$) is an arbitrary number (an integer being a very special case here), then one can obtain doubly degenerate eigenstates which correspond to mid-band states. In fact, depending on whether we want $b$ or $f$ or $g$ to be half-integral (with the other two parameters being integral), we need to use different trial solutions. Therefore, we shall consider all three cases one by one. {\bf Case 1: $b$ half-integral} We start from eq. (\ref{3.4}) and further substitute the ansatz \begin{equation}\label{3.9} \phi(y) = [{\rm cn}(y)+i{\rm sn}(y)]^{t} Z(y)\,, \end{equation} where $t$ is any real number. After lengthy but straightforward algebra, one can show that $Z(y)$ satisfies the equation \begin{eqnarray}\label{3.10} &&Z''(y)+[2it{\rm dn}(y)+2mb\frac{{\rm sn}(y){\rm cn}(y)}{{\rm dn}(y)} -2g\frac{{\rm cn}(y){\rm dn}(y)}{{\rm sn}(y)}+2f\frac{{\rm dn}(y){\rm sn}(y)} {{\rm cn}(y)}]Z'(y) \nonumber \\ &&+[-(R+t^2)+(Q+t^2)m{\rm sn}^2(y)-2itg\frac{{\rm cn}(y)}{{\rm sn}(y)} +2itf(1-m)\frac{{\rm sn}(y)}{{\rm cn}(y)}\nonumber \\ &&+imt(2b+2f+2g-1){\rm sn}(y){\rm cn}(y)]Z(y)=0\,, \end{eqnarray} where $R$ and $Q$ are as given by eq. (\ref{3.5}). Not surprisingly, $Z(y)=$constant is a solution with energy $E=-(4t^2+m)/4$ provided $f=g=0,b=1/2,a=t-1/2$ (i.e. $b+f+g=1/2$). One can build solutions for higher values of $b+f+g$ from here. In particular, for $b+f+g=2M+1/2$, we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.11} Z(y)=\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm cn}(y){\rm sn}(y)\sum_{k=0}^{M-1} B_k {\rm sn}^{2k} (y)\,, \end{equation} while if $b+f+g=2M+3/2$ then we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.12} Z(y)={\rm cn}(y)\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm sn}(y)\sum_{k=0}^{M} B_k {\rm sn}^{2k} (y)\,. \end{equation} Substitution into eq. (\ref{3.10}) and simplification yields analytic expressions for the energy eigenvalues and eigenfunctions for arbitrary $M$ for $b=1/2$ and $b=3/2$.
In particular, for $b=1/2$, we find that \begin{equation}\label{3.13} b=1/2,f=p,f+g=N,a=t-1/2\,,~~E=-[t^2+m(g+b)^2]\,, \end{equation} where both $f,g$ are nonnegative integers satisfying $f+g=N$ with $N=0,1,2,...$. Similarly, when $b=3/2,a=t-1/2,f=p,f+g=N$ we find that \begin{equation}\label{3.14} E=m(2g+1)-[1+t^2+m(g+b)^2]\pm \sqrt{(2g+1)^2m^2 +4m(N+1)(f-g)+4(1-m)t^2}\,, \end{equation} where $f$ and $g$ are again nonnegative integers. In all these cases, the corresponding eigenfunctions have the form as given above in eqs. (\ref{3.11}) and (\ref{3.12}). For small values of $N$, the explicit coefficients $A_k,B_k$ appearing in the eigenfunction expressions can be easily written down. For example, for $b=1/2$ and $N=1$, the eigenfunction is $Z(y)=A{\rm cn}(y)+B{\rm sn}(y)$ with $\frac{B}{A}=it$ in case $f=1,g=0$, while $\frac{B}{A}=i$ in case $g=1,f=0$. For the special case of $f=g=0$ and $t \ne 1/2$, these results represent the generalization of results obtained by us previously \cite{ks2} in the case of the AL potential. Further, for $f=g=0,t=1/2$, the results obtained above match the energy eigenvalue expressions obtained in ref. \cite{ks2} (as they should!). Several comments can be readily made. \begin{enumerate} \item Since, in the variable $y$, the GAL potential (\ref{2}) has period $2K(m)$ as well as $2iK'(m)$, $\psi(y)$ and $\psi(y+2K(m))$ as well as $\psi(y+2iK'(m))$ are all eigenfunctions of the GAL equation with the same eigenvalue. As a consequence, $\phi(y)=[{\rm cn}(y)-i{\rm sn}(y)]^{t}Z(y)$ is also an eigenfunction with the {\it same} eigenvalue. Thus for any nonintegral $t$, each level is doubly degenerate. The same remark also applies to the other two solutions (when $f$ or $g$ is half-integral) discussed below. \item There is one remarkable symmetry associated with eq. (\ref{3.10}). In particular, notice that this equation is invariant under $t \rightarrow -t$ followed by $i \rightarrow -i$ (where $i=\sqrt{-1}$). But under this transformation, the ansatz (\ref{3.9}) becomes \begin{equation}\label{3.9aa} \phi(y)=[{\rm cn}(y)-i{\rm sn}(y)]^{-t}\,Z(y)\,. \end{equation} Hence it follows that the energy eigenvalues must be independent of the sign of $t$, i.e. they must be a function of $t^2$. Similar remarks also apply in the other two cases discussed below (i.e. when $f,g$ are half-integral). \item For integral $t$, both $a,b$ are half integral and these solutions reduce to those discussed in the last section, in which case they correspond to QES band edge eigenstates. \item Here we have obtained solutions $\psi(y)$ in which $a =t-1/2,f=p,g=N-p$ and $b=1/2$ or $3/2$. In view of the symmetries of the GAL potentials, we then also have solutions $\psi(y+K(m))$ with the same energy in case $b=t-1/2, g=p, f=N-p$ and $a$ is either $1/2$ or $3/2$. Similarly we have solutions $\psi(y+K(m)+iK'(m))$ with the same energy in case $f=t-1/2,a=p,b=N-p$ and $g=1/2$ or $3/2$. Further, we also have solutions $\psi(y+iK'(m))$ with the same energy in case $g=t-1/2, a=N-p, b=p$ and $f=1/2$ or $3/2$. \end{enumerate} {\bf Case 2: $f$ half-integral} We start from eq. (\ref{3.4}) and further substitute the ansatz \begin{equation}\label{3.15} \phi(y) = [{\rm dn}(y)+ik{\rm sn}(y)]^{t} Z(y)\,, \end{equation} where $t$ is any real number and $k = \sqrt{m}$.
After some lengthy but straightforward algebra, one finds that $Z(y)$ satisfies the equation \begin{eqnarray}\label{3.16} &&Z''(y)+[2ikt{\rm cn}(y)+2mb\frac{{\rm sn}(y){\rm cn}(y)}{{\rm dn}(y)} -2g\frac{{\rm cn}(y){\rm dn}(y)}{{\rm sn}(y)}+2f\frac{{\rm dn}(y){\rm sn}(y)} {{\rm cn}(y)}]Z'(y) \nonumber \\ &&+[-(R+m t^2)+(Q+t^2)m{\rm sn}^2(y)-2itkg\frac{{\rm dn}(y)}{{\rm sn}(y)} -2iktb(1-m)\frac{{\rm sn}(y)}{{\rm dn}(y)}\nonumber \\ &&+ikt(2b+2f+2g-1){\rm sn}(y){\rm dn}(y)]Z(y)=0\,, \end{eqnarray} where $R$ and $Q$ are as given by eq. (\ref{3.5}). Not surprisingly, $Z(y)=$constant is a solution with energy $E=-(4mt^2+1)/4$ provided $b=g=0,f=1/2,a=t-1/2$ (i.e. $b+f+g=1/2$). One can build solutions for higher values of $b+f+g$ from here. In particular, in case $b+f+g=2M+1/2$, we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.17} Z(y)=\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm sn}(y){\rm dn}(y)\sum_{k=0}^{M-1} B_k {\rm sn}^{2k} (y)\,, \end{equation} while if $b+f+g=2M+3/2$ then we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.18} Z(y)={\rm dn}(y)\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm sn}(y)\sum_{k=0}^{M} B_k {\rm sn}^{2k} (y)\,. \end{equation} On substituting this ansatz in eq. (\ref{3.16}) and making algebraic simplifications, we obtain analytic expressions for the energy eigenvalues and eigenfunctions for arbitrary $M$ for $f=1/2$ and $f=3/2$. In particular, for $f=1/2$, we find that \begin{equation}\label{3.19} f=1/2\,,~ b+g=N\,,~ a=t-1/2\,,~~E=-[mt^2+(g+f)^2]\,, \end{equation} where both $b,g$ are nonnegative integers satisfying $b+g=N$ with $N=0,1,2,...$. Similarly, when $f=3/2,a=t-1/2,b+g=N$ we find that \begin{equation}\label{3.20} E=(2g+1)-[(1+t^2)m+(g+f)^2] \pm \sqrt{(2g+1)^2 +4m(N+1)(f-g)-4m(1-m)t^2}\,, \end{equation} where $b$ and $g$ are again nonnegative integers. In all these cases, the corresponding eigenfunctions have the form as given above in eqs. (\ref{3.17}) and (\ref{3.18}). For small values of $N$, the explicit coefficients $A_k,B_k$ in the eigenfunction expressions can be easily written down. Further, as in the half-integral $b$ case, one can write down three more solutions with the same energy. {\bf Case 3: $g$ half-integral} We start from eq. (\ref{3.4}) and further substitute the ansatz \begin{equation}\label{3.21a} \phi(y) = [{\rm dn}(y)+k{\rm cn}(y)]^{t} Z(y)\,, \end{equation} where $t$ is any real number. After algebraic simplification, it is easy to show that $Z(y)$ satisfies the equation \begin{eqnarray}\label{3.21} &&Z''(y)+[-2kt{\rm sn}(y)+2mb\frac{{\rm sn}(y){\rm cn}(y)}{{\rm dn}(y)} -2g\frac{{\rm cn}(y){\rm dn}(y)}{{\rm sn}(y)}+2f\frac{{\rm dn}(y){\rm sn}(y)} {{\rm cn}(y)}]Z'(y) \nonumber \\ &&+[-R+(Q+t^2)m{\rm sn}^2(y)-2ktb\frac{{\rm cn}(y)}{{\rm dn}(y)} -2ktf\frac{{\rm dn}(y)}{{\rm cn}(y)}\nonumber \\ &&+kt(2b+2f+2g-1){\rm cn}(y){\rm dn}(y)]Z(y)=0\,, \end{eqnarray} where $R$ and $Q$ are as given by eq. (\ref{3.5}). Not surprisingly, $Z(y)=$constant is a solution with energy $E=-(1+m)/4$ provided $b=f=0,g=1/2,a=t-1/2$ (i.e. $b+f+g=1/2$). One can build solutions for higher values of $b+f+g$ from here.
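To make explicit why the constant solution works, set $Z'=Z''=0$ in eq. (\ref{3.21}), so that the coefficient of $Z$ must vanish identically in $y$. Since $1$, ${\rm sn}^2(y)$, ${\rm cn}(y)/{\rm dn}(y)$, ${\rm dn}(y)/{\rm cn}(y)$ and ${\rm cn}(y){\rm dn}(y)$ are linearly independent functions, this requires (for $t \neq 0$) $b=f=0$, $2b+2f+2g-1=0$ (i.e. $g=1/2$), $Q+t^2=0$ and $R=0$. By eq. (\ref{3.5}), the condition $Q+t^2=0$ reads $a(a+1)=t^2-1/4$, whence $a=t-1/2$, while $R=0$ gives $E=-(f+g)^2-m(g+b)^2=-(1+m)/4$, as stated.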
In particular, in case $b+f+g=2M+1/2$, we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.22} Z(y)=\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm cn}(y){\rm dn}(y)\sum_{k=0}^{M-1} B_k {\rm sn}^{2k} (y)\,, \end{equation} while if $b+f+g=2M+3/2$ then we consider the ansatz ($M=0,1,2,...$) \begin{equation}\label{3.23} Z(y)={\rm cn}(y)\sum_{k=0}^{M} A_k {\rm sn}^{2k} (y)+{\rm dn}(y)\sum_{k=0}^{M} B_k {\rm sn}^{2k} (y)\,. \end{equation} Substituting this ansatz in eq. (\ref{3.21}) and simplifying, one gets analytic expressions for the energy eigenvalues and eigenfunctions for arbitrary $M$ for $g=1/2$ and $g=3/2$. In particular, for $g=1/2$, we find that \begin{equation}\label{3.24} g=1/2\,,~ b+f=N\,,~ a=t-1/2\,,~~E=-[(f+g)^2+m(g+b)^2]\,, \end{equation} where both $b,f$ are nonnegative integers satisfying $b+f=N$ with $N=0,1,2,...$. Similarly, when $g=3/2,a=t-1/2,b+f=N$ we find that \begin{equation}\label{3.25} E=1+2f+(2b+1)m-[(f+g)^2+m(g+b)^2]\pm \sqrt{(1-m)[(2f+1)^{2}-(2b+1)^{2} m] +4mt^2}\,, \end{equation} where $b$ and $f$ are again nonnegative integers. In all these cases, the corresponding eigenfunctions have the form as given above in eqs. (\ref{3.22}) and (\ref{3.23}). For small values of $N$, the coefficients $A_k,B_k$ appearing in the eigenfunctions can be easily written down. Further, as in the half-integral $b$ case, one can write down three more solutions with the same energy. \section{\bf Supersymmetry and Potentials with a Finite Number of Band Gaps} We shall now start with the ground state as well as the excited state eigenfunctions of various PT-invariant GAL potentials discussed in the last section and, using supersymmetry, obtain the corresponding SUSY partner potentials. In this manner, we obtain many new periodic potentials $V_{+}(x)$ with a finite number of band gaps. As emphasized in the introduction, unlike for real potentials, if we take a complex PT-invariant potential, then even if we start with an excited state wave function and calculate the corresponding superpotential $W$, the singularities in $W$ and hence in $V_{+}(x)$ are not on the real axis, and do not cause problems. \subsection{Supersymmetry Partners of PT-Invariant Lam\'e Potentials} The simplest case is when only one parameter (say $a$) is nonzero. This gives the PT-invariant Lam\'e potential \begin{equation}\label{3} V(x)=-a(a+1)m{\rm sn}^2(y)\,. \end{equation} For concreteness, take $a=1$, which yields $V(x)=-2m{\rm sn}^2(y)$. Here, the three band edge eigenfunctions (in order of increasing energy eigenvalues) are ${\rm sn}(y),{\rm cn}(y),{\rm dn}(y)$. It is easily computed that, corresponding to these three eigenstates, the partner potentials (up to a constant) are $V_{+}(x) = -2m{\rm sn}^2(y+K(m)),-2m{\rm sn}^2(y+iK'(m)), -2m{\rm sn}^2(y+K(m)+iK'(m))$, which are all strictly isospectral to the original Lam\'e potential. Thus, in this case we do not obtain any new solvable potentials by using supersymmetry. Now consider the case $a=2$. All the five band edge eigenvalues and eigenfunctions of the PT-invariant Lam\'e potential $V(x)=-6m{\rm sn}^2 (y)$ have already been given by us in Table 4 of ref. \cite{ks6}. Starting from any of the five band edge eigenfunctions and calculating the corresponding superpotentials, we obtain five different supersymmetric partner potentials, all of which have the same band edge energy eigenvalues as given in Table 4 of ref. \cite{ks6}. In Table 5 we have given the expressions for these five different strictly isospectral potentials.
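The construction just described is easily automated. The sketch below (Python/mpmath) builds the superpotential $W=-\psi'/\psi$ from the band edge $\psi={\rm sn}(y)$ of the $a=1$ potential and checks that $V_{+}=W^2+W'$ coincides with one of the translated potentials listed above, namely $-2m\,{\rm sn}^2(y+iK'(m))$, up to an additive constant (here $1+m$); the modulus, $\beta$ and the test points are arbitrary.

\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30
m = mp.mpf('0.6')
K, Kp = mp.ellipk(m), mp.ellipk(1 - m)
beta = K/2
sn = lambda u: mp.ellipfun('sn', u, m=m)
cn = lambda u: mp.ellipfun('cn', u, m=m)
dn = lambda u: mp.ellipfun('dn', u, m=m)

y = lambda x: 1j*x + beta
W = lambda x: -1j*cn(y(x))*dn(y(x))/sn(y(x))   # W = -psi'/psi for psi = sn(y)
Vplus = lambda x: W(x)**2 + mp.diff(W, x)      # SUSY partner potential

for x0 in (mp.mpf('0.2'), mp.mpf('0.7')):
    diff = Vplus(x0) - (-2*m*sn(y(x0) + 1j*Kp)**2)
    print(mp.chop(diff, tol=1e-10))            # -> 1 + m at both points
\end{verbatim}

Repeating this with the other band edges generates the remaining partner potentials in the same way.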
It is worth noting that out of these five potentials, three are self-isospectral: they are the PT-invariant GAL potentials $[2,2,2,0]$. Hence, strictly speaking, we only have three genuinely different potentials, all having the same band edge energies. For each of these cases, using the formalism of supersymmetric quantum mechanics \cite{cks}, we can easily obtain expressions for the corresponding five eigenstates. Now, again by starting from these eigenfunctions, we can construct still different partner potentials but with identical band edges. In this way, one could construct a large number of periodic potentials with five band edges and two band gaps, all strictly isospectral to the PT-invariant Lam\'e potential (\ref{3}) with $a=2$. Similarly, if we consider the PT-invariant Lam\'e potential (\ref{3}) with $a=3$, then we have seven band edge eigenfunctions and eigenvalues, all of which are analytically known and are given in Table 1 of ref. \cite{ks5}. Again, using supersymmetry, we can obtain seven different partner potentials $V_{+}$, all with the same band edge eigenvalues. By starting from any one of them and using other eigenfunctions recursively, we can in principle construct a huge class of new isospectral potentials. Particular mention may be made of the case when we start from the eigenfunction ${\rm sn}(y){\rm cn}(y){\rm dn}(y)$ of the potential $V(x)=-12m{\rm sn}^2(y)$. It is easily shown that the corresponding partner potential $V_{+}$ (up to a constant) is given by \begin{equation}\label{8} V_{+}(x)=-m[6{\rm sn}^2(y)+2{\rm sn}^2(y+K(m))+2{\rm sn}^2(y+iK'(m)) +2{\rm sn}^2(y+K(m)+iK'(m))]\,. \end{equation} Thus, we see that the PT-invariant GAL potential $[6,2,2,2]$ has precisely three bands, three band gaps and seven band edges, since it is the supersymmetric partner of the PT-invariant Lam\'e potential (\ref{3}) with $a=3$. The process described above is readily extended to any Lam\'e potential with integer $a$. We can start from any of the $2a+1$ band edges and obtain the corresponding supersymmetric partner potentials, all having the same band edges. We have shown that the SUSY partners of the PT-invariant Lam\'e potentials $[6,0,0,0]$ and $[12,0,0,0]$ are the potentials $[2,2,2,0]$ and $[6,2,2,2]$, respectively. What about the higher Lam\'e potentials? In this connection, it is amusing to notice that the band edges of the PT-invariant Lam\'e potential $[20,0,0,0]$ and the potential $[6,6,6,2]$ (which follow from Table 4) are identical. For example, out of the nine band edges, six band edge energy eigenvalues of $[20,0,0,0]$ are given by \begin{equation}\label{3.8} E=-5(m+2)\pm \sqrt{4m^2-9m+9}\,,~~E=-5(1+m)\pm 2\sqrt{4m^2+m+4}\,,~~ E=5(1+2m)\pm 2\sqrt{9m^2-9m+4}\,. \end{equation} It is easily seen from Table 4 that exactly the same eigenvalues are obtained when $a,b,f,g$ take the values $(2,2,-3,1),(2,-3,2,1),(-3,2,2,1)$. Similarly, one can show that the three remaining eigenvalues of $[20,0,0,0]$ satisfy the same cubic equation as $[6,6,6,2]$ when $a,b,f,g$ take the values $(2,2,2,-2)$. In fact, one can show that the number (and structure) of band edges of the PT-invariant Lam\'e potential $[2a(2a+1),0,0,0]$ is the same as that of the QES states of the potential $[a(a+1),a(a+1),a(a+1),(a-1)a]$.
For example, for this PT-invariant Lam\'e potential it is well known that out of the $4a+1$ band edges of the Lam\'e potential, $a$ states each are of the form ${\rm cn}(y){\rm sn}(y) F_{a-1}({\rm sn}^2(y))$, ${\rm cn}(y){\rm dn}(y) F_{a-1}({\rm sn}^2(y))$, ${\rm dn}(y){\rm sn}(y) F_{a-1}({\rm sn}^2(y))$, while the remaining $a+1$ states are of the form $F_{a}({\rm sn}^2(y))$. Using Table 4, it is easily shown that there are again $4a+1$ QES states of the potential $[a(a+1),a(a+1),a(a+1),(a-1)a]$, out of which $a$ QES states each are obtained when $a,b,f,g$ are of the form $a,a,a-1,a-1$, or $a,a-1,a,a-1$, or $a-1,a,a,a-1$, while $a+1$ QES states are obtained when $a,b,f,g$ are of the form $a,a,a,-a$. In fact we believe that all the band edge eigenvalues of the potentials $[2a(2a+1),0,0,0]$ and $[a(a+1),a(a+1),a(a+1),(a-1)a]$ are identical. While this is easily shown for low values of $a$, at the moment a general proof is still lacking. Similarly, one can show that the number (as well as the structure) of band edges of the PT-invariant Lam\'e potential $[(2a-1)2a,0,0,0]$ is the same as that of the QES states of the potential $[a(a+1),(a-1)a,(a-1)a,(a-1)a]$. For example, it is well known that out of the $4a-1$ band edges of the PT-invariant Lam\'e potential, $a$ states each are of the form ${\rm cn}(y) F_{a-1}({\rm sn}^2(y))$, ${\rm dn}(y) F_{a-1}({\rm sn}^2(y))$, ${\rm sn}(y) F_{a-1}({\rm sn}^2(y))$, while the remaining $a-1$ states are of the form ${\rm sn}(y){\rm cn}(y){\rm dn}(y)F_{a-2}({\rm sn}^2(y))$. Using Table 4, it is easily shown that there are $4a-1$ QES states of the potential $[a(a+1),(a-1)a,(a-1)a,(a-1)a]$, out of which $a$ QES states each are obtained when $a,b,f,g$ are of the form $a,-a,a-1,a-1$, or $a,a-1,-a,a-1$, or $a,a-1,a-1,-a$, while $a-1$ QES states are obtained when $a,b,f,g$ are of the form $-a-1,a-1,a-1,a-1$. In fact we believe that all the band edge eigenvalues of the potentials $[(2a-1)2a,0,0,0]$ and $[a(a+1),(a-1)a,(a-1)a,(a-1)a]$ are identical. While this is easily shown for low values of $a$, a general proof is not available. On the basis of these results, we then conjecture that the potentials $[a(a+1),a(a+1),a(a+1),(a-1)a]$, for integer $a$, have the same band edges as the Lam\'e potential $[2a(2a+1),0,0,0]$, and hence these potentials also have precisely $2a$ band gaps and $(4a+1)$ band edges, all of which are known in principle. Further, the potentials $[a(a+1),(a-1)a,(a-1)a,(a-1)a]$ have the same band edges as the Lam\'e potential $[(2a-1)2a,0,0,0]$ and hence are also potentials with a finite number ($2a-1$) of band gaps. It would be nice to have a general proof. \subsection{\bf Supersymmetry Partners of PT-Invariant Associated Lam\'e Potentials} We start our discussion with the $a=2$, $b=1$ associated Lam\'e potential and its corresponding PT-invariant potential $V^{PT}(x)=-6m{\rm sn}^2(y)-2m{{\rm cn}^2 (y)}/{{\rm dn}^2(y)}\,$. All five band-edge eigenvalues and eigenfunctions for this potential have been given by us in Table 3 of ref. \cite{ks6}. As established previously \cite{ks1,ks6}, this is a self-isospectral potential and hence using the band edge eigenfunction ${\rm dn}^2(y)$ does not give any new partner potential. However, if instead we use the remaining four band edge eigenfunctions, then one gets four new SUSY partner potentials which are strictly isospectral to the PT-invariant [6,2,0,0] potential. Let us now consider the PT-invariant AL potential $[a(a+1), (a-2)(a-1),0,0]$, i.e. the potential (\ref{2}) with $b=a-2, f=g=0$.
As shown by us \cite{ks1}, one of its exact band edge eigenfunctions is $\psi(x) = {\rm cn}(y){\rm dn}^{a-1} (y)$. It is easy to see that the corresponding partner potential $V_{+}$ (up to a constant) is the potential $[(a-1)a,(a-1)a,2,0]$. Thus we immediately conclude that the PT-invariant potential $[(a-1)a,(a-1)a,2,0]$ is strictly isospectral to the PT-invariant AL potential $[a(a+1),(a-2)(a-1),0,0]$. In the special case when both $a,b$ are integers, in view of our results on AL potentials \cite{ks2}, it then follows that the GAL potential $[(a-1)a,(a-1)a,2,0]$ has $a$ band gaps and $a$ bands, out of which $b=a-2$ bands are rather unusual. Note that if instead we use $\psi(x) = {\rm sn}(y){\rm dn}^{a-1} (y)$, which is also one of the exact eigenfunctions of the above AL potential, then nothing new is obtained. In particular, the corresponding partner potential is $[(a-1)a,(a-1)a,0,2]$, which is strictly isospectral to the potential $[(a-1)a,(a-1)a,2,0]$. Let us now consider the PT-invariant AL potential $[a(a+1), (a-3)(a-2),0,0]$, i.e. the potential (\ref{2}) with $b=a-3, f=g=0$. As shown by us \cite{ks1}, one of its exact band edge eigenfunctions is $\psi(x) = {\rm sn}(y) {\rm cn}(y){\rm dn}^{a-2} (y)$. It is easy to see that the corresponding partner potential $V_{+}$ (up to a constant) is the potential $[(a-1)a,(a-2)(a-1),2,2]$ in the notation of (\ref{2}). Thus we immediately conclude that when $a,b$ are integers, this PT-invariant potential is strictly isospectral to the AL potential $[a(a+1),(a-3)(a-2),0,0]$, and has $a$ band gaps and $a$ bands, out of which $b=a-3$ bands are rather unusual. We can generalize the above arguments. In particular, we find that the number (and even the structure) of the QES states of the potentials $[(a-p)(a-p+1),(a-p-1)(a-p),p(p+1),p(p+1)]$ is the same as that of the band edges of the AL potentials $[a(a+1),(a-2p-1)(a-2p),0,0]$. For example, as remarked in the previous section, if $b=a-2p-1$ ($p=0,1,2,...$), then there are $p$ eigenstates of the form ${\rm sn}(y){\rm cn}(y){\rm dn}(y)$$F_{p-1}({\rm sn}^2(y))$, $p+1$ eigenstates of the form ${\rm dn}^{a-2p}(y)$$F_{p}({\rm sn}^2(y))$, $a-p$ eigenstates of the form ${\rm cn}(y){\rm dn}^{2p+1-a}(y)$$F_{a-p-1}({\rm sn}^2(y))$ and also $a-p$ eigenstates of the form ${\rm sn}(y){\rm dn}^{2p+1-a}(y)$$F_{a-p-1}({\rm sn}^2(y))$. Using Table 4 it is easy to show that for the GAL potential $[(a-p)(a-p+1),(a-p-1)(a-p),p(p+1),p(p+1)]$, there are $p$ eigenstates of the form ${\rm sn}^{-p}(y)$${\rm cn}^{-p}(y)$ ${\rm dn}^{1+p-a}(y)$$F_{p-1}({\rm sn}^2(y))$, $p+1$ eigenstates of the form ${\rm dn}^{a-p}(y)$${\rm cn}^{-p}(y)$ ${\rm sn}^{-p}(y)$$F_{p} ({\rm sn}^2(y))$, $a-p$ eigenstates of the form ${\rm cn}^{p+1}(y)$${\rm sn}^{-p}(y)$ ${\rm dn}^{p+1-a}(y)$ $F_{a-p-1}({\rm sn}^2(y))$ and also $a-p$ eigenstates of the form ${\rm sn}^{p+1}(y)$${\rm cn}^{-p}(y)$ ${\rm dn}^{p+1-a}(y)$$F_{a-p-1}({\rm sn}^2(y))$. In fact we believe that all the band edge eigenvalues of the potentials $[a(a+1),(a-2p-1)(a-2p),0,0]$ and $[(a-p)(a-p+1),(a-p-1)(a-p),p(p+1),p(p+1)]$ are identical. While this is easily shown for low values of $a$ and $p$, a general proof is still lacking. Similarly, we can show that the number (and even the structure) of the QES states of the potentials $[(a-p)(a-p+1),(a-p)(a-p+1),p(p+1),(p-1)p]$ is the same as that of the band edges of the AL potentials $[a(a+1),(a-2p)(a-2p+1),0,0]$. In particular, for the AL potential, as shown in Sec.
2, when $b=a-2p$, there are $p$ eigenstates of the form ${\rm cn}(y){\rm dn}^{a-2p+1}(y)$$F_{p-1}({\rm sn}^2(y))$, $p$ eigenstates of the form ${\rm sn}(y)$${\rm dn}^{a-2p+1}(y)$$F_{p-1}({\rm sn}^2(y))$, $a-p$ eigenstates of the form ${\rm sn}(y)$${\rm cn}(y){\rm dn}^{2p-a}(y)$$F_{a-p-1}({\rm sn}^2(y))$, and $a-p+1$ eigenstates of the form ${\rm dn}^{2p-a}(y)F_{a-p}({\rm sn}^2(y))$. It is easily shown that for the potential $[(a-p)(a-p+1),(a-p)(a-p+1),p(p+1),(p-1)p]$, there are $4a-1$ QES states of similar form. In particular, there are $p$ eigenstates of the form ${\rm sn}^{-p}(y){\rm cn}^{1-p}(y)$ ${\rm dn}^{p-a}(y)$$F_{p-1}({\rm sn}^2(y))$, $p$ eigenstates of the form ${\rm dn}^{a+1-p}(y){\rm cn}^{1-p}(y)$ ${\rm sn}^{-p}(y)$$F_{p-1}({\rm sn}^2(y))$, $a-p+1$ eigenstates of the form ${\rm cn}^{p}(y)$${\rm sn}^{-p}(y)$ ${\rm dn}^{p-a}(y)$$F_{a-p}({\rm sn}^2(y))$ and also $a-p$ eigenstates of the form ${\rm sn}^{p+1}(y)$${\rm cn}^{1-p}(y)$ ${\rm dn}^{p-a}(y)$ $F_{a-p-1}({\rm sn}^2(y))$. In fact we believe that all the band edge eigenvalues of the potentials $[a(a+1),(a-2p)(a-2p+1),0,0]$ and $[(a-p)(a-p+1),(a-p)(a-p+1),p(p+1),(p-1)p]$ are identical. While this is easily shown for low values of $a$ and $p$, we do not yet have a general proof. \subsection{\bf SUSY Partners of Potentials with $b=f=0$} Let us now consider the SUSY partners of the potential $[a(a+1),0,0,g(g+1)]$ which, for integral values of $a,g$, is a problem with a finite number of band gaps. By exactly following the above discussion about the PT-invariant AL potential, we can construct a host of new potentials with a finite number of band gaps. For example, by starting from the potential $[6,0,0,2]$ and following the procedure as in the AL case, we can easily obtain four new SUSY partner potentials, all with two band gaps. From Table 1 we observe that for integral $a$, two of the exact eigenfunctions of the potential $[a(a+1),0,0,(a-2)(a-1)]$ with $a$ band gaps are ${\rm cn}(y){\rm sn}^{a-2}(y)$ and ${\rm dn}(y){\rm sn}^{a-2}(y)$. It is easily seen that if we start with either of these eigenfunctions, then the corresponding SUSY partner potential with the same finite ($a$) number of band gaps is the potential $[(a-1)a,2,0,(a-1)a]$ (or its isospectral partner $[(a-1)a,0,2,(a-1)a]$). From Table 1 we also observe that one of the exact eigenfunctions of the potential $[a(a+1),0,0,(a-3)(a-2)]$ is ${\rm cn}(y){\rm dn}(y){\rm sn}^{a-3}(y)$. On starting with this eigenfunction, it is easily shown that the corresponding SUSY partner potential is $[(a-1)a,2,2,(a-2)(a-1)]$, which therefore must also be a potential with a finite number ($a$) of band gaps in case $a$ is an integer. Similarly, by starting from the finite band-gap potentials $[a(a+1),0,0,(a-2p-1)(a-2p)]$ as well as $[a(a+1),0,0,(a-2p)(a-2p+1)]$, and following the discussion in the case of PT-invariant AL potentials, it is easily shown that the corresponding SUSY partners with the same (finite) number of band gaps are the potentials $[(a-p)(a-p+1),p(p+1),p(p+1),(a-p-1)(a-p)]$ and $[(a-p)(a-p+1),p(p+1),(p-1)p,(a-p)(a-p+1)]$ respectively, where $a$ and $p$ are positive integers. \subsection{\bf SUSY Partners of Potentials with $b=g=0$} Let us now consider the SUSY partners of the PT-invariant potential $[a(a+1),0,f(f+1),0]$. {}From Table 2 we observe that two of the exact eigenfunctions of the potential $[a(a+1),0,(a-2)(a-1),0]$ are ${\rm sn}(y){\rm cn}^{a-2}(y)$ and ${\rm dn}(y){\rm cn}^{a-2}(y)$.
It is easily seen that if we start with either of these eigenfunctions, then the corresponding SUSY partner potentials turn out to be $[(a-1)a,2,(a-1)a,0]$ or $[(a-1)a,0,(a-1)a,2]$. Since we know that the potentials $[(a-1)a,(a-1)a,2,0]$ as well as $[(a-1)a,2,0,(a-1)a]$ have a finite number of band gaps, we conjecture that the potential $[(a-1)a,2,(a-1)a,0]$ may also have only a finite number ($a$) of band gaps when $a$ is an integer. {}From Table 2 we also observe that one of the exact eigenfunctions of the potential $[a(a+1),0,(a-3)(a-2),0]$ is ${\rm sn}(y){\rm dn}(y){\rm cn}^{a-3}(y)$. Starting with this eigenfunction, it is easily shown that the corresponding SUSY partner potential is $[(a-1)a,2,(a-2)(a-1),2]$. Again, since for integer $a$, the potential $[(a-1)a,(a-2)(a-1),2,2]$ has only a finite number of band gaps, it is tempting to conjecture that the same may also be true for the potential $[(a-1)a,2,(a-2)(a-1),2]$. Similarly, by starting from the finite band-gap potentials $[a(a+1),0,(a-2p-1)(a-2p),0]$ as well as $[a(a+1),0,(a-2p)(a-2p+1),0]$, and following the discussion in the case of PT-invariant AL potentials, it is easily shown that the corresponding SUSY partners with the same number of band gaps are the GAL potentials $[(a-p)(a-p+1),p(p+1),p(p+1),(a-p-1)(a-p)]$ and $[(a-p)(a-p+1),p(p+1),(p-1)p,(a-p)(a-p+1)]$ respectively, when $a$ and $p$ are integers. \subsection{\bf SUSY Partners of Potentials with $f=0$} Let us now consider the SUSY partners of the potential $[a(a+1),b(b+1),0,g(g+1)]$. {}From Table 3 we observe that one of the exact eigenfunctions is ${\rm dn}^{-b}(y){\rm sn}^{-g}(y)$ when $a+b+g=0$. If we start with this eigenfunction, then the corresponding SUSY partner potential turns out to be $[(a-1)a,(b-1)b,0,(g-1)g]$. {}From Table 3 we also observe that an exact eigenfunction of the potential $[a(a+1),b(b+1),0,g(g+1)]$ is ${\rm cn}(y){\rm dn}^{-b}(y){\rm sn}^{-g}(y)$ when $a+b+g=1$. Starting with this eigenfunction, it is easily shown that the corresponding SUSY partner potential is $[(a-1)a,(b-1)b,2,(g-1)g]$. In summary, we have discovered a large number of complex PT-invariant periodic potentials with a finite number of band gaps, many occurring when the parameters $a,b,f,g$ have specific integer values. This leads us to make the plausible conjecture that all GAL potentials (\ref{2}) for integer values of $a,b,f,g$ have a finite number of band gaps, but there is as yet no formal proof. \section{Heun's Equation and the Generalized Associated Lam\'e Equation} In this section, we point out an interesting connection between Heun's differential equation \cite{ren} and the generalized associated Lam\'e equation (\ref{3.2}). This connection enables us to use the various solutions of eq. (\ref{3.2}) obtained in this paper to write down several solutions of Heun's equation which have apparently not been studied in the mathematics literature. The canonical form of Heun's equation is given by \cite{ren} \begin{equation}\label{4.1} \bigg [\frac{d^2}{dx^2}+\big (\frac{\gamma}{x}+\frac{\delta}{x-1} +\frac{\epsilon}{x-c} \big )\frac{d}{dx}+\frac{\alpha \beta x-q}{x(x-1)(x-c)} \bigg ]G(x) =0\,, \end{equation} where $\alpha,\beta,\gamma,\delta,\epsilon,q,c$ are real parameters, except that $c \ne 0,1$, and the first five parameters are constrained by the relation \begin{equation}\label{4.1a} \gamma+\delta+\epsilon=\alpha+\beta+1\,.
\end{equation} If we make the transformation $x={\rm sn}^2(y,m)$, then Heun's equation takes the form \cite{ren} \begin{eqnarray}\label{4.2} &&F''(y)+[(1-2\epsilon) m \frac{{\rm sn}(y) {\rm cn}(y)}{{\rm dn}(y)} +(1-2\delta) \frac{{\rm sn}(y){\rm dn}(y)}{{\rm cn}(y)} +(2\gamma-1) \frac{{\rm cn}(y) {\rm dn}(y)}{{\rm sn}(y)}]F'(y) \nonumber \\ &&-[4mq -4\alpha \beta m {\rm sn}^2 (y)]F(y) =0\,, \end{eqnarray} where $G(x) \equiv F(y)$ and $m=1/c$. It is interesting to note that eq. (\ref{4.2}) is very similar to the $\phi$ equation (\ref{3.4}) which we have analyzed in great detail. In particular, with the identification \begin{equation}\label{4.3} b=\frac{1}{2}-\epsilon\,,~f=\frac{1}{2}-\delta\,,~g=\frac{1}{2}-\gamma\,,~b+f+g=\frac{1}{2}-\alpha-\beta\,,~4\alpha \beta = Q\,,~4mq=R\,, \end{equation} all the results discussed above can be immediately used to obtain different solutions of Heun's equation. It turns out that using the mid-band states obtained in Sec. 2, one generates new quasi-periodic solutions of Heun's eq. (\ref{4.2}), which we discuss in a separate publication \cite{ks7}.
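For completeness, we sketch the elementary computation behind the transformation $x={\rm sn}^2(y,m)$. Writing $G(x)=F(y)$, one has $dx/dy=2\,{\rm sn}(y){\rm cn}(y){\rm dn}(y)$, together with $x-1=-{\rm cn}^2(y)$ and $x-c=-{\rm dn}^2(y)/m$ (recall $m=1/c$). Multiplying eq. (\ref{4.1}) by $(dx/dy)^2=4\,{\rm sn}^2(y){\rm cn}^2(y){\rm dn}^2(y)$ and applying the chain rule then yields eq. (\ref{4.2}); for instance, the coefficient of $F(y)$ becomes \begin{eqnarray*} \frac{4\,{\rm sn}^2(y){\rm cn}^2(y){\rm dn}^2(y)\,[\alpha\beta\,{\rm sn}^2(y)-q]} {{\rm sn}^2(y)\,[-{\rm cn}^2(y)]\,[-{\rm dn}^2(y)/m]} =4\alpha\beta\, m\,{\rm sn}^2(y)-4mq\,, \end{eqnarray*} in agreement with eq. (\ref{4.2}).

\newpage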
{ "attr-fineweb-edu": 1.537109, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} The flow generated by the Gauss curvature was first studied by Firey \cite{Fir74} to model the shape change of tumbling stones. Since then the evolution of hypersurfaces by their Gauss curvature has been studied by many authors \cite{And96}-\cite{AndGuNi16}, \cite{BCD16}-\cite{ChWang00}, \cite{DaskLee04, GuNi16, Ha94}. A main interest is to understand the asymptotic behaviour of the flows. It was conjectured that the $\alpha$-power of the Gauss curvature, for $\alpha>\frac{1}{n+2}$, deforms a convex hypersurface in $\mathbb R^{n+1}$ into a round point. This is a difficult problem and has been studied by many authors in the last three decades. The first result was by Chow \cite{Chow85}, who provided a proof for the case $\alpha=1/n$. In \cite{And99} Andrews proved the conjecture for the case $n=2$ and $\alpha=1$. Very recently, Brendle, Choi and Daskalopoulos \cite{BCD16} resolved the conjecture for all $\alpha>\frac{1}{n+2}$, in all dimensions. As a natural extension, anisotropic flows have also attracted much attention and have been extensively investigated \cite{ChZhu99,Ga93,GaLi94}. They provide alternative proofs for the existence of solutions to elliptic PDEs arising in geometry and physics. For example, a proof based on the logarithmic Gauss curvature flow was given in \cite{ChWang00} for the classical Minkowski problem, and in \cite{Wang96} for a prescribing Gauss curvature problem. Expansion of convex hypersurfaces by their Gauss curvature has also been studied by several authors \cite{Gerh90,Gerh14, Li10, Sch06, Urb91}. Let $\mathcal M_0$ be a smooth, closed, uniformly convex hypersurface in $\mathbb R^{n+1}$ enclosing the origin. In this paper we study the following anisotropic Gauss curvature flow, \begin{equation}\label{flow} \left\{ { \begin{split} \frac{\partial X}{\partial t} (x,t) &= - f(\nu) r^{\alpha} K(x,t) \nu,\\ X(x,0) &=X_0(x), \end{split} }\right. \end{equation} where $K(\cdot,t)$ is the Gauss curvature of the hypersurface $\mathcal M_t$, parametrized by $X(\cdot,t):\mathbb S^n\to \mathbb R^{n+1}$, $\nu(\cdot,t)$ is the unit outer normal at $X(\cdot, t)$, and $f$ is a given positive smooth function on $\mathbb S^n$. We denote by $r = |X(x,t)|$ the distance from the point $X(x, t)$ to the origin, and regard it as a function of $\xi=\xi (x,t) := X(x,t) / |X(x,t)|\in \mathbb S^n $. We call it the radial function of $\mathcal M_t$. When $\alpha\ge n+1$, we prove that if $f\equiv 1$, the hypersurface $\mathcal M_t$ converges smoothly after normalisation to a sphere. For a general positive and smooth function $f$, we prove that $\mathcal M_t$ converges smoothly after normalisation to a hypersurface which is a solution to the classical Aleksandrov problem \cite{Aleks42} ($\alpha=n+1$) and to the dual $q$-Minkowski problem \cite{HLYZ16} for $q<0$ ($\alpha>n+1$). Our proof of the smooth convergence consists of two parts: \begin{itemize} \item [(i)] uniform positive upper and lower bounds for the radial function of $\widetilde \mathcal M_t$; and \item [(ii)] uniform positive upper and lower bounds for the principal curvatures of $\widetilde \mathcal M_t$, \end{itemize} where $\widetilde \mathcal M_t$ is the normalised solution given in \eqref{rescaled surface} below. Once the upper and lower bounds for the principal curvatures are established, higher order regularity of $\widetilde \mathcal M_t$ follows from Krylov's regularity theory. We then infer the smooth convergence by using the functional \eqref{functional}.
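To fix ideas, consider the model case $f\equiv 1$ with $\mathcal M_0$ a sphere of radius $\rho_0$ centred at the origin. Then $r=\rho(t)$, $K=\rho^{-n}(t)$, and \eqref{flow} reduces to the ODE \begin{eqnarray*} \rho'(t) = -\rho^{\alpha-n}(t), \end{eqnarray*} so that $\rho^{n+1-\alpha}(t)=\rho_0^{n+1-\alpha}-(n+1-\alpha)t$ if $\alpha\neq n+1$, and $\rho(t)=\rho_0e^{-t}$ if $\alpha= n+1$. In particular, the sphere shrinks to the origin in finite time when $\alpha<n+1$, but only as $t\to\infty$ when $\alpha\ge n+1$. This dichotomy is reflected in the results below.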
Our proof of part (ii) applies to the flow \eqref{flow} for all $\alpha\in\mathbb R^1$, as long as part (i) is true. In particular it also applies to the original Gauss curvature flow (namely the case $\alpha=0$) for which the estimates (ii) were established for $f\equiv 1$ in \cite{GuNi16}. When $\alpha<n+1$, we establish the smooth convergence for even $f$, provided the initial hypersurface is symmetric with respect to the origin. We also give examples to show that, without the symmetry assumption, part (i) above fails and so the smooth convergence does not hold. As a result we also obtain the existence of smooth symmetric solutions to the dual $q$-Minkowski problem for all $q\in\mathbb R^1$, assuming the function $f$ is smooth, positive, and $f$ is even when $q>0$. The dual $q$-Minkowski problem was recently introduced by Huang, Lutwak, Yang, and Zhang \cite{HLYZ16}, where they proved the existence of symmetric weak solutions for the case $q \in (0, n+1)$ under some conditions. Their conditions were recently improved by Zhao \cite{ZY17}. For $q<0$ the existence and uniqueness of weak solutions were obtained in \cite{ZY}. When $q=n+1$ it is the logarithmic Minkowski problem studied in \cite{BLYZ}. In \cite{BLYZ} and \cite{HLYZ16}, the existence of weak solutions was proved when the inhomogeneous term is a non-negative measure not concentrated in any subspace. For other related results, we refer the readers to \cite{BHP17,BLYZZ,Schn14} and the references therein. Let us state our first main result as follows. \begin{theorem}\label{thmA} Let $\mathcal M_0$ be a smooth, closed, uniformly convex hypersurface in $\mathbb R^{n+1}$ enclosing the origin. If $f \equiv 1$ and $\alpha \ge n+1$, then the flow \eqref{flow} has a unique smooth solution $\mathcal M_t$ for all time $t>0$, which converges to the origin. After a proper rescaling $X\to \phi^{-1}(t)X$, the hypersurface $\widetilde \mathcal M_t = \phi^{-1}(t) \mathcal M_t$ converges exponentially fast to the unit sphere centred at the origin in the $C^\infty$ topology. \end{theorem} Our choice for the rescaling factor $\phi(t)$ is motivated by the following calculation. Assume \begin{equation}\label{homothety} X(\cdot,t) = \phi(t) X_0(\cdot) \end{equation} evolves under the flow \eqref{flow} with initial data $\phi_0X_0$, where $\phi$ is a positive function and $\phi_0 = \phi(0)$. Since the normal vector is unchanged by the homothety, we obtain, by differentiating \eqref{homothety} in $t$ and taking the inner product of both sides with $\nu_0=\nu(\cdot,t)$, \begin{equation}\label{s1 t1} \phi'(t)\langle X_0, \nu_0 \rangle = -\phi^{\alpha-n} (t) f r_0^\alpha K_0, \end{equation} where $K_0$ is the Gauss curvature of $\mathcal M_0 = X_0(\mathbb S^n)$, and $r_0$ is the radial function of $\mathcal M_0$. By \eqref{s1 t1} we have \begin{eqnarray*} \phi'(t) = - \lambda \phi^{\alpha - n} (t) \end{eqnarray*} for some constant $\lambda>0$. We may suppose $\lambda =1$. Then \begin{equation}\label{scaling factor} { \begin{split} \phi (t) &= \phi_0 e^{-t},\;\;&\text{if}\;\; \alpha=n+1,\\ \phi (t) &= [\phi_0^{q} -q t]^{\frac{1}{q}},\;\;&\text{if}\;\; \alpha \not=n+1 , \end{split} } \end{equation} where $q=n+1-\alpha$ and $\phi_0 = \phi(0)>0$.
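Indeed, for $\alpha\neq n+1$ the equation $\phi'=-\phi^{\alpha-n}$ integrates in one line: \begin{eqnarray*} \frac{d}{dt}\,\phi^{q}(t) = q\,\phi^{q-1}(t)\phi'(t) = -q\,\phi^{q-1+\alpha-n}(t) = -q, \end{eqnarray*} since $q-1+\alpha-n=0$, which gives the second line of \eqref{scaling factor}; the case $\alpha=n+1$ is simply the linear equation $\phi'=-\phi$.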
By \eqref{s1 t1}, one sees that $\mathcal M_0$ satisfies the following elliptic equation \begin{equation}\label{soliton sol} \frac{u(x)}{r^\alpha(\xi) K(p)} = f(x) \ \ \ \forall\ x\in \mathbb S^n, \end{equation} where $p \in \mathcal M_0$ is the point such that the unit outer normal $\nu(p)=x$, $\xi = p/|p| \in \mathbb S^n$, and $u$ is the support function of $\mathcal M_0$, given by \begin{eqnarray*} u(x) = \sup\{\langle x, y \rangle:\ y\in \mathcal M_0\}. \end{eqnarray*} The above calculation suggests that if we expect that our flow converges to a soliton which satisfies \eqref{soliton sol}, it is reasonable to rescale the flow by a time-dependent factor $\phi(t)$ which is of the form \eqref{scaling factor}. Let us introduce the normalised flow for \eqref{flow}. Let \begin{equation}\label{rescaled surface} {\begin{split} \widetilde M_t & = \phi^{-1}(t) \mathcal M_t, \\ \widetilde X(\cdot, \tau) & = \phi^{-1}(t) X(\cdot,t) , \end{split}} \end{equation} where $$ \tau = \left\{ {\begin{split} & t \hskip80pt \text{if}\ \alpha = n+1,\\ & \frac{1}{q} \log\frac{\phi_0^{q}}{\phi_0^{q}-qt} \ \ \ \ \ \text{if}\ \alpha \ne n+1. \end{split}} \right. $$ Then $\widetilde X(\cdot,\tau)$ satisfies the following normalised flow \begin{equation}\label{normalised flow} \left\{{\begin{split} \frac{\partial X}{\partial t} (x,t) &= - f(\nu) r^\alpha K(x,t) \nu + X(x,t),\\ X(\cdot,0) &= \phi_0^{-1} X_0. \end{split}}\right. \end{equation} For convenience we still use $t$ instead of $\tau$ to denote the time variable and omit the ``tilde'' if no confusion arises. The asymptotic behaviour of \eqref{flow} is equivalent to the long time behaviour of the normalised flow \eqref{normalised flow}. Indeed, in order to prove Theorem \ref{thmA}, we shall establish the a priori estimates for \eqref{normalised flow}, and show that $|X| \to 1$ smoothly as $t\to \infty$, provided $f\equiv 1$ and $\phi_0$ is chosen such that \begin{equation}\label{phi_0} { \begin{split} \phi_0 = \exp(\frac{1}{o_n}\int_{\mathbb S^n} \log r_0(\xi)d\xi), &\;\;\text{if}\;\alpha=n+1,\\ \min_{\mathbb S^n}\, r_0(\cdot) \le \phi_0 \le \max_{\mathbb S^n}\, r_0(\cdot), &\;\;\text{if}\; \alpha>n+1, \end{split} } \end{equation} where $o_n = |\mathbb S^n|$ denotes the area of the sphere $\mathbb S^n$. The following functional plays an important role in our argument, \begin{equation}\label{functional} \mathcal J_\alpha (\mathcal M_t) = \left\{{ \begin{split} \int_{\mathbb S^n} f(x) \log u(x,t) dx- \int_{\mathbb S^n} \log r(\xi,t)d\xi, &\;\;\text{if}\;\alpha=n+1,\\ \int_{\mathbb S^n} f(x) \log u(x,t) dx - \frac1q\int_{\mathbb S^n} r^q(\xi,t)d\xi, &\;\;\text{if}\; \alpha \not = n+1, \end{split} }\right. \end{equation} where $q = n+1-\alpha $ as above, $u(\cdot, t)$ and $r(\cdot, t)$ are respectively the support function and radial function of $\mathcal M_t$. This functional was introduced in \cite{HLYZ16}. We will show in Lemma \ref{descending flow} below that $\mathcal J_\alpha(\mathcal M_t)$ is strictly decreasing unless $\mathcal M_t$ solves the elliptic equation \eqref{soliton sol}. By this functional and the a priori estimates for the normalised flow \eqref{normalised flow}, we obtain the following convergence result for the anisotropic flow \eqref{flow}. \begin{theorem}\label{thmB} Let $\mathcal M_0$ be a smooth, closed, uniformly convex hypersurface in $\mathbb R^{n+1}$ which contains the origin in its interior. Let $f$ be a smooth positive function on $\mathbb S^n$.
If $\alpha>n+1$, then the flow \eqref{flow} has a unique smooth solution $\mathcal M_t$ for all time $t>0$. When $t\to \infty$, the rescaled hypersurfaces $\widetilde \mathcal M_t$ converge smoothly to the unique smooth solution of \eqref{soliton sol}, which is a minimiser of the functional \eqref{functional}. \end{theorem} When $\alpha = n+1$, in order that the solution of \eqref{flow} converges to a solution of \eqref{soliton sol}, we assume that $f\in C^\infty(\mathbb S^n;\mathbb R_+)$ satisfies the following conditions \begin{eqnarray} &&\int_{\mathbb S^n} f = o_n:=|\mathbb S^n|, \label{Aleks f cdt1}\\ && \int_{\omega} f <|\mathbb S^n|- |\omega^*| \label{Aleks f cdt2} \end{eqnarray} for any spherically convex subset $\omega\subset \mathbb S^n$. Here $|\cdot|$ denotes the $n$-dimensional Hausdorff measure, and $\omega^* \subset \mathbb S^n$ is the dual set of $\omega$, namely $\omega^* = \{\xi\in \mathbb S^n:\;\;x\cdot \xi \le 0,\;\;\forall\; x\in \omega\}$. \begin{theorem}\label{thmC} Let $\mathcal M_0$ be as in Theorem \ref{thmB}. Assume $\alpha=n+1$ and that \eqref{Aleks f cdt1}, \eqref{Aleks f cdt2} hold. Then \eqref{flow} has a unique smooth solution $\mathcal M_t$ for all time $t>0$. When $t\to \infty$, the rescaled hypersurfaces $\widetilde \mathcal M_t$ converge smoothly to the smooth solution of \eqref{soliton sol}, which is a minimiser of the functional \eqref{functional}. \end{theorem} Theorem \ref{thmC} gives a proof for the classical Aleksandrov problem in the smooth category by a curvature flow approach. We point out that conditions \eqref{Aleks f cdt1} and \eqref{Aleks f cdt2} are necessary for Aleksandrov's problem \cite{Aleks42}, but for the flow \eqref{flow}, condition \eqref{Aleks f cdt1} is satisfied by any bounded positive function $f$ provided we make a scaling of the time $t$. At the end of the paper we will show that Theorem \ref{thmC} does not hold if \eqref{Aleks f cdt2} is violated. Let $\mathcal M$ be a convex hypersurface in $\mathbb R^{n+1}$ with the origin $\mathcal O$ in its interior. Then $\mathcal M$ is a spherical radial graph via the mapping \begin{eqnarray*} \vec{\hskip1pt r}: \xi\in \mathbb S^n \mapsto r(\xi)\xi \in \mathcal M. \end{eqnarray*} Let $\mathscr A=\mathscr A_{\mathcal M}$ be a set-valued mapping given by \begin{eqnarray*} \mathscr A(\omega) = \cup_{\xi\in \omega}\{\nu(\vec{\hskip1pt r}(\xi)) \}, \end{eqnarray*} where $\nu$ is the Gauss map of $\mathcal M$. Aleksandrov raised the question of whether, given a finite nonnegative Borel measure $\mu$ on $\mathbb S^n$, there exists a convex hypersurface $\mathcal M$ such that \begin{equation}\label{Aleks problem} |\mathscr A(\omega) |= \mu(\omega)\ \ \forall\ \text{Borel sets}\ \omega\subset \mathbb S^n. \end{equation} The left hand side of \eqref{Aleks problem} defines a measure on $\mathbb S^n$, which is called the integral Gauss curvature of $\mathcal M$. The existence and uniqueness (up to a constant multiple) of the weak solution to this problem were obtained by Aleksandrov \cite{Aleks42}, assuming that $\mu$ is nonnegative, $\mu(\mathbb S^n) = o_n$ and $\mu (\mathbb S^n \setminus \omega) > |\omega^*|$ for any convex $\omega \subset \mathbb S^n$. These conditions are equivalent to \eqref{Aleks f cdt1} and \eqref{Aleks f cdt2}, if $\mu$ has a density function $f$.
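To illustrate condition \eqref{Aleks f cdt2}, take $\omega$ to be a geodesic ball $B_\theta(e)\subset\mathbb S^n$ of radius $\theta<\pi/2$ about a unit vector $e$. One checks directly from the definition that the dual set is the geodesic ball $\omega^*=B_{\pi/2-\theta}(-e)$, so that \eqref{Aleks f cdt2} reads \begin{eqnarray*} \int_{B_\theta(e)} f \;<\; |\mathbb S^n| - |B_{\pi/2-\theta}(-e)|. \end{eqnarray*} In particular, taking $\omega$ to be a closed hemisphere (for which $|\omega^*|=0$), conditions \eqref{Aleks f cdt1} and \eqref{Aleks f cdt2} together force $f$ to have positive mass outside every closed hemisphere.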
If $\mathcal M$ is a hypersurface with prescribed integral Gauss curvature $\mu$, then its polar dual \begin{equation}\label{polar dual} \mathcal M^*=\partial \{z\in \mathbb R^{n+1}:\;z\cdot y \le 1\;\;\forall \ y\in \mathcal M\}\ \end{equation} solves \eqref{soliton sol} for $\alpha=n+1$. For general $\alpha$, the limiting hypersurface of the flow \eqref{flow} is related to the dual Minkowski problem introduced most recently in \cite{HLYZ16}. Given a real number $q$ and a finite Borel measure $\mu$ on the sphere $\mathbb S^n$, the authors asked if there exists a convex body $\Omega$ with the origin in its interior such that its $q$-th dual curvature measure \begin{equation}\label{dual Mink prob} \widetilde C_q(\Omega,\cdot) = \mu(\cdot). \end{equation} Denote by $\mathcal M$ the boundary of $\Omega$, and by $\mathscr A^* = \mathscr A^*_\mathcal M$ the ``inverse'' of $\mathscr A_\mathcal M$, namely \begin{eqnarray*} \mathscr A^* (\omega) =\{\xi \in \mathbb S^n\ : \ \nu(\vec{\hskip1pt r}(\xi))\in\omega\} . \end{eqnarray*} The $q$-th dual curvature measure is defined by \begin{equation}\label{dual curvature meas} \widetilde C_q(\Omega,\omega) = \int_{\mathscr A^*(\omega)} r^{q}(\xi)d\xi. \end{equation} Hence the dual Minkowski problem \eqref{dual Mink prob} is equivalent to the equation \begin{equation}\label{s2 a1} r^q |\text{Jac} \mathscr A^* | = f \ \ \text{on} \ \mathbb S^n, \end{equation} provided $\mu$ has a density function $f$. Here $|\text{Jac} \mathscr A^*|$ denotes the determinant of the Jacobian of the mapping $x \mapsto \xi = \mathscr A_\mathcal M^*(x)$. By \eqref{s2 t9} below, we see that the dual Minkowski problem is equivalent to the solvability of the equation \eqref{soliton sol} with $\alpha=n+1-q$. Noting that \begin{equation} {\begin{split} \mathscr A_{\mathcal M}^*(\omega) &= \{\xi \in \mathbb S^n\ : \ \nu(\vec{\hskip1pt r}(\xi))\in\omega\} \\ &= \{\nu^*(\vec{\hskip1pt r}^*(x)): \ x\in \omega\} \\ &= \mathscr A_{\mathcal M^*} (\omega), \end{split}} \end{equation} where $\nu$ and $\nu^*$ denote the unit outer normals of $\mathcal M$ and $\mathcal M^*$ respectively, we also see that if $\mathcal M^*$ solves the Aleksandrov problem \eqref{Aleks problem}, then $\mathcal M$ solves the dual Minkowski problem \eqref{dual Mink prob} for $q=0$, and so is a solution to \eqref{soliton sol} for $\alpha=n+1$. When $\alpha<n+1$, we consider the behaviour of origin-symmetric hypersurfaces under the flow \eqref{flow}, assuming that $f$ is an even function, namely $f(x) = f(-x)$ for all $x\in \mathbb S^n$. In this case the solution $\mathcal M_t$ shrinks to a point in finite time, namely as $t\to T$ for some $T<\infty$. Our next theorem shows that the normalised solution converges smoothly if $f$ is smooth and positive. \begin{theorem}\label{thmDa} Let $\mathcal M_0$ be a smooth, closed, uniformly convex, and origin-symmetric hypersurface in $\mathbb R^{n+1}$. Let $ \alpha<n+1$. If $f$ is a smooth, positive, even function on $\mathbb S^n$, then the flow \eqref{flow} has a unique smooth solution $\mathcal M_t$. After normalisation, the rescaled hypersurfaces $\widetilde \mathcal M_t$ converge smoothly to a smooth solution of \eqref{soliton sol}, which is a minimiser of the functional \eqref{functional}. Moreover, if $f\equiv 1$ and $0\le \alpha<n+1$, then $\widetilde \mathcal M_t$ converge smoothly to a sphere.
\end{theorem} In the proof of Theorem \ref{thmDa}, we will choose the constant $\phi_0$ in the rescaling \eqref{scaling factor} by \begin{equation}\label{thmDa phi0} \phi_0=\Big(\int_{\mathbb S^n} r_0^q(\xi)d\xi \Big \slash \int_{\mathbb S^n} f(x) dx \Big)^{\frac1q}, \end{equation} where $r_0$ is the radial function of the initial convex hypersurface $\mathcal M_0$. This choice is such that the functional $\mathcal I_q$ in \eqref{sym Lq-r} is a constant. This property is crucial for the uniform positive upper and lower bounds for the support function in the normalised flow \eqref{normalised flow}. Without the symmetry assumption, Theorem \ref{thmDa} is not true. In fact, when $\alpha <n+1$, we find that the hypersurface evolving by \eqref{flow} may reach the origin in finite time, before it shrinks to a point. Therefore the smooth convergence does not hold in general. \begin{theorem} \label{thmD} Suppose $n\ge 1$ and $\alpha < n+1$. There exists a smooth, closed, uniformly convex hypersurface $\mathcal M_0$, such that under the flow \eqref{flow}, \begin{equation}\label{unbounded ratio} \mathcal R (X(\cdot,t)) := \frac{\max_{\mathbb S^n} r(\cdot,t)}{\min_{\mathbb S^n} r(\cdot,t)} \to \infty\;\;\text{as}\;\;t\to T \end{equation} for some $T>0$. \end{theorem} Equation \eqref{soliton sol} can be written, in terms of the support function $u$, as a Monge-Amp\`ere equation on the sphere, \begin{equation}\label{soliton sol-u} \det(\nabla^2 u+uI)=\frac {f(x)}{u(x)} (|\nabla u|^2+u^2)^{\alpha/2}\ \ \text{on}\ \mathbb S^n. \end{equation} By Theorems \ref{thmA}-\ref{thmDa}, we have the following existence results for equation \eqref{soliton sol-u}. \begin{theorem} \label{thmE} Let $f$ be a smooth and positive function on the sphere $\mathbb S^n$. \newline (i) If $\alpha>n+1$, there is a unique smooth, uniformly convex solution to \eqref {soliton sol-u}. \newline (ii) If $\alpha=n+1$ and $f$ satisfies \eqref {Aleks f cdt1}, \eqref{Aleks f cdt2}, there is a smooth, uniformly convex solution to \eqref {soliton sol-u}. The solution is unique up to dilation. \newline (iii) If $\alpha<n+1$ and $f$ is even, there is an origin-symmetric solution to \eqref {soliton sol-u}. \newline (iv) If $f\equiv 1$, then the solution must be a sphere when $\alpha\ge n+1$, and the origin-symmetric solution must be a sphere when $0\le \alpha<n+1$. \end{theorem} In case (ii) of Theorem \ref{thmE}, the existence and uniqueness (up to dilation) of the solution were proved in \cite{Aleks42}, and the regularity of the solution was obtained in \cite{Olik83,Pog73}. In this paper we use the generalised solution to the Aleksandrov problem as a barrier to establish the uniform estimate for the corresponding Gauss curvature flow. Our main concern in this paper is the smooth convergence of the flow, which also provides an alternative proof for the regularity of the solution. It is interesting to compare equation \eqref{soliton sol-u} with the $L_p$-Minkowski problem \begin{equation}\label{p-Minkow} \det(\nabla^2 u+uI)=\frac {f(x)}{u^{1-p}(x)} \ \ \text{on}\ \mathbb S^n. \end{equation} For equation \eqref{p-Minkow}, there is a solution if $p>-n-1$ \cite {ChWang06} and no solution in general if $p\le -n-1$. In Theorem \ref{thmE} we proved the existence of solutions to \eqref{soliton sol-u} for all $\alpha\in\mathbb R^1$, which appears stronger. This is due to the associated functional \eqref{functional}, in which the first integral $\int_{\mathbb S^n} f\log u$ is bounded for our solution.
This property, together with \eqref{sym Lq-r}, enables us to establish a uniform bound for the support function $u$ (Lemma \ref{s3 lem1c}). This paper is organised as follows. In Section 2 we collect some properties of convex hypersurfaces, and show that the flow \eqref{flow} can be reduced to a scalar parabolic equation of Monge-Amp\`ere type, via the support function or the radial function. We will also show in Section 2 that \eqref{normalised flow} is a descending gradient flow of the functional \eqref{functional}. In Section 3 we establish the uniform positive upper and lower bounds for the support function of the normalised flow \eqref{normalised flow}. The uniform positive upper and lower bounds for the principal curvatures are proved in Section 4. The a priori estimates ensure the longtime existence and the convergence of the normalised flow. The proofs of Theorems \ref{thmA}-\ref{thmDa} will be presented in Section 5. Finally in Section 6 we prove Theorem \ref{thmD}. \vspace{2mm} \noindent{\bf Acknowledgement} The authors would like to thank the referees for careful reading of the manuscript and their helpful comments. In particular, we would like to thank a referee who provided the proof for the uniqueness in the case $f\equiv 1$ and $0\le \alpha<n+1$ in Theorem 1.4 and Theorem 1.6 (iv). He/She also pointed out the monotonicity of the functional $\mathcal J_{n+1}$ in Lemma \ref{lem2.2}. \section{Preliminaries} Let us first recall some basic properties of convex hypersurfaces. Let $\mathcal M$ be a smooth, closed, uniformly convex hypersurface in $\mathbb R^{n+1}$. Assume that $\mathcal M$ is parametrized by the inverse Gauss map $X:\;\mathbb S^n \to \mathcal M$. The support function $u:\mathbb S^n \to \mathbb R$ of $\mathcal M$ is defined by \begin{equation}\label{s2 t1} u(x) = \sup\{\langle x,y\rangle:\;y\in \mathcal M\}. \end{equation} The supremum is attained at a point $y$ such that $x$ is the outer normal of $\mathcal M$ at $y$. It is easy to check that \begin{equation}\label{s2 t2} y = u(x) x+\nabla u(x), \end{equation} where $\nabla$ is the covariant derivative with respect to the standard metric $e_{ij}$ of the sphere $\mathbb S^n$. Hence \begin{equation}\label{s2 r} r = |y|= \sqrt{u^2 + |\nabla u|^2}. \end{equation} The second fundamental form of $\mathcal M$ is given by (see, e.g., \cite{And00,Urb91}) \begin{equation}\label{s2 t3} h_{ij} =u_{ij} + ue_{ij}, \end{equation} where $u_{ij} = \nabla^2_{ij} u$ denotes the second order covariant derivative of $u$ with respect to the spherical metric $e_{ij}$. By Weingarten's formula, \begin{equation}\label{s2 t4} e_{ij} =\big \langle\frac{\partial \nu}{\partial x_i}, \frac{\partial \nu}{\partial x_j}\big\rangle = h_{ik}g^{kl}h_{jl}, \end{equation} where $g_{ij}$ is the metric of $\mathcal M$ and $g^{ij}$ its inverse. It follows from \eqref{s2 t3} and \eqref{s2 t4} that the principal radii of curvature of $\mathcal M$, under a smooth local orthonormal frame on $\mathbb S^n$, are the eigenvalues of the matrix \begin{equation}\label{principal radii} b_{ij} = u_{ij} + u\delta_{ij}. \end{equation} In particular the Gauss curvature is given by \begin{equation}\label{s2 Gauss} K = 1/\det(u_{ij} + u \delta_{ij}) = S_n^{-1}(u_{ij} + u \delta_{ij}), \end{equation} where $$S_k= \sum_{i_1<\cdots<i_k} \lambda_{i_1}\cdots\lambda_{i_k}$$ denotes the $k$-th elementary symmetric polynomial. Let $X(\cdot,t)$ be a smooth solution to the normalised flow \eqref{normalised flow} and let $u(\cdot,t)$ be its support function.
From the above discussion we see that the flow \eqref{normalised flow} can be reduced to the initial value problem for the support function $u$: \begin{equation}\label{normalised flow spt} {\left\{ \begin{split} \frac{\partial u}{\partial t}(x,t) &= - f(x) r^\alpha S_n^{-1}(u_{ij} + u \delta_{ij})(x,t) + u(x,t) \;\;\text{on}\;\mathbb S^n\times[0,\infty), \\ u(\cdot,0) &= \widetilde u_0 := \phi^{-1}_0 u_0, \end{split}\right.} \end{equation} where $r=\sqrt{u^2+|\nabla u|^2}(x,t)$ as in \eqref{s2 r}, $u_0$ is the support function of the initial hypersurface $\mathcal M_0$, and $\phi_0$ is the dilation constant in \eqref{normalised flow}. As $\mathcal M$ encloses the origin, it can be parametrized via the radial function $r: \mathbb S^n \to \mathbb R_+$, $$\mathcal M = \{r(\xi)\xi:\; \xi\in \mathbb S^n\}.$$ The following formulae are well-known, see e.g. \cite{Gerh14}, \begin{equation}\label{s2 t5} \nu = \frac{r \xi - \nabla r}{\sqrt{r^2+|\nabla r|^2}}, \end{equation} \begin{equation}\label{s2 t6} {\begin{split} g_{ij} &= r^2 e_{ij} + r_ir_j,\\ h_{ij} &= \frac{r^2 e_{ij} + 2r_ir_j - rr_{ij} }{\sqrt{r^2+|\nabla r|^2}}. \end{split}} \end{equation} Set \begin{equation}\label{def v} v = \frac{r}{u} = \sqrt{1+|\nabla \log r|^2}, \end{equation} where the last equality follows by taking the inner product of both sides of \eqref{s2 t5} with $\xi$. The normalised flow \eqref{normalised flow} can also be described by the following scalar equation for $r(\cdot,t)$, \begin{equation}\label{normalised flow rad} {\left\{ \begin{split} \frac{\partial r}{\partial t}(\xi,t) &= - v f r^\alpha K(\xi,t) +r(\xi,t) \;\;\text{on}\;\mathbb S^n\times[0,\infty), \\ r(\cdot,0) &= \widetilde r_0 := \phi^{-1}_0 r_0, \end{split}\right.} \end{equation} where $r_0$ is the radial function of $\mathcal M_0$, and $K(\xi,t)$ denotes the Gauss curvature at $r(\xi,t)\xi \in \mathcal M_t$. Note that in \eqref{normalised flow rad} $f$ takes its value at $\nu = \nu(\xi,t)$ given by \eqref{s2 t5}. By \eqref{s2 t6} we have, under a local orthonormal frame on $\mathbb S^n$, \begin{equation}\label{s2 t7} K = \frac{\det h_{ij}}{\det g_{ij}} = v^{-n-2} r^{-3n} \det (r^2\delta_{ij} + 2r_ir_j-rr_{ij}). \end{equation} Given any $\omega \subset \mathbb S^n$, let $\mathcal C = \mathcal C_{\mathcal M,\omega}$ be the ``cone-like'' region with the vertex at the origin and the base $\nu^{-1}(\omega) \subset \mathcal M$, namely \begin{eqnarray*} \mathcal C:=\{z\in \mathbb R^{n+1}:\; z = \lambda \nu^{-1}(x),\;\lambda\in [0,1],x\in \omega\}. \end{eqnarray*} It is well-known that the volume element of $\mathcal C$ can be expressed by \begin{equation}\label{vol formula} d\text{Vol}(\mathcal C) =\frac{1}{n+1} \frac{u(x)}{K(p)} dx = \frac{1}{n+1} r^{n+1}(\xi) d\xi, \end{equation} where $p = \nu^{-1}(x) \in \mathcal M$, and $\xi$ and $x$ are associated by \begin{equation}\label{s2 t8} r(\xi)\xi = u(x) x + \nabla u(x), \end{equation} namely $p=\nu^{-1}(x) = \vec{\hskip1pt r}(\xi)$. By the second equality in \eqref{vol formula}, we find that the determinant of the Jacobian of the mapping $x \mapsto \xi = \mathscr A_\mathcal M^*(x)$ is given by \begin{equation}\label{s2 t9} |\text{Jac} \mathscr A^* |= \left| \frac{d\xi}{d x} \right| = \frac{u(x)}{ r^{n+1}(\xi)K(p)}. \end{equation} \begin{lemma}\label{descending flow} The functional \eqref{functional} is non-increasing along the normalised flow \eqref{normalised flow}. Namely $\frac{d}{dt} \mathcal J_\alpha(\mathcal M_t) \le 0$, and the equality holds if and only if $\mathcal M_t$ satisfies the elliptic equation \eqref{soliton sol}.
\end{lemma} \begin{proof} For $\alpha \not = n+1$, it is easy to see \begin{equation}\label{s2 lem1 t1} \frac{d}{dt} \mathcal J_\alpha(\mathcal M_t) = \int_{\mathbb S^n} f(x)\frac{u_t}{u} dx -\int_{\mathbb S^n} \frac {r_t}{r^{1-q}} d\xi. \end{equation} Let $x=x(\xi,t)=\nu(r(\xi,t)\xi)$. By \eqref{s2 t8} we have $$\log r(\xi,t) = \log u(x,t) -\log (x\cdot\xi).$$ Differentiate the above identity and denote $\dot{x} = \partial_t x(\xi,t)$. We obtain \begin{eqnarray}\label{s2 lem1 t2} \frac{r_t}{r}(\xi,t) &=& \frac{u_t + \nabla u \cdot \dot{x}}{u} - \frac{\dot{x} \cdot \xi}{x \cdot \xi} \\ &=& \frac{u_t +( \nabla u-r\xi) \cdot \dot{x} }{u} \notag \\ &=& \frac{u_t}{u} (x,t) \notag. \end{eqnarray} This identity can also be seen from \eqref{normalised flow spt}, \eqref{def v} and \eqref{normalised flow rad}. Plugging \eqref{s2 lem1 t2} into \eqref{s2 lem1 t1} and then using \eqref{s2 t9} to change the variables, we obtain \begin{eqnarray*} \frac{d}{dt} \mathcal J_\alpha(\mathcal M_t) &=& \int_{\mathbb S^n} \frac{u_t}{u}\big(f- \frac{u}{r^{n+1-q} K}\big) dx \\ &=&- \int_{\mathbb S^n} \frac{\big(f r^\alpha K- u\big)^2}{ur^\alpha K} dx \\ &\le& 0. \end{eqnarray*} Clearly $\frac{d}{dt} \mathcal J_\alpha (\mathcal M_t) =0$ if and only if $$f(x) r^\alpha(\xi,t) K(p)-u(x,t) =0.$$ Namely $\mathcal M_t$ satisfies \eqref{soliton sol}. When $\alpha = n+1$, we have by differentiating \eqref{functional} \begin{eqnarray*} \frac{d}{dt} \mathcal J_{n+1}(\mathcal M_t) = \int_{\mathbb S^n} f(x) \frac{u_t}{u} dx - \int_{\mathbb S^n} \frac{r_t}{r} d\xi. \end{eqnarray*} By \eqref{s2 t9} and \eqref{s2 lem1 t2} we get \begin{eqnarray*} \frac{d}{dt} \mathcal J_{n+1}(\mathcal M_t) &=& \int_{\mathbb S^n} \frac{u_t}{u} \big( f(x) - \Big | \frac{d\xi}{dx} \Big | \big )dx \\ &=& -\int_{\mathbb S^n} \frac{( f r^{n+1}K - u)^2}{ur^{n+1}K} dx \\ &\le& 0. \end{eqnarray*} This completes the proof. \end{proof} The next lemma shows that the functional $\mathcal J_{n+1}$ is also monotone along the Gauss curvature flow for origin-symmetric solutions. This lemma is of interest in itself, though it is not needed in the proof of our main theorems. \begin{lemma}\label{lem2.2} Let $\mathcal M_t$ be a family of smooth, closed, uniformly convex and origin-symmetric hypersurfaces which evolve under the normalised Gauss curvature flow \eqref{normalised flow} with $\alpha=0$ and $f\equiv 1$. Assume $\text{Vol}(\mathcal M_0)=\text{Vol}(B_1)$. Then $\frac{d}{dt}\mathcal J_{n+1}(\mathcal M_t)\le 0$, and the equality holds if and only if $\mathcal M_t$ is the unit sphere. \end{lemma} \begin{proof} Let $\Omega_t$ denote the convex body whose boundary is $\mathcal M_t$. Note that the functional $\mathcal J_{n+1}$ is unchanged under dilation. The volume $\text{Vol}(\Omega_t)$ is preserved under the normalised Gauss curvature flow \begin{equation}\label{0} \partial_t u = - K +u, \end{equation} where $u(\cdot,t)$ is the support function of $\mathcal M_t$. This can be easily seen from the following evolution equation \begin{eqnarray*} \frac{d}{dt}\text{Vol}(\Omega_t) &=& \frac{1}{n+1}\frac{d}{dt} \int_{\mathbb S^n}\frac{u}{K}dx \\ &=&\int_{\mathbb S^n} \frac{u_t}{K} dx \\ &=&(n+1)\big(-\text{Vol}(\Omega_0)+\text{Vol}(\Omega_t)\big). \end{eqnarray*} Hence \begin{equation}\label{1} \int_{\mathbb S^n} \frac{u}{K} dx =|\mathbb S^n|, \ \ \ \forall \ t\ge0.
\end{equation} By the H\"older inequality $$\Big(\int_{\mathbb S^n} dx\Big)^2 \le\Big( \int_{\mathbb S^n} \frac{K}{u}dx\Big) \Big(\int_{\mathbb S^n} \frac{u}{K}dx\Big).$$ This together with \eqref{1} shows \begin{equation}\label{2} \int_{\mathbb S^n}\frac{K}{u}dx \ge \int_{\mathbb S^n} dx, \ \ \ \forall \ t\ge 0 . \end{equation} Recall that Blaschke-Santol\'o inequality of origin-symmetric convex body gives \begin{equation}\label{sb inequality} \text{Vol}(\Omega_t)\text{Vol}(\Omega^*_t)\le \text{Vol}^2(B_1), \end{equation} where $\Omega^*$ is the polar dual of $\Omega$. Therefore $$\Big(\int_{\mathbb S^n} dx\Big)^2\ge \int_{\mathbb S^n}\frac{u}{K}dx\int_{\mathbb S^n} (r^*)^{n+1}d\xi^* = \int_{\mathbb S^n}\frac{u}{K}dx\int_{\mathbb S^n} \frac{1}{u^{n+1}}dx.$$ This together with \eqref{1} implies that \begin{equation}\label{3} \int_{\mathbb S^n} \frac{1}{r^{n+1}}dx\le \int_{\mathbb S^n} \frac{1}{u^{n+1}}dx \le \int_{\mathbb S^n} dx. \end{equation} Combining \eqref{2} and \eqref{3}, we conclude that, under the flow \eqref{0}, \begin{eqnarray*} \frac{d}{dt} \mathcal J_{n+1} &=& \int_{\mathbb S^n} \frac{u_t}{u} dx - \int_{\mathbb S^n} \frac{r_t}{r} d\xi \\ &=& -\int_{\mathbb S^n} \frac{K}{u} dx + \int_{\mathbb S^n} \frac{K}{u} \Big|\frac{d\xi}{dx}\Big| dx \\ &=&-\int_{\mathbb S^n} \frac{K}{u} dx + \int_{\mathbb S^n} \frac{1}{r^{n+1}} dx\\ &\le&0. \end{eqnarray*} The last equality holds if and only if the equality in \eqref{2} and in \eqref{3} holds, by \eqref{sb inequality} it occurs when $\mathcal M_t=\mathbb S^n$ only. \end{proof} \section{A priori estimates I} In this section we establish the uniform positive upper and lower bounds for the support function of the normalised flow \eqref{normalised flow}. \begin{lemma}\label{s3 lem1} Let $u(\cdot,t)$, $t\in (0,T]$, be a smooth, uniformly convex solution to \eqref{normalised flow spt}. If $\alpha > n+1$, then there is a positive constant $C$ depending only on $\alpha$, and the lower and upper bounds of $f$ and $u(\cdot,0)$ such that \begin{equation}\label{s3 lem1 1} 1/C \le u(\cdot,t)\le C\ \ \ \forall\ t\in (0,T]. \end{equation} If $\alpha = n+1$ and $f \equiv 1$, then \begin{equation}\label{s3 lem1 2} \min_{\mathbb S^n} u(\cdot,0) \le u(\cdot,t) \le \max_{\mathbb S^n} u(\cdot,0)\ \ \ \forall\ t\in (0,T]. \end{equation} \end{lemma} \begin{proof} Let $u_{\min}(t) = \min_{x\in\mathbb S^n} u(x,t)$. By \eqref{normalised flow spt} we have \begin{equation}\label{s3 lem1 t1} \frac{d}{dt} u_{\min} \ge -(f u^{-q}_{\min}-1)u_{\min}. \end{equation} where $q = n+1-\alpha \le 0$. If $\alpha >n+1$, we may assume that $ u_{\min}(t) < (\max_{\mathbb S^n} f)^{\frac1q} $, otherwise we are through. Hence $\frac{d}{dt} u_{\min} \ge 0$. This implies $$u(\cdot,t) \ge \min \big\{(\max_{\mathbb S^n} f)^{\frac1q},\min_{\mathbb S^n} u(\cdot,0)\big\}.$$ Similarly we have $$u(\cdot,t) \le \max\big\{(\min_{\mathbb S^n}f)^{\frac1q}, \max_{\mathbb S^n} u(\cdot,0)\big\}.$$ This proves \eqref{s3 lem1 1} If $\alpha = n+1$ and $f \equiv 1$, then \eqref{s3 lem1 t1} gives $\frac{d}{dt} u_{\min} \ge 0$. Similarly we have $\frac{d}{dt} u_{\max} \le 0$. Therefore \eqref{s3 lem1 2} follows. \end{proof} When $\alpha = n+1$, for general positive function $f$ which satisfies \eqref{Aleks f cdt1} and \eqref{Aleks f cdt2}, we can use a barrier argument to derive the $L^\infty$-norm estimates. \begin{lemma}\label{s3 lem1b} Let $u$ be as in Lemma \ref{s3 lem1}. 
If $\alpha = n+1$, and $f$ satisfies \eqref{Aleks f cdt1} and \eqref{Aleks f cdt2}, then there is a positive constant $C$ depending only on $\min_{\mathbb S^n}u(\cdot,0)$, $\max_{\mathbb S^n}u(\cdot,0)$ and $f$ such that \begin{equation}\label{s3 lem1b est} 1/C\le u(\cdot,t) \le C \ \ \forall \ t\in (0,T]. \end{equation} \end{lemma} Before proving Lemma \ref{s3 lem1b}, we recall the existence of generalised solutions to Aleksandrov's problem, whose proof consists of two steps. The first step is to prove the existence of polyhedra $\mathcal N^*_k$ whose integral Gauss curvatures are discrete measures converging weakly to $f$. Noting that the integral Gauss curvature is invariant under dilation, one may assume that the diameter of $\mathcal N^*_k$ is equal to 1. Hence by convexity $\mathcal N^*_k$ converges to a limit $\mathcal N^*$. In the second step one uses condition \eqref{Aleks f cdt2} to show that $\mathcal N^*$ has nonempty interior and the origin is an interior point. The proof of the second step is also elementary; see page 520, lines 17-27 of \cite{Pog73}. \begin{proof} [Proof of Lemma \ref{s3 lem1b}] Let $\mathcal N$ be the polar dual of the generalised solution $\mathcal N^*$, defined in \eqref{polar dual}. We use $\mathcal N$ as a barrier to prove \eqref{s3 lem1b est}. Let $\mathcal M_t$ be a smooth convex solution to the normalised flow \eqref{normalised flow}. Let $\mathcal N_0 = s_0 \mathcal N$ and $\mathcal N_1 = s_1 \mathcal N$, where the constants $s_1>s_0>0$ are chosen such that $\mathcal N_0$ is strictly contained in $\mathcal M_0$ and $\mathcal M_0$ is strictly contained in $\mathcal N_1$. Let $r_t, \rho_0, \rho_1$ be respectively the radial functions of $\mathcal M_t, \mathcal N_0, \mathcal N_1$. Note that for any constant $s>0$, $s\mathcal N$ is a stationary solution to \eqref{normalised flow} in the generalised sense. We claim that $\mathcal M_t$ is contained in $\mathcal N_1$ for all $t>0$. For if not, there exists a time $t_0>0$ such that $\sup_{\xi\in \mathbb S^n} r_{t_0}(\xi)/\rho_1(\xi)=1$. Denote $G=\mathcal M_{t_0}\cap\mathcal N_1$ ($G$ can be a point or a closed set). Since $\frac{\partial}{\partial t} r_t(\xi)$ is smooth in both $\xi$ and $t$, replacing $\rho_1$ by $(1+a)\rho_1$ for a small constant $a$, we may assume that the velocity of $\mathcal M_t$ is positive on $G\times \{t_0\}$, and so also in a neighbourhood of $G\times \{t_0\}$. Therefore there exist sufficiently small constants $\epsilon, \delta>0$, such that the velocity of $\mathcal M_t$ is greater than $\delta$ at $\xi r_{t_0}(\xi)\in \mathcal M_{t_0}$, for all $\xi\in\omega$, where $\omega=\{\xi \in\mathbb S^n:\ r_{t_0}(\xi)>(1-\epsilon) \rho_1(\xi)\}$. By equation \eqref{normalised flow rad}, this means that the Gauss curvature of $\mathcal M_{t_0}$ is strictly smaller than that of $\mathcal N_1$ for all $\xi\in\omega$. Applying the comparison principle for generalised solutions of the elliptic Monge-Amp\`ere equation to the functions $r_{t_0}$ and $(1-\epsilon) \rho_1$, we reach a contradiction. Similarly we can prove that $\mathcal N_0$ is contained in $\mathcal M_t$ for all $t>0$. \end{proof} For $\alpha <n+1$, we consider origin-symmetric hypersurfaces and give the following $L^\infty$-norm estimates. \begin{lemma}\label{s3 lem1c} Let $\mathcal M_t$, where $t\in (0,T]$, be an origin-symmetric, uniformly convex solution to the normalised flow \eqref{normalised flow}, and $u(\cdot,t)$ be its support function.
For $ \alpha<n+1$, there is a positive constant $C$ depending only on $\alpha$, $\mathcal M_0$ and $f$, such that \begin{equation}\label{s3 lem1c est} 1/C\le u(\cdot,t) \le C \ \ \forall \ t\in (0,T]. \end{equation} \end{lemma} \begin{proof} Let us denote by $\mathcal I_q(\mathcal M_t)$ the $L^q$ integral of the radial function $r(\xi,t)$, i.e., \begin{eqnarray*} \mathcal I_q(\mathcal M_t)= \int_{\mathbb S^n} r^q (\xi,t)d\xi, \end{eqnarray*} where $q=n+1-\alpha$. By \eqref{normalised flow rad}, we have \begin{eqnarray*} \frac{d}{dt} \mathcal I_q (\mathcal M_t) &=& q \int_{\mathbb S^n} r^{q-1} \Big(-\frac{r}{u} f(x) r^\alpha K +r \Big) d\xi \\ &=& -q \int_{\mathbb S^n} f(x) \frac{r^{n+1} K}{u} d\xi +q\int_{\mathbb S^n} r^q d\xi, \end{eqnarray*} where $f$ takes its value at $x=\nu(\xi,t)$ given by \eqref{s2 t5}. By the change of variables formula \eqref{s2 t9}, we obtain \begin{eqnarray*} \frac{d}{dt} \mathcal I_q(\mathcal M_t)= q \Big( -\int_{\mathbb S^n} f(x) dx + \mathcal I_q(\mathcal M_t) \Big). \end{eqnarray*} Solving this ODE, one sees \begin{equation} \mathcal I_q(\mathcal M_t)= e^{qt} \Big(\mathcal I_q(\mathcal M_0) - \int_{\mathbb S^n} f \Big) +\int_{\mathbb S^n} f. \end{equation} It follows that, by our choice of the rescaling factor $\phi_0$ in \eqref{thmDa phi0}, \begin{equation}\label{sym Lq-r} \mathcal I_q(\mathcal M_t) \equiv \int_{\mathbb S^n}f(x)dx, \ \ \forall \ t\in(0,T]. \end{equation} Let $r_{\min}(t) =\min_{\mathbb S^n} r(\cdot,t)$ and $r_{\max}(t)=\max_{\mathbb S^n} r(\cdot,t)$. By a rotation of coordinates we may assume that $r_{\max}(t) = r(e_1,t)$. Since $\Omega_t$ is origin-symmetric, the points $\pm r_{\max}(t) e_1\in \mathcal M_t$. Hence $$u(x,t) =\sup\{p\cdot x: \ p\in \mathcal M_t\}\ge r_{\max}(t) |x\cdot e_1|, \ \forall \ x\in \mathbb S^n.$$ Therefore \begin{eqnarray}\label{sym lem3 t1} \int_{\mathbb S^n} f(x) \log u(x,t) dx &\ge&\Big( \int_{\mathbb S^n} f(x)dx\Big) \log r_{\max}(t) +\int_{\mathbb S^n} f(x) \log |x\cdot e_1| dx \notag\\ &\ge& |\mathbb S^n|(\min_{\mathbb S^n} f ) \log r_{\max}(t) - C \max_{\mathbb S^n} f. \end{eqnarray} By Lemma \ref{descending flow} and \eqref{sym Lq-r}, we conclude \begin{eqnarray*} \mathcal J_{\alpha}(\mathcal M_0)\ge \mathcal J_{\alpha}(\mathcal M_t) =\int_{\mathbb S^n}f(x)\log u(x,t) dx -\frac1q \int_{\mathbb S^n} f. \end{eqnarray*} This together with \eqref{sym lem3 t1} implies \begin{equation}\label{sym ubd} r_{\max}(t) \le C_1 e^{C_2 \mathcal J_\alpha(\mathcal M_0)} \le C. \end{equation} This proves the upper bound in \eqref{s3 lem1c est}. Next we derive a positive lower bound for $u(\cdot,t)$. We divide the proof into two cases. Case (i), $q \in(0,n+1]$. By the H\"older inequality, $$\mathcal I_q (\mathcal M_t) \le \mathcal I_{n+1}^{\frac{q}{n+1}}(\mathcal M_t) |\mathbb S^n|^{\frac{\alpha}{n+1}}. $$ Hence \begin{equation}\label{sym lem2 t1} \frac{|\mathbb S^n|^{-\frac{\alpha}{q}}}{n+1}\mathcal I_q^{\frac{n+1}{q}}(\mathcal M_t) \le \frac{1}{n+1} \mathcal I_{n+1}(\mathcal M_t) = \text{Vol}(\Omega_t), \end{equation} where $\Omega_t$ denotes the convex body enclosed by $\mathcal M_t$. Assume, by a rotation if necessary, that $r(e_{n+1},t) = r_{\min}(t)$. Since $\Omega_t$ is origin-symmetric, we find that $\Omega_t$ is contained in the box $$Q_t=\{z\in \mathbb R^{n+1}: \ -r_{\max}(t) \le z_i\le r_{\max}(t) \ \text{for} \ 1\le i \le n, \ -r_{\min}(t)\le z_{n+1}\le r_{\min}(t) \}.$$ Therefore by \eqref{sym lem2 t1} \begin{eqnarray*} \frac{|\mathbb S^n|^{-\frac{\alpha}{q}}}{n+1}\mathcal I_q^{\frac{n+1}{q}}(\mathcal M_t) \le 2^{n+1}r^n_{\max}(t) r_{\min}(t).
\end{eqnarray*} By \eqref{sym Lq-r}, the left hand side of the above inequality is a positive constant. Using \eqref{sym ubd}, we get $\min_{\mathbb S^n} u (\cdot,t) = r_{\min}(t) \ge 1/C$. Case (ii), $q> n+1$. We have \begin{eqnarray*} \mathcal I_q(\mathcal M_t) &=& r_{\max}^q(t) \int_{\mathbb S^n} \Big(\frac{r(\xi,t)}{r_{\max}(t)}\Big)^q d\xi\\ &\le& r_{\max}^q(t) \int_{\mathbb S^n} \Big(\frac{r(\xi,t)}{r_{\max}(t)}\Big)^{n+1} d\xi \\ &=&(n+1) r_{\max}^{q-n-1}(t)\text{Vol}(\Omega_t) \\ &\le& C r^{q-1}_{\max}(t) r_{\min}(t). \end{eqnarray*} The lower bound of $r_{\min}(t)$ now follows from \eqref{sym Lq-r} and \eqref{sym ubd}. \end{proof} For convex hypersurfaces, the gradient estimate is a direct consequence of the $L^\infty$-norm estimate. \begin{lemma}\label{s3 lem2} Let $u(\cdot,t)$, $t\in(0,T]$, be a smooth, uniformly convex solution to \eqref{normalised flow spt}. Then we have the gradient estimate \begin{equation}\label {3.8} |\nabla u (\cdot,t)| \le \max_{\mathbb S^n\times (0,T]} u, \;\;\forall \; t\in (0,T]. \end{equation} \end{lemma} \begin{proof} This is due to convexity: the point $\nabla u(x,t)+u(x,t)x$ lies on $\mathcal M_t$, whence $|\nabla u| \le r \le \max_{\mathbb S^n\times (0,T]} u$. \end{proof} Similarly we have the estimates for the radial function $r$. \begin{lemma}\label{s3 lem3} Let $X(\cdot,t)$, $t\in (0,T]$, be a uniformly convex solution to \eqref{normalised flow}. Let $u$ and $r$ be its support function and radial function, respectively. Then \begin{equation}\label {3.9} \min_{\mathbb S^n\times(0,T]} u \le r(\cdot,t)\le\max_{\mathbb S^n\times(0,T]} u\ \ \forall \ t\in (0,T], \end{equation} and \begin{equation}\label {3.10} |\nabla r(\cdot,t)| \le C \ \ \forall \ t\in(0,T], \end{equation} where $C>0$ depends only on $\min_{\mathbb S^n\times(0,T]} u$ and $ \max_{\mathbb S^n\times(0,T]} u$. \end{lemma} \begin{proof} Estimates \eqref{3.9} follow from \eqref{s3 lem1b est} as one has $\min_{\mathbb S^n} u(\cdot,t) = \min_{\mathbb S^n} r(\cdot,t)$ and $\max_{\mathbb S^n} u(\cdot,t) = \max_{\mathbb S^n} r(\cdot,t)$. Estimate \eqref{3.10} follows from \eqref{3.9} because by \eqref{def v} we have $|\nabla r| \le \frac{r^2}{u}.$ \end{proof} \section{A priori estimates II} In this section we establish uniform positive upper and lower bounds for the principal curvatures for the normalised flow \eqref{normalised flow}. We point out that the curvature estimates in this section hold for any $\alpha\in\mathbb R^1$. We first derive an upper bound for the Gauss curvature $K(\cdot,t)$. \begin{lemma}\label{s3 lem4} Let $X(\cdot, t)$ be a uniformly convex solution to the normalised flow \eqref{normalised flow} which encloses the origin for $t\in (0,T]$. Then there is a positive constant $C$ depending only on $\alpha$, $ f $, $\min_{\mathbb S^n\times(0,T]} u$ and $ \max_{\mathbb S^n\times(0,T]} u$, such that \begin{equation}\label{ubd-K} K(\cdot,t) \le C,\;\;\forall \; t\in (0,T].
\end{equation} \end{lemma} \begin{proof} Consider the auxiliary function $$Q = \frac{-u_t}{u - \epsilon_0} = \frac{fr^\alpha K - u}{u-\epsilon_0},$$ where $$\epsilon_0 = \frac12 \min_{x\in \mathbb S^n,\;t\in (0,T]} u (x,t) >0.$$ At the point where $Q$ attains its spatial maximum, we have \begin{equation}\label{s3 lem4 t1} 0 = \nabla_i Q = \frac{-u_{ti}}{u-\epsilon_0} +\frac{u_t u_i}{(u-\epsilon_0)^2}, \end{equation} and \begin{eqnarray}\label{s3 lem4 t2} 0 \ge \nabla^2_{ij} Q &=& \frac{-u_{tij}}{u-\epsilon_0} +\frac{u_{ti} u_j +u_{tj} u_i +u_t u_{ij}}{(u-\epsilon_0)^2} -\frac{2u_t u_i u_j}{(u-\epsilon_0)^3} \notag \\ &=& \frac{-u_{tij}}{u-\epsilon_0} +\frac{u_t u_{ij}}{(u-\epsilon_0)^2} , \end{eqnarray} where \eqref{s3 lem4 t1} was used in the second equality above. The first inequality in \eqref{s3 lem4 t2} should be understood in the sense of negative-semidefinite matrices. By \eqref{s3 lem4 t2} and \eqref{principal radii} we infer that \begin{equation}\label{s3 lem4 t3} -u_{tij}-u_t\delta_{ij} \le (b_{ij} - \epsilon_0\delta_{ij})Q. \end{equation} Using the equation \eqref{normalised flow spt}, we then have \begin{eqnarray}\label{s3 lem4 t4} Q_t &=& \frac{-u_{tt}}{u-\epsilon_0} + Q^2 \notag \\ &=& \frac{fr^\alpha S_n^{-2}}{u-\epsilon_0}S_n^{ij}(-u_{tij}-u_t\delta_{ij}) + \frac{\alpha f r^{\alpha-1}}{(u-\epsilon_0)S_n} r_t + Q + Q^2 \notag \\ &\le& \frac{fr^\alpha K}{u-\epsilon_0}(n-\epsilon_0 H)Q + \frac{\alpha f r^{\alpha-1}K}{u-\epsilon_0} r_t + Q + Q^2, \end{eqnarray} where $H$ denotes the mean curvature of $X(\cdot,t)$. By \eqref{s2 r} and \eqref{s3 lem4 t1}, \begin{equation}\label{s3 lem4 t5} r_t = \frac{uu_t+\sum u_k u_{kt}}{r} = \frac{\epsilon_0 u -r^2}{r} Q. \end{equation} Without loss of generality we may assume that $Q \gg 1$, since otherwise \eqref{ubd-K} already holds; note that $K$ and $Q$ are then comparable. Plugging \eqref{s3 lem4 t5} into \eqref{s3 lem4 t4} and noticing that $H \ge n K^{\frac{1}{n}}$, we obtain \begin{eqnarray*} Q_t \le C_0 Q^2\big(C_1 - \epsilon_0Q^{\frac1n}\big), \end{eqnarray*} for some $C_0,C_1$ only depending on $\alpha,f$ and the $L^\infty$-norm of $u$. From this ODE inequality we infer that $Q\le C$ for some $C>0$ depending on $Q(0)$, $C_1$ and $\epsilon_0$. The a priori bound \eqref{ubd-K} then follows. \end{proof} Next we prove that the principal curvatures of $\mathcal M_t$ are bounded from above and below by positive constants. To obtain the positive lower bound for the principal curvatures of $\mathcal M_t$, we will study an expanding flow by Gauss curvature for the dual hypersurface of $\mathcal M_t$. This technique was previously used in \cite{BIS,IS13, Ivak15, Ivak16}. Expanding flows by Gauss curvature have been studied in \cite{Gerh90, Gerh14, Sch06, Urb90,Urb91}. Our estimates are also inspired by these works. \begin{lemma}\label{s3 lem5} Let $X(\cdot, t)$ be the solution of the normalised flow \eqref{normalised flow} for $t\in (0,T]$. Then there is a positive constant $C$ depending only on $\alpha$, $f$, $\min_{\mathbb S^n\times(0,T]} u$ and $ \max_{\mathbb S^n\times(0,T]} u$, such that the principal curvatures of $X(\cdot,t)$ are bounded from above and below: \begin{equation}\label{bd-kappa} C^{-1} \le \kappa_i(\cdot,t) \le C,\;\;\forall \; t\in(0,T], \ \text{and} \ i =1,\ldots,n. \end{equation} \end{lemma} \begin{proof} To prove the lower bound in \eqref{bd-kappa}, we employ the dual flow of \eqref{normalised flow}, and establish an upper bound on the principal curvatures of the dual flow. This, together with Lemma \ref{s3 lem4}, also implies the upper bound in \eqref{bd-kappa}.
We denote by $\mathcal M_t^*$ the polar set of $\mathcal M_t = X(\mathbb S^n,t)$, see \eqref{polar dual} for the definition of the polar set. It is well-known that \begin{equation}\label{s3 e1} r(\xi,t)= \frac{1}{u^*(\xi,t)}, \end{equation} where $u^*(\cdot,t)$ denotes the support function of $\mathcal M_t^*$. Hence by \eqref{s2 t7}, we obtain the following relation \begin{equation}\label{s3 e2} \frac{u^{n+2}(x,t)(u^*(\xi,t))^{n+2}}{K(p)K^*(p^*)} = 1, \end{equation} where $p\in \mathcal M_t$, $p^*\in \mathcal M^*_t$ are the two points satisfying $p\cdot p^* = 1$, and $x,\xi$ are respectively the unit outer normals of $\mathcal M_t$ and $\mathcal M^*_t$ at $p$ and $p^*$. Therefore by equation \eqref{normalised flow rad} we obtain the equation for $u^*$, \begin{equation}\label{s3 e3} \partial_t u^*(\xi,t) = \frac{(u^*(\xi,t))^{n+3-\alpha}f}{(r^*)^{n+1}K^*} - u^*(\xi,t), \ \xi\in \mathbb S^n, \ t\in (0,T], \end{equation} where $$K^* = S_n^{-1}(\nabla^2 u^* + u^* I)(\xi,t)$$ is the Gauss curvature of $\mathcal M_t^*$ at the point $p^*=\nabla u^*(\xi,t) + u^*(\xi,t) \xi$, and $$r^* = |p^*|=\sqrt{|\nabla u^*|^2+(u^*)^2}(\xi,t)$$ is the distance from $p^*$ to the origin. Note that $f$ takes its value at $$x = p^*/|p^*| =\frac{\nabla u^* + u^* \xi}{\sqrt{|\nabla u^*|^2+(u^*)^2}} \in \mathbb S^n.$$ By \eqref{s3 e1}, $1/C\le u^* \le C$ and $|\nabla u^*| \le C$ for some $C$ only depending on $\max_{\mathbb S^n\times(0,T]}u$, $\min_{\mathbb S^n\times(0,T]}u$. Let $b_{ij}^* = u^*_{ij} +u^* \delta_{ij}$, and $h_*^{ij}$ be the inverse matrix of $b^*_{ij}$. As discussed in Section 2, the eigenvalues of $b^*_{ij}$ and $h_*^{ij}$ are respectively the principal radii and principal curvatures of $\mathcal M^*_t$. Consider the function \begin{equation}\label{s3 e4} w= w (\xi,t,\tau)=\log h_*^{\tau\tau} -\beta \log u^* + \frac{A}{2}(r^*)^2, \end{equation} where $\tau$ is a unit vector in the tangential space of $\mathbb S^n$, while $\beta$ and $A=A(\beta)$ are large constants to be specified later on. Assume $w$ attains its maximum at $(\xi_0,t_0)$, along the direction $\tau =e_1$. By a rotation, we also assume $h_*^{ij}$ and $b^*_{ij}$ are diagonal at this point. It is straightforward to see that, at the point where $w$ attains its maximum, \begin{eqnarray}\label{s3 e5} 0\le \partial_t w &=&b^*_{11} \partial_t h_*^{11} -\beta\frac{u^*_t}{u^*} + Ar^* r^*_t \notag \\ &=& -h_*^{11}\partial_t b^*_{11} -\beta\frac{u^*_t}{u^*} + A r^* r^*_t, \end{eqnarray} \begin{eqnarray}\label{s3 e6} 0 &=& \nabla_i w = -h_*^{11}\nabla_ib^*_{11} -\beta\frac{u^*_i}{u^*} +Ar^* r^*_{i} \notag \\ &=& - h_*^{11} u^*_{i11} - h_*^{11}u^*_1\delta_{1i} -\beta\frac{u^*_i}{u^*} +Ar^* r^*_{i}, \end{eqnarray} where $u^*_{ijk}=\nabla_k u^*_{ij}$ throughout this paper. We also have \begin{eqnarray}\label{s3 e7} 0 \ge \nabla^2_{ij} w&=&-h_*^{11}\nabla^2_{ij}b^*_{11} +2 h_*^{11} \sum h_*^{kk} \nabla_1b^*_{ik}\nabla_1b^*_{kj} \\ && -(h_*^{11})^2 \nabla_i b^*_{11}\nabla_j b^*_{11} -\beta\frac{u^*_{ij}}{u^*} +\beta\frac{u^*_iu^*_j}{(u^*)^2} \notag \\ && +A(r^* r^*_{ij} + r^*_ir^*_j), \notag \end{eqnarray} where the first inequality means that $\nabla^2_{ij} w$ is a negative-semidefinite matrix. Note that $\nabla_k b^*_{ij}$ is symmetric in all indices. The equation \eqref{s3 e3} can be written as \begin{equation}\label{s3 e8} \log(u^*_t+u^*) - \log S_n = \log\Big(\frac{(u^*)^{n+3-\alpha}}{(r^*)^{n+1}}f\Big)=: \psi(\xi,u^*,\nabla u^*).
\end{equation} Differentiating \eqref{s3 e8} gives \begin{eqnarray}\label{s3 e9} \frac{u^*_{tk} + u^*_k}{u^*_t+u^*} &=& \sum h_*^{ij}\nabla_kb^*_{ij} + \nabla_k\psi \notag \\ &=&\sum h_*^{ij}u^*_{kij}+ \sum h_*^{ij}u^*_j\delta_{ik} + \nabla_k \psi, \end{eqnarray} and \begin{eqnarray}\label{s3 e10} \frac{u^*_{t11} + u^*_{11}}{u^*_t+u^*} - \frac{(u^*_{t1}+u^*_1)^2}{(u^*_t+u^*)^2} &=& \sum h_*^{ij}\nabla^2_{11}b^*_{ij} - \sum h_*^{ii}h_*^{jj}(\nabla_1 b^*_{ij})^2 + \nabla^2_{11}\psi . \end{eqnarray} Dividing \eqref{s3 e5} by $u^*_t+u^*$ and using \eqref{s3 e10}, we have \begin{eqnarray}\label{s3 e11} 0 &\le& -h_*^{11}\Big(\frac{u^*_{11t}+u^*_{11}}{u^*_t + u^*} -\frac{b^*_{11}}{u^*_t + u^*} +1\Big) -\frac{\beta u^*_t}{u^*(u^*_t+u^*)} + \frac{Ar^* r^*_t}{u^*_t+u^*} \notag \\ &=& -h_*^{11} \frac{u^*_{11t}+u^*_{11}}{u^*_t + u^*} -h_*^{11}+ \frac{1+\beta}{u^*_t+u^*} -\frac{\beta}{u^*}+\frac{A r^* r^*_t}{u^*_t+u^*} \notag \\ &\le&-h_*^{11}\sum h_*^{ij}\nabla^2_{11} b^*_{ij} + h_*^{11}\sum h_*^{ii}h_*^{jj}(\nabla_1b^*_{ij})^2 \\ && -h_*^{11}\nabla^2_{11}\psi + \frac{1+\beta}{u^*_t+u^*} +\frac{A r^* r^*_t}{u^*_t+u^*}. \notag \end{eqnarray} By the Ricci identity, we have \begin{eqnarray*} \nabla^2_{11}b^*_{ij} = \nabla^2_{ij}b^*_{11} -\delta_{ij}b^*_{11} + \delta_{11}b^*_{ij}-\delta_{i1}b^*_{1j}+\delta_{1j}b^*_{1i}. \end{eqnarray*} Plugging this identity in \eqref{s3 e11} and employing \eqref{s3 e7}, we obtain \begin{eqnarray}\label{s3 e12} 0&\le& h_*^{11}\sum\Big( h_*^{11} h_*^{ii}(\nabla_i b^*_{11})^2 - h_*^{ii}h_*^{jj}(\nabla_1b^*_{ij})^2\Big) +(H^*-nh_*^{11}) \notag \\ && -\beta H^* +C\beta -\beta\sum h_*^{ij} \frac{u^*_iu^*_j}{(u^*)^2} -h_*^{11}\nabla_{11}^2\psi \notag \\ && +\frac{1+\beta}{u^*_t +u^*} +\frac{Ar^* r^*_t}{u^*_t+u^*} - A\sum h_*^{ij}(r^* r^*_{ij} +r^*_ir^*_j) \notag \\ &\le& -\beta H^* +C\beta -h_*^{11}\nabla_{11}^2\psi \\ &&+\frac{1+\beta}{u^*_t +u^*} +\frac{Ar^* r^*_t}{u^*_t+u^*} - A\sum h_*^{ij}(r^* r^*_{ij} +r^*_ir^*_j), \notag \end{eqnarray} where $H^* = \sum h_*^{ii}$ is the mean curvature of $\mathcal M_t^*$. It is straightforward to calculate \begin{eqnarray*} r^*_t = \frac{u^* u^*_t + \sum u^*_ku^*_{kt}}{r^*}, \end{eqnarray*} \begin{equation}\label{s3 e15} r^*_i = \frac{u^* u^*_i + \sum u^*_ku^*_{ki}}{r^*} = \frac{u^*_ib^*_{ii}}{r^*}, \end{equation} and \begin{eqnarray*} r^*_{ij} = \frac{u^* u^*_{ij} + u^*_iu^*_j+ \sum u^*_ku^*_{kij} +\sum u^*_{ki}u^*_{kj}}{r^*} - \frac{u^*_iu^*_jb^*_{ii}b^*_{jj}}{(r^*)^3}. \end{eqnarray*} Hence, by \eqref{s3 e9}, \begin{eqnarray*} \frac{r^* r^*_t}{u^*_t+u^*} - \sum h_*^{ij}(r^* r^*_{ij} +r^*_ir^*_j) &=&\frac{u^* u^*_t}{u^*_t+u^*} - u^* \sum h_*^{ij}u^*_{ij} \notag \\ &&- \sum h_*^{ii}(u^*_{ii})^2 -\frac{|\nabla u^*|^2}{u^*_t+u^*} +\sum u^*_k \nabla_k \psi . \notag \end{eqnarray*} Since \begin{eqnarray*} \frac{u^* u^*_t}{u^*_t+u^*} - \frac{|\nabla u^*|^2}{u^*_t + u^*} = u^* - \frac{(r^*)^2}{u^*_t +u^*}, \end{eqnarray*} and \begin{eqnarray*} -u^*\sum h_*^{ij} u^*_{ij} - \sum h_*^{ii}(u^*_{ii})^2 &=&-u^*\sum h_*^{ii} (b^*_{ii} -u^*\delta_{ii}) -\sum h_*^{ii} (b^*_{ii} -u^*\delta_{ii})^2 \\ &=& nu^*-\sum b^*_{ii}, \end{eqnarray*} we further deduce \begin{eqnarray}\label{s3 e13} \frac{r^* r^*_t}{u^*_t+u^*} - \sum h_*^{ij}(r^* r^*_{ij} +r^*_ir^*_j) &\le & C - \frac{(r^*)^2}{u^*_t+u^*} + \sum u^*_k \nabla_k \psi.
\end{eqnarray} Plugging \eqref{s3 e13} in \eqref{s3 e12}, we get \begin{eqnarray}\label{s3 e14} 0 &\le& -\beta H^* +C\beta+CA-h_*^{11}\nabla^2_{11}\psi+\frac{1+\beta-A(r^*)^2}{u^*_t + u^*} +A \sum u^*_k\nabla_k\psi \notag \\ &\le& -\beta H^* +C\beta +CA -h_*^{11}\nabla^2_{11}\psi +A\sum u^*_k\nabla_k\psi, \end{eqnarray} provided $A > 2(1+\beta)/\min_{\mathbb S^n\times(0,T]}(r^*)^2 \ge C(1+\beta)$, for some $C>0$ only depending on $\max_{\mathbb S^n\times(0,T]}u$. By \eqref{s3 e6} and \eqref{s3 e15}, we have \begin{eqnarray*} -h_*^{11}\nabla^2_{11}\psi + A\sum u^*_k\nabla_k \psi &\le& C h_*^{11}( 1+ (u^*_{11})^2)+CA \\ &&-h_*^{11} \sum\psi_{u^*_k} u^*_{k11} + A \sum \psi_{u^*_k}u^*_ku^*_{kk} \\ &\le& Ch_*^{11}+ C/h_*^{11} + CA+ C\beta. \end{eqnarray*} Hence \eqref{s3 e14} can be further estimated as \begin{eqnarray*} 0 &\le& -\beta H^* +Ch_*^{11} +C\beta+CA \\ &\le& -\frac12\beta h_*^{11} +C\beta+CA, \end{eqnarray*} by choosing $\beta$ large. This inequality tells us that the principal curvatures of $\mathcal M_t^*$ are bounded from above, namely $$\max_{\xi\in \mathbb S^n} \kappa_i^*(\xi,t)\le C, \ \ \forall \ t\in (0,T] \ \text{and} \ i=1,\ldots,n.$$ By Lemma \ref{s3 lem4} and \eqref{s3 e2}, we have $K^*(\cdot,t) \ge 1/C$. Therefore $$1/C\le \kappa_i^*(\cdot,t) \le C, \ \ \forall \ t\in (0,T] \ \text{and} \ i=1,\ldots,n.$$ By duality, \eqref{bd-kappa} follows. \end{proof} As a consequence of the above a priori estimates, one sees that the convexity of the hypersurface $\mathcal M_t$ is preserved under the flow \eqref{normalised flow} and the solution $X(\cdot, t)$ is uniformly convex. By estimates \eqref{bd-kappa}, equation \eqref{normalised flow spt} is uniformly parabolic. By the $L^\infty$-norm estimates and gradient estimates in Lemmas \ref{s3 lem1}--\ref{s3 lem2}, one obtains the H\"older continuity of $\nabla^2 u$ and $u_t$ by Krylov's theory \cite{Kry87}. Estimates for higher derivatives then follow from the standard regularity theory of uniformly parabolic equations. Hence we obtain the long time existence and regularity of solutions for the normalised flow \eqref{normalised flow}. The uniqueness of smooth solutions to \eqref{normalised flow spt} follows from the comparison principle, see Lemma \ref{lem5.1} below. We obtain the following theorem. \begin{theorem}\label{long time existence} Let $\mathcal M_0$ be a smooth, closed, uniformly convex hypersurface in $\mathbb R^{n+1}$ which encloses the origin. Let $f$ be a positive smooth function on $\mathbb S^n$. Then the normalised flow \eqref{normalised flow} has a unique smooth, uniformly convex solution $\mathcal M_t$ for all time, if one of the following conditions is satisfied: \begin{itemize} \item[(i)] $\alpha >n+1$; \item[(ii)] $ \alpha =n+1$, and $f$ satisfies \eqref{Aleks f cdt1}, \eqref{Aleks f cdt2}; \item[(iii)] $\alpha <n+1$, and $\mathcal M_t$ is origin-symmetric as long as the flow exists. \end{itemize} Moreover we have the a priori estimates \begin{equation} \|u\|_{C_{x,t}^{k,m}\big(\mathbb S^n\times[0,\infty) \big)} \le C_{k,m}, \end{equation} where $C_{k,m}>0$ depends only on $k,m,f, \alpha$ and the geometry of $\mathcal M_0$. \end{theorem} \section{Proofs of Theorems \ref{thmA}--\ref{thmDa}} In this section we prove the asymptotic convergence of solutions to the normalised flow \eqref{normalised flow}. First we prove Theorem \ref{thmA}. \begin{proof}[Proof of Theorem \ref{thmA}] Case i): $\alpha >n+1$. Let $u(\cdot,t)$ be the solution of \eqref{normalised flow spt}.
By our choice of $\phi_0$ in \eqref{phi_0}, we have $$a:=\min_{\mathbb S^n}u(\cdot,0)\le 1\le \max_{\mathbb S^n}u(\cdot,0)=:b.$$ Let us introduce two time-dependent functions \begin{eqnarray*} {\begin{split} u_1 &= [1-(1-a^{q})e^{qt}]^{1/q}, \\ u_2 &= [1-(1-b^{q})e^{qt}]^{1/q}, \end{split}} \end{eqnarray*} where $q = n+1-\alpha <0$. It is easy to check that both $u_1$ and $u_2$ satisfy equation \eqref{normalised flow spt}, and the spheres of radii $u_1$ and $u_2$ are solutions of \eqref{normalised flow}. By the comparison principle, $u_1\le u \le u_2$. Hence $$(b^{q}-1) e^{qt}\le u^{q}-1 \le (a^{q}-1) e^{qt}.$$ Thus $u$ converges to $1$ exponentially. To obtain the exponential convergence of $u$ to $1$ in the $C^k$ norms, we use the following interpolation inequality, see e.g. \cite{Ha82}, \begin{equation}\label{interpolation} \int_{\mathbb S^n} |\nabla^k T|^2 \le C_{m,n} \Big(\int_{\mathbb S^n}|\nabla^m T|^2\Big)^{\frac{k}{m}} \Big(\int_{\mathbb S^n}|T|^2\Big)^{1-\frac{k}{m}}, \end{equation} where $T$ is any smooth tensor field on $\mathbb S^n$, and $k,m$ are any integers such that $0\le k\le m$. Applying this to $u-1$ and using the fact that all derivatives of $u$ are bounded independently of $t$, we conclude $$\int_{\mathbb S^n}|\nabla^k u|^2 \le C_{k,\gamma} e^{-\gamma t}$$ for any $\gamma\in(0,\tilde \gamma)$ and any positive integer $k$, where $\tilde \gamma>0$ is a constant depending only on $q$. By the Sobolev embedding theorem on $\mathbb S^n$, see \cite{Aub82}, we have \begin{eqnarray*} \|u-1\|_{C^l(\mathbb S^n)} \le C_{k,l}\Big(\int_{\mathbb S^n} |\nabla^k u|^2+\int_{\mathbb S^n}|u-1|^2\Big)^\frac12 \end{eqnarray*} for any $k>l+n/2$. It follows that $\|u(\cdot, t)-1\|_{C^l(\mathbb S^n)}\to 0$ exponentially as $t\to\infty$ for all integers $l\ge 1$. Namely $u(\cdot,t)$ converges to $1$ in the $C^\infty$ topology as $t\to \infty$. Case ii): $\alpha=n+1$. We first prove the following lemma. \begin{lemma}\label{s4 lem1} There exist positive constants $C$ and $\gamma$ such that if $X(\cdot,t)$ is a solution to the normalised flow \eqref{normalised flow}, we have the estimate \begin{equation}\label{esti1} \max_{\mathbb S^n} \frac{|\nabla r(\cdot,t)|}{r(\cdot,t)} \le C e^{-\gamma t}\ \ \forall \ t>0, \end{equation} where $r(\cdot,t)$ is the radial function of $X(\cdot,t)$. \end{lemma} \begin{proof} Denote $w = \log r$. By \eqref{s2 t6} and \eqref{s2 t7}, we have, under a local orthonormal frame, \begin{eqnarray*} {\begin{split} g_{ij} &= e^{2w}(\delta_{ij} + w_iw_j),\\ h_{ij} &= e^w(1+|\nabla w|^2)^{-\frac12} (\delta_{ij} + w_iw_j -w_{ij}), \end{split}} \end{eqnarray*} and \begin{equation}\label{s4 lem1 t0} K = \frac{\det h_{ij}}{\det g_{ij}} = (1+|\nabla w|^2)^{-\frac{n+2}{2}} e^{-n w} \det a_{ij}, \end{equation} where $$a_{ij} = \delta_{ij} + w_i w_j -w_{ij}.$$ By \eqref{def v}, \eqref{normalised flow rad} and \eqref{s4 lem1 t0}, it is not hard to verify that $w$ satisfies the following PDE \begin{equation}\label{s4 lem1 t1} w_t= - (1+|\nabla w|^2)^{-\frac{n+1}{2}} \det a_{ij} + 1. \end{equation} Consider the auxiliary function $$Q = \frac12 |\nabla w|^2.$$ At the point where $Q$ attains its spatial maximum, we have $$0=\nabla_k Q = \sum w_i w_{ik},$$ and $\nabla^2_{ij} Q$ is a negative-semidefinite matrix $$0\ge \nabla^2_{ij} Q = \sum w_k w_{kij} + \sum w_{ik}w_{kj}.$$ Denote $\varrho = (1+|\nabla w|^2)^{-\frac{n+1}{2}}$.
By differentiating \eqref{s4 lem1 t1}, we obtain, at the point where $Q$ achieves its spatial maximum, \begin{eqnarray*} \partial_t Q &=& \sum w_k w_{kt} = -\det a_{ij} \sum w_k\varrho_k -\varrho \sum w_k \nabla_k \det a_{ij} \\ &=& \varrho \sum S_n^{ij} \nabla_k w_{ij} w_k. \end{eqnarray*} By the Ricci identity, we have $$ \nabla_k w_{ij} = \nabla_j w_{ik} + \delta_{ik}w_j - \delta_{ij}w_k.$$ Hence \begin{eqnarray*} \partial_t Q &=& \varrho \sum S_n^{ij} \big( Q_{ij} - w_{ik}w_{kj} + w_iw_j -\delta_{ij} |\nabla w|^2\big) \\ &\le & \varrho \big( \max_i S^{ii}_n -\sum S^{ii}_n\big) |\nabla w|^2. \end{eqnarray*} If $n\ge2$, we get \begin{equation}\label{s4 lem1 s2} \partial_t Q \le -\gamma Q, \end{equation} for some positive constant $\gamma$, where we have used the estimates $\varrho \ge C^{-1}$ and $C^{-1} \le \kappa(\cdot,t) \le C$, which are established in Section 3. Estimate \eqref{esti1} follows from \eqref{s4 lem1 s2} immediately. For $n=1$, the equation \eqref{s4 lem1 t1} becomes quasi-linear \begin{equation}\label{s4 lem1 a1} w_t = \frac{w_{xx}}{1+w^2_x} \ \ \ \text{on} \ \mathbb S^1\times[0,\infty). \end{equation} Let $$\bar w: = \frac{1}{2\pi}\int_{\mathbb S^1} w(x,t) dx$$ be the average of $w$. By the divergence theorem, \begin{eqnarray*} \frac{d}{dt}\bar w = \frac{1}{2\pi} \int_{\mathbb S^1} (\arctan(w_x))_x dx =0. \end{eqnarray*} Hence $\bar w$ is a constant. Then it is straightforward to compute \begin{eqnarray*} \frac{d}{dt} \Big(\frac12 \int_{\mathbb S^1} (w-\bar w)^2 \Big) &=& \int_{\mathbb S^1}(w-\bar w)(\arctan w_x)_x dx \\ &=& - \int_{\mathbb S^1} w_x \arctan w_x dx. \end{eqnarray*} Note that $w_x \arctan w_x \ge \delta_0 w_x^2$ for some $\delta_0>0$ depending only on the upper bound of $|w_x|$. We deduce that, by the Poincar\'e inequality, \begin{eqnarray*} \frac{d}{dt} \Big(\frac12 \int_{\mathbb S^1} (w-\bar w)^2 \Big) &\le& -\delta_0 \int_{\mathbb S^1} w_x^2 dx \le -C\int_{\mathbb S^1} (w-\bar w)^2. \end{eqnarray*} This implies that $w$ converges exponentially to a constant in the $L^2$-norm as $t \to \infty$. The exponential decay \eqref{esti1} follows by applying the interpolation inequality \eqref{interpolation} to $w-\bar w$. \end{proof} Now we prove Case ii) of Theorem \ref{thmA}. Lemma \ref{s4 lem1} implies $|\nabla r(\cdot,t)| \to 0$ exponentially as $t\to \infty$. As in Case i), we infer, by the interpolation inequality and the a priori estimates in Section 3, that $r$ converges exponentially to a constant in the $C^\infty$ topology as $t\to \infty$. Let us show that the constant must be $1$. By \eqref{normalised flow rad}, we get \begin{eqnarray*} \frac{d}{dt}\Big(\int_{\mathbb S^n} \log r(\xi,t)d\xi\Big) = \int_{\mathbb S^n} \Big(-\frac{r^{n+1}K}{u} +1\Big) d\xi. \end{eqnarray*} By \eqref{s2 t9}, $$\frac{d}{dt}\Big(\int_{\mathbb S^n} \log r(\xi,t)d\xi\Big) = 0.$$ Therefore by our choice of $\phi_0$ in \eqref{phi_0} $$\int_{\mathbb S^n} \log r(\xi,t) d\xi = \int_{\mathbb S^n} \log r(\xi,0)d\xi = 0.$$ This implies $r(\cdot,t) \to 1$ as $t\to \infty$. \end{proof} Recall that the normalised flow \eqref{normalised flow} is a gradient flow of the functional $\mathcal J_\alpha$ (see \eqref{functional} for the definition). We next complete the proofs of Theorems \ref{thmB}--\ref{thmDa}. \begin{proof}[Proof of Theorem \ref{thmB}] By our a priori estimates Lemmas \ref{s3 lem1} and \ref{s3 lem3}, there is a constant $C>0$, independent of $t$, such that \begin{equation}\label{s4 thm t1} |\mathcal J_\alpha (X(\cdot,t))| \le C\;\;\forall \; t\in[0,\infty).
\end{equation} By Lemma \ref{descending flow}, we obtain \begin{eqnarray*} \mathcal J_\alpha(X(\cdot,T)) - \mathcal J_\alpha(X(\cdot,0)) &=& - \int_0^T\int_{\mathbb S^n} \frac{(fr^\alpha K-u)^2}{u r^\alpha K} dxdt\\ &\le& -\delta_0 \int_0^T\int_{\mathbb S^n} (fr^\alpha K-u)^2 dx dt. \end{eqnarray*} By \eqref{s4 thm t1}, the above inequality implies that there exists a subsequence of times $t_j \to \infty$ such that $\mathcal M_{t_j}$ converges to a limiting hypersurface which satisfies \eqref{soliton sol}. To complete the proof of Theorem \ref{thmB}, it suffices to show that the solution of \eqref{soliton sol} is unique. Using \eqref{s2 r} and \eqref{s2 Gauss}, the equation \eqref{soliton sol} can be written as \begin{equation}\label{s4 thm t2} \frac{u}{(u^2+|\nabla u|^2)^{\frac{\alpha}{2}}} \det(\nabla^2 u +u I) = f. \end{equation} Let $u_1$ and $u_2$ be two smooth solutions of \eqref{s4 thm t2}. Suppose $G = u_1/u_2$ attains its maximum at $x_0\in \mathbb S^n$. Then at $x_0$, $$0=\nabla \log G = \frac{\nabla u_1}{u_1} - \frac{\nabla u_2}{u_2},$$ and $\nabla^2 \log G$ is a negative-semidefinite matrix at $x_0$ \begin{eqnarray*} 0 &\ge& \nabla^2 \log G \\ &=& \frac{\nabla^2 u_1}{u_1} -\frac{\nabla u_1\otimes\nabla u_1}{u_1^2} - \frac{\nabla^2 u_2}{u_2}+ \frac{\nabla u_2\otimes\nabla u_2}{u_2^2} \\ &=& \frac{\nabla^2 u_1}{u_1} - \frac{\nabla^2 u_2}{u_2} . \end{eqnarray*} By \eqref{s4 thm t2} we get at $x_0$, \begin{eqnarray*} 1 &=& \frac{u_2^{n+1-\alpha}}{u_1^{n+1-\alpha}} \frac{(1+|\nabla u_1|^2/u_1^2)^{\frac{\alpha}{2}}}{(1+|\nabla u_2|^2/u_2^2)^{\frac{\alpha}{2}}} \frac{\det(u_2^{-1}\nabla^2u_2 + I)}{\det( u_1^{-1}\nabla^2 u_1 +I)} \\ &\ge& G^{\alpha-n-1} (x_0). \end{eqnarray*} Since $\alpha > n+1$, $G(x_0) =\max_{\mathbb S^n} G \le 1$. Similarly one can show $\min_{\mathbb S^n} G \ge 1$. Therefore $u_1 \equiv u_2$. \end{proof} \vskip15pt \begin{proof}[Proof of Theorem \ref{thmC}] The long time existence of the flow \eqref{normalised flow} follows from Theorem \ref{long time existence}. As in the proof of Theorem \ref{thmB}, $\mathcal M_t$ converges along a subsequence to a homothetic limit. To prove the convergence of $\mathcal M_t$ along $t\to \infty$, it suffices to show the limiting hypersurface is unique. First we claim that if $\mathcal M_1$ and $\mathcal M_2$ are two smooth solutions to \eqref{soliton sol} for $\alpha =n+1$, then $\mathcal M_1$ and $\mathcal M_2$ differ only by a dilation. This is a well-known result \cite{Aleks42, Pog73}. We sketch the proof in \cite{Pog73} for the reader's convenience. Assume not; then there is a $\lambda >0$ such that \begin{itemize} \item [(i)] the set $\omega:=\{\xi\in \mathbb S^n: r_{\lambda \mathcal M_2}(\xi) \ge r_{\mathcal M_1}(\xi)\}$ is a proper subset of $\mathbb S^n$ with positive measure. \item [(ii)] the set $\omega_1 := \mathscr A_{\mathcal M_1}(\omega)$ is contained in $\omega_2 := \mathscr A_{\lambda \mathcal M_2} (\omega)$, and $ |\omega_2| > |\omega_1|$. \end{itemize} But on the other hand, by \eqref{s2 t9}, we have $${\begin{split} & \int_{\omega_1} f = |\mathscr A^*_{\mathcal M_1}(\omega_1)| =|\omega|, \\ & \int_{\omega_2} f = |\mathscr A^*_{\mathcal M_2}(\omega_2)| = |\mathscr A^*_{\lambda \mathcal M_2}(\omega_2)|= |\omega|. \end{split}} $$ Hence $\int_{\omega_1} f = \int_{\omega_2} f $, which contradicts (ii) above. Next we show that \begin{equation}\label{const int} \int_{\mathbb S^n} \log r (\xi,t)d\xi= \int_{\mathbb S^n} \log r (\xi,0)d\xi =\text{const.}.
\end{equation} This formula and the above claim imply that $\mathcal M_t$ converges to a unique limit. To prove \eqref{const int}, we divide \eqref{normalised flow rad} by $r$ and integrate over $\mathbb S^n$; by \eqref{def v}, this gives $$ \frac{d}{dt}\Big( \int_{\mathbb S^n} \log r(\xi,t)d\xi\Big)= -\int_{\mathbb S^n} f(x) \frac{r^{n+1}K}{u} d\xi + |\mathbb S^n| . $$ By the variable change \eqref{s2 t9} and using \eqref{Aleks f cdt1}, we have \begin{eqnarray*} \frac{d}{dt}\Big( \int_{\mathbb S^n} \log r(\xi,t)d\xi\Big) = -\int_{\mathbb S^n} f(x) dx + |\mathbb S^n| = 0. \end{eqnarray*} Hence we obtain \eqref{const int}. \end{proof} \vskip15pt \begin{proof}[Proof of Theorem \ref{thmDa}] Since $f$ is even and $\mathcal M_0$ is origin-symmetric, the solution remains origin-symmetric for $t>0$. The long time existence of the flow \eqref{normalised flow} now follows from Theorem \ref{long time existence}. As in the proof of Theorem \ref{thmB}, $\mathcal M_t$ converges along a subsequence to a homothetic limit. We conclude that $\mathcal M_t$ converges in the $C^\infty$ topology to a smooth solution of \eqref{soliton sol} as $t\to \infty$ by using the argument of \cite{And97,GuWg03}. A tractable proof for this was presented in Section 4 of \cite{GuWg03}. It remains to show that, when $f\equiv1$ and $\alpha \ge 0$, the only origin-symmetric solitons are spheres. By \eqref{soliton sol}, a soliton to the flow \eqref{normalised flow spt} satisfies \begin{equation}\label{rf1} uS_n = (u^2+|\nabla u|^2)^{\frac{\alpha}{2}} \ge u^\alpha. \end{equation} Meanwhile, by \eqref{s3 e2}, the polar body of our soliton satisfies \begin{equation}\label{rf2} u^*S^*_n =\Big(\frac{((u^*)^2+|\nabla u^*|^2)^{\frac12}}{u^*}\Big)^{n+1} (u^*)^\alpha \ge (u^*)^\alpha. \end{equation} Let us denote by $\Omega$ and $\Omega^*$ the convex bodies whose support functions are $u$ and $u^*$ respectively. Integrating \eqref{rf1} and \eqref{rf2} over $\mathbb S^n$ and then multiplying yields \begin{eqnarray}\label{rf3} \text{Vol}(\Omega)\text{Vol}(\Omega^*) &\ge& \frac{1}{(n+1)^2}\Big(\int_{\mathbb S^n} u^\alpha \Big)\Big(\int_{\mathbb S^n} (u^*)^\alpha\Big) \notag \\ &\ge&\frac{1}{(n+1)^2} \Big(\int_{\mathbb S^n} (uu^*)^{\frac{\alpha}{2}}\Big)^2. \end{eqnarray} Note that $uu^* = \frac{1}{rr^*}$, and by the definition of the polar dual, $$0<r(\xi)r^*(\xi) = \big\langle r(\xi)\xi, r^*(\xi)\xi\big\rangle \le 1.$$ Hence $uu^* \ge 1$. It then follows from \eqref{rf3} that \begin{equation}\label{rf4} \text{Vol}(\Omega)\text{Vol}(\Omega^*) \ge \text{Vol}^2(B_1), \end{equation} where $B_1$ denotes the unit ball in $\mathbb R^{n+1}$. The Blaschke--Santal\'o inequality tells us $$\text{Vol}(\Omega)\text{Vol}(\Omega^*) \le \text{Vol}^2(B_1).$$ Therefore by the characterisation of equality cases, $\Omega$ must be an ellipsoid. By \eqref{rf1} and \eqref{rf2}, we infer that $\Omega=B_1$, otherwise the inequality in \eqref{rf4} would become strict, which is not possible. \end{proof} \section{Proof of Theorem \ref{thmD} } In this section we show that if $\alpha < n+1$ the flow \eqref{flow} may have an unbounded ratio of radii, namely \begin{equation}\label{5.1} \mathcal R (X(\cdot,t)) = \frac{\max_{\mathbb S^n} r(\cdot,t)}{\min_{\mathbb S^n} r(\cdot,t)} \to \infty\;\;\text{as}\;\;t\to T \end{equation} for some $T>0$. To prove \eqref{5.1}, we show that $\min_{\mathbb S^n} r(\cdot,t)\to 0$ in finite time while $\max_{\mathbb S^n} r(\cdot,t)$ remains positive.
In contrast, it is worth mentioning that in \cite{Treib90}, the author obtained an a priori bound for the ratio $\max_{\mathbb S^n} r / \min_{\mathbb S^n} r$ if $r$ is the radial function of the solution to the Aleksandrov problem. Let $X(\cdot, t)$ be a convex solution to \eqref{flow}. Then its support function $u$ satisfies the equation \begin{equation}\label{flow-u} {\left\{ \begin{split} \frac{\partial u}{\partial t}(x,t) &= - f r^\alpha S_n^{-1}(\nabla_{ij}^2 u + u \delta_{ij})(x,t), \\ u(\cdot,0) &=u_0. \end{split}\right.} \end{equation} Given a smooth, closed, uniformly convex hypersurface $\mathcal M_0$, our a priori estimates in Section 3 imply the existence of a smooth, closed, uniformly convex solution to the flow \eqref{flow} for small $t>0$. The solution remains smooth until either it shrinks to the origin, or \eqref{5.1} occurs at some time $T>0$. \begin{definition} A time-dependent family of convex hypersurfaces $Y(\cdot, t)$ is a {\it sub-solution} to \eqref{flow-u} if its support function $w$ satisfies \begin{equation}\label{flow-w} {\left\{ \begin{split} \frac{\partial w}{\partial t}(x,t) &\ge - f r^\alpha S_n^{-1}(\nabla_{ij}^2 w + w \delta_{ij})(x,t) , \\ w(\cdot,0) &\ge u_0, \end{split}\right.} \end{equation} where $r$ is the radial function of the associated hypersurface. \end{definition} By definition, the hypersurface $\mathcal M_0$ (independent of $t$), whose support function is $u_0$, is a sub-solution to \eqref{flow-u}. We will use the following comparison principle. \begin{lemma}\label{lem5.1} Let $X(\cdot, t)$ be a solution to \eqref{flow} and $Y(\cdot, t)$ a sub-solution. Suppose $X(\cdot, 0)$ is contained in the interior of $Y(\cdot, 0)$. Then $X(\cdot, t)$ is contained in the interior of $Y(\cdot, t)$ for all $t>0$, as long as the solutions exist. \end{lemma} \begin{proof} Let $u(\cdot, t)$ and $w(\cdot, t)$ be the support functions of $X(\cdot,t)$ and $Y(\cdot,t)$. Then $u$ and $w$ satisfy \eqref{flow-u} and \eqref{flow-w} respectively with $u(x,0)\le w(x,0)$ for all $x\in\mathbb S^n$. For $\lambda>0$, let us denote $u^\lambda(x,t) = \lambda u(x,\lambda^\beta t)$, where $\beta = \alpha-n-1$. It is easily seen that $u^\lambda$ solves \eqref{flow-u} with $u^\lambda(\cdot,0)=\lambda u_0$; indeed, under this rescaling $r$ gains a factor $\lambda$ and $S_n$ a factor $\lambda^n$, so the right hand side of \eqref{flow-u} gains the factor $\lambda^{\alpha-n}=\lambda^{1+\beta}$, matching the left hand side. Let $\lambda <1$. Then $u^\lambda(\cdot,0)< w(\cdot,0)$. By the comparison principle for parabolic equations, \begin{equation}\label{contain t1} u^\lambda(x,t) < w(x,t), \forall\ x\in\mathbb S^n \ \text{and} \ t>0, \end{equation} as long as the solutions exist. Sending $\lambda \to 1$, we obtain $u(x,t) \le w(x,t)$. \end{proof} Note that in Lemma \ref{lem5.1}, we do not require that $Y(\cdot, t)$ is shrinking. Moreover, it suffices to assume that $Y(\cdot, t)$ is a sub-solution in the viscosity sense. In particular Lemma \ref{lem5.1} applies if $Y(\cdot, t)$ is $C^{1,1}$ smooth. To prove Theorem \ref{thmD}, by the comparison principle (Lemma \ref{lem5.1}), it suffices to construct a sub-solution $Y(\cdot, t)$ such that $\min_{\mathbb S^n} w(\cdot, t)\to 0$ but $\max_{\mathbb S^n} w(\cdot,t)$ remains positive, as $t\to T$ for some finite time $T>0$. By a translation of time, we show below that there is a sub-solution $Y(\cdot, t)$ for $t\in (-1, 0)$ such that \eqref{5.1} holds as $t\nearrow 0$.
\begin{lemma}\label{lem5.2} For any given positive function $f$, there is a sub-solution $Y(\cdot, t)$, where $t\in (-1, 0)$, to \begin{equation}\label{flow-af} {\left\{ \begin{split} \frac{\partial u}{\partial t}(x,t) &= - af r^\alpha S_n^{-1}(\nabla_{ij}^2 u + u \delta_{ij})(x,t), \\ u(\cdot,0) &=u_0, \end{split}\right.} \end{equation} for a sufficiently large constant $a>0$, such that $\min_{\mathbb S^n} w(\cdot, t)\to 0$ but $\max_{\mathbb S^n} w(\cdot,t)$ remains positive, as $t\nearrow 0$. \end{lemma} \begin{proof} The sub-solution we construct is a family of closed convex hypersurfaces $\widehat{\mathcal M}_t :=Y(\mathbb S^n, t)$. First note that it suffices to prove Lemma \ref{lem5.2} when $q=n+1-\alpha>0$ is small. Indeed, if $Y(\mathbb S^n, t)$ is a sub-solution to \eqref{flow-af} for some $\alpha$, it is also a sub-solution to \eqref{flow-af} for $\alpha'<\alpha$, provided we replace $a$ by $a\sup\{|p|^{\alpha-\alpha'};\ \ p\in \widehat{\mathcal M}_t, t\in (-1, 0)\}$. Near the origin, let $\widehat{\mathcal M}_t$ be the graph of a function on $\mathbb R^n$, $\phi (\rho, t)$ ($\rho=|x|$), given by \begin{equation}\label{sub-solu} \phi(\rho, t)= \left\{ { \begin{split} & - |t|^{\theta} + |t|^{-\theta+\sigma\theta} \rho^2,\ \ \ \text{if}\ \rho < |t|^{\theta}, \\ & - |t|^{\theta} - \frac{1-\sigma}{1+\sigma} |t|^{\theta(1+\sigma)} +\frac{2}{1+\sigma} \rho^{1+\sigma}, \ \ \text{if}\ |t|^{\theta} \le \rho \le 1, \end{split}}\right.\end{equation} where $\sigma=\frac{q\theta-1}{n\theta}$ and $\theta>\frac 1q$ is a constant. It is easy to verify that $\phi$ is strictly convex, and $\phi\in C^{1,1}(B_1(0))$. By direct computation, we have, \begin{itemize} \item [(i)] if $0\le \rho\le |t|^\theta$, then \begin{equation} { \begin{split} &r^\alpha K \ge |t|^{\alpha\theta} |t|^{n\theta(\sigma-1)}=|t|^{\theta-1},\\ & |\frac{\partial}{\partial t} Y(p, t) | \le \theta |t|^{\theta-1}. \end{split}} \end{equation} where $p=(x, \phi(|x|, t))$ is a point on the graph of $\phi$ and $K$ is the Gauss curvature of the graph of $\phi$ at $p$. \item [(ii)] if $|t|^\theta\le \rho\le 1$, then \begin{equation} { \begin{split} & r^\alpha K \ge \rho^\alpha K \ge C\rho^\alpha \rho^{(\sigma-1)n} \ge C \rho^{1-\frac 1\theta}\ge C |t|^{\theta-1},\\ & |\frac{\partial}{\partial t} Y(p, t) | \le \theta |t|^{\theta-1}. \end{split}} \end{equation} \end{itemize} Hence the graph of $\phi(\cdot, t)$ is a sub-solution to \eqref{flow-af}, provided $a$ is sufficiently large. Next we extend the graph of $\phi$ to a closed convex hypersurface $\widehat{\mathcal M}_t$, such that it is $C^{1,1}$ smooth, uniformly convex, rotationally symmetric, and depends smoothly on $t$. Moreover we may assume that the ball $B_1(z)$ is contained in the interior of $\widehat{\mathcal M}_t$, for all $t\in (-1, 0)$, where $z=(0, \cdots, 0, 10)$ is a point on the $x_{n+1}$-axis. Then $\widehat{\mathcal M}_t$ is a sub-solution to \eqref{flow-af}, for sufficiently large $a$. \end{proof} We are now in a position to prove Theorem \ref{thmD}. For a given $\tau\in (-1, 0)$, let $\mathcal M_0$ be a smooth, closed, uniformly convex hypersurface inside $\widehat{\mathcal M}_{\tau}$ and enclosing the ball $B_1(z)$. Let $\mathcal M_t$ be the solution to the flow \eqref{flow-af} with initial data $\mathcal M_0$. By Lemma \ref{lem5.1}, $\mathcal M_t$ touches the origin at $t=t_0$, for some $t_0\in (\tau, 0)$. We choose $\tau$ very close to 0, so that $t_0$ is sufficiently small.
On the other hand, let $\tilde X(\cdot, t)$ be the solution to \begin{equation}\label{s5 t1} \frac{\partial X}{\partial t} = - \beta \tilde f \tilde r^\alpha K \nu, \end{equation} with initial condition $\tilde X(\cdot, \tau)=\partial B_1(z)$, where $\beta=2^\alpha\sup\{ |p|^\alpha:\ p\in \mathcal M_t, \tau <t<t_0\}$, $\tilde f =a \max_{\mathbb S^n} f$, and $\tilde r = |X-z|$ is the distance from $z$ to $X$. We can choose $\tau$ close enough to $0$ that the ball $B_{1/2}(z)$ is contained in the interior of $\tilde X(\cdot, t)$ for all $t\in (\tau, t_0)$. Since $\mathcal M_t$ is a sub-solution to \eqref{s5 t1}, by the comparison principle we see that the ball $B_{1/2}(z)$ is contained in the interior of $ \mathcal M_t $ for all $t\in (\tau, t_0)$. Hence as $t\nearrow t_0$, we have $\min r(\cdot, t)\to 0$ and $\max r(\cdot, t)>|z|=10$. Hence \eqref{5.1} is proved for $\mathcal M_t$. We have proved Theorem \ref{thmD} when $f$ is replaced by $af$, for a large constant $a>0$. Making the rescaling $\widetilde{\mathcal M}_t = a^{-1/q} \mathcal M_t$, one easily verifies that $\widetilde{\mathcal M}_t$ solves the flow \eqref{flow} for the function $f$. Theorem \ref{thmD} is proved. \vskip10pt Finally we point out that if $f$ does not satisfy \eqref{Aleks f cdt2}, then \eqref{unbounded ratio} holds for $\alpha = n+1$. Indeed, assume to the contrary that the ratio $ \frac{\max_{\mathbb S^n} r(\cdot,t)}{\min_{\mathbb S^n} r(\cdot,t)}$ is uniformly bounded; then by \eqref{const int}, the radial function $r(\cdot, t)$ is uniformly bounded from both above and below. Hence by the a priori estimates (Lemmas \ref{s3 lem4} and \ref{s3 lem5}), the flow converges smoothly to a limit which solves \eqref{soliton sol}. This means that the Aleksandrov problem would have a smooth solution without condition \eqref{Aleks f cdt2}, which is impossible, as \eqref{Aleks f cdt2} is necessary for the solvability of the Aleksandrov problem.
{ "attr-fineweb-edu": 1.724609, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUgPPxK4sA-7sDiBuQ
\section{Introduction} Molecular sorting is a major process responsible for the organization of cellular matter in eukaryotic cells~\cite{MN08}. This highly complex task is accomplished by selectively concentrating and distilling specific proteins and lipids that dwell on the plasma membrane and on the membranes of inner cellular bodies into submicrometric lipid vesicles. Once formed, these vesicles detach from the membrane and are subsequently delivered to their appropriate destinations. It has recently been proposed that molecular sorting may emerge from the combination of two fundamental physical processes~\cite{ZVS+21}: a) phase separation of specific molecules into localized sorting domains, and b) domain-induced membrane bending, leading to the formation of vesicles constitutively enriched in the biochemical factors of the engulfed domains, thus resulting in a natural distillation process. In the proposed abstract model of the process, molecules arriving on a membrane region can laterally diffuse and aggregate into localized domains, whose formation and growth occur through the typical stages of phase separation: after the initial nucleation stage, in the case of low supersaturation, the growth of domains is mainly governed by the absorption of freely diffusing molecules. One of the main predictions of the classical theory of phase separation is that a {\it critical size}~$A_{\rm c}$ has to be reached in order for domains to survive and continue to grow irreversibly to larger and larger scales~\cite{LP81,Sle09}. In the present theory of molecular distillation such domains are extracted once they reach a characteristic size~$A_E\gg A_\mathrm{c}$, determined by the physical and biomolecular processes that induce membrane bending and vesicle formation. In the presence of a constant flux of incoming molecules, the membrane system selforganizes in a driven non-equilibrium stationary state, which can be seen as a realization in Nature of Szilard's classical model of droplet formation~\cite{Far27,SR97,Sle09}. Phase separation phenomena are emerging as central drivers of the selforganization of cell structures~\cite{BBH18,HWJ14,LPR21,FPA+21}, and the idea that phase separation is an essential step for molecular sorting is increasingly finding support in recent studies~\cite{BKH+21,KK22,DKW+21,ZZ20,LSD22}. As advances in live-cell imaging have enabled more accurate observations in real time, a striking heterogeneity in domain growth kinetics has emerged, and several approaches to unambiguously classify different dynamic populations have been proposed~\cite{LMY+09,AAM+13,KSL+17,GCH+14,HSU+20,WCM+20}. In~the experiments, a crucial parameter used to describe the sorting process is the lifetime of a sorting domain. It has been recently shown that the lifetime of a sorting domain is related to the domain stability, which in turn depends on the number of molecules contained in the domain, and thus on the domain size~\cite{LLN+19}. It is therefore tempting to relate the existence, in the context of phase separation, of a critical size for domain growth, to the observation that sorting domains on cell membranes can undergo qualitatively different final fates.
As a matter of fact, sorting domains are commonly classified into two groups: {\it productive} domains, if their growth eventually terminates in the nucleation of a vesicle which is ultimately detached from the membrane, and {\it unproductive} (or abortive) domains which, instead, progressively dismantle and are ultimately dissolved~\cite{EBV+04,AAM+13,WCM+20}. It seems natural to interpret this distinction in the context of classical nucleation theory, where the fate of a domain results from the balance between bulk stabilization and the propensity to dismantle along the domain boundary, which in turn is controlled by the value of a characteristic boundary tension~\cite{LP81,GKL+09,FPA+21}. As a result, circular domains (which minimize the boundary perimeter) are favored, subcritical domains (having size $A<A_\mathrm{c}$) have short lifetimes and a low probability of reaching the extraction size $A_E$, while supercritical domains have a high probability of being ultimately extracted. Here we discuss the implications of this picture in the framework of the phenomenological theory of molecular sorting introduced in Ref.~\citenum{ZVS+21}. Several predictions of the phenomenological theory are verified by extensive numerical simulations of a lattice-gas model. To help in the analysis of experimental data, we introduce an operational definition of critical size, and discuss its relation to recently introduced methods for the classification of domain formation events into productive and unproductive classes~\cite{WCM+20}. A~direct comparison with experimental results shows that the statistical properties of productive/unproductive domains inferred from experimental data are in good qualitative agreement with those emerging from simulations performed in some specific parameter regions. These results hint at a central role of phase separation, and of the related notions of boundary tension and critical size, in the processes of molecular sorting that control the establishment and maintenance of distinct chemical identities on cell membranes. \section{Phenomenological theory}\label{sec:pheno} The phenomenological theory of phase-separation-driven molecular sorting is based on the following non-equilibrium steady-state picture~\cite{ZVS+21}: a constant flux $\phi$ of ``sortable'' cargo molecules is deposited on the lipid membrane; each molecule occupies a characteristic area~$A_0$ on the membrane, diffuses laterally, and can aggregate into sorting domains with the help of a pool of specialized auxiliary molecules, which sustain ``active'' domain formation by triggering localized positive feedback loops~\cite{ZCT+15,FPA+21,GCT+05}, and/or ``passive'' aggregation, driven by weak attractive intermolecular interactions~\cite{LPR20,BBH18}. Since domain formation is characterized by competing effects, according to classical nucleation theory, a critical size~$A_{\rm c}$ is required for a domain to continue to grow irreversibly and avoid decay~\cite{BD35,Zel43,LP81}. Once formed, sorting domains coarsen due to the incoming flux of laterally diffusing molecules, and are eventually extracted from the membrane in the form of lipid vesicles of characteristic area $A_E = mA_0$. It~follows that the growing domains coexist with a continuously replenished two-dimensional ``gas'' of laterally diffusing molecules in a statistically stationary state.
If we consider a region of linear size $L$ of the order of the average interdomain half distance, centered around a growing supercritical domain of approximately circular shape and radius~$R$, the quasi-static profile $n_R(r)$ of the density of the gas of freely diffusing molecules in the proximity of the domain can be approximately obtained by solving a Laplace equation with Dirichlet boundary conditions $n_R(R)= n_0$ and $n_R(L)= \bar{n}$, obtaining \begin{equation} n_R(r) = n_0+ \frac{\log(r/R)}{\log(L/R)} \Delta n, \label{eq:densprofile} \end{equation} where $r\geq R$ denotes the distance from the domain center, and $\Delta n=\bar{n} -n_0$. Domain growth is induced by the flux $\Phi_A$ of molecules from the gas to the domain, which can be calculated by integrating the flux density $-D\nabla n_R(r)$ across the boundary of the domain of size $A=\pi R^2$, obtaining \begin{equation} \Phi_A =\frac{4 \pi D \Delta n}{\log(A_L/A)}, \end{equation} where $A_L=\pi L^2$ and $D$ is the lateral diffusivity of the molecules. This formula implies that the domain will grow according to the dynamic equation \begin{equation} \dot{A}=\frac{4 \pi A_0 D \Delta n}{\log(A_L/A)}.\label{eq:dAdt} \end{equation} In a membrane system where sorting domains may be assumed to be approximately evenly distributed, the statistics of supercritical domains can be conveniently described in terms of the number density $N(A,t)\,\mathrm{d} A$, giving the average number per unit membrane area of supercritical domains with size between $A$ and $A+\mathrm{d} A$. Since the effects of random fluctuations can be approximately neglected in the case of supercritical domains, $N(A,t)$ satisfies the continuity equation \begin{equation} \frac{\partial N}{\partial t} + \frac{\partial}{\partial A} (\dot A N)+\gamma (A)N=0, \label{eq:smo} \end{equation} where the rate of removal of domains of size $A$ from the system is $\gamma (A)=0$ for $A < A_{E}$, and $\gamma (A)=\gamma _0>0$ for $A > A_{E}$. The stationary solution of Eq.~\eqref{eq:smo}, \begin{equation}\label{solutionSmolu} N_{\rm st}(A) = \frac{J \log{\left(A_L/A\right)}}{4 \pi D \Delta n} \exp \left [ -\int_{A_{\rm c}}^{A}{\frac{\gamma (a) \log{\left(A_L/a\right)}}{4 \pi A_0 D \Delta n} \mathrm{d} a} \right ] \end{equation} has a universal logarithmic behavior for $A<A_E$. The normalization constant $J$ can be determined from the steady state condition \begin{equation} \phi = \int_{A_\mathrm{c}}^\infty \Phi_A N_{\rm st}(A)\, \mathrm{d} A \simeq JA_E \end{equation} for large $\gamma_0$ and $A_E \gg A_{\rm c}$. Assuming that the incoming flux $\phi$ of molecules is evenly distributed on average among all available supercritical sorting domains, and neglecting logarithmic corrections, the average number of supercritical domains per unit area is given by \begin{equation} \bar{N}_d \sim \frac{\phi}{\Phi_A} \sim \frac{\phi}{D \Delta n}. \label{eq:phioverDetc} \end{equation} Numerical observations suggest that faster responses of the membrane system to changing environmental conditions are related to shorter residence times of the sorted molecules on the membrane in the steady state~\cite{ZVS+21}. It~is therefore interesting to investigate under which parametric conditions this residence time can be minimized. From the moment of insertion to the moment of extraction, molecules spend an average time $\bar{T}_f$ diffusing freely and an average time $\bar{T}_d$ attached to supercritical sorting domains.
In principle, for the molecules that aggregate in the initial stage of the domain formation process, when the domain is still subcritical, one should also consider the time spent in the subcritical stage, but this is generally negligible if the critical size is small. In the following, repeated use will be made of a general steady-state relation, which applies to open systems in a driven non-equilibrium stationary state~\cite{ZAG19}: the average density of molecules in the system is given by the product of the average flux density of molecules (entering or leaving the system) and the average time that a molecule spends in the system. According to this general relation, the steady-state average density of molecules that are freely diffusing as a two-dimensional gas on the membrane is \begin{eqnarray} \bar{n} & = & \phi \,\bar{T}_f. \label{eq:dens} \end{eqnarray} The same steady-state relation can be applied to the average density $\bar{N}_d$ of supercritical domains that are generated and ultimately extracted from the membrane, giving \begin{eqnarray} \bar{N}_d & = & \frac{{\rm d} \bar{N}_d}{{\rm d} t}\, \bar{T}_d \;=\; \frac{\phi }{m}\,\bar{T}_d. \label{eq:nd} \end{eqnarray} On the other hand, since each new domain starts its aggregation process from the encounter of two freely diffusing molecules, one can write (see also App.~\ref{app:islandmodel}) \begin{equation} \frac{\mathrm{d} \bar{N}_d}{\mathrm{d} t} = CD\,\bar{n}^2, \label{eq:cdn2} \end{equation} where $C$ is a dimensionless proportionality constant measuring the strength of the effective interaction that keeps molecules together in a sorting domain. Combining (\ref{eq:dens}), (\ref{eq:nd}) and (\ref{eq:cdn2}), the following steady-state relations are obtained: \begin{align} \bar{n} & \sim \left(\frac{\phi}{m\, C D} \right)^{1/2},\\ \bar{T}_f & = \frac{\bar{n}}{\phi} \sim \left( m\, C\, D \phi \right)^{-1/2}. \end{align} For approximately absorbing domains, $n_0 \ll \Delta n$ and $\Delta n \sim \bar n$, therefore (\ref{eq:phioverDetc}), (\ref{eq:nd}) and (\ref{eq:cdn2}) give: \begin{align} \bar{N}_d & \sim \left( \frac{m\, C \phi }{D}\right)^{1/2} \sim\; m\, C\,\bar{n}\,, \label{eq:enned} \\ \bar{T}_d & \sim \frac{C\, m^2\,\bar{n}}{\phi} \sim \left( \frac{C\, m^3}{D\, \phi}\right)^{1/2}. \end{align} The average time spent by molecules in the system is approximately $\bar{T}=\bar{T}_d + \bar{T}_f$. Since the product $\bar{T}_f\,\bar{T}_d \sim m/(D\phi)$ does not depend on $C$, the sum is minimized when $\bar{T}_f \sim \bar{T}_d$, giving \begin{equation} \bar{T}_f + \bar{T}_d = \bar{T}_{\rm opt} \sim \left( \frac{m}{D\, \phi} \right)^{1 / 2}. \end{equation} The optimal value $\bar{T}_{\rm opt}$ is attained for \begin{equation} C\;=\;C_{\rm opt} \sim \frac{1}{m^{2}}\,. \label{eq:coptimal} \end{equation} For this value, the average number densities of gas molecules and of supercritical domains are: \begin{equation} \bar{n}_{\rm opt} \sim \left(\frac{\phi A_E}{D A_0}\right)^{1/2}, \quad \bar{N}_{d, {\rm opt}} \sim \left( \frac{\phi}{m\,D} \right)^{1 / 2}. \end{equation} \section{Numerical validation}\label{sec:validation} \begin{figure*}[htb] \begin{center} \includegraphics[width=1\textwidth]{fig1.pdf} \end{center} \vspace{-.3cm} \caption{Snapshots of configurations of the lattice-gas model of molecular sorting for a system of $400^2$ sites in the steady state, with incoming flux $\phi/k_D = 10^{-6}$ and increasing values of the interaction strength~$g$ (from left to right).
In the central panel the interaction strength is close to the optimal value $g_\mathrm{opt}=31$.} \label{fig:snapshots} \end{figure*} In a minimal lattice-gas model of the distillation process, the lipid membrane is modelled as a two-dimensional square lattice with periodic boundary conditions, where each site can be occupied by a single molecule at most~\cite{ZVS+21}. The system evolves according to a Markov process consisting of the following three elementary events: 1) {\it insertion}: molecules from an infinite reservoir arrive and are inserted on empty sites with rate $k_I$; 2) {\it diffusion and aggregation}: molecules can perform diffusive jumps to an empty neighboring site with rate $k_D/g^{N_\mathrm{nn}}$, where $g > 1$ is a dimensionless parameter representing the interaction strength, and $N_\mathrm{nn}$ is the number of neighboring molecules of the hopping molecule before the jump occurs; 3) {\it extraction}: molecules are extracted from the system by simultaneously removing all connected clusters of molecules that contain a completely filled square of $m$ sites. In what follows, $A_0=1$, i.e. areas are measured as numbers of lattice sites, and $m=10^2$. In every simulation, the system is allowed to relax to the steady state before starting the collection of relevant statistical data. One of the main observations of Ref.~\citenum{ZVS+21} is that both the average residence time $\bar{T}$ of sorted molecules on the membrane system and the average molecule density~$\rho$ in the steady state are minimal in an intermediate, optimal range of values of the interaction strength~$g$, where the molecular distillation process is most efficient. Snapshots of the simulations taken in the steady state show the typical behavior of the system both inside and outside of this optimal range~(Fig.~\ref{fig:snapshots}). For low interaction strength, molecular crowding accompanied by a hectic formation of small short-lived domains is observed (Fig.~\ref{fig:snapshots}, left). As the interaction strength increases, the density of freely diffusing molecules decreases (Fig.~\ref{fig:snapshots}, middle panels). Consistent with the predictions of the phenomenological theory, the molecular density $\rho$ and residence time~$\bar{T}$ are lower in this intermediate range, and reach a minimum at the optimal value of the interaction strength~$g$ (Ref.~\citenum{ZVS+21} and Fig.~\ref{fig:snapshots}, central panel). When the interaction strength becomes much larger than its optimal value, the gas of free molecules is strongly depleted, and the system enters into a regime of domain crowding (Fig.~\ref{fig:snapshots}, fourth panel). Here, a large number of sorting domains shares the incoming molecular flux, the growth of each sorting domain is slowed down, and the efficiency of the distillation process is impaired, as both the molecular density and molecular residence time are much larger than in the optimal region. For very high values of the microscopic interaction strength~$g$, the formation of highly irregular domains of the type predicted by the theory of diffusion-limited aggregation~\cite{BS95} is observed (Fig.~\ref{fig:snapshots}, rightmost panel). This latter regime is unlikely to correspond to physiological sorting, but could be related to pathological conditions where a high intermolecular interaction strength induced by mutations promotes the formation of irregular, solid-like aggregates associated with degenerative diseases~\cite{BAF+18,PLJ+15}.
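For illustration, the three elementary moves can be translated into a few lines of Python. The following is a minimal, unoptimized sketch of our own, not the code used to produce the figures: moves are drawn by rejection sampling, which reproduces the relative rates of the Markov process up to an overall time scale, and the extraction rule is enforced by searching for a filled $\sqrt{m}\times\sqrt{m}$ square in the cluster affected by the last move, the only cluster whose status can have changed. The parameter values are chosen for a short demonstration run rather than for the regimes studied in this section. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

N = 64           # lattice side (small, for a quick illustrative run)
g = 30.0         # microscopic interaction strength
phi = 1e-3       # insertion probability per attempt, i.e. k_I / k_D
                 # (much larger than in the simulations of this section)
s = 10           # side of the extraction square, so that m = s * s
attempts = 2_000_000

occ = np.zeros((N, N), dtype=bool)        # single occupancy per site
nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

def n_nn(i, j):
    """Number of occupied nearest neighbors of site (i, j)."""
    return sum(occ[(i + di) % N, (j + dj) % N] for di, dj in nbrs)

def cluster_of(i, j):
    """Connected cluster of occupied sites containing (i, j), periodic BCs."""
    stack, seen = [(i, j)], {(i, j)}
    while stack:
        a, b = stack.pop()
        for di, dj in nbrs:
            c = ((a + di) % N, (b + dj) % N)
            if occ[c] and c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def try_extract(i, j):
    """Remove the whole cluster of (i, j) if it contains a filled s x s square."""
    cl = cluster_of(i, j)
    if len(cl) < s * s:
        return
    for a, b in cl:          # test each cluster site as a top-left corner
        if all(occ[(a + u) % N, (b + v) % N]
               for u in range(s) for v in range(s)):
            for c in cl:
                occ[c] = False
            return

for _ in range(attempts):
    i, j = map(int, rng.integers(N, size=2))
    if not occ[i, j]:
        if rng.random() < phi:                  # insertion move
            occ[i, j] = True
            try_extract(i, j)                   # an insertion may complete a square
    elif rng.random() < g ** (-n_nn(i, j)):     # accepted jump attempt
        di, dj = nbrs[rng.integers(4)]          # random direction
        ti, tj = (i + di) % N, (j + dj) % N
        if not occ[ti, tj]:                     # jumps only to empty sites
            occ[i, j], occ[ti, tj] = False, True
            try_extract(ti, tj)

print("steady-state molecule density:", occ.mean())
\end{verbatim} Note that $N_\mathrm{nn}$ is evaluated before the jump, as in the definition of the model, and that after equilibration the printed occupancy estimates the steady-state density $\rho$ for the chosen parameters.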
Similar behaviors have also been observed in experiments, where overexpression of adaptor proteins responsible for mediating intermolecular interactions leads to the formation of large and irregularly shaped sorting domains~\cite{MLY+10}. \begin{figure*}[tb] \begin{center} \includegraphics[width=0.83\textwidth]{fig2.pdf} \end{center} \vspace{-0.4cm} \caption{Left: Nearest-neighbor distances between simulated sorting domains are highlighted in red in a snapshot from a simulation performed with incoming flux $\phi/k_D=10^{-7}$ and interaction strength $g=10^2$. Center: Scaling of the optimal values of the average interdomain half distance. The red line is a fit with the power law $\phi^{-d}$, with $d=0.23$. Right: The frequency density and cumulative frequency distribution (inset) for the rescaled half distances $L/\bar{L}$ for varying values of the incoming flux $\phi/k_D$ collapse on a single universal frequency distribution. } \label{fig:NNdist} \end{figure*} Numerical simulations confirm the validity of the scaling laws $\rho_{\rm opt}\sim \phi^a$, $\bar{n}_{\rm opt} \sim \phi^b$ and $\bar{T}_{\rm opt} \sim \phi^{-c}$, as the numerically obtained values $a=0.48$, $b=0.46$ and $c=0.52$ are in good agreement with the theoretical predictions $a=b=c=1/2$~\cite{ZVS+21}, which were derived under simplifying assumptions. In addition to these earlier results, other predictions of the phenomenological theory can be verified numerically using the microscopic lattice-gas model. The phenomenological theory expounded above is valid in the regime where supercritical domains are well-separated objects, with a well-defined value of the average interdomain half distance $\bar{L}$. Since the number of supercritical domains scales as $\bar{N}_d \sim \phi^{1/2}$, and $\pi \bar{L}^2 \bar{N}_d \approx 1$, it is expected that $\bar{L} \sim \phi^{-1/4}$. This scaling law can be verified numerically in the following way. First, the center of mass of each domain is computed. A critical size is determined using the operational definition given in the following Sect.~\ref{sec:criticalsize}. Domains with size smaller than the critical size are neglected. The nearest neighbor of each domain is found (Fig.~\ref{fig:NNdist}, left). Finally, the distances between nearest neighbors and the corresponding statistical measures are computed (see the sketch below). The numerical values of the average interdomain half distance $\bar{L}$ obtained by this method follow a scaling law $\bar{L}\sim \phi^{-d}$ with $d=0.23$, close to the theoretically predicted value $d=1/4$ (Fig.~\ref{fig:NNdist}, center). When the mean value $\bar{L}$ is used to rescale the interdomain half distances, the corresponding frequency distributions for different values of $\phi$ collapse on a single universal distribution (Fig.~\ref{fig:NNdist}, right). Several results of the phenomenological theory stem from the assumption that the steady-state profile of molecule density around a sorting domain has the logarithmic form (\ref{eq:densprofile}), and from the related idea that the membrane region can be divided into ``attraction basins'' of linear size $\sim L$ pertaining to distinct sorting domains. Given the approximate nature of these hypotheses, it is interesting to check their validity by direct numerical simulations. A convenient way to computationally define this kind of attraction basin is the use of a Voronoi decomposition, which is a partition of the plane into non-overlapping regions according to their proximity to points of a given set~\cite{OBS+00}.
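As an illustration, the nearest-neighbor measurement described above, together with the assignment of the remaining molecules to the attraction basin of the closest domain center, can be sketched as follows. This is a minimal sketch of our own assuming SciPy is available; for brevity, the cluster labelling neglects the periodic wrap, while distances are computed with the periodic metric, and at least two supercritical domains are assumed to be present. \begin{verbatim}
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def domain_analysis(occ, a_c):
    """Nearest-neighbor half distances between supercritical domains,
    and Voronoi assignment of the remaining molecules to the closest
    domain center.  occ is the boolean occupancy matrix of the lattice,
    a_c the critical size below which domains are discarded."""
    N = occ.shape[0]
    labels, n_dom = ndimage.label(occ)          # connected clusters
    sizes = ndimage.sum(occ, labels, index=np.arange(1, n_dom + 1))
    keep = np.flatnonzero(sizes >= a_c) + 1     # supercritical labels
    centers = np.array(ndimage.center_of_mass(occ, labels, keep))
    tree = cKDTree(centers, boxsize=N)          # periodic distance metric
    d, _ = tree.query(centers, k=2)             # first hit is the point itself
    half_distances = d[:, 1] / 2.0
    gas = np.argwhere(occ & ~np.isin(labels, keep))   # molecules outside
    _, basin = tree.query(gas)                  # index of the Voronoi region
    return half_distances, gas, basin
\end{verbatim} Averaging the returned half distances over uncorrelated steady-state configurations then gives an estimate of $\bar{L}$, while the returned basin indices realize the Voronoi decomposition discussed next.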
The two-dimensional square lattice used for the numerical simulations was therefore decomposed according to the following procedure. Once all supercritical domains were identified and tracked, for each time frame the center of mass of each domain was computed and the set of these centers was used to partition the lattice area into Voronoi regions (Fig.~\ref{fig:densityprofile}, bottom). Then, free molecules belonging to each region were identified, and their distance from the domain center of mass computed. A~direct validation of the theoretical expression (\ref{eq:densprofile}) is computationally very demanding, as it requires building histograms of distances conditioned on the radius~$R$ of a given sorting domain. We studied a slightly different quantity, i.e. the average frequency of the distances of free molecules from domains of linear size $R$ between the critical radius $R_{\rm c}$ and the extraction radius~$R_E$: \begin{equation}\label{eq:nr} \bar{n}(r)= \int_{R_{\rm c}}^{R_E} n_R(r) N_\mathrm{st}(R) \,\mathrm{d} R \end{equation} for $0\leq r \leq L$, where the theoretical model describes a density profile characterized by gas depletion in the proximity of the sorting domain. Computing the integral in (\ref{eq:nr}) we obtain \begin{equation} \bar{n}(r) = K_1 +K_2 \log(r), \label{eq:profile_prediction} \end{equation} where $K_1$ and $K_2$ are functions of the model parameters. If $p(r)\,\mathrm{d} r$ is the empirical probability of finding a molecule at a distance between $r$ and $r+\mathrm{d} r$ from the center of mass of a domain, then \begin{equation} \bar{n}(r)=\frac{p(r)}{2\pi r}. \end{equation} The measurement of $\bar{n}(r)$ obtained from the numerical simulations by this procedure is in agreement with a fit of the theoretical prediction (Fig.~\ref{fig:densityprofile}, top). \begin{figure}[tb] \begin{center} ~\vskip0.16cm \includegraphics[width=0.322\textwidth]{fig3.pdf} \end{center} \vspace{1.3cm} \caption{Top: Average density profile $\bar{n}(r)$ of the gas of free molecules at a distance $r$ from the center of supercritical domains, obtained from the simulations, and fitted with the theoretical prediction Eq.~\ref{eq:profile_prediction} ($\phi/k_D=10^{-7}$, $g=10^2$). Bottom: Voronoi decomposition obtained from a set of simulated supercritical sorting domains. } \label{fig:densityprofile} \end{figure} In the phenomenological theory, a central role is played by the dimensionless effective interaction strength $C$. A~convenient expression for $C$, amenable to empirical estimation, can be obtained by inverting Eq.~\eqref{eq:cdn2} and making use of (\ref{eq:nd}) to get \begin{equation}\label{eq:operC} C = \frac{\phi}{m D \,\bar{n}^2}, \end{equation} which is a function of directly measurable quantities, such as the incoming flux $\phi$ and the bulk gas density~$\bar{n}$. The theory predicts that the optimal value $C=C_\mathrm{opt}$ scales as $m^{-h}$, with $h=2$ (cf. Eq.~\ref{eq:coptimal}). Numerical simulations yield the compatible value $h=1.8$ (Fig.~\ref{fig:Coptvsm}, top). \begin{figure}[tb] \begin{center} \includegraphics[width=0.38\textwidth]{fig4.pdf} \end{center} \vspace{-0.5cm} \caption{Top: Optimal effective interaction strength $C_{\rm opt}$ as a function of $m=A_E/A_0$, at fixed $\phi/k_D=10^{-6}$. The red line is a fit with the power law $m^{-h}$, with $h=1.8$.
Bottom: Effective interaction strength $C$ as a function of the microscopic interaction strength~$g$, for different values of the incoming flux.} \label{fig:Coptvsm} \end{figure} One of the main tenets of the phenomenological theory is the existence of a well-defined critical domain size~$A_{\rm c}$, arising from the balance between the mixing power of lateral diffusion and the tendency of sorted molecules to aggregate. In the lattice-gas model, the tendency to aggregation is controlled by the microscopic parameter~$g$, while in the phenomenological theory, an analogous role is played by the effective interaction strength $C$. The operational definition provided by Eq.~\ref{eq:operC} makes it possible to determine $C$ from the simulated molecule density~$\bar{n}$ as a function of model parameters (Fig.~\ref{fig:Coptvsm}, bottom). Consistently with its interpretation as an effective interaction strength, $C$ is observed to be a nonlinear, monotonically increasing function of the microscopic parameter~$g$. The critical domain size $A_\mathrm{c}$ is a central control parameter of the molecular distillation process, but there is no simple analytical expression for it in the framework of the phenomenological theory. Explicit approximate expressions for the critical size can be obtained using classical metastability analysis in quasi-equilibrium lattice-gas models (see App.~\ref{app:meta} and references therein). Such an analysis predicts that $A_{\rm c}$ is a monotonically decreasing function of the \textit{microscopic} interaction strength between sorted molecules, which, however, is not practically measurable. For this reason, in the next Section we provide an operational definition of critical size that can be more directly related to the analysis of experimental observations. \section{Operational definition of the critical size}\label{sec:criticalsize} In experimental studies of molecular sorting, domain ``trajectories'' have been observed to fall into two classes, depending on their fate~\cite{EBV+04,AAM+13,WCM+20}: \textit{productive} trajectories, where the domain is finally extracted as a part of a lipid vesicle, and \textit{unproductive} trajectories, where the domain progressively disassembles and is ultimately dissolved. It is worth observing here that these are properties of the domain \textit{history}, and not of its state at a given instant. However, for simplicity, in what follows we will call \textit{productive} or \textit{unproductive} those domains that belong to productive or unproductive trajectories, respectively. In our lattice-gas model, productive and unproductive domains can be directly distinguished by tracking their evolution in time, and checking whether their trajectory ends up with an extraction event, or not~(Fig.~\ref{fig:trajectories}). The classification into productive and unproductive trajectories can be used to provide a natural, operational definition of critical size, applicable to the analysis of actual experimental data. \begin{figure}[tb] \begin{center} \includegraphics[width=0.37\textwidth]{fig5.pdf} \end{center} \vspace{-0.5cm} \caption{Time evolution of the size of productive (blue) and unproductive (red) sorting domains, from numerical simulation of the lattice-gas model ($\phi/k_D=10^{-6}$, $g=20$).
} \label{fig:trajectories} \end{figure} \begin{figure*}[tb] \begin{center} \includegraphics[width=1\textwidth]{fig6.pdf} \end{center} \vskip -0.5cm \caption{Left: Empirical histograms of domain sizes for productive (blue) and unproductive (red) domains obtained from numerical simulations of the lattice-gas model ($\phi/k_D=10^{-7}$, $g=20$). Center: Probability of a domain being productive or unproductive, conditioned on its size~$A$. The vertical dashed lines mark the position of the critical size $\mathcal{A}_\mathrm{c}$, which can be found, according to (\ref{prob3}), where the frequency of productive domains surpasses the frequency of unproductive domains (left), or~equivalently, according to (\ref{prob2}), where the conditional probability of a domain of size $A$ being productive exceeds $1/2$ (center). Right: Critical size~$\mathcal{A}_\mathrm{c}$ as a function of the interaction strength~$g$ for different values of the incoming flux~$\phi/k_D$.} \label{fig:criticalsize} \end{figure*} \begin{figure*}[tb] \begin{center} \includegraphics[width=0.7\textwidth]{fig7.pdf} \end{center} \vskip -0.5cm \caption{Left: Full histogram of all domain sizes ($\phi/k_D=10^{-7}$, $g=20$). The lines are fits with Eq.~\ref{eq:domainsize1} (red) for $A<\tilde{A}_\mathrm{c}$, and with Eq.~\ref{eq:domainsize2} (blue) for $\tilde{A}_\mathrm{c}<A<A_E$. The $A>A_E$ tail depends on the details of the extraction mechanism and is therefore non-universal. Right: Numerical estimate of the prefactor $N_0$ appearing in Eq.~\ref{eq:domainsize2}, as a function of the incoming flux $\phi/k_D$, in~the optimal region. The red line is a fit with the power-law $\phi^f$ with $f = 0.54$. } \label{fig:domainsize} \end{figure*} Let us define the `operational' critical size as the value $\mathcal{A}_\mathrm{c}$ such that a domain of size $\mathcal{A}_\mathrm{c}$ has 50\% probability of being productive: \begin{equation} P( \mathrm{prod.}|\mathcal{A}_\mathrm{c} ) = \frac{1}{2}, \label{eq:prob} \end{equation} (similar definitions have been adopted in previous works, see e.g. Ref.~\citenum{ryu2010numerical}). In terms of (joint) probability density functions (pdf's), Eq.~\ref{eq:prob} is equivalent to \begin{equation} p(\mathcal{A}_\mathrm{c},\mathrm{prod.}) =p(\mathcal{A}_\mathrm{c},\mathrm{unprod.}), \label{eq:firstcrit} \end{equation} i.e., the critical size is found at the intersection of the joint pdf's of, respectively, productive and unproductive domain sizes. Under a few additional hypotheses (see App.~\ref{app:alternative_english}), Eq.~\ref{eq:prob} implies \begin{equation} P( \mathrm{prod.}|A ) \geq \frac{1}{2} \quad\mathrm{for\ all}\quad A\geq\mathcal{A}_\mathrm{c} \label{prob2} \end{equation} consistently with the phenomenological picture, where smaller domains decay with high probability, while, once a domain exceeds the critical size, the probability that it will continue to grow up to the extraction size is larger than the probability that it will disappear. In terms of the joint pdf's of, respectively, productive and unproductive domains, Eq.~\ref{prob2} is in turn equivalent to the condition that \begin{equation} p(A,\mathrm{prod.}) \geq p(A,\mathrm{unprod.}) \quad \mathrm{for\ all} \quad A\geq\mathcal{A}_\mathrm{c}. \label{prob3} \end{equation} Either (\ref{prob2}) or (\ref{prob3}) can be conveniently applied to the analysis of empirical data, which are given as integer or floating-point numbers of finite precision.
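A minimal numerical implementation of the second criterion, with binning choices of our own, reads:
\begin{verbatim}
# Sketch: operational critical size from samples of productive and
# unproductive domain sizes, i.e. the smallest size beyond which the
# productive counts dominate (see the two conditions above).
import numpy as np

def critical_size(sizes_prod, sizes_unprod, bins=100):
    edges = np.histogram_bin_edges(
        np.concatenate([sizes_prod, sizes_unprod]), bins=bins)
    hp, _ = np.histogram(sizes_prod, edges)
    hu, _ = np.histogram(sizes_unprod, edges)
    dominant = hp >= hu
    for i in range(len(dominant)):       # first bin from which productive
        if dominant[i:].all():           # domains always dominate
            return edges[i]
    return np.nan
\end{verbatim}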
The critical size $\mathcal{A}_\mathrm{c}$ can thus be estimated either from conditional frequencies (using Eq.~\ref{prob2}) or from frequency histograms of domain sizes (using Eq.~\ref{prob3}), as long as productive and unproductive domains can be effectively discriminated. As an example, in Fig.~\ref{fig:criticalsize}~(left), $\mathcal{A}_\mathrm{c}$ is found at the approximate intersection of the (joint) frequency histograms of, respectively, productive and unproductive domains. The existence of this intersection appears to be guaranteed by the fact that $p(A,\mathrm{unprod.})$ is a decreasing function of $A$, while $p(A,\mathrm{prod.})$ is initially increasing. Fig.~\ref{fig:criticalsize}~(center) shows that the probability of a domain being productive increases with its size, while the complementary probability of being unproductive decreases. The above procedure makes it possible to compute $\mathcal{A}_\mathrm{c}$ from numerical simulations for different values of model parameters. The critical size~$\mathcal{A}_\mathrm{c}$ is thus found to be a decreasing function of both the microscopic interaction strength~$g$ and the incoming molecule flux~$\phi$ (Fig.~\ref{fig:criticalsize}, right). \begin{figure*}[tb] \begin{center} \includegraphics[width=1.0\textwidth]{fig8.pdf} \end{center} \vskip -0.7cm \caption{Left: The number of free molecules per unit area decreases for increasing interaction strength~$g$ (magenta), while the number of molecules found inside of sorting domains has an increasing trend at large~$g$ (orange). As a consequence, the total number of molecules per unit area (black) has a minimum, which marks the position of the optimal sorting regime~\cite{ZVS+21}. Center: In turn, the number of molecules inside of sorting domains (orange) is a non-monotonic function of the interaction strength~$g$. This can be understood as follows. The number of molecules inside of unproductive domains (red) decreases with increasing interaction strength, while the number of molecules inside of productive domains (blue) increases. As a consequence, the total number of molecules found inside of sorting domains of any of the two types (orange) has a minimum close to the optimal sorting regime. Right: Similarly, the number of unproductive domains per unit area (red) decreases with the interaction strength, whereas the number of productive domains (blue) increases. As a consequence, the total number of sorting domains of the two types (orange) has a minimum for intermediate interaction strength, close to the optimal sorting regime. Simulations performed with $\phi/k_D=10^{-8}$. The number of both productive and unproductive domains increases with increasing $\phi$ (not shown here). } \label{fig:numofdomains} \end{figure*} \begin{figure*}[tb] \begin{center} \includegraphics[width=1\textwidth]{fig9.pdf} \end{center} \vskip -0.5cm \caption{Statistical properties of productive (blue) and unproductive (red) domains for incoming flux $\phi/k_D=10^{-6}$ and interaction strength $g=10^2$ (top, $5\cdot 10^4$ domain trajectories) and $g=10^1$ (bottom, $1.5\cdot 10^6$ domain trajectories), collected over a $3\cdot 10^6/k_D$ time interval. Simulated trajectories were classified into productive and unproductive depending on whether they ended up in an extraction event, or not. First two columns: Scatter plots of domain lifetimes vs. maximum sizes (first column) and of DASC indicators $d_1,d_2$ (second column). Last two columns: frequency distributions of maximum sizes and lifetimes.
Insets: complementary cumulative frequency distributions. Domain sizes are given as numbers of occupied lattice sites, lifetimes are measured in units of $10^3/k_D$.} \label{fig:classification} \end{figure*} \begin{figure}[tb] \begin{center} \hspace{-0.5cm}\includegraphics[width=0.51 \textwidth]{fig10.pdf} \end{center} \vskip -0.5cm \caption{ Comparison between the experimental distributions of lifetimes (left) and maximum sizes (right) of unproductive (red lines) and productive (blue lines) domains from Ref.~\citenum{WCM+20}, Fig.~2B,C (kindly shared by Dr. Xinxin Wang), and corresponding distributions obtained from simulations of the lattice-gas model (red and blue histograms, respectively) with fitted values of the model parameters ($g=6.5$, $\phi/k_D = 10^{-6}$) and fitted rescaling factors for lifetime and domain size units ($k_D=715\,\mathrm{s}^{-1}$, 1~lattice site\;=\;0.3~a.u.). Lower cutoffs on lifetime and maximum size approximately equal to the values reported in the experimental data were used. In the experiments, productive and unproductive domains were classified by DASC. In the analysis of simulated data (histograms), use was made of both the exact classification obtained directly from the simulations (top) and of an a posteriori application of DASC to the numerically generated domains (bottom), obtaining similar results. } \label{fig:comparison} \end{figure} Having at our disposal an operational definition of critical size, we are now in a position to check numerically the validity of theoretical predictions about the shape of the domain size distribution. The theory predicts functionally different forms for the size distributions of subcritical and supercritical domains, respectively. In the subcritical region, transient domains continuously form and dissolve. This quasi-equilibrium state is approximately described by classical nucleation theory~\cite{BD35,Zel43}, which predicts that the stationary number density for domains of size $A<A_{\rm c}$ is \begin{equation}\label{eq:domainsize1} N_{\rm st}^\mathrm{sub}(A) = N^\mathrm{sub}_{0} \mathrm{e}^{\lambda \left(A^{1/2}-A_{\rm c}^{1/2}\right)^2}, \end{equation} where $\lambda$ is a constant, which is expected to be proportional to the interaction strength between sorted molecules. For $A>A_{\rm c}$, according to Eq.~\ref{solutionSmolu}, the shape of the number density is instead of the logarithmic type: \begin{equation}\label{eq:domainsize2} N_{\rm st}(A)= N_{0} \log \frac{A_L}{A}, \end{equation} with ${N}_0 \sim \phi^{1/2}$. By fitting the full histogram of all domain sizes with Eq.~\ref{eq:domainsize1} for small $A$ and with Eq.~\ref{eq:domainsize2} for large $A$, and by imposing the continuity condition \begin{equation} N^\mathrm{sub}_0=N_0 \log \frac{A_L}{A_{\rm c}} \end{equation} one obtains an estimate $\tilde{A}_\mathrm{c}$ of the critical size $A_\mathrm{c}$ in the framework of classical nucleation theory (see Fig.~\ref{fig:domainsize}, left). The value $\tilde{A}_\mathrm{c}$ thus obtained is of the same order as the previously introduced value $\mathcal{A}_\mathrm{c}$, the difference being due to the presence of a small tail of unproductive domains with $A>\mathcal{A}_\mathrm{c}$ (see Fig.~\ref{fig:criticalsize}, left). The definition of $\mathcal{A}_\mathrm{c}$ has a clear probabilistic interpretation and is independent of phenomenological assumptions about the underlying process of domain formation.
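The fitting procedure itself can be sketched schematically as follows (our own parametrization; $N_0$, $A_L$, $A_\mathrm{c}$ and $\lambda$ are treated as fit parameters, with the continuity condition built into the subcritical branch):
\begin{verbatim}
# Sketch: joint fit of the sub- and supercritical branches of the
# empirical domain size distribution, with the continuity constraint
# N0_sub = N0 * log(A_L / A_c) imposed by construction.
import numpy as np
from scipy.optimize import curve_fit

def n_st(A, N0, A_L, A_c, lam):
    sub = N0 * np.log(A_L / A_c) * np.exp(lam * (np.sqrt(A) - np.sqrt(A_c))**2)
    sup = N0 * np.log(A_L / A)
    return np.where(A < A_c, sub, sup)

# usage (A_vals, N_vals: histogram abscissas/ordinates, restricted to A < A_E):
# popt, _ = curve_fit(n_st, A_vals, N_vals, p0=(1.0, 1e4, 50.0, 0.1))
# A_c_tilde = popt[2]
\end{verbatim}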
On the other hand, the estimate~$\tilde{A}_\mathrm{c}$ obtained by the above empirical fitting procedure can be used when it is not possible to discriminate between productive and unproductive domains. A numerical estimate of the prefactor $N_0$ for different values of the incoming molecule flux $\phi$ gives $N_0\sim \phi^f$ with $f=0.54$, in reasonably good agreement with the theoretical value $f=1/2$ (Fig.~\ref{fig:domainsize}, right). The systematic discrimination of productive and unproductive domains makes it possible to unravel additional aspects of the phenomenology. Optimal sorting takes place when the total number of molecules in the system is minimal~\cite{ZVS+21} (Fig.~\ref{fig:numofdomains}, left). In a neighborhood of this optimal value, one also observes a minimum in the number of molecules contained in the domains (Fig.~\ref{fig:numofdomains}, center), and in the number of domains itself (Fig.~\ref{fig:numofdomains}, right). This is a somewhat paradoxical effect since, at first sight, one would expect that a larger number of sorting domains could increase the speed of the sorting process. Instead, sorting turns out to be most efficient precisely when the number of sorting domains is close to a minimum. As a matter of fact, when the interaction strength increases, the number of molecules in unproductive domains decreases, while the number of those in productive domains increases. As a consequence, their sum, i.e. the number of molecules in any of the two types of domains, has a minimum (Fig.~\ref{fig:numofdomains}, center). A similar argument applies directly to the total numbers of productive and unproductive domains: the number of unproductive domains decreases when the interaction strength increases, while the number of productive domains increases, as predicted by Eq.~\ref{eq:enned}~\footnote{Recalling also that the macroscopic interaction strength $C$ is a monotonically increasing function of the microscopic parameter~$g$ (Fig.~\ref{fig:Coptvsm}, bottom).}. This leads to the appearance of an intermediate minimum in the total number of domains (Fig.~\ref{fig:numofdomains}, right). The emerging picture is that the efficiency of the sorting process is not favored by a proliferation in the number of sorting domains: in that case, the flux of incoming molecules has to be shared among a larger number of domains, and the growth rate of individual domains is slowed down. A balance has therefore to be struck between two competing requirements: the interaction strength should be large enough to allow for easy nucleation of new sorting domains, but small enough to avoid their unnecessary proliferation. These theoretical predictions are compatible with earlier experimental work where the strength of interaction between transferrin receptors on cell plasma membranes was experimentally controlled, and higher interaction strength was shown to induce higher rates of generation of productive sorting domains, and lower numbers of unproductive events~\cite{LAD+10}. \section{Interpretation of experimental data} \label{sec:classification} The correct classification of productive/unproductive trajectories in data obtained from living-cell experiments is a challenging task. Several approaches have been adopted. Productive trajectories can be singled out by detecting bursts in the concentration of specific molecules involved in the process of vesicle detachment, such as dynamin~\cite{FC12,GCH+14,EBV+04}.
Other approaches rely on the measurement of extremal properties of domain trajectories, such as the maximum size reached by domains, or their lifetime~\cite{EBV+04,LMY+09,AAM+13,HCD15,ZVS+21}, which are expected to be less dependent on the small-scale details of the stochastic process. More recently, a new classification method based on a ``disassembly asymmetry score'' (DASC)~\cite{WCM+20} has been proposed. In this context, productive and unproductive trajectories are discriminated by clustering the values of a set of statistical indicators that compare properties of the backward and forward histories of the domains~\cite{WCM+20}. The effectiveness of some of these approaches can be tested on numerical simulations of the lattice-gas model discussed in the previous Sections, where the productive vs. unproductive classification can be performed exactly. The first two columns of Fig.~\ref{fig:classification} show scatter plots of maximum size vs. lifetime, and of the DASC indicators $d_1,d_2$~\cite{WCM+20}, for $g=10^2$ (top) and $g=10^1$ (bottom). Different colors are used for productive (blue) and unproductive (red) trajectories. For $g=10^2$ the two populations are clearly separated, and can be easily discriminated automatically using standard clustering methods. For $g=10^1$, instead, the representative points of the two populations start to overlap, and clustering methods are likely to return a certain number of erroneously classified points. For $g=10^2$ the existence of two distinct populations of domain trajectories is reflected in the bimodal shape of the frequency distributions of maximum sizes and lifetimes (Fig.~\ref{fig:classification}, last two columns, top). This clear separation corresponds to a distinct plateau in the (complementary) cumulative frequency distribution (insets). For $g=10^1$ instead (bottom), the frequency distributions of the two populations start to overlap and the bimodal character of the two frequency distributions tends to disappear. The loss of discriminating power takes place approximately for values of the interaction strength such that the critical size~$\mathcal{A}_\mathrm{c}$ becomes of the order of the extraction size~$A_E$ (cf. Fig.~\ref{fig:criticalsize}, right). Interestingly, the model predictions for the frequency distributions of the maximum sizes and lifetimes of sorting domains are similar to those resulting from experimental observations. In particular, the maximum size and lifetime distributions for unproductive domains show a rapid monotonic decay, while the corresponding distributions for productive domains show a distinct maximum and a more slowly decaying tail (Fig.~\ref{fig:classification}, last two columns). Both of these features have been observed in experiments of endocytic sorting~\cite{AAM+13,WCM+20}, where productive and unproductive domains correspond to clathrin-coated pits (CCPs) and abortive coats (ACs), respectively. (A~third population of outlier traces (OTs)~\cite{WCM+20}, characterized by short lifetimes and large sizes, likely corresponds to cytoplasm-originated events~\cite{HCD15} and is not observed in the simulations.) We looked for model parameters providing the best fit of simulated frequency distributions with data from Fig.~2B,C of Ref.~\citenum{WCM+20}, where productive and unproductive domains were classified using DASC.
By a single fit of the two parameters of the model and of two rescaling factors for the time and length scales, good agreement between simulation and experimental data was found for both the lifetime and maximum size distributions, simultaneously for both productive and unproductive domains (Fig.~\ref{fig:comparison}). The frequency histograms obtained from the exact classification of simulated productive and unproductive domains (Fig.~\ref{fig:comparison}, top) were compared with the frequency histograms obtained with the same model parameters, where, however, simulated domains were classified by the DASC method, yielding similar results (Fig.~\ref{fig:comparison}, bottom). \vskip 0.3cm \section{Conclusions}\label{sec:conclusions} To generate and maintain their internal order and guarantee proper physiological functioning, eukaryotic cells rely on a sophisticated process by which specific biomolecules are sorted and concentrated on small lipid vesicles, which are later delivered to appropriate membrane subregions through well-defined pathways. A recently proposed phenomenological theory of molecular sorting assumes that this process emerges from the coupling of two simpler biophysical mechanisms~\cite{ZVS+21}: a) the tendency of similar molecules to phase separate into localized sorting domains, and b) domain-induced membrane bending, leading to the formation and ultimate detachment of specifically enriched vesicles. A central notion of the theory of phase separation is that only domains larger than a critical size $A_\mathrm{c}$ are able to grow indefinitely, while smaller domains tend to be dissolved. In combination with a concurrent process of domain extraction at a larger scale $A_E > A_\mathrm{c}$, this introduces a sort of ``physical checkpoint'', such that only domains that are able to reach the ``critical mass'' $A_\mathrm{c}$ can drive extraction (distillation) events, and are thus ``productive''. This scenario is consistent with experimental observations where, in addition to ``productive'' long-lived domains that grow into vesicles that are ultimately extracted from the membrane, a large number of short-lived, small domains, which tend to disassemble and ultimately disappear, is also detected. The existence of such a ``physical checkpoint'' is reflected in the particular shape of the size distribution for productive domains (Eq.~\ref{solutionSmolu}), which exhibits a maximum at sizes of the order of the critical size $A_\mathrm{c}$, a slowly (logarithmically) decaying intermediate region, followed by a non-universal decaying tail at scales larger than the extraction threshold $A_E$ (Fig.~\ref{fig:criticalsize}, left, blue histogram). On the other hand, the existence of a biochemical checkpoint has also been postulated in this regard~\cite{LMY+09,AAM+13}. It~would be quite interesting to further investigate the relation between these two effects. It is worth observing here that in the actual biophysical process, a wealth of different biomolecular species takes part in the formation and stabilization of sorting domains. In the theoretical model, the complex interplay between these different species is effectively encoded into the value of the single dimensionless interaction parameter~$g$. Intriguingly, even such a highly simplified abstract model, founded on basic notions from the theory of phase separation, is able to capture relevant features of the real process. This lends support to the hypothesis that endocytic sorting is driven by an underlying phase separation process.
We have here considered a spatially homogeneous probability of nucleation of sorting domains. It has been observed, however, that nucleation events may cluster in ``hotspots'' or ``nucleation organizers''~\cite{NAM+11}. The origin of such hotspots is an interesting open question, which deserves to be investigated in the framework of phase separation theory. \acknowledgments We gratefully acknowledge useful discussions with Igor Kolokolov, Vladimir Lebedev, Guido Serini, Carlo Campa and Roland Wedlich-S\"{o}ldner. We thank Xinxin Wang, Sandra Schmid and Gaudenz Danuser for kindly sharing their data and for their insightful observations. Numerical calculations were made possible by a CINECA-INFN agreement providing access to CINECA high-performance computing resources. \vskip -0.3cm
{ "attr-fineweb-edu": 1.916016, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUgd85qhLBxVCRg7F_
\section{Introduction} In this paper, we consider the design and analysis of average-consensus protocols (averaging vectors in $\mathbb{R}^n$) in the presence of network interference. Each agent, while communicating locally with its neighbors for consensus, causes interference in other communication links. We assume that these interferences are additive and lie on low-dimensional subspaces. Such interference models have been widely used in several applications, e.g. electromagnetic brain imaging~\cite{Anonymous:HlXifkm4}, magnetoencephalography~\cite{Gutierrez:2004ca,Sekihara:2004bn}, beamforming~\cite{McCloud:1998im,Dogandzic:2002jm}, and multiple-access channels~\cite{Lupas:1989bz,Varanasi:1998dx}. Interference cancellation, thus, has been an important subject of study in the aforementioned areas towards designing matched detectors, adaptive beamformers, and generalized hypothesis testing~\cite{Scharf:1994jv,McCloud:1997je,Goldstein:1997jb,Wang:1998cm,Dogandzic:2007dj,Monsees:2013if}. As distributed architectures gain traction, information must be processed in a distributed fashion for the purposes of learning, inference, and actuation. Average-consensus, thus, is a fundamental notion in distributed decision-making, see~\cite{jadbabailinmorse03,Xiao05distributedaverage,Mesbahi-parameter,Giannakis-est,usman_hdctsp09,olfati:cdc09,Sayed-LMS,KarSIAM2013} among others. When the inter-agent communication is noiseless and interference-free, the standard protocol is developed in~\cite{boyd:04}. Subsequently, a number of papers~\cite{Bucklew,Kashyap,Tuncer} consider average-consensus in imperfect scenarios. Reference~\cite{Kar_TSP2009} considers consensus with link failures and channel noise, while~\cite{Chen_ECDC2011} addresses asymmetric links with asymmetry in packet losses. Consensus under stochastic disturbances is considered in~\cite{Aysal_TIT2010}, while~\cite{Nazer_JSTSP2011} studies a natural superposition property of the communication medium and uses computation codes to achieve energy-efficient consensus. In contrast to the past work outlined above, we focus on an algebraic model for network interference. We assume that the underlying communication on any link suffers from additive interference caused by the communication of other agents following their own consensus protocol. The corresponding interference subspace, in general, depends on the communication link and the interfering agent. Clearly, if the interference by an agent is persistent in all dimensions~($\mathbb{R}^n$), there is no way to recover the true average unless schemes similar to interference alignment~\cite{Jafar11} are used. In these alignment schemes, the data is projected onto higher dimensions such that the interferences and the data lie in different low-dimensional subspaces; clearly, requiring an increase in the communication resources. On the other hand, if the interference from each agent already lies in (possibly different) low-dimensional subspaces, the problem we address is whether one can exploit this low-dimensionality for interference cancellation, and subsequently, for consensus. Furthermore, we address how much information can be recovered when the collection of the local interferences spans the entire vector space,~$\mathbb{R}^n$. Our contribution in this context is to develop information alignment strategies for interference cancellation and derive a class of (vector) consensus protocols that lead to a meaningful consensus.
In particular, we show that the proposed alignment achieves the average in a subspace whose dimension is complementary to the maximal dimension of the interference subspaces (over all of the communication links). To be specific, if agent~$j$ sends~$\mathbf{x}^j\in\mathbb{R}^n$ to agent~$i$, agent~$i$ actually receives\footnote{In general, the interference matrix, $\Gamma$, may depend on the particular link,~$j\rightarrow i$, and the interfering agent,~$m$, and will be denoted by~$\Gamma_{ij}^m$.}~$\mathbf{x}^j + \sum_m\Gamma\mathbf{x}^m$, with~$\overline{\gamma}\triangleq\mbox{\upshape{rank}}(\Gamma)<n$. In this context, we address the following challenges: \begin{inparaenum}[(i)] \item The received signal is corrupted by several interferers, each on a distinct (low-rank) subspace. Is it possible to design a \emph{local} operation that cancels each interference? \item The aforementioned cancellation has to be \emph{locally} reversible (to be elaborated later) in order to build a meaningful consensus. \item The signal, hampered by interference, passes through consensus weights,~$w_{ij}$, iteratively. Notice again the received signal,~$\sum_{j\in\mathcal{N}_i}w_{ij}(\mathbf{x}^j + \sum_m\Gamma\mathbf{x}^m)$, at agent~$i$, where~$\mathcal{N}_i$ is the set of neighbors of agent~$i$. An arbitrarily small disturbance due to the interference can result in perturbing the spectral radius of the consensus weight matrix to~$1+\varepsilon$, which forces the iterations to converge to~$0$ when~$\varepsilon<0$, or diverge when~$\varepsilon>0$,~\cite{usman_hdctsp09}. \end{inparaenum} We explicitly assume that no agent in the network knows how many and which agents may be interfering with its received signals. Additionally, we assume that only the null space of the underlying interferences is known \emph{locally} (singular values and basis vectors may not be known). Within these assumptions, it is clear that the aforementioned challenges are non-trivial. What we describe in this paper are completely local \emph{information alignment} strategies that not only ensure that average-consensus is reached, but also characterize where this consensus is reached. In particular, we show that the average of the initial conditions, vectors in~$\mathbb{R}^n$, can be recovered in the subspace whose dimension, $n-\overline{\gamma}$, is complementary to the (maximal) dimension, $\overline{\gamma}$, of the local interferences. The rest of the paper is organized as follows. Section \ref{pre_not} outlines the notation and gathers some useful facts from linear algebra. Section \ref{pf} formulates the problem while Section \ref{aca} presents a simple architecture, termed as \emph{uniform interference}, and develops the information alignment scheme. Section \ref{aca} then identifies two generalizations of the uniform interference, namely \emph{uniform outgoing interference} and \emph{uniform incoming interference}, subsequently treated in Sections \ref{s_uoi} and \ref{s_uii}, respectively. In each of these sections, we provide simulations to illustrate the main theoretical results and their implications. Section \ref{s_discuss} provides a summary and discussion of the main results and Section~\ref{s_conclude} concludes the paper. \section{Notation and Preliminaries}\label{pre_not} We use lowercase bold letters to denote vectors and uppercase italics for matrices (unless clear from the context). The symbols~$\mathbf{1}_n$ and~$\mathbf{0}_n$ are the~$n$-dimensional column vectors of all~$1$'s and all~$0$'s, respectively.
The identity and zero matrices of size~$n$ are denoted by~$I_n$ and~$\mathbf{0}_{n\times n}$, respectively. We assume a network of~$N$ agents indexed by~$i=1,\ldots,N$, connected via an undirected graph,~$\mathcal{G}=(\mathcal{V},\mathcal{E})$, where~$\mathcal{V}$ is the set of agents, and~$\mathcal{E}$ is the set of links,~$(i,j)$, such that agent~$j\in\mathcal{V}$ can send information to agent~$i\in\mathcal{V}$, i.e.~$j\rightarrow i$. Over this graph, we denote the neighbors of agent~$i$ as~$\mathcal{N}_i$, i.e. the set of all agents that can send information to agent~$i$: $\mathcal{N}_i = \{j~|~(i,j)\in\mathcal{E}\}.$ In the entire paper, the initial condition at an agent,~$i\in\mathcal{V}$, is denoted by an~$n$-dimensional vector,~$\mathbf{x}_0^i\in\mathbb{R}^n$. For any arbitrary vector,~$\mathbf{x}_0^i\in\mathbb{R}^n$, we use~$\oplus \mathbf{x}_0^i$ to denote the subspace spanned by~$\mathbf{x}_0^i$, i.e. the collection of all~$\alpha\mathbf{x}_0^i$, with~$\alpha\in\mathbb{R}$. Similarly, for a matrix,~$A\in\mathbb{R}^{n\times n}$, we use~$\oplus A$ to denote the (range space) subspace spanned by the columns of~$A$: \begin{eqnarray*} \oplus A = \left\{\sum_{i=1}^n\alpha_i \mathbf{a}_i~|~\alpha_i\in\mathbb{R}\right\},\qquad A = \left[ \begin{array}{ccc} \mathbf{a}_1 & \ldots & \mathbf{a}_n \end{array} \right]. \end{eqnarray*} For a collection of matrices,~$A_j\in\mathbb{R}^{n\times n}$,~$j=1,\ldots,N$, we use~$\oplus_jA_j$ to denote the subspace spanned by all of the columns in all of the~$A_j$'s: let $A_j = \left[ \begin{array}{ccc} \mathbf{a}_{j1} & \ldots & \mathbf{a}_{jn} \end{array} \right]$, then \begin{eqnarray*} \oplus_{j} A_j = \left\{\sum_{j=1}^N\beta_j\sum_{i=1}^n\alpha_i \mathbf{a}_{ji}~|~\alpha_i,\beta_j\in\mathbb{R}\right\}. \end{eqnarray*} Let~$\mbox{\upshape{rank}}(A)=\underline{\gamma},$ for some non-negative integer,~$\underline{\gamma}\leq n$, then~$\dim(\oplus A)=\mbox{\upshape{rank}}(A)=\underline{\gamma}$. The pseudo-inverse of~$A$ is denoted by~$A^\dagger\in\mathbb{R}^{n\times n}$; the orthogonal projection,~$\widetilde{\mathbf{x}}_0^i$, of an arbitrary vector,~$\mathbf{x}_0^i\in\mathbb{R}^n$, on the range space,~$\oplus A$, is given by the matrix~$I_{A}=AA^\dagger$, i.e. \begin{eqnarray} \widetilde{\mathbf{x}}_0^i = I_A\mathbf{x}_0^i = AA^\dagger\mathbf{x}_0^i. \end{eqnarray} With this notation,~$\widetilde{\mathbf{x}}_0^i\in{\oplus A}\subseteq\mathbb{R}^n$. Clearly,~$I_A ^2 = AA^\dagger AA^\dagger = AA^\dagger = I_A$ is a projection matrix from the properties of pseudo-inverse:~$AA^\dagger A = A$ and~$A^\dagger A A^\dagger = A^\dagger$. Note that when~$\mathbf{x}_0^i\in{\oplus A}$, then~$I_A \mathbf{x}_0^i=\mathbf{x}_0^i.$ The Singular Value Decomposition (SVD) of~$A$ is given by~$A=U_AS_AV^\top_A$ with~$U_AU_A^\top=I_n,V_AV_A^\top=I_n$, then~$A^\dagger = V_AS_A^\dagger U_A^\top,$ where~$S_A^\dagger$ is the pseudo-inverse of the diagonal matrix of the singular values,~$S_A$ (with~$0^\dagger = 0$). When~$A$ is full-rank, we have~$A^\dagger=A^{-1},I_A =I_n$. Since~$\underline{\gamma} = \mbox{\upshape{rank}}(A)$, the remaining~$\overline{\gamma}=n-\underline{\gamma}$ singular values are zero, and the singular vectors ($U_A,V_A$) can be arranged (zeros first) such that \begin{eqnarray} I_A & = AA^\dagger = U_AS_AV^\top_A V_AS_A^\dagger U_A^\top = U_AS_A S_A^\dagger U_A^\top,\\ & = U_A\left[ \begin{array}{cc} \mathbf{0}_{\overline{\gamma}\times\overline{\gamma}}\\ &I_{\underline{\gamma}} \end{array} \right] U_A^\top.
\end{eqnarray} From the above, the projection matrix,~$I_A$, is symmetric with orthogonal eigenvectors (or left and right singular vectors),~$U_A$, such that its eigenvalues (singular values) are either~$0$'s or~$1$'s. For some~$W=\{w_{ij}\}\in\mathbb{R}^{N\times N}$ and some~$A=\{a_{ij}\}\in\mathbb{R}^{n\times n}$ with~$w_{ij},a_{ij}\in\mathbb{R}$, the matrix Kronecker product is \begin{eqnarray} W\otimes A = \left[\begin{array}{cccc} w_{11}A & w_{12}A & \ldots&w_{1N}A\\ \vdots & \vdots & \ddots&\vdots\\ w_{N1}A & w_{N2}A & \ldots&w_{NN}A\\ \end{array}\right], \end{eqnarray} which lies in~$\mathbb{R}^{nN\times nN}$. It can be verified that~$I_N\otimes A$ is a block-diagonal matrix where each diagonal block is~$A$ with a total of~$N$ blocks. We have $W\otimes A = (W\otimes I_n) (I_N\otimes A).$ The following properties are useful in the context of this paper. \begin{eqnarray} \left(W\otimes I_n\right)\left(I_N\otimes A\right) &=& \left(I_N\otimes A\right)\left(W\otimes I_n\right),\\ \left(W\otimes I_n\right)^k &=& (W^k\otimes I_n), \end{eqnarray} for some non-negative integer,~$k$. More details on these notions can be found in~\cite{hornJ}. \section{Problem Formulation}\label{pf} We consider average consensus in a multi-agent network when the inter-agent communication is subject to unwanted interference, i.e. the desired communication,~$\mathbf{x}^j\in\mathbb{R}^n$, from agent~$j\in\mathcal{V}$ to agent~$i\in\mathcal{V}$ has an additive term,~$\mathbf{z}^{ij}\in\mathbb{R}^n$, resulting in agent~$i$ receiving~$\mathbf{x}^j + \mathbf{z}^{ij}$ from agent~$j$. We consider the case when this unwanted interference is linear. In particular, every link,~$j\rightarrow i$ or~$(i,j)\in\mathcal{E}$, incurs the following additive interference: \begin{eqnarray} \mathbf{z}^{ij} = \sum_{m\in\mathcal{V}}a_{ij}^m\Gamma_{ij}^m\mathbf{x}^{m}, \end{eqnarray} where~$a_{ij}^m=1$ if agent~$m\in\mathcal{V}$ interferes with~$j\rightarrow i$, and~$0$ otherwise; and~$\Gamma_{ij}^m\in\mathbb{R}^{n\times n}$ is the interference gain when~$m\in\mathcal{V}$ interferes with the~$j\rightarrow i$ communication. What agent~$i$ actually receives from agent~$j$ is thus: \begin{eqnarray} \mathbf{x}_k^j + \sum_{m\in\mathcal{V}} a_{ijk}^m \Gamma_{ijk}^m \mathbf{x}_k^m, \end{eqnarray} at time~$k$, where the subscript `${ijk}$' introduces the time dependency on the corresponding variables, see Fig.~\ref{fig_gl}. \begin{figure}[h!] \centering \includegraphics[width=2.5in]{intStruc3gl.pdf} \caption{Interference model: Note that agent~$j$ may also interfere with~$j\rightarrow i$ communication, i.e.~$m_1$ or~$m_2$ can be~$j$. This may happen when agent~$j$'s transmission to agents other than agent~$i$ interferes with the~$j\rightarrow i$ channel.} \label{fig_gl} \end{figure} Given the interference setup, average-consensus implemented on the multi-agent network is given by \begin{eqnarray}\label{cpi1} \mathbf{x}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\left(\mathbf{x}_k^j + \sum_{m\in\mathcal{V}} a_{ijk}^m \Gamma_{ijk}^m \mathbf{x}_k^m\right), \end{eqnarray} for~$k\geq0,i\in\mathcal{V}$, with~$\mathbf{x}_0^i\in\mathbb{R}^n$. Interference is only incurred when~$w_{ij}\neq0$, which is true for each~$j\in\mathcal{N}_i$, in general. In other words, interference is incurred on all the links that are allowed by the underlying communication graph,~$\mathcal{G}$. The protocol in Eq.~\eqref{cpi1} reduces to the standard average-consensus~\cite{boyd:04}, when there is no interference, i.e.
when~${a}_{ijk}^m=0,$ for all~$i,j,k,m$, and converges to\footnote{See~\cite{boyd:04} for relevant conditions for convergence:~$W\mathbf{1}_n=\mathbf{1}_n, \mathbf{1}_n^\top W = \mathbf{1}_n^\top,~\mathcal{G}$ is strongly-connected, and~$w_{ij}\neq0$ for each~$(i,j)\in\mathcal{E}$.} \begin{eqnarray}\label{pavg} \mathbf{x}_\infty^i \triangleq \lim_{k\rightarrow\infty}\mathbf{x}_k^i = \frac{1}{N}\sum_{j=1}^N\mathbf{x}_0^j. \end{eqnarray} However, when there is interference, i.e.~$a_{ijk}^m\neq{0}$, Eq.~\eqref{cpi1}, in general, either goes to zero or diverges at all agents. The former is applicable when the effect of the interference results in a stable weight matrix,~$W=\{w_{ij}\}$, and the latter is in effect when the interference forces the spectral radius of the weight matrix to be greater than unity. The primary reason is that if~$w_{ij}$'s are chosen to sum to~$1$ in each neighborhood (to ensure~$W\mathbf{1}_n=\mathbf{1}_n$), their effective contribution in Eq.~\eqref{cpi2} is not~$1$ because of the unwanted interference. This paper studies appropriate modifications to Eq.~\eqref{cpi1} in order to achieve average-consensus. The design in this paper is based on a novel \emph{information alignment} principle that ensures that the spectral radius of the mixing matrix,~$W$, is not displaced from unity. We assume the following: \begin{enumerate}[(a)] \item \emph{No agent,~$i\in\mathcal{V}$, knows which (or how many) agents are interfering with its incoming or outgoing communication.} \item \emph{The interference structures,~$a_{ijk}^m$ and~$\Gamma_{ijk}^m$, are constant over time,~$k$.} This assumption is to keep the exposition simple and is made without loss of generality as we will elaborate later. \end{enumerate} Under these assumptions, the \emph{standard average-consensus protocol} is given by \begin{eqnarray}\label{cpi2} \mathbf{x}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\mathbf{x}_k^j + \sum_{j\in\mathcal{N}_i} w_{ij}\sum_{m\in\mathcal{V}} a_{ij}^m\Gamma_{ij}^m \mathbf{x}_k^m, \end{eqnarray} for~$k\geq0,\mathbf{x}_0^i\in\mathbb{R}^n$. The goal of this paper is to consider \emph{distributed averaging operations} in the presence of interference not only to establish the convergence, but further to ensure that the convergence is towards a meaningful quantity. To this end, we present a conservative solution to this problem in Section~\ref{aca}, which is further improved in Sections~\ref{s_uoi} and~\ref{s_uii} for some practically relevant scenarios. \section{A Conservative Approach}\label{aca} Before considering the general case within a conservative paradigm, we explore a special case of uniform interference in Sections~\ref{s_ui} and~\ref{ill_ui}. We then provide the generalization in Section~\ref{ui_gen} and shed light on the conservative solution. \subsection{Uniform Interference}\label{s_ui} Uniform interference is when each communication link in the network experiences the same interference gain, i.e.~$\Gamma_{ij}^m = \Gamma_1,\forall i,j,m$. In other words, all of the blocks in the interference channel of Fig.~\ref{fig_gl} represent the same interference gain matrix,~$\Gamma_1\in\mathbb{R}^{n\times n}$. In this context, Eq.~\eqref{cpi2} is given by \begin{eqnarray}\label{cpi3} \mathbf{x}_{k+1}^i &=& \sum_{j\in\mathcal{N}_i} w_{ij}\mathbf{x}_k^j + \sum_{m\in\mathcal{V}} b_i^m\Gamma_1 \mathbf{x}_k^m, \end{eqnarray} where~$b_i^m = \sum_{j\in\mathcal{N}_i} w_{ij}a_{ij}^m$.
Here,~$b_i^m\neq 0$ means that agent~$m\in\mathcal{V}$ interferes with agent~$i\in\mathcal{V}$ over some of the messages (from~$j\in\mathcal{N}_i$) received by agent~$i$. In fact, an agent~$m\in\mathcal{V}$ may interfere with agent~$i$'s reception on multiple incoming links, while an interferer,~$m$, may also belong to~$\mathcal{N}_i$, i.e. the neighbors of agent~$i$. To proceed with the analysis, we first write Eq.~\eqref{cpi2} in its matrix form: Let~$B_1$ be an~${N\times N}$ matrix whose~`$im$'th element is given by~$b_i^m$. Define the network state at time~$k$: \begin{eqnarray} \mathbf{x}_{k} = \left[ \begin{array}{cccc} \left(\mathbf{x}_k^1\right)^\top&\left(\mathbf{x}_k^2\right)^\top&\ldots&\left(\mathbf{x}_k^N\right)^\top \end{array} \right]^\top. \end{eqnarray} Then, it can be verified that Eq.~\eqref{cpi3} is compactly written as \begin{eqnarray} \mathbf{x}_{k+1} &=& \left(W\otimes I_n+B_1\otimes \Gamma_1\right)\mathbf{x}_k. \label{cpm_s1} \end{eqnarray} The~$N\times N$ weight matrix,~$W$, has the sparsity pattern of the consensus graph,~$\mathcal{G}$, while the~$N\times N$ matrix,~$B_1$, has the sparsity pattern of what can be referred to as the \emph{interference graph} induced by the interferers. We have the following result. \begin{lem}\label{lem1} If~$\Gamma_1\mathbf{x}_0^i = \mathbf{0}_n,\forall i$, then~$\Gamma_1\mathbf{x}_k^i = \mathbf{0}_n,\forall i,k.$ \end{lem} \begin{proof} Note that~$\Gamma_1\mathbf{x}_k^i$ is a local operation at the~$i$th agent. This is equivalent to multiplying the network vector,~$\mathbf{x}_k$, by~$I_N\otimes\Gamma_1$. From the lemma's statement, we have~$(I_N\otimes \Gamma_1)\mathbf{x}_0=\mathbf{0}_{nN}.$ Now note that (recall Section~\ref{pre_not}) \begin{eqnarray*} \left(I_N\otimes\Gamma_1\right)\left(W\otimes I_n + B_1\otimes \Gamma_1\right) =\left(W\otimes \Gamma_1 + B_1\otimes \Gamma_1^2\right),\\ =\left(W\otimes I_n + B_1\otimes \Gamma_1\right) \left(I_N\otimes\Gamma_1\right). \end{eqnarray*} Subsequently, multiply both sides of Eq.~\eqref{cpm_s1} by~$\left(I_N\otimes\Gamma_1\right)$: \begin{eqnarray*} \left(I_N\otimes\Gamma_1\right)\mathbf{x}_{k+1} = \left(W\otimes I_n + B_1\otimes \Gamma_1\right) \left(I_N\otimes\Gamma_1\right)\mathbf{x}_k,\\ = \left(W\otimes I_n + B_1\otimes \Gamma_1\right)^{k+1} \left(I_N\otimes\Gamma_1\right)\mathbf{x}_0=\mathbf{0}_{nN}, \end{eqnarray*} and the lemma follows. \end{proof} The above lemma shows that the effect of \emph{uniform interference} can be removed from the average-consensus protocol if the data (initial conditions) lies in the null space of the interference,~$\Gamma_1$. To proceed, let us denote the interference null space (of~$\Gamma_1$) by~$\Theta_{\Gamma_1}$. Recalling that~$\oplus_i \mathbf{x}_0^i$ denotes the subspace spanned by all of the initial conditions, the applicability of Lemma~\ref{lem1} is not straightforward because: \begin{inparaenum}[(i)] \item~$\dim(\oplus_i \mathbf{x}_0^i)>\dim(\Theta_{\Gamma_1})$, in general; and, \item even when~$\dim(\oplus_i \mathbf{x}_0^i)\leq\dim(\Theta_{\Gamma_1})$, the data subspace,~$\oplus_i \mathbf{x}_0^i$, may not belong to the null space of the interference,~$\Theta_{\Gamma_1}$. \end{inparaenum} However, intuitively, a scheme can be conceived as follows: \emph{Project} the data on a low-dimensional subspace,~$\mathcal{S}$, such that~$\dim(\mathcal{S})\leq\dim(\Theta_{\Gamma_1})$; and, \emph{Align} this projected subspace,~$\mathcal{S}$, on the null-space,~$\Theta_{\Gamma_1}$, of the interference.
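Before constructing such an alignment, we note that Lemma~\ref{lem1} is easy to verify numerically; the following toy Python check (with arbitrary parameter choices of our own) illustrates it:
\begin{verbatim}
# Toy check of Lemma 1: if Gamma_1 annihilates every initial condition,
# the interference term remains zero along the network iterations.
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 5
Gamma1 = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank 1
null_basis = np.linalg.svd(Gamma1)[2][1:].T       # basis of null(Gamma_1)
x = (null_basis @ rng.standard_normal((n - 1, N))).T  # rows x_0^i in the null
W = np.full((N, N), 1.0 / N)      # any doubly stochastic weight matrix
B1 = rng.random((N, N))           # arbitrary interference pattern
for _ in range(50):               # row form of the network iteration
    x = W @ x + B1 @ (x @ Gamma1.T)
print(np.allclose(x @ Gamma1.T, 0))   # Gamma_1 x_k^i = 0 for all i, k
\end{verbatim}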
At this point, we must ensure that this alignment is reversible so that its effect can be undone in order to recover the projected data subspace,~$\mathcal{S}$. To this aim, we provide the following lemma. \begin{lem}\label{Tlem} For some~$0\leq\underline{\gamma}\leq n$, let~$\Gamma_1\in\mathbb{R}^{n\times n}$ have rank~$\overline{\gamma}=n-\underline{\gamma}$, and let another matrix,~$I_{\mathcal{S}}\in\mathbb{R}^{n\times n}$ have rank~$\underline{\gamma}$. There exists a full-rank preconditioning,~$T_1\in\mathbb{R}^{n\times n}$, such that~$\Gamma_1 T_1I_{\mathcal{S}}=\mathbf{0}_{n\times n}$. \end{lem} \begin{proof} Since~$\Gamma_1$ has rank~$\overline{\gamma}$, there exists a singular value decomposition,~$\Gamma_1=U_1S_1V_1^\top$, where the~$n\times n$ diagonal matrix~$S_1$ is such that its first~$\overline{\gamma}$ elements are the singular values of~$\Gamma_1$, and the remaining~$\underline{\gamma}$ elements are zeros. With this structure on~$S_1$, the matrix~$V_1$ can be partitioned into \begin{eqnarray} V_1 = \left[ \begin{array}{ccc} \overline{V}_1 & \underline{V}_1 \end{array} \right], \end{eqnarray} (with~$\overline{V}_1\in\mathbb{R}^{n\times \overline{\gamma}}$ and~$\underline{V}_1\in\mathbb{R}^{n\times \underline{\gamma}}$), where~$\oplus\underline{V}_1$ is the null-space of~$\Gamma_1$. Similarly,~$I_{\mathcal{S}}=U_{\mathcal{S}}S_{\mathcal{S}}V_{\mathcal{S}}^\top$ with rank~$\underline{\gamma}$, where the matrices,~$U_{\mathcal{S}}$ and~$V_{\mathcal{S}}$, are arranged such that the first~$\overline{\gamma}$ diagonals of~$S_{\mathcal{S}}$ are zeros and the remaining are the~$\underline{\gamma}$ singular values of~$I_{\mathcal{S}}$. Define \begin{eqnarray}\label{T0map} T_1 = \left[ \begin{array}{ccc} \overline{V}_1^\prime & \underline{V}_1^\prime \end{array} \right]U_{\mathcal{S}}^\top, \end{eqnarray} where~$\underline{V}_1^\prime$ is such that~$\oplus\underline{V}_1^\prime = \oplus\underline{V}_1$, and~$\overline{V}_1^\prime$ is chosen arbitrarily such that~$T_1$ is invertible. With this construction, note that~$\overline{V}_1^\top\underline{V}_1^\prime$ is a zero matrix because~$\overline{V}_1$ is orthogonal to the column-span of~$\underline{V}_1$ (by the definition of the SVD). We have \begin{eqnarray}\nonumber \Gamma_1 T_1I_{\mathcal{S}} = U_1S_1\left[ \begin{array}{cc} \overline{V}_1^\top\overline{V}_1^\prime & \mathbf{0}_{\overline{\gamma}\times\underline{\gamma}}\\ \underline{V}_1^\top\overline{V}_1^\prime & \underline{V}_1^\top\underline{V}_1^\prime \end{array} \right] S_{\mathcal{S}}V_{\mathcal{S}}^\top = U_1\mathbf{0}_{n\times n}V_{\mathcal{S}}^\top, \end{eqnarray} and the lemma follows. \end{proof} The above lemma shows that the computation of the preconditioning only requires the knowledge of the (uniform) interference null-space,~$\Theta_{\Gamma_1}\triangleq\oplus{\underline{V}_1}$. Clearly,~$T_1=V_1U_{\mathcal{S}}^\top$ is a valid preconditioning, since with this choice~$\Gamma_1T_1I_{\mathcal{S}}$ is a zero matrix; however, it is more restrictive than necessary. \emph{Information alignment}: Lemma~\ref{Tlem} further sheds light on the notion of \emph{information alignment}, i.e. the desired information sent by the transmitter can be projected and aligned in such a way that it is not distorted by the interference. Not only does the information remain unharmed, it can also be recovered at the receiver since the preconditioning,~$T_1$, is invertible.
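The construction in the proof of Lemma~\ref{Tlem} can be sketched in code, using only a null-space basis of~$\Gamma_1$ and the singular vectors~$U_{\mathcal{S}}$ (the orthonormal completion below is one arbitrary valid choice of~$\overline{V}_1^\prime$; names are ours):
\begin{verbatim}
# Sketch: preconditioning T_1 of Lemma 2 (zeros-first convention for I_S).
import numpy as np

def preconditioner(V1_null, U_S, rng=np.random.default_rng(0)):
    # V1_null: n x gamma_low orthonormal basis of null(Gamma_1)
    n, g_low = V1_null.shape
    # complete V1_null to a basis of R^n; the orthogonal complement is
    # one valid (but not the only) choice for V1bar_prime
    Q, _ = np.linalg.qr(
        np.hstack([V1_null, rng.standard_normal((n, n - g_low))]))
    V1bar_prime = Q[:, g_low:]
    return np.hstack([V1bar_prime, V1_null]) @ U_S.T   # Eq. (T0map)
\end{verbatim}
With this~$T_1$, the product~$\Gamma_1 T_1 I_{\mathcal{S}}$ is numerically zero, and~$T_1$ is invertible by construction.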
The following theorem precisely establishes the notion of information alignment with the help of Lemmas~\ref{lem1} and~\ref{Tlem}. \begin{thm}[Uniform Interference]\label{cui_th} Let~${\Theta_{\Gamma_1}}$ denote the null space of~$\Gamma_1$ and let~$\underline{\gamma}=\dim({\Theta_{\Gamma_1}})$. In the presence of uniform interference, the protocol in Eq.~\eqref{cpm_s1} recovers the average in a~$\underline{\gamma}$-dimensional subspace,~${\mathcal{S}}$, of~$\mathbb{R}^n$, via an information alignment procedure based on the preconditioning. \end{thm} \begin{proof} Without loss of generality, we assume that~${\mathcal{S}}={\oplus A}$, where~${\oplus A}$ denotes the range space (column span) of some matrix,~$A\in\mathbb{R}^{n\times n}$, such that~$\dim({\oplus A})=\underline{\gamma}$. Define $I_{\mathcal{S}} = AA^\dagger$, where~$I_{\mathcal{S}}$ is the orthogonal projection that projects any arbitrary vector in~$\mathbb{R}^n$ on~${\mathcal{S}}$. Define the projected (on~${\mathcal{S}}$) and transformed initial conditions:~$\widehat{\mathbf{x}}_0^i\triangleq T_1I_{\mathcal{S}}\mathbf{x}_0^i,\forall i\in\mathcal{V}$, where~$T_1$ is the invertible preconditioning given in Lemma~\ref{Tlem}. From Lemma~\ref{Tlem}, we have \begin{eqnarray} \Gamma_1\widehat{\mathbf{x}}_0^i=\Gamma_1T_1I_{\mathcal{S}}\mathbf{x}_0^i=\mathbf{0}_n,\qquad\forall i\in\mathcal{V}, \end{eqnarray} i.e. the alignment makes the initial conditions invisible to the interference. From Lemma~\ref{lem1}, Eq.~\eqref{cpm_s1} reduces to $ \widehat{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i}w_{ij}\widehat{\mathbf{x}}^j_k$, when the initial conditions are~$\widehat{\mathbf{x}}_0^i, \forall i\in\mathcal{V}$, which converges to the average of the \emph{transformed and projected} initial conditions,~$\widehat{\mathbf{x}}_0^i$'s, under the standard average-consensus conditions on~$\mathcal{G}$ and~$W$. Finally, the average in~${\mathcal{S}}$ is recovered by \begin{eqnarray*} \widetilde{\mathbf{x}}_\infty^i = T_1^{-1}\widehat{\mathbf{x}}_{\infty}^i = \frac{1}{N}\sum_{j=1}^NT_1^{-1}\widehat{\mathbf{x}}_0^j = \frac{1}{N}\sum_{j=1}^N I_{\mathcal{S}}\mathbf{x}_0^j,\qquad \forall i\in\mathcal{V}, \end{eqnarray*} and the theorem follows. \end{proof} The above theorem shows that in the presence of uniform interference, a careful information alignment results in recovering the average of the data (initial conditions) projected onto any arbitrary~$\underline{\gamma}$-dimensional subspace,~$\mathcal{S}$, of~$\mathbb{R}^n$. We note that a completely distributed application of Theorem~\ref{cui_th} requires only that each agent know the null-space,~$\Theta_{\Gamma_1}$, of the (uniform) interference (recall Lemma~\ref{Tlem}), and is thus completely local. In addition, all of the agents are required to agree on the desired signal subspace,~$\mathcal{S}$, where the data is to be projected. \subsection{Illustration of Theorem~\ref{cui_th}}\label{ill_ui} In essence, Theorem~\ref{cui_th} can be summarized in the following steps, illustrated with the help of Fig.~\ref{f11}: \begin{enumerate}[(i)] \item \emph{Project} the data,~$\mathbb{R}^n$, on a~$\underline{\gamma}$-dimensional subspace,~${\mathcal{S}}$, via the projection matrix,~$I_{\mathcal{S}}$. In Fig.~\ref{f11} (a), the data (initial conditions) lie arbitrarily in~$\mathbb{R}^3$; they are projected on a~$\underline{\gamma}=2$-dimensional subspace,~${\mathcal{S}}$, in Fig.~\ref{f11} (b).
Interference is given by a rank~$1$ matrix,~$\Gamma_1$; the interference subspace is shown by the black line; \item \emph{Align} the projected subspace,~${\mathcal{S}}$, on the null space,~${\Theta_{\Gamma_1}}$, of interference,~$\Gamma_1$, via the preconditioning,~$T_1$. In Fig.~\ref{f11} (c), the projected subspace,~${\mathcal{S}}$, is aligned to the null space,~${\Theta_{\Gamma_1}}$, of the interference via preconditioning with~$T_1$. Note that after the alignment, the data is orthogonal to the interference subspace (black line); \item \emph{Consensus} is now implemented on the null space of the interference, see Fig.~\ref{f11} (d). \item \emph{Recover} the average in~${\mathcal{S}}$ via~$T_1^{-1}$. Finally, the average in the null space,~${\Theta_{\Gamma_1}}$, is translated back to the signal subspace,~${\mathcal{S}}$, via~$T_1^{-1}$. We also show the true average in~$\mathbb{R}^3$ by the `$\star$', see Fig.~\ref{f11} (e). \end{enumerate} \begin{figure*} \centering \subfigure{\includegraphics[width=1.25in]{f1c.pdf}} \subfigure{\includegraphics[width=1.25in]{f2c.pdf}} \subfigure{\includegraphics[width=1.25in]{f3c.pdf}} \subfigure{\includegraphics[width=1.25in]{f4c.pdf}} \subfigure{\includegraphics[width=1.25in]{f5c.pdf}} \caption{Consensus under uniform interference: (a) Signal space,~$\mathbb{R}^3$, data shown as squares and the average as `$\star$'; (b) Projected signal subspace,~${\mathcal{S}}$, shown as circles and the average as `$\diamond$'; (c) Alignment on the null space of the interference,~$T_1I_{{\mathcal{S}}}\mathbf{x}_0^i$; (d) Consensus in the null space of the interference,~$\widehat{\mathbf{x}}_k^i$, average shown as large filled circle; and, (e) Translation back to the signal subspace,~$T_1^{-1}\widehat{\mathbf{x}}_{\infty}^i$.} \label{f11} \end{figure*} From Theorem~\ref{cui_th}, when~$\Gamma_1$ is full-rank, i.e.~$\underline{\gamma}=0,$ the iterations converge to a zero-dimensional subspace and are not meaningful. However, if the interference is low-rank, consensus under uniform interference may still remain meaningful. In fact, we can establish the following immediate corollaries. \begin{cor}[Perfect Consensus]\label{cor1} Let~$\mathbf{x}_0^i\in\mathbb{R}^n$ be such that~$\dim(\oplus_i \mathbf{x}_0^i)\leq\dim(\Theta_{\Gamma_1})$. Then consensus under uniform interference, Eq.~\eqref{cpm_s1}, recovers the true average of the initial conditions,~$\mathbf{x}_0^i$. \end{cor} \begin{cor}[Principal/Selective Consensus] Let the initial conditions,~$\mathbf{x}_0^i$, belong to the range space,~$\oplus A$, of some matrix,~$A\in\mathbb{R}^{n\times n}$. Then consensus under uniform interference, Eq.~\eqref{cpm_s1}, recovers the average in a~$\underline{\gamma}=\dim(\Theta_{\Gamma_1})$-dimensional subspace that can be chosen along any~$\underline{\gamma}$ singular values of~$A$. \end{cor} The proofs of the above two corollaries immediately follow from Theorem~\ref{cui_th}. In fact, the protocol, Eq.~\eqref{cpm_s1}, can be tailored towards the~$\underline{\gamma}$ largest singular values (principal consensus), or towards any arbitrary~$\underline{\gamma}$ singular values (selective consensus). The former is applicable to the cases when the data (initial conditions) lies primarily along a few singular values, while the latter is applicable to the cases when the initial conditions are known to have meaningful components in some singular values. We now give examples of this approach.
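As a complement to the analytical example below, the full project/align/consense/recover pipeline of Theorem~\ref{cui_th} can be exercised numerically (a synthetic toy with complete-graph weights, a random rank-$1$ interference, and the simple valid choice~$T_1=V_1U_{\mathcal{S}}^\top$; all parameter values are of our own choosing):
\begin{verbatim}
# Toy run of Theorem 1: project on S, align into null(Gamma_1), iterate,
# then undo the alignment; the interference term vanishes identically.
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 6
Gamma1 = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank 1
V1 = np.linalg.svd(Gamma1)[2].T           # singular value first, nulls last
U_S = np.linalg.qr(rng.standard_normal((n, n)))[0]
I_S = U_S @ np.diag([0., 1., 1.]) @ U_S.T # projector on a 2-d subspace S
T1 = V1 @ U_S.T                           # valid preconditioning (Lemma 2)
W = np.full((N, N), 1.0 / N)              # doubly stochastic weights
B1 = (rng.random((N, N)) < 0.5).astype(float)  # who interferes with whom

x0 = rng.standard_normal((N, n))          # initial conditions (rows)
x = (T1 @ I_S @ x0.T).T                   # project and align
for _ in range(200):                      # consensus with interference
    x = W @ x + B1 @ (x @ Gamma1.T)
recovered = np.linalg.solve(T1, x[0])     # T_1^{-1} x_inf at agent 1
print(np.allclose(recovered, (I_S @ x0.T).T.mean(axis=0)))  # True
\end{verbatim}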
\begin{ex} Consider the initial conditions,~$\mathbf{x}_0^i,\forall i$, to lie in the range space,~$\oplus A$, with the following: \begin{eqnarray}\label{eq_ill1} A = \left[ \begin{array}{cc} 1 & 1\\ 1 & 1 \end{array} \right], I_{\mathcal{S}}= \left[ \begin{array}{cc} \frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2} \end{array} \right], U_{\mathcal{S}}= \left[ \begin{array}{rr} \frac{-1}{\sqrt{2}}&\frac{-1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&\frac{-1}{\sqrt{2}} \end{array} \right]. \end{eqnarray} Clearly,~$\dim({\oplus A})=1$. Consider any rank~$1$ interference,~$\Gamma_1$: \begin{eqnarray*} \Gamma_1 = \alpha\left[ \begin{array}{cc} 1&1\\ 1&1 \end{array} \right],{\Theta_{\Gamma_1}} = \beta \left[ \begin{array}{rr} 1\\ -1 \end{array} \right],\qquad\alpha,\beta\in\mathbb{R}. \end{eqnarray*} It can be easily verified that, originally, the data subspace,~$\oplus A$, is aligned with the interference subspace,~$\oplus \Gamma_1$, and the standard consensus operation is not applicable, as no agent knows from which agents and on which links this interference is incurred (recall Assumption (a) in Section~\ref{pf}). In other words, each agent~$i$, implementing Eq.~\eqref{cpi1}, cannot ensure that~$\sum_{j\in\mathcal{N}_i}w_{ij} + \sum_{j\in\mathcal{N}_i}w_{ij}\sum_{m\in\mathcal{V}} a_{ij}^m =1$ for the above iterations to remain meaningful and convergent. Following Theorem~\ref{cui_th}, we choose $T_1 = V_1U^\top_{{\mathcal{S}}}$, which can be verified to be a diagonal matrix with $1$ and $-1$ on the diagonal, resulting in~$\Gamma_1 T_1I_{\mathcal{S}}=\mathbf{0}_{2\times 2}$. The effect of the preconditioning,~$T_1$, is to move the entire~$1$-dimensional signal subspace into the null space of the interference. Subsequently, \begin{eqnarray*} \widehat{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\widehat{\mathbf{x}}^j_k + \sum_{m\in\mathcal{V}} b_{i}^{m}\Gamma_1\widehat{\mathbf{x}}^{m}_k = \sum_{j\in\mathcal{N}_i} w_{ij}\widehat{\mathbf{x}}^j_k + \mathbf{0}_n, \end{eqnarray*} when~$\widehat{\mathbf{x}}_0^i=T_1I_{\mathcal{S}}\mathbf{x}_0^i=T_1\mathbf{x}_0^i$, and the true average is recovered via~$T_1^{-1}$ (see Corollary~\ref{cor1}). \end{ex} \subsection{A Conservative Generalization}\label{ui_gen} In Section~\ref{s_ui}, we assume that the overall interference structure, recall Fig.~\ref{fig_gl}, is such that the interference gains are uniform, i.e.~$\Gamma_{ij}^m=\Gamma_1.$ We now provide a conservative generalization of Theorem~\ref{cui_th} to the case when the interferences do not have a uniform structure. \begin{thm}\label{con_th} Define~$\Gamma\in\mathbb{R}^{n\times n}$ to be the network interference matrix such that \begin{eqnarray} \oplus_{i,j,m}{\Gamma_{ij}^m}~~\subseteq~~\oplus\Gamma,\qquad i,j,m\in\mathcal{V}. \end{eqnarray} Let~${\Theta_{\Gamma}}$ be the null space of~$\Gamma$ with~$\underline{\gamma}=\dim({\Theta_{\Gamma}})$. The protocol in Eq.~\eqref{cpi2} recovers the average in a~$\underline{\gamma}$-dimensional subspace,~${\mathcal{S}}$, of~$\mathbb{R}^n$, with an appropriate alignment. \end{thm} The proof follows directly from Lemmas~\ref{lem1},~\ref{Tlem}, and Theorem~\ref{cui_th}. Following the earlier discussion, we choose a global preconditioning,~$T\in\mathbb{R}^{n\times n}$, based on the null-space,~$\Theta_\Gamma$, of the network interference,~$\Gamma$. The solution described by Theorem~\ref{con_th} requires each interference to belong to some subspace of the network interference,~${\oplus}\Gamma$, and each agent to have the knowledge of this network interference.
However, this global knowledge is not why the approach in Theorem~\ref{con_th} is \emph{conservative}, as we explain below. Consider~${\oplus}_{i,j,m}{\Gamma_{ij}^m}\subseteq\mathbb{R}^n$, to be such that~$\dim\left({\oplus}{\Gamma_{ij}^m}\right)=1$, for each~$i,j,m\in\mathcal{V}$. In other words, each interference block in Fig.~\ref{fig_gl} is a one-dimensional line in~$\mathbb{R}^n$. Theorem~\ref{con_th} assumes a network interference matrix,~$\Gamma$, such that its range space,~${\oplus}\Gamma$, includes every local interference subspace,~${\oplus}{\Gamma_{ij}^m}$. When each local interference subspace,~${\oplus}{\Gamma_{ij}^m}$, is one-dimensional, we can easily have~$\dim({\oplus}_{i,j,m}\Gamma_{ij}^m)=n$, subsequently requiring~$\dim(\oplus\Gamma)=n$. This happens when the local interference subspaces are not aligned perfectly. Theorem~\ref{cui_th} is a very special scenario when all of the local interference subspaces are exactly the same (perfectly aligned). Extending it to Theorem~\ref{con_th}, however, shows that when the local interferences are misaligned,~${\oplus}\Gamma$ may have dimension~$n$, and consensus is only ensured on a zero-dimensional subspace, i.e. with~$I_{\mathcal{S}} = \mathbf{0}_{n\times n}$. This limitation of Theorem~\ref{con_th} raises a significant question: \emph{When all of the local interferences are misaligned such that their collection spans the entire~$\mathbb{R}^n$, can consensus recover anything meaningful?} Is it true that Theorem~\ref{con_th} is the only candidate solution? In the next sections, we show that there are indeed \emph{distributed and local} protocols that can recover meaningful information. To proceed, we add another assumption, (c), to Assumptions~(a) and (b) in Section~\ref{pf}: \begin{enumerate}[(c)] \item \emph{The interference matrices,~$\Gamma_{ij}^m$, are independent over~$j$}. Note that in our interference model, any agent~$m\in\mathcal{V}$ can interfere with~$j\rightarrow i$ communication; from Assumption (a), these interferences are unknown to either agent~$j$ or~$i$. Assumption (c) is equivalent to saying that this interference is only a function of the interferer,~$m\in\mathcal{V}$, or the receiver,~$i\in\mathcal{V}$, and is independent of the communication link,~$j\rightarrow i$. \end{enumerate} We consider the design and analysis in the following cases: \emph{Uniform Outgoing Interference}:~$\Gamma_{i}^m=\Gamma_m,\forall i,m\in{\mathcal{V}}$. In this case, each agent,~$m\in\mathcal{V}$, interferes with every other agent via the same interference matrix,~$\Gamma_m$, see Fig.~\ref{uoi_T} (top). This case is discussed in Section~\ref{s_uoi}; \emph{Uniform Incoming Interference}:~$\Gamma_{i}^m = \Gamma_i,\forall i,m\in{\mathcal{V}}.$ In this case, each agent~$i$ incurs the same interference,~$\Gamma_i$, over all the interferers,~$m\in\mathcal{V}$, see Fig.~\ref{uoi_T} (bottom). This case is discussed in Section~\ref{s_uii}. \begin{figure}[h!] \centering \subfigure{\includegraphics[width=2.5in]{intStruc3uoiT.pdf}} \hspace{1cm} \subfigure{\includegraphics[width=2.5in]{intStruc3uiiT.pdf}} \caption{(Top) Uniform Outgoing (Bottom) Uniform Incoming. The blocks, $T_i$'s and $R_i$'s, will become clear from Sections~\ref{s_uoi} and \ref{s_uii}.} \label{uoi_T} \end{figure} \section{Uniform Outgoing Interference}\label{s_uoi} This section presents results for the uniform outgoing interference, i.e. each agent,~$m\in\mathcal{V}$, interferes with every other agent in the same way.
Recall that agent~$j$ wishes to transmit~$\mathbf{x}^j$ to agent~$i$ in the presence of interference. When this interference depends only on the interferer, agent~$i$ receives \begin{eqnarray} \mathbf{x}_k^j + \sum_{m\in\mathcal{V}} a_{ij}^m\Gamma_m \mathbf{x}_k^m, \end{eqnarray} from agent~$j$ at time~$k$. We modify the transmission as~$T_m\widetilde{\mathbf{x}}_k^m,$ for all~$m\in\mathcal{V}$, for some auxiliary state variable,~$\widetilde{\mathbf{x}}_k^i\in\mathbb{R}^{n}$, to be explicitly defined shortly; agent~$i$ thus receives \begin{eqnarray} T_j\widetilde{\mathbf{x}}_k^j + \sum_{m\in\mathcal{V}} a_{ij}^m\Gamma_m T_m\widetilde{\mathbf{x}}_k^m, \end{eqnarray} from agent~$j$ at time~$k$. Consider the following protocol: \begin{eqnarray}\label{cpi_uoia} \widetilde{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} W_{ij}\left(T_j\widetilde{\mathbf{x}}_k^j + \sum_{m\in\mathcal{V}} a_{ij}^m\Gamma_m T_m\widetilde{\mathbf{x}}_k^m\right), \end{eqnarray} where~$W_{ij}\in\mathbb{R}^{n\times n}$ is now a \emph{matrix} that agent~$i$ associates with agent~$j$; recall that earlier~$W_{ij} = w_{ij}I_n$. We get \begin{eqnarray}\label{cpi_uoib} \widetilde{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} W_{ij}T_j\widetilde{\mathbf{x}}_k^j + \sum_{m\in\mathcal{V}}B_{im} \Gamma_m T_m\widetilde{\mathbf{x}}_k^m, \end{eqnarray} where~$B_{im} = \sum_{j\in\mathcal{N}_i} W_{ij}a_{ij}^m$. We have the following result. \begin{lem}\label{Tlem_uoi} For some non-negative integer,~$\underline{\gamma}\leq n$, let each outgoing interference matrix,~$\Gamma_i$, have rank~$\overline{\gamma}\triangleq n-\underline{\gamma}$. Let~$I_{\mathcal{S}}\in\mathbb{R}^{n\times n}$ be the projection matrix that projects~$\mathbb{R}^n$ on~${\mathcal{S}}$, where~$\dim({\mathcal{S}})=\underline{\gamma}$. Then, there exist~$T_i$ at each~$i\in\mathcal{V}$, and~$W_{ij}$'s for all~$(i,j)\in\mathcal{E}$ such that Eq.~\eqref{cpi_uoib} becomes \begin{eqnarray}\nonumber \widetilde{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_k^j, \end{eqnarray} at each~$i\in\mathcal{V}$, when~$\widetilde{\mathbf{x}}_0^i\in {\mathcal{S}}$. \end{lem} \begin{proof} Without loss of generality, we assume that~${\mathcal{S}}={\oplus A}$, where~${\oplus A}$ denotes the range space of some matrix,~$A\in\mathbb{R}^{n\times n}$, such that~$\dim({\oplus A})=\underline{\gamma}$. Define $I_{\mathcal{S}} = A^\dagger A$, where~$I_{\mathcal{S}}$ is the orthogonal projection that projects any arbitrary vector in~$\mathbb{R}^n$ on~${\mathcal{S}}$. Define~$\widetilde{\mathbf{x}}_0^i$ to be the projected initial conditions, i.e.~$\widetilde{\mathbf{x}}_0^i\triangleq I_{\mathcal{S}}\mathbf{x}_0^i$. Let~$T_i$ be the \emph{locally designed}, invertible preconditioning, obtained at each~$i\in\mathcal{V}$ from the null-space,~$\Theta_{\Gamma_i}$, of its outgoing interference matrix,~$\Gamma_i$, see Lemma~\ref{Tlem}. Clearly, following Lemma~\ref{Tlem}, we have $\Gamma_iT_i\widetilde{\mathbf{x}}_0^i=\mathbf{0}_n,\forall i\in\mathcal{V}$. Choose \begin{eqnarray} W_{ij}=w_{ij}T_j^{-1}. \end{eqnarray} From Eq.~\eqref{cpi_uoib}, we have \begin{eqnarray}\nonumber \widetilde{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_k^j + \sum_{m\in\mathcal{V}} B_{im} \Gamma_m T_m\widetilde{\mathbf{x}}_k^m. \end{eqnarray} We claim that when~$\widetilde{\mathbf{x}}_0^i\in\mathcal{S},\forall i\in\mathcal{V}$, then~$\widetilde{\mathbf{x}}_k^i\in {{\mathcal{S}}},\forall i\in\mathcal{V},k$, which we prove below by induction.
Consider~$k=0$, then \begin{eqnarray}\nonumber \widetilde{\mathbf{x}}_{1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_0^j + \sum_{m\in\mathcal{V}} B_{im} \Gamma_m T_m\widetilde{\mathbf{x}}_0^m = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_0^j, \end{eqnarray} which is a linear combination of vectors in~${\mathcal{S}}$ and thus lies in~${\mathcal{S}}$. Assume that~$\widetilde{\mathbf{x}}_k^i\in{\mathcal{S}},\forall i\in\mathcal{V}$, for some~$k$, leading to~$\Gamma_iT_i\widetilde{\mathbf{x}}_k^i=\mathbf{0}_n.$ Then for~$k+1$: \begin{eqnarray}\nonumber \widetilde{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_k^j + \sum_{m\in\mathcal{V}} B_{im} \Gamma_m T_m\widetilde{\mathbf{x}}_k^m = \sum_{j\in\mathcal{N}_i} w_{ij}\widetilde{\mathbf{x}}_k^j, \end{eqnarray} which is a linear combination of vectors in~${\mathcal{S}}$. \end{proof} \noindent The main result on uniform outgoing interference is as follows. \begin{thm}\label{th_uoi} Let~$\Theta_{\Gamma_i}$ denote the null space of~$\Gamma_i$, and let~$\underline{\gamma}\triangleq\min_{i\in\mathcal{V}}\{\dim(\Theta_{\Gamma_i})\}$. In the presence of uniform outgoing interference, Eq.~\eqref{cpi_uoia} recovers the average in a~$\underline{\gamma}$-dimensional subspace,~${\mathcal{S}}$, of~$\mathbb{R}^n$, when we choose~$T_i$ according to Lemma~\ref{Tlem}, and~$W_{ij}=w_{ij}T_j^{-1}$, at each~$i\in\mathcal{V}$ and~$j\in\mathcal{N}_i$. \end{thm} The proof follows from Lemma~\ref{Tlem_uoi}. In other words, the consensus protocol in the presence of uniform outgoing interference, Eq.~\eqref{cpi_uoia}, converges to \begin{eqnarray} \widetilde{\mathbf{x}}_\infty^i = \frac{1}{N}\sum_{j=1}^N \widetilde{\mathbf{x}}^j_0 = \frac{1}{N}\sum_{j=1}^NI_{{\mathcal{S}}}\mathbf{x}^j_0, \end{eqnarray} for any~$\mathbf{x}_0^i\in\mathbb{R}^n,\forall i\in\mathcal{V}.$ We note that each agent,~$i\in\mathcal{V}$, is only required to know the null-space of its outgoing interference,~$\Gamma_i$, to construct an appropriate preconditioning,~$T_i$. In addition, each agent,~$i\in\mathcal{V}$, is required to obtain the local pre-conditioners,~$T_j$'s, \emph{only} from its neighbors,~$j\in\mathcal{N}_i$; and thus, this step is also completely local. The protocol described in Theorem~\ref{th_uoi} can be understood with the help of Fig.~\ref{uoi_T} (top). Notice that a transmission from any agent,~$i\in\mathcal{V}$, passes through agent~$i$'s dedicated preconditioning matrix,~$T_i$. The network (both the direct links and the interference) sees only~$T_i\mathbf{x}_k^i$ at each~$k$. Since the interference is a function of the transmitter (uniform outgoing), all of the agents ensure that a particular signal subspace,~$\mathcal{S}$, is not corrupted by the interference channel. The significance here is that even when the interferences are misaligned such that~${\oplus}_{i\in\mathcal{V}}{\Gamma_i} = \mathbb{R}^n$, the protocol in Eq.~\eqref{cpi_uoia} recovers the average in a~$\underline{\gamma}=\min_{i\in\mathcal{V}}\{\dim(\Theta_{\Gamma_i})\}$-dimensional signal subspace. On the other hand, the null space of the entire collection,~${\oplus}_{i\in\mathcal{V}}{\Gamma_i}$, may very well be~$0$-dimensional. For example, if each~$\Gamma_i$ has rank~$1$ and the corresponding one-dimensional subspaces are misaligned, Eq.~\eqref{cpi_uoia} recovers the average in an~$(n-1)$-dimensional signal subspace. In contrast, Theorem~\ref{con_th} does not recover anything other than~$\mathbf{0}_n$.
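As a complement to Theorem~\ref{th_uoi}, the following numeric sketch (again under our own illustrative assumptions: a ring graph, Metropolis-type scalar weights, and random rank~$1$ outgoing interferences,~$\Gamma_m$, whose collection spans all of~$\mathbb{R}^3$) shows that the average in a $2$-dimensional subspace,~$\mathcal{S}$, is still recovered.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 10

# A common 2-dimensional signal subspace S.
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
perp = np.linalg.svd(Q.T)[2][2:].T
US = np.hstack([perp, Q])
IS = Q @ Q.T

# Agent m: random rank-1 outgoing interference Gamma_m and a locally
# designed preconditioner T_m = V_m US^T (Lemma Tlem).
Gam, T = [], []
for _ in range(N):
    g = rng.standard_normal((n, 1))
    G = g @ g.T
    Tm = np.linalg.svd(G)[2].T @ US.T
    assert np.allclose(G @ Tm @ IS, 0)  # agent m hides S locally
    Gam.append(G)
    T.append(Tm)

# Ring graph with scalar weights w_ij; consensus uses W_ij = w_ij T_j^{-1}.
w = np.zeros((N, N))
for i in range(N):
    w[i, (i - 1) % N] = w[i, (i + 1) % N] = w[i, i] = 1 / 3

xt = (IS @ rng.standard_normal((N, n)).T).T   # projected data
avg = xt.mean(axis=0)
for _ in range(400):
    tx = [T[m] @ xt[m] for m in range(N)]     # transmissions T_m x_m
    # every Gamma_m tx[m] = 0, so the unknown interference drops out
    xt = np.array([sum(w[i, j] * np.linalg.solve(T[j], tx[j])
                       for j in range(N) if w[i, j])
                   for i in range(N)])
print(np.allclose(xt, avg))                   # True
\end{verbatim}
Even though~$\oplus_m\Gamma_m=\mathbb{R}^3$ with probability one in this construction, the protocol recovers the average in the $2$-dimensional subspace,~$\mathcal{S}$, in agreement with the discussion above.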
\subsection{Illustration of Theorem~\ref{th_uoi}} Let the initial conditions belong to a~$2$-dimensional subspace in~$\mathbb{R}^3$ and consider~$N=10$ agents, with random initial conditions, shown as blue squares in Fig.~\ref{f11_uoi} (a). Uniform outgoing interference is chosen as one of the three~$1$-dimensional subspaces such that each interference appears at some agent in the network, see Fig.~\ref{f11_uoi} (b). Clearly, each interference is misaligned and~$\dim({\oplus}_i{\Gamma_i})=n=3.$ Hence, the protocol following Theorem~\ref{con_th} requires the signal subspace to be~$n-\dim({\oplus}_i{\Gamma_i})=0$-dimensional. However, when the agent transmissions are preconditioned using~$T_i$'s, each agent projects its transmission on the null space of its interference. Each receiver,~$i\in\mathcal{V}$, receives the misaligned data,~$T_j\mathbf{x}^j$, from each of its neighbors,~$j\in\mathcal{N}_i$, see Fig.~\ref{f11_uoi} (c). Since each~$T_j\mathbf{x}^j$ is a function of the corresponding neighbor,~$j$, the data can be translated back to~$\mathcal{S}$ via~$T_j^{-1}$, which is incorporated in the consensus weights,~$W_{ij}=w_{ij}T_j^{-1}$. \begin{figure*} \centering \subfigure[]{\includegraphics[width=1.5in]{f1_uoie.pdf}} \subfigure[]{\includegraphics[width=1.5in]{f2_uoie.pdf}} \subfigure[]{\includegraphics[width=1.5in]{f3_uoie.pdf}} \subfigure[]{\includegraphics[width=1.5in]{f4_uoie.pdf}} \caption{Consensus under uniform outgoing interference: (a) Signal space,~${\mathcal{S}} \subseteq \mathbb{R}^3$, where~$\dim({\mathcal{S}})=2$; (b) One-dimensional range spaces,~${\oplus}{\Gamma_i}$, of~$\Gamma_i$'s--the null spaces of each are~$\underline{\gamma}=2$-dimensional, shown as planes; (c) Agent transmissions aligned in the corresponding null spaces over time,~$k$; (d) Consensus in the signal subspace,~${\mathcal{S}}$, after appropriate translations, at each~$i\in\mathcal{V}$, back to the signal subspace by~$T_j^{-1}$, with~$j\in\mathcal{N}_i$.} \label{f11_uoi} \end{figure*} \section{Uniform Incoming Interference}\label{s_uii} In this section, we consider the case of uniform incoming interference, i.e. each agent~$i\in\mathcal{V}$ incurs the same interference,~$\Gamma_i$, over all of the interferers,~$m\in\mathcal{V}$. This scenario is shown in Fig.~\ref{uoi_T} (bottom). We note that Theorem~\ref{con_th} is applicable here but results in a conservative approach, as elaborated earlier. Note that this case is completely different from the uniform outgoing case (of the previous section), since preconditioning (alone) may not work, as we explain below. When an agent,~$m\in\mathcal{V}$, employs preconditioning, it cannot simultaneously account for the interference,~$\Gamma_i$, experienced at each receiver,~$i$, with which~$m$ may interfere. In the setting of Fig.~\ref{uoi_T} (bottom), if agent~$m_2\in\mathcal{V}$ preconditions using~$T_{m_2}$ to cancel the interference,~$\Gamma_i$, experienced by agent~$i$, the same preconditioning,~$T_{m_2}$, is not helpful to agent~$l$. For example, let agent~$m_2$ choose~$T_{m_2}=V_iU_{{\mathcal{S}}}^\top$ (a valid choice following Lemma~\ref{Tlem}), then, as discussed earlier,~$\Gamma_iV_iU_{{\mathcal{S}}}^\top I_{\mathcal{S}}=\mathbf{0}_{n\times n}$, and~$m_2$'s interference is not seen by agent~$i$. However, this preconditioning appears as~$\Gamma_lV_iU_{{\mathcal{S}}}^\top I_{\mathcal{S}}$ at agent~$l$, which is~$\mathbf{0}_{n\times n}$ only when~$V_l^\top V_i=I_n$. This is not true in general.
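The failure mode just described is easy to reproduce numerically. In the following sketch (the randomly drawn rank~$1$ incoming interferences are, again, our own illustrative assumption), the preconditioner built for agent~$i$ cancels~$\Gamma_i$ but not~$\Gamma_l$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 3
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))  # signal subspace S
perp = np.linalg.svd(Q.T)[2][2:].T
US = np.hstack([perp, Q])
IS = Q @ Q.T

def rank1():
    g = rng.standard_normal((n, 1))
    return g @ g.T

Gi, Gl = rank1(), rank1()   # incoming interference at agents i and l
Vi = np.linalg.svd(Gi)[2].T
Tm2 = Vi @ US.T             # preconditioner tailored to agent i

print(np.allclose(Gi @ Tm2 @ IS, 0))  # True : canceled at agent i
print(np.allclose(Gl @ Tm2 @ IS, 0))  # False: survives at agent l
\end{verbatim}
This is the reason the receiver itself, rather than the transmitter, must act, which leads to the post-conditioning developed next.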
We now explicitly address the uniform incoming interference scenario. In this case, Eq.~\eqref{cpi2} takes the following form: \begin{eqnarray}\label{cpi_uii_s} \mathbf{x}_{k+1}^i = \sum_{j\in\mathcal{N}_i} W_{ij}\left(\mathbf{x}_k^j + \Gamma_{i}\sum_{m\in\mathcal{V}} a_{ij}^m \mathbf{x}_k^m\right), \end{eqnarray} for~$k\geq0$ and~$\mathbf{x}_0^i\in\mathbb{R}^n$, where, as in Section~\ref{s_uoi}, we use a matrix,~$W_{ij}\in\mathbb{R}^{n\times n}$, to retain some design flexibility. The only possible way to cancel the unwanted interference now is via what can be referred to as \emph{post-conditioning}. Each agent,~$i\in\mathcal{V}$, chooses a post-conditioner,~$R_i\in\mathbb{R}^{n\times n}$. As before, we assume~$I_\mathcal{S} = U_\mathcal{S}S_\mathcal{S}V_\mathcal{S}^\top$ to be the projection matrix for some subspace,~$\mathcal{S}\subseteq\mathbb{R}^n$, and modify the transmission as~$S_\mathcal{S}\widehat{\mathbf{x}}_k^m$, for some auxiliary state variable,~$\widehat{\mathbf{x}}_k^i\in\mathbb{R}^{n}$, to be explicitly defined shortly. The modified protocol is \begin{eqnarray}\label{cpi_uii} \widehat{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} W_{ij}R_i\left(S_\mathcal{S}\widehat{\mathbf{x}}_k^j + \Gamma_{i}\sum_{m\in\mathcal{V}} a_{ij}^m S_\mathcal{S}\widehat{\mathbf{x}}_k^m\right). \end{eqnarray} The goal is to design an~$R_i$ such that~$R_i\Gamma_i=\mathbf{0}_{n\times n}$. Following the earlier approaches, we assume that $\mbox{rank}(\Gamma_i)=\overline{\gamma},\forall i\in\mathcal{V}$, and $\mbox{rank}(I_{\mathcal{S}})=\underline{\gamma}$, such that~$\overline{\gamma} + \underline{\gamma} = n$, with SVDs,~$\Gamma_i=U_iS_iV_i^\top$ and~$I_{\mathcal{S}}=U_{{{\mathcal{S}}}}S_{{{\mathcal{S}}}}V_{{{\mathcal{S}}}}^\top$, where the singular value matrices are arranged as \begin{eqnarray}\label{SiSS} S_i=\left[ \begin{array}{ccc} S_i^{1:\overline{\gamma}}\\ &\mathbf{0}_{\underline{\gamma}\times \underline{\gamma}} \end{array} \right],\qquad S_{\mathcal{S}}=\left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &I_{\underline{\gamma}} \end{array} \right]. \end{eqnarray} The next lemma characterizes the post-conditioner,~$R_i$. \begin{lem}\label{Rlem} Let~$\Gamma_i=U_iS_iV_i^\top$ and~$S_\mathcal{S}$ have the structure of Eq.~\eqref{SiSS}. Given the null-space of~$\Gamma_i^\top$, there exists a rank~$\underline{\gamma}$ post-conditioner,~$R_i$, such that~$R_i\Gamma_i=\mathbf{0}_{n\times n}$. \end{lem} \begin{proof} We assume that~$U_i$ is partitioned as~$\left[\begin{array}{ccc} \overline{U}_i~|~\underline{U}_i \end{array} \right]$, where~$\overline{U}_i\in\mathbb{R}^{n\times\overline{\gamma}}$ and~$\underline{U}_i\in\mathbb{R}^{n\times\underline{\gamma}}$. Clearly,~$\underline{U}_i$ is the null-space of~$\Gamma_i^\top$. Define \begin{eqnarray} R_i = S_\mathcal{S} \left[\begin{array}{ccc} \overline{U}_i^\prime~|~\underline{U}_i^\prime \end{array} \right]^\top, \end{eqnarray} where~$\underline{U}_i^\prime$ is such that~$\oplus\underline{U}_i^\prime=\oplus\underline{U}_i$, and~$\overline{U}_i^\prime$ is arbitrary. By definition, we have~$\underline{U}_i^\top\overline{U}_i=\mathbf{0}_{\underline{\gamma}\times\overline{\gamma}}$; hence, by construction,~${\underline{U}_i^\prime}^\top\overline{U}_i=\mathbf{0}_{\underline{\gamma}\times\overline{\gamma}}$.
It can be verified that the post-conditioning results in \begin{eqnarray*} R_i\Gamma_i =\left[\begin{array}{ccc} \mathbf{0}&\mathbf{0}\\ I_{\underline{\gamma}}{\underline{U}_i^\prime}^\top\overline{U}_iS_i^{1:\overline{\gamma}}&\mathbf{0} \end{array} \right]V_i^\top, \end{eqnarray*} and, since~${\underline{U}_i^\prime}^\top\overline{U}_i=\mathbf{0}_{\underline{\gamma}\times\overline{\gamma}}$, the right-hand side vanishes and the lemma follows. Note that~$R_i=S_\mathcal{S}U_i^\top$ is a valid choice but it is not necessary. \end{proof} With the help of Lemma~\ref{Rlem}, Eq.~\eqref{cpi_uii} is now given by \begin{eqnarray}\label{cpi_uii3} \widehat{\mathbf{x}}_{k+1}^i &=& \sum_{j\in\mathcal{N}_i} W_{ij}S_{\mathcal{S}} \left[\begin{array}{ccc} \overline{U}_i^\prime~|~\underline{U}_i^\prime \end{array} \right]^\top S_\mathcal{S}\widehat{\mathbf{x}}_k^j. \end{eqnarray} Recall that~$\underline{U}_i^\prime$ is an~$n\times\underline{\gamma}$ matrix whose column-span is the same as the column-span of~$\underline{U}_i$, and the column-span of~$\underline{U}_i$ is the null-space of~$\Gamma_i^\top$. We now denote the lower~$\underline{\gamma}\times\underline{\gamma}$ sub-matrix of~$\underline{U}_i^\prime$ by~$\widehat{U}_i$. In order to simplify the above iterations, we note that \begin{eqnarray}\label{uii_P1} S_{\mathcal{S}} \left[\begin{array}{ccc} \overline{U}_i^\prime~|~\underline{U}_i^\prime \end{array} \right]^\top S_{\mathcal{S}} &=&\left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &\widehat{U}_i^\top \end{array} \right], \end{eqnarray} and~$\dim(\underline{U}_i^\prime)=\dim(\underline{U}_i)=n-\overline{\gamma}=\underline{\gamma}$. It is straightforward to show that~$\widehat{U}_i^\top$ is always invertible. Based on this discussion, the following lemma establishes the convergence of Eq.~\eqref{cpi_uii}. \begin{lem}\label{lem_ui} Let~$\Gamma_i=U_iS_iV_i^\top, \forall i\in\mathcal{V},$ and some projection matrix,~$I_{\mathcal{S}}=U_{{{\mathcal{S}}}}S_{{{\mathcal{S}}}}V_{{{\mathcal{S}}}}^\top$, have ranks~$\overline{\gamma}$, and~$\underline{\gamma}\triangleq n-\overline{\gamma}$, respectively~$(0\leq\underline{\gamma}\leq n)$, such that~$S_i$ and~$S_{\mathcal{S}}$ are arranged as in Eq.~\eqref{SiSS}. When~$R_i$ is chosen according to Lemma~\ref{Rlem}, and for each~$i\in\mathcal{V}$,~$W_{ij}$ is chosen as \begin{eqnarray}\label{uii_W} W_{ij} = w_{ij}\left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &\left(\widehat{U}_i^\top\right)^{-1} \end{array} \right], \end{eqnarray} the protocol in Eq.~\eqref{cpi_uii} recovers the average of the last~$\underline{\gamma}$ components of the initial conditions,~$\widehat{\mathbf{x}}_0^i$. \end{lem} \begin{proof} We note that under the given choice for~$R_i$'s, the interference term is~$\mathbf{0}_n$, and Eq.~\eqref{cpi_uii} reduces to Eq.~\eqref{cpi_uii3}. Now we use Eqs.~\eqref{uii_P1} and~\eqref{uii_W} in Eq.~\eqref{cpi_uii3} to obtain: \begin{eqnarray*} \widehat{\mathbf{x}}_{k+1}^i = \sum_{j\in\mathcal{N}_i} W_{ij}S_{{{\mathcal{S}}}}U_i^\top S_{{{\mathcal{S}}}}\widehat{\mathbf{x}}^j_k = \sum_{j\in\mathcal{N}_i} w_{ij} \left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &I_{\underline{\gamma}} \end{array} \right] \widehat{\mathbf{x}}^j_k, \end{eqnarray*} which in the limit as~$k\rightarrow\infty$ converges to \begin{eqnarray} \widehat{\mathbf{x}}_\infty^i &=& \frac{1}{N}\sum_{j=1}^N \left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &I_{\underline{\gamma}} \end{array} \right]\widehat{\mathbf{x}}_0^j,\qquad\forall i\in\mathcal{V}.
\end{eqnarray} That~$\widehat{U}_i^\top$ is invertible holds because it is a principal submatrix of the invertible matrix,~$U_i^\top$. \end{proof} \noindent Following is the main result of this section. \begin{thm}\label{th_cui} Let the~$\Gamma_i$'s,~$R_i$'s, and~$W_{ij}$'s be chosen according to Lemma~\ref{lem_ui}. The protocol in Eq.~\eqref{cpi_uii} under uniform incoming interference recovers the average in a~$\underline{\gamma}$-dimensional subspace,~$\mathcal{S}$, of~$\mathbb{R}^n$. \end{thm} \begin{proof} Without loss of generality, assume that~$\mathcal{S}$ has a projection matrix,~$I_{\mathcal{S}}$, with SVD as defined above. Let~$\widehat{\mathbf{x}}_0^i=V_{\mathcal{S}}^\top\mathbf{x}_0^i$ and define~$\widetilde{\mathbf{x}}_k^i=U_{\mathcal{S}}\widehat{\mathbf{x}}_k^i, \forall i\in\mathcal{V}$. Then, from Lemma~\ref{lem_ui} \begin{eqnarray*} \widetilde{\mathbf{x}}_\infty^i = U_{\mathcal{S}}\frac{1}{N}\sum_{j=1}^N \left[ \begin{array}{ccc} \mathbf{0}_{\overline{\gamma}\times \overline{\gamma}}\\ &I_{\underline{\gamma}} \end{array} \right]V_{\mathcal{S}}^\top\mathbf{x}_0^j=\frac{1}{N}\sum_{j=1}^N U_{\mathcal{S}}S_{\mathcal{S}}V_{\mathcal{S}}^\top\mathbf{x}_0^j, \end{eqnarray*} $\forall i\in\mathcal{V}$, and the theorem follows. \end{proof} Some remarks are in order to explain the mechanics of Theorem~\ref{th_cui}. Let $I_\mathcal{S}=U_\mathcal{S}S_\mathcal{S}V_\mathcal{S}^\top$ with $ V_\mathcal{S} = \left[ \begin{array}{ccc} \overline{V}_\mathcal{S}&|&\underline{V}_\mathcal{S} \end{array}\right],$ and $ U_\mathcal{S} = \left[ \begin{array}{ccc} \overline{U}_\mathcal{S}&|&\underline{U}_\mathcal{S} \end{array}\right],$ where~$\overline{V}_\mathcal{S}$ is the null space of~$I_\mathcal{S}$. (i) When any agent~$i\in\mathcal{V}$ receives~$S_\mathcal{S}\widehat{\mathbf{x}}_0^m$ as an interference, it is canceled via the post-conditioning by~$R_i$, regardless of the transmission,~$S_\mathcal{S}\widehat{\mathbf{x}}_0^m$: \begin{eqnarray*} R_i~\Gamma_i~S_\mathcal{S}\widehat{\mathbf{x}}_0^m = S_{\mathcal{S}}S_i~~V_i^\top~~S_\mathcal{S}\widehat{\mathbf{x}}_0^m= \mathbf{0}_n, \end{eqnarray*} because of the structure in the~$S_\mathcal{S}$ and~$S_i$ from Eq.~\eqref{SiSS}. (ii) It is more interesting to observe the effect on the intended transmission,~$j\rightarrow i$, after the post-conditioning and multiplication with~$W_{ij}$. It is helpful to note that~$S_\mathcal{S}=S_\mathcal{S}^\dagger$, and consider the transmission as~$S_\mathcal{S}^\dagger\widehat{\mathbf{x}}_0^j$ instead of~$S_\mathcal{S}\widehat{\mathbf{x}}_0^j$: \begin{eqnarray*} W_{ij}~R_i~~S_\mathcal{S}^\dagger\widehat{\mathbf{x}}_0^j &=& W_{ij}~~\underbrace{S_{\mathcal{S}}U_i^\top}_{\scriptsize\mbox{Rx}}~~\underbrace{S_\mathcal{S}^\dagger\widehat{\mathbf{x}}_0^j}_{\scriptsize\mbox{Tx}}. \end{eqnarray*} The operation,~$S_\mathcal{S}U_i^\top$, by the receiver, Rx, is vital to cancel the interference as shown in the previous step. However, this measure by the receiver also `distorts' the intended transmission. What agent~$i$ receives is now multiplied by a low-rank matrix,~$S_\mathcal{S}^\dagger$, in general. Consider for a moment that agent~$j$ were to send~$\widehat{\mathbf{x}}_0^j$ and agent~$i$ obtains~$S_\mathcal{S}U_i^\top\widehat{\mathbf{x}}_0^j$, after the interference canceling operation. How can agent~$i$ choose an appropriate~$W_{ij}$ to undo this post-conditioning? Such a procedure is not possible except in trivial scenarios, e.g., when the interference is a diagonal matrix and~$U_i=I_n$.
\emph{However, the transmitter may preemptively undo the distortion eventually incurred by the receiver's interference canceling operation}. This is precisely what is achieved by sending~$S_\mathcal{S}^\dagger\widehat{\mathbf{x}}_0^j$. (iii) As we discussed, a preemptive measure, sending~$S_\mathcal{S}^\dagger\widehat{\mathbf{x}}_0^j$, by the transmitter is vital so that the distortion bound to be added at the receiver is reversed. This reorientation, however, can be harmful, e.g.,~$\widehat{\mathbf{x}}_0^j$ may only contain meaningful (non-zero) information in the first~$\overline{\gamma}$ components and the multiplication by~$S_\mathcal{S}$ destroys this information. To avoid this issue, we choose the initial condition at each agent as~$\widehat{\mathbf{x}}_0^i=V_{\mathcal{S}}^\top\mathbf{x}_0^i$; the first transmission at any agent~$i$ is thus: \begin{eqnarray*} S_\mathcal{S}\widehat{\mathbf{x}}_0^i &=&S_\mathcal{S}V_\mathcal{S}^\top\mathbf{x}_0^i = \left[ \begin{array}{cc} \mathbf{0}_{\overline{\gamma}}\\ \underline{V}_\mathcal{S}^\top\mathbf{x}_0^i \end{array} \right], \end{eqnarray*} whose effect is to make any arbitrary initial condition \emph{orthogonal to the null-space} of the projection on the desired signal subspace,~$\mathcal{S}$. Since the signal subspace,~$\mathcal{S}$, is~$\underline{\gamma}$-dimensional, retaining only the last~$\underline{\gamma}$ components, after the transformation by~$V_\mathcal{S}^\top$, suffices. (iv) We choose~$W_{ij}$ according to Eq.~\eqref{uii_W} and obtain \begin{eqnarray*} \widehat{\mathbf{x}}_1^i = \sum_{j\in\mathcal{N}_i}W_{ij}R_iS_\mathcal{S}\widehat{\mathbf{x}}_0^j = S_\mathcal{S}V_\mathcal{S}^\top\sum_{j\in\mathcal{N}_i}w_{ij}\mathbf{x}_0^j = S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_{1}^i, \end{eqnarray*} $\forall i\in\mathcal{V}$, where~$\mathbf{x}_{k}^i$ are the interference-free consensus iterates. Now let us look at~$\widehat{\mathbf{x}}_2^i$, ignoring the interference terms as they are~$\mathbf{0}_n$, regardless of the transmission: \begin{eqnarray*} \widehat{\mathbf{x}}_{2}^i = \sum_{j\in\mathcal{N}_i} W_{ij}R_iS_\mathcal{S}~~S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_1^j =S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_2^i, \end{eqnarray*} by the same procedure that we followed to obtain~$\widehat{\mathbf{x}}_1^i$. In fact, the process continues and we get $\widehat{\mathbf{x}}_{k+1}^i = S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_{k+1}^i,$ or $\widehat{\mathbf{x}}_\infty^i = S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_\infty^i$, and the average in~$\mathcal{S}$ is obtained by $\widetilde{\mathbf{x}}_\infty^i = U_\mathcal{S}\widehat{\mathbf{x}}_\infty^i = U_\mathcal{S}S_\mathcal{S}V_\mathcal{S}^\top \mathbf{x}_\infty^i = I_\mathcal{S}\mathbf{x}_\infty^i.$ \subsection{Illustration of Theorem~\ref{th_cui}} We now provide a graphical illustration of Theorem~\ref{th_cui}. The network comprises~$N=10$ agents, each with a randomly chosen initial condition on a~$2$-dimensional subspace,~$\mathcal{S}$, of~$\mathbb{R}^3$, shown in Fig.~\ref{f11_uii} (a). Incoming interference is chosen randomly as a one-dimensional subspace at each agent, shown as grey lines in Fig.~\ref{f11_uii} (b). It can be easily verified that the span of all of the interferences,~$\oplus_{i\in\mathcal{V}}\Gamma_i$, is the entire~$\mathbb{R}^3$. The initial conditions are now transformed with~$V_\mathcal{S}^\top$ so that the transmission,~$S_\mathcal{S}\widehat{\mathbf{x}}_k^i$, does not destroy the signal subspace,~$\mathcal{S}$. This transformation is shown in Fig.~\ref{f11_uii} (c).
Consensus iterations are implemented in this transformed subspace,~$\widehat{\mathbf{x}}_k^i$, Fig.~\ref{f11_uii} (d), and finally, the iterations,~$\widetilde{\mathbf{x}}_k^i$, in the signal subspace,~$\mathcal{S}$, are obtained via multiplication by~$U_\mathcal{S}$. \begin{figure*} \centering \subfigure{\includegraphics[width=1.25in]{f0_uiib.pdf}} \subfigure{\includegraphics[width=1.25in]{f1_uiib.pdf}} \subfigure{\includegraphics[width=1.25in]{f2_uiib.pdf}} \subfigure{\includegraphics[width=1.25in]{f3_uiib.pdf}} \subfigure{\includegraphics[width=1.25in]{f4_uiib.pdf}} \caption{Uniform Incoming Interference: (a) Signal subspace,~$\mathcal{S}\subseteq\mathbb{R}^3$, with~$\dim(\mathcal{S})=2$. The initial conditions are shown as blue squares and the true average is shown as a white diamond; (b) One-dimensional interference subspaces at each agent,~$i\in\mathcal{V}$; (c) Auxiliary state variables,~$\widehat{\mathbf{x}}_0^i=V_\mathcal{S}^\top\mathbf{x}_0^i$, shown as red circles; (d) Consensus iterates in the auxiliary states and the average in the auxiliary initial conditions; and, (e) Recovery via~$\widetilde{\mathbf{x}}_k^i=U_\mathcal{S}\widehat{\mathbf{x}}_k^i$.} \label{f11_uii} \end{figure*} \section{Discussion} \label{s_discuss} We now recapitulate the development in this paper. {\bf Assumptions:} The exposition is based on three assumptions, (a) and (b) in Section~\ref{pf}, and (c) in Section~\ref{ui_gen}. Assumption (a), in general, ensures that the setup remains practically relevant, and further makes the averaging problem non-trivial. Assumption (b) is primarily for the sake of simplicity; the strategies described in this paper are applicable to the time-varying case. What is required is that when any incoming (or outgoing) interference subspace changes with time, this change is known to the receiver (or the interferer) so that appropriate post- (or pre-) conditioning is implemented. Finally, Assumption (c) imposes a concrete structure on the proposed interference model. In fact, one can easily frame the incoming or outgoing interference as a special case of the general framework. However, explicitly noting it establishes a clear distinction among the different structures. {\bf Conservative Paradigm:} We consider a special case in which each interference block in the network, see Fig.~\ref{fig_gl}, is identical. This approach, though rather restrictive, sheds light on the information alignment notion that recurs throughout the development, i.e. hide the information in the null space of the interference. When the local interferences,~$\Gamma_{ij}^m$, are not identical, we provide a conservative solution that utilizes an interference `blanket' (that covers each local interference subspace) to implement the information alignment. However, as we discussed, this interference blanket soon loses relevance, as it may need to be~$n$-dimensional to provide an appropriate cover. When this is true, the only reliable data hiding is via a zero-dimensional hole (origin) and no meaningful information is transmitted. This conservative approach is improved in the cases of uniform outgoing and incoming interference models. {\bf Uniform Outgoing Interference:} The fundamental concept in the uniform outgoing setting is to hide the desired signal in the null-space of the interferences,~$\Gamma_m$'s. This alignment is possible at each transmitter as the eventual interference is only a function of the transmitter.
{\bf Uniform Incoming Interference:} The basic idea here is to hide the desired signal in the null-space of the transpose of incoming interferences,~$\Gamma_i^\top$'s. This alignment is possible at each receiver as the eventual interference is only a function of the receiver. It can be easily verified that the resulting procedure is non-trivial. {\bf Null-spaces}: The uniform incoming and outgoing interference cases comprise the two major results in this paper. It is noteworthy that both of these results only assume the knowledge of the corresponding interference null-spaces; the basis vectors of these null spaces can be arbitrary, and knowledge of the interference singular values is not required. Note also that in a time-varying scenario where the basis vectors of the corresponding null-spaces change while their span remains the same, no adjustment over time is required. {\bf Uniform Link Interference}: One may also consider the case when $\Gamma_{ij}^m=\Gamma_{ij}$, see Eq.~\eqref{cpi2}, i.e., each interference gain is only a function of the communication link, $j\rightarrow i$. Subsequently, when each receiving agent, $i\in\mathcal{V}$, knows the null space of $\Gamma_{ij}^\top$, a protocol similar to the one for uniform incoming interference can be developed. {\bf Performance}: To characterize the steady-state error, denoted by~$\mathbf{e}_\infty^i$ at an agent~$i$, define~$\mathbf{e}_\infty^i = \mathbf{x}_\infty^i - I_\mathcal{S}\mathbf{x}_\infty^i$, where~$\mathbf{x}_\infty^i$ is the true average, Eq.~\eqref{pavg}. Clearly, \begin{eqnarray}\nonumber \left(I_\mathcal{S}\mathbf{x}_\infty^i\right)^\top \mathbf{e}_\infty^i = (\mathbf{x}_\infty^i)^\top I_\mathcal{S}^\top\left(I_n - I_\mathcal{S}\right)\mathbf{x}_\infty^i = 0,\qquad \forall i\in\mathcal{V}, \end{eqnarray} i.e. the error is orthogonal to the estimate, or the average obtained is the best estimate in~$\mathcal{S}\subseteq\mathbb{R}^n$ of the perfect average. \section{Conclusions} \label{s_conclude} In this paper, we consider three particular cases of a general interference structure over a network performing distributed (vector) average-consensus. First, we consider the case of uniform interference when the interference subspace is uniform across all agents. Second, we consider the case when this interference subspace depends only on the interferer (transmitter), referred to as \emph{uniform outgoing interference}. Third, we consider the case when the interference subspace depends only on the receiver, referred to as \emph{uniform incoming interference}. For all of these cases, we show that when the nodes are aware of the complementary subspaces (null spaces) of the corresponding interference, consensus is possible in a low-dimensional subspace whose dimension is complementary to the largest interference subspace (across all of the agents). For all of these cases, we derive a completely local \emph{information alignment} strategy, followed by local consensus iterations to ensure perfect subspace consensus. We further provide the conditions under which this subspace consensus recovers the exact average. The analytical results are illustrated graphically to describe the setup and the information alignment scheme. \bibliographystyle{IEEEbib}
{ "attr-fineweb-edu": 1.604492, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUa0LxaL3Sug5GJ0ar
\section{Multitime hybrid differential game with \\curvilinear integral functional} Let $t=(t^\alpha)\in \Omega_{0T}\subset \mathbb{R}^m_+$, $\alpha =1,...,m,$ be an evolution multi-parameter, called multitime. Consider an arbitrary $C^1$ curve $\Gamma_{0T}$ joining the diagonal opposite points $0=(0,\ldots,0)$ and $T=(T^1,\ldots,T^m)$ in the $m$-dimensional parallelepiped $\Omega_{0T}=[0,T]$ (multitime interval) in $\mathbb{R}^m_+$ endowed with the product order, a $C^2$ state vector $x:\Omega_{0T}\rightarrow \mathbb{R}^n, x(t)=(x^i(t)),$ $i=1,...,n$, a $C^1$ control vector $u(t)=(u_\alpha(t)):\Omega_{0T}\rightarrow U\subset \mathbb{R}^{qm},$ for the first team of $m$ players (who want to maximize), a $C^1$ control vector $v(t)=(v_\alpha(t)):\Omega_{0T}\rightarrow V\subset \mathbb{R}^{qm},$ for the second team of $m$ players (who want to minimize), $u_\alpha(\cdot)=\Phi(\cdot,\eta_1(\cdot)), v_\alpha(\cdot)=\Psi(\cdot,\eta_2(\cdot)),$ a running cost $L_\alpha(t,x(t),u_\alpha(t),v_\alpha(t))dt^\alpha$ as a nonautonomous closed Lagrangian $1$-form (satisfying $D_\beta L_\alpha=D_\alpha L_\beta$), a terminal cost $g(x(T))$ and the $C^1$ vector fields $X_\alpha=(X_\alpha^i)$ satisfying the complete integrability conditions (CIC) $D_{\beta}X_{\alpha}=D_{\alpha}X_{\beta}$ (m-flow type problem). In our paper, a {\it multitime hybrid differential game} is given by a multitime dynamics, as a PDE system controlled by two controllers (the first team, the second team) and a target including a curvilinear integral functional. The approach we follow below is the one in the paper \cite{[2]}, but we must be more creative since our theory is a multitemporal one (see also \cite{[8]}-\cite{[21]}). More precisely, we introduce and analyze a multitime differential game whose Bolza payoff is the sum of a path-independent curvilinear integral (mechanical work) and a function of the final event (the terminal cost, penalty term), and whose evolution PDE is an m-flow: {\it Find $$\min_{v(\cdot)\in V}\max_{u(\cdot)\in U} J(u(\cdot),v(\cdot))=\int_{\Gamma_{0T}} L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))ds^\alpha+g(x(T)),$$ subject to the Cauchy problem $$\frac{\partial x^i}{\partial s^\alpha}(s)=X^i_\alpha(s,x(s),u_\alpha(s),v_\alpha(s)),$$ $$x(0)=x_0, \,\,s\in \Omega_{0T}\subset \mathbb{R}_+^m, \,\,x\in \mathbb{R}^n.$$} Let $D_{\alpha}$ be the total derivative operator and $[X_{\alpha},X_{\beta}]$ be the bracket of vector fields. Suppose the piecewise complete integrability conditions (CIC) $$ \left( \frac{\partial X_{\alpha}}{\partial u^a_\lambda}\delta^{\gamma}_{\beta} - \frac{\partial X_{\beta}}{\partial u^a_\lambda}\delta^{\gamma}_{\alpha}\right)\frac{\partial u^a_\lambda}{\partial s^{\gamma}}+\left( \frac{\partial X_{\alpha}}{\partial v^b_\lambda}\delta^{\gamma}_{\beta} - \frac{\partial X_{\beta}}{\partial v^b_\lambda}\delta^{\gamma}_{\alpha}\right)\frac{\partial v^b_\lambda}{\partial s^{\gamma}}=\left[ X_{\alpha},X_{\beta}\right] + \frac{\partial X_{\beta}}{\partial s^{\alpha}} - \frac{\partial X_{\alpha}}{\partial s^{\beta}},$$ where $a, b =1,...,q$, are satisfied throughout. To simplify, suppose that the curve $\Gamma_{0T}$ is an increasing curve in the multitime interval $\Omega_{0T}$.
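To fix the meaning of the payoff, suppose (only for illustration) that $\Gamma_{0T}$ is the diagonal of the multitime interval, parametrized by $s(\tau)=(\tau T^1,\ldots,\tau T^m)$, $\tau\in[0,1]$. Then the curvilinear integral reduces to an ordinary one, $$\int_{\Gamma_{0T}} L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))ds^\alpha=\int_0^1 L_\alpha (s(\tau),x(s(\tau)),u_\alpha(s(\tau)),v_\alpha(s(\tau)))\,T^\alpha\, d\tau,$$ with summation over the repeated index $\alpha$. The closedness condition $D_\beta L_\alpha=D_\alpha L_\beta$ guarantees that the value of the functional does not depend on the particular increasing curve joining $0$ and $T$.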
If we vary the starting multitime and the initial point, then we obtain a larger family of similar multitime problems containing the functional $$J_{t,x}(u(\cdot),v(\cdot))=\int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))ds^\alpha+g(x(T)),$$ and the evolution constraint $$\frac{\partial x^i}{\partial s^\alpha}(s)=X^i_\alpha(s,x(s),u_\alpha(s),v_\alpha(s)),$$ $$x(t)=x, \,\,s\in \Omega_{tT}\subset \mathbb{R}_+^m,\,\, x\in \mathbb{R}^n.$$ We assume that each vector field $X_\alpha:\Omega_{0T}\times \mathbb{R}^n\times U\times V\rightarrow \mathbb{R}^n$ is uniformly continuous, satisfying $$\left\{\begin{array}{ll} \Vert X_\alpha(t,x,u_\alpha,v_\alpha)\Vert\leqslant A_\alpha\\ \Vert X_\alpha(t,x,u_\alpha,v_\alpha) - X_\alpha(t,\hat{x},u_\alpha,v_\alpha)\Vert\leqslant A_\alpha\Vert x-\hat{x}\Vert, \end{array}\right.$$ for some constant 1-form $A=(A_\alpha)$ and all $t\in \Omega_{0T}, \,\,x, \hat{x}\in \mathbb{R}^n, u\in U,v\in V.$ Suppose the functions $$g:\mathbb{R}^n\rightarrow \mathbb{R}, \quad L_\alpha:\Omega_{0T}\times \mathbb{R}^n\times U\times V\rightarrow \mathbb{R}$$ are uniformly continuous and satisfy the boundedness conditions $$\left\{\begin{array}{ll} \vert g(x)\vert\leqslant B\\ \vert g(x)-g(\hat{x})\vert\leqslant B\Vert x-\hat{x}\Vert, \end{array}\right.$$ $$\left\{\begin{array}{ll} \vert L_\alpha(t,x,u_\alpha,v_\alpha)\vert\leqslant C_\alpha\\ \vert L_\alpha(t,x,u_\alpha,v_\alpha)-L_\alpha(t,\hat{x},u_\alpha,v_\alpha)\vert\leqslant C_\alpha\Vert x-\hat{x}\Vert, \end{array}\right.$$ for constant $1$-form $C=(C_\alpha)$ and all $t\in \Omega_{0T},\,\, x, \hat{x}\in \mathbb{R}^n,\,\, u\in U,\,\,v\in V.$ \begin{definition} (i) The set $$\mathcal {U}(t)=\left\lbrace u_\alpha(\cdot):\mathbb{R}^m_+\rightarrow U \vert \ u_\alpha(\cdot) \mathrm{ \ is \ measurable \ and \ satisfies \ CIC}\right\rbrace $$ is called \textbf{the control set for the first team of players}. (ii) The set $$\mathcal {V}(t)=\left\lbrace v_\alpha(\cdot):\mathbb{R}^m_+\rightarrow V \vert \ v_\alpha(\cdot) \mathrm{ \ is \ measurable \ and \ satisfies \ CIC}\right\rbrace $$ is called \textbf{the control set for the second team of players}. \end{definition} \begin{definition} (i) A map $\Phi:\mathcal {V}(t)\rightarrow \mathcal {U}(t)$ is called \textbf{a strategy for the first team of players}, if the equality $v(\tau)=\widehat{v}(\tau), t\leq \tau \leq s \leq T$ implies $\Phi[v](\tau)=\Phi[\widehat{v}](\tau).$ (ii) A map $\Psi:\mathcal {U}(t)\rightarrow \mathcal {V}(t)$ is called \textbf{a strategy for the second team of players}, if the equality $u(\tau)=\widehat{u}(\tau), t\leq \tau \leq s \leq T$ implies $\Psi[u](\tau)=\Psi[\widehat{u}](\tau).$ \end{definition} Let $ \mathcal{A}(t)$ be \textbf{the set of strategies for the first team of players} and $\mathcal{B}(t)$ be \textbf{the set of strategies for the second team of players}. \begin{definition} (i) The function $$m(t,x)=\min_{\Psi\in \mathcal{B}(t)} \max_{u(\cdot)\in \mathcal{U}(t)} J_{t,x}( u(\cdot),\Psi[u](\cdot))$$ is called \textbf{the multitime lower value function}. (ii) The function $$M(t,x)=\max_{\Phi\in \mathcal{A}(t)} \min_{v(\cdot)\in \mathcal{V}(t)} J_{t,x}(\Phi[v](\cdot),v(\cdot)) $$ is called \textbf{the multitime upper value function}. \end{definition} The multitime lower value function $m(t,x)$ and the multitime upper value function $M(t,x)$ are piecewise continuously differentiable (see below, the boundedness and continuity of the value functions).
\section{Properties of lower and upper values} \begin{theorem}\textbf{(multitime dynamic programming optimality conditions)} The lower and upper value functions can be written respectively in the form \begin{equation}\begin{split} m(t,x)\ & =\min_{\Psi\in \mathcal{B}(t)} \max_{u_\alpha\in \mathcal{U}(t)}\bigg\{ \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha \\& +m(t+h,x(t+h))\bigg\} \end{split}\end{equation} and \begin{equation}\begin{split} M(t,x)\ & =\max_{\Phi\in \mathcal{A}(t)}\min_{v_\alpha\in \mathcal{V}(t)}\bigg\{ \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),\Phi [v_\alpha](s),v_\alpha(s))ds^\alpha \\ & +M(t+h,x(t+h))\bigg\}, \end{split}\end{equation} for all $(t,x) \in \Omega_{0T}\times \mathbb{R}^n$ and all $h\in \Omega_{0T-t}.$ \end{theorem} \begin{proof} First we recognize the Bellman principle (we write the value of a decision problem at a certain point in multitime in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those initial choices). To confirm the first statement, we shall use the function \begin{equation}\begin{split} w(t,x)\ &=\min_{\Psi\in \mathcal{B}(t)} \max_{u_\alpha\in \mathcal{U}(t)}\bigg\{ \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha \\ & +m(t+h,x(t+h))\bigg\}. \end{split}\end{equation} We will show that, for all $\varepsilon >0,$ the lower value function $m(t,x)$ satisfies the two inequalities $m(t,x)\leq w(t,x)+2\varepsilon$ and $m(t,x)\geq w(t,x)-3\varepsilon.$ Since $\varepsilon>0$ is arbitrary, it follows that $m(t,x)=w(t,x).$ \begin{enumerate}[i)] \item For $\varepsilon >0,$ there exists a strategy $\Upsilon\in \mathcal{B}(t)$ such that \begin{equation}\begin{split}\label{eq.7} w(t,x)\ &\geqslant \max_{u_\alpha\in \mathcal{U}(t)}\bigg\{ \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),u_\alpha(s),\Upsilon[u_\alpha](s))ds^\alpha \\ & +m(t+h,x(t+h))\bigg\}-\varepsilon. \end{split}\end{equation} We shall use the state $x(\cdot)$ which solves the (PDE) system, with the initial condition $\overline{x}=x(t+h)$ (Cauchy problem), on the set $\Omega_{tT}\setminus \Omega_{tt+h},$ for each $\overline{x}\in \mathbb{R}^n$. We can write \begin{equation}\begin{split} m(t+h,\overline{x}) \ & =\min_{\Psi\in \mathcal{B}(t+h)} \max_{u_\alpha\in \mathcal{U}(t+h)}\bigg\{ \int_{\Gamma_{t+hT}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha \\ & +g(x(T))\bigg\}. \end{split}\end{equation} Thus there exists a strategy $\Upsilon_{\overline{x}}\in \mathcal{B}(t+h)$ for which \begin{equation}\begin{split}\label{eq.8} m(t+h,\overline{x})\ & \geqslant \max_{u_\alpha\in \mathcal{U}(t+h)}\bigg\{ \int_{\Gamma_{t+hT}} L_\alpha (s,x(s),u_\alpha(s),\Upsilon_{\overline{x}}[u_\alpha](s))ds^\alpha \\& + g(x(T))\bigg\}-\varepsilon.
\end{split}\end{equation} Define a new strategy $$\Psi\in\mathcal{B}(t), \Psi[u_\alpha](s)\equiv \left\{\begin{array}{ll} \Upsilon[u_\alpha](s) & s\in \Omega_{tt+h}\\ \Upsilon_{\overline{x}}[u_\alpha](s) & s\in \Omega_{tT}\setminus\Omega_{tt+h}, \end{array}\right.$$ for each control $u_\alpha\in \mathcal{U}(t).$ For any $u_\alpha\in \mathcal{U}(t)$, substituting the inequality $\eqref{eq.8}$ into the inequality $\eqref{eq.7}$, we obtain $$w(t,x)\geqslant \int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha+g(x(T))-2\varepsilon.$$ Consequently $$ \max_{u_\alpha\in \mathcal{U}(t)}\left\lbrace \int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha+g(x(T))\right\rbrace\leq w(t,x)+2\varepsilon.$$ Hence $$m(t,x)\leq w(t,x)+2\varepsilon.$$ \item On the other hand, there exists a strategy $\Psi\in\mathcal{B}(t)$ for which we can write the inequality \begin{equation}\label{eq.9} m(t,x)\geqslant \max_{u_\alpha\in \mathcal{U}(t)}\left\lbrace \int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha+g(x(T))\right\rbrace-\varepsilon. \end{equation} By the definition of $w(t,x),$ we have \begin{equation}\begin{split} w(t,x) \ & \leqslant \max_{u_\alpha\in U(t)}\bigg\{ \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha \\& +m(t+h,x(t+h))\bigg\} \end{split}\end{equation} and consequently there exists a control $u^1_\alpha\in \mathcal{U}(t)$ such that \begin{equation}\begin{split}\label{eq.10} w(t,x) \ & \leqslant \int_{\Gamma_{tt+h}} L_\alpha (s,x(s),u^1_\alpha(s),\Psi[u^1_\alpha](s))ds^\alpha \\& +m(t+h,x(t+h))+\varepsilon. \end{split}\end{equation} Define a new control $${u_\alpha^\star}\in \mathcal{U}(t), {u_\alpha^\star}(s)\equiv \left\{\begin{array}{ll} u^1_\alpha(s) & s\in \Omega_{tt+h}\\ u_\alpha(s) & s\in \Omega_{tT}\setminus\Omega_{tt+h}, \end{array}\right.$$ for each control $u_\alpha\in \mathcal{U}(t+h)$ and then define the strategy ${\Psi}^\star\in\mathcal{B}(t+h), \Psi^\star[u_\alpha](s)\equiv\Psi[{u_\alpha^\star}](s), s\in \Omega_{tT}\setminus\Omega_{tt+h}.$ We find the inequality \begin{equation}\begin{split} \ & m(t+h,x(t+h)) \\ & \leq\max_{u_\alpha\in \mathcal{U}(t+h)}\left\lbrace \int_{\Gamma_{tT}\setminus \Gamma_{tt+h}} L_\alpha(s,x(s),u_\alpha(s),\Psi^\star[u_\alpha](s))ds^\alpha+g(x(T))\right\rbrace \end{split}\end{equation} and so there exists the control $u^2_\alpha\in \mathcal{U}(t+h)$ for which \begin{equation}\begin{split}\label{eq.11} \ & m(t+h, x(t+h))\\& \leq \int_{\Gamma_{tT}\setminus \Gamma_{tt+h}} L_\alpha(s,x(s),u^2_\alpha(s),\Psi^\star[u^2_\alpha](s))ds^\alpha+g(x(T)) +\varepsilon. \end{split}\end{equation} Define a new control $$u_\alpha\in \mathcal{U}(t), u_\alpha(s)\equiv \left\{\begin{array}{ll} u^1_\alpha(s) & s\in \Omega_{tt+h}\\ u^2_\alpha(s) & s\in \Omega_{tT}\setminus\Omega_{tt+h}. \end{array}\right.$$ Then the inequalities $\eqref{eq.10}$ and $\eqref{eq.11}$ yield $$ w(t,x)\leq \int_{\Gamma_{tT}} L_\alpha(s,x(s),u_\alpha(s),\Psi[u_\alpha](s))ds^\alpha+g(x(T)) +2\varepsilon,$$ and so $\eqref{eq.9}$ implies the inequality $$ w(t,x)\leq m(t,x)+3\varepsilon.$$ This inequality and $m(t,x)\leq w(t,x)+2\varepsilon$ complete the proof.
\end{enumerate} \end{proof} \begin{theorem}\textbf{(boundedness and continuity of the value functions)} The lower and upper value functions, $m(t,x)$ and $M(t,x)$, satisfy the boundedness conditions $$\vert m(t,x)\vert, \vert M(t,x)\vert\leq D$$ $$\vert m(t,x)-m(\hat{t},\hat{x})\vert, \vert M(t,x)-M(\hat{t},\hat{x})\vert\leq E\,\, \ell (\Gamma_{\hat{t}\,t})+ F\,\Vert x-\hat{x}\Vert,$$ for some constants $D, E, F$ and for all $t, \hat{t}\in \Omega_{0T}, x, \hat{x} \in \mathbb{R}^n.$ \end{theorem} \begin{proof} We prove only the statements for the upper value function $M(t,x).$ Since $\vert g(x)\vert\leqslant B, \vert L_\alpha(t,x,u_\alpha,v_\alpha)\vert\leqslant C_\alpha, \alpha=\overline{1,m}$, we find \begin{equation}\begin{split} \vert J_{t,x}(u(\cdot),v(\cdot))\vert \ & =\Big\vert \int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))ds^\alpha+g(x(T))\Big\vert \\& \leq \Big\vert \int_{\Gamma_{tT}} L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))ds^\alpha \Big\vert + \vert g(x(T))\vert \\& \leq \int_{\Gamma_{tT}} \Vert L_\alpha (s,x(s),u_\alpha(s),v_\alpha(s))\Vert \Vert ds^\alpha \Vert +\vert g(x(T))\vert \\& \leq \Vert C \Vert \int_{\Gamma_{tT}} ds + B= \Vert C \Vert \ell(\Gamma_{tT}) +B\leq \Vert C \Vert \ell(\Gamma_{0T}) +B=D \\ & \Longrightarrow \vert M(t,x)\vert\leq D, \end{split}\end{equation} for all $u_\alpha(\cdot)\in \mathcal{U}(t),v_\alpha(\cdot)\in \mathcal{V}(t).$ Let $x_1,x_2\in \mathbb{R}^n,\,\, t_1, t_2 \in \Omega_{0T}.$ For $\varepsilon>0$, there exists a strategy $\Phi\in \mathcal{A}(t_1)$ such that \begin{equation}\label{eq:1} M(t_1,x_1)\leq \min_{v_\alpha \in \mathcal{V}(t_1)} J(\Phi[v_\alpha],v_\alpha)+\varepsilon. \end{equation} Define the control $$\overline{v}_\alpha\in \mathcal{V}(t_1),\overline{v}_\alpha(s)\equiv \left\{\begin{array}{ll} {v}_\alpha^1(s) & s\in \Omega_{0t_2}\setminus\Omega_{0t_1}\\ {v}_\alpha(s) & s\in \Omega_{0T}\setminus\Omega_{0t_2}, \end{array}\right.$$ for any $v_\alpha \in \mathcal{V}(t_2)$ and some $v^1_\alpha \in V$, and define, for each $v_\alpha \in \mathcal{V}(t_2)$, the strategy $\underline{\Phi}\in \mathcal{A}(t_2)$ (the restriction of $\Phi$ to $\Omega_{0T}\setminus \Omega_{0t_2}$) by $\underline{\Phi}[v_\alpha](s)=\Phi[\overline{v}_\alpha](s), s\in \Omega_{0T}\setminus\Omega_{0t_2}.$ Choose the control $v_\alpha\in \mathcal{V}(t_2)$ so that \begin{equation}\label{eq:2} M(t_2,x_2)\geq J(\underline{\Phi}[v_\alpha],v_\alpha)-\varepsilon. \end{equation} By the inequality $\eqref{eq:1},$ we have \begin{equation}\label{eq:3} M(t_1,x_1)\leq J(\Phi[\overline{v}_\alpha],\overline{v}_\alpha)+\varepsilon. \end{equation} We know that the (unique, Lipschitz) solution $x(\cdot)$ of the Cauchy problem $$\left\{\begin{array}{ll} \frac{\partial x^i}{\partial s^\alpha}(s)=X^i_\alpha(s,x(s),u_\alpha(s),v_\alpha(s))\\ x(t)=x, \quad s\in \Omega_{tT}\subset \mathbb{R}_+^m, x\in \mathbb{R}^n, i=\overline{1,n}, \alpha =\overline{1,m}, \end{array}\right.$$ is the response to the controls $u_\alpha(\cdot), v_\alpha(\cdot)$ for $s\in \Omega_{0T}.$ We choose $x_1(\cdot)$ as solution of the Cauchy problem $$\left\{\begin{array}{ll} \frac{\partial x^i_1}{\partial s^\alpha}(s)=X^i_\alpha(s,x_1(s),\Phi[\overline{v}_\alpha](s),\overline{v}_\alpha(s))\\ x_1(t_1)=x_1, \quad s\in \Omega_{0T}\setminus \Omega_{0t_1}.
\end{array}\right.$$ Equivalently, $x_1(\cdot)$ is a solution of the integral equation $$x_1(s)= x_1(t_1) + \int_{\Gamma_{t_1s}}X_\alpha(\sigma,x_1(\sigma),\Phi[\overline{v}_\alpha](\sigma),\overline{v}_\alpha(\sigma))d\sigma^\alpha.$$ Take $x_2(\cdot)$ as solution of the Cauchy problem $$\left\{\begin{array}{ll} \frac{\partial x^i_2}{\partial s^\alpha}(s)=X^i_\alpha(s,x_2(s),\underline{\Phi}[v_\alpha](s),v_\alpha(s))\\ x_2(t_2)=x_2, \quad s\in \Omega_{0T}\setminus \Omega_{0t_2}. \end{array}\right.$$ Equivalently, $x_2(\cdot)$ is a solution of the integral equation $$x_2(s)= x_2(t_2) + \int_{\Gamma_{t_2s}}X_\alpha(\sigma,x_2(\sigma),\underline{\Phi}[v_\alpha](\sigma),v_\alpha(\sigma))d\sigma^\alpha.$$ It follows that $$\Vert x_1(t_2)-x_1 \Vert = \Vert x_1(t_2)-x_1(t_1)\Vert \leq \Vert A\Vert \,\ell(\Gamma_{t_1t_2}).$$ Since $v_\alpha=\overline{v}_\alpha$ and $\underline{\Phi}[v_\alpha]=\Phi[\overline{v}_\alpha],$ for $s\in \Omega_{0T}\setminus\Omega_{0t_2}$, we find the estimate \begin{equation}\begin{split} \Vert x_1(s)-x_2(s)\Vert \ & \leq \Vert x_1(t_1) - x_2(t_2)\Vert + \Vert \int_{\Gamma_{t_1t_2}}\cdots \Vert\\ & \leq \Vert A \Vert \ell(\Gamma_{t_1t_2})+ \Vert x_1-x_2\Vert ,\,\, \hbox{on}\,\, t_2\leq s\leq T. \end{split}\end{equation} Thus the inequalities $\eqref{eq:2}$ and $\eqref{eq:3}$ imply $$ M(t_1,x_1)-M(t_2,x_2) \leq J(\Phi[\overline{v}_\alpha],\overline{v}_\alpha)-J(\underline{\Phi}[v_\alpha],v_\alpha)+2\varepsilon $$ $$\leq \Big\vert \int_{\Gamma_{t_1t_2}} L_\alpha (s,x_1(s),\Phi[\overline{v}_\alpha](s),\overline{v}_\alpha(s))ds^\alpha$$ $$+\int_{\Gamma_{t_2T}} (L_\alpha (s,x_1(s),\underline{\Phi}[v_\alpha](s),v_\alpha(s)) -L_\alpha (s,x_2(s),\underline{\Phi}[v_\alpha](s),v_\alpha(s)))ds^\alpha$$ $$+g(x_1(T))-g(x_2(T))\Big\vert +2\varepsilon $$ $$\leq\int_{\Gamma_{t_1t_2}} \vert L_\alpha (s,x_1(s),\Phi[\overline{v}_\alpha](s),\overline{v}_\alpha(s))ds^\alpha\vert$$ $$+\int_{\Gamma_{t_2T}} \vert (L_\alpha (s,x_1(s),\underline{\Phi}[v_\alpha](s),v_\alpha(s)) -L_\alpha (s,x_2(s),\underline{\Phi}[v_\alpha](s),v_\alpha(s)))ds^\alpha\vert$$ $$+\vert g(x_1(T))-g(x_2(T))\vert +2\varepsilon$$ $$\leq \Vert C \Vert \ell(\Gamma_{t_1t_2}) +\Vert C \Vert \ell(\Gamma_{t_2T})\,(\Vert A \Vert \ell(\Gamma_{t_1t_2})+ \Vert x_1-x_2\Vert) +B\, \Vert x_1-x_2\Vert) +2\varepsilon$$ $$\leq \Vert C \Vert \ell(\Gamma_{t_1t_2}) +\Vert C \Vert \ell(\Gamma_{0T})\,(\Vert A \Vert \ell(\Gamma_{t_1t_2})+ \Vert x_1-x_2\Vert) +B\, \Vert x_1-x_2\Vert) +2\varepsilon.$$ Since $\varepsilon$ is arbitrary, we obtain the inequality \begin{equation}\label{eq:7} M(t_1,x_1)-M(t_2,x_2)\leq E\,\ell(\Gamma_{t_1t_2}) + F \,\Vert x_1-x_2\Vert. \end{equation} Let $\varepsilon>0$ and choose the strategy $\Phi\in \mathcal{A}(t_2)$ such that \begin{equation}\label{eq:4} M(t_2,x_2)\leq \min_{v_\alpha \in \mathcal{V}(t_2)} J(\Phi[v_\alpha],v_\alpha)+\varepsilon. \end{equation} For each control $v_\alpha \in \mathcal{V}(t_1)$ and $s\in \Omega_{0T}\setminus\Omega_{0t_2},$ define the control $\underline{v}_\alpha\in \mathcal{V}(t_2), \underline{v}_\alpha(s)=v_\alpha(s).$ For some $u^1_\alpha \in U,$ we define the strategy $\overline{\Phi}\in \mathcal{A}(t_1)$ (which restricts to $\Phi$ on $\Omega_{0T}\setminus\Omega_{0t_2}$) by $$\overline{\Phi}[{v}_\alpha]= \left\{\begin{array}{ll} u^1_\alpha & s\in \Omega_{0t_2}\setminus\Omega_{0t_1}\\ \Phi[\underline{v}_\alpha] & s\in \Omega_{0T}\setminus\Omega_{0t_2}.
\end{array}\right.$$ Now choose a control $v_\alpha\in \mathcal{V}(t_1)$ so that \begin{equation}\label{eq:5} M(t_1,x_1)\geq J(\overline{\Phi}[v_\alpha],v_\alpha)-\varepsilon. \end{equation} By the inequality $\eqref{eq:4},$ we have \begin{equation}\label{eq:6} M(t_2,x_2)\leq J(\Phi[\underline{v}_\alpha],\underline{v}_\alpha)+\varepsilon. \end{equation} We choose $x_1(\cdot)$ as the solution of the Cauchy problem (PDE system plus initial condition) $$\left\{\begin{array}{ll} \frac{\partial x_1^i}{\partial s^\alpha}(s)=X^i_\alpha(s,x_1(s),\overline{\Phi}[v_\alpha](s),v_\alpha(s))\\ x_1(t_1)=x_1, \quad s\in \Omega_{0T}\setminus \Omega_{0t_1}, \end{array}\right.$$ and $x_2(\cdot)$ as the solution of the Cauchy problem (PDE system plus initial condition) $$\left\{\begin{array}{ll} \frac{\partial x_2^i}{\partial s^\alpha}(s)=X^i_\alpha(s,x_2(s),\Phi[\underline{v}_\alpha](s),\underline{v}_\alpha(s))\\ x_2(t_2)=x_2, \quad s\in \Omega_{0T}\setminus \Omega_{0t_2}. \end{array}\right.$$ Using the associated integral equations, it follows that $$\Vert x_1(t_2)-x_1 \Vert = \Vert x_1(t_2)-x_1(t_1)\Vert \leq \Vert A\Vert \,\ell(\Gamma_{t_1t_2}).$$ Also, since $v_\alpha=\underline{v}_\alpha$ and $\overline{\Phi}[v_\alpha]=\Phi[\underline{v}_\alpha]$ for $s\in \Omega_{0T}\setminus\Omega_{0t_2}$, we find \begin{equation}\begin{split} \Vert x_1(s)-x_2(s)\Vert \ & \leq \Vert x_1(t_1) - x_2(t_2)\Vert + \Vert \int_{\Gamma_{t_1t_2}}\cdots \Vert\\ & \leq \Vert A \Vert \ell(\Gamma_{t_1t_2})+ \Vert x_1-x_2\Vert,\,\, \hbox{on}\,\, t_2\leq s\leq T. \end{split}\end{equation} Thus, the relations $\eqref{eq:5}$ and $\eqref{eq:6}$ imply $$ M(t_2,x_2)-M(t_1,x_1) \leq J(\Phi[\underline{v}_\alpha],\underline{v}_\alpha)-J(\overline{\Phi}[v_\alpha],v_\alpha)+2\varepsilon $$ $$ = - \int_{\Gamma_{t_1t_2}} L_\alpha (s,x_1(s),\overline{\Phi}[{v}_\alpha](s),{v}_\alpha(s))ds^\alpha $$ $$ +\int_{\Gamma_{t_2T}} (L_\alpha (s,x_2(s),{\Phi}[\underline{v}_\alpha](s),\underline{v}_\alpha(s)) - L_\alpha (s,x_1(s),{\Phi}[\underline{v}_\alpha](s),\underline{v}_\alpha(s)))ds^\alpha $$ $$ +g(x_2(T))-g(x_1(T))+2\varepsilon $$ $$\leq \Vert C \Vert \ell(\Gamma_{t_1t_2}) +\Vert C \Vert \ell(\Gamma_{0T})\,(\Vert A \Vert \ell(\Gamma_{t_1t_2})+ \Vert x_1-x_2\Vert) +B\,(\Vert A \Vert \ell(\Gamma_{t_1t_2})+\Vert x_1-x_2\Vert) +2\varepsilon.$$ Since $\varepsilon$ is arbitrary, we obtain the inequality \begin{equation}\label{eq:8} M(t_2,x_2)-M(t_1,x_1)\leq E\,\ell(\Gamma_{t_1t_2}) + F \,\Vert x_1-x_2\Vert. \end{equation} By \eqref{eq:7} and \eqref{eq:8}, the continuity of the upper value function is established; the proof for the lower value function is similar. \end{proof} \section{Viscosity solutions of \\multitime (HJIU) PDEs} \begin{theorem}\textbf{(PDEs for multitime upper value function, resp.
multitime lower value function)} The multitime upper value function $M(t,x)$ and the multitime lower value function $m(t,x)$ are the viscosity solutions of the Hamilton-Jacobi-Isaacs-Udri\c ste (HJIU) PDEs: \begin{itemize} \item the multitime upper (HJIU) PDEs $$\frac{\partial M}{\partial t^\alpha}(t,x)+\min_{v_\alpha\in \mathcal{V}} \max_{u_\alpha\in \mathcal{U}} \left\lbrace \frac{\partial M}{\partial x^i}(t,x) X_\alpha^i(t,x,u_\alpha,v_\alpha)+L_\alpha(t,x,u_\alpha,v_\alpha)\right\rbrace =0,$$ with the terminal condition $M(T,x)=g(x);$ \item the multitime lower (HJIU) PDEs $$\frac{\partial m}{\partial t^\alpha}(t,x)+\max_{u_\alpha \in \mathcal{U}} \min_{v_\alpha \in \mathcal{V}} \left\lbrace \frac{\partial m}{\partial x^i}(t,x) X_\alpha^i(t,x,u_\alpha,v_\alpha)+L_\alpha(t,x,u_\alpha,v_\alpha)\right\rbrace =0,$$ with the terminal condition $m(T,x)=g(x).$ \end{itemize} \end{theorem} \begin{remark} If we introduce the so-called upper and lower Hamiltonian $1$-forms, defined respectively by $$H^+_\alpha(t,x,p)=\min_{v_\alpha\in \mathcal{V}} \max_{u_\alpha \in \mathcal{U}}\lbrace p_i(t) X_\alpha^i(t,x,u_\alpha,v_\alpha)+L_\alpha(t,x,u_\alpha,v_\alpha)\rbrace,$$ $$H^-_\alpha(t,x,p)=\max_{u_\alpha\in \mathcal{U}} \min_{v_\alpha\in \mathcal{V}}\lbrace p_i(t) X_\alpha^i(t,x,u_\alpha,v_\alpha)+L_\alpha(t,x,u_\alpha,v_\alpha)\rbrace,$$ then the multitime (HJIU) PDE systems can be written in the form $$\frac{\partial M}{\partial t^\alpha}(t,x)+H^+_{\alpha}\left( t,x,\frac{\partial M}{\partial x}(t,x)\right) =0$$ and $$\frac{\partial m}{\partial t^\alpha}(t,x)+H^-_\alpha\left( t,x,\frac{\partial m}{\partial x}(t,x)\right) =0.$$ \end{remark} The proof will be given in another paper. \section{Representation formula of viscosity \\solutions for multitime (HJ) PDEs} In this section we obtain a representation formula for the viscosity solution $M(t,x)$ of the multitime (HJ) PDE system \begin{equation} \frac{\partial M}{\partial t^\alpha}+H_\alpha\left( t,x,\frac{\partial M}{\partial x}(t,x)\right) =0, \quad (t,x)\in \Omega_{0T}\times \mathbb{R}^n,\;\alpha=\overline{1,m}, \end{equation} \begin{equation} M(0,x)=g(x), \quad x\in \mathbb{R}^n \,(\hbox{initial\, condition}), \end{equation} where the unique solution $M(t,x)$ satisfies the inequalities \begin{equation}\label{eq:11} \left\{\begin{array}{ll} \vert M(t,x)\vert\leq D\\ \vert M(t,x)-M(\hat{t},\hat{x})\vert\leq E\,\,\ell(\Gamma_{t\hat t})+ F\,\,\Vert x-\hat{x}\Vert, \end{array}\right.\end{equation} for some constants $D, E, F$ (for $m=1,$ see also [4]). Also, we assume that $g:\mathbb{R}^n \rightarrow \mathbb{R}$ and $H_\alpha:\Omega_{0T} \times \mathbb{R}^n\times \mathbb{R}^n\rightarrow \mathbb{R}$ satisfy the inequalities $$\left\{\begin{array}{ll} \vert g (x)\vert\leq B\\ \vert g(x)-g (\hat{x})\vert\leq B \Vert x-\hat{x}\Vert \end{array}\right.$$ and \begin{equation}\label{eq:12} \left\{\begin{array}{ll} \vert H_\alpha(t,x,0)\vert\leq K_\alpha\\ \vert H_\alpha (t,x,p)-H_\alpha(\hat{t},\hat{x},\hat{p})\vert\leq K_\alpha\,\, (\ell(\Gamma_{t\hat t})+\Vert x-\hat{x}\Vert +\Vert p-\hat{p}\Vert). \end{array}\right. \end{equation} \textbf{Max-min representation of a Lipschitz function in terms of affine functions} (for $m=1,$ see also [2], [3]). \begin{lemma}\label{l-2} For each $\alpha$, let \begin{equation}\label{eq:10} \left\{\begin{array}{ll} U=B(0,1)\subset \mathbb{R}^{n}\\ V=B(0,P)\subset \mathbb{R}^{n}\\ X_\alpha(u_\alpha)=K_\alpha u_\alpha,\, K_\alpha \in \mathbb{R}\\ L_\alpha(t,x,u_\alpha,v_\alpha)=H_\alpha (t,x,v_\alpha)-<K_\alpha u_\alpha,v_\alpha>.
\end{array}\right. \end{equation} Let $H_\alpha$ be a Lipschitz $1$-form. For some constant $P>0$ and for each $t\in \Omega_{0T}$ and $x \in \mathbb{R}^n,$ we have $$H_\alpha (t,x,{p})=\max_{v_\alpha \in V}\min_{u_\alpha \in U}\left\lbrace <X_\alpha(u_\alpha),{p}> + L_\alpha(t,x,u_\alpha,v_\alpha)\right\rbrace ,$$ if $\Vert {p}\Vert \leq P$. \end{lemma} \begin{proof} In view of the Lipschitz assumption $H_\alpha (t,x,v_\alpha)-H_\alpha(t,x,{p})\leq K_\alpha \Vert {p}-v_\alpha\Vert$, the Cauchy-Schwarz inequality, and the condition $\Vert u_\alpha\Vert\leq 1$, we have, for any $x\in \mathbb{R}^n,$ \begin{equation}\begin{split} H_\alpha (t,x,{p})\ & =\max_{v_\alpha \in V} \left\lbrace H_\alpha(t,x,v_\alpha) - K_\alpha\Vert {p}-v_\alpha\Vert\right\rbrace \\ & =\max_{v_\alpha \in V}\min_{u_\alpha\in U}\left\lbrace H_\alpha(t,x,v_\alpha)\,+\, <K_\alpha u_\alpha,{p}-v_\alpha>\right\rbrace. \end{split}\end{equation} \end{proof} \textbf{Max-min representation of a Lipschitz function in terms of positively homogeneous functions} (for $m=1,$ see also [2], [3]). \begin{lemma} Let $H_\alpha$ be a Lipschitz $1$-form which is positively homogeneous in $p,$ i.e., $$H_\alpha(t,x,\lambda p)=\lambda H_\alpha(t,x,p),\,\,\lambda \geq 0.$$ Then there exist compact sets $U, V\subset \mathbb{R}^{2n}$ and vector fields $$X_\alpha:\Omega_{0T}\times \mathbb{R}^n\times U\times V\rightarrow \mathbb{R}^n$$ satisfying $$\Vert X_\alpha(t,x,u_\alpha,v_\alpha)-X_\alpha(t,\hat{x},u_\alpha,v_\alpha)\Vert\leqslant A_\alpha\Vert x-\hat{x}\Vert$$ and such that, for each $\alpha$, $$H_\alpha (t,x,p)=\max_{v_\alpha \in V}\min_{u_\alpha \in U}\left\lbrace <X_\alpha(t,x,u_\alpha,v_\alpha),p> \right\rbrace ,$$ for all $t\in \Omega_{0T},x\in \mathbb{R}^n,p \in \mathbb{R}^n.$ \end{lemma} \begin{proof} Let $u_\alpha=(u^1_\alpha,u^2_\alpha),v_\alpha=(v^1_\alpha,v^2_\alpha)$ ($2n$-dimensional controls) and \begin{equation}\label{eq:9} \left\{\begin{array}{ll} U=V=B(0,1)\times B(0,1)\subset \mathbb{R}^{2n}\\ L_\alpha(t,x,u^1_\alpha,v^1_\alpha)=H_\alpha (t,x,v_\alpha^1)-<K_\alpha u^1_\alpha,v_\alpha^1>\\ X_\alpha(t, x, u_\alpha, v_\alpha)=K_\alpha u^1_\alpha+ C_\alpha v^2_\alpha+ (L_\alpha(t,x,u^1_\alpha,v^1_\alpha)- C_\alpha)u^2_\alpha.\\ \end{array}\right. \end{equation} According to Lemma \ref{l-2} and the assumptions $\eqref{eq:9},$ if $\Vert\eta\Vert =1,$ we have \begin{equation}\begin{split} H_\alpha (t,x,\eta)\ & =\max_{v^1_\alpha \in V^1}\min_{u^1_\alpha\in U^1}\left\lbrace <K_\alpha u^1_\alpha,\eta> + L_\alpha (t,x,u^1_\alpha, v^1_\alpha)\right\rbrace, \end{split}\end{equation} for $U^1=V^1=B(0,1)\subset \mathbb{R}^n$. For any $p\neq 0$, we can write \begin{equation}\begin{split} H_\alpha(t,x,p)\ & =\Vert p\Vert H_\alpha \left( t,x,\frac{p}{\Vert p\Vert}\right) \\ & =\max_{v^1_\alpha\in V^1}\min_{u^1_\alpha\in U^1}\left\lbrace <K_\alpha u^1_\alpha,p>+L_\alpha (t,x,u^1_\alpha,v^1_\alpha)\Vert p\Vert\right\rbrace . \end{split}\end{equation} Then, if we choose $C_\alpha>0$ such that $\vert L_\alpha\vert\leq C_\alpha,$ we find \begin{equation}\begin{split} H_\alpha (t,x,p)\ & =\max_{v^1_\alpha \in V^1}\min_{u^1_\alpha\in U^1}\bigg\{ <K_\alpha u^1_\alpha,p> +C_\alpha\Vert p\Vert +( L_\alpha (t,x,u^1_\alpha, v^1_\alpha)-C_\alpha)\Vert p\Vert\bigg\}\\ & =\max_{v^1_\alpha \in V^1}\min_{u^1_\alpha\in U^1}\max_{v^2_\alpha \in V^1}\min_{u^2_\alpha\in U^1}\bigg\{ <K_\alpha u^1_\alpha,p> +<C_\alpha v^2_\alpha, p>\\ & +( L_\alpha (t,x,u^1_\alpha, v^1_\alpha)-C_\alpha)< u^2_\alpha,p> \bigg\} \\ & =\max_{v_\alpha \in V}\min_{u_\alpha\in U}\bigg\{ <X_\alpha(t,x,u_\alpha, v_\alpha),p>\bigg\}.
\end{split}\end{equation} Now, interchanging $\min_{u_\alpha^1\in U^1}$ and $\max_{v_\alpha^2\in V^1}$, the result in the lemma follows. \end{proof} We are now in a position to give the main result of this section. \begin{theorem} For each $t \in \Omega_{0T}$ and $x \in \mathbb{R}^n,$ the upper value function $M(t,x)$ verifies the equality \begin{equation}\begin{split} M(t,x)=\max_{\Phi\in \mathcal{A}(T-t)}\min_{v_\alpha\in \mathcal{V}(T-t)}\bigg\{ \ & - \int_{\Gamma_{T-tT}} L_\alpha(T-s,x(s),\Phi[v_\alpha](s),v_\alpha(s))ds^\alpha \\ & +g(x(T))\bigg\} , \end{split}\end{equation} where for each pair of controls $v_\alpha\in \mathcal{V}(T-t)$, $u_\alpha=\Phi[v_\alpha]\in \mathcal{U}(T-t),$ the state function $x(\cdot)$ solves the problem \begin{equation} \left\{\begin{array}{ll} \frac{\partial x^i}{\partial s^\alpha}(s)=-K_\alpha u^i_\alpha(s), \quad s\in \Omega_{0T}\setminus \Omega_{0T-t}\\ x(T-t)=x. \end{array}\right. \end{equation} \end{theorem} \begin{proof} Let $$H^1_\alpha(t,x,p)=\max_{v_\alpha\in V}\min_{u_\alpha\in U}\left\lbrace <X_\alpha(u_\alpha),p>+L_\alpha(t,x,u_\alpha,v_\alpha)\right\rbrace,$$ $U=B(0,1)\subset \mathbb{R}^{pm}, V=B(0,P)\subset \mathbb{R}^{qm}$ and $X^i_\alpha, L_\alpha$ Lipschitz functions satisfying the assumptions $\eqref{eq:10}.$ Then $H_\alpha(t,x,p)=H^1_\alpha(t,x,p)$ provided $\Vert p\Vert\leq P.$ Since $M(t,x)$ satisfies $\eqref{eq:11},$ it follows that $M(t,x)$ is also the unique viscosity solution of the multitime (HJ) PDE system (for $m=1,$ see also [4]) \begin{equation} \frac{\partial M}{\partial t^\alpha}+H^1_\alpha\left( t,x,\frac{\partial M}{\partial x}(t,x)\right) =0, \,\,(t,x)\in \Omega_{0T}\times \mathbb{R}^n,\alpha=\overline{1,m}, \end{equation} \begin{equation} M(0,x)=g(x),\,\, x\in \mathbb{R}^n. \end{equation} If we take $M^1(t,x)=M(T-t,x),$ one observes that $M^1(t,x)$ is a viscosity solution of the system (for $m=1,$ see also [2]) \begin{equation} \frac{\partial M^1}{\partial t^\alpha}+H^+_\alpha\left( t,x,\frac{\partial M^1}{\partial x}(t,x)\right) =0,\,\, (t,x)\in \Omega_{0T}\times \mathbb{R}^n,\alpha=\overline{1,m}, \end{equation} \begin{equation} M^1(T,x)=g(x),\,\, x\in \mathbb{R}^n, \end{equation} where $$H^+_\alpha(t,x,p)=\max_{v_\alpha\in V}\min_{u_\alpha\in U}\left\lbrace -<X_\alpha(u_\alpha),p>+L_\alpha(T-t,x,u_\alpha,v_\alpha)\right\rbrace.$$ Using the above developments, we obtain \begin{equation}\begin{split} M^1(t,x)=M(T-t,x)=\max_{\Phi\in \mathcal{A}(t)}\min_{v_\alpha\in \mathcal{V}(t)}\bigg\{ \ & - \int_{\Gamma_{tT}} L_\alpha(T-s,x(s),\Phi[v_\alpha](s),v_\alpha(s))ds^\alpha \\ & +g(x(T))\bigg\} , \end{split}\end{equation} where $x(\cdot)$ is the solution of the Cauchy problem \begin{equation} \left\{\begin{array}{ll} \frac{\partial x^i}{\partial s^\alpha}(s)=-X^i_\alpha(u_\alpha(s))=-K_\alpha u^i_\alpha(s), \quad s\in \Omega_{0T}\setminus \Omega_{0t}\\ x(t)=x, \end{array}\right. \end{equation} for the control $u_\alpha(\cdot)=\Phi[v_\alpha].$ \end{proof}
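As a quick numerical illustration of the representation results above, the following Python sketch (our own construction, not part of the original argument) checks the identity of Lemma \ref{l-2} in the scalar case $m=n=1$, using the illustrative Hamiltonian $H(p)=\sqrt{1+p^2}-1$, which is Lipschitz with constant $K=1$; the inner minimization over $\Vert u\Vert\leq 1$ is carried out analytically, and the outer maximization is approximated on a grid over $V=B(0,P)$.
\begin{verbatim}
import numpy as np

# Check, in one dimension: for |p| <= P and H Lipschitz with constant K,
#   H(p) = max_{|v|<=P} min_{|u|<=1} { K*u*p + H(v) - K*u*v }
#        = max_{|v|<=P} { H(v) - K*|p - v| },
# because the inner minimum over |u| <= 1 equals -K*|p - v|.
K, P = 1.0, 2.0
H = lambda p: np.sqrt(1.0 + p**2) - 1.0      # Lipschitz constant 1

v = np.linspace(-P, P, 20001)                # grid over the control set V
for p in (-1.5, -0.3, 0.0, 0.8, 1.9):
    rep = np.max(H(v) - K*np.abs(p - v))     # outer maximization on the grid
    print("p = %+.2f   H(p) = %.6f   max-min = %.6f" % (p, H(p), rep))
\end{verbatim}
The two printed columns agree (up to the grid resolution), as the lemma asserts for $\Vert p\Vert \leq P$.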
{ "attr-fineweb-edu": 1.356445, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUa7g5qoaAwnody91V
\section{Introduction}\label{Introduction} \noindent This paper examines the properties of four alternative parametric techniques -- frequency domain maximum likelihood (FML), Whittle, time domain maximum likelihood (TML) and conditional sum of squares (CSS) -- when they are employed to estimate a mis-specified model applied to a true data generating process (TDGP) that exhibits long range dependence. These estimators have a long history in time series analysis, dating back to the pioneering work of \cite{grenander:rosenblatt:1957}, \cite{whittle:1962}, \cite{walker:1964}, \cite{box:jenkins} and \cite{hannan:1973}, and their properties in the context of weakly dependent processes are well known \citep[see, for instance,][]{brockwell:davis:1991}. Extension of these methods to the analysis of strongly dependent processes has been examined in \cite{fox:taqqu:1986}, \cite{dahlhaus:1989}, \cite{sowell:1992}, \cite{beran:1995} and \cite{robinson:2006}, among others, but this literature presupposes that the structure of the TDGP is known apart from the values of a finite number of parameters that are to be estimated. Recognition that the true structure can only ever be approximated by the model being fitted has given rise to two responses: (i) the development of semi-parametric techniques, such as those advanced by \cite{geweke:porter:1983} and \cite{robinson:1995a,robinson:1995b}; and (ii) the examination of the consequences of mis-specification. Significant contributions to the issue of mis-specification in long memory models have been made by \cite{yajima:1992} and \cite{chen:deo:2006}. Specifically, Yajima investigates the asymptotic properties of the estimators of the parameters in an autoregressive moving average (ARMA) model under a long memory fractional noise TDGP; whilst Chen and Deo focus on the estimation of the parameters in an incorrectly specified fractionally integrated model. Both studies demonstrate that once model mis-specification is accommodated, consistency for the true parameters no longer obtains, and that the properties of inferential methods become case-specific and dependent on the precise nature and degree of mis-specification. In particular, it is shown that the estimator of the (vector-valued) parameter of a mis-specified model converges, subject to regularity, to a `pseudo-true' value that is different from the true value and that the estimator may or may not achieve the usual $\sqrt{n}$ rate of convergence and limiting Gaussianity, depending on the magnitude of the deviation between the true and pseudo-true parameters. By definition, the pseudo-true parameter is the value which optimizes the limiting form of the objective function that defines an estimator. \cite{chen:deo:2006} derive the form of this limiting objective function for the FML estimator, and proceed to demonstrate that the asymptotic behaviour of the parametric estimator of the incorrectly specified model is dependent on whether the distance between the true and pseudo-true values of the long memory parameter, $d,$ is less than, equal to, or in excess of $0.25.$ For specific models in the autoregressive fractionally integrated moving average (ARFIMA) class, this distance is then linked to respective values of the ARMA parameter(s) in the true and mis-specified models. The extent to which mis-specification of the short memory dynamics is still compatible with $\sqrt{n}$-consistency and asymptotic Gaussianity is then documented for these particular examples.
In this paper we extend the analysis of \cite{chen:deo:2006} in several directions. Firstly, we derive the limiting form of the objective function for the three other commonly used parametric estimators -- namely, Whittle, TML and CSS -- and show that the FML, Whittle, TML and CSS estimators will converge to the same pseudo-true parameter value under common mis-specification. Secondly, we derive closed-form representations for the first-order conditions that define the pseudo-true parameter for \emph{general} ARFIMA model structures. Thirdly, we extend the asymptotic theory established by Chen and Deo for the FML estimator to the other three estimators, and show that all four methods are asymptotically equivalent, in that they possess the same limiting distribution under common mis-specification. Fourthly, we demonstrate how to implement numerically the asymptotic distribution that obtains under the most extreme type of mis-specification, by using an appropriate method of truncating the series expansion in random variables that characterises the distribution. This then enables us to illustrate graphically the differences in the rates at which the finite sample distributions of the four different estimators approach the (common) asymptotic distribution. Notably, when the difference between the true and pseudo-true values of $d$ is greater than or equal to $0.25$, there is a distinct grouping into frequency domain and time domain techniques, with the latter tending to replicate the asymptotic distribution more closely than the former in small samples. Finally, we perform an extensive simulation experiment in which the relative finite sample performance of all four mis-specified estimators is assessed, with the CSS estimator exhibiting superior performance, in terms of bias and mean squared error, across a range of mis-specification settings. The paper is organized as follows. In Section \ref{mispec} we define the estimation problem, namely producing an estimate of the parameters of a fractionally integrated model when the component of the model that characterizes the short term dynamics is mis-specified. The criterion functions that define the FML estimator and the three above-mentioned alternative estimators are specified, and we demonstrate that all four estimators converge under common mis-specification. The limiting form of the criterion function for a mis-specified ARFIMA model is presented in Section \ref{pseudo}, under complete generality for the short memory dynamics in the true process and estimated model, and closed-form expressions for the first-order conditions that define the pseudo-true values of the parameters are then given. The asymptotic equivalence of all four estimation methods is proved in Section \ref{asyQ}. The finite sample performance of the four parametric estimators of $d$ in the mis-specified model -- with reference to estimating the pseudo-true value $d_{1}$ -- is documented in Section \ref{finite-misspec}. The form of the sampling distribution is recorded, as are the bias and mean squared error (MSE), under different degrees of mis-specification. Section \ref{Conclusion} then concludes. The proofs of the results presented in the paper are assembled in Appendix A, which also presents a lemma required in the proofs. Appendix B contains certain technical derivations referenced in the text.
\section{Estimation Under Misspecification\label{mispec}} Assume that $\{y_{t}\}$ is generated from a TDGP that is a stationary Gaussian process with spectral density given by
\begin{equation}\label{Spectral density_TDGP}
f_{0}(\lambda )=\frac{\sigma _{\varepsilon 0}^{2}}{2\pi }g_{0}\left( \lambda \right) (2\sin (\lambda /2))^{-2d_{0}}\,,
\end{equation}
where $g_{0}\left( \lambda \right)$ is a real valued function of $\lambda$ defined on $\left[0,\pi \right]$ that is bounded above and bounded away from zero. The model refers to a parametric specification for the spectral density of $\{y_{t}\}$ of the form
\begin{equation}
f_{1}(\mathbf{\psi ,}\lambda ) =\frac{\sigma _{\varepsilon }^{2}}{2\pi }g_{1}\left( \mathbf{\beta ,}\lambda \right) (2\sin (\lambda /2))^{-2d}\,,
\label{Spectral density_MM}
\end{equation}
that is to be estimated from the data, where $g_{1}\left( \mathbf{\beta ,}\lambda \right)$ is a real valued function of $\lambda$ defined on $\left[0,\pi \right]$ that is bounded above and bounded away from zero. Let $\mathbf{\Psi }=\mathbb{R}^{+}\times (0,0.5)\times \mathbf{\Theta }$ and denote by $\mathbf{\psi }=(\sigma _{\varepsilon }^{2},\mathbf{\eta }^{T})^{T}\in \mathbf{\Psi }$ the parameter vector of the model, where $\mathbf{\eta }=(d,\mathbf{\beta }^{T})^{T}$ and $\mathbf{\beta }\in \mathbf{\Theta }$, with $\mathbf{\Theta }\subset \mathbb{R}^{l}$ an $l$-dimensional compact convex set. It will be assumed that:
\begin{enumerate}
\item[$(A.1)$] $g_{1}(\mathbf{\beta ,}\lambda )$ is thrice differentiable with continuous third derivatives.
\item[$(A.2)$] $\inf\limits_{\mathbf{\beta }}\inf\limits_{\lambda }g_{1}(\mathbf{\beta ,}\lambda )>0$ and $\sup\limits_{\mathbf{\beta }}\sup\limits_{\lambda }g_{1}(\mathbf{\beta ,}\lambda )<\infty .$
\item[$(A.3)$] $\sup\limits_{\lambda }\sup\limits_{\mathbf{\beta }}\left\vert \frac{\partial g_{1}(\mathbf{\beta ,}\lambda )}{\partial \beta _{i}}\right\vert <\infty ,$ $1\leqslant i\leqslant l.$
\item[$(A.4)$] $\sup\limits_{\lambda }\sup\limits_{\mathbf{\beta }}\left\vert \frac{\partial ^{2}g_{1}(\mathbf{\beta ,}\lambda )}{\partial \beta _{i}\partial \beta _{j}}\right\vert <\infty ,$ $\sup\limits_{\lambda }\sup\limits_{\mathbf{\beta }}\left\vert \frac{\partial ^{2}g_{1}(\mathbf{\beta ,}\lambda )}{\partial \beta _{i}\partial \lambda }\right\vert <\infty ,$ $1\leqslant i,j\leqslant l.$
\item[$(A.5)$] $\sup\limits_{\lambda }\sup\limits_{\mathbf{\beta }}\left\vert \frac{\partial ^{3}g_{1}(\mathbf{\beta ,}\lambda )}{\partial \beta _{i}\partial \beta _{j}\partial \beta _{k}}\right\vert <\infty ,$ $1\leqslant i,j,k\leqslant l.$
\item[$(A.6)$] $\dint\limits_{-\pi }^{\pi }\log g_{1}(\mathbf{\beta ,}\lambda )d\lambda =0$ for all $\mathbf{\beta \in \Theta }.$
\end{enumerate}
If there exists a subset of $\left[ 0,\pi \right]$ with non-zero Lebesgue measure on which $g_{1}\left( \mathbf{\beta ,}\lambda \right) \neq g_{0}\left( \lambda \right)$ for all $\mathbf{\beta }\in \mathbf{\Theta }$, then the model will be referred to as a mis-specified model (MM).
An ARFIMA model for a time series $\{y_{t}\}$ may be defined as follows:
\begin{equation}
\phi (L)(1-L)^{d}\{y_{t}-\mu \}=\theta (L)\varepsilon _{t},
\label{General_model}
\end{equation}
where $\mu =E\left( y_{t}\right)$, $L$ is the lag operator such that $L^{k}y_{t}=y_{t-k}$, and $\phi (z)=1+\phi _{1}z+...+\phi _{p}z^{p}$ and $\theta (z)=1+\theta _{1}z+...+\theta _{q}z^{q}$ are the autoregressive and moving average operators respectively, where it is assumed that $\phi (z)$ and $\theta (z)$ have no common roots and that the roots lie outside the unit circle. The errors $\{\varepsilon _{t}\}$ are assumed to be a white noise sequence with finite variance $\sigma _{\varepsilon }^{2}>0$. For $|d|<0.5$, $\{y_{t}\}$ can be represented as an infinite-order moving average of $\{\varepsilon _{t}\}$ with square-summable coefficients and, hence, on the assumption that the specification in \eqref{General_model} is correct, $\{y_{t}\}$ is defined as the limit in mean square of a covariance-stationary process. When $d\leq 0$ the process is weakly dependent and in this case the behaviour of the estimators is to a large degree already known. We will therefore assume that $0<d<0.5$. When $0<d<0.5$ neither the moving average coefficients nor the autocovariances of the process are absolutely summable, declining at a slow hyperbolic rate rather than the exponential rate typical of an ARMA process, with the term `long memory' invoked accordingly. A detailed outline of the properties of ARFIMA processes is provided in \cite{beran:1994}. For an ARFIMA model we have $g_{1}\left( \mathbf{\beta },\lambda \right) =|\theta (e^{i\lambda })|^{2}/|\phi (e^{i\lambda })|^{2}$, where $\mathbf{\beta }=(\phi _{1},\phi _{2},...,\phi _{p},\theta _{1},\theta _{2},...,\theta _{q})^{T}$, and Assumptions $A.1-A.6$ are satisfied. An ARFIMA($p,d,q$) model will be mis-specified if the realizations are generated from a true ARFIMA($p_{0},d_{0},q_{0}$) process and any of $\{p\neq p_{0}\cup q\neq q_{0}\}\setminus \{p_{0}\leq p\cap q_{0}\leq q\}$ obtains. The estimators to be considered (denoted generically by $\widehat{\mathbf{\psi }}$) are all obtained by minimizing an objective function, $Q_{n}(\mathbf{\psi })$ say, and under mis-specification the estimator $\widehat{\mathbf{\psi }}_{1}$ is obtained by minimizing $Q_{n}(\mathbf{\psi })$ on the assumption that $\{y_{t}\}$ follows the MM.\footnote{We follow the usual convention by denoting the estimator obtained under mis-specification by $\widehat{\mathbf{\psi }}_{1}$, rather than simply by $\widehat{\mathbf{\psi }}$, say. This is to make it explicit that the estimator is obtained under mis-specification and does not correspond to the estimator produced under the correct specification of the model, which could be denoted by $\widehat{\mathbf{\psi }}_{0}.$} For any given $Q_{n}(\mathbf{\psi })$, there exists a non-stochastic limiting objective function $Q(\mathbf{\psi })$, independent of the sample size $n$, such that $\left\vert Q_{n}(\mathbf{\psi })-Q(\mathbf{\psi })\right\vert \rightarrow ^{p}0$ for all $\mathbf{\psi }\in \mathbf{\Psi }$, and, provided certain conditions hold, $Q_{n}(\widehat{\mathbf{\psi }}_{1})$ will converge to $Q(\mathbf{\psi }_{1})$, where $\mathbf{\psi }_{1}$ is the minimizer of $Q(\mathbf{\psi })$, and $\widehat{\mathbf{\psi }}_{1}\rightarrow ^{p}\mathbf{\psi }_{1}$ as a consequence.
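To fix ideas, the spectral density \eqref{Spectral density_MM} of an ARFIMA($p,d,q$) model is straightforward to evaluate numerically. The following Python sketch adopts the sign convention $\phi (z)=1+\phi _{1}z+\cdots$ and $\theta (z)=1+\theta _{1}z+\cdots$ of \eqref{General_model}; the function name and the parameter values in the usage lines are illustrative only.
\begin{verbatim}
import numpy as np

def arfima_spectral_density(lam, d, phi=(), theta=(), sigma2=1.0):
    # f(lam) = (sigma2/2pi) * |theta(e^{i lam})|^2 / |phi(e^{i lam})|^2
    #          * (2 sin(lam/2))^(-2d),   for 0 < lam <= pi
    z = np.exp(1j*lam)
    num = np.abs(np.polynomial.polynomial.polyval(z, [1.0, *theta]))**2
    den = np.abs(np.polynomial.polynomial.polyval(z, [1.0, *phi]))**2
    return (sigma2/(2.0*np.pi))*(num/den)*(2.0*np.sin(lam/2.0))**(-2.0*d)

lam = np.linspace(0.01, np.pi, 500)
f0 = arfima_spectral_density(lam, d=0.2, theta=(-0.3,))
# e.g. an ARFIMA(0, 0.2, 1) process with MA coefficient -0.3
\end{verbatim}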
In Subsection \ref{fml} we specify the form of $Q_{n}(\mathbf{\psi })$ associated with the FML estimator, $\widehat{\mathbf{\psi }}_{1}^{(1)}$ hereafter, and outline the asymptotic results derived in \cite{chen:deo:2006} pertaining to the convergence of $\widehat{\mathbf{\psi }}_{1}^{(1)}$ to $\mathbf{\psi }_{1}$. In Subsection \ref{altest} the equivalence of the values that minimize the limiting criterion functions of the three alternative estimators to the value that minimizes the limiting criterion function of the FML estimator is demonstrated and, hence, the asymptotic convergence of these four estimators established. \subsection{Frequency domain maximum likelihood estimation\label{fml}} \cite{chen:deo:2006} focus on the FML estimator of $\mathbf{\eta }=(d,\mathbf{\beta }^{T})^{T}$, $\widehat{\mathbf{\eta }}_{1}$, defined as the value of $\mathbf{\eta }\in (0,0.5)\times \mathbf{\Theta }$ that minimizes the objective function
\begin{equation}
Q_{n}(\mathbf{\eta })=\frac{2\pi }{n}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\frac{I(\lambda _{j})}{f_{1}(\mathbf{\eta ,}\lambda _{j})},
\label{Chen and Deo objective function}
\end{equation}
where $I(\lambda _{j})$ is the periodogram, defined as $I(\lambda )=\frac{1}{2\pi n}|\sum_{t=1}^{n}y_{t}\exp (-i\lambda t)|^{2}$, evaluated at the Fourier frequencies $\lambda _{j}=2\pi j/n$, $j=1,...,\lfloor n/2\rfloor$, where $\lfloor x\rfloor$ is the largest integer not greater than $x$ and, with a slight abuse of notation, $f_{1}(\mathbf{\eta ,}\lambda _{j})=g_{1}\left( \mathbf{\beta ,}\lambda _{j}\right) (2\sin (\lambda _{j}/2))^{-2d}.$ The objective function in (\ref{Chen and Deo objective function}) is a frequency domain approximation to the negative of the Gaussian log-likelihood \citep[See][\S 10.8, for example.]{brockwell:davis:1991}. Indeed, one of the alternative estimators that we consider (TML) is the minimizer of the exact version of this negative log-likelihood function. Let
\begin{equation}\label{limit of Chen and Deo}
Q(\mathbf{\eta })=\lim_{n\rightarrow \infty }E_{0}\left[ Q_{n}(\mathbf{\eta })\right] =\dint\limits_{0}^{\pi }\frac{f_{0}(\lambda )}{f_{1}(\mathbf{\eta ,}\lambda )}d\lambda \,,
\end{equation}
where here, and in what follows, the zero subscript denotes that the moments are defined with respect to the TDGP. From Lemma $2$ of \cite{chen:deo:2006} it follows that under Assumptions $A.1-A.3$,
\begin{equation}
\sup_{\mathbf{\eta }\in (0,0.5)\times \mathbf{\Theta }}\left\vert \frac{2\pi }{n}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\frac{I(\lambda _{j})}{f_{1}(\mathbf{\eta ,}\lambda _{j})}-Q(\mathbf{\eta })\right\vert \rightarrow ^{p}0\,.
\label{cd}
\end{equation}
The limiting objective function $Q(\mathbf{\eta })$, in turn, defines the pseudo-true parameter $\mathbf{\eta }_{1}$ to which $\widehat{\mathbf{\eta }}_{1}$ will converge under the assumed regularity. This follows from \eqref{cd} and the additional assumption:
\begin{itemize}
\item[$(A.7)$] There exists a unique vector $\mathbf{\eta }_{1}=(d_{1},\mathbf{\beta }_{1}^{T})^{T}\in (0,0.5)\times \mathbf{\Theta }$, with $\mathbf{\beta }_{1}=(\beta _{11},...,\beta _{l1})^{T}$, which satisfies $\mathbf{\eta }_{1}=\arg \min_{\mathbf{\eta }}Q(\mathbf{\eta })$\,.
\end{itemize}
On application of a standard argument for M-estimators, \eqref{cd} and $(A.7)$ imply that $\mbox{plim}\,\widehat{\mathbf{\eta }}_{1}=\mathbf{\eta }_{1}$ \cite[see][Corollary 1]{chen:deo:2006}.
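Computationally, $Q_{n}(\mathbf{\eta })$ in \eqref{Chen and Deo objective function} requires only the periodogram at the Fourier frequencies, which is available via the fast Fourier transform. A minimal Python sketch follows; the short memory function $g_{1}(\mathbf{\beta },\lambda )$ is assumed to be supplied by the user, and the function names are our own.
\begin{verbatim}
import numpy as np

def periodogram(y):
    # I(lam_j) = |sum_t y_t exp(-i lam_j t)|^2 / (2 pi n) at the
    # Fourier frequencies lam_j = 2 pi j / n, j = 1, ..., floor(n/2)
    n = len(y)
    m = n//2
    lam = 2.0*np.pi*np.arange(1, m + 1)/n
    I = np.abs(np.fft.fft(y)[1:m + 1])**2/(2.0*np.pi*n)
    return lam, I

def fml_objective(eta, y, g1):
    # Q_n(eta) = (2 pi / n) sum_j I(lam_j) / f1(eta, lam_j), with
    # f1(eta, lam) = g1(beta, lam)*(2 sin(lam/2))^(-2d) and eta = (d, beta)
    d, beta = eta[0], eta[1:]
    lam, I = periodogram(y)
    f1 = g1(beta, lam)*(2.0*np.sin(lam/2.0))**(-2.0*d)
    return (2.0*np.pi/len(y))*np.sum(I/f1)

# for the ARFIMA(0,d,0) model, g1 = lambda beta, lam: np.ones_like(lam)
\end{verbatim}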
\subsection{Alternative Estimators}\label{altest} Index by $i=1,2,3$ and $4$, respectively, the criterion functions associated with the FML estimator, the Whittle estimator, the TML estimator and the CSS estimator, each viewed as a function of $\mathbf{\psi }$ or $\mathbf{\eta }$; that is, $Q_{n}^{(i)}(\cdot ),$ $i=1,2,3,4$. The criterion function of the FML estimator is given in \eqref{Chen and Deo objective function}. The criterion functions of the three alternative estimators are defined as follows:
\begin{itemize}
\item The objective function for the Whittle estimator as considered in \cite{beran:1994} is
\begin{equation}\label{Whittle objective function}
Q_{n}^{(2)}(\mathbf{\psi })=\frac{4}{n}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\log f_{1}(\mathbf{\psi ,}\lambda _{j})+\frac{4}{n}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\frac{I(\lambda _{j})}{f_{1}(\mathbf{\psi ,}\lambda _{j})}\,,
\end{equation}
where $\mathbf{\psi }=(\sigma _{\varepsilon }^{2},\mathbf{\eta }^{T})^{T}$, which when re-expressed as an explicit function of $\sigma _{\varepsilon }^{2}$ and $\mathbf{\eta }$ gives
\begin{equation*}
Q_{n}^{(2)}(\sigma _{\varepsilon }^{2},\mathbf{\eta })=\frac{4}{n}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\log \left[ \frac{\sigma _{\varepsilon }^{2}}{2\pi }f_{1}(\mathbf{\eta ,}\lambda _{j})\right] +\frac{8\pi }{n\sigma _{\varepsilon }^{2}}\dsum\limits_{j=1}^{\lfloor n/2\rfloor }\frac{I(\lambda _{j})}{f_{1}(\mathbf{\eta ,}\lambda _{j})}.
\end{equation*}
\item Let $\mathbf{Y}^{T}=\left( y_{1},y_{2},...,y_{n}\right)$ and denote the variance-covariance matrix of $\mathbf{Y}$ derived from the mis-specified model by $\sigma _{\varepsilon }^{2}\mathbf{\Sigma }_{\eta }=\left[ \gamma _{1}\left( i-j\right) \right]$, $i,j=1,2,...,n$, where
$$\gamma _{1}(\tau )=\gamma _{1}(-\tau )=\frac{\sigma _{\varepsilon }^{2}}{2\pi }\int_{-\pi }^{\pi }f_{1}(\mathbf{\eta },\lambda )e^{i\lambda \tau }d\lambda \,.$$
The Gaussian log-likelihood function for the TML estimator is
\begin{equation}
-\frac{1}{2}\left( n\log (2\pi \sigma _{\varepsilon }^{2})+\log |\mathbf{\Sigma }_{\eta }|+\frac{1}{\sigma _{\varepsilon }^{2}}\mathbf{Y}^{T}\mathbf{\Sigma }_{\eta }^{-1}\mathbf{Y}\right) \,,
\label{ML_objective function}
\end{equation}
and maximizing (\ref{ML_objective function}) with respect to $\mathbf{\psi }$ is equivalent to minimizing the criterion function
\begin{equation}
Q_{n}^{(3)}(\sigma _{\varepsilon }^{2},\mathbf{\eta })=\log \sigma _{\varepsilon }^{2}+\frac{1}{n}\log |\mathbf{\Sigma }_{\eta }|+\frac{1}{n\sigma _{\varepsilon }^{2}}\mathbf{Y}^{T}\mathbf{\Sigma }_{\eta }^{-1}\mathbf{Y}\,.
\label{Equivalent form}
\end{equation}
\item To construct the CSS estimator, note that we can expand $(1-z)^{d}$ in a binomial expansion as
\begin{equation}
(1-z)^{d}=\dsum\limits_{j=0}^{\infty }\frac{\Gamma (j-d)}{\Gamma (j+1)\Gamma (-d)}z^{j}\,,
\label{binomial}
\end{equation}
where $\Gamma (\cdot )$ is the gamma function. Furthermore, since $g_{1}\left( \mathbf{\beta },\lambda \right)$ is bounded, by Assumption $(A.2)$, we can employ the method of Whittle \citep[][\S 2.8]{whittle:1984} to construct an autoregressive operator $\alpha (\mathbf{\beta },z)=\sum_{i=0}^{\infty }\alpha _{i}(\mathbf{\beta })z^{i}$ such that $g_{1}\left( \mathbf{\beta },\lambda \right) =|\alpha (\mathbf{\beta },e^{i\lambda })|^{-2}$.
The objective function of the CSS estimation method then becomes
\begin{equation}
Q_{n}^{(4)}(\mathbf{\eta })=\frac{1}{n}\dsum\limits_{t=1}^{n}e_{t}^{2}\,,
\label{CSS objective function}
\end{equation}
where
\begin{equation}
e_{t}=\dsum\limits_{i=0}^{t-1}\tau _{i}(\mathbf{\eta })y_{t-i}
\label{Expression of e_t}
\end{equation}
and the coefficients $\tau _{j}(\mathbf{\eta })$, $j=0,1,2,\ldots$, are given by $\tau _{0}(\mathbf{\eta })=1$ and
\begin{equation}
\tau _{j}(\mathbf{\eta })=\dsum\limits_{s=0}^{j}\frac{\alpha _{j-s}(\mathbf{\beta })\Gamma (s-d)}{\Gamma (s+1)\Gamma (-d)}\,,\quad j=1,2,\ldots \,.
\label{Tau_i}
\end{equation}
(A numerical sketch of these recursions is provided at the end of this section.)
\end{itemize}
In Appendix \ref{proofs} we prove that for $i=1,2,3$ and $4$ we have $Q_{n}^{(i)}(\cdot )\rightarrow ^{p}\mathcal{Q}^{(i)}(\sigma _{\varepsilon }^{2},Q(\mathbf{\eta }))$, where the minimum of the function $\mathcal{Q}^{(i)}(\sigma _{\varepsilon }^{2},Q(\mathbf{\eta }))$ occurs at $\sigma _{\varepsilon }^{2}=2Q(\mathbf{\eta }_{1})$ for all $i$, and each $\mathcal{Q}^{(i)}$, when concentrated with respect to $\sigma _{\varepsilon }^{2}$, is a monotonically increasing function of $Q(\mathbf{\eta })$, with $Q(\mathbf{\eta })$ as defined in (\ref{limit of Chen and Deo}). Hence, with $\mathbf{\eta }$ being the (vector-valued) parameter of interest, we can state the following proposition:
\begin{proposition}\label{converge} Suppose that the TDGP of $\{y_{t}\}$ is a Gaussian process with a spectral density as given in \eqref{Spectral density_TDGP} and that the MM satisfies Assumptions $A.1-A.7$. Let $\widehat{\mathbf{\eta }}_{1}^{(i)}$, $i=1,2,3,4$, denote, respectively, the FML, Whittle, TML and CSS estimators of the parameter vector $\mathbf{\eta }=(d,\mathbf{\beta }^{T})^{T}$ of the MM. Then $\Vert \widehat{\mathbf{\eta }}_{1}^{(i)}-\widehat{\mathbf{\eta }}_{1}^{(j)}\Vert \rightarrow _{P}0$ for all $i,j=1,2,3,4$, and the common probability limit of $\widehat{\mathbf{\eta }}_{1}^{(i)}$, $i=1,2,3,4$, is $\mathbf{\eta }_{1}=\arg \min_{\mathbf{\eta }}Q(\mathbf{\eta })$\thinspace .
\end{proposition}
Note that if the MM were used to construct a one-step-ahead prediction, the mean squared prediction error would be
$$\sigma _{\varepsilon }^{2}=2Q(\mathbf{\eta })=\int_{-\pi }^{\pi }\frac{f_{0}(\lambda )}{f_{1}(\mathbf{\eta },\lambda )}d\lambda \geq \sigma _{\varepsilon 0}^{2}\,,$$
where $\sigma _{\varepsilon 0}^{2}$ is the mean squared prediction error of the minimum mean squared error predictor of the TDGP \citep[][Proposition 10.8.1]{brockwell:davis:1991}. The implication of Assumption $A.7$ is that among all spectral densities within the mis-specified family the member characterised by the parameter value $\mathbf{\eta }_{1}$ is closest to the true spectral density $f_{0}(\lambda )$. Evidently it is $\mathbf{\eta }_{1}$ that the estimators should be trying to target, as this will give fitted parameter values that yield the predictor from within the MM class whose mean squared prediction error is closest to that of the optimal predictor. Having established that the four parametric estimators converge towards $\mathbf{\eta }_{1}$ under mis-specification, we can as a consequence now broaden the applicability of the asymptotic distributional results derived by \cite{chen:deo:2006} for the FML estimator. This we do in Section \ref{asyQ} by establishing that all four parametric estimators share a common limiting distribution.
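As flagged above, the recursions underlying the CSS criterion are simple to implement: the binomial weights of $(1-L)^{d}$ satisfy $\pi _{0}=1$ and $\pi _{s}=\pi _{s-1}(s-1-d)/s$, and \eqref{Tau_i} and \eqref{Expression of e_t} are (truncated) convolutions. A Python sketch, in which the AR$(\infty )$ weights $\alpha _{i}(\mathbf{\beta })$ are assumed to be supplied as a (possibly truncated) array:
\begin{verbatim}
import numpy as np

def pi_coeffs(d, n):
    # binomial weights of (1-L)^d:
    # pi_0 = 1, pi_s = pi_{s-1}*(s-1-d)/s = Gamma(s-d)/(Gamma(s+1)Gamma(-d))
    pi = np.ones(n)
    for s in range(1, n):
        pi[s] = pi[s - 1]*(s - 1.0 - d)/s
    return pi

def css_objective(d, alpha, y):
    # Q_n^{(4)}(eta) = (1/n) sum_t e_t^2, with
    # tau_j = sum_{s=0}^{j} alpha_{j-s} pi_s   (a convolution), and
    # e_t = sum_{i=0}^{t-1} tau_i y_{t-i}      (a truncated convolution)
    n = len(y)
    tau = np.convolve(alpha, pi_coeffs(d, n))[:n]
    e = np.convolve(tau, y)[:n]
    return np.mean(e**2)

# e.g. alpha = np.array([1.0, phi_1]) for an ARFIMA(1,d,0) MM, and
# alpha = np.array([1.0]) for an ARFIMA(0,d,0) MM
\end{verbatim}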
Prior to doing this, however, we indicate the precise form of the limiting objective function $Q(\mathbf{\eta })$, and the associated first-order conditions that define the (common) pseudo-true value $\mathbf{\eta }_{1}$ of the four estimation procedures, in the ARFIMA case. As well as being relevant for all four estimation methods, these derivations apply in complete generality with respect to the models that specify both the TDGP and the MM. Hence, in this sense also the results represent a substantive extension of the corresponding results in \citet{chen:deo:2006}. \section{Pseudo-True Parameters Under ARFIMA Mis-Specification\label{pseudo}} Under Assumptions $A.1-A.7$, $\mathbf{\eta }_{1}=\arg \min_{\mathbf{\eta }}Q(\mathbf{\eta })$ can be determined as the solution of the first-order condition $\partial Q(\mathbf{\eta })/\partial \mathbf{\eta }=0$, and \cite{chen:deo:2006} illustrate the relationship between $\partial \log Q(\mathbf{\eta })/\partial d$ and the deviation $d^{\ast }=d_{0}-d_{1}$ for the simple special case in which the TDGP is an ARFIMA($0,d_{0},1$) and the MM is an ARFIMA($0,d,0$). They then cite (without providing detailed derivations) certain results that obtain when the MM is an ARFIMA($1,d,0$). Here we provide a significant generalization, by deriving expressions for both $Q(\mathbf{\eta })$ and the first-order conditions that define the pseudo-true parameters, under the full ARFIMA($p_{0},d_{0},q_{0}$)/ARFIMA($p,d,q$) dichotomy for the true process and the estimated model. Representations of the associated expressions via polynomial and power series expansions, suitable for the analytical investigation of $Q(\mathbf{\eta })$, are presented. It is normally not possible to solve the first-order conditions $\partial Q(\mathbf{\eta })/\partial \mathbf{\eta }=0$ exactly, as they are both nonlinear and (in general) defined in terms of infinite sums. Instead, one would determine the solution numerically, via a Newton iteration for example, with the series expansions replaced by finite sums. An evaluation of the magnitude of the approximation error produced by any power series truncation that might arise from such a numerical implementation is given. The results are then illustrated in the special case where $p_{0}=q=0$, in which true MA short memory dynamics of an arbitrary order are mis-specified as AR dynamics of an arbitrary order. In this particular case, as will be seen, no truncation error arises in the computations. To begin, denote the spectral density of the TDGP, a general ARFIMA($p_{0},d_{0},q_{0}$) process, by
\begin{equation*}
f_{0}(\lambda )=\frac{\sigma _{\varepsilon 0}^{2}}{2\pi }\frac{\left\vert 1+\theta _{10}e^{i\lambda }+...+\theta _{q_{0}0}e^{iq_{0}\lambda }\right\vert ^{2}}{\left\vert 1+\phi _{10}e^{i\lambda }+...+\phi _{p_{0}0}e^{ip_{0}\lambda }\right\vert ^{2}}|2\sin (\lambda /2)|^{-2d_{0}},
\end{equation*}
and that of the MM, an \textit{ARFIMA}($p,d,q$) model, by
\begin{equation*}
f_{1}(\mathbf{\psi ,}\lambda )=\frac{\sigma _{\varepsilon }^{2}}{2\pi }\frac{\left\vert 1+\theta _{1}e^{i\lambda }+...+\theta _{q}e^{iq\lambda }\right\vert ^{2}}{\left\vert 1+\phi _{1}e^{i\lambda }+...+\phi _{p}e^{ip\lambda }\right\vert ^{2}}|2\sin (\lambda /2)|^{-2d}.
\end{equation*}
Substituting these expressions into the limiting objective function we obtain the representation
\begin{equation}
Q\left( \mathbf{\psi }\right) =\int\limits_{0}^{\pi }\frac{f_{0}(\lambda )}{f_{1}(\mathbf{\psi ,}\lambda )}d\lambda =\frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\dint\limits_{0}^{\pi }\frac{|A_{\beta }(e^{i\lambda })|^{2}}{|B_{\beta }(e^{i\lambda })|^{2}}|2\sin (\lambda /2)|^{-2(d_{0}-d)}d\lambda \,,
\label{Limiting form}
\end{equation}
where
\begin{equation}\label{Abeta}
A_{\beta }(z)=\sum\limits_{j=0}^{\underline{q}}a_{j}z^{j}=\theta _{0}(z)\phi (z)=\left( 1+\theta _{10}z+...+\theta _{q_{0}0}z^{q_{0}}\right) (1+\phi _{1}z+...+\phi _{p}z^{p})
\end{equation}
with $\underline{q}=q_{0}+p$, and
\begin{equation}\label{Bbeta}
B_{\beta }(z)=\sum\limits_{j=0}^{\underline{p}}b_{j}z^{j}=\phi _{0}(z)\theta (z)=(1+\phi _{10}z+...+\phi _{p_{0}0}z^{p_{0}})\left( 1+\theta _{1}z+...+\theta _{q}z^{q}\right)
\end{equation}
with $\underline{p}=p_{0}+q$. The expression for $Q(\mathbf{\psi })$ in \eqref{Limiting form} takes the form of the variance of an ARFIMA process with MA operator $A_{\beta }(z)$, AR operator $B_{\beta }(z)$ and fractional index $d_{0}-d$. It follows that $Q(\mathbf{\psi })$ could be evaluated using the procedures presented in \cite{sowell:1992}. Sowell's algorithms are based upon series expansions in gamma and hypergeometric functions, however, and although they are suitable for numerical calculations they do not readily lend themselves to the analytical investigation of $Q(\mathbf{\psi })$. We therefore seek an alternative formulation. Let $C(z)=\sum_{j=0}^{\infty }c_{j}z^{j}=A_{\beta }(z)/B_{\beta }(z)$, where $A_{\beta }(z)$ and $B_{\beta }(z)$ are as defined in \eqref{Abeta} and \eqref{Bbeta} respectively. Then \eqref{Limiting form} can be expanded to give
$$
Q\left( \mathbf{\psi }\right) =2^{1-2(d_{0}-d)}\frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\left[ \sum_{j=0}^{\infty }\sum_{k=0}^{\infty }c_{j}c_{k}\dint_{0}^{\pi /2}\cos \left( 2\left( j-k\right) \lambda \right) (\sin \lambda )^{-2(d_{0}-d)}d\lambda \right] \,.
$$
Using standard results for the integral $\dint\limits_{0}^{\pi }(\sin x)^{\upsilon -1}\cos (ax)dx$ from \citet[][p 397]{gradshteyn:ryzhik:2007} yields, after some algebraic manipulation,
$$
Q\left( \mathbf{\psi }\right) =\frac{\pi }{1-2(d_{0}-d)}\frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\left[ \sum_{j=0}^{\infty }\sum_{k=0}^{\infty }\frac{c_{j}c_{k}\cos \left( \left( j-k\right) \pi \right) }{\mathcal{B}\left( 1-(d_{0}-d)+\left( j-k\right) ,1-(d_{0}-d)-\left( j-k\right) \right) }\right] \,,
$$
where $\mathcal{B}(a,b)$ denotes the Beta function. This expression can in turn be simplified to
\begin{equation}\label{QK}
Q\left( \mathbf{\psi }\right) =\left\{ \pi \frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\frac{\Gamma (1-2(d_{0}-d))}{\Gamma ^{2}(1-(d_{0}-d))}\right\} K(\mathbf{\eta })\,,
\end{equation}
where
\begin{equation*}
K(\mathbf{\eta })=\sum_{j=0}^{\infty }c_{j}^{2}+2\sum_{k=0}^{\infty }\sum_{j=k+1}^{\infty }c_{j}c_{k}\rho (j-k)
\end{equation*}
and
$$
\rho (h)=\prod_{i=1}^{h}\left( \frac{(d_{0}-d)+i-1}{i-(d_{0}-d)}\right) \,,\quad h=1,2,\ldots \,.
$$
Using \eqref{QK} we now derive the form of the first-order conditions that define $\mathbf{\eta }_{1}$, namely $\partial Q(\mathbf{\psi })/\partial \mathbf{\eta }=0$.
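In practice, $K(\mathbf{\eta })$ (and hence $Q(\mathbf{\psi })$) can be evaluated by truncating the series after $N$ terms; the truncation error is quantified in Theorem \ref{Theorem1} below. A Python sketch of one such implementation, in which the coefficients $c_{j}$ are generated from the polynomial coefficients of $A_{\beta }(z)$ and $B_{\beta }(z)$ by long division, and the weights $\rho (h)$ by their product recursion (the function names and the default truncation point are our own choices):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def c_coeffs(a, b, N):
    # power-series coefficients of C(z) = A(z)/B(z) up to order N, via
    # c_j = a_j - sum_{k=1}^{min(j, deg B)} b_k c_{j-k}   (a_0 = b_0 = 1)
    c = np.zeros(N + 1)
    for j in range(N + 1):
        aj = a[j] if j < len(a) else 0.0
        c[j] = aj - sum(b[k]*c[j - k]
                        for k in range(1, min(j, len(b) - 1) + 1))
    return c

def K_truncated(dtilde, a, b, N=200):
    # K_N(eta), with rho(h) = prod_{i=1}^{h} (dtilde + i - 1)/(i - dtilde)
    c = c_coeffs(a, b, N)
    rho = np.ones(N + 1)
    for h in range(1, N + 1):
        rho[h] = rho[h - 1]*(dtilde + h - 1.0)/(h - dtilde)
    K = np.sum(c**2)
    for h in range(1, N + 1):
        K += 2.0*rho[h]*np.sum(c[h:]*c[:-h])
    return K

def Q_limit(dtilde, a, b, N=200, var_ratio=1.0):
    # Q(psi) = pi*(s0^2/s^2)*Gamma(1-2*dtilde)/Gamma(1-dtilde)^2 * K(eta);
    # requires dtilde = d0 - d < 0.5
    logT = gammaln(1.0 - 2.0*dtilde) - 2.0*gammaln(1.0 - dtilde)
    return np.pi*var_ratio*np.exp(logT)*K_truncated(dtilde, a, b, N)
\end{verbatim}
Here \texttt{a} and \texttt{b} hold the coefficients of $A_{\beta }(z)$ and $B_{\beta }(z)$ in increasing powers of $z$, with leading entries equal to one.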
Differentiating $Q\left( \mathbf{\psi }\right)$ first with respect to $\beta _{r}$, $r=1,\ldots ,l$, and then $d$ gives:
\begin{equation*}
\frac{\partial Q\left( \mathbf{\psi }\right) }{\partial \beta _{r}}=\left\{ \pi \frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\frac{\Gamma (1-2(d_{0}-d))}{\Gamma ^{2}(1-(d_{0}-d))}\right\} \frac{\partial K\left( \mathbf{\eta }\right) }{\partial \beta _{r}}\,,\quad r=1,2,...,l,
\end{equation*}
where
\begin{equation*}
\frac{\partial K\left( \mathbf{\eta }\right) }{\partial \beta _{r}}=\sum_{j=1}^{\infty }2c_{j}\frac{\partial c_{j}}{\partial \beta _{r}}+2\sum_{k=0}^{\infty }\sum_{j=k+1}^{\infty }\left( c_{k}\frac{\partial c_{j}}{\partial \beta _{r}}+\frac{\partial c_{k}}{\partial \beta _{r}}c_{j}\right) \rho (j-k)\,,
\end{equation*}
and
\begin{equation*}
\frac{\partial Q\left( \mathbf{\psi }\right) }{\partial d}=\left\{ \pi \frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\frac{\Gamma (1-2(d_{0}-d))}{\Gamma ^{2}(1-(d_{0}-d))}\right\} \left\{ 2\left( \Psi \lbrack 1-2(d_{0}-d)]-\Psi \lbrack 1-(d_{0}-d)]\right) K(\mathbf{\eta })+\frac{\partial K\left( \mathbf{\eta }\right) }{\partial d}\right\} \,,
\end{equation*}
where $\Psi (\cdot )$ denotes the digamma function and
\begin{align*}
\frac{\partial K\left( \mathbf{\eta }\right) }{\partial d}=&2\sum_{k=0}^{\infty }\sum_{j=k+1}^{\infty }c_{j}c_{k}\rho (j-k)\left\{ 2\Psi \lbrack 1-(d_{0}-d)]\right. \\
&\left. -\Psi \lbrack 1-(d_{0}-d)+\left( j-k\right) ]-\Psi \lbrack 1-(d_{0}-d)-\left( j-k\right) ]\right\} \,.
\end{align*}
Eliminating the common (non-zero) factor $\left\{ \pi \frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\frac{\Gamma (1-2(d_{0}-d))}{\Gamma ^{2}(1-(d_{0}-d))}\right\}$ from both $\partial Q\left( \mathbf{\psi }\right) /\partial \mathbf{\beta }$ and $\partial Q\left( \mathbf{\psi }\right) /\partial d$, it follows that the pseudo-true parameter values of the \textit{ARFIMA}($p,d,q$) MM can be obtained by solving
\begin{equation}\label{dKbeta}
\frac{\partial K\left( \mathbf{\eta }\right) }{\partial \beta _{r}}=0\,,\quad r=1,2,...,l,
\end{equation}
and
\begin{equation}\label{dKd}
2(\Psi \lbrack 1-2(d_{0}-d)]-\Psi \lbrack 1-(d_{0}-d)])K(\mathbf{\eta })+\frac{\partial K\left( \mathbf{\eta }\right) }{\partial d}=0
\end{equation}
for $\beta _{r1}$, $r=1,\ldots ,l$, and $d_{1}$, using appropriate algebraic and numerical procedures. A corollary of the following theorem is that $\mathbf{\eta }_{1}$ can be calculated to any desired degree of numerical accuracy by truncating the series expansions in the expressions for $K\left( \mathbf{\eta }\right)$, $\partial K\left( \mathbf{\eta }\right) /\partial \mathbf{\beta }$ and $\partial K\left( \mathbf{\eta }\right) /\partial d$ after a suitable number, $N$, of terms, before substituting into \eqref{dKbeta} and \eqref{dKd} and solving (numerically) for $\phi _{i1}$, $i=1,2,...,p$, $\theta _{j1}$, $j=1,2,...,q$, and $d_{1}$.
\begin{theorem}\label{Theorem1} Set $C_{N}(z)=\sum_{j=0}^{N}c_{j}z^{j}$ and let $Q_{N}\left( \mathbf{\psi }\right) =\left( \sigma _{\varepsilon 0}^{2}/\sigma _{\varepsilon }^{2}\right) I_{N}$, where $I_{N}=\int_{0}^{\pi }|C_{N}(\exp \left( -i\lambda \right) )|^{2}|2\sin (\lambda /2)|^{-2(d_{0}-d)}d\lambda$.
Then
\begin{equation*}
Q\left( \mathbf{\psi }\right) =Q_{N}\left( \mathbf{\psi }\right) +R_{N}=\left\{ \pi \frac{\sigma _{\varepsilon 0}^{2}}{\sigma _{\varepsilon }^{2}}\frac{\Gamma (1-2(d_{0}-d))}{\Gamma ^{2}(1-(d_{0}-d))}\right\} K_{N}(\mathbf{\eta })+R_{N},
\end{equation*}
where
\begin{equation*}
K_{N}(\mathbf{\eta })=\sum_{j=0}^{N}c_{j}^{2}+2\sum_{k=0}^{N-1}\sum_{j=k+1}^{N}c_{j}c_{k}\rho (j-k)
\end{equation*}
and there exists a $\zeta$, $0<\zeta <1$, such that $R_{N}=O(\zeta ^{N+1})=o(N^{-1})$. Furthermore, $\partial Q_{N}(\mathbf{\psi })/\partial \mathbf{\eta }=\partial Q(\mathbf{\psi })/\partial \mathbf{\eta }+o(N^{-1})$.
\end{theorem}
By way of illustration, consider the case of mis-specifying a true \textit{ARFIMA}($0,d_{0},q_{0}$) process by an \textit{ARFIMA}($p,d,0$) model. When $p_{0}=q=0$ we have $B_{\beta }(z)\equiv 1$ and $C(z)$ is a polynomial, $C(z)=1+\sum_{j=1}^{\underline{q}}c_{j}z^{j}$, where $c_{j}=\sum_{a=\max \{0,j-p\}}^{\min \{j,p\}}\theta _{(j-a)0}\phi _{a}$. Abbreviating the latter to $\sum_{a}\theta _{(j-a)0}\phi _{a}$, and setting $\theta _{s0}\equiv 0$ for $s\notin \{0,1,\ldots ,q_{0}\}$, this then gives us:
\begin{align*}
K(d,\phi _{1},\ldots ,\phi _{p})=&\sum_{j=0}^{\underline{q}}(\sum_{a}\theta _{(j-a)0}\phi _{a})^{2}+\\
&2\sum_{k=0}^{\underline{q}-1}\sum_{j=k+1}^{\underline{q}}(\sum_{a}\theta _{(j-a)0}\phi _{a})(\sum_{a}\theta _{(k-a)0}\phi _{a})\rho (j-k)\,;
\end{align*}
\begin{align*}
\frac{\partial K\left( d,\phi _{1},\ldots ,\phi _{p}\right) }{\partial \phi _{r}}=&\sum_{j=1}^{\underline{q}}2(\sum_{a}\theta _{(j-a)0}\phi _{a})\theta _{(j-r)0}+\\
&2\sum_{k=0}^{\underline{q}-1}\sum_{j=k+1}^{\underline{q}}\left\{ (\sum_{a}\theta _{(j-a)0}\phi _{a})\theta _{(k-r)0}+\theta _{(j-r)0}(\sum_{a}\theta _{(k-a)0}\phi _{a})\right\} \rho (j-k)\,,
\end{align*}
$r=1,\ldots ,p$; and
\begin{eqnarray*}
\frac{\partial K\left( d,\phi _{1},\ldots ,\phi _{p}\right) }{\partial d} &=&2\sum_{k=0}^{\underline{q}-1}\sum_{j=k+1}^{\underline{q}}(\sum_{a}\theta _{(j-a)0}\phi _{a})(\sum_{a}\theta _{(k-a)0}\phi _{a})\rho (j-k)\times \notag \\
&&\left( 2\Psi \lbrack 1-(d_{0}-d)]-\Psi \lbrack 1-(d_{0}-d)+\left( j-k\right) ]-\Psi \lbrack 1-(d_{0}-d)-\left( j-k\right) ]\right)
\end{eqnarray*}
for the required derivatives. The pseudo-true values $\phi _{r1}$, $r=1,\ldots ,p$, and $d_{1}$ can now be obtained by solving \eqref{dKbeta} and \eqref{dKd}, having inserted these exact expressions for $K\left( d,\phi _{1},\ldots ,\phi _{p}\right)$, $\partial K\left( d,\phi _{1},\ldots ,\phi _{p}\right) /\partial \phi _{r}$, $r=1,\ldots ,p$, and $\partial K\left( d,\phi _{1},\ldots ,\phi _{p}\right) /\partial d$ into the equations. Let us further highlight some features of this special case by focussing on the case where the TDGP is an ARFIMA($0,d_{0},1$) and the MM an ARFIMA($1,d,0$). In this example $\underline{q}=2$ and $C(z)=1+c_{1}z+c_{2}z^{2}$ where, neglecting the first order MA and AR coefficient subscripts, $c_{1}=(\theta _{0}+\phi )$ and $c_{2}=\theta _{0}\phi$. The second factor of the criterion function in \eqref{QK} is now
\begin{align}\label{log_likelihood of MA_AR1}
K(d,\phi )=&1+(\theta _{0}+\phi )^{2}+(\theta _{0}\phi )^{2}\notag \\
&+\frac{2\left[ \theta _{0}\phi (d_{0}-d+1)-(1+\theta _{0}\phi )(\theta _{0}+\phi )(d_{0}-d-2)\right] (d_{0}-d)}{(d_{0}-d-1)(d_{0}-d-2)}\,.
\end{align}
The derivatives $\partial K(d,\phi )/\partial \phi$ and $\partial K(d,\phi )/\partial d$ can be readily determined from \eqref{log_likelihood of MA_AR1} and hence the pseudo-true values $d_{1}$ and $\phi _{1}$ evaluated.
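For this last example, the pseudo-true pair $(d_{1},\phi _{1})$ can also be located by direct numerical minimization of \eqref{QK}, with $K(d,\phi )$ given by \eqref{log_likelihood of MA_AR1}; the positive factor $\sigma _{\varepsilon 0}^{2}/\sigma _{\varepsilon }^{2}$ plays no role in the minimization and is omitted. A Python sketch (the optimizer, bounds and starting values are arbitrary choices, and the output may be compared with the pseudo-true coordinates quoted in the caption of Figure \ref{Qgraphs} below):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def Q_tilde(params, theta0):
    # params = (dt, phi), with dt = d0 - d in (-0.5, 0.5)
    dt, phi = params
    c1, c2 = theta0 + phi, theta0*phi
    K = (1.0 + c1**2 + c2**2
         + 2.0*dt*(c2*(dt + 1.0) - (1.0 + c2)*c1*(dt - 2.0))
           /((dt - 1.0)*(dt - 2.0)))
    # Gamma(1 - 2 dt)/Gamma(1 - dt)^2, computed on the log scale
    return np.pi*np.exp(gammaln(1.0 - 2.0*dt) - 2.0*gammaln(1.0 - dt))*K

for theta0 in (-0.7, -0.637014, -0.3):
    res = minimize(Q_tilde, x0=(0.1, 0.1), args=(theta0,),
                   bounds=[(-0.49, 0.49), (-0.99, 0.99)])
    print(theta0, res.x)    # res.x approximates (d0 - d1, phi_1)
\end{verbatim}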
It is clear from (\ref{log_likelihood of MA_AR1}) that for given values of $|\theta _{0}|<1$ we can treat $K(d,\phi )$ as a function of $\widetilde{d}=(d_{0}-d)$ and $\phi $, and hence treat $Q\left( d,\phi \right) =(\sigma _{\varepsilon }^{2}/\sigma _{\varepsilon 0}^{2})Q\left( \mathbf{\psi }\right)$ similarly. Figure \ref{Qgraphs} depicts the contours of $Q\left( d,\phi \right)$ graphed as a function of $\widetilde{d}$ and $\phi $ for the values of $\theta _{0}=\{-0.7,-0.637014,-0.3\}$ when $\sigma _{\varepsilon }^{2}=\sigma _{\varepsilon 0}^{2}$. Pre-empting the discussion to come in the following section, the values of $\theta _{0}$ are deliberately chosen to coincide with $d^{\ast }=d_{0}-d_{1}$ being respectively greater than, equal to and less than $0.25.$
\begin{figure}[h!] \centering \subfloat[$\theta_{0}=-0.7$.]{\includegraphics[scale=0.4]{ET_1_new}\label{ma1}} \subfloat[$\theta_{0}=-0.637014$.]{\includegraphics[scale=0.4]{ET_2_new}\label{ma2}} \subfloat[$\theta_{0}=-0.3$.]{\includegraphics[scale=0.4]{ET_3_new}\label{ma3}} \caption{Contour plot of $Q(d,\phi)$ against $\widetilde{d}=d_0-d$ and $\phi$ for the mis-specification of an ARFIMA$(0,d_0,1)$ TDGP by an ARFIMA$(1,d,0)$ MM; $\widetilde{d}\in(-0.5,0.5)$, $\phi\in(-1,1)$. Pseudo-true coordinates $(d_0-d_1,\phi_1)$ are (a) $(0.2915,0.3473)$, (b) $(0.25,0.33)$ and (c) $(0.0148,0.2721)$.} \label{Qgraphs} \end{figure}
The three graphs in Figure \ref{Qgraphs} show that although the location of $\left( d_{1},\phi _{1}\right)$ may be unambiguous, the sensitivity of $Q\left( d,\phi \right)$ to perturbations in $\left( d,\phi \right)$ can be very different depending on the value of $d^{\ast }=d_{0}-d_{1}$.\footnote{All the numerical results presented in this paper have been produced using MATLAB \textit{2011b}, version \textit{7.13.0.564} \textit{(R2011b).}} In Figure \ref{ma1} the contours indicate that when $d^{\ast }>0.25$ the limiting criterion function has hyperbolic profiles in a small neighbourhood of the pseudo-true parameter point $(d_{1},\phi _{1})$, with similar but more locally quadratic behaviour exhibited in Figure \ref{ma2} when $d^{\ast }=0.25$. The contours of $Q(d,\phi )$ in Figure \ref{ma3}, corresponding to $d^{\ast }<0.25$, are more elliptical and suggest that in this case the limiting criterion function is far closer to being globally quadratic around $(d_{1},\phi _{1})$. It turns out that these three different forms of $Q\left( d,\phi \right)$, reflecting the most, intermediate and least mis-specified cases, correspond to the three different forms of asymptotic distribution presented in the following section. \section{Asymptotic Distributions\label{asyQ}} In this section we show that the key theoretical results derived in \cite{chen:deo:2006} pertaining to the asymptotic distribution of the FML estimator are also applicable to the Whittle, TML and CSS estimators. Writing $\widehat{\mathbf{\eta }}_{1}$ for any one of these estimators, the critical feature is that the rate of convergence and the nature of the asymptotic distribution of $\widehat{\mathbf{\eta }}_{1}$ are determined by the deviation of the pseudo-true value of $d$, $d_{1}$, from the true value, $d_{0}$; in Theorem \ref{Theorem A} we summarize these different properties as they relate to three ranges of values of $d^{\ast }=d_{0}-d_{1}$: $d^{\ast }>0.25,$ $d^{\ast }=0.25$ and $d^{\ast }<0.25$. \begin{theorem}\label{Theorem A} Suppose that the TDGP of $\{y_{t}\}$ is a Gaussian process with a spectral density as given in \eqref{Spectral density_TDGP} and that the MM satisfies Assumptions $A.1-A.7$.
Let
\begin{equation}\label{Expression for B}
\mathbf{B}=-2\dint\limits_{-\pi }^{\pi }\frac{f_{0}(\lambda )}{f_{1}^{3}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}\frac{\partial f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}{\partial \mathbf{\eta }}\frac{\partial f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}{\partial \mathbf{\eta }^{T}}d\lambda +\dint\limits_{-\pi }^{\pi }\frac{f_{0}(\lambda )}{f_{1}^{2}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}\frac{\partial ^{2}f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}{\partial \mathbf{\eta }\partial \mathbf{\eta }^{T}}d\lambda \,,
\end{equation}
and set $\mathbf{\mu }_{n}=\mathbf{B}^{-1}E_{0}\left( \frac{\partial Q_{n}(\mathbf{\eta }_{1})}{\partial \mathbf{\eta }}\right)$, where $Q_{n}(\mathbf{\cdot })$ denotes the objective function that defines $\widehat{\mathbf{\eta }}_{1}$.\footnote{Heuristically, $\mathbf{\mu }_{n}$ measures the bias associated with the estimator $\widehat{\mathbf{\eta }}_{1}$. That is, $\mathbf{\mu }_{n}\approx E_{0}\left( \widehat{\mathbf{\eta }}_{1}\right) -\mathbf{\eta }_{1}.$ Note that the expression for $\mathbf{\mu }_{n}$ given in Chen and Deo (2006, p 263) is incorrect. The derivation of $\mathbf{\mu }_{n}$ for all four estimation methods considered in the paper is provided in Appendix \ref{Appendix 2A:}.} Then the limiting distribution of the estimator is as follows:
\begin{enumerate}
\item[] Case 1: When $d^{\ast }=d_{0}-d_{1}>0.25,$
\begin{equation}
\frac{n^{1-2d^{\ast }}}{\log n}\left( \widehat{\mathbf{\eta }}_{1}-\mathbf{\eta }_{1}-\mathbf{\mu }_{n}\right) \rightarrow ^{D}\mathbf{B}^{-1}\left( \dsum\limits_{j=1}^{\infty }W_{j},0,...,0\right) ^{T},
\label{Asymptotic distribution for d0-d1>0.25}
\end{equation}
where $\dsum\limits_{j=1}^{\infty }W_{j}$ is defined as the mean-square limit of the random sequence $\sum_{j=1}^{s}W_{j}$ as $s\rightarrow \infty$, wherein
\begin{equation*}
W_{j}=\frac{\left( 2\pi \right) ^{1-2d^{\ast }}g_{0}(0)}{j^{2d^{\ast }}g_{1}(\mathbf{\beta }_{1}\mathbf{,}0)}\left[ U_{j}^{2}+V_{j}^{2}-E_{0}(U_{j}^{2}+V_{j}^{2})\right] ,
\end{equation*}
and $\{U_{j}\}$ and $\{V_{k}\}$ denote sequences of Gaussian random variables with zero mean and covariances $Cov_{0}\left( U_{j},U_{k}\right) =Cov_{0}\left( U_{j},V_{k}\right) =Cov_{0}\left( V_{j},V_{k}\right)$, where
\begin{equation*}
Cov_{0}\left( U_{j},V_{k}\right) =\iint\limits_{[0,1]^{2}}\left\{ \sin (2\pi jx)\sin (2\pi ky)+\sin (2\pi kx)\sin (2\pi jy)\right\} \left\vert x-y\right\vert ^{2d_{0}-1}dxdy\,.
\end{equation*}
\item[] Case 2: When $d^{\ast }=d_{0}-d_{1}=0.25$,
\begin{equation}
n^{1/2}\left[ \overline{\Lambda }_{dd}\right] ^{-1/2}\left( \widehat{\mathbf{\eta }}_{1}-\mathbf{\eta }_{1}\right) \rightarrow ^{D}\mathbf{B}^{-1}\left( Z,0,...,0\right) ^{T},
\label{Asymptotic distribution for d0-d1=0.25}
\end{equation}
where
\begin{equation*}
\overline{\Lambda }_{dd}=\frac{1}{n}\dsum\limits_{j=1}^{n/2}\left( \frac{f_{0}(\lambda _{j})}{f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda _{j})}\frac{\partial \log f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda _{j})}{\partial d}\right) ^{2}\,,
\end{equation*}
and $Z$ is a standard normal random variable.
\item[] Case 3: When $d^{\ast }=d_{0}-d_{1}<0.25$,
\begin{equation}
\sqrt{n}\left( \widehat{\mathbf{\eta }}_{1}-\mathbf{\eta }_{1}\right) \rightarrow ^{D}N(0,\mathbf{\Xi }),
\label{Asymptotic distribution for d0-d1<0.25}
\end{equation}
where $\mathbf{\Xi }=\mathbf{B}^{-1}\mathbf{\Lambda }\mathbf{B}^{-1}$ and
\begin{equation*}
\mathbf{\Lambda }=2\pi \int_{0}^{\pi }\left( \frac{f_{0}(\lambda )}{f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}\right) ^{2}\left( \frac{\partial \log f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}{\partial \mathbf{\eta }}\right) \left( \frac{\partial \log f_{1}(\mathbf{\eta }_{1}\mathbf{,}\lambda )}{\partial \mathbf{\eta }}\right) ^{T}d\lambda \,.
\end{equation*}
\end{enumerate}
\end{theorem}
We refer to \citet[][Theorems 1, 3 and 2]{chen:deo:2006} for details of the proof of Theorem \ref{Theorem A} in the case of the FML estimator $\widehat{\mathbf{\eta }}_{1}^{(1)}$. For the Whittle, TML and CSS estimators we will establish that $R_{n}(\widehat{\mathbf{\eta }}_{1}^{(i)}-\widehat{\mathbf{\eta }}_{1}^{(1)})\rightarrow ^{D}0$ for $i=2,3$ and $4$, where $R_{n}$ denotes the convergence rate applicable in the three different cases outlined in the theorem. We use a first-order Taylor expansion of $\partial Q_{n}^{(\cdot )}(\mathbf{\eta }_{1})/\partial \mathbf{\eta }$ about $\widehat{\mathbf{\eta }}_{1}^{(\cdot )}$, at which $\partial Q_{n}^{(\cdot )}(\widehat{\mathbf{\eta }}_{1}^{(\cdot )})/\partial \mathbf{\eta }=\mathbf{0}$. This gives
\begin{equation*}
\frac{\partial Q_{n}^{(\cdot )}(\mathbf{\eta }_{1})}{\partial \mathbf{\eta }}=\frac{\partial ^{2}Q_{n}^{(\cdot )}(\mathbf{\grave{\eta}}_{1}^{(\cdot )})}{\partial \mathbf{\eta }\partial \mathbf{\eta }'}\left( \mathbf{\eta }_{1}-\widehat{\mathbf{\eta }}_{1}^{(\cdot )}\right)
\end{equation*}
and
$$
R_{n}(\widehat{\mathbf{\eta }}_{1}^{(i)}-\widehat{\mathbf{\eta }}_{1}^{(j)})=\left[ \frac{\partial ^{2}Q_{n}^{(j)}(\mathbf{\grave{\eta}}_{1}^{(j)})}{\partial \mathbf{\eta }\partial \mathbf{\eta }'}\right] ^{-1}R_{n}\frac{\partial Q_{n}^{(j)}(\mathbf{\eta }_{1})}{\partial \mathbf{\eta }}-\left[ \frac{\partial ^{2}Q_{n}^{(i)}(\mathbf{\grave{\eta}}_{1}^{(i)})}{\partial \mathbf{\eta }\partial \mathbf{\eta }'}\right] ^{-1}R_{n}\frac{\partial Q_{n}^{(i)}(\mathbf{\eta }_{1})}{\partial \mathbf{\eta }}\,,
$$
where $\Vert \mathbf{\eta }_{1}-\mathbf{\grave{\eta}}_{1}^{(\cdot )}\Vert \leq \Vert \mathbf{\eta }_{1}-\widehat{\mathbf{\eta }}_{1}^{(\cdot )}\Vert$. Since $\text{plim}\,\widehat{\mathbf{\eta }}_{1}^{(\cdot )}=\mathbf{\eta }_{1}$, it is sufficient to show that there exists a constant $\mathcal{C}$, independent of $\mathbf{\eta }$, such that
\begin{equation}\label{asyI}
\frac{\partial ^{2}\{\mathcal{C}\cdot Q_{n}^{(i)}\left( \mathbf{\eta }_{1}\right) -Q_{n}^{(j)}\left( \mathbf{\eta }_{1}\right) \}}{\partial \mathbf{\eta }\partial \mathbf{\eta }'}=o_{p}(1)
\end{equation}
and
\begin{equation}\label{asyII}
R_{n}\mathcal{C}\cdot \frac{\partial Q_{n}^{(i)}\left( \mathbf{\eta }_{1}\right) }{\partial \mathbf{\eta }}\rightarrow ^{D}R_{n}\frac{\partial Q_{n}^{(j)}\left( \mathbf{\eta }_{1}\right) }{\partial \mathbf{\eta }}\,.
\end{equation}
The condition in \eqref{asyI} is established by showing that for each $i=1,2,3$ and $4$ the Hessian $\partial ^{2}\{Q_{n}^{(i)}\left( \mathbf{\eta }_{1}\right) \}/\partial \mathbf{\eta }\partial \mathbf{\eta }'$ converges in probability to a matrix proportional to $\mathbf{B}$, as defined in \eqref{Expression for B}.
This result parallels the convergence of $Q_n^{(1)}\left(\mathbf{\eta}\right)$ itself to the limiting objective function seen in \eqref{cd}, following the replacement of $f_{1}(\mathbf{\eta}_{1},\lambda)^{-1}$ by $\partial^2\{f_{1}(\mathbf{\eta}_{1},\lambda)^{-1}\}/\partial\mathbf{\eta}\partial\mathbf{\eta}'$ and $Q\left(\mathbf{\eta}\right)$ by $\mathbf{B}$. The proof that the Hessians so converge uses arguments similar to those employed in the proof of Proposition \ref{converge}; the details are therefore omitted. The proof of \eqref{asyII} is more involved because of the presence of the scaling factor $R_n$. In Appendix \ref{proofs} we present the steps necessary to prove \eqref{asyII} for each estimator. A key point to note from the three cases delineated in Theorem \ref{Theorem A} is that when the deviation between the true and pseudo-true values of $d$ is sufficiently large ($d^{\ast}\geq 0.25$) -- something that is related directly to the degree of mis-specification of $g_{0}(\lambda)$ by $g_{1}(\mathbf{\beta},\lambda)$ -- the $\sqrt{n}$ rate of convergence is lost, with the rate becoming arbitrarily close to zero depending on the value of $d^{\ast}$. For $d^{\ast}$ strictly greater than $0.25$, asymptotic Gaussianity is also lost, with the limiting distribution being a function of an infinite sum of non-Gaussian variables. For the $d^{\ast}\geq 0.25$ case, the limiting distribution -- whether Gaussian or otherwise -- is degenerate in the sense that the limiting distribution for each element of $\widehat{\mathbf{\eta}}_{1}$ is a different multiple of the same random variable ($\sum_{j=1}^{\infty}W_{j}$ in the case of $d^{\ast}>0.25$ and $Z$ in the case of $d^{\ast}=0.25$).
\section{Finite Sample Performance of the Mis-Specified Parametric Estimators of the Pseudo-True Parameter\label{finite-misspec}}
\subsection{Experimental design}
In this section we explore the finite sample performance of the alternative methods, as it pertains to estimation of the pseudo-true value of the long memory parameter, $d_{1}$, under specific types of mis-specification. We refer to these estimators as $\widehat{d}_{1}^{(1)}$ (FML), $\widehat{d}_{1}^{(2)}$ (Whittle), $\widehat{d}_{1}^{(3)}$ (TML) and $\widehat{d}_{1}^{(4)}$ (CSS). We first document the form of the finite sample distributions for each estimator by plotting the distribution of the standardized versions of the estimators, for which the asymptotic distributions are given in Cases 1, 2 and 3 respectively in Theorem \ref{Theorem A}. As part of this exercise we develop a method for obtaining the limiting distribution for $d^{\ast}>0.25$, as the distribution does not have a closed form in this case, as well as a method for estimating the bias-adjustment term, $\mathbf{\mu}_{n}$, which is relevant for this distribution. In the figures that follow the `Limit' curve depicts the limiting distribution of the relevant statistic. Supplementing these graphical results, we then tabulate the bias, MSE and relative efficiency of the four different techniques, as estimators of the pseudo-true parameter $d_{1}$, again under specific types of mis-specification and, hence, for different values of $d^{\ast}$.
Data are simulated from a zero-mean Gaussian ARFIMA$(p_{0},d_{0},q_{0})$ process, with the method of \cite{sowell:1992}, as modified by \cite{doornik:ooms:2003}, used to compute the exact autocovariance function for the TDGP for any given values of $p_{0}$, $d_{0}$ and $q_{0}$. We have produced results for $n=100$, $200$, $500$ and $1000$ and for two versions of mis-specification nested in the general case for which the analytical results are derived in Section \ref{pseudo}.\footnote{Note that the scope of the experimental design is constrained by the restriction that the pseudo-true value $d_{1}$ implied by any choice of parameter values should lie in the interval $(0,0.5)$.} However, we report selected results (only) from the full set due to space constraints. The bias, MSE and relative efficiency results, plus certain computations needed for the numerical specification of the limiting distribution in the $d^{\ast}>0.25$ case, are produced from $R=1000$ replications of samples of size $n$ from the relevant TDGP. The two forms of mis-specification considered are:
\begin{example}\label{eg1} An ARFIMA$(0,d_{0},1)$ TDGP, with parameter values $d_{0}=\left\{0.2,0.4\right\}$ and $\theta_{0}=\{-0.7,-0.444978,-0.3\}$; and an ARFIMA$(0,d,0)$ MM. The value $\theta_{0}=-0.7$ corresponds to the case where $d^{\ast}>0.25$ and $\widehat{d}_{1}^{(i)}$, $i=1,2,3,4$, have the slowest rate of convergence, $n^{1-2d^*}/\log n$, and to a non-Gaussian distribution. The value $\theta_{0}=-0.444978$ corresponds to the case where $d^{\ast}=0.25$, in which case asymptotic Gaussianity is preserved but the rate of convergence is of order $(n/\log^3 n)^{1/2}$. The value $\theta_{0}=-0.3$ corresponds to the case where $d^{\ast}<0.25$, with $\sqrt{n}$-convergence to Gaussianity obtaining.
\end{example}
\begin{example}\label{eg2} An ARFIMA$(0,d_{0},1)$ TDGP, with parameter values $d_{0}=\left\{0.2,0.4\right\}$ and $\theta_{0}=\{-0.7,-0.637014,-0.3\}$; and an ARFIMA$(1,d,0)$ MM. In this example the value $\theta_{0}=-0.7$ corresponds to the case where $d^{\ast}>0.25$, the value $\theta_{0}=-0.637014$ corresponds to the case where $d^{\ast}=0.25$, and the value $\theta_{0}=-0.3$ corresponds to the case where $d^{\ast}<0.25$.
\end{example}
In Subsection \ref{distemp} we document graphically the form of the finite sampling distributions of all four estimators of $d$ under the first type of mis-specification described above, for $d_{0}=0.2$ only. In Subsection \ref{biasmse} we report the bias and MSE of all four estimators (in terms of estimating the pseudo-true value $d_{1}$) under both forms of mis-specification and for both values of $d_{0}$.
\subsection{Finite sample distributions}\label{distemp}
In this section we consider in turn the three cases listed under Theorem \ref{Theorem A}. For notational ease and clarity we use $\widehat{d}_{1}$ to denote the (generic) estimator obtained under mis-specification, remembering that this estimator may be produced by any one of the four estimation methods. Similarly, we use $Q_{n}(\cdot)$ to denote the criterion associated with a generic estimator. Only when contrasting the (finite sample) performances of the alternative estimators do we re-introduce the superscript notation.
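Before turning to the individual cases, the data-generation step described in the experimental design above lends itself to a brief illustration. The sketch below (Python) is \emph{not} the \cite{sowell:1992}/\cite{doornik:ooms:2003} implementation used for the reported experiments; it is a minimal stand-in, with function names of our own choosing, that exploits the closed-form autocovariance of fractional noise filtered by an MA(1) polynomial, together with a Cholesky factorisation of the resulting Toeplitz covariance matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, toeplitz
from scipy.special import gammaln

def acvf_arfima_0d1(n, d, theta, sigma2=1.0):
    """Autocovariances gamma_x(0..n-1) of x_t = u_t + theta*u_{t-1},
    where u_t is Gaussian ARFIMA(0,d,0) fractional noise."""
    k = np.arange(n + 1)
    # gamma_u(k) = sigma2 Gamma(1-2d) Gamma(k+d) /
    #              (Gamma(d) Gamma(1-d) Gamma(k+1-d))
    log_gu = (gammaln(1 - 2*d) + gammaln(k + d)
              - gammaln(d) - gammaln(1 - d) - gammaln(k + 1 - d))
    gu = sigma2 * np.exp(log_gu)
    gu_lag = np.concatenate(([gu[1]], gu[:n-1]))   # gamma_u(k-1), symmetric
    return (1 + theta**2) * gu[:n] + theta * (gu[1:n+1] + gu_lag)

def simulate_arfima(n, d, theta, rng):
    """One exact Gaussian draw of length n from the ARFIMA(0,d,1) TDGP."""
    L = cholesky(toeplitz(acvf_arfima_0d1(n, d, theta)), lower=True)
    return L @ rng.standard_normal(n)

y = simulate_arfima(500, d=0.2, theta=-0.7, rng=np.random.default_rng(0))
\end{verbatim}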
\subsubsection{Case 1: $d^{\ast}>0.25$}
The limiting distribution for $\widehat{d}_{1}$ in this case is
\begin{equation}
\frac{n^{1-2d^{\ast}}}{\log n}\left(\widehat{d}_{1}-d_{1}-\mu_{n}\right)\rightarrow^{D}b^{-1}\dsum\limits_{j=1}^{\infty}W_{j}\,, \label{Asymptotic distribution_ex1_case1}
\end{equation}
where $\mu_{n}=b^{-1}E_{0}\left(\frac{\partial Q_{n}(\mathbf{\eta}_{1})}{\partial d}\right)$,
\begin{eqnarray}
b&=&-2\dint\limits_{-\pi}^{\pi}\frac{f_{0}(\lambda)}{f_{1}^{3}(\mathbf{\eta}_{1}\mathbf{,}\lambda)}\left(\frac{\partial f_{1}(\mathbf{\eta}_{1}\mathbf{,}\lambda)}{\partial d}\right)^{2}d\lambda+\dint\limits_{-\pi}^{\pi}\frac{f_{0}(\lambda)}{f_{1}^{2}(\mathbf{\eta}_{1}\mathbf{,}\lambda)}\frac{\partial^{2}f_{1}(\mathbf{\eta}_{1}\mathbf{,}\lambda)}{\partial d^{2}}d\lambda \notag\\
&=&-2\dint\limits_{0}^{\pi}(1+\theta_{0}^{2}+2\theta_{0}\cos(\lambda))(2\sin(\lambda/2))^{-2d^{\ast}}(2\log(2\sin(\lambda/2)))^{2}d\lambda\,, \label{Calculation_b}
\end{eqnarray}
and $W_{j}=\frac{\left(2\pi\right)^{1-2d^{\ast}}(1+\theta_{0}^{2})}{j^{2d^{\ast}}}\left[U_{j}^{2}+V_{j}^{2}-E_{0}(U_{j}^{2}+V_{j}^{2})\right]$, with $\{U_{j}\}$ and $\{V_{k}\}$ as defined in Theorem \ref{Theorem A}. (With reference to Theorem \ref{Theorem A}, both $\mathbf{B}$ and $\mathbf{\mu}_{n}$ in \eqref{Asymptotic distribution for d0-d1>0.25} are here scalars, since in Example $1$ there is only one parameter to estimate under the MM, namely $d$. Hence the obvious changes made to notation. All other notation is as defined in the theorem.) Given that the distribution in \eqref{Asymptotic distribution_ex1_case1} is non-standard and does not have a closed form representation, consideration must be given to its numerical evaluation. In finite samples the bias-adjustment term $\mu_{n}$ (which approaches zero in probability as $n\rightarrow\infty$) also needs to be calculated. We tackle each of these issues in turn, beginning with the computation of $\mu_{n}$.
\begin{enumerate}
\item[$(1)$] From Theorem \ref{Theorem A} it is apparent that in general the formula for $\mathbf{B}$ is independent of the estimation method, but the calculation of $\mathbf{\mu}_{n}$ requires separate evaluation of $E_{0}\left(\partial Q_{n}(\mathbf{\eta}_{1})/\partial\mathbf{\eta}\right)$ for each estimator. In Appendix \ref{Appendix 2A:} we provide expressions for $E_0(\partial Q_n(\mathbf{\eta}_1)/\partial\mathbf{\eta})$ for each of the four estimation methods. These formulae are used to evaluate the scalar $\mu_{n}$ here. Each value is then used in the specification of the standardized estimator $\frac{n^{1-2d^{\ast}}}{\log n}\left(\widehat{d}_{1}-d_{1}-\mu_{n}\right)$ in the simulation experiments.
\item[$(2)$] Quantification of the distribution of $\sum_{j=1}^{\infty}W_{j}$ requires the approximation of the infinite sum of the $W_{j}$, plus the use of simulation to represent the (appropriately truncated) sum. We truncate the series $\sum_{j=1}^{\infty}W_{j}$ after $s$ terms, where the truncation point $s$ is chosen such that $1\leqslant s<\lfloor n/2\rfloor$ with $s\rightarrow\infty$ as $n\rightarrow\infty$ (\textit{cf}.\ Lemma 6 of \cite{chen:deo:2006}). The value of $s$ is determined using the following criterion function.
Let
\begin{equation}
S_{n}=\widehat{Var}_{0}\left[\frac{n^{1-2d^{\ast}}}{\log n}\left(\widehat{d}_{1}-d_{1}-\mu_{n}\right)\right] \label{sn}
\end{equation}
denote the empirical finite sample variance observed across the $R$ replications and, for each $m$, $1\leqslant m<\lfloor n/2\rfloor$, let
\begin{equation*}
T_{m}=S_{n}-b^{-2}\Omega_{m},
\end{equation*}
where $\Omega_{m}=Var_{0}\left(\dsum\limits_{j=1}^{m}W_{j}\right)$. Now set
\begin{equation}\label{evaluation of s}
s=\arg\min_{1\leqslant m<\lfloor n/2\rfloor}T_{m}.
\end{equation}
Given $s$, we generate random draws of $\sum_{j=1}^{s}W_{j}$ via the underlying Gaussian random variables from which the $W_{j}$ are constructed, and produce an estimate of the limiting distribution using kernel methods.
\end{enumerate}
To determine $s$ we need to evaluate
\begin{equation}\label{Omega_m}
Var_{0}\left(\dsum\limits_{j=1}^{m}W_{j}\right)=\dsum\limits_{j=1}^{m}Var_{0}\left(W_{j}\right)+2\dsum\limits_{j=1}^{m-1}\dsum\limits_{k=j+1}^{m}Cov_{0}\left(W_{j},W_{k}\right)\,.
\end{equation}
The variance of $W_{j}$ in this case is
\begin{align*}
Var_{0}&\left\{\frac{\left(2\pi\right)^{1-2d^{\ast}}(1+\theta_{0}^{2})}{j^{2d^{\ast}}}\left[U_{j}^{2}+V_{j}^{2}-E_{0}\left(U_{j}^{2}+V_{j}^{2}\right)\right]\right\}\\
=&\,\frac{\left(2\pi\right)^{2-4d^{\ast}}(1+\theta_{0}^{2})^2}{j^{4d^{\ast}}}\left\{E_{0}\left(U_{j}^{2}+V_{j}^{2}\right)^{2}-\left[E_{0}\left(U_{j}^{2}+V_{j}^{2}\right)\right]^{2}\right\}.
\end{align*}
As $\{U_{j}\}$ and $\{V_{k}\}$ are normal random variables with a covariance structure as specified in Theorem \ref{Theorem A}, standard formulae for the moments of Gaussian random variables yield the result that
\begin{eqnarray*}
E_{0}\left(U_{j}^{2}+V_{j}^{2}\right)^{2}&=&E_{0}\left(U_{j}^{4}\right)+2E_{0}\left(U_{j}^{2}V_{j}^{2}\right)+E_{0}\left(V_{j}^{4}\right)\\
&=&3\left[Var_{0}\left(U_{j}\right)\right]^{2}+2\left[Var_{0}\left(U_{j}\right)Var_{0}\left(V_{j}\right)+2Cov_{0}^{2}(U_{j},V_{j})\right]+3\left[Var_{0}\left(V_{j}\right)\right]^{2}\\
&=&12\left[Var_{0}\left(U_{j}\right)\right]^{2}
\end{eqnarray*}
and
\begin{eqnarray*}
\left[E_{0}\left(U_{j}^{2}+V_{j}^{2}\right)\right]^{2}&=&\left[E_{0}\left(U_{j}^{2}\right)+E_{0}\left(V_{j}^{2}\right)\right]^{2}\\
&=&\left[Var_{0}\left(U_{j}\right)+Var_{0}\left(V_{j}\right)\right]^{2}\\
&=&4\left[Var_{0}\left(U_{j}\right)\right]^{2}.
\end{eqnarray*}
Thus,
\begin{equation*}
Var_{0}(W_{j})=\frac{8\left(2\pi\right)^{2-4d^{\ast}}(1+\theta_{0}^{2})^2}{j^{4d^{\ast}}}\left[Var_{0}\left(U_{j}\right)\right]^{2}\,.
\end{equation*}
Similarly, the covariance between $W_{j}$ and $W_{k}$ when $j\neq k$ can be shown to be equal to
\begin{align*}
\frac{\left(2\pi\right)^{2-4d^{\ast}}(1+\theta_{0}^{2})^2}{(jk)^{2d^{\ast}}}&Cov_{0}\left(U_{j}^{2}+V_{j}^{2},U_{k}^{2}+V_{k}^{2}\right)\\
=&\,\frac{8\left(2\pi\right)^{2-4d^{\ast}}(1+\theta_{0}^{2})^2}{(jk)^{2d^{\ast}}}\,Cov_{0}^{2}\left(U_{j},V_{k}\right)\,,
\end{align*}
the second line following from the identity $Cov_{0}(A^{2},B^{2})=2Cov_{0}^{2}(A,B)$ for zero-mean jointly Gaussian $A$ and $B$, together with the equality of the covariances $Cov_{0}(U_{j},U_{k})$, $Cov_{0}(U_{j},V_{k})$ and $Cov_{0}(V_{j},V_{k})$ assumed in Theorem \ref{Theorem A}. The expression in \eqref{Omega_m} can therefore be evaluated numerically, using the formula for $Cov_{0}(U_{j},V_{k})$ to calculate the necessary moments required to determine $s$ from \eqref{evaluation of s}. The idea behind the use of $T_{m}$ is simply to minimize the difference between the second-order sample and population moments.
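To make the preceding recipe concrete, the following sketch (Python; function names and the placeholder value of $S_{n}$ are ours) computes $b$ from \eqref{Calculation_b} by quadrature, evaluates $\Omega_{m}$ from the variance and covariance formulae above, and selects $s$ by matching the second-order moments. The brute-force evaluation of $Cov_{0}(U_{j},V_{k})$ is illustrative only and is slow in practice.
\begin{verbatim}
import numpy as np
from functools import lru_cache
from scipy.integrate import quad, dblquad

d0, theta0, d_star = 0.2, -0.7, 0.3723   # Example 1 design with d* > 0.25

# b from (Calculation_b); integrable endpoint singularity at lambda = 0
def b_integrand(lam):
    return ((1 + theta0**2 + 2 * theta0 * np.cos(lam))
            * (2 * np.sin(lam / 2))**(-2 * d_star)
            * (2 * np.log(2 * np.sin(lam / 2)))**2)

b = -2 * quad(b_integrand, 0.0, np.pi)[0]

@lru_cache(maxsize=None)
def cov_uv(j, k):
    """Cov_0(U_j, V_k) by brute-force dblquad; a production version would
    use a quadrature tailored to the |x-y|^{2 d0 - 1} singularity."""
    f = lambda y, x: ((np.sin(2*np.pi*j*x) * np.sin(2*np.pi*k*y)
                       + np.sin(2*np.pi*k*x) * np.sin(2*np.pi*j*y))
                      * abs(x - y)**(2*d0 - 1))
    return dblquad(f, 0.0, 1.0, 0.0, 1.0)[0]

def omega(m):
    """Omega_m = Var_0(sum_{j<=m} W_j): full sum of Var/Cov of the W_j."""
    J = np.arange(1, m + 1)
    C = np.array([[cov_uv(j, k) for k in J] for j in J])  # Cov(U_j, V_k)
    c0 = (2*np.pi)**(2 - 4*d_star) * (1 + theta0**2)**2
    return (8.0 * c0 * C**2 / np.outer(J, J)**(2*d_star)).sum()

S_n = 0.9   # placeholder: the empirical variance of the standardized FML
            # estimates across the R replications goes here
T = [abs(S_n - omega(m) / b**2) for m in range(1, 41)]
s = 1 + int(np.argmin(T))
\end{verbatim}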
The value of $S_{n}$ in \eqref{sn} will vary with the estimation method of course; however, we choose $s$ based on $S_{n}$ calculated from the FML estimates and maintain this choice of $s$ for all other methods. The terms in \eqref{Omega_m} are also dependent on the form of both the TDGP and the MM and hence $T_{m}$ needs to be determined for any specific case. The values of $s$ for the sample sizes used in the particular simulation experiment underlying the results in this section are provided in Table \ref{ex1_s}.
\begin{table}[h]
\caption{\small Truncation values $s$: ARFIMA$(0,d_{0},1)$ TDGP vis-\`{a}-vis ARFIMA$(0,d,0)$ MM.}
\begin{center}
\begin{tabular}{lllllllll}
\hline\hline
$n$ & & $100$ & & $200$ & & $500$ & & $1000$ \\
& & & & & & & & \\
$s$ & & $36$ & & $75$ & & $162$ & & $230$ \\
\hline\hline
\end{tabular}
\end{center}
\label{ex1_s}
\end{table}
Each panel in Figure \ref{ex1_case1} provides the kernel density estimate of $\frac{n^{1-2d^{\ast}}}{\log n}(\widehat{d}_{1}-d_{1}-\mu_{n})$ under the four estimation methods, for a specific $n$ as labeled above each plot, plus the limiting distribution for the given $s$.
\begin{figure}[h!]
\centering
\caption{Kernel density of $\frac{n^{1-2d^{\ast}}}{\log n}\left(\widehat{d}_{1}-d_{1}-\mu_{n}\right)$ for an ARFIMA$(0,d_{0},1)$ TDGP with $d_{0}=0.2$ and $\theta_{0}=-0.7$, and an ARFIMA$(0,d,0)$ MM; $d^{\ast}>0.25$.}
{\includegraphics[scale=0.7]{Ex1_1_100}} {\includegraphics[scale=0.7]{Ex1_1_200}} \\
{\includegraphics[scale=0.7]{Ex1_1_500}} {\includegraphics[scale=0.7]{Ex1_1_1000}} \\
{\includegraphics[scale=0.75]{Legend}}
\label{ex1_case1}
\end{figure}
The particular parameter values employed in the specification of the TDGP are $d_{0}=0.2$ and $\theta_{0}=-0.7$, with $d^{\ast}=0.3723$ in this case, and the values of $s$ used are those given in Table \ref{ex1_s}. From Figure \ref{ex1_case1} we see that $\frac{n^{1-2d^{\ast}}}{\log n}(\widehat{d}_{1}-d_{1}-\mu_{n})$ is centered away from zero for all sample sizes, for all estimation methods. However, as the sample size increases the point of central location of $\frac{n^{1-2d^{\ast}}}{\log n}(\widehat{d}_{1}-d_{1}-\mu_{n})$ approaches zero and the distributions of all the standardized statistics come close to matching the asymptotic (`Limit') distributions. The salient feature to be noted is the clustering that occurs, in particular for $n\leqslant 500$; that is, TML and CSS form one cluster and FML and Whittle form the other, with the time-domain estimators being closer to the asymptotic distribution for all three (smaller) sample sizes.
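The `Limit' curves in the figures can then be produced by simulating the truncated sum directly. The sketch below is a hedged illustration: under the all-equal covariance structure of Theorem \ref{Theorem A} the joint covariance matrix of the $U_{j}$ and $V_{j}$ is singular, so it is factorised by an eigendecomposition rather than a Cholesky decomposition, and the diagonal matrix in the final lines is merely a self-contained stand-in for the matrix of values $Cov_{0}(U_{j},V_{k})$ computed in the previous sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def draw_limit(C, d_star, theta0, n_draws=100_000, seed=1):
    """Draws of sum_{j<=s} W_j given the s x s matrix C whose (j,k) entry
    is the common covariance Cov_0(U_j, V_k) of Theorem A."""
    s = C.shape[0]
    rng = np.random.default_rng(seed)
    S2 = np.block([[C, C], [C, C]])     # covariance of (U_1..U_s, V_1..V_s)
    w, V = np.linalg.eigh(S2)           # S2 is singular, so factor by eigh
    L = V * np.sqrt(np.clip(w, 0.0, None))
    Z = rng.standard_normal((n_draws, 2 * s)) @ L.T
    U, Vmat = Z[:, :s], Z[:, s:]
    j = np.arange(1, s + 1)
    coef = (2*np.pi)**(1 - 2*d_star) * (1 + theta0**2) / j**(2*d_star)
    W = coef * (U**2 + Vmat**2 - 2.0 * np.diag(C))  # centre by E_0(U^2+V^2)
    return W.sum(axis=1)

# C would be filled with cov_uv(j, k) from the previous sketch; the
# diagonal stand-in below merely keeps this snippet runnable on its own.
C = np.diag(1.0 / np.arange(1.0, 37.0))
kde = gaussian_kde(draw_limit(C, d_star=0.3723, theta0=-0.7))
\end{verbatim}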
\subsubsection{Case 2: $d^{\ast}=0.25$}
The limiting distribution for $\widehat{d}_{1}$ in the case of $d^{\ast}=0.25$ is
\begin{equation}\label{Asymptotic distribution_ex1_case2}
n^{1/2}[\overline{\Lambda}_{dd}]^{-1/2}\left(\widehat{d}_{1}-d_{1}\right)\rightarrow^{D}N(0,b^{-2})\,,
\end{equation}
where
\begin{equation}\label{Rncase2}
\overline{\Lambda}_{dd}=\frac{1}{n}\sum_{j=1}^{n/2}(1+\theta_{0}^{2}+2\theta_{0}\cos(\lambda_{j}))^2(2\sin(\lambda_{j}/2))^{-1}(2\log(2\sin(\lambda_{j}/2)))^{2}
\end{equation}
and $b$ is as in (\ref{Calculation_b}). In both \eqref{Rncase2} and \eqref{Calculation_b} $\theta_{0}=-0.444978$, as $d^{\ast}=0.25$ occurs at this particular value. Once again, $d_{0}=0.2$ in the TDGP. Each panel of Figure \ref{ex1_case2} provides the densities of $n^{1/2}[\overline{\Lambda}_{dd}]^{-1/2}\left(\widehat{d}_{1}-d_{1}\right)$ under the four estimation methods, for a specific $n$ as labeled above each plot, plus the limiting distribution given in (\ref{Asymptotic distribution_ex1_case2}).
\begin{figure}[h!]
\centering
\caption{Kernel density of $n^{1/2}[\overline{\Lambda}_{dd}]^{-1/2}\left(\widehat{d}_{1}-d_{1}\right)$ for an ARFIMA$(0,d_{0},1)$ TDGP with $d_{0}=0.2$ and $\theta_{0}=-0.444978$, and an ARFIMA$(0,d,0)$ MM; $d^{\ast}=0.25$.}
{\includegraphics[scale=0.7]{Ex1_2_100}} {\includegraphics[scale=0.7]{Ex1_2_200}} \\
{\includegraphics[scale=0.7]{Ex1_2_500}} {\includegraphics[scale=0.7]{Ex1_2_1000}} \\
{\includegraphics[scale=0.75]{Legend}}
\label{ex1_case2}
\end{figure}
Once again we observe a disparity between the time domain and frequency domain kernel estimates, with the pair of time domain methods yielding finite sample distributions that are closer to the limiting distribution, for all sample sizes considered. The discrepancy between the two types of methods declines as the sample size increases, with the distributions of all methods being reasonably close both to one another, and to the limiting distribution, when $n=1000$.
\subsubsection{Case 3: $d^{\ast}<0.25$}
In this case we have
\begin{equation}
\sqrt{n}\left(\widehat{d}_{1}-d_{1}\right)\rightarrow^{D}N(0,\upsilon^{2})\,, \label{Asymptotic distribution_ex1_case3}
\end{equation}
where
\begin{equation}
\upsilon^{2}=\Lambda_{11}/b^{2}\,, \label{Calculation of V2}
\end{equation}
with
\begin{eqnarray*}
\Lambda_{11}&=&2\pi\dint\limits_{0}^{\pi}\left(\frac{f_{0}(\lambda)}{f_{1}(d_{1}\mathbf{,}\lambda)}\right)^{2}\left(\frac{\partial\log f_{1}(d_{1}\mathbf{,}\lambda)}{\partial d}\right)^{2}d\lambda\\
&=&2\pi\dint\limits_{0}^{\pi}(1+\theta_{0}^{2}+2\theta_{0}\cos(\lambda))^{2}(2\sin(\lambda/2))^{-4d^*}(2\log(2\sin(\lambda/2)))^{2}d\lambda\,,
\end{eqnarray*}
and $b$ as given in (\ref{Calculation_b}), evaluated at $\theta_{0}=-0.3$ and $d^*=0.1736$. Each panel in Figure \ref{ex1_case3} provides the kernel density estimate of the standardized statistic $\sqrt{n}(\widehat{d}_{1}-d_{1})$, under the four estimation methods, for a specific $n$ as labeled above each plot, plus the limiting distribution given in (\ref{Asymptotic distribution_ex1_case3}).
\begin{figure}[h!]
\centering
\caption{Kernel density of $\sqrt{n}\left(\widehat{d}_{1}-d_{1}\right)$ for an ARFIMA$(0,d_{0},1)$ TDGP with $d_{0}=0.2$ and $\theta_{0}=-0.3$, and an ARFIMA$(0,d,0)$ MM; $d^{\ast}<0.25$.}
{\includegraphics[scale=0.7]{Ex1_3_100_1}} {\includegraphics[scale=0.7]{Ex1_3_200_1}} \\
{\includegraphics[scale=0.7]{Ex1_3_500_1}} {\includegraphics[scale=0.7]{Ex1_3_1000_1}} \\
{\includegraphics[scale=0.75]{Legend}}
\label{ex1_case3}
\end{figure}
In this case there is no clear visual differentiation between the time domain and frequency domain methods, for any sample size, and, perhaps not surprisingly given the faster convergence rate in this case, all the methods produce finite sample distributions that match the limiting distribution reasonably well by the time $n=1000$.
\subsection{Finite sample bias and MSE of estimators of the pseudo-true parameter $d_{1}$}\label{biasmse}
We supplement the graphical results in the previous section by documenting the finite sample bias, MSE and relative efficiency of the four alternative estimators, as estimators of the pseudo-true parameter $d_{1}$. The following standard formulae,
\begin{eqnarray}
\widehat{\text{Bias}}_{0}\left(\widehat{d}_{1}^{(i)}\right)&=&\frac{1}{R}\dsum\limits_{r=1}^{R}\widehat{d}_{1,r}^{(i)}-d_{1} \label{bias_d1}\\
\widehat{Var}_{0}\left(\widehat{d}_{1}^{(i)}\right)&=&\frac{1}{R}\dsum\limits_{r=1}^{R}\left(\widehat{d}_{1,r}^{(i)}\right)^{2}-\left(\frac{1}{R}\dsum\limits_{r=1}^{R}\widehat{d}_{1,r}^{(i)}\right)^{2} \label{var_d1}\\
\widehat{\text{MSE}}_{0}\left(\widehat{d}_{1}^{(i)}\right)&=&\widehat{\text{Bias}}_{0}^{2}\left(\widehat{d}_{1}^{(i)}\right)+\widehat{Var}_{0}\left(\widehat{d}_{1}^{(i)}\right) \label{mse_d1}\\
\widehat{r.eff}_{0}\left(\widehat{d}_{1}^{(i)},\widehat{d}_{1}^{(j)}\right)&=&\frac{\widehat{\text{MSE}}_{0}\left(\widehat{d}_{1}^{(i)}\right)}{\widehat{\text{MSE}}_{0}\left(\widehat{d}_{1}^{(j)}\right)}, \label{ref_d1}
\end{eqnarray}
are applied to all four estimators, $i,j=1,...,4$. Since all empirical expectations and variances are evaluated under the TDGP, we make this explicit with appropriate subscript notation. Results are produced for Example $1$ in Tables \ref{Table_bias_MSE_Example 1} and \ref{Table_efficiency_Example 1 2} and for Example $2$ in Tables \ref{Table_bias_MSE_Example 2} and \ref{Table_efficiency_Example 1 2}, with additional results in Table \ref{Table_bias_MSE_no mis}. Values of $d^{\ast}=d_{0}-d_{1}$ are documented across the key ranges, $d^{\ast}\lesseqgtr 0.25$, along with associated values for the MA coefficient in the TDGP, $\theta_{0}$. The minimum values of bias and MSE for each parameter setting are highlighted in bold face in all tables for each sample size, $n$.\footnote{Only that number which is smallest at the precision of 8 decimal places is bolded. Values highlighted with a `$\ast$' are equally small to 4 decimal places.} Consider first the bias and MSE results for Example $1$ with $d_{0}=0.2$ displayed in the top panel of Table \ref{Table_bias_MSE_Example 1}.
\begin{table}[h]
\caption{\small Estimates of the bias and MSE of $\widehat{d}_{1}$ for the FML, Whittle, TML and CSS estimators: Example 1.}\label{Table_bias_MSE_Example 1}
{\small
\begin{tabular}{lllcccccccc}
& & & \multicolumn{2}{c}{FML} & \multicolumn{2}{c}{Whittle} & \multicolumn{2}{c}{TML} & \multicolumn{2}{c}{CSS} \\
$d^{\ast}$ & $\theta_{0}$ & $n$ & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE \\ \hline
& & & & & & & & & & \\
& & & \multicolumn{8}{c}{ARFIMA$(0,d_{0},1)$ TDGP $d_{0}=0.2$ vis-\`{a}-vis ARFIMA$(0,d,0)$ MM} \\
& & & & & & & & & & \\
0.3723 & -0.7 & 100 & -0.1781 & 0.0915 & -0.2466 & 0.0691 & -0.1748 & 0.0481 & \textbf{-0.1427} & \textbf{0.0315} \\
& & 200 & -0.1620 & 0.0558 & -0.1940 & 0.0431 & -0.1287 & 0.0335 & \textbf{-0.1110} & \textbf{0.0207} \\
& & 500 & -0.1354 & 0.0211 & -0.1308 & 0.0178 & -0.0916 & 0.0138 & \textbf{-0.0798} & \textbf{0.0097} \\
& & 1000 & -0.1019 & 0.0141 & -0.0996 & 0.0127 & -0.0776 & 0.0103 & \textbf{-0.0670} & \textbf{0.0065} \\
0.2500 & -0.44 & 100 & -0.1515 & 0.0393 & -0.1184 & 0.0298 & -0.0650 & 0.0170 & \textbf{-0.0577} & \textbf{0.0119} \\
& & 200 & -0.1010 & 0.0148 & -0.0852 & 0.0117 & -0.0434 & 0.0072 & \textbf{-0.0400} & \textbf{0.0052} \\
& & 500 & -0.0544 & 0.0048 & -0.0487 & 0.0042 & -0.0257 & 0.0027 & \textbf{-0.0241} & \textbf{0.0021} \\
& & 1000 & -0.0351 & 0.0023 & -0.0323 & 0.0021 & -0.0188 & 0.0015 & \textbf{-0.0162} & \textbf{0.0012} \\
0.1736 & -0.3 & 100 & -0.1082 & 0.0217 & -0.0712 & 0.0146 & -0.0340 & 0.0095 & \textbf{-0.0330} & \textbf{0.0087} \\
& & 200 & -0.0663 & 0.0085 & -0.0491 & 0.0064 & \textbf{-0.0213} & 0.0047 & -0.0228 & \textbf{0.0045} \\
& & 500 & -0.0318 & 0.0026 & -0.0251 & 0.0022 & \textbf{-0.0106} & 0.0017$^*$ & -0.0188 & \textbf{0.0017} \\
& & 1000 & -0.0184 & 0.0011 & -0.0149 & 0.0010 & \textbf{-0.0065} & 0.0009$^*$ & -0.0180 & \textbf{0.0009} \\ \hline
& & & & & & & & & & \\
& & & \multicolumn{8}{c}{ARFIMA$(0,d_{0},1)$ TDGP $d_{0}=0.4$ vis-\`{a}-vis ARFIMA$(0,d,0)$ MM} \\
& & & & & & & & & & \\
0.3723 & -0.7 & 100 & -0.2786 & 0.0995 & -0.2456 & 0.0724 & -0.2210 & 0.0515 & \textbf{-0.1957} & \textbf{0.0489} \\
& & 200 & -0.2096 & 0.0601 & -0.1942 & 0.0440 & -0.1778 & 0.0357 & \textbf{-0.1648} & \textbf{0.0340} \\
& & 500 & -0.1598 & 0.0213 & -0.1287 & 0.0181 & -0.1347 & 0.0137 & \textbf{-0.0871} & \textbf{0.0118} \\
& & 1000 & -0.1123 & 0.0157 & -0.0939 & 0.0143 & -0.0812 & 0.0121 & \textbf{-0.0648} & \textbf{0.0117} \\
0.2500 & -0.44 & 100 & -0.1903 & 0.0475 & -0.1659 & 0.0383 & -0.0911 & 0.0201 & \textbf{-0.0550} & \textbf{0.0138} \\
& & 200 & -0.1362 & 0.0237 & -0.1227 & 0.0195 & -0.0534 & 0.0103 & \textbf{-0.0421} & \textbf{0.0089} \\
& & 500 & -0.0796 & 0.0095 & -0.0550 & 0.0082 & -0.0249 & 0.0059 & \textbf{-0.0224} & \textbf{0.0038} \\
& & 1000 & -0.0360 & 0.0048 & -0.0295 & 0.0042 & -0.0180 & 0.0035 & \textbf{-0.0175} & \textbf{0.0025} \\
0.1736 & -0.3 & 100 & -0.0990 & 0.0228 & -0.0843 & 0.0152 & -0.0422 & 0.0102 & \textbf{-0.0321} & \textbf{0.0092} \\
& & 200 & -0.0773 & 0.0092 & -0.0505 & 0.0071 & -0.0244 & 0.0057 & \textbf{-0.0199} & \textbf{0.0048} \\
& & 500 & -0.0407 & 0.0031 & -0.0276 & 0.0025 & -0.0129 & 0.0022 & \textbf{-0.0087} & \textbf{0.0019} \\
& & 1000 & -0.0172 & 0.0011 & -0.0163 & 0.0010 & -0.0077 & 0.0009 & \textbf{-0.0052} & \textbf{0.0008} \\
\end{tabular}}
\end{table}
As is consistent with the theoretical results (and the graphical illustration in the previous section), the bias and MSE of all four parametric estimators show a clear tendency to decline as the sample size increases, for a fixed value of $\theta_{0}$. In addition, as $\theta_{0}$ declines in magnitude, and the MM becomes closer to the TDGP, there is a tendency for the MSE values and the absolute values of the bias to decline. Importantly, the bias is \textit{negative} for all four estimators, with the (absolute) bias of the two frequency domain estimators (FML and Whittle) being larger than that of the two time domain estimators. These results are consistent with the tendency of the standardized sampling distributions illustrated above to cluster, and for the frequency domain estimators to sit further to the left of zero than those of the time domain estimators, at least for the $d^*\geq 0.25$ cases.
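The summary statistics in Tables \ref{Table_bias_MSE_Example 1}--\ref{Table_bias_MSE_no mis} are straightforward to reproduce from stored replications; a minimal sketch of the computations in \eqref{bias_d1}--\eqref{ref_d1} follows (Python; the replicated estimates below are dummy placeholders, not output from the actual experiments).
\begin{verbatim}
import numpy as np

def summarise(d_hat, d1):
    """Bias, variance and MSE over replications, as in (bias_d1)-(mse_d1)."""
    bias = d_hat.mean() - d1
    var = (d_hat**2).mean() - d_hat.mean()**2
    return bias, var, bias**2 + var

# Dummy replications for illustration only: in the experiments each array
# would hold the R = 1000 estimates of d_1 produced by one method.
rng = np.random.default_rng(2)
d1 = 0.1                                  # placeholder pseudo-true value
est = {m: d1 - 0.05 + 0.05 * rng.standard_normal(1000)
       for m in ('FML', 'Whittle', 'TML', 'CSS')}
mse = {m: summarise(v, d1)[2] for m, v in est.items()}
rel_eff = {m: mse[m] / mse['FML'] for m in mse}   # (ref_d1), FML benchmark
\end{verbatim}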
Again, as is consistent with the theoretical results, the rate of decline in the (absolute) bias and MSE of all estimators, as $n$ increases, is slower for $d^{\ast}\geq 0.25$ than for $d^{\ast}<0.25$. As indicated by the results in the bottom panel of Table \ref{Table_bias_MSE_Example 1} for $d_0=0.4$, the impact of an increase in $d_{0}$ (for any given value of $d^{\ast}$ and $n$) is to (usually but not uniformly) increase the bias and MSE of all estimators, as estimators of $d_{1}$. That is, the ability of the four estimators to accurately estimate the pseudo-true parameter for which they are consistent tends to decline (overall) as the long memory in the TDGP increases. Nevertheless, these results show that the relativities between the estimators remain essentially the same as for the smaller value of $d_{0}$, with the CSS estimator now being uniformly preferable to all other estimators under mis-specification, and the FML estimator still performing the worst of all. The results recorded in Table \ref{Table_bias_MSE_Example 2} for Example $2$ illustrate that the presence of an AR term in the MM means that more severe mis-specification can be tolerated.
\begin{table}[h]
\caption{\small Estimates of the bias and MSE of $\widehat{d}_{1}$ for the FML, Whittle, TML and CSS estimators: Example 2.}\label{Table_bias_MSE_Example 2}
{\small
\begin{tabular}{lllcccccccc}
& & & \multicolumn{2}{c}{FML} & \multicolumn{2}{c}{Whittle} & \multicolumn{2}{c}{TML} & \multicolumn{2}{c}{CSS} \\
$d^{\ast}$ & $\theta_{0}$ & $n$ & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE \\ \hline
& & & & & & & & & & \\
& & & \multicolumn{8}{c}{ARFIMA$(0,d_{0},1)$ TDGP $d_{0}=0.2$ vis-\`{a}-vis ARFIMA$(1,d,0)$ MM} \\
& & & & & & & & & & \\
0.2915 & -0.7 & 100 & -0.1612 & 0.0541 & -0.1169 & 0.0342 & -0.0950 & 0.0295 & \textbf{-0.0671} & \textbf{0.0236} \\
& & 200 & -0.1143 & 0.0376 & -0.0941 & 0.0262 & -0.0760 & 0.0213 & \textbf{-0.0482} & \textbf{0.0175} \\
& & 500 & -0.0679 & 0.0165 & -0.0604 & 0.0125 & -0.0454 & 0.0110 & \textbf{-0.0369} & \textbf{0.0089} \\
& & 1000 & -0.0469 & 0.0089 & -0.0432 & 0.0071 & \textbf{-0.0250} & 0.0067 & -0.0303 & \textbf{0.0059} \\
0.25 & -0.64 & 100 & -0.1339 & 0.0279 & -0.0899 & 0.0175 & -0.0655 & 0.0138 & \textbf{-0.0457} & \textbf{0.0110} \\
& & 200 & -0.0902 & 0.0125 & -0.0700 & 0.0086 & -0.0490 & 0.0067 & \textbf{-0.0345} & \textbf{0.0062} \\
& & 500 & -0.0490 & 0.0041 & -0.0415 & 0.0030 & -0.0323 & 0.0026 & \textbf{-0.0230} & \textbf{0.0022} \\
& & 1000 & -0.0316 & 0.0019 & -0.0281 & 0.0015 & -0.0181 & 0.0013 & \textbf{-0.0176} & \textbf{0.0011} \\
0.0148 & -0.3 & 100 & -0.0508 & 0.0139 & -0.0256 & 0.0086 & 0.0190 & 0.0067 & \textbf{-0.0082} & \textbf{0.0054} \\
& & 200 & -0.0266 & 0.0053 & -0.0135 & 0.0036 & 0.0168 & 0.0028 & \textbf{-0.0081} & \textbf{0.0025} \\
& & 500 & -0.0093 & 0.0027 & -0.0080 & 0.0019 & 0.0073 & 0.0016 & \textbf{-0.0004} & \textbf{0.0014} \\
& & 1000 & -0.0036 & 0.0010 & -0.0023 & 0.0008 & 0.0067 & 0.0006$^{\ast}$ & \textbf{0.0003} & \textbf{0.0006} \\ \hline
& & & & & & & & & & \\
& & & \multicolumn{8}{c}{ARFIMA$(0,d_{0},1)$ TDGP $d_{0}=0.4$ vis-\`{a}-vis ARFIMA$(1,d,0)$ MM} \\
& & & & & & & & & & \\
0.2915 & -0.7 & 100 & -0.2299 & 0.0639 & -0.1805 & 0.0419 & -0.1279 & 0.0372 & \textbf{-0.0699} & \textbf{0.0140} \\
& & 200 & -0.1774 & 0.0395 & -0.1599 & 0.0282 & -0.1034 & 0.0245 & \textbf{-0.0578} & \textbf{0.0190} \\
& & 500 & -0.1294 & 0.0197 & -0.1039 & 0.0150 & -0.0816 & 0.0126 & \textbf{-0.0294} & \textbf{0.0101} \\
& & 1000 & -0.1089 & 0.0125 & -0.0632 & 0.0099 & -0.0462 & 0.0081 & \textbf{-0.0109} & \textbf{0.0069} \\
0.25 & -0.64 & 100 & -0.1396 & 0.0257 & -0.0979 & 0.0155 & -0.0692 & 0.0145 & \textbf{-0.0508} & \textbf{0.0103} \\
& & 200 & -0.0868 & 0.0122 & -0.0675 & 0.0077 & -0.0401 & 0.0076 & \textbf{-0.0357} & \textbf{0.0058} \\
& & 500 & -0.0455 & 0.0065 & -0.0342 & 0.0046 & -0.0294 & 0.0041 & \textbf{-0.0216} & \textbf{0.0033} \\
& & 1000 & -0.0316 & 0.0027 & -0.0192 & 0.0021 & -0.0177 & 0.0018 & \textbf{-0.0122} & \textbf{0.0014} \\
0.0148 & -0.3 & 100 & -0.0650 & 0.0162 & -0.0422 & 0.0115 & 0.0246 & 0.0082 & \textbf{-0.0132} & \textbf{0.0067} \\
& & 200 & -0.0312 & 0.0095 & -0.0164 & 0.0075 & 0.0107 & 0.0053 & \textbf{-0.0094} & \textbf{0.0047} \\
& & 500 & -0.0205 & 0.0042 & -0.0133 & 0.0034 & 0.0079 & 0.0026 & \textbf{-0.0035} & \textbf{0.0023} \\
& & 1000 & -0.0136 & 0.0021 & -0.0088 & 0.0018 & 0.0053 & 0.0014 & \textbf{-0.0017} & \textbf{0.0013} \\
\end{tabular}}
\end{table}
More specifically, in all (comparable) cases and for all estimators, the finite sample bias and MSE recorded in Table \ref{Table_bias_MSE_Example 2} tend to be smaller in (absolute) value than the corresponding values in Table \ref{Table_bias_MSE_Example 1}.
Results not presented here suggest, however, that when the value of $\theta_{0}$ is near zero, estimation under the MM with an extraneous AR parameter causes an increase in (absolute) bias and MSE, relative to the case where the MM is fractional noise (see also the following remark). With due consideration taken of the limited nature of the experimental design, these results suggest that the inclusion of some form of short-memory dynamics in the estimated model -- even if those dynamics are not of the correct form -- acts as an insurance against more extreme mis-specification, but at the possible cost of a decline in performance when the consequences of mis-specification are not severe.
\bigskip
\noindent REMARK: When the parameter $\theta_{0}$ of the ARFIMA$(0,d_{0},1)$ TDGP equals zero the TDGP coincides with the ARFIMA$(0,d,0)$ model and is nested within the ARFIMA$(1,d,0)$ model. Thus the value $\theta_{0}=0$ is associated with a match between the TDGP and the model, at which point $d^{\ast}=0$ and there is no mis-specification. That is, neither the ARFIMA$(0,d,0)$ model estimated in Example 1, nor the ARFIMA$(1,d,0)$ model estimated in Example 2, is mis-specified (according to our definition) when applied to an ARFIMA$(0,d_0,0)$ TDGP, although the ARFIMA$(1,d,0)$ model is \textit{incorrect} in the sense of being over-parameterized. Table \ref{Table_bias_MSE_no mis} presents the bias and MSE observed when there is such a lack of mis-specification.
\begin{table}[h]
\caption{\small Estimates of the bias and MSE of $\widehat{d}_{1}$ for the FML, Whittle, TML and CSS estimators: ARFIMA$(0,d_{0},0)$ TDGP, $d_{0}=0.2$, $d^*=0.0$.}\label{Table_bias_MSE_no mis}
{\small
\begin{tabular}{clcccccccc}
& & \multicolumn{2}{c}{FML} & \multicolumn{2}{c}{Whittle} & \multicolumn{2}{c}{TML} & \multicolumn{2}{c}{CSS} \\
& $n$ & Bias & MSE & Bias & MSE & Bias & MSE & Bias & MSE \\ \hline
& & & & & & & & & \\
& & \multicolumn{8}{c}{Correct ARFIMA$(0,d,0)$ model} \\
& & & & & & & & & \\
& 100 & -0.0502 & 0.0113 & -0.0173 & 0.0102 & \textbf{0.0066} & \textbf{0.0087} & 0.0094 & 0.0096 \\
& 200 & -0.0279 & 0.0044 & -0.0110 & 0.0041 & \textbf{0.0043} & \textbf{0.0037} & 0.0063 & 0.0039 \\
& 500 & -0.0089 & 0.0015 & -0.0062 & 0.0014 & \textbf{0.0026} & \textbf{0.0013} & 0.0031 & 0.0014 \\
& 1000 & -0.0045 & 0.0006$^{\ast}$ & -0.0037 & 0.0006$^{\ast}$ & \textbf{0.0016} & \textbf{0.0006} & 0.0025 & 0.0006$^{\ast}$ \\
& & & & & & & & & \\
& & \multicolumn{8}{c}{Over-parameterized ARFIMA$(1,d,0)$ model} \\
& & & & & & & & & \\
& 100 & -0.0455 & 0.0177 & 0.0371 & 0.0121 & 0.0255 & 0.0107 & \textbf{0.0158} & \textbf{0.0087} \\
& 200 & -0.0216 & 0.0081 & 0.0196 & 0.0058 & 0.0107 & 0.0052 & \textbf{0.0092} & \textbf{0.0042} \\
& 500 & -0.0120 & 0.0065 & 0.0091 & 0.0049 & 0.0078 & 0.0043 & \textbf{0.0055} & \textbf{0.0037} \\
& 1000 & -0.0074 & 0.0027 & 0.0055 & 0.0021 & 0.0034 & 0.0019 & \textbf{0.0028} & \textbf{0.0016} \\
\end{tabular}}
\end{table}
Under the correct specification of the ARFIMA$(0,d,0)$ model the TML estimator is now superior, in terms of both bias and MSE. The relative accuracy of the TML estimator seen here is consistent with certain results recorded in \cite{sowell:1992} and \cite{cheung:diebold:1994}, in which the performance of the TML method (under a known mean, as is the case considered here) is assessed against that of various comparators under correct model specification. For the over-parameterized ARFIMA$(1,d,0)$ model, however, the CSS estimator dominates once more. This latter result is in accord with the findings in \cite{nielsen:frederiksen:2005}, where the TML estimator is compared with the CSS and Whittle estimators for a fractional noise model and a deterioration in the relative performance of the TML estimator as a result of estimating the unknown mean is observed, an effect previously documented in \citet{cheung:diebold:1994}. \hfill$\Box$
\bigskip
The results in Tables \ref{Table_bias_MSE_Example 1}, \ref{Table_bias_MSE_Example 2} and \ref{Table_bias_MSE_no mis} highlight that the CSS estimator has the smallest MSE of all four estimators under mis-specification, and when there is no mis-specification but the model is over-parameterized, and that this result holds for all sample sizes considered. The absolute value of its bias is also the smallest in the vast majority of such cases. This superiority presumably reflects a certain in-built robustness of least squares methods to mis-specification and incorrect model formulation. This is further emphasized in Table \ref{Table_efficiency_Example 1 2}, which records the relative efficiencies of the estimators. The relative efficiencies are calculated by taking the ratio of the MSE of $\widehat{d}_{1}$ for all estimation methods to the MSE of the FML estimator, as per (\ref{ref_d1}), and for each combination of $d_{0}$, $\theta_{0}$ and $n$ the minimum MSE ratio is bolded.
\begin{table}[h]
\caption{\small Estimates of the efficiency of the Whittle, TML and CSS estimators of the long memory parameter relative to the FML estimator: Examples 1 and 2.}\label{Table_efficiency_Example 1 2}
\begin{center}
{\small
\begin{tabular}{llccccccc}
& & & Whittle & TML & CSS & Whittle & TML & CSS \\ \cline{4-9}
$d^{\ast}$ & $\theta_{0}$ & $n$ & \multicolumn{3}{c}{$d_{0}=0.2$} & \multicolumn{3}{|c}{$d_{0}=0.4$} \\ \hline
& & & & & & & & \\
& & \multicolumn{7}{c}{ARFIMA$(0,d_{0},1)$ TDGP vis-\`{a}-vis ARFIMA$(0,d,0)$ MM} \\
& & & & & & & & \\
0.3723 & -0.7 & 100 & 0.7552 & 0.5257 & \textbf{0.3443} & \multicolumn{1}{|c}{0.7276} & 0.5176 & \textbf{0.4915} \\
& & 200 & 0.7724 & 0.6004 & \textbf{0.3710} & \multicolumn{1}{|c}{0.7321} & 0.5940 & \textbf{0.5657} \\
& & 500 & 0.8436 & 0.6540 & \textbf{0.4597} & \multicolumn{1}{|c}{0.8498} & 0.6432 & \textbf{0.5540} \\
& & 1000 & 0.9007 & 0.7305 & \textbf{0.4610} & \multicolumn{1}{|c}{0.9108} & 0.7707 & \textbf{0.7452} \\
0.2500 & -0.44 & 100 & 0.7583 & 0.4326 & \textbf{0.3028} & \multicolumn{1}{|c}{0.8063} & 0.4232 & \textbf{0.2905} \\
& & 200 & 0.7905 & 0.4865 & \textbf{0.3514} & \multicolumn{1}{|c}{0.8228} & 0.4346 & \textbf{0.3755} \\
& & 500 & 0.8750 & 0.5625 & \textbf{0.4375} & \multicolumn{1}{|c}{0.8632} & 0.6211 & \textbf{0.4000} \\
& & 1000 & 0.9130 & 0.6522 & \textbf{0.5217} & \multicolumn{1}{|c}{0.8750} & 0.7292 & \textbf{0.5208} \\
0.1736 & -0.3 & 100 & 0.6728 & 0.4378 & \textbf{0.4009} & \multicolumn{1}{|c}{0.6667} & 0.4474 & \textbf{0.4035} \\
& & 200 & 0.7529 & 0.5529 & \textbf{0.5294} & \multicolumn{1}{|c}{0.7717} & 0.6196 & \textbf{0.5217} \\
& & 500 & 0.8462 & 0.6538 & \textbf{0.6362} & \multicolumn{1}{|c}{0.8065} & 0.7097 & \textbf{0.6129} \\
& & 1000 & 0.9091 & 0.8182 & \textbf{0.7730} & \multicolumn{1}{|c}{0.9091} & 0.8182 & \textbf{0.7636} \\
& & & & & & & & \\
& & \multicolumn{7}{c}{ARFIMA$(0,d_{0},1)$ TDGP vis-\`{a}-vis ARFIMA$(1,d,0)$ MM} \\
& & & & & & & & \\
0.2915 & -0.7 & 100 & 0.6322 & 0.5453 & \textbf{0.4362} & \multicolumn{1}{|c}{0.6557} & 0.5822 & \textbf{0.4224} \\
& & 200 & 0.6968 & 0.5665 & \textbf{0.4654} & \multicolumn{1}{|c}{0.7139} & 0.6203 & \textbf{0.4810} \\
& & 500 & 0.7576 & 0.6667 & \textbf{0.5394} & \multicolumn{1}{|c}{0.7614} & 0.6396 & \textbf{0.5127} \\
& & 1000 & 0.7978 & 0.7528 & \textbf{0.6629} & \multicolumn{1}{|c}{0.7920} & 0.6480 & \textbf{0.5520} \\
0.25 & -0.64 & 100 & 0.6272 & 0.4946 & \textbf{0.3943} & \multicolumn{1}{|c}{0.6031} & 0.5642 & \textbf{0.4008} \\
& & 200 & 0.6880 & 0.5360 & \textbf{0.4960} & \multicolumn{1}{|c}{0.6311} & 0.6230 & \textbf{0.4754} \\
& & 500 & 0.7317 & 0.6341 & \textbf{0.5366} & \multicolumn{1}{|c}{0.7077} & 0.6308 & \textbf{0.5077} \\
& & 1000 & 0.7895 & 0.6842 & \textbf{0.5789} & \multicolumn{1}{|c}{0.7778} & 0.6667 & \textbf{0.5185} \\
0.0148 & -0.3 & 100 & 0.6187 & 0.4820 & \textbf{0.3885} & \multicolumn{1}{|c}{0.7099} & 0.5062 & \textbf{0.4136} \\
& & 200 & 0.6792 & 0.5283 & \textbf{0.4717} & \multicolumn{1}{|c}{0.7895} & 0.5579 & \textbf{0.4947} \\
& & 500 & 0.7148 & 0.5926 & \textbf{0.5185} & \multicolumn{1}{|c}{0.8095} & 0.6190 & \textbf{0.5476} \\
& & 1000 & 0.7632 & 0.6400 & \textbf{0.5600} & \multicolumn{1}{|c}{0.8571} & 0.6667 & \textbf{0.6190} \\
\end{tabular}}
\end{center}
\end{table}
The relative efficiency results recorded in Table \ref{Table_efficiency_Example 1 2} confirm that the CSS estimator is between (approximately) two and three times as efficient as the FML estimator (in particular) in the region of the parameter space ($d^{\ast}\geq 0.25$) in which both (absolute) bias and MSE are at their highest for all estimators. Also, the MSE of the FML estimator exceeds the corresponding values for all three other estimators, with all relative efficiency values recorded in Table \ref{Table_efficiency_Example 1 2} being less than one. Accordingly, across all parameter settings we have documented where mis-specification or incorrect model formulation obtains, the CSS estimator is almost universally dominant.
\section{Summary and Conclusions\label{Conclusion}}
This paper presents theoretical and simulation-based results relating to the estimation of mis-specified models for long range dependent processes. We show that under mis-specification four classical parametric estimation methods -- frequency domain maximum likelihood (FML), Whittle, time domain maximum likelihood (TML) and conditional sum of squares (CSS) -- converge to the same pseudo-true parameter value. A general closed-form solution for the limiting criterion function for the four alternative parametric estimation methods is derived in the case of ARFIMA models. This enables us to demonstrate the link between any form of mis-specification of the short-memory dynamics and the difference between the true and pseudo-true values of the fractional index, $d$, and, hence, to the resulting (asymptotic) distributional properties of the estimators, having proved that the estimators are asymptotically equivalent. The finite sample performance of all four estimators is then documented. The extent to which the finite sample distributions mimic the (numerically specified) asymptotic distributions is displayed.
In the case of more extreme mis-specification, the pairs of time domain and frequency domain estimators tend to cluster together for smaller sample sizes, with the former pair mimicking the asymptotic distributions more closely. Bias and mean squared error (MSE) calculations demonstrate the superiority overall of the CSS estimator, under mis-specification, and the distinct inferiority of the FML estimator -- as estimators of the pseudo-true parameter for which they are both consistent. There are several interesting issues that arise from the results that we have established, including the following: First, the necessity to suppose that $\{y_t\}$ is a Gaussian process in order to appeal to existing results in the literature where this assumption is made is unfortunate. It seems reasonable to suppose that our results can be extended to long range dependent linear processes, given that under current assumptions the series will have such a representation, but extension to more general processes is not likely to be straightforward. Second, a relaxation of the restriction that only values of $d\in(0,0.5)$ be considered seems desirable, particularly as the relationship between the true value $d_0$ and the pseudo-true value $d_1$ depends upon the interaction between the TDGP and the MM and $d_0\in(0,0.5)$ does not imply the same is true of $d_1$. The extension of our results to short memory, $d=0$, anti-persistent, $d<0$, and non-stationary, $d\geq 0.5$, cases will facilitate the consideration of a broader range of circumstances. To some extent other values of $d$ might be covered by means of appropriate pre-filtering, for example, the use of first-differencing when $d\in(1,3/2)$, but this would require prior knowledge of the structure of the process and opens up the possibility of a different type of mis-specification from the one we have considered here. Explicit consideration of the non-stationary case with $d\in(0,3/2)$, say, perhaps offers a better approach as prior knowledge of the characteristics of the process would then be unnecessary. The latter also seems particularly relevant given that estimates near the boundaries $d=0.5$ and $d=1$ are not uncommon in practice. Previous developments in the analysis of non-stationary fractional processes \citep[see, inter alios,][]{beran:1995,tanaka:1999, velasco:1999} might offer a sensible starting point for such an investigation. Third, our limiting distribution results can be used in practice to conduct inference on the long memory and other parameters after constructing obvious smoothed periodogram consistent estimates of $\mathbf{B}$, $\mathbf{\mu }_{n}$, $\overline{\Lambda}_{dd}$ and $\mathbf{\Lambda}$. But which situation should be assumed in any particular instance, $d^*>0.25$, $d^*=0.25$ or $d^*<0.25$, may be a moot point. 
Fourth, the relationships between the bias and MSE of the parametric estimators of $d_{1}$ (denoted respectively below by Bias$\_d_{1}$ and MSE$\_d_{1}$), and the bias and MSE as estimators of the \textit{true} value $d_{0}$ (Bias$\_d_{0}$ and MSE$\_d_{0}$ respectively), can be expressed simply as follows:
\begin{eqnarray*}
\text{Bias}\_d_{0}&=&E_{0}(\widehat{d}_{1})-d_{0}\\
&=&\left[E_{0}(\widehat{d}_{1})-d_{1}\right]+(d_{1}-d_{0})\\
&=&\text{Bias}\_d_{1}-d^*\,,
\end{eqnarray*}
where, we recall, $d^*=d_{0}-d_{1}$, and
\begin{eqnarray*}
\text{MSE}\_d_{0}&=&E_{0}\left(\widehat{d}_{1}-d_{0}\right)^{2}\\
&=&E_{0}\left(\widehat{d}_{1}-E_{0}(\widehat{d}_{1})\right)^{2}+\left[E_{0}(\widehat{d}_{1})-d_{0}\right]^{2}\\
&=&E_{0}\left(\widehat{d}_{1}-E_{0}(\widehat{d}_{1})\right)^{2}+\left[[E_{0}(\widehat{d}_{1})-d_{1}]-d^*\right]^{2}\\
&=&E_{0}\left(\widehat{d}_{1}-E_{0}(\widehat{d}_{1})\right)^{2}+\left[E_{0}(\widehat{d}_{1})-d_{1}\right]^{2}+d^{*2}-2d^*\left[E_{0}(\widehat{d}_{1})-d_{1}\right]\\
&=&\text{MSE}\_d_{1}+d^{*2}-2d^*\text{Bias}\_d_{1}\,.
\end{eqnarray*}
Hence, if Bias$\_d_{1}$ has the same sign as $d^*$ at any particular point in the parameter space, then the bias of a mis-specified parametric estimator \textit{as an estimator of} $d_{0}$ may be less (in absolute value) than its bias as an estimator of $d_{1}$, depending on the magnitude of the two quantities. Similarly, MSE$\_d_{0}$ may be less than MSE$\_d_{1}$ if Bias$\_d_{1}$ and $d^*$ have the same sign, with the final result again depending on the magnitude of the two quantities. These results imply that it is possible for the ranking of mis-specified parametric estimators to be altered once the reference point changes from $d_{1}$ to $d_{0}$. This raises the following questions: Does the dominance of the CSS estimator (within the parametric set of estimators) still obtain when the true value of $d$ is the reference value? And, more critically from a practical perspective: Are there circumstances where a mis-specified parametric estimator out-performs semi-parametric alternatives in finite samples, the lack of consistency (for $d_{0}$) of the former notwithstanding? Such topics remain the focus of current and ongoing research.
\phantomsection
\addcontentsline{toc}{section}{References}
\bibliographystyle{ims}
{ "attr-fineweb-edu": 1.572266, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction}
Kleene algebra generalises the language of regular expressions and, as a basis for reasoning about programs and computing systems, it has been used in applications ranging from compiler optimisation, program refinement, combinatorial optimisation and algorithm design~\cite{Con71,Koz94,Koz00a,Koz00b,Mci06}. A number of variants of the original axiom system and language of Kleene algebra have extended its range of applicability to include probability \cite{Mci05}, with the most recent being the introduction of a concurrency operator \cite{Hoa09}. The main benefits of the algebraic approach are that it captures some essential aspects of computing systems in a simple and concise way, and that the calculational style of reasoning it supports is very suitable for automated theorem proving. In this paper we continue this line of work and propose \emph{weak concurrent Kleene algebra}, which extends the abstract probabilistic Kleene algebra~\cite{Mci05} with the concurrency operator of concurrent Kleene algebra~\cite{Hoa09} and thus supports reasoning about concurrency in a context of probabilistic effects. This extension calls for a careful evaluation of the axiom system so that it accurately accounts for the interactions of probabilistic choice, nondeterministic choice and the treatment of concurrency. For example, probabilistic Kleene algebra accounts for the presence of probability through the \emph{failure} of the original distributive law $x(y + z) = xy + xz$, which is also absent in most process algebras. That is because when the terms $x, y, z$ are interpreted as probabilistic programs, with $xy$ meaning ``first execute $x$ and then $y$" and $+$ interpreted as a nondeterministic choice, the expression on the left hand side exhibits a greater range of nondeterminism than the right in the case that $x$ includes probabilistic behaviours. For example, if $x$ is interpreted as a program which flips a bit with probability $1/2$, then the nondeterministic choice in $y+z$ that follows can always be resolved so that $y$ is executed \emph{if and only if} the bit was indeed flipped. This is not a behaviour amongst those described by $xy + xz$, where the nondeterminism is resolved before the bit is flipped and therefore its resolution is unavoidably independent of the flipping. Instead, in contexts such as these, distributivity must be replaced by a weaker law:
\begin{equation}\label{eq:subdistributivity}
\textrm{Sub-distributivity:}\qquad xy + xz ~\leq~ x(y + z)~.
\end{equation}
Elsewhere~\cite{Rab11} we show that this weakening of the original axioms of Kleene algebra results in a complete system relative to a model of nondeterministic automata modulo simulation equivalence. The behaviour of the concurrency operator of concurrent Kleene algebra~\cite{Hoa09} is captured in particular by the \emph{Interchange law}:
$$(x \| y)(u \| v) ~\leq~ (xu) \| (yv),$$
which expresses that there is a lesser range of nondeterministic executions on the left, where, for example, the execution of $u$ is constrained to follow a complete execution of $x$ run concurrently with $y$, but on the right it is not. Our {\bf first contribution} is the construction of a concrete model of abstract probabilistic automata (where the probability is at the action level) over which to interpret terms composed of traditional Kleene algebra together with concurrent composition. In this interpretation, each term represents an automaton.
For example, in Equation (\ref{eq:subdistributivity}), $x,y$ and $z$ are automata and so is $xy + xz$. We show that the axiom system of concurrent Kleene algebra, weakened to allow for the presence of probability, is sound with respect to those probabilistic automata. Our use of probabilistic automata is similar to models where the resolution of probability and nondeterminism can be interleaved; concurrent composition of automata models CSP synchronisation~\cite{Hoa78} in that context. Finally we use a notion of rooted $\eta$-simulation to interpret the inequality $\leq$ used in algebraic inequations. Our {\bf second contribution} is to explore some applications of our axiomatisation of weak concurrent Kleene algebra, to explain our definition of rooted $\eta$-simulation in terms of may testing~\cite{Nic83}, and to demonstrate the proof system on Rabin's distributed consensus protocol~\cite{Rab82}. One of the outcomes of this study is to expose the tensions between the various aspects of system execution. Some of the original concurrent Kleene algebra axioms~\cite{Hoa09} required for the concurrency operator now fail to be satisfiable in the presence of probabilistic effects and of the synchronisation supported by the interchange law. For example, the term $1$ from Kleene algebra (interpreted as ``do nothing'') can no longer be a neutral element for the concurrency operator $\|$ --- we only have the specific equality $1 \| 1 = 1$ and not the more general $1 \| x = x$. In fact we chose to preserve the full interchange law in our choice of axioms because it already captures so many notions of concurrency, including exact parallelism and synchronisation, suggesting that it is a property of general concurrent interactions. A feature of our approach is to concentrate on broad algebraic structures in order to understand how various behaviours interact, rather than to study precise quantitative behaviours. Thus we do not include an explicit probabilistic choice operator in the signature of the algebra --- probability occurs explicitly only in the concrete model as a special kind of asynchronous probabilistic action combined with internal events (events that the environment cannot access). This allows the specification of complex concurrent behaviour to be simplified using applications of weak distributivity embodied by Equation (\ref{eq:subdistributivity}) and/or the interchange law, as illustrated by our case study. Finally we note that the axiomatisation we give is entirely in terms of first-order expressions and therefore is supported by first-order reasoning. Thus all of our algebraic proofs have been implemented within the Isabelle/HOL theorem proving environment. These proofs can be found in a repository of formalised algebraic theorems.~\footnote{\url{http://staffwww.dcs.shef.ac.uk/people/G.Struth/isa/}} In Section \ref{sec:axiomatisation} we explore the axiomatisation of the new algebra. It is essentially a mixture of probabilistic and concurrent Kleene algebras. Sections \ref{sec:concrete-model} and \ref{sec:soundness} are devoted to showing the consistency of our approach. A concrete model based on automata and $\eta$-simulation is constructed. In Section \ref{sec:probabilistic-aut}, we compare our approach with probabilistic automata (automata that exhibit explicit probability) and probabilistic simulation. We conclude that, up to some constraint, the concrete model is a very special case of that more general model.
In Sections \ref{sec:algebraic-testing} and \ref{sec:rabin-protocol}, we present some applications; in particular an algebraic version of may testing is studied and variations of the specification of Rabin's protocol are explored. In this paper $x,y,$ etc.\ represent algebraic expressions or variables. Terms are denoted $s,t,$ etc. Letters $a,b,$ etc.\ stand for actions and $\tau$ represents an internal action. An automaton associated with a term or an expression is usually denoted by the same letter. Other notation is introduced as we need it. In this extended abstract we can only explain the main properties of weak concurrent Kleene algebra and sketch the construction of the automaton model. Detailed constructions and proofs of all statements in this paper can be found in an appendix. \section{Axiomatisation}\label{sec:axiomatisation} A Kleene algebra is a structure that encodes algebraically the sequential behaviour of a system. It is generally presented in the form of an idempotent~\footnote{Idempotence refers to the operation $+$, i.e. $x + x = x$.} semiring structure $(K,+,\cdot,0,1)$ where $x\cdot y$ (sequential composition) is sometimes written using juxtaposition $xy$ in expressions. The term $0$ is the neutral element of $+$ and $1$ is the neutral element of $\cdot$. The semiring is then endowed with a unary Kleene star $*$ representing finite iteration to form a Kleene algebra. This operator is restricted by the following axioms: \begin{eqnarray} \textrm{Left unfold:}\qquad 1 + xx^* & = & x^*, \label{eq:unfold}\\ \textrm{Left induction:}\hspace{1.5mm}\qquad xy\leq y&\Rightarrow &x^*y\leq y,\label{hf:linduction} \end{eqnarray} where $x\leq y$ if and only if $x + y = y$. In the sequel our interpretations will be over a version of probabilistic automata. In particular we will interpret $\leq$ and $=$ as rooted $\eta$-simulation and simulation equivalence respectively. Often, the duals of (\ref{eq:unfold}-\ref{hf:linduction}), i.e. $1 + x^*x = x^*$ and $yx\leq y\Rightarrow yx^*\leq y$, are also required. However, (\ref{eq:unfold}) and (\ref{hf:linduction}) are sufficient here and the dual laws follow from continuity of sequential composition for finite automata. In a Kleene algebra, the semiring structure supports two distributivity laws: \begin{eqnarray} \textrm{Left distributivity:}\qquad\hspace{0.8mm} xy + xz & = & x(y + z), \label{eq:ldist}\\ \textrm{Right distributivity:}\qquad (x + y)z & = & xz + yz. \label{eq:rdist} \end{eqnarray} Equation (\ref{eq:ldist}), however, is not valid in the presence of probability. For example, compare the behaviour of probabilistic choice in the diagrams below. Here, $\mathtt{flip}_p$ denotes the process that flips a $p$-biased coin, which we can represent by a probabilistic automaton (details are given in Section \ref{sec:concrete-model}). \begin{figure}[h]\label{fig:dist} $$ \xymatrix{ &\ar[dl]_{\mathtt{flip}_p}\ar[dr]^{\mathtt{flip}_p}& &\hspace{0.5cm}&&\ar[d]_{\mathtt{flip}_p}&\\ \ar[d]_y &&\ar[d]^z &&&\ar[dl]_y\ar[dr]^z &\\ && && && } $$ \end{figure} In the right diagram, the choice between $y$ and $z$ can be based on the outcome of the coin flip, but such resolution is not possible in the left-hand diagram.
We express the greater range of possible outcomes by the general inequation (\ref{eq:subdistributivity}); specifically, here it becomes \begin{equation}\label{eq:lsubdist} (\mathtt{flip}_p)y + (\mathtt{flip}_p)z \leq (\mathtt{flip}_p)(y + z).~\footnote{We have abused notation in this example by using $\mathtt{flip}_p$ to represent both an action and an automaton which performs that action.} \end{equation} As mentioned above, the zero of a Kleene algebra satisfies: \begin{eqnarray} \textrm{Left annihilation:}\qquad 0x & = & 0, \label{eq:lzero}\\ \textrm{Right annihilation:}\qquad x0 & = & 0. \label{eq:rzero} \end{eqnarray} In our interpretation, which includes concurrency, we assume that $0$ captures \emph{deadlock}. However, axiom (\ref{eq:rzero}) is no longer appropriate because we should be able to differentiate a process that performs an action and then deadlocks from a process that is deadlocked from the start. \begin{definition} A weak probabilistic Kleene algebra is a structure $(K,+,\cdot,*,0,1)$ that satisfies the axioms of Kleene algebra, except that there is no left distributivity (it is replaced by (\ref{eq:subdistributivity})) and Equation (\ref{eq:rzero}) does not hold generally. \end{definition} A concurrency operator was added to Kleene algebra by Hoare et al~\cite{Hoa09}. Our concurrency operator $\|$ satisfies the following standard axioms: \begin{eqnarray} \textrm{Associativity:}\qquad x \| (y \| z) & = & (x \| y) \| z, \label{eq:par-assoc}\\ \textrm{Commutativity:}\hspace{1.3cm} x \| y & = & y \| x, \label{eq:par-comm}\\ \textrm{One-idempotence:}\hspace{1.35cm} 1 \| 1 & = & 1.\label{eq:par-1} \end{eqnarray} In~\cite{Hoa09}, $\|$ satisfies the identity $1\|x = x$, which we do not have here because, in the concrete model, we will interpret $\|$ as the synchronisation operator found in CSP~\cite{Hoa78}. However, we still maintain the instance of that law in the special case $x = 1$ (see axiom~(\ref{eq:par-1})) where $1$ is interpreted as ``do nothing''. Next we have the axioms dealing with the interaction of $\|, +$ and $\cdot$: \begin{eqnarray} \textrm{Monotonicity:}\qquad x \| y + x \| z & \leq & x \| (y + z) \label{eq:par-dist}\\ \textrm{Interchange law:}\hspace{0.3cm} (x \| y) (u \| v) & \leq & (x u) \| (y v)\label{eq:exchange-law} \end{eqnarray} The interchange law is the most interesting axiom of concurrent Kleene algebra. In fact it allows the derivation of many properties involving $\|$. To illustrate this in the probabilistic context, consider a probabilistic vending machine $\mathtt{VM}$ which we describe as the expression $$\mathtt{VM}\ =\ \mathtt{coin}\cdot\mathtt{flip}_p\cdot(\tau_h\cdot(\mathtt{tea}+1) + \tau_t\cdot(\mathtt{coffee}+1))$$ where $\mathtt{coin},\mathtt{tea},\mathtt{coffee},\tau_h,\tau_t$ and $\mathtt{flip}_p$ are all represented by automata. That is, the vending machine accepts a coin and then decides internally whether it will enable the button coffee or tea. The decision is determined by the action $\mathtt{flip}_p$~\footnote{i.e. the automaton that performs a $\mathtt{flip}_p$ action.} which (as explained later) enables either $\tau_h$ or $\tau_t$. The actions $\tau_t$ and $\tau_h$ are internal and the user cannot access them. Now, a user who wants to drink tea is specified as $$\mathtt{U}\ =\ \mathtt{coin}\cdot(\mathtt{tea}+1).$$ The system becomes $\mathtt{U}\|\mathtt{VM}$ where the concurrent operation is CSP-like and synchronises on $\mathtt{coin},\mathtt{tea}$ and $\mathtt{coffee}$.
The interchange law together with the other axioms and some system assumptions imply the following inequation: \begin{equation}\label{eq:vm} \mathtt{U}\|\mathtt{VM}\ \geq\ \mathtt{coin}\cdot\mathtt{flip}_p\cdot(\tau_h\cdot(\mathtt{tea}+1) + \tau_t) \end{equation} which is proved automatically in our repository. In other words, the user will only be satisfied with \textit{probability at least} $p$, since the right-hand side says that the tea action can only be enabled provided that $\tau_h$ is enabled, and in turn that is determined by the result of the $\mathtt{flip}_p$ action. Now we are ready to define our algebra. \begin{definition} A weak concurrent Kleene algebra is a weak probabilistic Kleene algebra $(K,+,\cdot,*,0,1)$ with a concurrency operator $\|$ satisfying (\ref{eq:par-assoc}-\ref{eq:exchange-law}). \end{definition} We assume the operator precedence $*<\cdot<\|<+$. \begin{proposition}\label{pro:elementary-consequences} Let $s,t$ be terms; the following properties hold in weak concurrent Kleene algebra. \begin{enumerate} \item All the operators are monotonic. \item $(s^*\|t^*)^* = s^*\|t^*$.\label{eq:star-idem} \item $(s\|t)^*\leq s^*\| t^*$.\label{eq:subdist} \item $(s + t)^* = (s^*t^*)^*$. \end{enumerate} \end{proposition} \section{Concrete Model}\label{sec:concrete-model} \subsection{Semantic Space}\label{subsec:semantic-space} We use nondeterministic automata to construct a concrete model. An automaton is denoted by a tuple \begin{displaymath} (P,\longrightarrow,i,F) \end{displaymath} where $P$ is a set of states. The set $\longrightarrow\subseteq P\times\Sigma\times P$ is a transition relation and we write $x\trans{a } y$ when there is a transition, labelled by $a$, from state $x$ to state $y$. The alphabet $\Sigma$ is left implicit and considered to be fixed for every automaton. The state $i\in P$ is the initial state and $F\subseteq P$ is the set of final states of the automaton. In the sequel, we will denote an automaton $(P,\longrightarrow,i,F)$ by its set of states $P$ when no confusion is possible. The actions in the alphabet $\Sigma$ are categorised into three kinds: \begin{itemize} \item \textit{internal}: actions that will be ``ignored'' by the simulation relation (as in $\tau_h$ and $\tau_t$). Internal actions are never synchronised by $\|$. \item \textit{external}: actions that \emph{can} be synchronised. Probabilistic actions are external (as in $\mathtt{flip}_p$) but they are \emph{never} synchronised. \item \textit{synchronised}: external actions that will be synchronised when applying $\|$ (as in $\mathtt{coin},\mathtt{tea}$ and $\mathtt{coffee}$). These actions are determined by a set of external actions $A$. More specifically, $\|$ refers to $\pr{A} $ which we assume is fixed and given beforehand. \end{itemize} The special case of probabilistic choice is modelled by combining probabilistic and internal actions. That is, a process that does $a$ with probability $p$ and does $b$ with probability $1-p$ is interpreted as the following automaton \begin{figure}[h] $$ \xymatrix{ & \ar[d]^{\mathtt{flip}_p} & \\ & \ar[dl]_{\tau_h}\ar[dr]^{\tau_t} &\\ \ar[d]_{a}&&\ar[d]^{b}\\ & & }$$ \end{figure} where $\mathtt{flip}_p\in\Sigma$ represents the action of flipping a $p$-biased coin which produces head with probability $p$ and tail with probability $1-p$. The internal actions $\tau_t$ and $\tau_h$ are enabled according to the result of $\mathtt{flip}_p$. Hence only one of $\tau_h$ and $\tau_t$ will be enabled just after the coin flip.
Since $\tau_t$ and $\tau_h$ are internal actions, the choice is internal and based upon the outcome of $\mathtt{flip}_p$. The important facts here are that the choice after $\mathtt{flip}_p$ is internal, so it can be based on the probabilistic outcome of $\mathtt{flip}_p$, and that the environment cannot interfere with that choice. These two behavioural characteristics are what we consider to be the most general features of probability in a concurrent setting, and they are those which we axiomatise and record in our concrete model. Next, we impose some conditions on the automata to ensure soundness. \begin{itemize} \item[-]\label{hc:reachable} reachability: every state of the automaton is reachable by following a finite path from the initial state. \item[-]\label{hc:initial} initiality: there is no transition that leads to the initial state. This means that $a^*$ corresponds to the automaton associated with $1 + aa^*$ rather than a self loop labeled by $a\in\Sigma$. \end{itemize} We denote by $\mathbf{Aut}$ the set of automata satisfying these two conditions. The next step is to define the operators that act on $\mathbf{Aut}$. We use the standard inductive constructions found in~\cite{Coh09,Gla90,Rab11} and the diagrams illustrating the constructions are given in the appendix. \begin{itemize} \item[]\textbf{Deadlock: $0$}\\ This is the automaton that has only one state, namely the initial state, and no transition at all. It is the tuple $(\{i\}, \emptyset,i,\emptyset)$. \item[]\textbf{Skip: $1$}\\ This is the automaton that has only one state $i$ which is both initial and final. This automaton has no transitions, i.e.\ it is denoted by $(\{i\},\emptyset,i,\{i\})$. \item[]\textbf{Single action:} \\ The automaton associated with $a$ is $i\trans{a} \circ$ where $i$ is the initial state and $\circ$ is a final state. It is the tuple $(\{i,\circ\},\{i\trans{a} \circ\}, i, \{\circ\})$. \item[]\textbf{Addition: $P+Q$}\\ This automaton is obtained using the standard construction of identifying the initial states of $P$ and $Q$. (This is possible due to the initiality property.) \item[]\textbf{Multiplication: $PQ$ (or $P\cdot Q$)}\\ This automaton is constructed in the standard way of identifying copies of the initial state of $Q$ with the final states of $P$. \item[]\textbf{Concurrency: $P\pr{A} Q$} \\ This automaton is constructed as in CSP~\cite{Hoa78}. It is a sub-automaton of the Cartesian product of $P$ and $Q$. The initial state is $(i_P,i_Q)$ and the final states are the reachable elements of $F_P\times F_Q$. Notice that the set $A$ never contains probabilistic actions. Further explanation about $\pr{A} $ is given below. \item[]\textbf{Kleene star: $P^*$} \\This automaton is the result of repeating $P$, allowing a successful termination after each (possibly empty) full execution of $P$. The initial state of $P^*$ is final and copies of the initial state of $P$ are identified with the final states of $P$. \end{itemize} All automata begin with an initial state and end in some final or deadlock state. Our main use of final states is in the construction of sequential composition and Kleene star. The concurrency operator $\pr{A} $ synchronises transitions labeled by an action in $A$ and interleaves the others (including internal transitions). As in CSP, a synchronised transition waits for a corresponding synchronisation action from the other argument of $\pr{A} $.
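To make these constructions concrete, the following self-contained Python sketch (ours, not part of the formal development; the representation and helper names are our assumptions) implements $\mathbf{Aut}$ and the operators on explicitly represented finite automata. States created by the combinators are tagged so that initiality and reachability are preserved.
\begin{verbatim}
from itertools import count

_ids = count()

class Aut:
    """An automaton (P, -->, i, F) as in this subsection."""
    def __init__(self, states, trans, init, finals):
        self.states, self.trans = set(states), set(trans)
        self.init, self.finals = init, set(finals)

def zero():                        # deadlock: one state, no transitions
    i = next(_ids); return Aut({i}, set(), i, set())

def one():                         # skip: the initial state is final
    i = next(_ids); return Aut({i}, set(), i, {i})

def atom(a):                       # single action: i --a--> o
    i, f = next(_ids), next(_ids)
    return Aut({i, f}, {(i, a, f)}, i, {f})

def plus(P, Q):                    # identify the two initial states
    i = next(_ids)
    l = lambda x: i if x == P.init else ('L', x)
    r = lambda y: i if y == Q.init else ('R', y)
    return Aut({l(x) for x in P.states} | {r(y) for y in Q.states},
               {(l(x), a, l(y)) for (x, a, y) in P.trans}
               | {(r(x), a, r(y)) for (x, a, y) in Q.trans},
               i, {l(x) for x in P.finals} | {r(y) for y in Q.finals})

def seq(P, Q):                     # glue copies of Q's initial state onto P's finals
    l, r = (lambda x: ('L', x)), (lambda y: ('R', y))
    trans = {(l(x), a, l(y)) for (x, a, y) in P.trans}
    for (x, a, y) in Q.trans:      # by initiality, y != Q.init
        for s in ([l(f) for f in P.finals] if x == Q.init else [r(x)]):
            trans.add((s, a, r(y)))
    finals = {r(y) for y in Q.finals if y != Q.init}
    if Q.init in Q.finals:         # Q accepts the empty behaviour
        finals |= {l(f) for f in P.finals}
    return Aut({l(x) for x in P.states}
               | {r(y) for y in Q.states if y != Q.init},
               trans, l(P.init), finals)

def star(P):                       # finals behave like the (now final) initial state
    trans = set(P.trans)
    for (x, a, y) in P.trans:
        if x == P.init:
            trans |= {(f, a, y) for f in P.finals}
    return Aut(P.states, trans, P.init, P.finals | {P.init})

def par(P, Q, A):                  # CSP-style synchronisation on the frame A
    init = (P.init, Q.init)
    states, trans, todo = {init}, set(), [init]
    while todo:                    # only reachable product states are kept
        x, y = todo.pop()
        moves = [((v, y), a) for (u, a, v) in P.trans if u == x and a not in A]
        moves += [((x, w), a) for (u, a, w) in Q.trans if u == y and a not in A]
        moves += [((v, w), a) for (u, a, v) in P.trans if u == x and a in A
                              for (s, b, w) in Q.trans if s == y and b == a]
        for (nxt, a) in moves:
            trans.add(((x, y), a, nxt))
            if nxt not in states:
                states.add(nxt); todo.append(nxt)
    return Aut(states, trans, init,
               {s for s in states if s[0] in P.finals and s[1] in Q.finals})
\end{verbatim}
For instance, \texttt{par(one(), atom('a'), \{'a'\})} produces an automaton with a single non-final state and no transitions, i.e.\ deadlock, which is exactly the example discussed next.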
This is another reason why we do not have $1\pr{\{a\}} P = P$: if $P = i_P\trans{a} \circ$ and $i_P$ is not a final state, then $$1\pr{\{a\}} P = (\{(i,i_P)\}, \emptyset, (i,i_P),\emptyset) = 0.$$ \begin{proposition}\label{pro:hc-welldef} The operations of weak concurrent Kleene algebra are well defined on $\mathbf{Aut}$; that is, if $P,Q\in\mathbf{Aut}$ then $P+Q, PQ, P\pr{A} Q$ and $P^*$ are elements of $\mathbf{Aut}$. \end{proposition} The proof consists of checking that $P+Q, PQ, P\|Q$ and $P^*$ satisfy the reachability and initiality conditions whenever $P$ and $Q$ do (see Proposition \ref{apro:stability} in the appendix). In the sequel, whenever we use an unframed concurrency operator $\|$, we mean that the frame $A$ has been given and remains fixed. \subsection{Equivalence}\label{subsec:notion-of-equality} The previous subsection has given us the objects and operators needed to construct our concrete model. Next we turn to the interpretation of equality for our concrete interpretation. Following the works found in~\cite{Coh09,Rab11,Mil71}, we again use a simulation-like relation to define valid equations in the concrete model. More precisely, due to the presence of internal actions, we will use an \textit{$\eta$-simulation} as the basis for our equivalence. Before we give the definition of simulation, we need the following notation. Given states $x$ and $y$, we write $x\Rightarrow y$ if there exists a path, possibly empty, from $x$ to $y$ that is labelled by internal actions only. This notation is also used in~\cite{Gla90} with the same meaning. \begin{definition}\label{df:sim} Let $P,Q$ be automata; a relation $S\subseteq P\times Q$ (or $S:P\rightarrow Q$) is called an \textbf{$\eta$-simulation} if \begin{itemize} \item[--] $(i_P,i_Q)\in S$, \item[--] if $(x,y)\in S$ and $x\trans{a} x'$ then \begin{itemize} \item[a)] if $a$ is internal then there exists $y'$ such that $y\Rightarrow y'$ and $(x',y')\in S$, \item[b)] if $a$ is external then there exist $y_1$ and $y'$ in $Q$ such that $y\Rightarrow y_1\trans{a} y'$ and $(x,y_1)\in S$ and $(x',y')\in S$. \end{itemize} \item[--] if $(x,y)\in S$ and $x\in F_P$ then $y\in F_Q$. \end{itemize} A simulation $S$ is \textbf{rooted} if $(i_P,y)\in S$ implies $y = i_Q$. If there is a rooted simulation from $P$ to $Q$ then we say that $P$ is simulated by $Q$ and we write $P\leq Q$. Two processes $P$ and $Q$ are \textbf{simulation equivalent} if $P\leq Q$ and $Q\leq P$, and we write $P\equiv Q$. In the sequel, any rooted $\eta$-simulation will be referred to simply as a simulation. \end{definition} Definition \ref{df:sim} is a variant of the $\eta$-simulations of~\cite{Gla90}, where property (a) instead reads: \begin{equation}\label{pr:property-a} \textrm{if } a \textrm{ is internal then } (x',y)\in S. \end{equation} The identity relation (drawn as a dotted arrow) in the following diagram is a simulation relation satisfying Definition \ref{df:sim}, but it is not a simulation in the sense of~\cite{Gla90}. \begin{figure}[h] $$\xymatrix{ \ar[d]^\tau\ar@{.>}[rr]&&\ar[d]^\tau\\ \circ\ar@{.>}[rr]&&\circ }$$ \end{figure} We need the identity relation to be a simulation here because, in our proof of soundness, more complex simulations are constructed from identity relations.
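The definition can also be tested directly on the small automata built with the Python sketch above. The following brute-force procedure (ours, an assumption rather than part of the formal development; it is suitable only for small finite automata) computes the greatest rooted $\eta$-simulation by pruning the full relation:
\begin{verbatim}
def tau_closure(A, internal):
    # x => y : a possibly empty path labelled by internal actions only
    reach = {x: {x} for x in A.states}
    changed = True
    while changed:
        changed = False
        for (x, a, y) in A.trans:
            if a in internal:
                for s in A.states:
                    if x in reach[s] and y not in reach[s]:
                        reach[s].add(y); changed = True
    return reach

def simulated(P, Q, internal=frozenset({'tau'})):
    """True iff P <= Q, i.e. a rooted eta-simulation P -> Q exists."""
    toQ = tau_closure(Q, internal)
    S = {(x, y) for x in P.states for y in Q.states
         if not (x == P.init and y != Q.init)            # rootedness
         and not (x in P.finals and y not in Q.finals)}  # finals preserved
    changed = True
    while changed:                  # greatest fixpoint: prune bad pairs
        changed = False
        for (x, y) in list(S):
            for (u, a, v) in P.trans:
                if u != x:
                    continue
                if a in internal:   # clause a) of the definition
                    ok = any((v, y2) in S for y2 in toQ[y])
                else:               # clause b) of the definition
                    ok = any((x, y1) in S and (v, y2) in S
                             for y1 in toQ[y]
                             for (u1, b, y2) in Q.trans
                             if u1 == y1 and b == a)
                if not ok:
                    S.discard((x, y)); changed = True
                    break
    return (P.init, Q.init) in S
\end{verbatim}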
\begin{proposition}\label{pro:eta-sim-welldef} The following statements hold. \begin{enumerate} \item The relational composition of two rooted $\eta$-simulations is again a rooted $\eta$-simulation. That is, if $S,T$ are rooted $\eta$-simulations then $S\circ T$ is also a rooted $\eta$-simulation, where $\circ$ denotes relational composition. \item The simulation relation $\leq$ is a preorder on $\mathbf{Aut}$. \end{enumerate} \end{proposition} Proposition \ref{pro:eta-sim-welldef} is proven in Proposition \ref{apro:sim-equivalence} of the appendix. Therefore, $\equiv$ as determined by Definition \ref{df:sim} is an equivalence. In fact, we prove in the following proposition that it is a congruence with respect to $+$. \begin{proposition}\label{pro:sim-cong} The equivalence relation $\equiv$ is a congruence with respect to $+$, and $P\leq Q$ iff $P + Q\equiv Q$. \end{proposition} The proof adapts and extends the one found in \cite{Gla90}; the specialised version for our case is Proposition \ref{apro:sim-congruence} in the appendix. It is well documented that $\eta$-simulation is not a congruence without the rootedness condition~\cite{Gla90}. A typical example is given by the expressions $\tau a + \tau b$ and $\tau(a +b)$. The automata associated with these expressions are equivalent under non-rooted $\eta$-simulation. The manipulation of probabilistic actions is also an important facet of our model. We assume that probabilistic actions are not synchronised, and in that respect they are similar to internal actions. However, probabilistic actions cannot be treated as internal, as the following example illustrates. Consider the action $\mathtt{flip}_{1/2}$ which flips a fair coin. If $\mathtt{flip}_{1/2}$ were an internal action then the inequality $$(\mathtt{flip}_{1/2})(\tau a + \tau b)\leq (\mathtt{flip}_{1/2})\tau a + (\mathtt{flip}_{1/2})\tau b$$ would be valid when interpreted in the concrete model. In other words, we would have the following simulation: \begin{figure}[h] $$ \xymatrix{ &\ar[d]_{\mathtt{flip}_{1/2}}\ar@{.>}[rrrr]&&& &\ar[dl]_{\mathtt{flip}_{1/2}}\ar[dr]^{\mathtt{flip}_{1/2}}&\\ &\ar[ld]_{\tau}\ar@{.>}[rrurr]\ar[dr]^{\tau}&&& \ar[d]_{\tau}&&\ar[d]^{\tau}\\ \ar[d]_a\ar@{.>}@/_/[rrrr]\ar@{.>}@/_/[uurrrrr]&&\ar[d]^b\ar@{.>}[uurrr]\ar@{.>}@/_/[rrrr]&& \ar[d]^a&&\ar[d]_b\\ \ar@{.>}@/_/[rrrr]&&\ar@{.>}@/_/[rrrr]&& && } $$ \end{figure} But this relationship (which implies distributivity of $\mathtt{flip}_{1/2}$ through $+$) does not respect the desired behaviour of probability which, as we explained earlier, satisfies only a weaker form of distributivity. Whence, we assume that probabilistic actions such as $\mathtt{flip}_{1/2}$ are among the external actions which will never be synchronised.
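Using the two sketches above, both phenomena just discussed can be replayed mechanically (again ours, with \texttt{'tau'} the single internal action and \texttt{'flip'} an external, never-synchronised probabilistic action):
\begin{verbatim}
# rootedness: tau a + tau b <= tau (a + b) holds, the converse fails
ta, tb = seq(atom('tau'), atom('a')), seq(atom('tau'), atom('b'))
t_ab = seq(atom('tau'), plus(atom('a'), atom('b')))
assert simulated(plus(ta, tb), t_ab)
assert not simulated(t_ab, plus(ta, tb))

# flip is external: it only sub-distributes over +, as in the
# sub-distributivity axiom, and never distributes fully
fa, fb = seq(atom('flip'), atom('a')), seq(atom('flip'), atom('b'))
f_ab = seq(atom('flip'), plus(atom('a'), atom('b')))
assert simulated(plus(fa, fb), f_ab)
assert not simulated(f_ab, plus(fa, fb))
\end{verbatim}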
\section{Soundness}\label{sec:soundness} In this section, we prove that the set $\mathbf{Aut}$, endowed with the operators defined in Subsection \ref{subsec:semantic-space} modulo rooted $\eta$-simulation equivalence (Subsection \ref{subsec:notion-of-equality}), forms a weak concurrent Kleene algebra. The first part is to prove that $\mathbf{Aut}$ is a weak probabilistic Kleene algebra. \begin{proposition}\label{pro:pka-soundness} $(\mathbf{Aut},+,\cdot,*,0,1)$ is a weak probabilistic Kleene algebra. \end{proposition} The proof consists of detailed verifications of the axioms for weak probabilistic Kleene algebra (see Proposition \ref{apro:weak-pka} in the appendix). The second part consists of proving that $\|$ satisfies equations (\ref{eq:par-assoc}-\ref{eq:exchange-law}). Associativity depends heavily on the fact that both concurrent compositions involved in $x\|y\|z$ have the same frame set. For instance, let $\Sigma = \{a,b,c\}$. The identities $$(a\pr{\{a\}} b ) \pr{\{c\}} a = ab0 + ba0$$ and $$a\pr{\{a\}} (b \pr{\{c\}} a) = ab + ba$$ are valid in the concrete model. Hence, the first process will always go into a deadlock state, whereas the second one will always terminate successfully. Therefore, to have associativity, the concurrency operator must have a fixed frame. \begin{proposition}\label{pro:soundness-par} $(\mathbf{Aut}, +,\cdot,\pr{A} ,1 )$ satisfies equations (\ref{eq:par-assoc}-\ref{eq:exchange-law}) modulo rooted $\eta$-simulation equivalence for any set of synchronisable actions $A\subseteq\Sigma$ (i.e. containing no probabilistic actions). \end{proposition} Associativity is mainly a consequence of the fact that there is only one frame for $\|$. The other axioms need to be checked thoroughly (see Proposition \ref{apro:parallel-algebra}). Our soundness result directly follows from these two propositions. \begin{theorem} $(\mathbf{Aut},+,\cdot,\pr{A} ,*,0,1)$ is a weak concurrent Kleene algebra for any set of synchronisable actions $A\subseteq\Sigma$. \end{theorem} In this theorem, the frame $A$ is fixed beforehand. In other words, a model of weak concurrent Kleene algebra is constructed for each possible choice of $A$. In particular, if $A$ is empty then the concurrency operator interleaves all actions, i.e. no actions are synchronised. This particular model satisfies the identity $1\pr{\emptyset} x = x$ of the original concurrent Kleene algebra found in~\cite{Hoa09}. The sequential and concurrent compositions actually have stronger properties in the concrete model. If we consider finite automata only --- automata with finitely many states and transitions --- then we can show that these two operators are \textit{conditionally Scott continuous} in the sense of~\cite{Rab11} (see Propositions \ref{pro:mult-cont} and \ref{pro:par-continuous} in the appendix). \section{Relationship to Probabilistic Processes}\label{sec:probabilistic-aut} Firstly, it is shown in~\cite{Mci04} that a probabilistic choice $a\pc{p} b$ simulates the nondeterministic choice $a+b$. A similar result also holds in our setting. In the absence of internal transitions, simulation has also been defined elsewhere~\cite{Coh09,Gla90,Rab11}; we will refer to it as strong simulation. Recall that $(\mathtt{flip}_p)a+ (\mathtt{flip}_p)b\leq(\mathtt{flip}_p)(a+b)$ is a general property of probabilistic Kleene algebra~\cite{Mci05}, so it is valid under strong simulation equivalence~\cite{Coh09,Rab11}. Due to the absence of internal actions, the middle part of the diagram of Figure \ref{fig:figure1} does not exist with respect to strong simulation equivalence. In the context of Definition \ref{df:sim}, the right-hand simulation of Figure \ref{fig:figure1} is the refinement of probabilistic choice by nondeterminism. This example gives an explicit distinction between $(\mathtt{flip}_p)(a+b)$ and $(\mathtt{flip}_p) a+ (\mathtt{flip}_p) b$, by considering the fact that the choice in $(\mathtt{flip}_p) (a + b)$ can depend on the probabilistic outcome of $(\mathtt{flip}_p)$, but this is not the case for $(\mathtt{flip}_p) a + (\mathtt{flip}_p)b$.
\begin{figure*} $$ \xymatrix{ & & &&& \ar[d]^{\mathtt{flip}_p}\ar@{.>}[rrrr] & && &\ar[d]^{\mathtt{flip}_p}&\\ &\ar[dl]_{\mathtt{flip}_p}\ar[dr]^{\mathtt{flip}_p}\ar@{.>}[urrrr]& &&& \ar[dl]_{\tau}\ar[dr]^{\tau}\ar@{.>}[rrrr] & && &\ar[ld]_{a}\ar[rd]^b &\\ \ar[d]_a\ar@{.>}[urrrrr]\ar@{.>}@/_/[rrrr]&&\ar[d]^b\ar@{.>}@/_/[rrrr]\ar@{.>}[urrr] &&\ar[d]_{a}\ar@{.>}[urrrrr]&&\ar[d]^{b}\ar@{.>}[urrr] && & &\\ \ar@{.>}@/_/[rrrr] &&\ar@{.>}@/_/[rrrr] && \ar@{.>}@/_/[urrrr]& &\ar@{.>}@/_/[urrrr] && & & }$$ \caption{Refinements between probabilistic choice and nondeterminism.}\label{fig:figure1} \end{figure*} Secondly, we discuss the relationship between our concrete model and probabilistic automata. Recall that our interpretation of probability lies in the use of actions that implicitly contain probabilistic information. In its most general form, a probabilistic choice between $n$ possibilities can be written as $$\mathtt{flip}_{p_1,\dots,p_n}\cdot(\tau_1\cdot a_1+\dots+\tau_n\cdot a_n)$$ where $\sum_ip_i = 1$. In this algebraic expression, we implicitly ensure that each guard $\tau_i$ is enabled with the corresponding probability $p_i$. Therefore, if these $\tau_i$'s are not found directly after the execution of the probabilistic action then matching them with the corresponding $p_i$ becomes a difficult task. We call a \emph{$p$-automaton}~\footnote{The name $p$-automaton suggests a probabilistic automaton and, as we will see later on, there is a relationship between the two.} a transition system as per the definition of Subsection \ref{subsec:semantic-space} such that if a probabilistic action has associated $\tau$ transitions then all of them follow that action directly. Another complication also arises from the use of these $\tau_i$'s. Consider the following two processes $$\mathtt{flip}_{p_1,p_2}\cdot(\tau_1\cdot a+\tau_2\cdot b)$$ and $$\mathtt{flip}_{p_1,p_2}\cdot(\tau_1\cdot b + \tau_2\cdot a)$$ where $p_1+p_2 = 1$. We can construct a (bi)simulation relation between the corresponding automata even though the probabilities of doing an $a$ are different. Hence we need to modify the definition of $\eta$-simulation (Definition \ref{df:sim}) to account for this particular structure. \begin{definition}\label{df:p-sim} A $p$-simulation $S$ between two $p$-automata $P,Q$ is an $\eta$-simulation such that if \begin{itemize} \item[-] $x\trans{\mathtt{flip}_{p_1,\dots,p_n}} x'\trans{\tau_i} x_i''$ is a transition in $P$, \item[-] $y\trans{\mathtt{flip}_{p_1,\dots,p_n}} y'\trans{\tau_i} y_i''$ is a transition in $Q$, \item[-] and $(x,y)\in S$ \end{itemize} then $(x_i'',y_i'')\in S$, for each $i=1,\dots,n$. \end{definition} This definition ensures that the probability of doing a certain action from $y$ is greater than that of doing the same action from $x$. With proofs similar to those of the previous sections, we can show that the set of $p$-automata modulo $p$-simulation forms again a weak concurrent Kleene algebra. We denote by $p$-$\mathbf{Aut}$ the set of $p$-automata modulo $p$-simulation. We will now show that this definition is a very special case of probabilistic simulation on probabilistic automata. To simplify the comparison, we assume that $\tau$ transitions occur only as part of these probabilistic choices in $p$-automata. \begin{definition} A probabilistic automaton is defined as a tuple $(P,\longrightarrow,\Delta,F)$ where $P$ is a set of states, $\longrightarrow$ is a set of labelled transitions from states to distributions~\footnote{We assume that all distributions are finitely supported.} of states, i.e.
$\longrightarrow\subseteq P\times\Sigma\times\mathcal{D} P$, $\Delta$ is the initial distribution and $F\subseteq P$ is a set of final states. \end{definition} The notion of simulation also exists for probabilistic automata~\cite{Seg94}; in particular, simulation and failure simulation are discussed in~\cite{Den07}, where they are proven to be equivalent to may and must testing respectively. To give a proper definition of probabilistic simulation, we need the following notations, which are borrowed from~\cite{Den07} and~\cite{Gla90}. Given a relation $R\subseteq P\times\mathcal{D} Q$, the lifting of $R$ is a relation $\hat{R}\subseteq \mathcal{D} P\times \mathcal{D} Q$ such that $\phi \hat{R} \psi$ iff: \begin{itemize} \item[-] $\phi = \sum_xp_x\delta_x$,~\footnote{We denote by $\delta_x$ the point distribution concentrated on $x$.} \item[-] for each $x\in\mathrm{supp}(\phi)$ (the support of $\phi$) there exists $\psi_x\in\mathcal{D} Q$ such that $x R\psi_x$, \item[-] $\psi = \sum_xp_x\psi_x$. \end{itemize} Similarly, the lifting of a transition relation $\trans{\tau} $ is denoted $\trans{\hat{\tau}} $, whose reflexive transitive closure is denoted $\ttrans{\hat{\tau}} $. For each external action $a$, we write $\ttrans{\hat{a}} $ for the sequence $\ttrans{\hat{\tau}} \trans{a} $. \begin{definition}\label{df:probsim} A probabilistic simulation $S$ between two probabilistic automata $P$ and $Q$ is a relation $S\subseteq P\times\mathcal{D} Q$ such that: \begin{itemize} \item[-] $(\Delta_P,\Delta_Q)\in \hat{S}$, \item[-] if $(x,\psi)\in S$ and $x\trans{a} \phi$ then there exists $\psi'\in\mathcal{D} Q$ such that $\psi\ttrans{\hat{a}} \psi'$ and $(\phi,\psi')\in\hat{S}$ (for every $a\in\Sigma\cup\{\tau\}$). \item[-] if $x\in F_P$ and $(x,\psi)\in S$ then $\mathrm{supp}(\psi)\subseteq F_Q$. \end{itemize} \end{definition} We denote by $\mathbf{ProbAut}$ the set of probabilistic automata modulo simulation equivalence. We can now construct a mapping $\epsilon:p\textrm{-}\mathbf{Aut}\rightarrow \mathbf{ProbAut}$ such that each instance of a structure similar to $\mathtt{flip}_{p_1,\dots, p_n}\cdot(\tau_1\cdot a_1 + \dots + \tau_n\cdot a_n)$ is collapsed into probabilistic transitions. More precisely, let $P\in p\textrm{-}\mathbf{Aut}$ and let $\longrightarrow$ be its transition relation. The automaton $\epsilon(P)$ has the same state space as $P$ (up to accessibility with respect to the transitions of $\epsilon(P)$). The initial distribution of $\epsilon(P)$ is $\delta_{i_P}$ and the set of final states of $\epsilon(P)$ is again $F_P$.~\footnote{Notice that, by assuming the structure $\mathtt{flip}_{p_1,\dots,p_n}\cdot(\tau_1\cdot a_1 + \dots + \tau_n\cdot a_n)$, the state between the flip action and the corresponding $\tau$ transitions is never a final state. Hence we are safe to use $F_P$ as the set of final states of $\epsilon(P)$.} The set of transitions $\longrightarrow_{\epsilon(P)}$ is constructed as follows. Let $x\trans{a} x'$ be a transition of $P$; there are two possible cases: \begin{itemize} \item[a)] if $a$ is probabilistic, i.e. of the form $\mathtt{flip}_{p_1,\dots,p_n}$, and is followed by the $\tau_i$'s, then the transition $$x\trans{\tau} p_1\delta_{x_1'}+\dots+p_n\delta_{x_n'}$$ is in $\longrightarrow_{\epsilon(P)}$, where $x'\trans{\tau_i} x_i'$ is a transition in $P$; \item[b)] otherwise the transition $x\trans{a} x'$ is in $\longrightarrow_{\epsilon(P)}$. \end{itemize}
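On the representation used in our earlier Python sketches, the collapse map $\epsilon$ admits the following direct transcription (ours; the explicit table of branching probabilities is hypothetical bookkeeping that the paper leaves implicit in the action names):
\begin{verbatim}
def epsilon_trans(P, probs):
    # probs maps each probabilistic action, e.g. 'flip', to its list of
    # (tau_i, p_i) pairs; a distribution is a frozenset of (probability,
    # state) pairs, and the initial distribution of epsilon(P) is the
    # point distribution on P.init.
    taus = {t for branches in probs.values() for (t, _) in branches}
    trans = set()
    for (x, a, y) in P.trans:
        if a in probs:            # case a): collapse flip + tau_i's
            dist = frozenset((p_i, v) for (t_i, p_i) in probs[a]
                             for (u, b, v) in P.trans
                             if u == y and b == t_i)
            trans.add((x, 'tau', dist))
        elif a not in taus:       # case b): keep as a point distribution
            trans.add((x, a, frozenset({(1.0, y)})))
    return trans                  # the transitions of epsilon(P)
\end{verbatim}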
We now prove that $\epsilon$ is a monotonic function from $p\textrm{-}\mathbf{Aut}$ to $\mathbf{ProbAut}$. \begin{proposition}\label{pro:cor} If $P\leq Q$ then $\epsilon(P)\leq\epsilon(Q)$. \end{proposition} \begin{proof} Assume that $S$ is a $p$-simulation from $P$ to $Q$. Consider the exact same relation but restricted to the state spaces of $\epsilon(P)$ and $\epsilon(Q)$. We show that this restriction is a probabilistic simulation. \begin{itemize} \item[-] Obviously, $(\delta_{i_P},\delta_{i_Q})\in \hat{S}$. \item[-] Let $x\trans{a} \phi$ and $(x,\psi)\in\hat{S}$. Since $\tau$ transitions only occur as part of probabilistic choices, we have two possibilities: \begin{itemize} \item $x\trans{\tau} p_1\delta_{x_1'}+\dots +p_n\delta_{x_n'}$ is a transition of $\epsilon(P)$ and $(x,\psi)\in S$ where $\psi = \delta_y$, since $(x,y)$ belongs to the original $S$. In this case, $y\trans{\tau} p_1\delta_{y_1'}+\dots +p_n\delta_{y_n'}$ is a transition of $\epsilon(Q)$ and each $(x_i',y_i')$ belongs to the original $S$ (definition of $p$-simulation). \item $x\trans{a} x'$ and $a$ is an external action. Therefore there are two possibilities again: $y\trans{\tau_i} y_i\trans{a} y'$ or $y\trans{a} y'$. In both cases, we have $(x',y')\in S$. \end{itemize} \item[-] Conservation of final states follows easily from the fact that $S$ is a $p$-simulation.\qedhere \end{itemize} \end{proof} Since our Definition \ref{df:probsim} implies the definition of probabilistic simulation in~\cite{Den07}, we conclude that the maximal probability of doing a particular action in $p$-automata is increased by $p$-simulation. This remark provides a formal justification of our earlier example. That is, Equation~(\ref{eq:vm}) ensures that the maximal probability that a buyer will be satisfied when using the probabilistic vending machine is at least $1/2$, because the maximal probability of a trace containing $\mathtt{tea}$ in the automaton described by $$\mathtt{coin}\cdot\mathtt{flip}_{1/2}\cdot(\tau_h\cdot(\mathtt{tea}+1) + \tau_t)$$ is $1/2$. In the proof of Proposition \ref{pro:cor}, the simulation constructed is a very particular case of probabilistic simulation, so it is too weak to establish certain relationships between $p$-automata. For instance, the automaton represented by $a\pc{p} (a\pc{q} b)$ should be equivalent to $a\pc{p+q-pq} b$, but Definition \ref{df:p-sim} will not provide such an equality. This line of research is part of our future work, where we will study proper probabilistic automata and simulations against weak concurrent Kleene algebra. \section{Algebraic Testing}\label{sec:algebraic-testing} In this section, we describe an algebraic treatment of \emph{testing}. Testing is a natural ordering for processes that was first studied in~\cite{Nic83}. The idea is to ``measure'' the behaviour of the process with respect to the environment. In other words, given two processes $x$ and $y$ and a set of test processes $T$, the goal is to compare the processes $x\|t$ and $y\|t$ for every $t\in T$. In our case, the set $T$ will contain all processes. We consider a function $o$ from the set of terms to the set of internal expressions $I = \{x\ |\ x\leq 1\}$. The function $o:T_\Sigma\rightarrow I$ is defined by $$\begin{array}{lll} o(x) = x\textrm{ if }x\in I & &o(st) = o(s)o(t)\\ o(a) = \tau\textrm{ for any } a\in\Sigma-I && o(s^*) = 1\\ o(s+t) = o(s)+ o(t) &&o(s\|t)\leq o(s)o(t) \end{array}$$ In the model, the function $o$ is interpreted by substituting each external action with the internal action $\tau$ ($o(a) = \tau$ for any $a\in\Sigma-I$). Then any final state is labelled by $1$ and deadlock states are labelled by $0$.
Inductively, we label a state that leads to some final state by $1$; otherwise it is labelled by $0$. This is motivated by the fact that $x0=0$ for any $x\in I$, so each transition leading to \textit{deadlock states only} will be removed. Therefore, only the states labelled by $1$ will remain, together with the transitions between them. Hence, $o(s)\neq 0$ iff the resulting automaton contains at least one state labelled by $1$. In other words, $o(s) = 0$ iff $s$ \textit{must not terminate successfully}. Without loss of generality (by considering automata modulo simulation), we assume that $\tau$ is the only internal action in $\Sigma$ and that it satisfies $\tau\tau = \tau$. This equation is valid in the concrete model. The existence of a well-defined function $o$ satisfying these conditions depends on our definition of simulation. That is, we can show that if $P\leq Q$ then $o(P)\leq o(Q)$, where we have abused notation by writing $o(P)$ for the application of $o$ to the term associated with $P$. A detailed discussion about this can be found in the appendix under Remark \ref{rem:remark-o}. \begin{definition} The \textit{may testing order} is given by $$x\may{} y\quad \textrm{ iff }\quad \forall t\in T_\Sigma.\left[o(y\| t) = 0\Rightarrow o(x\| t) = 0\right].~\footnote{Notice that $\|$ should be framed because some external actions are not synchronised. But in the setting of testing, we can also assume that all external actions are synchronised, which makes it possible to track all external actions present in the process.}$$ \end{definition} We now provide some results about algebraic may testing. It follows from monotonicity of $\|$ with respect to $\leq$ (Proposition \ref{pro:elementary-consequences}) that the may ordering $\may{} $ is weaker than the rooted $\eta$-simulation order. \begin{proposition} $x\leq y$ implies $x\may{} y$. \end{proposition} In fact, $\may{} $ is much weaker than $\leq$: may testing is equivalent to language equivalence. Given a term $s$, the language $Tr(s)$ of $s$ is the set of finite words that are formed by external actions and are accepted by the automaton represented by $s$. In other words, it is the set of finite traces in the sense of CSP which lead to final states. The precise definition of this language equivalence can be found in the appendix, as is the proof of the following proposition (Proposition \ref{apro:may-language} of the appendix). \begin{proposition}\label{pro:may-equals-language} In $\mathbf{Aut}$, $\may{} $ reduces to language equivalence. \end{proposition} We have shown that $\may{} $ is equivalent to language equivalence and hence weaker than our simulation order. This is also a consequence of the fact that our study of may testing is done in a qualitative way, because the probabilities are found implicitly within actions. A quantitative study of probabilistic testing orders can be found in~\cite{Den07}.
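At the level of the concrete model, $o$ and the may ordering admit a very small implementation on top of our earlier Python sketches (ours, an assumption rather than part of the formal development; since $o$ replaces external actions by $\tau$, testing $o(P)=0$ amounts to checking that no final state of $P$ is reachable):
\begin{verbatim}
def o_is_zero(P):
    # o(P) = 0 iff P must not terminate successfully
    seen, todo = {P.init}, [P.init]
    while todo:
        x = todo.pop()
        if x in P.finals:
            return False          # some final state is reachable
        for (u, a, v) in P.trans:
            if u == x and v not in seen:
                seen.add(v); todo.append(v)
    return True

def may_leq(P, Q, tests, A):
    # P may<= Q iff for every test t: o(Q || t) = 0 implies o(P || t) = 0
    return all(o_is_zero(par(P, t, A)) or not o_is_zero(par(Q, t, A))
               for t in tests)
\end{verbatim}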
\section{Case Study: Rabin's Choice Coordination}\label{sec:rabin-protocol} The problem of choice coordination is well known in the area of distributed systems. It usually appears in the form of processes voting for a common goal among some possibilities. Rabin has proposed a probabilistic protocol which solves the problem~\cite{Rab82}, and a sequential specification can be found in~\cite{Mci04}. We specify the protocol in our algebra and prove that a fully concurrent specification is equivalent to a sequential one. Once this has been done, the full verification can proceed by reusing the techniques for sequential reasoning~\cite{Mci04}. The protocol consists of a set of tourists and two places: a church $C$ and a museum $M$. Each tourist has a notepad where he keeps track of an integer $k$. Each place has a board where tourists can read and write. We denote by $L$ (resp. $R$) the value on the church board (resp. museum board). In this section, we write $\cdot$ again for sequential composition to make the specifications clearer. \begin{itemize} \item The church is specified as $C = (c!L)^*\cdot(c?L)$ where the channel $c$ represents the church's door. $c!L$ means that the value of $L$ is available to be read on the channel $c$, and $c?L$ waits for an input which is used as the value of $L$ in the subsequent process. In other words, each tourist can read as many times as they want from the church board but write on it only once. Repeated writing will be considered in the specification of the protocol. Similarly, the museum is specified as $M = (m!R)^*\cdot(m?R)$. \item Each tourist is specified as $P(\alpha,k)$ where $\alpha\in\{c,m\}$ is the door before which the tourist currently stands and $k$ is the actual value written on his notepad. A detailed description of $P$ can be found in the appendix but, roughly, we have $$P(\alpha,k) = (\alpha?K)\cdot\mathtt{rabin}\cdot[\alpha:=\underline{\alpha}]~\footnote{Any action written within square brackets will denote an internal action (see the appendix for the detailed specification).}$$ where $\underline c = m$ and $\underline m = c$. In other words, the tourist reads the value at the place specified by $\alpha$, executes Rabin's protocol \texttt{rabin} and then goes to the other place. Notice that the process $\texttt{rabin}$ contains the probabilistic component of Rabin's protocol. Essentially, it describes the rules that are used by each tourist to update his current value of $k$ with respect to the value on the board and vice versa. The whole specification of the protocol executed by each tourist is described by the automaton of Figure \ref{fig:rabin}. \begin{center} \begin{figure*} $$\xymatrix{ &P(\alpha,k)\ar[d]_{\alpha?K}&&\\ &\ar[dl]_{[K=here]}\ar[dr]^{[K\neq here]}&&\\ \bullet & \ar[l]^{\alpha!here}& \ar[l]_{[k>K]}\ar[dl]^{[k<K]}\ar[d]^{[k=K]}&\\ & \ar[dl]^{[k:=K]} &\ar[d]^{\mathtt{flip}_{1/2}}& \\ \ar[d]_{\alpha!k}& &\ar[dl]_{\tau_h}\ar[dr]^{\tau_t} &\\ \ar[d]_{[\alpha := \underline\alpha]}& \ar[dr]_{[k := K+2]}& &\ar[dl]^{[k:=\overline{K+2}]}\\ \circ&&\ar[d]^{\alpha!k}&\\ & &\ar[d]^{[\alpha := \underline{\alpha}]} &\\ & &\circ & }$$ \caption{$p$-automaton that describes the protocol $P(\alpha,k)$ executed by each tourist.}\label{fig:rabin} \end{figure*} \end{center} \end{itemize} We are now ready to specify the whole system. Assume we have two tourists $P$ and $Q$ (our result generalises easily to $n$ tourists). The tourists' joint action is specified as $(P + Q)^*$. This ensures that when a tourist has started his turn by reading the board, he will not be interrupted by any other tourist until he is done and goes inside the current place or to the other place. This condition is crucial for the protocol to work properly. The actions of the locations process are specified by $(M+C)^*$, which ensures that each tourist can be at one place at a time only --- this is a physical constraint. Now, the whole system is specified by \begin{equation}\label{eq:spec} \mathtt{init}\cdot\left([P(\alpha,u) + Q(\beta,v)]^*\pr{\{c,m\} } (M + C)^*\right) \end{equation} where $\mathtt{init}$ is the initialisation of the values on the boards, notepads and initial locations.
Specification \ref{eq:spec} describes the most arbitrary behaviour of the tourists compatible with visiting and interacting with the locations in the manner described above. Rabin's design of the protocol means that this behaviour is equivalent to a serialised execution where first one location is visited, followed by the other. We can write that behaviour as $[((P+Q)\|M)^*((P+Q)\|C)^*]^*$, where (for this section only) we denote the concurrency operator by $\|$ instead of $\pr{\{c,m\}} $ to make the notation lighter. The next theorem says that this more uniform execution is included in $S= [P(\alpha,u) + Q(\beta,v)]^*\| (M + C)^*$, described by Specification \ref{eq:spec}. \begin{theorem}\label{pro:dupl} We have $$S \geq [((P+Q)\|M)^*((P+Q)\|C)^*]^*$$ \end{theorem} The proof is a simple application of Proposition \ref{pro:elementary-consequences}. Theorem \ref{pro:dupl} means that $S$ could execute all possible actions related to door $M$, then those at door $C$, then back to door $M$, and so on. In fact, we can also prove the converse, i.e. Theorem \ref{pro:dupl} can be strengthened to an equality. But for that, we need the continuity of the operators $\cdot$ and $\|$. \begin{theorem}\label{thm:rabin} In the concrete model, the specification of Rabin's protocol satisfies $$S = [((P+Q)\|M)^*((P+Q)\|C)^*]^*$$ \end{theorem} The proof of this theorem depends heavily on the fact that the concurrent and sequential compositions are continuous in the concrete model. The complete proof can be found in the appendix. In the proof, if we stop at the distribution over $\|$, we obtain the equivalent specification $$S = [(P+Q)\|M + (P+Q)\|C]^*$$ which describes a simpler situation where $P$ or $Q$ interacts at the museum or at the church. This is similar to the sequential version found in~\cite{Mci04}, which can be treated by standard probabilistic invariants to complete a full probabilistic analysis of the protocol.
\section{Conclusion} An algebraic account of probabilistic and concurrent systems has been presented in this paper. The idea was to combine probabilistic and concurrent Kleene algebra. A soundness result with respect to automata and rooted $\eta$-simulation has been provided. The concrete model ensures not only the consistency of the axioms but also provides a semantic space for systems exhibiting probabilistic, nondeterministic and concurrent behaviour. We also showed that the model has stronger properties than just the algebraic axiomatisation. For instance, sequential and concurrent composition are both continuous in the case of finite automata. We provided some applications of the framework. An algebraic account of may testing was discussed in Section \ref{sec:algebraic-testing}; it was shown that the may ordering reduces to language equivalence. We also provided a case study of Rabin's solution to the choice coordination problem. A concurrent specification was provided and it was shown to be structurally equivalent to the sequential one given in~\cite{Mci04}. Though the algebra was proven to be powerful enough to derive non-trivial properties of concrete protocols, the concrete model still needs to be refined. For instance, the inclusion of tests is important, especially for the construction of probabilistic choices. Tests need to be introduced carefully because their algebraic characterisation is subtle due to the presence of probability. We also need to improve and refine the manipulation of quantitative properties in the model as part of our future work. Finally, it is customary to motivate automated support for algebraic approaches. The axiom system for weak concurrent Kleene algebra is entirely first-order; therefore proof automation is supported, and automated versions of our algebraic proofs can be found in our repository. \bibliographystyle{plain}
{ "attr-fineweb-edu": 1.75293, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} Fractional quantum Hall effect represents one of the most important examples of strongly correlated electron systems \cite{DasSarma97}. In the bulk, quasiparticle (qp) excitations are predicted to have fractional charge \cite{Laughlin83} which, e.g., for filling factors in the Jain series, $\nu = p/(2p + 1)$ ($p\in \mathbb{Z}$), is $e^{*} = e(\nu/|p|)$. At the edge \cite{Wen90,Wen91,Wen95} the identification of these charge excitations seems more complicated. Indeed, while in the past measurements of current noise through quantum point contacts (QPC), in the weak backscattering regime, confirmed the tunneling of single qps\cite{dePicciotto97, Seminadayar97}, recently new measurements have demonstrated the possibility of tunneling of charges that are multiples of the fundamental one. The conditions to observe a bunching of qps depend on external parameters such as temperature and voltage. Measurements\cite{Chung03} carried out for the Jain series ($p=2,3$), at extremely low temperatures, show an effective charge equal to $e_{\rm eff}= \nu e$, which decreases to the fundamental value $e_{\rm eff}=e^{*}$ only upon increasing the temperature. Last year, experimental results for filling factor $\nu=2/3$ $(p=-2)$ appeared \cite{Ofek09}, showing a similar crossover. This common trend was very recently verified also for filling factors outside the Jain series, at fractional values in the second Landau level \cite{dolev10}. In addition to the bunching phenomena, peculiar behavior also appears in the backscattering current at high transparencies. For example, for $\nu=1/3$, the current was found to increase with temperature \cite{Chung03,Roddaro04} instead of decreasing as theoretically predicted \cite{Fendley95}. This supports the indication of a non-universal renormalization of the tunneling exponents induced by the presence of edge interactions with an external environment \cite{Rosenow02}, electron-electron interactions \cite{Papa04, Mandal02} and edge reconstruction \cite{Yang03,Aleiner94}. In order to describe the Jain series, different models were proposed, with the common requirement of the presence of neutral modes in order to fulfill the statistical properties. One could have $|p|-1$ neutral fields propagating at finite velocity along the edge \cite{Wen92,Kane94,Kane95} or, for infinite edges, only two or one additional modes with zero \cite{Lopez99, Lopez01} or finite velocity \cite{Ferraro08}. A peculiar characteristic associated with the neutral modes is their direction of propagation with respect to the charged mode. Depending on the sign of $p$ and on the theoretical model, there is the possibility to have co-propagating or counter-propagating neutral modes. The tendency of qps to bunch at low temperature and weak backscattering was underlined in theory for the hierarchy of the Jain sequence \cite{Kane94,Kane95,Ferraro08}. In Ref.~\onlinecite{Ferraro08} we pointed out the role of propagating neutral modes in order to fully describe the experimental data \cite{Chung03} for $p>1$. By comparing with experiments for $\nu=2/5$ it was indeed possible to estimate the energy bandwidth of the neutral modes. Despite the presence of different proposals for the direct detection of neutral modes \cite{Levin07,Feldman08,Ferraro08,Overbosch09,Ferraro09b,Cappelli09,Cappelli10,Yang09}, experiments addressed this issue only recently \cite{Granger09,Aveek10}.
In this paper we present a minimal hierarchical model able to include all the essential features of the different proposals above using a few free parameters. This allows us to explain, within a unified framework, the experimental results on the tunneling of effective charges in a standard quantum point contact geometry at extremely high transmission \cite{Chung03, Ofek09}. The dependence of the excess noise on the external parameters, such as the voltage and the temperature, is quantitatively analyzed. The flexibility of the proposed model resides in the possibility of linking the results obtained in the presence of counter-propagating or co-propagating neutral modes. We demonstrate that both cases reproduce the experimental results using a proper choice of the fitting parameters. We also propose the skewness, namely the normalized third backscattering current cumulant, as a measurable quantity \cite{Reulet03,Lindell04,Bomze05,Huard07,Timofeev07,Gershon08} able to give independent information on the nature of the carriers. This quantity is a good estimator of the crossover in the tunneling between the bunching of qps and the fundamental charge. We show that it can be directly compared with the \textit{effective charge} measured in the experiments by fitting the excess noise, as a function of the bias voltage, at fixed values of temperature. \section{Model} We consider infinite edge states of a Hall bar with filling factor in the Jain series $\nu=p/(2p+1)$ ($p\in \mathbb{Z}$). The model adopted is a minimal one with two decoupled bosonic fields, one charged $\varphi^{\rm{c}}$ and one neutral $\varphi^{\rm{n}}$. The Euclidean free action is ($\hbar=1$, $k_{\rm B}=1$) \beq \mathcal{S}^{0}&&=\f{1}{4\pi\nu} \int^{\beta}_{0} \!\!\!d\tau \!\!\int^{+\infty}_{-\infty} \!\!\!\!\!\!dx \partial_{x}\varphi^{\rm{c}}(x,\tau)\left(i \partial_{\tau}+v_{\rm{c}}\partial_{x}\right)\varphi^{\rm{c}}(x,\tau)+ \nonumber\\ &&+\f{1}{4\pi} \int^{\beta}_{0} \!\!\! d\tau \!\!\!\int^{+\infty}_{-\infty}\! \!\!\!\!\!\!dx \partial_{x}\varphi^{\rm{n}}(x,\tau)\left(i\xi \partial_{\tau}+v_{\rm{n}} \partial_{x}\right)\varphi^{\rm{n}}(x,\tau)\,, \label{action} \eeq with $\beta=T^{-1}$ the inverse temperature and $v_{\rm{c}}$, $v_{\rm{n}}$ the propagation velocities of the charged and neutral modes respectively. The former is affected by Coulomb interactions \cite{Levkinskyi08, Levkinskyi09}, such that $v_{\rm{c}}\gg v_{\rm{n}}$ \cite{Ferraro08}. We consider neutral modes co-propagating $(\xi=+1)$ or counter-propagating $(\xi=-1)$ with respect to the charged one. This choice allows a unified description of different hierarchical models. For $\xi={\rm{sgn}}(p)$ one recovers the restricted model of Lee and Wen \cite{Lee98} (LW), where the $|p|-1$ neutral modes are described in terms of a single one, while for $\xi=-{\rm{sgn}}(p)$ one obtains the generalized Fradkin-Lopez model \cite{Lopez99, Chamon07, Ferraro08, Ferraro09b} (GFL) with a single neutral mode propagating at finite velocity instead of a topological one \cite{Lopez99}. The commutators of the bosonic fields are $[\varphi^{\rm{c/n}}(x),\varphi^{\rm{c/n}}(y)]=i\pi \nu_{\rm{c/n}} \mathrm{sgn}(x-y)$ with $\nu_{\rm{c}}=\nu$ and $\nu_{\rm{n}}=\xi$. The electron number density depends on the charged field only, via the relation $\rho(x)=\partial_{x}\varphi^{\rm{c}}(x)/2\pi$. \textit{Edge excitations.} In the hierarchical theories, admissible edge excitations have a well defined charge and statistics \cite{Wen92, Lopez99}.
There are single-qp excitations with charge $e^{*}=(\nu/|p|)e$ and multiple-qp excitations with charge $m e^{*}$ ($m\in \mathbb{N}$)\cite{Note1}. Their statistics is fractional, with statistical angle \cite{Su86} \be \theta_{m}=m^{2}\left( \f{\nu}{p^{2}}-\f{1}{p}-1\right) \pi\,\,\,\,\,\,\, (\rm{mod} \,\,2 \pi). \label{stat} \ee In addition, the phase acquired by any excitation in a loop around an electron must be an integer multiple of $2\pi$ \cite{Froehlich97, Ino98, Ferraro09b}. Using the bosonization technique and imposing the above constraints, one can write the operator for an $m$-fold excitation \cite{Ferraro09b} \be \Psi^{(m,q)}(x)=\f{\mathcal{F}^{(m,q)}}{\sqrt{2\pi a}}e^{i\left\{\left(s+\f{d}{|p|}\right)\varphi^{\rm{c}}(x)+\sqrt{p^{2}-\xi p} \left(q+\f{d}{|p|}\right)\varphi^{\rm{n}}(x)\right\}} \label{qp_operator} \ee with $a$ a cut-off length, $s\in \mathbb{N}$ and $0\leq d\leq |p|-1$ such that $m=s|p|+d$. The integer $q$ is an additional quantum number associated with the freedom of adding $2\pi$ to the statistical angle \cite{Ferraro09b}. The operator $\mathcal{F}^{(m,q)}$ changes the number of $m$-agglomerates on the edge and ensures the right statistical properties between different $q$-values and different edges \cite{Ferraro09b}. It can be neglected in the sequential tunneling regime \cite{Ferraro09b, Guyon02, Martin05}. The most general expression for an excitation with charge $me^*$ will then be given by a superposition of the above operators with different $q$ values \cite{Ferraro09b, Wen95}. \textit{Relevant excitations.} The scaling dimension associated with an $(m,q)$-excitation is extracted from the long time limit of the two-point imaginary time Green's function $\mathcal{G}^{(m,q)}( \tau)=\<T_{\tau}\Psi^{(m, q)} (0,\tau){\Psi^{(m,q)}}^{\dagger}(0,0)\>$ at zero temperature \cite{Kane92}. For $ |\tau | \gg \omega_{\rm{n}}^{-1}, \omega_{\rm{c}}^{-1}$ it is $ \mathcal{G}^{(m,q)}( \tau)\propto |\tau|^{-2 \Delta_{m}(q)}$ with
The corresponding scaling dimensions are \be \Delta^{\rm min}_{ |p|}=\frac{\nu}{2}\,,\qquad \Delta^{\rm min}_{ 1}=\frac{1}{2}\left[\f{\nu}{p^{2}}\!\!+ (1-\frac{\xi }{p})\right]. \label{scaling1} \ee Note that among these two, the $|p|$-agglomerate is always the most relevant since $\Delta^{\rm min}_{|p|}<\Delta^{\rm min}_{1}$, with the only exception of $\nu=2/3$ in the LW model ($\xi=-1$), where both have equal scaling dimensions \cite{Wen92, Kane95}. At higher energies $\omega_{\rm n}\ll E\ll \omega_{\rm c}$ the neutral mode saturates and does not contribute to the scaling $\Delta_{m}$, which consequently depends on the charged mode only with a value $\Delta^{\rm eff}_m={\nu m^2}/{2 p^2}$. Here, the single-qp ($m=1$) always dominates. This implies the possibility of a crossover regime from low energies (relevance of $|p|$-agglomerates) to higher energies (relevance of the single-qp). In the presence of interactions, Eq.(\ref{scaling}) shows the relevance of the $|p|$-agglomerate at low energies if $g_{\rm{n}}/g_{\rm{c}}>\nu (1+\xi/p)$, otherwise the single-qp will always dominate. \section{Transport properties} Tunneling of bunched $m$-excitations through the QPC located at $x=0$ is described by $H^{(m)}_{\rm{T}}=\textbf{t}_{m}{\Psi^{(m)}_{\rm{R}}}^{\dagger}(0)\Psi^{(m)}_{\rm{L}}(0)+{\rm h.c.}$ with amplitude $\textbf{t}_{m}$. The indices $\rm R$ and $\rm L$ represent the right and left edge of the Hall bar. We will consider only the relevant excitations with $m= 1 $ (single-qp) or $m= |p|$ ($|p|$-agglomerate). In the incoherent sequential regime and at lowest order in $H^{(m)}_{\rm{T}}$, higher current cumulants $\langle I^{(m)}_{\rm{B}}\rangle_{k}$ ($k$-th order cumulant) are expressed in terms of the backscattering current $I^{(m)}_{\rm{B}}$ \be \langle I^{(m)}_{\rm{B}}\rangle_{k} =\left\{ \ba (m e^{*})^{k-1}\coth\left(E_{m}/2T\right)I^{(m)}_{\rm{B}}\\ (m e^{*})^{k-1}I^{(m)}_{\rm{B}} \ea \right. \ba k \,\,\,{\rm{even}}\\ k \,\,\,{\rm{odd}} \ea \label{cumulant} \ee since the statistics is bidirectional Poissonian \cite{Levitov04}. The current is proportional to the tunneling rate $\Gamma^{(m)}(E)$ as $I^{(m)}_{\rm{B}}=m e^{*}(1-e^{-E_{m}/T})\Gamma^{(m)}(E_{m})$ with \be \Gamma^{(m)}(E_{m})= \gamma_m^2\!\int^{+\infty}_{-\infty}\!\!\!\!\!\! d t' e^{-iE_m t'} e^{2\alpha^{2}_{m} \mathcal{D}^{>}_{\rm{c}}(t')}e^{2\beta^{2}_{m} \mathcal{D}^{>}_{\rm{n}}(t')}\,. \label{Rate} \ee Here, $E_{m}=m e^{*}V$, with $V$ the QPC bias voltage and $\gamma_m^2=|\textbf{t}_{m}|^{2}/(4\pi^{2} a^{2})$. The charge coefficient is $\alpha_{m}=m/|p|$ while the neutral one is given by the minimal value with $q=0$ in Eq.(\ref{qp_operator}). For the single-qp it is $\beta_1^2=(1-\xi/p)$, while for the $|p|$-agglomerate it is $\beta_{|p|}=0$. The correlation functions \cite{Braggio01,Ferraro08} of the charged and neutral modes are \be \mathcal{D}^{>}_{r}(t)=g_{r}\nu_{r}\ln{\left[\frac{|\mathbf{\Gamma}\left(1+T/\omega_{r}-iTt\right)|^{2}}{\bold{\Gamma}^{2}\left(1+T/ \omega_{r}\right) \left(1-i\omega_{r}t\right)}\right]}, \label{correlation} \ee with $r=\rm{c},\rm{n}$ and $\bold{\Gamma}(x)$ the Euler Gamma function. The rate is obtained by numerically evaluating (\ref{Rate}), except at zero temperature, where analytical results are available~\cite{Ferraro08}. At lowest order, tunneling processes of different excitations are independent. The contributions of different excitations are then simply summed.
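For illustration, a minimal numerical evaluation of the rate (\ref{Rate}) can be sketched as follows (in Python, assuming scipy; all parameter values are our own illustrative choices, and only the charged mode is kept, i.e. $\beta_m=0$ as for the $|p|$-agglomerate).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import loggamma

def D(t, g, nu_r, T, omega):
    """Greater correlation function of Eq. (correlation), one mode."""
    z = 1 + T / omega - 1j * T * t
    return g * nu_r * (2 * loggamma(z).real
                       - 2 * loggamma(1 + T / omega).real
                       - np.log(1 - 1j * omega * t))

def rate(E, alpha2, g_c, nu, T, omega_c, gamma2=1.0, tmax=200.0):
    """Tunneling rate of Eq. (Rate) with the neutral part switched off."""
    f = lambda t: (np.exp(-1j * E * t)
                   * np.exp(2 * alpha2 * D(t, g_c, nu, T, omega_c))).real
    val, _ = quad(f, -tmax, tmax, limit=400)
    return gamma2 * val

# |p|-agglomerate at nu = 2/3: alpha_m = m/|p| = 1 (arbitrary units)
print(rate(E=1.0, alpha2=1.0, g_c=1.6, nu=2/3, T=0.1, omega_c=50.0))
\end{verbatim}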
In our case, the total $k$-th order cumulant will be given by the sum of the most relevant processes $\langle I_{\rm{B}}\rangle_{k}=\langle I^{(1)}_{\rm{B}}\rangle_{k}+\langle I^{(p)}_{\rm{B}}\rangle_{k}$. The transmission of the QPC is then expressed in terms of the total backscattering current \be t=1-I_{\rm{B}}/I_{0}\,,\qquad {\rm with}\qquad I_{0}=(\nu e^{2}/2\pi)V\,, \label{transmission} \ee where, for simplicity, we denoted $I_{\rm{B}}\equiv\langle I_{\rm{B}}\rangle_{1}$. Among the higher cumulants, the backscattering current noise is an essential quantity for extracting information on charge excitations. It consists of the excess backscattered noise $S_{\rm{exc}}$, due to the finite current, and the thermal Johnson-Nyquist noise \be \langle I_{\rm{B}}\rangle_{2}=S_{\rm{exc}}+2T G_{\rm{B}}(T)\,, \ee with $G_{\rm{B}}$ the total backscattering conductance \cite{Note2}. Note that, at lowest order in tunneling, the backscattered excess noise coincides with the transmitted excess noise which is usually measured in experiments \cite{Ponomarenko99,Dolcini05}. For this reason, treating the high transmission regime, we will analyze $S_{\rm{exc}}$ and compare it with experiments. In experiments one often introduces the effective charge, $e_{\rm{eff}}(T)$, defined as the charge of the \emph{single} carrier that best fits the excess noise at a given temperature $T$ \cite{Chung03,Ofek09} \be \label{Sexceff} S_{\rm{exc}}=e_{\rm{eff}}(T)\coth\left[\f{e_{\rm{eff}}(T)V}{2T}\right]I_{\rm{B}}(V,T)-2T G_{\rm{B}}(T). \ee One has to be aware that this quantity has a clear meaning as a real tunneling charge only when the presence of a single dominant carrier is guaranteed; otherwise it represents a weighted average over different carriers. Its value strongly depends on the voltage range considered. In the shot noise regime $e^*V\gg T$ it is \be {e}^{\rm sh}_{\rm{eff}}=e^* \frac{I^{(1)}_{\rm{B}} +|p| I^{(p)}_{\rm{B}}}{ I_{\rm{B}}}\,. \ee In the opposite regime, $e^*V<T$, often considered in experiments, it can be deduced from the behavior of (\ref{Sexceff}) in the limit $V\to 0$ \be {e}^{\rm th}_{\rm{eff}}(T)=\left[\f{3T}{G^{(\rm{tot})}_{\rm{B}}} \left(\f{d^{2}S_{\rm{exc}}} {dV^{2}}-\f{2}{3}T\f{d^{3} I^{}_{\rm{B}}}{dV^{3}} \right)\right]^{\f{1}{2}}_{V\to 0}. \label{charge} \ee Using the relation (\ref{cumulant}), this effective charge can be equivalently expressed in terms of the third order cumulant \be {e}^{\rm th}_{\rm{eff}}(T) =e \left[\f{\langle I_{\rm{B}} \rangle_{3}}{(e^{2} I_{\rm{B}})}\right]^{\f{1}{2}}_{V\rightarrow 0}. \label{eff_skew} \ee This corresponds to the square root of the normalized skewness at zero voltage~\cite{Ferraro09b} and it can be interpreted as the definition of the effective charge in the thermal regime. This quantity can be compared with the effective charge measured in the experiments as a function of temperature. \section{Results} In this part we will focus on the comparison with available experimental data for $\nu=2/5$ ($p=2$) and $\nu=2/3$ ($p=-2$). Parameters are chosen in order to guarantee a crossover between the $|p|$-agglomerate at low energies and the single-qp at higher energies. Figures and fitting will be presented for the LW model $\xi={\rm{sgn}}(p)$, which corresponds to a counter-propagating (co-propagating) neutral mode for $\nu=2/3$ ($\nu=2/5$). The opposite case of $\xi=-{\rm{sgn}}(p)$ (GFL model) is straightforwardly obtained using the mapping (\ref{eq:mapping}).
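Before turning to the fits, the following toy sketch (the current values are arbitrary placeholders of our own) illustrates how the shot-noise effective charge defined above acts as a weighted average interpolating between the single-qp charge $e^{*}$ and the agglomerate charge $|p|e^{*}$ as the ratio of the two current contributions varies.
\begin{verbatim}
def e_eff_shot(I1, Ip, p, nu):
    """Shot-noise effective charge (in units of e): weighted average
    of the single-qp and |p|-agglomerate current contributions."""
    e_star = nu / abs(p)
    return e_star * (I1 + abs(p) * Ip) / (I1 + Ip)

# nu = 2/3, p = -2: e* = e/3, agglomerate charge 2e* = 2e/3
for r in (0.0, 1.0, 10.0):        # ratio Ip/I1 of the two currents
    print(r, e_eff_shot(1.0, r, p=-2, nu=2/3))
\end{verbatim}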
At low temperature $T\ll e^* V$ (shot noise regime) the total current and the excess noise show similar power law behavior $I_{\rm{B}}\propto V^{\eta-1}$, $S_{\rm{exc}}\propto V^{\eta-1}$ with a scaling exponent $\eta$ depending on the voltage regime (see below) \be \hskip-0.1cm\eta_1\!=\!{2 g_{\rm{c}}\nu};\;\; \eta_2\!=\!{2g_{\rm{c}}\f{\nu}{p^{2}}+2 g_{\rm{n}}\left(\!1\!-\!\f{\xi}{p}\!\right)};\;\; \eta_3\!=\!{2g_{\rm{c}} \f{\nu}{p^{2}}}\,. \label{eta} \ee For $V\ll V^*$, $|p|$-agglomerates dominate with $\eta=\eta_1$. At higher voltages, $V^*\ll V\ll \omega_{\rm n}/e^*$, single-qps become more relevant and the neutral modes contribute to the dynamics, with $\eta=\eta_2$. At even higher bias $V\gg \omega_{\rm n}/e^*$ the neutral modes saturate, giving $\eta=\eta_3$. The crossover voltage $V^*$ is defined as the bias at which the two current contributions are equal: $I_{\rm{B}}^{(1)}(V^*)=I_{\rm{B}}^{(p)}(V^*)$. The explicit value depends on intrinsic parameters such as the ratio of the tunneling amplitudes $\gamma_2/\gamma_1$ \cite{Ferraro09b}. At higher temperature $T \gg e^{*}V$ (thermal regime) the current is linear in voltage with a temperature dependent total backscattering conductance $G_{\rm{B}}(T)\propto T^{\eta-2}$. The scaling exponent varies as a function of temperature, with $\eta=\eta_1$ for $T\ll T^*$, $\eta=\eta_2$ for $T^*\ll T\ll\omega_{\rm n}$, and $\eta=\eta_3$ for $T\gg\omega_{\rm n}$. The crossover temperature $T^*$ separates the region of relevance between the $|p|$-agglomerate and the single-qp in the linear conductance. Its value depends explicitly on the model parameters such as the interaction renormalizations and the amplitude ratio $\gamma_{2}/\gamma_{1}$. It corresponds to the value where $G_{\rm{B}}^{(p)}(T^*)=G_{\rm{B}}^{(1)}(T^*)$. In the same regime the excess noise is quadratic in the bias, $S_{\rm{exc}}\propto V^{2}$. Fig. \ref{Fig1}a shows the excess noise and the QPC transmission as a function of the external voltage for $\nu=2/3$ at the extremely low temperature $T=10$ mK. The parameters are chosen in order to fit the experimental data (black diamonds)~\cite{Ofek09}. The voltages considered are mainly in the shot noise regime, $e^* V>T$. The excess noise shows an almost linear behavior down to very small voltages, with a single power law. We then select the $m=|p|=2$ contribution, which is the relevant one at low energies, with $S_{\rm{exc}}\propto V^{\eta_1-1}$ and ${e}^{\rm sh}_{\rm{eff}}=2e/3$. The fit of the experimental data fixes the interaction to $g_{\rm c}=1.6$ (cf. Eq.(\ref{eta})). This value is also used to plot the transmission in (\ref{transmission}) as shown in the inset. A good agreement with the data is visible. Note that having considered the contribution of the $|p|$-agglomerate fixes a lower bound on the crossover voltage, which has to be higher than the voltage window considered: $V^*> 70\ \mu$V. In order to obtain information on the single-qp one should investigate higher voltage or temperature regimes. In Fig.\ref{Fig1}b, main panel, we show the expected higher temperature noise for $T=80$ mK. For $V \ll T/e^*\approx 21\ \mu$V the parabolic behavior of the thermal excess noise is visible. In the same regime the current is linear in voltage with a temperature dependent conductance (see inset). Here, the temperature range is chosen in order to show the first two scaling regimes, from $\eta_1$ ($|p|$-agglomerate) to $\eta_2$ (single-qp); indeed we have $T^*=42$ mK.
Note that the noise behavior in the main figure is at $T>T^*$, where single-qp tunneling processes dominate. This is confirmed by the value of the effective charge given by ${e}^{\rm {th}}_{\rm{eff}}=e/3$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FerraroFig1} \caption{(a) Excess noise at $\nu=2/3$ (in units of $10^{-29}$ ${\rm{A}}^{2}/{\rm{Hz}}$) as a function of $V$ for $T=10$ mK (corresponding thermal voltage $T/e^*=2.6$ $\mu V$). Inset: transmission $t$ as given in Eq. (\ref{transmission}) as a function of $V$ with $t(V=0)=0.95$. Diamonds represent the experimental data taken from Ref.~\onlinecite{Ofek09}, with courtesy of Moty Heiblum. (b) Same as in (a) but at $T=80$ mK. Inset: log-log plot of the total linear backscattering conductance (in units of $G_{0}=e^{2}/2 \pi$) as a function of temperature. Other parameters: $g_{\rm{c}}=1.6$, $g_{\rm{n}}=8.1$, $\omega_{\rm{c}}=5$ K, $\omega_{\rm{n}}=200$ mK, $\gamma_{2}/\gamma_{1}=0.20$, $\gamma_1^2 / \omega_c^2=1.1\cdot 10^{-1}$.} \label{Fig1} \end{figure} \newline The above results demonstrate that the value of the effective tunneling charge crucially depends on the external parameters such as temperature and voltage. This point can be further analyzed by considering the temperature dependence of the effective charge at low voltages, $e^*V<T$. Fig. \ref{Fig2} shows ${e}^{\rm th}_{\rm{eff}}$, evaluated using the expression (\ref{eff_skew}), for different values of the tunneling amplitude ratio $\gamma_{2}/\gamma_{1}$ between a bunch of two qps $(\gamma_{2})$ and a single qp $(\gamma_{1})$. At low temperatures, the effective charge corresponds to the $|p|=2$ agglomerate with ${e}^{\rm th}_{\rm{eff}}= \nu e$, while, with increasing temperature, it reaches the single-qp value ${e}^{\rm th}_{\rm{eff}}=\nu e/|p|$. The crossover region between the two regimes is driven by $T^*$, which increases with the ratio $\gamma_{2}/\gamma_{1}$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FerraroFig2} \caption {Effective charge, in units of the electron charge $e$, as a function of temperature, for $\nu=2/3$ and different values of the ratio $\gamma_{2}/\gamma_{1}=0.1$ (blue, short-dashed), $0.2$ (red, solid), $0.35$ (green, long-dashed). The corresponding crossover temperatures are $T^*=32$ mK, $42$ mK, $60$ mK respectively. The other parameters are as in Fig. \ref{Fig1}. } \label{Fig2} \end{figure} We conclude the comparison with experiments by considering the effective charge for filling factor $\nu=2/5$, where experimental data are available. This case was discussed in Ref.~\onlinecite{Ferraro08}, where the model parameters were fixed by fitting the temperature dependence of the linear conductance. Here we focus on the temperature behavior of the effective charge. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FerraroFig3} \caption {Effective charge, in units of the electron charge $e$, as a function of temperature, for $\nu=2/5$. Diamonds represent the experimental data taken from Ref.~\onlinecite{Chung03}, with courtesy of Moty Heiblum. Parameters: $g_{\rm{c}}=3$, $g_{\rm{n}}=12$, $\omega_{\rm{c}}=5$ K, $\omega_{\rm{n}}=50$ mK, $\gamma_{2}/\gamma_{1}=0.65$, with $T^*=18$ mK.} \label{Fig3} \end{figure} Fig. \ref{Fig3} shows the evolution of ${e}^{\rm th}_{\rm{eff}}$ as a function of temperature.
The agreement with the corresponding quantity measured in Ref.~\onlinecite{Chung03} (black diamonds) is very good and reinforces the crossover scenario of tunneling from single-qps to agglomerates at sufficiently low temperature. Note that for the above fit we used the parameters fixed in Ref.~\onlinecite{Ferraro08} for the linear conductance. They are however here expressed for the LW model with co-propagating neutral and charged modes\cite{Ferraro08}. \section{Conclusion} We proposed a minimal hierarchical model which fully explains recent experimental observations on excess noise at low temperatures and weak backscattering. The meaning of the effective charge and its temperature dependence was analyzed in comparison with the available experimental data. A quantitative analysis of the dependence of the noise and the effective charge on external parameters was performed. Evidence of neutral modes propagating with finite velocity was found, and a quantitative value of the corresponding bandwidth was extracted. Our results show that the increase of the effective charges, observed in experiments at extremely low temperatures for the Jain sequence, can be well explained in terms of the dominance of the $|p|$-agglomerates over the single-qp contribution. Only at sufficiently high energies is the single-qp dominance recovered. We expect that the described crossover could also be relevant for other filling factors, outside of the Jain sequence, where an anomalous increase of the effective charges is also observed \cite{dolev10}. As a final remark we note that within the analyzed geometry with a point-like scatterer we cannot shed light on the propagation direction of the neutral modes, but only on their presence. The fits of the experiments were done using the value $\xi={\rm{sgn}}(p)$ (LW model), which corresponds to a counter-propagating neutral mode for $\nu=2/3$, in accordance with recent observations\cite{Aveek10}. However, one could equally well have fit the data in the other case, $\xi=-{\rm{sgn}}(p)$ (GFL model), with a co-propagating neutral mode for $\nu=2/3$, by simply changing the interaction parameters (cf. Eq.(\ref{eq:mapping})). Anyway, to have information on the direction of propagation one should consider more complicated geometries, such as the four-terminal setup recently addressed in experiments \cite{Aveek10}. \vskip0.7cm \section*{ACKNOWLEDGEMENT} We thank M. Heiblum, M. Dolev, N. Ofek and A. Bid for valuable discussions on the experiments and A. Cappelli, G. Viola and M. Carrega for useful discussions. Financial support of the EU-FP7 via ITN-2008-234970 NANOCTM is gratefully acknowledged.
\section{Introduction} Given a fundamental discriminant $\Delta$, it is known that the corresponding ideal class group $\Cl(\Delta)$ of the order $\mathcal{O}_{\Delta}$ of discriminant $\Delta$ in $\K = \Q(\sqrt{\Delta})$ is a finite abelian group that can be decomposed as $$\Cl(\Delta) \simeq \bigoplus_i \Z/d_i\Z,$$ where the divisibility condition $d_{i}|d_{i+1}$ holds. In this paper we investigate improvements in the computation of the group structure of $\Cl(\Delta)$: that is, determining the $d_i$, which is of both cryptographic and number theoretic interest. Indeed, some cryptographic protocols relying on the difficulty of solving the discrete logarithm problem (DLP) in imaginary quadratic orders have been proposed \cite{buchmannProtocol,JacobsonProtocol}, and solving instances of the DLP is closely related to finding the group structure of $\Cl(\Delta)$. In 1968 Shanks \cite{Shanks} proposed an algorithm relying on the baby-step giant-step method in order to compute the structure of the ideal class group of an imaginary quadratic number field in time $O\left( |\Delta |^{1/4 + \epsilon} \right)$, or $O\left( |\Delta |^{1/5 + \epsilon} \right)$ under the extended Riemann hypothesis \cite{LenstraShanks}. This allows us to compute class groups of discriminants having up to 20 or 25 decimal digits. Then a subexponential strategy was described in 1989 by Hafner and McCurley \cite{hafner}. The expected running time of this method is $$e^{ \left( \sqrt{2}+o(1)\right) \sqrt{\log|\Delta|\log\log|\Delta|} }.$$ Buchmann and D\"{u}llmann \cite{dullmann} computed class groups with discriminants of around 50 decimal digits using an implementation of this algorithm. An improvement of this method was published by Jacobson in 1999 \cite{JacobsonPhd}. He achieved a significant speed-up by using sieving strategies to generate the matrix of relations. He was able to compute the structure of class groups of discriminants having up to 90 decimal digits. More recently Sutherland \cite{Sutherland} used generic methods in order to compute class groups with discriminants having 100 decimal digits. Unlike the previous algorithms, this one relies heavily on the particular structure of $\Cl(\Delta)$, thus obtaining variable performance depending on the values of $\Delta$. Our approach is based on that of Jacobson, using new techniques to accelerate both the sieving phase and the linear algebra phase; we have obtained the group structure of class groups of 110 decimal digit discriminants. \section{The ideal class group} In this section we give essential results concerning the ideal class group and the subexponential strategies for computing its structure. For a more detailed description of the theory of ideal class groups we refer to \cite{cohen} and \cite{neukirch}. In the following, $\Delta$ is a non-square integer congruent to 0 or 1 modulo 4, and the quadratic order of discriminant $\Delta$ is defined as the $\Z$-module $$\mathcal{O}_{\Delta} = \Z + \frac{\Delta + \sqrt{\Delta}}{2}\Z.$$ We also denote by $\K$ the field $\Q(\sqrt{\Delta})$. \subsection{Description} Elements of $\Cl(\Delta)$ are obtained from fractional ideals of $\mathcal{O}_{\Delta}$, which are $\Z$-modules of $\K$ of the form: $$\mathfrak{a} = q \left( a\Z + \frac{b + \sqrt{\Delta}}{2}\Z\right), $$ where $a$ and $b$ are integers with $b\equiv \Delta \ \text{mod}\ 2$ and $q$ is a rational number.
The prime ideals are the fractional ideals for which there exists a prime number $p$ such that: $$\p = p\Z + \frac{b_p+\sqrt{\Delta}}{2}\Z\ \ \text{or}\ \ \p = p\Z\ (p \text{ inert in } \K).$$ \begin{definition}[Ideal Class group] Let $\mathcal{I}_{\Delta}$ be the set of invertible fractional ideals of $\mathcal{O}_{\Delta}$, and $\mathcal{P}_{\Delta}=\left\lbrace (\alpha)\in\mathcal{I}_{\Delta},\alpha\in\K\right\rbrace $ the subset of principal ideals. We define the ideal class group of $\Delta$ as : $$\Cl(\Delta) := \mathcal{I}_{\Delta}/\mathcal{P}_{\Delta},$$ where the group law is the one derived from the multiplication of $\Z$-modules. \end{definition} For every $\mathfrak{a}\in\mathcal{I}_{\Delta}$, there exist uniquely determined prime ideals $\p_1,\hdots ,\p_n$ and exponents $e_1,\hdots , e_n$ in $\Z$ such that $$\mathfrak{a} = \p_1^{e_1}\hdots\p_n^{e_n}.$$ Unlike $\mathcal{I}_{\Delta}$, the ideal class group $\Cl(\Delta)$ is a finite group. Its order is called the class number and usually denoted by $h(\Delta)$. It grows like $|\Delta|^{1/2+\epsilon}$, as shown in \cite{siegel}. \subsection{Computing the group structure} The algorithm for computing the group structure of $\Cl(\Delta)$ is divided into two major phases: relation collection and linear algebra. In the first phase, we begin by precomputing a factor base $\mathcal{B} = \left\lbrace \p_1,\hdots,\p_n\right\rbrace $ of non-inert prime ideals satisfying $\mathcal{N}\left( \p_i\right) \leq B$, where $B$ is a smoothness bound. Then we look for relations of the form $$\left( \alpha\right) = \p_1^{e_1}\hdots\p_n^{e_n}, $$ where $\alpha\in\K$. Every $n$-tuple $[e_1,\hdots,e_n]$ collected becomes a row of what we will refer to as the relation matrix $A\in\Z^{m\times n}$. We have from \cite{bach} the following important result: \begin{theorem} Let $\Lambda$ be the lattice spanned by the set of the possible relations. Assuming GRH, if $B\geq 6\log^2\Delta$, then we have $$\Cl(\Delta) \simeq \Z^n/\Lambda.$$ \end{theorem} After the relation collection phase we can test if $A$ has full rank and if its rows generate $\Lambda$ using methods described in \textsection \ref{hnf_algo}. If it is not the case then we have to compute more relations. From now on we assume that $A$ has full rank and that its rows generate $\Lambda$. The linear algebra phase consists of computing the Smith Normal Form (SNF) of $A$. Any matrix $A$ in $\Z^{n\times n}$ with non zero determinant can be written as \[ A = V^{-1}\left( \begin{array}{cccc} d_1 & 0 & \hdots & 0 \\ 0 & d_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \hdots & 0 & d_n \end{array} \right) U^{-1}\], where $d_{i+1} | d_i$ for all $1 \leq i < n$ and $U$ and $V$ are unimodular matrices in $\Z^{n\times n}$. The matrix $\text{diag}(d_1,\hdots,d_n)$ is called the SNF of $A$. If $m=n$ and $\text{diag}(d_1,\hdots,d_n) = \text{SNF}(A)$ then $$\Cl(\Delta) \simeq \bigoplus_{i=1}^{n} \Z/d_i\Z.$$ This reduces the problem of computing the group structure of $\Cl(\Delta)$ to computing the SNF of a relation matrix $A$ in $\Z^{n\times n}$. For an arbitrary $A$ in $\Z^{m\times n}$ we start by computing the Hermite Normal Form (HNF) of $A$. 
A matrix $H$ is said to be in HNF if it has the shape \[ H = \left( \begin{BMAT}(@)[2pt,3cm,3cm]{c}{c.c} \begin{BMAT}(e){cccc}{cccc} h_{1,1}& 0 & \hdots & 0 \\ \vdots & h_{2,2}& \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ * & * & \hdots & h_{n,n} \end{BMAT} \\ \begin{BMAT}[2pt,3cm,1cm]{c}{c} (0) \end{BMAT} \end{BMAT} \right) \], where $0\leq h_{ij} < h_{ii}$ for all $j<i$ and $h_{ij}=0$ for all $j>i$. For each matrix $A$ in $\Z^{m\times n}$ there exists a matrix $H$ in HNF and a unimodular matrix $W$ in $\Z^{m\times m}$ such that $$H = WA.$$ The upper block of $H$ is a $n\times n$ relation matrix whose SNF provides us the group structure of $\Cl(\Delta)$. There is an index $l$ such that $h_{i,i} = 1$ for every $i\geq l$. The upper left $l\times l$ submatrix of $H$ is called the essential part of $H$. In order to compute the group structure of $\Cl(\Delta)$ it suffices to compute the SNF of the essential part of $H$, which happens to have small dimension in our context. \subsection{The use of sieving for computing the relation matrix} The use of sieving to create the relation matrix was first described by Jacobson \cite{JacobsonPhd}. Here we follow the approach of \cite{JacobsonPell} Chap.13, which relies on the following lemma: \begin{lemma} If $\mathfrak{a} = \left( a\Z + \frac{b + \sqrt{\Delta}}{2}\Z\right)$ with $a>0$, then for all $x,y$ in $\Z$ there exists $\mathfrak{b}\in\mathcal{I}_{\Delta}$ such that $\mathfrak{a}\mathfrak{b}\in\mathcal{P}$ and $$\mathcal{N}(\mathfrak{b}) = ax^2 + bxy + \frac{b^2 - \Delta}{4a}y^2.$$ \end{lemma} The strategy for finding relations is the following: We start with $$\mathfrak{a}=\prod_i \p_i^{e_i} =: \left( a\Z + \frac{b + \sqrt{\Delta}}{2}\Z\right),$$ whose norm is $B$-smooth. Then we choose a sieve radius $R$ satisfying $R\approx \sqrt{|\Delta|/2}/\mathcal{N}(\mathfrak{a})$ and we look for values of $x\in[-R,R]$ such that $\varphi(x,1)$ is $B$-smooth where $$\varphi(x,y) = ax^2 + bxy + \frac{b^2 - \Delta}{4a}y^2,$$ which allows us to find $\mathfrak{b} = \prod_i\p_i^{f_i}$ satisfying $\mathfrak{a}\mathfrak{b}=(\gamma)$ for some $\gamma$ in $\K$. The $\p_i$ and $f_i$ are deduced from the decomposition $\varphi(x,1)=\prod_ip_i^{v_i}$. For more details we refer to \cite{JacobsonPell}, Chap 13. This method yields the relation $$(\gamma) = \prod_i \p_i^{e_i+f_i}.$$ Now given a binary quadratic form $\varphi(x,y)=ax^2+bxy+cy^2$ of discriminant $\Delta$, we are interested in finding values of $x\in[-R,R]$ such that $\varphi(x,1)$ is $B$-smooth. This can be done trivially by testing all the possible values of $x$, but there is a well-known method for pre-selecting some values of $x$ in $[-R,R]$ that are going to be tested, namely the quadratic sieve (introduced by Pomerance \cite{pomerance82}). It consists in initializing to 0 an array $S$ of length $2R+1$ and precomputing the roots $r_i'$ and $r_i''$, or the double root $r_i'$, of $\varphi(x,1)\mod p_i$ for each $p_i\leq B$ such that $\left( \frac{\Delta}{p_i} \right) \neq -1$ . Then for each $x$ in $[-R,R]$ of the form $x=r_i+kp_i$ for some $k$, we add $\lfloor\log p_i\rfloor$ to $S[x]$. At the end of this procedure, if $\varphi(x,1)$ is $B$-smooth, then $S[x]\approx\log\varphi(x,1)$. As $\varphi(x,1)\approx \sqrt{\Delta/2}R$, we set a bound \begin{equation}\label{eqK} F = \log\left( \sqrt{\frac{\Delta}{2}}R\right) -T\log(p_n), \end{equation} where $T$ is a number representing the tolerance to rounding errors due to integer approximations. 
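A minimal Python sketch of this sieve may look as follows (with simplifications of our own: a naive search for the roots modulo each prime instead of precomputed modular square roots, and $\log\varphi(R,1)$ in place of $\log(\sqrt{\Delta/2}\,R)$ in the bound $F$).
\begin{verbatim}
import math

def sieve_candidates(a, b, c, R, primes, T=2.0):
    """Quadratic sieve over [-R, R] for phi(x,1) = a x^2 + b x + c.
    Returns the x whose accumulated log score passes the bound F;
    the candidates must still be confirmed by trial division."""
    phi = lambda x: a * x * x + b * x + c   # positive since Delta < 0
    S = [0.0] * (2 * R + 1)
    for p in primes:
        roots = [r for r in range(p) if phi(r) % p == 0]  # naive
        for r in roots:
            x0 = -R + ((r + R) % p)   # smallest x >= -R, x = r (mod p)
            for x in range(x0, R + 1, p):
                S[x + R] += math.log(p)
    F = math.log(phi(R)) - T * math.log(primes[-1])
    return [x for x in range(-R, R + 1) if S[x + R] >= F]
\end{verbatim}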
We then perform a trial division test on every $\varphi(x,1)$ such that $S[x]\geq F$. \section{Practical improvements} In this section we describe the improvements that allowed us to achieve a significant speed-up with respect to the existing algorithm and the computation of class group structures of large discriminants. Our contribution is to take advantage of the large prime variants, of an algorithm due to Vollmer \cite{vollmer} for the SNF which had not been implemented in the past, and of special Gaussian elimination techniques. \subsection{Large prime variants} The large prime variants were developed in the context of integer factorization to speed up the relation collection phase in both the quadratic sieve and the number field sieve. Jacobson considered analogous variants for class group computation \cite{JacobsonPhd}, but the speed-up of the relation collection phase was achieved at the price of such a slow-down of the linear algebra that it did not significantly improve the overall time. The main idea is the following: We define the ``small primes" to be the prime ideals in the factor base and the small prime bound as the corresponding bound $B_1=B$. Then we define a large prime bound $B_2$. During the relation collection phase we choose not to restrict ourselves to relations only involving primes $\p$ in $\mathcal{B}$ but we also keep relations of the form $$(\alpha)=\p_1\hdots\p_n \p \ \ \text{and}\ \ (\alpha)=\p_1\hdots\p_n \p\p'$$ for $\p_i$ in $\mathcal{B}$, and for $\p,\p'$ of norm less than $B_2$. We will respectively refer to them as 1-partial relations and 2-partial relations. Keeping partial relations only involving one large prime is the single large prime variant, whereas keeping two of them is the double large prime variant which was first described by Lenstra and Manasse \cite{Lenstra2LP}. In this paper we do not consider the case of more large primes, but it is a possibility that has been studied in the context of factorization \cite{Lenstra3LP}. Partial relations may be identified as follows. Let $m$ be the residue of $\varphi(x,1)$ after the division by all primes $p\leq B_1$, and assume that $B_2 < B_1^2$. If $m=1$ then we have a full relation. If $m\leq B_2$ then we have a 1-partial relation. We can see here that detecting 1-partial relations is almost for free. If we also intend to collect 2-partial relations then we have to consider the following possibilities: \begin{enumerate} \item $m > B_2^2$; \item $m$ is prime and $m > B_2$; \item $m \leq B_2$; \item $m$ is composite and $B_1^2 < m \leq B_2^2$. \end{enumerate} In Cases 1 and 2 we discard the relation. In Case 3 we have a 1-partial relation, and in Case 4 we have $m=pp'$ where $p = \mathcal{N}(\p)$ and $p' = \mathcal{N}(\p')$. After testing if we are in Cases 1, 2, or 3 we have to factorize the residue. We have done that using Milan's implementation of the SQUFOF algorithm \cite{tifa} based on the theoretical work of \cite{squfof}. Even though we might have to factor the residue, collecting a partial relation is much faster than collecting a full relation because the probability that $\mathcal{N}(\mathfrak{b})$ is $B_2$-smooth is much greater than the probability that it is $B_1$-smooth. 
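The residue classification above can be sketched as follows (a hedged illustration: sympy's \texttt{isprime} and \texttt{factorint} stand in for the primality test and the SQUFOF-based factorization actually used).
\begin{verbatim}
from sympy import isprime, factorint  # stand-ins for the SQUFOF code

def classify_residue(m, B1, B2):
    """Classify the residue m of phi(x,1) after dividing out all
    primes <= B1, following Cases 1-4 (assumes B2 < B1**2)."""
    if m == 1:
        return "full"
    if m > B2 ** 2:
        return "discard"          # Case 1
    if m <= B2:
        return "1-partial"        # Case 3: one large prime m
    if isprime(m):
        return "discard"          # Case 2: prime residue above B2
    # Case 4: m = p * p'; in practice both factors must be <= B2
    return "2-partial", factorint(m)
\end{verbatim}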
This improvement in the speed of the relation collection phase comes at a price: the number of columns in the relation matrix is much greater, thus preventing us from running the linear algebra phase directly on the resulting relation matrix and forcing us to find many more relations, since we have to produce a full rank matrix. We will see in \textsection \ref{gauss} how to reduce the dimensions of the relation matrix using Gaussian elimination techniques and in \textsection \ref{opt} how to optimize the parameters to make the creation of the relation matrix faster, even though there are many more relations to be found. \subsection{Gaussian elimination techniques}\label{gauss} Traditionally, rows were recombined to give full relations as follows: in the case of 1-partial relations, any pair of relations involving the same large prime $\p$ was recombined into a full relation. In the case of 2-partial relations, Lenstra \cite{Lenstra2LP} described the construction of a graph whose vertices were the relations and whose edges linked vertices having one large prime in common. Finding independent cycles in this graph allows us to find recombinations of partial relations into full relations. In this paper we rather follow the approach of Cavallar \cite{Cavallar}, developed for the number field sieve, which uses Gaussian elimination on columns without distinguishing those corresponding to the large primes from the others. One of the main differences between our relation matrices and the matrices produced in the number field sieve is that our entries are in $\Z$ rather than $\F_2$, thus obliging us to monitor the evolution of the size of the coefficients. Indeed, eliminating columns at the price of an explosion of the size of the coefficients can be counter-productive in preparation for the HNF algorithm. In what follows we will use a few standard definitions that we briefly recall here. First, subtracting two rows is called \textit{merging}. This is because rows are stored as lists of the non-zero entries sorted with respect to the corresponding columns, and subtracting them corresponds to merging the two sorted lists. If two rows $r_1$ and $r_2$ share the same prime $\p$ with coefficients $c_1$ and $c_2$ respectively, then multiplying $r_1$ by $c_2$ and $r_2$ by $c_1$ and merging is called \textit{pivoting}. Finally, finding a sequence of pivots leading to the elimination of a column of Hamming weight $k$ is a $k$-way merge. We aim to reduce the dimension of the relation matrix by performing $k$-way merges on the columns of weight $k=1,\hdots,w$ in increasing order, for a certain bound $w$. Unfortunately, the density of the rows and the size of the coefficients increase during the course of the algorithm, thus obliging us to use optimized pivoting strategies. In what follows we describe an algorithm performing $k$-way merges to minimize the growth of both the density and the size of the coefficients. First we have to define a cost function, defined over the set of rows, encapsulating the difficulty induced for the HNF algorithm. In factorization, we want to find a vector in the kernel of the relation matrix, which is defined over $\F_2$; the only property of the row that really matters is its Hamming weight. In our context, we need to minimize the Hamming weight of the row, but we also have to take into account the size of the coefficients. Different cost functions lead to different elimination strategies.
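For concreteness, the pivoting operation just defined can be sketched on sparse rows stored as dictionaries mapping primes to exponents (the sign and ordering conventions below are our own).
\begin{verbatim}
def pivot(r1, r2, p):
    """Pivot rows r1, r2 (dicts prime -> exponent) sharing prime p:
    returns c1*r2 - c2*r1, in which the column of p is eliminated."""
    c1, c2 = r1[p], r2[p]
    out = {}
    for key in set(r1) | set(r2):
        v = c1 * r2.get(key, 0) - c2 * r1.get(key, 0)
        if v != 0:
            out[key] = v
    assert p not in out       # the shared prime has been eliminated
    return out
\end{verbatim}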
Our cost function was determined empirically: we took the number of non-zero entries, counting $c$ times those whose absolute value was above a bound $Q$, where $c$ is a positive number. If $r = [e_1,\hdots,e_n]$ corresponds to $(\alpha)=\prod_i\p_i^{e_i}$ then $$C(r) = \sum_{1\leq|e_i|\leq Q}1 + c\sum_{|e_j| > Q}1.$$ Indeed, as we will see, matrices with small entries are better suited for the HNF algorithm described in \textsection \ref{hnf_algo}. Let us assume now that we are to perform a $k$-way merge on a given column. We construct a complete graph $\mathcal{G}$ of size $k$ as follows: \begin{itemize} \item The vertices are the rows $r_i$. \item Every edge linking $r_i$ and $r_j$ is labeled by $C(r_{ij})$, where $r_{ij}$ is obtained by pivoting $r_i$ and $r_j$. \end{itemize} Finding the best sequence of pivots with respect to the cost function $C$ we chose is equivalent to finding the minimum spanning tree $\mathcal{T}$ of $\mathcal{G}$, and then recombining every row $r$ with its parent, starting with the leaves of $\mathcal{T}$. Unfortunately, some coefficients might grow during the course of column eliminations despite the use of this strategy. Once a big coefficient is created in a given row $r$, it is likely to spread to other rows once $r$ is involved in another column elimination. We must therefore discard such rows as quickly as possible. In our implementation we chose to do it regularly: once we have performed all the $k$-way merges for $k\leq 10\cdot i$ and $i=1,\hdots,w/10$, we discard a fixed number $K$ of the rows containing the largest coefficients. We show in Table \ref{TabCrunch} the effect of the use of a cost function taking into account the size of the coefficients and the regular discard of the worst rows for $\Delta = -4(10^{70}+1)$ with $c = 100$, $Q = 8$ and $K = 10$. We kept track of the evolution of the dimensions of the matrix, the average Hamming weight of the rows, and the maximum and minimum size of the coefficients. In the first case we use the traditional cost function that only takes into account the Hamming weight of the rows and we keep deleting the worst rows regularly; this corresponds to taking $c=1$ and $K=10$. In the second case, we use the cost function described above but without row elimination, by setting $c=100$ and $K=0$. In the third case, we combine the two ($c=100$ and $K=10$). We clearly see that the coefficients are properly monitored only in the latter case. Indeed, using a cost function that does not take into account the size of the coefficients and just discarding the worst rows regularly seems more efficient in terms of reduction of the matrix dimension, but the row corresponding to $i=12$ (that is to say, after all the 120-way merges) clearly shows that we run the risk of an explosion of the coefficients. \begin{figure}[h!] 
\caption{Comparative table of elimination strategies} \label{TabCrunch} \small \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{\textbf{Without score depending on the size of the coefficients}} \\ \hline $i$ & Row Nb & Col Nb & Average weight & max & min \\ \hline 0 & 38752 & 45975 & 22 & 10 & -10\\ 2 & 2334 & 1668 & 76 & 21 & -20\\ 4 & 2123 & 1477 & 117 & 52 & -56\\ 6 & 2028 & 1402 & 146 & 59 & -62\\ 8 & 1951 & 1345 & 175 & 72 & -65\\ 10 & 1890 & 1304 & 203 & 193 & -196\\ 12 & 1836 & 1270 & 219 & 212 & -2147483648\\ \hline \multicolumn{6}{|c|}{\textbf{Without row elimination}} \\ \hline $i$ & Row Nb & Col Nb & Average weight & max & min \\ \hline 0 & 38752 & 45975 & 22 & 10 & -10\\ 2 & 2373 & 1687 & 79 & 30 & -40\\ 4 & 2224 & 1538 & 118 & 67 & -50\\ 6 & 2158 & 1472 & 148 & 71 & -132\\ 8 & 2117 & 1431 & 179 & 2648 & -10568\\ 10 & 2097 & 1411 & 196 & 347136 & -337920\\ 12 & 2080 & 1394 & 214 & 268763136 & -173162496\\ \hline \multicolumn{6}{|c|}{\textbf{With adapted score and row elimination}} \\ \hline $i$ & Row Nb & Col Nb & Average weight & max & min \\ \hline 0 & 38752 & 45975 & 22 & 10 & -10\\ 2 & 2357 & 1691 & 76 & 17 & -17\\ 4 & 2176 & 1530 & 114 & 27 & -30\\ 6 & 2074 & 1448 & 149 & 37 & -37\\ 8 & 2013 & 1407 & 177 & 43 & -43\\ 10 & 1958 & 1372 & 199 & 44 & -45\\ 12 & 1908 & 1342 & 224 & 54 & -53\\ \hline \end{tabular} \end{center} \end{figure} \normalsize \subsection{Vollmer's algorithm for computing the HNF}\label{hnf_algo} In \cite{JacobsonPhd} it has been observed that the algorithm used to compute the HNF of the relation matrix relied heavily on the sparsity of the matrix. While recombinations of the kind described in \cite{Lenstra2LP} or the techniques of \textsection\ref{gauss} reduce the dimensions of the matrix, they also dramatically increase the density of the matrix, thus slowing down the computation of the HNF. We had to find an HNF algorithm whose features were adapted to our situation. Vollmer described in \cite{vollmer} an algorithm of polynomial complexity depending on the capacity to solve diophantine linear systems, but not on the density of the matrix. It was not implemented at the time because there was no efficient diophantine linear system solver available. We implemented Vollmer's algorithm using the IML \cite{iml} library provided by Storjohann. Here we give a brief description of the algorithm (for more details we refer to \cite{vollmer}). We assume we have created an $m\times n$ relation matrix $A$ of full rank. For each $i\leq n$, we define two matrices \[ A_i = \left( \begin{BMAT}(e)[2pt,3cm,3cm]{ccc}{ccc} a_{1,1} & \hdots & a_{m,1}\\ \vdots & & \vdots \\ a_{1,i} & \hdots & a_{m,i} \end{BMAT} \right) \ \ \text{and}\ \ e_i = \left( \begin{BMAT}(e)[2pt,0pt,3cm]{c}{cccc} 0 \\ \vdots \\ 0 \\ 1 \end{BMAT} \right). \] For each $i$, we define $h_i$ to be the minimal denominator of a rational solution of the system $$A_ix = e_i;$$ this is computed using the function \texttt{MinCertifiedSol} of IML, which is an implementation of (Special)MinimalSolution from \cite{mulders}, and used in \cite{vollmer} for the complexity analysis. In \cite{vollmer} it is shown that $$h(\Delta) = \prod_i h_i.$$ Fortunately, analytic formulae allow us to compute a bound $h_*$ such that $$h_*\leq h(\Delta) < 2h_*,$$ so we do not have to compute $h_i$ for every $i\in[1,n]$. 
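Before stating the procedure formally, a rough Python sketch of this accumulation is given below. It is only an illustration: the lcm of the denominators of one particular rational solution (computed here with sympy) merely stands in for the certified minimal denominator returned by IML's \texttt{MinCertifiedSol}.
\begin{verbatim}
from sympy import Matrix, ilcm

def class_number_sketch(A, h_star):
    """Accumulate denominators h_i of solutions of A_i x = e_i
    until their product reaches h_star (cf. the algorithm below)."""
    m, n = A.shape
    h, i = 1, n
    while h < h_star and i > 0:
        A_i = A[:, :i].T                     # i x m system A_i x = e_i
        e_i = Matrix([0] * (i - 1) + [1])
        sol, params = A_i.gauss_jordan_solve(e_i)
        sol = sol.subs({t: 0 for t in params})  # one rational solution
        h *= ilcm(1, *[entry.q for entry in sol])
        i -= 1
    return h
\end{verbatim}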
In addition, the matrices produced for the computation of the group structure of $\Cl(\Delta)$ have a small essential part, which keeps the number of diophantine systems to solve small (about the same size as the number of columns of the essential part), as shown in \cite{vollmer}. \begin{algorithm}[H] \caption{Computation of the class number} \begin{algorithmic} \REQUIRE $\Delta$, relation matrix $A$ of full rank and $h_*$ \ENSURE $h (\Delta) $ \STATE $h\leftarrow 1$ \STATE $i\leftarrow n$ \WHILE {$h < h_*$} \STATE Compute the minimal denominator $h_i$ of a solution of $A_i\cdot x=e_i$ \STATE $h\leftarrow h\cdot h_i$ \STATE $i\leftarrow i-1$ \ENDWHILE \RETURN $h$ \end{algorithmic} \end{algorithm} We can compute the essential part of the HNF of $A$ with a little extra effort involving only modular reductions of coefficients; we refer to \cite{vollmer} for more details. This part of the algorithm is highly dependent on the performance of the diophantine solver we use, which in turn is mostly influenced by the number of columns of the matrix and the size of the coefficients. The benchmarks available \cite{iml} show that the algorithm runs much faster on matrices with 3-bit coefficients, which is why we took coefficient size into account in the cost function for the Gaussian elimination. \section{Optimization of the parameters}\label{opt} In this section we proceed to optimize the parameters involved in the relation collection phase. Each parameter has an effect on the overall time taken to compute the group structure of $\Cl(\Delta)$. Recall \eqref{eqK} giving the bound $F$; when we collect partial relations it should be adapted in the following way: $$F = \log\left( \sqrt{\frac{\Delta}{2}} R \right) -T\log B_2,$$ where $B_2$ is the large prime bound. \subsection{Optimization of $T$} The parameter $T$ represents the tolerance to rounding errors in the traditional sieving algorithms. Its value is empirically determined, and usually lies in the interval $[1,2]$. In the large prime variant it also encapsulates the number of large primes we want to allow. Indeed, if there were no rounding errors one would expect this value to be 1 for one large prime and 2 for two large primes. In practice, we can exhibit an optimum value which differs slightly from what we would expect. In Figure \ref{opt_T} we show the overall running time of the algorithm when the parameter $T$ varies between 1.5 and 3.5 for the discriminant $\Delta = -4(10^{75}+1)$. The size of the factor base taken is 3250, the ratio $B_2/B_1$ equals 120, and we allow two large primes. \begin{figure}[!h] \caption{Optimum value of $T$} \label{opt_T} \begin{center} \includegraphics[angle=-90,scale=0.3]{opt_T.eps} \end{center} \end{figure} One of the main issues for determining the optimal value of $T$ is that it tends to shift when one modifies the value of $B_1$, the rest being unchanged. Indeed, if for example $B_2/B_1 = 120$ then $$F = \log\left( \sqrt{\frac{\Delta}{2}} R \right) -T\log (120B_1),$$ so when we increase $B_1$ we have to lower $T$ to compensate. Figure \ref{opt_FB} illustrates this phenomenon on the example $\Delta = -4(10^{75}+1)$, with two large primes. 
\begin{figure}[!h] \caption{Effect of $|\mathcal{B}|$ on the optimal value of $T$} \label{opt_FB} \begin{center} \includegraphics[angle=-90,scale=0.3]{opt_FB.eps} \end{center} \end{figure} In Figure \ref{opt_FB_n} we study the evolution of the optimal value of $T$ for the single and double large prime variants on discriminants of the form $-4(10^n+1)$ where $n$ ranges between 60 and 80. It appears that, as we expected, the optimal value for the double large prime variant is greater than the one corresponding to the single large prime variant. This value is between 2 and 2.3 for one large prime and around 2.7 when we allow two large primes. \begin{figure}[!h] \caption{Optimal value of $T$ when $n$ varies} \label{opt_FB_n} \begin{center} \includegraphics[angle=-90,scale=0.3]{opt_T_n.eps} \end{center} \end{figure} \subsection{ The size of the factor base} The optimal size of the factor base reflects the trade-off between the time spent on the relation collection phase and on the linear algebra phase. This optimum is usually not the size that minimizes the time spent on the relation collection phase. To illustrate this, Figure \ref{opt_FB_time} shows the time taken by the algorithm for $\Delta = -4(10^{75}+1)$ with $B_2/B_1 = 120$ and the corresponding optimal $T$. \begin{figure}[!h] \caption{Optimal value of $|\mathcal{B}|$} \label{opt_FB_time} \begin{center} \includegraphics[angle=-90,scale=0.3]{opt_FB_time.eps} \end{center} \end{figure} The optimal size of the factor base increases with the size of the discriminant. Figure \ref{opt_FB_time_n} shows the optimal size of the factor base for discriminants of the form $-4(10^n+1)$ as $n$ ranges between 60 and 80 for both one large prime and two large primes. We notice that the single large prime variant requires smaller factor bases than without large primes, and bigger factor bases than the double large prime variant. \begin{figure}[!h] \caption{Optimal value of $|\mathcal{B}|$ when $n$ varies} \label{opt_FB_time_n} \begin{center} \includegraphics[angle=-90,scale=0.3]{opt_FB_n.eps} \end{center} \end{figure} \subsection { The ratio $B_2/B_1$ } Theoretically $B_2$ should not exceed $B_1^2$. In practice, when the ratio $B_2/B_1$ is too high we lose time taking into account partial relations involving primes that are so large that they are very unlikely to occur twice and to lead to a recombination. This phenomenon is known in the context of factorization, and 120 is a common choice of value of $B_2/B_1$ (see \cite{contini}). We ran experiments using 12, 120 and 1200 as values for the ratio $B_2/B_1$. Figure \ref{TabRatio} shows the results for $\Delta = -4(10^{75}+1)$ with two large primes. We give the optimum timings for each value of the size of the factor base, and compare those values for the three different ratios. It appears that 120 is indeed the best choice, but the performance of the algorithm is not highly dependent on this parameter. \begin{figure}[h!] 
\caption{Comparative timings (in seconds) for different values of the ratio $B_2/B_1$} \label{TabRatio} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $|\mathcal{B}|$ & $B_2/B_1=12$ & $B_2/B_1=120$ & $B_2/B_1=1200$ \\ \hline 3000 & 6399.60 & \textbf{6051.11} & 6173.66 \\ 3250 & 6795.43 & \textbf{6185.67} & 6754.02 \\ 3500 & \textbf{6539.69} & 6821.77 & 6754.02 \\ 3750 & 6916.93 & \textbf{6750.88} & 7456.92 \\ 4000 & 6671.18 & \textbf{6390.48} & 7009.72\\ \hline \end{tabular} \end{center} \end{figure} \section{Computational results} \subsection{Comparative timings} In Figure \ref{TabComp} we give comparative timings in seconds between no large primes and the large prime variants for discriminants of the form $-4(10^n+1)$, for $n$ between 60 and 80. We used 2.4GHz Opterons with 16GB of memory, and the NTL library with GMP. It appears that we achieved a significant speed-up by using the large prime strategy. Direct comparison with previous methods based on sieving is hard since the timings available in \cite{JacobsonPhd} were obtained on 296 MHz UltraSPARC-II processors; therefore we just quote that the computation of the group structure corresponding to $\Delta = -4(10^{80}+1)$ took 5.37 days (463968 seconds of CPU time) at the time. We also notice that the double large prime variant does not provide an impressive improvement on the overall time for the sizes of discriminant involved. The performance is comparable for discriminants of 60 decimal digits and starts showing an improvement when we reach 75 digit discriminants. \begin{figure}[h!] \caption{Comparative table of the performances (CPU time in seconds)} \label{TabComp} \begin{center} \begin{tabular}{|c|c|c|c|} \hline $n$ & 0 Large primes & 1 Large prime & 2 Large primes \\ \hline 60 & 374 & 284 & 280 \\ 65 & 1019 & 756 & 776\\ 68 & 2010 & 1489 & 1122 \\ 70 & 2148 & 1663 & 1680 \\ 75 & 8409 & 6669 & 5347 \\ 80 & 21215 & 17123 & 14664\\ \hline \end{tabular} \end{center} \end{figure} \subsection{Large discriminants} In the imaginary case, the largest class groups that had been computed using relation collection methods had 90 digits; some 100 decimal digit discriminant class group structures could be computed using the techniques of \cite{Sutherland}. With the techniques described in this paper, we achieved the computation of a class group with a 110 decimal digit discriminant. We used 100 Core2 Duo 2.4GHz Pentium IV processors with 2 GB of memory each for the sieving, and one 2.66 GHz Opteron with 64 GB of memory for the linear algebra, which is the real bottleneck of this algorithm. Indeed, the sieving phase can be trivially parallelized over as many processors as we have and does not require much memory, whereas the linear algebra can only be parallelized into the number of factors of $h$ that we get from Vollmer's algorithm (around 10 in our examples) and requires a lot of memory. Indeed the limit in terms of matrix dimensions for the diophantine solver on a 64GB memory computer seems to be around 10000 columns. For comparison, in the case of the 110 decimal digit discriminant we had to handle an 8000-column matrix (after the Gaussian reduction). \begin{figure}[h!] 
\caption{Decomposition of $\Cl(\Delta)$ for $\Delta = -4(10^n+1)$} \label{Tab_decomp} \begin{center} \begin{tabular}{|l|l|} \hline n & decomposition \\ \hline 100 & $C(2)^7\times C(1462491779472195274571694315857495335176880879072)$\\ 110 & $C(2)^{11}\times C(8576403641950292891121955131452148838284294200071440)$\\ \hline \end{tabular} \end{center} \end{figure} \section*{Acknowledgements} The author thanks Andreas Enge for his support on this project, the fruitful discussions we had, and a careful reading of this article. We thank Nicolas Th\'{e}riault and all the organizing committee of the conference CHILE 2009, where the original results of this paper were first presented. We also thank J\'{e}r\^{o}me Milan for his support on issues regarding implementation, especially with the TIFA library.
\section{Introduction} \label{sec:intro} \begin{figure}[t] \includegraphics[width=1\columnwidth]{figs/hp_sensibility.png} \caption{Performance sensitivity of the state-of-the-art method SpCL \cite{ge2020self} with respect to the parameter $\epsilon$ (the maximum neighborhood distance) of DBSCAN \cite{ester1996density} for two different cross-dataset experiments. {HyPASS}{} consistently ensures a better HP choice.} \label{fig:sensibility} \end{figure} Re-identification ({re-ID}) aims at retrieving images of a person or an object of interest captured by different cameras. While supervised learning has achieved excellent performance on widely used {re-ID}{} datasets \cite{ye2021deep}, it suffers from a significant drop in performance when {re-ID}{} models are evaluated cross-dataset, i.e., on images of a target context different from the training context. To avoid manual annotation, the computer vision community has become increasingly involved in seeking how to transfer the knowledge of a {re-ID}{} model from a source domain to a target domain without identity (ID) labels on the target domain. Creative Unsupervised Domain Adaptation (UDA) methods for {re-ID}{} have been designed. These methods are tailored to the open-set nature of {re-ID}{}, in which the classes of individuals at test time are different from those seen during the training stage.\\ In particular, pseudo-labeling approaches have proven to be the best UDA methods to learn ID-discriminative features for the target domain \cite{zhong19enc} \cite{song2020unsupervised} \cite{ge2020self}. For this purpose, these methods rely on generating artificial labels for the unlabeled target training data. Due to the open-set nature of the UDA {re-ID}{} task, pseudo-labels are generally generated by clustering the target training samples \cite{song2020unsupervised, ge2020mutual, ge2020self}. To this end, it is necessary to specify values for the hyperparameters (\textbf{HP}) that configure the clustering algorithm. Density-based clustering algorithms \cite{beeferman2000agglomerative, McInnes2017, ester1996density} are the most widespread in the UDA re-ID literature. In particular, DBSCAN \cite{ester1996density} is used for its effectiveness in a large majority of pseudo-labeling approaches, including the best performing ones \cite{ge2020self, zhai2020multiple}. For DBSCAN, one hyperparameter to set is $\epsilon$, defined as the maximum neighborhood distance. Despite the development of approaches robust to noise in pseudo-labels \cite{ge2020mutual, ge2020self}, their final performance is still quite sensitive to the choice of $\epsilon$. In Fig.~\ref{fig:sensibility}, there is a limited range of $\epsilon$ values for which the performance of SpCL \cite{ge2020self}, one of the best state-of-the-art methods, remains near-optimal and not very sensitive. Indeed, given a cross-dataset task, for example PersonX$\rightarrow$Market (the re-ID datasets are presented later in Sec.~\ref{sec:dataset}), these values seem concentrated in a range around $\epsilon = 0.5$, where performance reaches a mAP of $75.8 \%$. However, if $\epsilon$ is set to $0.6$, performance drops to $72.2\%$. For $\epsilon = 0.7$, the performance drop is even sharper: down to $7.8\%$.\\ \newline Therefore, selecting a suitable value for this critical HP is crucial to obtain the best performance. This behavior is not specific to DBSCAN and the same can be said for the HP k of k-means (this will be discussed later in Sec.~\ref{sec:clustering} with Fig.~\ref{fig:sensibility_mmt}). 
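This sensitivity is easy to reproduce: the following sketch (assuming scikit-learn; the random features are mere stand-ins for learned re-ID embeddings) clusters $L_2$-normalized features at several values of $\epsilon$ and shows how strongly the resulting number of clusters reacts to this single HP.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

feats = normalize(np.random.randn(2000, 256))  # stand-in embeddings
for eps in (0.4, 0.5, 0.6, 0.7):
    labels = DBSCAN(eps=eps, min_samples=4).fit_predict(feats)
    n_clusters = len(set(labels)) - (-1 in labels)  # -1 marks noise
    print(eps, n_clusters)
\end{verbatim}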
The lack of labels for the target data makes this selection non-trivial in the UDA context. Unlike in the supervised setting, it is impossible to form a labeled validation set to do HP tuning with a {re-ID}{} performance metric on the target domain (mAP, rank-1...). The state-of-the-art for UDA re-ID \cite{song2020unsupervised,ge2020self} sets these critical pseudo-labeling HP (like $\epsilon$) by validation on one adaptation task (e.g. PersonX$\rightarrow$MSMT) with a \emph{labeled} target validation data set, then uses this empirical value for other adaptation tasks. This empirical setting strategy assumes that an HP value selected on one adaptation task transfers well to another one. However, this assumption only holds to a certain extent and, to our knowledge, there is no rule to know in advance how well this value transfers to a new task in the UDA setting. In Fig.~\ref{fig:sensibility}, by using this strategy for the SpCL method \cite{ge2020self}, with the best value of $\epsilon$ on PersonX$\rightarrow$MSMT ($\epsilon = 0.6$), we get a mAP of $72.2\%$ on the PersonX$\rightarrow$Market task. However, if we had chosen $\epsilon = 0.5$ we could have obtained a better mAP of $75.8\%$. This indicates that empirical setting has its limits and that a task-specific choice of HP would be more desirable in order to get the maximum performance out of the pseudo-labeling method. Again, these remarks also apply to other clustering algorithms (see \cite{ge2020mutual} and Fig.~\ref{fig:sensibility_mmt} for k-means). Moreover, the clusters depend on the learned feature representation. As the feature representation varies through learning, this HP choice might even be better if we could cyclically adjust its value to the learned feature representation before each pseudo-label update by clustering.\\ \newline Motivated by the above concerns, we propose to improve existing pseudo-labeling methods by an automatic and cyclic selection of clustering HP suitable to the adaptation task and feature representation. To achieve this goal, our contribution is twofold: \begin{itemize} \item Theoretical modeling and insights that shed light on the conditions under which source-based validation is relevant for the UDA re-ID clustering task are provided. \item A novel method to automate the selection of clustering HP used by pseudo-labeling approaches is proposed: HyperParameters Automated by Source \& Similarities ({HyPASS}{}). It consists in (i) a source-guided automatic HP tuning performed before each clustering phase and (ii) a conditional domain alignment of feature similarities with source ID-discriminative features applied during the training phase to improve HP selection. \end{itemize} Extensive experiments on commonly used and challenging {re-ID}{} tasks for people or vehicles, together with ablative studies, show that {HyPASS}{} can be integrated into the best pseudo-labeling methods and consistently improves re-ID performance compared to a less well-chosen HP value obtained by empirical setting. The paper is structured as follows: In Sec.~\ref{sec:related}, we review the literature on UDA re-ID and HP selection for UDA classification. Then, in Sec.~\ref{sec:theory}, we present our theoretical grounds on clustering HP selection in the UDA re-ID setting. This motivates the design of {HyPASS}{} presented in Sec.~\ref{sec:algo_practice}. 
In Sec.~\ref{sec:experiments}, {HyPASS}{} is evaluated on commonly used and challenging cross-dataset benchmarks, and a thorough analysis and discussion of its components and training computation time are conducted. \newpage \section{Related Work} \label{sec:related} \subsection{Unsupervised Domain Adaptation for {re-ID}} \label{sec:uda_reid} State-of-the-art methods for UDA {re-ID}{} can be divided into two main families: \textit{Domain translation} and \textit{Pseudo-labeling} methods. \subsubsection{Domain Translation Methods} \label{sec:domain_translation} On the one hand, \textit{Image-to-Image translation} methods aim at reducing the domain discrepancy at the pixel level. A generative model \cite{zhu2017unpaired} learns to translate images from one domain to another while preserving some class-related information. Source images are translated into the target style and then used with their original labels to learn a {re-ID}{} model for the target domain in a supervised way \cite{wei2018person, deng2018image, peng2019cross}. Existing works \cite{zhong2018generalizing, qi2019novel} further reduce the domain discrepancy at the camera level, with additional target camera labels. But overall, images translated into the target style highly depend on the quality of the generated images and the source domain appearances, thus failing to capture specific target {re-ID}{} cues. On the other hand, the domain discrepancy can also be directly tackled at the feature level with \textit{Domain Invariant Feature} learning. In existing works, various assumptions are made to learn a domain-shared space, such as a semantic attribute feature space \cite{lin2018multi}, a {re-ID}{} disentangled/factorized feature space \cite{chang2019disjoint, li2018adaptation, li2019cross}, or a {re-ID}{} feature space learned by regularizing the model with an unsupervised domain discrepancy loss to align the source and target feature distributions \cite{lin2018multi, mekhazni2020unsupervised}. Like Image-to-Image translation, Domain Invariant Feature learning cannot learn target-specific discriminative features that are not shared with the source domain. \subsubsection{Pseudo-Labeling Methods} \label{sec:pseudo_labeling} Pseudo-labeling methods generally exploit a source-trained model to initialize pseudo-identity labels for target data. The pseudo-labels are generated by clustering the target data feature representations obtained by this model. Some works on pseudo-labeling define their own strategy to assign labels to target data based, for example, on similarity to a selected set of prototypes \cite{yu2019unsupervised, zhong19enc, luogeneralizing, wang2020unsupervised, lin2019bottom, zeng2020hierarchical}. Most pseudo-labeling methods are built on a self-learning iterative paradigm which alternates between (i) optimization for target {re-ID}{} feature learning on target images with the most recently optimized model and (ii) pseudo-label prediction (pseudo-labeling) by feature clustering \cite{song2020unsupervised, zhang2019self, jin2020global, tang2019unsupervised, zhai2020ad, zou2020joint, yang2019asymmetric, chendeep, ge2020mutual, zhai2020multiple, zhao2020unsupervised, zou2020joint,peng2020unsupervised, Zhang_2021_CVPR, Yang_2021_CVPR}. 
Most of these works improve the classical self-learning algorithm so as not to overfit the pseudo-label errors, by using teacher-student or ensemble-of-experts models \cite{ge2020mutual, zhao2020unsupervised, zhai2020multiple}, while other approaches focus on designing efficient sample selection and outlier detection strategies \cite{yang2019asymmetric, chendeep}. More robust frameworks are also designed by optimizing losses based on distance distributions \cite{jin2020global, liu2020domain}, by leveraging local features \cite{fu2019self}, intra-inter camera features \cite{Xuan_2021_CVPR,lin2020unsupervised}, the labeled source samples \cite{dub2020}, multiple cluster views \cite{feng2021complementary} or attention-based models \cite{jiattention}, or by mixing pseudo-labels with domain-translation methods \cite{zhai2020ad, tang2019unsupervised, zou2020joint, Chen_2021_CVPR}, online pseudo-label refinement strategies, temporal ensembling and label propagation \cite{Zhang_2021_CVPR, Zheng_2021_CVPR}, or meta learning \cite{Yang_2021_CVPR}. A recent approach, SpCL \cite{ge2020self}, proposed self-contrastive learning during the training phase, leveraging both the source and target samples. Most of the above-mentioned pseudo-labeling methods, including the best and most recent ones, use DBSCAN to pseudo-label the target training samples \cite{song2020unsupervised, zhang2019self, jin2020global, tang2019unsupervised, zhai2020ad, zou2020joint, mekhazni2020unsupervised, yang2019asymmetric, chendeep, zhao2020unsupervised, zou2020joint, ge2020mutual, zhai2020multiple}. They are all potentially affected by the clustering sensitivity to hyperparameters, as shown in \cite{song2020unsupervised} and illustrated in Fig.~\ref{fig:sensibility}, where the performance of the best state-of-the-art method, SpCL, depends on the choice of a critical HP. Other approaches, using less common clustering algorithms, also seem affected (shown later in Sec.~\ref{sec:clustering} with Fig.~\ref{fig:sensibility_mmt} for k-means). Moreover, to our knowledge, they all choose a fixed empirical value to set this HP, which remains the same no matter the adaptation task and through all the pseudo-labeling cycles. The performance of these approaches may suffer from this restricted HP setting. Our contribution aims at overcoming those limiting aspects by integrating a new automatic and cyclic HP selection phase into the pseudo-labeling cycle. It is also designed to be general, so that it can be easily integrated into, and improve, any existing or future pseudo-labeling approach. \subsection{Hyperparameter Selection for UDA classification} \label{sec:hp_classif} As HP selection in the UDA setting has been studied, to our knowledge, only for the classification task, we focus on the related work for this task. In UDA classification, HP selection remains a major problem. Many approaches in UDA classification use the same strategy as UDA re-ID pseudo-labeling methods: the empirical setting of HP values, reused across different cross-dataset adaptation tasks \cite{tzeng2017adversarial, pinheiro2018unsupervised, saito2018maximum, pan2020unsupervised}. Manually labeling a part of the target dataset to make a validation set \cite{hoffman2018cycada} is out of the UDA context. The use of a source validation set \cite{ganin2016domain, peng2018visda} offers a biased estimation of the target expected classification risk because of the domain discrepancy.
Importance weighting methods \cite{sugiyama2007covariate, long2018conditional, cortes2010learning} tackle this issue by weighting the estimated risk with source samples, but they still suffer from high-variance estimation. The recent work \cite{you2019towards} improves these approaches and proposes an importance-weighted cross-validation in the feature space to reduce the source estimator variance. However, two major aspects prevent its application to HP selection for the pseudo-labeling UDA clustering. First, it requires the estimation of the probability densities of the source and target distributions (in the feature space). If cyclically integrated into a pseudo-labeling framework, these densities would have to be re-estimated before each update of the pseudo-labels by clustering. This would be hard to integrate into existing pseudo-labeling methods, computationally expensive, and the ratio of estimated densities could amplify approximation errors. Second, the approach is designed for classification problems only, which differ from the clustering task. \\ To our knowledge, there is no general work on clustering HP selection adapted to UDA pseudo-labeling. This is why we recast the theory behind these source-leveraging approaches \cite{sugiyama2007covariate, long2018conditional, cortes2010learning, you2019towards} to fit the clustering task. Moreover, in order to better integrate it into pseudo-labeling approaches, our approach departs from those works by avoiding the estimation of importance weights: we propose to optimize the model for domain alignment in the feature similarity space with source ID-discriminative features, so as to improve the estimation with a source validation set by reducing its variance. \section{Theoretical Grounds of Hyperparameter Selection for Clustering in UDA {re-ID}} \label{sec:theory} The selection of an HP $\lambda \in \mathbb{R}^{n_{\lambda}}, n_{\lambda} \in \mathbb{N^*}$ consists in finding the value $\lambda^* \in \mathbb{R}^{n_{\lambda}}$ that minimizes a defined expected risk. Unlike the model's learnable parameters, HPs are not directly learned during the training loop of a machine learning pipeline. A typical strategy to estimate $\lambda^*$ is model selection: among a set of candidate models defined by different HP values, we choose the one that gives the lowest empirical risk. This strategy is not applicable in the UDA setting because target annotations are not available. Moreover, as discussed in Sec.~\ref{sec:hp_classif}, existing approaches (for classification) are not directly suited to {re-ID}{}. The goal of this section is thus to provide theoretical insights into two questions: How do the source data bias the target risk estimation? How can this bias be overcome? We first introduce notations and the problem formulation (Sec.~\ref{sec:notations}). Then we define the expected risk to optimize for the clustering task (Sec.~\ref{sec:risk_minimization}), in order to deduce an empirical estimate based on the source data (Sec.~\ref{sec:domain_discrepancy}). Finally, a focus is given to the variance of this estimate, to better understand how to improve HP selection by reducing it (Sec.~\ref{sec:variance}). For this, we first show that the variance can be reduced by reducing the domain discrepancy between the source and target in the feature similarity space (Sec.~\ref{sec:feature_sim}).
Then we give a theoretical analysis of the pairwise ratio, showing that, under reasonable assumptions, the source empirical risk can be used directly to do efficient HP selection (Sec.~\ref{sec:weight_ratio}). \subsection{Problem Formulation and Notations} \label{sec:notations} \subsubsection{Offline vs Online Cyclic HP tuning for clustering} \label{sec:cyclic} If we focus on the iterative pseudo-labeling paradigm, we can note that the learned feature representation changes during each training phase of an iterative cycle. Since the pseudo-labels are updated by clustering in this representation space, we intuitively expect the optimal clustering hyperparameter value to change when this representation changes (as will be shown empirically in Sec.~\ref{sec:cluster_quality}). Model selection is classically done via an evaluation criterion on the ``main'' task (in our case, re-ID as a retrieval task). Proceeding in this way necessarily implies fully training a model for each selected HP value, evaluating it (with {re-ID}{} metrics such as mAP) and repeating again and again. This would thus make the selection computationally expensive (a training time analysis is given in Sec.~\ref{sec:time}). To overcome this, our idea is to perform an online model selection directly at the clustering task level, at each iterative cycle.\\ \subsubsection{Modeling the clustering task} As introduced in Sec.~\ref{sec:theory}, the first step is to define the expected risk to be minimized w.r.t $\lambda$ for HP selection. This expected risk $\mathcal{R}_{\mathcal{L},p}$ (defined in \cite{vapnik1998statistical}) is defined in relation to the unknown distribution of data, characterized by the probability density $p$, and a cost function $\mathcal{L}$ which depends on our underlying task: a clustering task for our problem. A clustering is considered ``good'' when it generates pseudo-labels related to the ground-truth identity labels. Our idea is therefore to model this clustering task as a verification problem. For this, let us suppose that the {re-ID}{} data are i.i.d. and come from an unknown joint distribution given by the density $p(x, x', r)$ defined on $\chi \times \chi \times \{-1,1\}$, where $\chi \subseteq \mathbb{R}^{n_\chi}, n_{\chi} \in \mathbb{N}$, represents the set of images, with $r=1$ if $x$ and $x'$ have the same ID and $r=-1$ otherwise. Thus, the goal is to find a clustering function $C_{\lambda}$ which is expected to classify all the $m \in \mathbb{N}$ pairs of images in a set $X = ((x_i,x_i')_{1 \leq i \leq m})$ according to their respective ground-truth labels $R = (r_i)_{1 \leq i \leq m}$. We also assume that clusters are predicted from a measure of similarity between elements in the set. For the set $X$, the pairwise similarities are given by $S(X) = (s(x_i,x'_i))_{1 \leq i \leq m}$, where $s : \chi \times \chi \rightarrow \mathbb{R}$ is a given similarity function. Therefore, $C_{\lambda}$ is a $\mathbb{R}^{m} \rightarrow \{-1,1\}^{m}$ function.
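To make this pairwise verification view more concrete, the following minimal Python sketch (with toy data; all variable names are illustrative and not taken from our code) builds the pair labels $R$ and the pairwise similarities $S(X)$ from a small embedded set, with a simple similarity threshold standing in for $C_{\lambda}$:

\begin{verbatim}
# A toy illustration of the pairwise verification view of clustering.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))        # one embedding per image
features /= np.linalg.norm(features, axis=1, keepdims=True)
ids = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # ground-truth identities

# Enumerate pairs (x_i, x_i'): R holds r_i in {-1, +1},
# S holds the pairwise similarities s(x_i, x_i') (cosine here).
pairs = [(i, j) for i in range(len(ids)) for j in range(i + 1, len(ids))]
R = np.array([1 if ids[i] == ids[j] else -1 for i, j in pairs])
S = np.array([features[i] @ features[j] for i, j in pairs])

# A similarity threshold stands in for C_lambda (e.g. DBSCAN's eps):
# a pair is predicted "same ID" when its similarity exceeds lambda.
lam = 0.5
predictions = np.where(S >= lam, 1, -1)
pairwise_error = np.mean(predictions != R)  # empirical 0-1 clustering cost
print(f"pairwise error at lambda={lam}: {pairwise_error:.2f}")
\end{verbatim}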
\subsection{Similarity-Based Clustering Risk Minimization} \label{sec:risk_minimization} By definition, following the previous notations, the expected risk $\mathcal{R}_{\mathcal{L},p}$ for the clustering task can be seen as a function of $\lambda$: \begin{equation} \mathcal{R}_{\mathcal{L},p}(\lambda) \triangleq \int_{X,R}{\mathcal{L}\bigl(C_{\lambda}(S(X)),R\bigr)p(X,R)dXdR} \; , \label{eq:expected} \end{equation} where $p(X,R)$ is a joint probability density defined on $(\chi \times \chi)^m \times \{-1,1\}^{m}$.\\ The UDA setting for the clustering task does not involve only one distribution associated with its density $p$, but two distributions related to the source $\mathcal{S}$ and the target $\mathcal{T}$. Their joint probability densities are noted respectively $p^\mathcal{S}(X,R)$ and $p^\mathcal{T}(X,R)$. To perform source-based HP selection, we need to link the target expected risk $ \mathcal{R}_{\mathcal{L},p^\mathcal{T}}$ defined by Eq.~\ref{eq:expected} with $p^\mathcal{S}$. \subsection{Similarity Importance-Weighted Risk} \label{sec:domain_discrepancy} We consider the {re-ID}{} UDA context with the target and source distributions defined above. Our goal is to link the target expected risk (Eq.~\ref{eq:expected}) with $p^\mathcal{S}$. By developing the target expected risk, we have: \begin{equation} \Scale[0.9]{ \begin{aligned} \mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda) &= \int_{X,R}{\mathcal{L}(C_{\lambda}(S(X)),R)p^\mathcal{T}(X,R)}dXdR\\ &= \int_{X,R}{\frac{p^\mathcal{T}(X,R)}{p^\mathcal{S}(X,R)}\mathcal{L}(C_{\lambda}(S(X)),R)p^\mathcal{S}(X,R)dXdR}\\ &= \int_{X,R}{w(X,R)\mathcal{L}(C_{\lambda}(S(X)),R)p^\mathcal{S}(X,R)}dXdR \; , \end{aligned} } \end{equation} where the pairwise weight ratio $w$ is defined as: \begin{equation} w(X,R) \triangleq \frac{p^\mathcal{T}(X,R)}{p^\mathcal{S}(X,R)}. \end{equation} Then we can define the pairwise weighted risk as: \begin{equation} \Scale[0.95]{ \begin{aligned} \mathcal{R}_{\mathcal{L},w}(\lambda) {} & \triangleq \int_{X,R}{w(X,R)\mathcal{L}(C_{\lambda}(S(X)),R)p^\mathcal{S}(X,R)}dXdR. \\ \end{aligned} } \label{eq:expected_w} \end{equation} From Eq.~\ref{eq:expected_w}, we can deduce the associated pairwise weighted empirical risk, which is an unbiased estimator of $\mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)$ with finite source samples: \begin{equation} \mathcal{\hat{R}}_{\mathcal{L},w}(\lambda) = \frac{1}{N} \sum_{i=1}^{N}{w(X_i,R_i)\mathcal{L}(C_{\lambda}(S(X_i)),R_i)} , \label{eq:empirical_w} \end{equation} where $\{(X_i,R_i)\}_{1 \leq i \leq N}, N \in \mathbb{N^*}$ are samples from $p^\mathcal{S}(X,R)$. \subsection{Variance of the estimator} \label{sec:variance} Even if the estimator given by Eq.~\ref{eq:empirical_w} is unbiased, a high variance can add noise to HP selection with source samples.
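For illustration only, the estimator of Eq.~\ref{eq:empirical_w} amounts to a weighted mean. The following NumPy sketch assumes the weights $w$ are known, which is precisely what does not hold in practice:

\begin{verbatim}
# Sketch of the pairwise weighted empirical risk (Eq. empirical_w),
# under the (unrealistic) assumption that the weights w_i are known.
import numpy as np

def weighted_empirical_risk(losses, weights):
    # losses[i]  = L(C_lambda(S(X_i)), R_i) on source samples
    # weights[i] = w(X_i, R_i) = p_T / p_S evaluated at (X_i, R_i)
    return np.mean(weights * losses)

rng = np.random.default_rng(0)
losses = rng.uniform(size=1000)                # toy per-sample costs
weights = rng.lognormal(sigma=1.0, size=1000)  # heavy-tailed ratios

print("estimate:", weighted_empirical_risk(losses, weights))
# Heavier-tailed weights (i.e. a larger divergence between domains)
# make this estimate noisier across resamplings -- the variance issue
# analyzed below.
\end{verbatim}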
Before giving an expression of the estimator's variance, we define the exponential in base 2 of the R\'{e}nyi divergence (called R\'{e}nyi divergence in the rest of the paper for simplicity) of order $\alpha \geq 0$, $\alpha \neq 1$ between the source and target distributions described by the densities $p^\mathcal{S}$ and $p^\mathcal{T}$ as: \begin{equation} \label{eq:renyi} \begin{aligned} d_{\alpha}(p^\mathcal{T}||p^\mathcal{S}) &\triangleq \left( \int_{X,R}{\frac{p^\mathcal{T}(X,R)^{\alpha}}{p^\mathcal{S}(X,R)^{\alpha-1}}}dXdR \right) ^{\frac{1}{\alpha-1}}\\ &= \left( \int_{X,R}{w(X,R)^{\alpha}p^\mathcal{S}(X,R)dXdR} \right) ^{\frac{1}{\alpha-1}}\\ &= \left( \mathop{\mathbb{E}}_{(X,R) \sim p^\mathcal{S}}[ w(X,R)^{\alpha} ]\right) ^{\frac{1}{\alpha-1}}. \end{aligned} \end{equation} Let $Y$ be $Y=w(X,R)\mathcal{L}(C_{\lambda}(S(X)),R)$ for $(X,R) \sim p^\mathcal{S}(X,R)$. Using Lemma 2 from Cortes {\it et al.} \cite{cortes2010learning} and the definition of $\hat{\mathcal{R}}_{\mathcal{L},w}$ (Eq.~\ref{eq:empirical_w}), we can get a bound on the variance of $\mathcal{\hat{R}}_{\mathcal{L},w}(\lambda)$: \begin{equation} \label{eq:variance} \Scale[0.9]{ \begin{aligned} Var(Y) &= \mathop{\mathbb{E}}_{(X,R) \sim p^\mathcal{S}}[ Y^2 ]-\mathop{\mathbb{E}}_{(X,R) \sim p^\mathcal{S}}[ Y ]^2\\ &\leq d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})\mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)^{1-\frac{1}{\alpha}} - \mathop{\mathbb{E}}_{(X,R) \sim p^\mathcal{S}}[ Y ]^2 \\ &\leq d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})\mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)^{1-\frac{1}{\alpha}}- \mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)^2\\ Var(\hat{\mathcal{R}}_{\mathcal{L},w}) &\leq \frac{d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})\mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)^{1-\frac{1}{\alpha}}- \mathcal{R}_{\mathcal{L},p^\mathcal{T}}(\lambda)^2}{N} \; . \end{aligned} } \end{equation} This bound on the empirical risk variance confirms the intuition that the more source (validation) samples we have, the lower the variance. In practice, however, the amount of labeled source samples is limited, so we cannot act on this constant to improve our estimation. This bound also shows that the greater $d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})$, the greater the variance of the estimator. In order to control this variance, and therefore improve the use of the pairwise weighted empirical risk estimator for model selection, it is necessary to control $d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})$, which measures the domain discrepancy between $p^\mathcal{T}$ and $p^\mathcal{S}$ according to the R\'{e}nyi divergence. Moreover, reducing this divergence should make the estimation less sensitive to the number of source validation samples, according to Eq.~\ref{eq:variance}. \subsection{Addressing the variance and weight ratio} \begin{figure*}[t!] \centering \includegraphics[width=2\columnwidth]{figs/framework.png} \caption{Our HyperParameters Automated by Source \& Similarities ({HyPASS}{}) cyclically integrated in iterations of a classical pseudo-labeling framework.} \label{fig:framework} \end{figure*} \subsubsection{Using feature similarity} \label{sec:feature_sim} The input space (images) is high-dimensional.
Therefore, $d_{\alpha+1}(p^\mathcal{T}||p^\mathcal{S})$ (and thus the variance of the estimator bounded by Eq.~\ref{eq:variance}) is likely to be greater than the divergence between the probability distributions in a lower-dimensional feature space (as stated in Sec. 4.2 of \cite{you2019towards}). Indeed, the pairwise weight ratio is more likely to grow to infinity, since $p^\mathcal{S}$ is more likely to vanish where $p^\mathcal{T} \neq 0$. Moreover, a feature space induced by a learnable feature encoder could allow us to reduce the divergence by penalizing it during the learning phase.\\ Usually in {re-ID}{}, a feature space is learned so that a given similarity function used in this space can measure ID-relatedness between images. Therefore, we introduce a feature encoder $E : \chi \rightarrow \mathbb{R}^{n_E}, n_E \in \mathbb{N}$ and redefine $s : \mathbb{R}^{n_E} \times \mathbb{R}^{n_E} \rightarrow \mathbb{R}$. We also define $S_E$, the feature similarity function with respect to $E$, such that $S_E(X) =(s(E(x_i),E(x'_i)))_{1 \leq i \leq m}$. Thus, $S_E$ projects the set of images $X$ into a new set $S \in \mathbb{R}^{m}$, in a space we call the feature similarity space. Let $p^\mathcal{S}_{S_E}(S,R)$ (resp. $p^\mathcal{T}_{S_E}(S,R)$) be the feature similarity distribution densities of $\mathcal{S}$ (resp. $\mathcal{T}$) induced by $S_E$ and defined on $\mathbb{R}^{m} \times \{-1,1\}^m$. We consider this space as our new input space for computing the risks, and therefore, if we note \begin{equation} w_{S_E}(S,R) = \frac{p^\mathcal{T}_{S_E}(S,R)}{p^\mathcal{S}_{S_E}(S,R)}, \end{equation} with analogous definitions and notations, we deduce the pairwise similarity weighted empirical risk $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}$: \begin{equation} \mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}(\lambda) = \frac{1}{N} \sum_{i=1}^{N}{w_{S_E}(S_i,R_i)\mathcal{L}(C_{\lambda}(S_i),R_i)} , \label{eq:empirical_w_s} \end{equation} where $\{(S_i,R_i)\}_{1 \leq i \leq N}, N \in \mathbb{N^*}$ are samples from $p^\mathcal{S}_{S_E}$.\\ In practice, we have direct access to sets of pairwise image samples $\{(X_i,R_i)\}_{1 \leq i \leq N}$ defined above, and we use $S_E$ to get $\{(S_i,R_i)\} = \{(S_E(X_i),R_i)\}$.\\ According to Eq.~\ref{eq:expected_w}, $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}$ is an unbiased estimator of the expected target risk $\mathcal{R}_{\mathcal{L},p^\mathcal{T}_{S_E}}$, which we can use to do HP selection of $\lambda$ with source labeled samples. We expect this new estimator to be better for HP selection. Indeed, we expect it to have a lower variance than $\hat{\mathcal{R}}_{\mathcal{L},w}$, due to the lower domain discrepancy in this learnable low-dimensional feature space (as stated in Sec. 4.2 of \cite{you2019towards}): \begin{equation} Var(\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}) \leq Var(\hat{\mathcal{R}}_{\mathcal{L},w}). \end{equation} \newline \newline In addition, the pairwise data samples being i.i.d. (see Sec.~\ref{sec:notations}), the pairwise similarities are i.i.d. too, and therefore the densities in the feature similarity space can be written as: \begin{center} \label{eq:cond_densities} $$ \left\{ \begin{array}{ll} p^\mathcal{S}_{S_E}(S,R) = p^\mathcal{S}(R) \Pi_{i=1}^{m} p^\mathcal{S}_{S_E}(S_i|R_i) \vspace{5pt} \\ p^\mathcal{T}_{S_E}(S,R) = p^\mathcal{T}(R) \Pi_{i=1}^{m} p^\mathcal{T}_{S_E}(S_i|R_i) \; . \end{array} \right.
$$ \end{center} Since $ p^\mathcal{S}(R)$ and $ p^\mathcal{T}(R)$ are fixed by the domain distributions and are independent of $E$, we assume that $E$ can be learned to penalize the conditional domain discrepancy (i.e., the divergence between the conditional distributions) in the feature similarity space, in order to improve HP selection with our estimator $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}$. \subsubsection{Computing the pairwise weight ratio} \label{sec:weight_ratio} To sum up, our goal is to do HP selection of $\lambda$ by minimizing $\mathcal{R}_{\mathcal{L},p^\mathcal{T}}$ (Eq.~\ref{eq:expected}) w.r.t $\lambda$. For this, we established the expression of the pairwise weighted empirical risk estimator $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}$ with source samples (Eq.~\ref{eq:empirical_w_s}). This estimator is improved by learning $E$ to penalize the conditional domain discrepancy in the feature similarity space. Using $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}$ requires computing $w_{S_E}$. As mentioned in Sec.~\ref{sec:hp_classif}, unlike importance-weighted risk estimation approaches for UDA classification, we do not wish to estimate the pairwise weight ratio in a pseudo-labeling framework: this would require estimating the probability densities defining this ratio at each new pseudo-labeling step, which would be computationally expensive. Moreover, the quotient of estimated probabilities in the ratio could amplify approximation errors and therefore add noise to the risk estimate. To avoid computing the pairwise weight ratio, it would be desirable to be able to do HP selection using the source empirical risk $\mathcal{\hat{R}}_{\mathcal{L},p^\mathcal{S}_{S_E}}(\lambda)$.\\ To do relevant HP selection using $\mathcal{\hat{R}}_{\mathcal{L},p^\mathcal{S}_{S_E}}(\lambda)$ instead of $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}(\lambda)$, it is therefore necessary that $\mathop{\mathrm{argmin}}_{\lambda} \mathcal{\hat{R}}_{\mathcal{L},p^\mathcal{S}_{S_E}}(\lambda) \approx \mathop{\mathrm{argmin}}_{\lambda} \mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}(\lambda)$. In other words, this ensures that selecting the best $\lambda$ with $\mathcal{\hat{R}}_{\mathcal{L},p^\mathcal{S}_{S_E}}(\lambda)$ is the same as selecting the best $\lambda$ with $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}(\lambda)$.\\ Given the expression of $\mathcal{\hat{R}}_{\mathcal{L},w_{S_E}}(\lambda)$ (Eq.~\ref{eq:empirical_w_s}), a direct sufficient condition to ensure this is that: \begin{equation} \label{eq:condition} \forall 1 \leq i \leq N, \; w_{S_E}(S_i,R_i) = c, \; c \in \mathbb{R}_{+}. \end{equation} In practice, Eq.~\ref{eq:condition} can be satisfied by using the whole source validation set as a unique pair $(S,R)$ to do HP selection. This will be part of our framework design choices, as discussed later in Sec.~\ref{sec:ari}, in what we call the One-clustering evaluation. To summarize, these theoretical considerations show that, to select the HP $\lambda$ from the source examples, it is sufficient to minimize the source empirical risk, provided that we satisfy the condition of Eq.~\ref{eq:condition} and that we minimize the conditional domain discrepancy in the feature similarity space w.r.t $E$. \section{Source-Guided Selection of Pseudo-Labeling Hyperparameters and Similarity Alignment} \label{sec:algo_practice} We wish to apply the theory discussed above and integrate it into a pseudo-labeling algorithm.
For this purpose, we propose a novel method integrated into the classical iterative pseudo-labeling paradigm \cite{song2020unsupervised}: HyperParameters Automated by Source \& Similarities (HyPASS). Fig.~\ref{fig:framework} gives an overview of the resulting method. HyPASS consists in integrating a new clustering HP selection phase (AUTO HP-TUNING) based on a source validation set before each clustering update, and in optimizing the model to minimize the conditional feature similarity domain discrepancy $L^{cond}_{align}$. In this part, we give more details about these two major novelties. \subsection{Automatic Clustering HP Tuning} \label{sec:ari} Our method proposes a new step of automatic selection of the clustering HP $\lambda$. This selection is cyclic because it takes place at each cycle, before the update of the pseudo-labels, in order to adapt the selected HP to the representation learned by $E$.\\ \paragraph*{One-clustering evaluation} We suppose we have access to a separate labeled source validation set $D^\mathcal{S}_{val}$ of $N^\mathcal{S}_{val}$ samples. We also assume that the HP search is restricted to a finite-size set $\Lambda \subset \mathbb{R}^{n_{\lambda}}$. Given a clustering criterion $\mathcal{L}$ and an HP value $\lambda$ to evaluate, the HP tuning phase uses the source empirical risk with samples from $D^\mathcal{S}_{val}$. Remember that, to satisfy the condition of Eq.~\ref{eq:condition} for using the source empirical risk, we should use the whole set of validation samples in a single one-clustering evaluation of the associated risk. Moreover, it can be very computationally expensive to run multiple clusterings to evaluate a single HP value, and $N^\mathcal{S}_{val}$ can be `too small' to split $D^\mathcal{S}_{val}$ into different subsets for clustering. Therefore, we decide to perform only one clustering on the full set $D^\mathcal{S}_{val}$ to evaluate one value of $\lambda$ with the source empirical risk. At the end of this step, we keep the value $\lambda^*$ that gives the lowest empirical risk value. \subsection{Learning with conditional domain alignment of feature similarities} \subsubsection{Learning features for {re-ID}} From the pseudo-labels, the model is trained to minimize a loss function $L_{ID}^{\mathcal{T}}$ in order to learn an ID-discriminative feature representation on the target domain. This loss function can be, for example, the cross-entropy loss, the triplet loss, a contrastive loss, or a sum of several of these terms. Besides, we also wish this representation to be ID-discriminative on the source domain, by optimizing a loss function $L_{ID}^{\mathcal{S}}$ with the labeled source samples. Intuitively, we motivate this choice by the need not to degrade the discriminativeness of the representation on the target domain while optimizing the feature similarity alignment between source and target. \subsubsection{Domain Discrepancy} Reducing the domain discrepancy in the conditional feature similarity space is a key aspect to reduce the variance when using the source empirical estimation (as shown in Sec.~\ref{sec:weight_ratio}).
Given a differentiable domain alignment criterion $L_{align}$ (e.g., the Maximum Mean Discrepancy (MMD) \cite{saito2018maximum}), we optimize the domain alignment in the conditional feature similarity space given by the formula: \begin{equation} \label{eq:mmd_loss} L^{cond}_{align} = L_{align}(S^\mathcal{S}_{+},S^\mathcal{T}_{+}) + L_{align}(S^\mathcal{S}_{-},S^\mathcal{T}_{-}) \; , \end{equation} where $S^\mathcal{S}_{+}$, $S^\mathcal{T}_{+}$, $S^\mathcal{S}_{-}$ and $S^\mathcal{T}_{-}$ are the similarities between features of, respectively, positive pairs of the source, positive pairs of the target, negative pairs of the source and negative pairs of the target in the feature similarity space. Minimizing this term aligns the intra-cluster similarity distributions, but also the inter-cluster similarity distributions, between domains. \subsubsection{Global criterion} The total loss $L_{total}$ is given by: \begin{equation} \label{eq:total_loss} L_{total} = L_{ID}^\mathcal{T} + L_{ID}^\mathcal{S} + L^{cond}_{align}. \end{equation} Note that we choose not to weight the different loss terms in $L_{total}$, in order not to introduce new additional HP in the UDA context. Indeed, the experiments in Sec.~\ref{sec:experiments} will show that this loss choice already yields performance improvements with {HyPASS}{} on various UDA benchmarks. \subsection{General pseudo-code of {HyPASS}{}} We propose in Algo.~\ref{algo:general} a pseudo-code for training a pseudo-labeling re-ID UDA framework with {HyPASS}{}. The automatic hyperparameter tuning from source data (AUTO HP-TUNING) called by Algo.~\ref{algo:general} is detailed in Algo.~\ref{algo:hp}.\\ Algo.~\ref{algo:general} describes the whole {HyPASS}{} training paradigm. A model is first initialized (INITIALIZATION) to predict the first pseudo-labels for the target training set. The algorithm then cycles through a FEATURE EXTRACTION phase with the current model on the source validation set and the target training set. During the AUTO HP-TUNING phase, a value $\lambda^*$ is automatically selected by maximizing a clustering quality criterion. This HP value is then used to pseudo-label/cluster the target training features during the PSEUDO-LABELING phase. Finally, the model is fine-tuned with the source training set and the pseudo-labeled target training set using the {HyPASS}{} loss function (see Eq.~\ref{eq:total_loss}). Algo.~\ref{algo:hp} further details the AUTO HP-TUNING phase, in which the algorithm iterates through different HP values, proposed by an HP search strategy, which are used to pseudo-label the source validation set and to compute, with the source labels, a clustering quality metric to be maximized.
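As a complement to the pseudo-code below, the following minimal Python sketch illustrates the AUTO HP-TUNING step, assuming scikit-learn's DBSCAN as $C_{\lambda}$ and the Adjusted Rand Index as the quality metric $\mathcal{L}$; for brevity, the Bayesian search used in our implementation (see Sec.~\ref{sec:experiments}) is replaced by a plain candidate list:

\begin{verbatim}
# Sketch of AUTO HP-TUNING with DBSCAN and ARI (illustrative names).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

def auto_hp_tuning(F_val, y_val, candidates):
    # One-clustering evaluation: each candidate eps is scored by a
    # single clustering of the whole source validation set.
    best_eps, best_score = None, -np.inf
    for eps in candidates:
        pseudo = DBSCAN(eps=eps, min_samples=4).fit_predict(F_val)
        score = adjusted_rand_score(y_val, pseudo)
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps, best_score

# Toy usage with random normalized features and identity labels.
rng = np.random.default_rng(0)
F_val = rng.normal(size=(200, 32))
F_val /= np.linalg.norm(F_val, axis=1, keepdims=True)
y_val = rng.integers(0, 20, size=200)
eps_star, ari = auto_hp_tuning(F_val, y_val,
                               candidates=np.linspace(0.1, 1.9, 10))
print(f"selected eps={eps_star:.2f} (ARI={ari:.3f})")
\end{verbatim}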
\begin{algorithm}[h] \caption{HyperParameters Automated by Source \& Similarities ({HyPASS}{})} \begin{algorithmic} \REQUIRE{Labeled source training set $D^\mathcal{S}$} \REQUIRE{Labeled source validation set $D^\mathcal{S}_{val}$: $D^\mathcal{S}_{val} \cap D^\mathcal{S} = \varnothing$} \REQUIRE{Unlabeled target data $D^\mathcal{T}$} \REQUIRE{Clustering/Pseudo-labeling function $C_{\lambda}$ with HP $\lambda$} \REQUIRE{HP list $\Lambda$} \REQUIRE{Clustering/Pseudo-Labeling quality metric $\mathcal{L}$ (to maximize)} \REQUIRE{Loss Functions for Training: $L_{ID}^\mathcal{S}$, $L_{ID}^\mathcal{T}$, $L_{align}$} \REQUIRE{Number of training epochs $N_{epoch}$} \REQUIRE{Feature encoder $E$} \STATE \textbf{INITIALIZATION:} \STATE Compute $S^\mathcal{S}$, $S^\mathcal{T}$ the sets of feature similarities for all pairs of images in $D^\mathcal{S}$ and $D^\mathcal{T}$, respectively. \STATE Train $E$ to minimize $L_{init} \gets L_{ID}^\mathcal{S} + L_{align}(S^\mathcal{S},S^\mathcal{T})$. \STATE \textbf{PSEUDO-LABELING TRAINING:} \FOR{$t=1$ to $N_{epoch}$} \STATE \textbf{FEATURE EXTRACTION:} Compute target training features $F^\mathcal{T}$ and source validation features $F^\mathcal{S}_{val}$ from $D^\mathcal{T}$ and $D^\mathcal{S}_{val}$. \STATE \textbf{AUTO HP-TUNING:} Find $\lambda^*$ that maximizes $\mathcal{L}$ with pseudo-labeling of $F^\mathcal{S}_{val}$ by $C_{\lambda}$ and $D^\mathcal{S}_{val}$ ground-truth labels. \STATE \textbf{PSEUDO-LABELING:} Pseudo-label some/all target samples by $C_{\lambda^*}$ with $F^\mathcal{T}$. \STATE \textbf{TRAINING:} \STATE Compute $S_{+}^\mathcal{S}$/$S_{-}^\mathcal{S}$, $S_{+}^\mathcal{T}$/$S_{-}^\mathcal{T}$ the positive/negative sets of feature similarities in $D^\mathcal{S}$ and $D^\mathcal{T}$, respectively. \STATE Train $E$ to minimize $L_{total} \gets L_{ID}^\mathcal{T} + L_{ID}^\mathcal{S} + L_{align}(S_{+}^\mathcal{S},S_{+}^\mathcal{T}) + L_{align}(S_{-}^\mathcal{S},S_{-}^\mathcal{T})$ with $D^\mathcal{S}$ and pseudo-labeled $D^\mathcal{T}$. \ENDFOR \STATE Return $E$ \end{algorithmic} \label{algo:general} \end{algorithm} \begin{algorithm}[h!] \caption{AUTO HP-TUNING} \begin{algorithmic} \REQUIRE{Number of HP values to validate $N_{search}$} \REQUIRE{Hyperparameter (HP) search function $search\_next()$} \REQUIRE{Source validation set features $F^\mathcal{S}_{val}$ and labels $Y^\mathcal{S}_{val}$} \REQUIRE{Pseudo-labeling function $C_{\lambda}$} \REQUIRE{Pseudo-labeling quality metric $\mathcal{L}$} \STATE Initialize best HP value $\lambda^*$ \STATE Initialize best metric value $L^* \gets - \infty$ \FOR{$t=1$ to $N_{search}$} \STATE $\lambda \gets search\_next() $ \STATE Get pseudo-labels $\hat{Y}^\mathcal{S}_{val}$ by clustering $F^\mathcal{S}_{val}$ with $C_{\lambda}$ \STATE Compute $L \gets \mathcal{L}(\hat{Y}^\mathcal{S}_{val},Y^\mathcal{S}_{val})$ \IF{$ L \geq L^*$} \STATE{$\lambda^* \gets \lambda$} \STATE{$ L^* \gets L$} \ENDIF \ENDFOR \STATE Return $\lambda^*$ \end{algorithmic} \label{algo:hp} \end{algorithm} \newpage \section{Experiments} \label{sec:experiments} \subsection{Datasets and Protocol} \label{sec:dataset} \begin{table}[b!]
\caption{Dataset composition} \label{table:dataset} \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|cc|ccc|c|c|} \hline Dataset & \specialcell{train \\IDs} & \specialcell{train \\images} & \specialcell{test\\ IDs} & \specialcell{gallery \\images} & \specialcell{query\\ images} & \specialcell{~ query images \\ per ID} & \specialcell{~ train images \\ per ID} \\ \hline Market \cite{zheng2015scalable} & 751 & 12,936 & 750 & 16,364 & 3,368 & 4 & 17 \\ Duke \cite{ristani2016performance} & 702 & 16,522 & 702 & 16,364 & 2,228 & 3 & 24 \\ PersonX \cite{sun2019dissecting} & 410 & 9,840 & 856 & 17,661 & 30,816 & 36 & 24 \\ MSMT \cite{wei2018person} & 1,041 & 32,621 & 3,060 & 82,161 & 11,659 & 4 & 31 \\ \hline Vehicle-ID \cite{liu2016deep} & 13,164 & 113,346 & 800 & 7,332 & 6,532 & 8 & 9 \\ Veri \cite{liu2016deep2} & 575 & 37,746 & 200 & 49,325 & 1,678 & 8 & 66 \\ VehicleX \cite{naphade20204th} & 1,362 & 192,150 & N.A. & N.A. & N.A. & 4 & 141 \\ \hline \end{tabular}} \end{table} \subsubsection{Datasets} We study {HyPASS}{} on different {re-ID}{} adaptation tasks: person {re-ID}{} and vehicle {re-ID}. \textit{Person {re-ID}} is evaluated on the large {re-ID}{} dataset MSMT17 \cite{wei2018person} (\textit{MSMT}): used as the target domain, it offers a challenging adaptation task due to the large number of images and identities in its gallery (cf. dataset statistics in Tab.~\ref{table:dataset}). We also use Market-1501 \cite{zheng2015scalable} (\textit{Market}) as the target domain, with the synthetic dataset PersonX as the source domain. \textit{PersonX} \cite{sun2019dissecting} is composed of synthetic images generated with Unity, with different types of person appearances, camera views and occlusions. We also report the classical benchmarks between Market and DukeMTMC-reID \cite{ristani2016performance} (\textit{Duke}). \textit{Vehicle {re-ID}{}} is less commonly used than person {re-ID}{} for UDA {re-ID}{} benchmarking. However, we find it interesting to test our module on a different kind of object of interest and on a potentially different domain discrepancy. For this task, we use the \textit{Vehicle-ID} \cite{liu2016deep} and \textit{Veri-776} \cite{liu2016deep2} (\textit{Veri}) datasets as source or target domains, and the synthetic vehicle dataset \textit{VehicleX} \cite{naphade20204th} as the source domain. \subsubsection{Protocol} The feature encoder $E$ is evaluated on the target test set. When available, we use the source query set as the source validation set $D^\mathcal{S}_{val}$, since it is never used elsewhere during training and no official validation set has been built for these benchmarks. As no test samples are available for VehicleX, we randomly remove 5,000 images from the training set to build the validation set. We report the Mean Average Precision (mAP) and rank-1 (top-1) accuracy, in percent, on the target test set after UDA training.\\ \subsubsection{Remarks} In the different protocols, the source validation sets vary widely in size (number of images) and are distinct from the target training set in terms of number of IDs and number of samples per ID. According to our theoretical insights in Sec.~\ref{sec:weight_ratio}, we do not expect these statistical differences to hinder a good selection of $\lambda$. This is confirmed by the experiments; see Sec.~\ref{sec:validation_exp} for further discussion and experiments about this point and the choice of the validation set.
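As a side note, the VehicleX validation split described above is straightforward to reproduce; the following sketch assumes the training set is given as a list of \texttt{(path, id)} tuples (names are illustrative):

\begin{verbatim}
# Sketch of the held-out validation split used for VehicleX
# (5,000 images randomly removed from the training set).
import random

def split_validation(train_images, n_val=5000, seed=0):
    rng = random.Random(seed)
    shuffled = list(train_images)   # copy; the original list is untouched
    rng.shuffle(shuffled)
    return shuffled[n_val:], shuffled[:n_val]  # (train set, validation set)

# Usage: train_set, val_set = split_validation(vehiclex_train)
\end{verbatim}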
\subsection{Implementation Choices and Details} \subsubsection{Implementation Choices} \paragraph{Frameworks} In order to show its effectiveness, we integrate {HyPASS}{} within 3 state-of-the-art methods: UDAP \cite{song2020unsupervised}, MMT \cite{ge2020mutual} and SpCL \cite{ge2020self}. UDAP is a classical pseudo-labeling method, while MMT and SpCL, which manage noise in pseudo-labels, are the best approaches for UDA {re-ID}{}. We focus our experiments on these three frameworks for three main reasons: they are renowned re-ID approaches, supplied with code for reproducibility, and they offer the best UDA re-ID performance on different adaptation tasks (for SpCL in particular).\\ \paragraph{Clustering algorithm} We focus our experiments on DBSCAN \cite{ester1996density} clustering for two reasons: it is the most widespread in the state of the art, and it is used by the best approaches (cf. Sec.~\ref{sec:related}). Thus, our experiments focus on the selection of the $\epsilon$ HP, which is critical for performance (cf. Sec.~\ref{sec:intro}). However, experiments are also conducted with other clustering algorithms (k-means, Agglomerative Clustering \cite{beeferman2000agglomerative}, HDBSCAN \cite{McInnes2017}) to show the genericity of {HyPASS}{} (cf. Sec.~\ref{sec:ablative}). The main implementation choices are summarized in Tab.~\ref{table:implementation}. \begin{table}[t!] \caption{Main implementation choices for experiments.} \label{table:implementation} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|cl} Theory & Implementation choices \\ \cline{1-2} $\lambda$ & Maximum Neighborhood Distance $\epsilon$ \\ $\Lambda$ & Bayesian Search \cite{gonzalez2016gpyopt} with $\epsilon \in [0,2] $ \\ $C_{\lambda}$ & DBSCAN \cite{ester1996density} \\ $\mathcal{L}$ & Adjusted Rand Index (ARI) \cite{rand1971objective} \\ $E$ & ResNet-50 \cite{he2016deep} initialized on ImageNet \cite{deng2009imagenet} \\ $L_{align}$ & Maximum Mean Discrepancy (MMD) \cite{saito2018maximum} \\ $s$ & based on $L^2$ distance with normalized features \\ $L^{\mathcal{S}}_{ID}, L^{\mathcal{T}}_{ID}$ & \specialcell{Cross-Entropy \& Triplet Losses (UDAP \cite{song2020unsupervised} \& MMT \cite{ge2020mutual})\\ Contrastive Loss (SpCL \cite{ge2020self})} \end{tabular} } \end{table} \paragraph{Empirical setting comparison} Pseudo-labeling state-of-the-art approaches use empirical values to set the HP $\epsilon$ in DBSCAN. The empirical setting strategy supposes that, in addition to a labeled source dataset, we have access to the labels of a part of a calibration target dataset. Therefore, it becomes possible to evaluate the re-ID performance on this cross-dataset adaptation task for different values of $\epsilon$. Then, the $\epsilon$ associated with the best mAP is selected and reused for other cross-dataset adaptation tasks with another (unlabeled) target dataset.\\ We can choose PersonX as the source dataset. Indeed, PersonX being a synthetic dataset, it is free to label and does not raise any privacy concerns about real people's identities. For the sake of a robust empirical setting, we suppose that we have access to the test set of MSMT, the biggest and most challenging person re-ID dataset. We train different models with the best SOTA method, SpCL, for different values of $\epsilon$ ($\epsilon = 0.3, 0.4, 0.5, 0.6, 0.7$; see Fig.~\ref{fig:sensibility}) on the cross-dataset adaptation task PersonX$\rightarrow$MSMT. The mAP of each model is computed on the MSMT test set, and the $\epsilon$ associated with the best mAP is kept.
After these experiments, as shown in Fig.~\ref{fig:sensibility}, we obtain $\epsilon = 0.6$. This value is therefore reused for other cross-dataset adaptation tasks, with other target domains, such as PersonX$\rightarrow$Market. In Sec.~\ref{sec:ablative}, we compare {HyPASS}{} to this empirical setting strategy (i.e., re-using $\epsilon = 0.6$). Sec.~\ref{sec:effectiveness} gives extensive results for more cross-dataset experiments comparing this empirical strategy with {HyPASS}{}. \paragraph{HDBSCAN comparison} HDBSCAN is a hierarchical clustering version of DBSCAN that automatically selects a parameter like $\epsilon$, according to an unsupervised criterion of cluster stability in the hierarchy \cite{McInnes2017}. It therefore seems a reasonable alternative to DBSCAN with empirical setting, since it has an unsupervised heuristic to automatically select an $\epsilon$ value. Indeed, we can see HDBSCAN as an automatic HP tuning of $\epsilon$, and it is therefore relevant to compare {HyPASS}{} (with DBSCAN) to HDBSCAN on different state-of-the-art methods. The comparison is done in Sec.~\ref{sec:effectiveness}. HDBSCAN still needs a value for $n_{min}$, controlling the minimum number of samples per cluster, which is set to 10 in our experiments since it gives the best results on different cross-dataset benchmarks in other state-of-the-art work \cite{zhang2019self}. \subsubsection{Implementation Details} Our framework is implemented in PyTorch \cite{paszke2019pytorch}. We use 4 NVIDIA TITAN RTX GPUs (24 GB each) for our experiments.\\ \paragraph{Data preprocessing} \label{sec:preproc} We build two mini-batches: one of size 64 for source images and another of the same size for target ones. Each batch is made of $P=16$ identities and $K=4$ instances per identity (sampled randomly at the initialization phase for the target, due to the lack of labels). Images are resized to $256 \times 128$ for person images, as in \cite{zheng2015scalable, ristani2016performance, wei2018person}, and to $224 \times 224$ for vehicle ones, as in \cite{liu2016deep,liu2016deep2}. We randomly flip and crop images, but we do not use random erasing augmentation during the initialization phase since it has been shown to be harmful for direct transfer \cite{luo2019bag}. \\ \paragraph{Feature encoder/Network} For state-of-the-art comparison, we use a ResNet-50 \cite{he2016deep} pretrained on ImageNet \cite{deng2009imagenet} as our backbone. The last stride of ResNet-50 is set to 1 to obtain a higher-resolution feature map. After the global average pooling layer, we add a BatchNorm layer and then the classification layer(s), which are initialized with the Kaiming initialization \cite{he2016deep}. At test time, we use the normalized 2048-dimensional pre-classification features with the squared Euclidean distance to compute the ranking lists.\\ \paragraph{Domain Alignment} For $L_{align}$, we use the MMD PyTorch implementation of the D-MMD paper \cite{mekhazni2020unsupervised} with the Gaussian kernel\footnote{https://github.com/djidje/D-MMD}. The features are normalized before computing the (conditional) pairwise feature similarities.\\ \paragraph{Initial phase} The network is trained for 60 epochs. The learning rate is set to $3.5 \times 10^{-4}$ and is decayed by a factor of 10 every 20 epochs. Since we do not yet have pseudo-labels for the target data, the classical cross-entropy loss and triplet loss are optimized on the source samples only, jointly with $L_{align}$ on the source and unlabeled target samples.\\ \paragraph{HP tuning} We perform HP search with Bayesian optimization.
We choose Bayesian optimization since it is a powerful HP search approach that is able to look for relevant HP values ($\Lambda$) according to an updated belief \cite{gonzalez2016gpyopt}. We use the GPyOpt library\footnote{https://sheffieldml.github.io/GPyOpt/} based on Gaussian processes. We use the default Bayesian optimizer parameters, with basic Gaussian processes as the modeling function and Expected Improvement (EI) as the acquisition type. The search range for $\epsilon$ is set to $[0,2]$ (this is the whole range of variation for $\epsilon$, since the features are normalized and thus lie on the unit hypersphere). For the k-means variant, $k$ is searched in the full range $[1, \text{number of target training samples}]$. At each AUTO HP-TUNING step, we evaluate $N_{HP} = 50$ hyperparameter values proposed by the Bayesian search. With this setting, the initial value can be sampled randomly, since it has no influence on performance, as shown later in Sec.~\ref{sec:variants}.\\ The Adjusted Rand Index (ARI) \cite{hubert1985comparing} is computed between the source validation set ground-truth labels and the cluster predictions, using the scikit-learn implementation\footnote{https://scikit-learn.org/}.\\ \paragraph{Pseudo-labeling training phase} Implementation details for this step are framework-specific. We append the symbol ``*'' to the name of a framework to indicate that it corresponds to our version (including {HyPASS}{} and allowing easier experimental comparisons) based on the original framework. We give the specific implementation details below. If not specified, we make the same choices (optimizer, number of epochs, ...) as given in the respective papers. \subsubsection{Framework-specific details} \paragraph{UDAP*} We build our code from the UDAP \cite{song2020unsupervised} implementation publicly available on the official UDAP GitHub\footnote{https://github.com/LcDog/DomainAdaptiveReID}. For UDAP, we use an initialization phase before the pseudo-labeling UDA learning. DBSCAN is run on k-reciprocal encoded features with $k=30$, whereas the k-means version directly uses the features, as in the original paper. The minimum number of samples $n_{min}$ per cluster is set to 4 (as in the paper \cite{song2020unsupervised}). Compared to the UDAP paper, we use only one 2048-dimensional feature space with the triplet loss, and add a cross-entropy classification loss for the target pseudo-labeled samples (since it improves performance). To add {HyPASS}{}, we add to this UDAP* loss the classification and triplet losses $L_{ID}^\mathcal{S}$ for the source samples (by initializing a new classification layer for source IDs), as well as $L_{align}$. Other training hyperparameters are the same as in the UDAP paper \cite{song2020unsupervised}. \paragraph{MMT*} We build our code from the MMT \cite{ge2020mutual} implementation publicly available on the official MMT GitHub\footnote{https://github.com/yxgeee/MMT}. For MMT, we use an initialization phase before the pseudo-labeling UDA learning. DBSCAN is run on k-reciprocal encoded features with $k=30$, whereas the k-means version directly uses the features, as in the original paper \cite{ge2020mutual}. The minimum number of samples $n_{min}$ per cluster is set to 4. To add {HyPASS}{}, we only add to the original MMT global loss function the hard classification and triplet losses defined in the paper \cite{ge2020mutual}, for the source samples (by initializing a new classification layer for source IDs), as well as $L_{align}$.
Other training hyperparameters are the same as in the MMT paper \cite{ge2020mutual}. \paragraph{SpCL*} We build our code from the SpCL \cite{ge2020self} implementation publicly available on the official SpCL GitHub\footnote{https://github.com/yxgeee/SpCL}. It does not need an initialization phase, and the ID loss on source samples is already implemented and used in the original framework with the contrastive loss. To include {HyPASS}{}, we add $L_{align}$ to the global objective and remove the cluster criterion (for the {HyPASS}{} and HDBSCAN experiments). Other hyperparameters are the same as in the SpCL paper \cite{ge2020self}. \\ \newline Our implementations based on the authors' code for UDAP* and MMT* give better performance than those reported in the papers. For SpCL*, we obtained only slightly inferior performance (-1.1 p.p. at worst), which should not interfere with the conclusions that will be drawn from the experiments in Sec.~\ref{sec:exp_analysis}. \begin{center} \begin{table*}[t!] \caption{Comparison of {HyPASS}{} with the empirical setting strategy on pseudo-labeling state-of-the-art methods on person {re-ID}{} adaptation tasks. * means we used the authors' code and added {HyPASS}{}. } \label{table:state-of-the-art} \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|cc|cc|cc|cc|cc|} \cline{2-12} \multirow{2}{*}{} & \multirow{2}{*}{HP selection} & \multicolumn{2}{c|}{Market$\rightarrow$MSMT} & \multicolumn{2}{c|}{PersonX$\rightarrow$Market} & \multicolumn{2}{c|}{PersonX$\rightarrow$MSMT} & \multicolumn{2}{c|}{Market$\rightarrow$Duke} & \multicolumn{2}{c|}{Duke$\rightarrow$Market} \\ \cline{3-12} & & mAP & rank-1 & mAP & rank-1 & mAP & rank-1 & mAP & rank-1 & mAP & rank-1 \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{UDAP* \cite{song2020unsupervised}}} & Empirical ($\epsilon=0.6$) & 12.0 & 30.6 & 48.4 & 68.4 & 10.5 & 26.3 & 50.1 & 70.2 & 55.3 & 78.1 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 11.8 & 29.8 & 48.1 & 68.3 & 10.3 & 25.9 & 51.3 & 72.5 & 55.9 & 80.0 \\ \cline{2-12} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{21.4} & \textbf{48.8} & \textbf{62.2} & \textbf{73.7} & \textbf{15.6} & \textbf{36.4} & \textbf{64.9} & \textbf{78.0} & \textbf{69.8} & \textbf{87.1} \\ \hline \hline \multicolumn{1}{|c|}{\multirow{2}{*}{MMT* \cite{ge2020mutual}}} & Empirical ($\epsilon=0.6$) & 23.8 & 49.9 & 71.1 & 66.8 & 17.4 & 39.0 & 65.3 & 78.1 & 73.6 & 89.4 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 23.0 & 47.8 & 70.9 & 66.1 & 18.0 & 41.1 & 65.2 & 78.2 & 74.2 & 90.4 \\ \cline{2-12} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{25.1} & \textbf{52.2} & \textbf{74.5} & \textbf{88.9} & \textbf{20.3} & \textbf{45.9} & \textbf{68.8} & \textbf{82.8} & \textbf{76.0} & \textbf{90.1} \\ \hline \hline \multicolumn{1}{|c|}{\multirow{3}{*}{SpCL* \cite{ge2020self}}} & Empirical ($\epsilon=0.6$) & 25.7 & 53.4 & 72.2 & 86.1 & 22.1 & 47.7 & 68.3 & 82.5 & 76.1 & 89.8 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 24.6 & 52.0 & 70.8 & 86.5 & 21.1 & 46.9 & 66.4 & 81.3 & 75.8 & 89.5 \\ \cline{2-12} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{27.4} & \textbf{55.0} & \textbf{77.9} & \textbf{91.5} & \textbf{23.7} & \textbf{48.6} & \textbf{71.1} & \textbf{84.5} & \textbf{78.9} & \textbf{92.1}\\ \hline \end{tabular} } \end{table*} \end{center} \begin{center} \begin{table}[t!] \caption{Comparison of {HyPASS}{} with the empirical setting strategy on pseudo-labeling state-of-the-art methods on vehicle {re-ID}{} adaptation tasks. * means we used the authors' code and added {HyPASS}{}.
} \label{table:state-of-the-art_v} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c|cc|cc|} \cline{2-6} \multirow{2}{*}{} & \multirow{2}{*}{HP selection} & \multicolumn{2}{c|}{VehicleID$\rightarrow$Veri} & \multicolumn{2}{c|}{VehicleX$\rightarrow$Veri} \\ \cline{3-6} & & mAP & rank-1 & mAP & rank-1 \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{UDAP* \cite{song2020unsupervised}}} & Empirical ($\epsilon=0.6$) & 35.6 & 74.1 & 35.0 & 75.9 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 35.9 & 75.0 & 35.5 & 79.9 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{36.9} & \textbf{74.9} & \textbf{37.0} & \textbf{77.0} \\ \hline \hline \multicolumn{1}{|c|}{\multirow{2}{*}{MMT* \cite{ge2020mutual}}} & Empirical ($\epsilon=0.6$) & 36.4 & 74.2 & 36.3 & 75.8 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 37.0 & 75.9 & 36.5 & 75.9 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{36.9} & \textbf{75.0} & \textbf{36.8} & \textbf{76.1} \\ \hline \hline \multicolumn{1}{|c|}{\multirow{3}{*}{SpCL* \cite{ge2020self}}} & Empirical ($\epsilon=0.6$) & 37.6 & 79.7 & 37.4 & 81.0 \\ \multicolumn{1}{|c|}{} & HDBSCAN & 37.4 & 79.9 & 37.5 & 79.8 \\ \cline{2-6} \multicolumn{1}{|c|}{} & \textbf{{HyPASS}{}} & \textbf{40.0} & \textbf{81.1} & \textbf{40.3} & \textbf{81.9}\\ \hline \end{tabular} } \end{table} \end{center} \newpage \section{Results and Analysis of {HyPASS}{}} \label{sec:exp_analysis} \subsection{Effectiveness of {HyPASS}{} on state-of-the-art methods} \label{sec:effectiveness} \subsubsection{Performance analysis of {HyPASS}{}} \paragraph{HyPASS vs empirical setting.} Results in Tab.~\ref{table:state-of-the-art} and Tab.~\ref{table:state-of-the-art_v}, respectively, show that our automatic HP selection improves the three SOTA frameworks on all person {re-ID}{} and vehicle {re-ID}{} adaptation tasks. This improvement is particularly significant for UDAP: it increases, e.g., the mAP by +9.4 p.p. on Market$\rightarrow$MSMT and +13.8 p.p. on PersonX$\rightarrow$Market over the empirical setting strategy. The improvement of {HyPASS}{} over the empirical setting strategy is also consistent for ``easier'' adaptation tasks such as Duke$\rightarrow$Market (+14.5 p.p.) and Market$\rightarrow$Duke (+14.8 p.p.). {HyPASS}{} thus seems to benefit a simple pseudo-labeling approach like UDAP, making it competitive with more complex approaches like MMT, which are designed to be resistant to pseudo-label noise. Our contribution also consistently improves MMT and SpCL (the best state-of-the-art approaches) on all tasks: there is, e.g., up to a +4.1 p.p. mAP improvement on PersonX$\rightarrow$Market for SpCL compared to the reported SpCL performance (using the empirical setting). Furthermore, we highlight that SpCL with {HyPASS}{} for cross-dataset UDA re-ID is able to outperform (or at least be competitive with) the latest UDA re-ID and unsupervised approaches: for example, SpCL + {HyPASS}{} reaches 71.1\% mAP on Market$\rightarrow$Duke, whereas \cite{Xuan_2021_CVPR, Yang_2021_CVPR,Zhang_2021_CVPR,Zheng_2021_CVPR, Chen_2021_CVPR, zheng2021exploiting} reach, respectively, 59.1\%, 53.8\%, 69.2\%, 69.2\%, 69.1\% and 67.6\% mAP on Duke or Market$\rightarrow$Duke.\\ We recall that the experiments have been conducted with an empirical setting performed on PersonX$\rightarrow$MSMT ($\epsilon = 0.6$).
A different empirical setting choice, on PersonX$\rightarrow$Market for example, would lead to the empirical value $\epsilon = 0.5$ (see Fig.~\ref{fig:sensibility}), and the improvements brought by {HyPASS}{} would therefore be even greater on other cross-dataset tasks. Indeed, with $\epsilon = 0.5$, the performance is further degraded for SpCL on PersonX$\rightarrow$MSMT (20.3\% mAP). {HyPASS}{} therefore improves the mAP by +3.4 p.p. under this other empirical setting for SpCL. \paragraph{HyPASS vs HDBSCAN.} Moreover, the results in Tab.~\ref{table:state-of-the-art} and Tab.~\ref{table:state-of-the-art_v} show that using {HyPASS}{} (with DBSCAN) consistently outperforms HDBSCAN for the three frameworks and on all the person \& vehicle re-ID cross-dataset benchmarks. Indeed, the results show that HDBSCAN is in fact not necessarily better than using the empirical setting $\epsilon = 0.6$ (e.g., 24.6\% mAP for SpCL on Market$\rightarrow$MSMT with HDBSCAN instead of 25.7\% mAP with the empirical setting) or only brings small improvements (e.g., +0.1 p.p. for SpCL on VehicleX$\rightarrow$Veri with HDBSCAN instead of the empirical setting). Therefore, the conclusions drawn for the empirical setting vs {HyPASS}{} remain the same for the empirical setting vs HDBSCAN: among these three HP selection strategies, {HyPASS}{} appears to be the best one. \subsection{A cluster quality analysis to understand the effectiveness of HyPASS} \label{sec:cluster_quality} \begin{figure}[b!] \includegraphics[width=\columnwidth]{figs/ari.png} \caption{Positive impact of an iterative HP tuning of $\epsilon$ (HyPASS) on the clustering quality. The figure represents the evolution of the ARI of the pseudo-labeled target training set through the epochs on PersonX$\rightarrow$Market with SpCL \cite{ge2020self}.} \label{fig:ari} \end{figure} To understand more precisely the positive impact of {HyPASS}{} on the training process, we monitor the evolution of: (i) the quality of the clusters found during the pseudo-labeling cycles, through the ARI of the pseudo-labeled target samples, every 10 epochs (after the pseudo-labels are updated); (ii) the HP $\epsilon$ found by {HyPASS}{}. Fig.~\ref{fig:ari} shows that {HyPASS}{} finds better clusters (with a better ARI) than the fixed empirical parameter strategy ($\epsilon = 0.6$) from the first epochs on. We believe this impact on the quality of the clusters is ``iterative'': better clusters (pseudo-labels) in the early epochs imply the learning of better representations, and therefore the possibility of making better clusters when the pseudo-labels are updated. Fig.~\ref{fig:ari} also highlights that the value of the selected $\epsilon$ changes cyclically (as the feature representation changes) over the pseudo-labeling cycles.\\ \subsection{Ablative Study \& Parameter Analysis on training time and performance} \subsubsection{Relevance of the optimization losses} \label{sec:ablative} In the ablative study presented in Tab.~\ref{table:study}, we seek to verify the relevance of our optimization losses (see Eq.~\ref{eq:total_loss}) for the selection of HP in the UDAP~\cite{song2020unsupervised}, MMT~\cite{ge2020mutual} and SpCL~\cite{ge2020self} approaches. We train different variants by removing terms from the total loss function (Eq.~\ref{eq:total_loss}) in order to observe their effect on the final performance (mAP). Variant \#5 corresponds to {HyPASS}{} with the total loss function.
First, we notice that training the model to be discriminative on the source domain (variant \#3), together with our AUTO HP-TUNING, allows improvements compared to variant \#1 (AUTO HP-TUNING only) for UDAP: +18.6 p.p. mAP on PersonX$\rightarrow$Market. We believe that the feature encoder in variant \#1 specializes on the target domain while forgetting the source domain initialization. Thus, HP selection becomes worse because it is done on a representation that is less and less discriminative for the source domain over time. After a certain number of epochs, bad choices of HP may impact the quality of the pseudo-labels and, in turn, the target representation. In variant \#2, performance drops even more if alignment is added without $L_{ID}^{\mathcal{S}}$: -28.7 p.p. on PersonX$\rightarrow$Market. We believe that alignment on a poorly discriminative source is even more harmful to the target representation. We notice the same behavior for MMT, with -29.4 p.p. and -8.7 p.p., respectively. Therefore, when using the AUTO HP-TUNING of {HyPASS}{}, it is necessary to keep optimizing source ID-discriminative features with $L_{ID}^{\mathcal{S}}$. Adding the term $L^{cond}_{align}$ of conditional domain alignment of feature similarities (variant \#5) further improves performance substantially compared to variant \#3: +13.4 p.p. on PersonX$\rightarrow$Market. The same improvement trend is observed for MMT and SpCL. This seems to confirm our theoretical considerations on reducing the variance of the estimation by reducing the domain discrepancy in the feature similarity space (see Sec.~\ref{sec:feature_sim}). Finally, by comparing variants \#4 and \#5, we observe the contribution of our cyclic AUTO HP-TUNING: +9 p.p. on PersonX$\rightarrow$Market. The same holds for MMT and SpCL. We believe this shows the importance of choosing a suitable HP for each pseudo-labeling update cycle, as done with the AUTO HP-TUNING step of {HyPASS}{} (variant \#5). \begin{table}[t] \caption{Ablation studies on {HyPASS}{} for the UDAP*, MMT* and SpCL* methods (mAP in \%).
\#5 is (full) {HyPASS}{}.} \label{table:study} \resizebox{\columnwidth}{!}{ \begin{tabular}{c|c|ccc|c|c|} \cline{2-7} \multirow{2}{*}{} & \multirow{2}{*}{{ \#}}& \multicolumn{3}{c|}{Losses} & \multirow{2}{*}{\begin{tabular}{c}\small{Auto.}\\ \small HP tuning \end{tabular} } & \multicolumn{1}{c|}{\begin{tabular}{c} PersonX\\$\rightarrow$Market \end{tabular}} \\ \cline{7-7} \multirow{2}{*}{} & &$L_{ID}^{T}$ & $L_{ID}^{S}$ & $L^{cond}_{align}$ & & mAP \\ \cline{1-7} \multicolumn{1}{|c|}{\multirow{5}{*}{UDAP* \cite{song2020unsupervised}}} & 1 & \checkmark & & & \checkmark & 30.2 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{2} & \checkmark & & \checkmark & \checkmark & 20.1 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{3} & \checkmark & \checkmark & & \checkmark & 48.8 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{4} & \checkmark & \checkmark & \checkmark & & 53.2 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{5} & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{62.2} \\ \cline{1-7} \multicolumn{1}{|c|}{\multirow{5}{*}{MMT* \cite{ge2020mutual}}} & 1 & \checkmark & & & \checkmark & 55.9 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{2} & \checkmark & & \checkmark & \checkmark & 41.3 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{3} & \checkmark & \checkmark & & \checkmark & 70.7 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{4} & \checkmark & \checkmark & \checkmark & & 71.5 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{5} & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{74.5} \\ \cline{1-7} \multicolumn{1}{|c|}{\multirow{3}{*}{SpCL* \cite{ge2020self}}} & 3 & \checkmark & \checkmark & & \checkmark & 68.1 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{4} & \checkmark & \checkmark & \checkmark & & 73.9 \\ \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{5} & \checkmark & \checkmark & \checkmark & \checkmark & \textbf{77.9} \\ \cline{1-7} \end{tabular}} \end{table} \subsubsection{Performance of {HyPASS}{} with other clustering algorithms.} \label{sec:clustering} \begin{figure}[t] \includegraphics[width=1\columnwidth]{figs/hp_sensibility_mmt.png} \caption{Performance sensitivity of the state-of-the-art framework MMT \cite{ge2020mutual} with respect to the $k$ parameter of k-means.} \label{fig:sensibility_mmt} \end{figure} \paragraph{K-means} Other clustering algorithms can be used instead of DBSCAN. But they still need HP to be set. For example, k-means relies on the number $k$ of clusters. Similarly to the sensitivity of DBSCAN to $\epsilon$, Fig.~\ref{fig:sensibility_mmt} shows that the performance with k-means is also sensitive to the number-of-clusters HP. Again, choosing a good HP value is crucial to get good performance: for example, choosing $k = 250$ instead of $k = 500$ makes the performance drop from 70.8\% to 50.2\% mAP for PersonX$\rightarrow$Market and from 16.6\% to 10.1\% for PersonX$\rightarrow$MSMT with MMT.\\ Therefore, the empirical setting strategy of choosing $k$ on another adaptation task also limits performance. Indeed, re-using the best value from PersonX$\rightarrow$MSMT ($k=1500$) leads to 59.8\% mAP for PersonX$\rightarrow$Market whereas it could have been 70.8\% for $k=500$. Conversely, choosing $k=500$ from PersonX$\rightarrow$Market leads to 13.6\% for PersonX$\rightarrow$MSMT instead of 17.4\% for $k=1500$.\\ As illustrated in Fig.~\ref{fig:sensibility_mmt} and shown in Tab.~\ref{table:cluster}, using {HyPASS}{} leads to better performance compared to the empirical setting.
For PersonX$\rightarrow$Market with MMT, it leads to 71.1\% mAP instead of the 59.8\% obtained by reusing $k=1500$ from the empirical setting on PersonX$\rightarrow$MSMT. \paragraph{Agglomerative Clustering} Agglomerative Clustering \cite{beeferman2000agglomerative} is another clustering algorithm that can be used instead of DBSCAN. Like DBSCAN, Agglomerative Clustering relies on a neighborhood distance threshold parameter $\epsilon$. {HyPASS}{} can also improve the performance of pseudo-labeling methods using this clustering algorithm. As shown in Tab.~\ref{table:cluster}, for PersonX$\rightarrow$Market with SpCL, using {HyPASS}{} leads to 78.2\% mAP instead of 72.6\% using the empirical setting ($\epsilon = 0.6$). \begin{table}[t!] \caption{Performance (mAP) of {HyPASS}{} with k-means and Agglomerative Clustering and with state-of-the-art pseudo-labeling approaches. We set $k=1500$ as the empirical setting since it is the best configuration on PersonX$\rightarrow$MSMT in our experiments. For Agglomerative Clustering, the empirical setting $\epsilon = 0.6$ is motivated by analogy to our experiments for PersonX$\rightarrow$MSMT with DBSCAN (see Fig.~\ref{fig:sensibility}), which also relies on a distance threshold.} \label{table:cluster} \resizebox{\columnwidth}{!} { \begin{tabular}{c|c|c|c|} \cline{2-4} & Clustering & HP choice & PersonX$\rightarrow$Market \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{MMT* \cite{ge2020mutual}}} & \multirow{2}{*}{k-means} & Empirical $k=1500$ & 59.8 \\ \cline{3-4} \multicolumn{1}{|c|}{} & & \textbf{{HyPASS}{}} & \textbf{71.1} \\ \hline \hline \multicolumn{1}{|c|}{\multirow{2}{*}{SpCL* \cite{ge2020self}}} & \multirow{2}{*}{Agglo. Clustering \cite{beeferman2000agglomerative}} & Empirical $\epsilon = 0.6$ & 72.6 \\ \cline{3-4} \multicolumn{1}{|c|}{} & & \textbf{{HyPASS}{}} & \textbf{78.2} \\ \hline \end{tabular}} \end{table} \subsubsection{Influence of the validation set size.} \label{sec:validation_exp} \begin{table}[h] \caption{Experiments with different validation set sizes $N^S_{val}$ on SpCL for PersonX$\rightarrow$Market, showing the influence of the validation set size on performance and training computation time.} \label{table:time} \resizebox{\columnwidth}{!} { \begin{tabular}{c|c|cccc|} \cline{2-6} & \multicolumn{1}{|c|}{\multirow{2}{*}{Empirical setting}} & \multicolumn{4}{c|}{HyPASS $N^S_{val} =$} \\ & & 1000 & 5000 & 10000 & 30816 \\ \cline{1-6} \multicolumn{1}{|c|}{Time} & 60h12 (6 $\times \sim$ 10h02) & 12h08 & 34h39 & 42h21 & 68h43 \\ \cline{1-6} \multicolumn{1}{|c|}{mAP (in \%)} & 72.2 & 76.1 & 77.8 & 77.8 & 77.9 \\ \hline \end{tabular} } \end{table} We have seen that {HyPASS}{} brings consistent improvements on various adaptation tasks (see Tab.~\ref{table:state-of-the-art}) and therefore with various validation set sizes (see the number of query validation images in Tab.~\ref{table:dataset}). These results also show experimentally that the performance improvements from {HyPASS}{} are robust to various compositional biases between the source and target datasets, in particular differences in the number of queries per ID and in the number of IDs.\\ But the validation set size also intuitively influences the clustering computation time, and thus the full training computation time of the frameworks to which {HyPASS}{} is added. Moreover, it is also interesting to gain more experimental insight into the influence of the validation set on the performance improvement of {HyPASS}{} for a fixed adaptation task.
We therefore further investigate the influence of the validation set size on the training computation time and the {re-ID}{} performance. Experiments are conducted on PersonX$\rightarrow$Market for the SpCL framework. For this, we randomly select $N^S_{val}$ images from the PersonX query set. The execution time (on the same machine) and {re-ID}{} performance are reported in Tab.~\ref{table:time}. The empirical setting strategy has been performed with the 5 HP values $\epsilon = 0.3, 0.4, 0.5, 0.6, 0.7$: it requires 5 trainings of SpCL with these HP values on PersonX$\rightarrow$MSMT, then one more training of SpCL on PersonX$\rightarrow$Market with the best $\epsilon$ ($\epsilon = 0.6$) evaluated by the mAP on the target test set. \\ We notice that the training computation time increases with the validation set size. However, it is still fairly reasonable for a training time that includes hyperparameter selection. Even with a large validation set (30k images), the complete training time lasts only 68h43 and brings a significant performance gain for this adaptation task (+5.7 p.p.). In practice, 30k images is quite large for a validation set, and experiments show that even with 5k images performance remains essentially the same, with a training computation time reduced by 25h33 compared to the empirical setting. More generally, the performance of {HyPASS}{} is not very sensitive to the validation set size variations tested (from 1/10 up to 3 times the size of the training set, inducing only a 1.8 p.p. variation). Indeed, this is consistent with our expectation that reducing the domain discrepancy should reduce the sensitivity to the number of validation samples, as motivated by Eq.~\ref{eq:variance} in Sec.~\ref{sec:variance}. \subsubsection{Influence of Auto HP selection criterion.} \label{sec:time} We included in the design of {HyPASS}{} different modeling choices aiming at improving training time and performance. To show the relevance of these choices, we conducted various experiments by changing the {HyPASS}{} HP selection strategy on PersonX$\rightarrow$Market with the SpCL framework. First, the {HyPASS}{} HP selection is directly based on cyclic clustering quality evaluations instead of {re-ID}{} performance evaluation, in order to reduce the computation cost. As illustrated in Tab.~\ref{table:comput}, using {HyPASS}{} but with the mAP criterion (the {re-ID}{} criterion, which is our main task) on the source test set to select the clustering HP gives almost the same performance as {HyPASS}{} (77.1\% instead of 77.9\% mAP), but greatly increases the training time to 90h29 instead of 68h43. We attribute this mainly to the higher number of training steps needed to evaluate HP values with the mAP. Even though the best target mAP is the final goal, our choice to select HP by clustering quality evaluation instead of mAP evaluation (Sec.~\ref{sec:cyclic}) is relevant to limit the training time while keeping the best {re-ID}{} performance. \begin{table}[h] \caption{Impact of {HyPASS}{} with different versions of the Auto HP criterion on the {re-ID}{} performance and computation time.
Experiments are done on SpCL for PersonX$\rightarrow$Market.} \label{table:comput} \resizebox{\columnwidth}{!} { \begin{tabular}{|c|c|c|c|} \hline Variants & Auto HP criterion & mAP & Time \\ \hline SpCL w/ mAP HP selection & re-ID task (mAP) & 77.1 & 90h29 \\ \hline SpCL w/ HyPASS & clustering task (ARI) & 77.9 & 68h43 \\ \hline \end{tabular} } \end{table} \subsubsection{Performance with other {HyPASS}{} variants} \label{sec:variants} \paragraph{Domain Alignment.} We conducted some experiments with other implementation choices for {HyPASS}{} with SpCL on PersonX$\rightarrow$Market. For instance, a 2-layer Domain Adversarial Neural Network (DANN \cite{ganin2016domain}) can be used instead of MMD to align the pairwise feature similarities. Tab.~\ref{table:variants} shows that {HyPASS}{} keeps its performance improvement over the framework without {HyPASS}{} (+5.1 p.p. mAP compared to SpCL alone). \paragraph{Cluster quality criterion.} The Normalized Mutual Information (NMI) can replace the ARI and performs just as well (+0.2 p.p. mAP in Tab.~\ref{table:variants} compared to {HyPASS}{} with ARI). \paragraph{HP search strategy.} A simpler HP search strategy, such as a grid search over $\epsilon \in \Lambda = \{0.05, 0.1, 0.15, \ldots, 2\}$, can replace the Bayesian search. It still gives good results with {HyPASS}{} (-0.3 p.p. compared to the Bayesian search in Tab.~\ref{table:variants}). \paragraph{HP search initialization.} In Tab.~\ref{table:init}, when using the Bayesian search with $N_{HP} = 50$ proposed values per HP tuning phase, the initial value $\epsilon_0$ has virtually no impact on performance. \begin{table}[h] \caption{Performance of {HyPASS}{} for PersonX$\rightarrow$Market with the SpCL* \cite{ge2020self} pseudo-labeling method for different variants.} \label{table:variants} \centering \resizebox{\columnwidth}{!} { \begin{tabular}{c|c|} \cline{2-2} & PersonX$\rightarrow$Market (mAP in \%)\\ \hline \multicolumn{1}{|c|}{SpCL*} & 72.2 \\ \hline \multicolumn{1}{|c|}{SpCL*+ {HyPASS}{} (MMD + Bay. search + ARI)} & 77.9 \\ \hline \multicolumn{1}{|c|}{SpCL* + {HyPASS}{} (DANN \cite{ganin2016domain} + Bay. search + ARI)} & 77.3 \\ \hline \multicolumn{1}{|c|}{SpCL* + {HyPASS}{} (MMD + Bay. search + NMI)} & 78.1 \\ \hline \multicolumn{1}{|c|}{SpCL* + {HyPASS}{} (MMD + Grid Search + ARI)} & 77.6 \\ \hline \end{tabular}} \end{table} \begin{table}[h] \caption{Robustness of {HyPASS}{} against the Bayesian search initialization $\epsilon_{0}$. Performance for PersonX$\rightarrow$Market of {HyPASS}{} with the SpCL* \cite{ge2020self} pseudo-labeling method is reported for different values of the Bayesian search initialization.} \label{table:init} \centering \resizebox{ \columnwidth}{!} { \begin{tabular}{|c|c|} \cline{1-2} Bayesian search initialization $\epsilon_{0}$ &PersonX$\rightarrow$Market (mAP in \%) \\ \hline \multicolumn{1}{|c|}{0.01} & 77.8 \\ \hline \multicolumn{1}{|c|}{0.8} & 77.9 \\ \hline \multicolumn{1}{|c|}{2} & 77.8 \\ \hline \end{tabular}} \end{table} \section{Conclusion} \label{sec:conclusion} This paper addresses the problem of empirical HP selection for pseudo-labeling UDA re-ID approaches, since it can have a negative impact on performance when addressing new unlabeled cross-dataset tasks. We provided novel theoretical insights to highlight the conditions under which a source-based selection is effective for the UDA clustering task. These allowed us to design a new method, {HyPASS}{}, to automatically select suitable HP for the clustering phase of pseudo-labeling UDA methods.
It is based on source guidance and domain similarity alignment. When {HyPASS}{} is applied to select critical clustering HP instead of using empirical settings, it consistently improves the performance of the best state-of-the-art methods. We believe that suitable HP selection could also be relevant for the purely unsupervised re-ID scenario, in which pseudo-labeling methods also seem effective \cite{ge2020self}. Further work could be done on how to select suitable HP in the unsupervised scenario, when no labeled dataset is to be used. \newpage \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/fabian.png}}]{Fabian Dubourvieux} Fabian Dubourvieux is currently a PhD student in computer vision and machine learning at CEA LIST, Paris-Saclay University, and the LITIS CNRS Laboratory, INSA Rouen Normandie. His main research interests include unsupervised domain adaptation, self-supervised learning and object re-identification. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/photo_angelique.jpg}}]{Angélique Loesch} Angélique Loesch received her PhD degree in computer science from the University Blaise Pascal, Clermont-Ferrand, France, in 2017. Her PhD research focused on Simultaneous Localization and Mapping (SLAM) and 3D object tracking, in partnership with the Pascal Institute. She is currently a permanent researcher at CEA LIST, Paris-Saclay University. Her main research interests include computer vision, with a focus on visual perception with deep learning (object re-identification, instance and semantic segmentation, few-shot classification and detection). \end{IEEEbiography} \printbibliography \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/photo_romaric.jpg}}]{Romaric Audigier} Romaric Audigier is a researcher in computer vision \& machine learning at CEA LIST, Paris-Saclay University. He received a PhD in image processing from the State University of Campinas (UNICAMP) in 2007. After a post-doctoral position at Mines-ParisTech, he joined CEA LIST in 2009. His current research interests include frugal learning paradigms like unsupervised domain adaptation and few-shot learning applied to visual scene analysis tasks (object detection, segmentation, re-identification, tracking, human interaction and event detection). \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/samia.png}}]{Samia Ainouz} Samia Ainouz is a full Professor at INSA Rouen Normandy. She has been the head of the Intelligent Transportation Systems team since June 2019. Her research area is multimodality for intelligent vehicle navigation, including data fusion, 3D reconstruction and VSLAM. She has supervised 7 PhD students on road scene analysis and autonomous navigation. Recently, she has focused her research on non-conventional imaging for autonomous navigation in adverse weather conditions using deep learning tools. She is currently the head of the ANR project ICUB, dealing with road scene analysis in adverse weather conditions, in collaboration with Peugeot PSA, Stereolabs and ImVia. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figs/stephane.png}}]{Stéphane Canu} Stéphane Canu is a Professor at the LITIS research laboratory and the information technology department of the National Institute of Applied Science in Rouen (INSA).
He was the dean of the computer engineering department, which he created in 1998, until 2002, when he was named director of the computing services and facilities unit. In 2004 he joined the machine learning group at ANU/NICTA (Canberra) for a one-year sabbatical, working with Alex Smola and Bob Williamson. In the last five years, he has published approximately thirty papers in refereed conference proceedings or journals in the areas of theory, algorithms and applications of kernel machine learning algorithms and other flexible regression methods. His research interests include deep learning, kernel machines, regularization, machine learning applied to signal processing, pattern classification, factorization for recommender systems and learning for context-aware applications. \end{IEEEbiography} \EOD \end{document}
{ "attr-fineweb-edu": 1.605469, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUaefxK7Tt522WX0gs
\section{Introduction} Dark Energy is the usual explanation for the apparent acceleration implied by the type Ia supernova (SNIA) data \cite{Perlmutter99}, \cite{Riess98}. However, the suggestion for the existence of dark energy is ultimately based on the cosmological principle, that of assuming global homogeneity and isotropy. The requirement of an extra parameter $\Omega_{\Lambda}$ is then necessary to explain the dimming of the supernova magnitudes at large redshift. Although at large scales the universe appears to be homogeneous and isotropic, in agreement with the CMB observations, at small scales it is far from being like that, due to the presence of complex structures that produce underdensities \cite{Keenan13}, fractal-like structures \cite{Labini11},\cite{Labini98} and bulk flows that are not at rest with respect to the Hubble flow \cite{Feindt13} \cite{Hudson99} \cite{Magoulas16}. There have been many works claiming that some of these effects can mimic an apparent acceleration. Possibly the combination of many (even all) of these contributions may have a stronger effect than we previously thought \cite{Celerier06} \cite{Enqvist07} \cite{Cosmai19} \cite{Tsagas11} \cite{Asvesta22}. As a step towards the understanding of the influence of the local structure in cosmology, in this work we explore a simple approach by studying the local fractal structure of the universe with the luminosity distance relations from SNIA data. In a recent paper \cite{Cosmai19}, the authors stated that a fractal-like $M(r)$ function can fit the SNIA data using a non-dark energy LTB model written as: \begin{equation}\label{eq1} M(r) \propto r^D, \end{equation} where $D$ is the fractal dimension. Although interesting, the analysis is based on ambiguous arguments that produce some theoretical problems. Here, we propose an analysis based also on the LTB metric but using both a fractal dimension and a length scale, which seems to work without any theoretical problems. \section{The LTB model} Lemaitre \cite{Lemaitre33}, Tolman \cite{Tolman34} and Bondi \cite{Bondi47} were the first to study isotropic and radially inhomogeneous universes, known as LTB models. Here we restrict ourselves to the study of pressureless matter universes. Assuming for generality the existence of a cosmological constant with equation of state $ \rho_\Lambda=-p_\Lambda$, the energy-momentum tensor is given by: \begin{equation}\label{eq2} T^{\mu}_{\nu}=-\rho_M(r,t)\delta^{\mu}_0\delta^0_{\nu}-\rho_\Lambda\delta^{\mu}_{\nu}, \end{equation} in which the matter density $\rho_M$ varies in time and radially in space, while $\rho_\Lambda$ remains fixed. The metric of an LTB universe can be written as: \begin{equation}\label{eq3} ds^2 = -dt^2 + \frac{R'^2}{1+2E(r)}dr^2 + R^2d\Omega^2, \end{equation} where $R=R(r,t)$ is a generalization of the scale factor and $E(r)$ is a function associated with the total energy. Here a prime indicates a derivative with respect to $r$ and a dot a derivative with respect to time $t$. From Einstein's equations we get the evolution equation: \begin{equation}\label{eq4} \frac{\dot{R}^2}{R^2}=\frac{2GM(r)}{R^3}+\frac{8\pi G \rho_\Lambda}{3} +\frac{2E}{R^2}, \end{equation} where $M(r)$ is an integration constant (in time). Notice that by writing $R(r,t)=a(t)r$, where $a(t)$ is the scale factor, and $2E(r)=-Kr^2$, with $K$ the constant curvature factor, the metric reduces to the FLRW metric and (\ref{eq4}) reduces to the usual Friedmann equation.
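As a quick check (assuming a dust source with $M(r)=\frac{4\pi}{3}\rho_{M,0}\,r^3$, so that the right-hand side becomes $r$-independent), the substitution $R=a(t)r$, $2E=-Kr^2$ turns Eq.~(\ref{eq4}) into \begin{equation*} \frac{\dot a^2}{a^2}=\frac{8\pi G}{3}\frac{\rho_{M,0}}{a^3}+\frac{8\pi G \rho_\Lambda}{3}-\frac{K}{a^2}\,, \end{equation*} i.e., the Friedmann equation with $\rho_M=\rho_{M,0}/a^3$.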
The $M(r)$ function can be interpreted as the mass contained in a sphere of radius $R=ar$ in the FLRW limit. The matter density is defined through: \begin{eqnarray}\label{eq5} \frac{M'}{R'R^2} = 4\pi \rho _M. \end{eqnarray} So the mass function $M(r)$ is defined as the integral of $\rho_M$ over the LTB space: \begin{equation}\label{eq6} M(r)=\int_0^r\rho _M (4 \pi R'R^2 dr). \end{equation} Defining the local Hubble rate by \begin{equation}\label{eq7} H(r,t)=\frac{\dot{R}(r,t)}{R(r,t)}, \end{equation} we can write (\ref{eq4}) as: \begin{equation}\label{eq8} H^2= \frac{2GM(r)}{R^3}+\frac{8\pi G \rho_\Lambda}{3} +\frac{2E}{R^2}. \end{equation} By defining the local matter density parameter $\Omega_m$ as \begin{equation}\label{eq9} 2GM(r)=H_0^2(r)\Omega_m(r)R_0^3(r), \end{equation} where $H_0(r)=H(r,t_0)$ is the present local Hubble rate and $R_0=R(r,t_0)$ is an initial condition that can be gauged, and for the curvature term \begin{equation}\label{eq10} 2E(r)=H_0^2(r)\Omega_k(r)R_0^2(r), \end{equation} the local Friedmann equation takes the form \begin{equation}\label{eq11} H^2(r,t)=H_0^2(r)( \Omega_m(r) A^3 + (1-\Omega_m(r))A^2), \end{equation} where $A(r,t)=R_0/R$. This means that we can choose any $R_0(r)$ function and the equations remain the same. This freedom is usually fixed by taking $R_0(r)=r$ \cite{Enqvist07}. The Friedmann equation (\ref{eq11}) completely defines the relation between the functions $H_0(r)$, $\Omega_m(r)$, $R(r,t)$ and $H(r,t)$. One of the most used constraints on LTB models is based on the time that has passed from the big bang until now \cite{GarciaBellido08}, known as the bang time $t_{BT}(r)$, defined by: \begin{equation}\label{eq12} \begin{split} H_0(r)(t-&t_{BT}(r))=\int_0^{R(r,t)/R_0(r)}\frac{dx}{\sqrt{\Omega_m/x+\Omega_k}} \\ &=\frac{R(r,t)}{R_0(r)\sqrt{\Omega_k(r)}}\sqrt{1+\frac{R_0(r)\Omega_m(r)}{R(r,t)\Omega_k(r)}} \\ &-\frac{\Omega_m(r)}{\sqrt{\Omega_k(r)^3}}\sinh^{-1}{\sqrt\frac{\Omega_m(r)R(r,t)}{\Omega_k(r)R_0(r)}}. \end{split} \end{equation} This function means that not all locations in the universe were created at the same time. If we set $t=t_0$ we can obtain the current age of the universe $t_{BB}(r)$: \begin{equation}\label{eq13} \begin{split} &H_0(r)(t_0-t_{BB}(r))=H_0(r)t_{u}(r)=\int_0^{1}\frac{dx}{\sqrt{\Omega_m/x+\Omega_k}} \\ &=\frac{1}{\sqrt{\Omega_k(r)}}\sqrt{1+\frac{\Omega_m(r)}{\Omega_k(r)}} -\frac{\Omega_m(r)}{\sqrt{\Omega_k(r)^3}}\sinh^{-1}{\sqrt\frac{\Omega_m(r)}{\Omega_k(r)}}, \end{split} \end{equation} where we have defined $t_{u}(r)$ as the local age of the universe. It is interesting to note that we need to fix two functions here, chosen among $\Omega_m(r)$, $H_0(r)$ and $t_{BT}(r)$, and also fix a gauge (usually $R_0(r)=r$) to completely define the scale factor function $R(r,t)$. We obtain a similar result if we use instead the functions $M(r)$ and $E(r)$: \begin{equation}\label{eq14} \begin{split} \sqrt{2}(t-&t_{BT}(r))=\int_0^{R(r,t)}\frac{dx}{\sqrt{2GM(r)/x+2E(r)}} \\ &=\frac{R(r,t)}{\sqrt{E(r)}}\sqrt{1+\frac{GM(r)}{R(r,t)E(r)}} \\ &-\frac{GM(r)}{\sqrt{E(r)^3}}\sinh^{-1}{\sqrt\frac{GM(r)R(r,t)}{E(r)}} \end{split} \end{equation} In this case, it seems we need to set three functions, $t_{BT}(r)$, $M(r)$ and $E(r)$, to define $R(r,t)$, but in fact one of them is a pure gauge.
Usually $M(r) \propto r^3$ is assumed as this gauge (as we mentioned in the introduction, in some fractal-like models it has been proposed to use Eq.~(\ref{eq1}) as the gauge, with $D$ being the fractal dimension \cite{Cosmai19}), so we just need to define two functions, as we will see. Another way to define an LTB model is through a density contrast profile $\delta(r,t)$ defined as: \begin{equation}\label{eq15} \delta(r,t)=\frac{\rho(r,t)-\rho_0^\infty}{\rho_0^\infty}, \end{equation} where $\rho_0^\infty=\rho(\infty,t_0)$ is the constant mean density far outside the local inhomogeneity at the present time. This function is useful because it allows us to describe the mass profile directly from local photometric survey data. We have for $t=t_0$: \begin{equation}\label{eq16} \rho_0(r)=\rho_0^\infty(\delta_0(r)+1). \end{equation} Although the functions $\Omega_m(r)$, $M(r)$ and $E(r)$ have a clear meaning, it is difficult to relate them directly to the observed quantity $\delta_0(r)$. So instead of using the complexities of (\ref{eq13}), we can approximate a solution by assuming the universe is described by FLRW and defining an integrated density function as: \begin{equation}\label{eq17} \rho_0^H(r) \propto \frac{M(r)}{r^3} \propto (\delta^H(r)+1), \end{equation} which is exactly true for a homogeneous universe. Using the gauge $R_0(r)=r$ we can write: \begin{equation}\label{eq18} \delta^H(r)+1 \propto \frac{M(r)}{r^3} \propto \Omega_m(r)H_0(r)^2, \end{equation} where $\delta^H(r)$ is the integrated density contrast. Then, constraining $t_{BT}(r)$ and $\delta^H(r)$, we can completely define $R(r,t)$. It is important to note that this is not exactly the integrated density contrast, but it is a useful approximation that allows us to define the universe in terms of real density parameters. \section{Luminosity Distance} The luminosity distance in LTB models can be obtained from: \begin{equation}\label{eq19} d_L = (1+z)^2 R(r,t), \end{equation} where $r=r(z)$ and $t=t(z)$ are defined by the geodesic equations \begin{eqnarray}\label{eq20-21} \frac{dr}{dz} &=& \frac{\sqrt{1+2E(r)}}{(1+z)\dot{R'}}, \\ \frac{dt}{dz} &=& -\frac{R'}{(1+z)\dot{R'}}. \end{eqnarray} These equations are necessary to make contact with the observations. In our particular case, we use the Pantheon sample \cite{Scolnic18}, which provides the redshift $z$, the distance modulus $\mu(z)$ and the error $\sigma_{\mu}(z)$ for $1048$ type Ia supernovae, to be compared to the theoretical expectation \begin{equation}\label{eq44} \mu(z) = 5\log_{10}\frac{d_L(z)}{10 \text{pc}}. \end{equation} \section{Fractals in LTB Model} In this section we follow the treatment of Ref.~\cite{Cosmai19}. In that work, the authors argue that, from the statistical analysis of the 3D galaxy distribution, the average conditional density can be described locally as \begin{equation}\label{eq22} \langle n(r) \rangle \sim r^{-\gamma}, \end{equation} where $\gamma$ is a phenomenological parameter that can take values from $0.2$ to $0.9$ over a scale range from $1$ Mpc/$h$ to $100$ Mpc/$h$. Beyond that scale, a transition to homogeneity ($\gamma=0$) is expected, whose behavior is not completely clear. Then, for the real galaxy structure, we can approximate the matter density as: \begin{equation}\label{eq23} \rho_M(r) \sim \langle n(r) \rangle, \end{equation} a behaviour that can be described using an LTB model.
In this case, the function $M(r)$ is related to this fractal density function through equation (\ref{eq6}): \begin{equation}\label{eq24} M(r)=\int_0^r \langle n(r) \rangle(4 \pi R'R^2 dr), \end{equation} where $M(r)$ can be understood as the matter inside a volume of radius $r$. In \cite{Cosmai19}, it was stated that a fractal-like $M(r)$ function can fit the SNIA data in a pressureless LTB model given by \begin{equation}\label{eq25} M(r)=\Phi r^D, \end{equation} where $D=3-\gamma$ is the fractal dimension and $\Phi$ is the mass scale. The problem with this definition is that they fix the bang time to zero, $t_{BT}(r)=0$, use the gauge $R_0(r)=r$ and also assume the energy function $E(r)=0$. Then, they fix three functions and a gauge. Actually, the proposed definition $M(r) \propto r^D$ can be interpreted as a gauge in itself, and the choice of $R_0(r)$ cannot also be imposed. However, if $R_0(r)$ is not fixed, then the model is just an Einstein-de Sitter (EdS) universe without a cosmological constant and $M(r)$ is just a coordinate scaling. The problem arises from assuming a homogeneous bang-time function $t_{BT}(r)=0$, which leads to the incorrect definition: \begin{equation}\label{eq26} R(r,0)=R(r,t_0). \end{equation} In fact, $R(r,t_0)=R_0(r)$ is the present scale factor (which is usually a gauge in LTB cosmology when $\Omega_m(r)$ is used instead of $M(r)$), while $R(r,0)=0$ is the initial scale factor at the big bang itself. Those issues lead to a theoretically inconsistent model. If we perform the integration of the equation for a purely flat universe, with a homogeneous bang time function and the function $M(r)$, with the right limits ($t\in[0,t]$, $R\in[0,R(r,t)]$), we just get the EdS result: \begin{equation}\label{eq27} R(r,t)= \left (\frac{9M(r)}{2}\right )^{1/3}t^{2/3}. \end{equation} We can see here that the choice of the function $M(r)$, with the assumption of global flatness and a homogeneous big bang, is just a gauge related to the scale of the radial coordinate. Also, the Hubble function can be written as \begin{eqnarray}\label{eq28} H(t)=\frac{\dot{R}}{R}= \frac{2}{3t}, \end{eqnarray} independently of the $M(r)$ function used. We have no divergences for $H_0$ at $r=0$, but also the fractal structure described with $M(r)$ cannot be used to explain any $H_0$ tension. The question now arises: can we use LTB models with $E(r)=0$ to describe a fractal distribution of matter? In the following we describe our proposal to do that. First, we propose to define a function $M(r)$ that can describe both the internal fractal statistical behaviour of the density and a transition to a FLRW universe (a transition between fractality and non-fractality). The function $M(r)$ could be described as \begin{align}\label{eq29-30} M_{in}(r)&=\Phi_{in}r^D, &r<L \\ M_{out}(r)&=\Phi_{out}r^3, &r>L, \end{align} where $M\sim r^3$ is the mass for a FLRW universe. At this point we have four free parameters in the model: the mass scales $\Phi_{in}$ and $\Phi_{out}$, the fractal dimension $D$ and the scale of the fractal structure $L$. If we demand continuity at $r=L$ we can reduce to three free parameters: \begin{eqnarray}\label{eq31} \Phi_{in}L^D=\Phi_{out}L^3, \end{eqnarray} and the model becomes \begin{align}\label{eq32-33} M_{in}(r)&=\Phi_{out}L^3\left (\frac{r}{L}\right)^D, &r<L \\ M_{out}(r)&=\Phi_{out}r^3, &r>L. \end{align} Defining the function in this way, the effects of the transition between the internal fractal structure and the external EdS universe should be noticeable.
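To make this concrete, a minimal numerical sketch of the model's observables can be written down by integrating the geodesic equations (\ref{eq20-21}) with the flat EdS-like solution (\ref{eq27}) and a homogeneous bang time, so that Eq.~(\ref{eq28}) fixes the present time $t_0=2/(3H_0)$. We anticipate the normalization $\Phi_{out}=H_0^2/2$ introduced below (Eq.~\ref{eq34}); units take $G=c=1$ with $r$ in Mpc, and all parameter values are purely illustrative rather than best fits:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

D, Ltr, h = 3.2, 137.0, 0.73   # fractal dim., transition scale [Mpc]
H0 = h / 3000.0                # Hubble rate in Mpc^-1, as in Eq. (34)
Phi = 0.5 * H0**2              # Phi_out, Eq. (34)
t0 = 2.0 / (3.0 * H0)          # present time from H = 2/(3t), Eq. (28)

def M(r):
    # Continuous piecewise mass function, Eqs. (32)-(33).
    return np.where(r < Ltr, Phi * Ltr**3 * (r / Ltr)**D, Phi * r**3)

def dM(r):
    return np.where(r < Ltr, Phi * D * Ltr**(3 - D) * r**(D - 1),
                    3.0 * Phi * r**2)

def rhs(z, y):
    r, t = y
    R = (4.5 * M(r))**(1/3) * t**(2/3)   # EdS solution, Eq. (27)
    Rp = R * dM(r) / (3.0 * M(r))        # R' = R M'/(3M)
    Rpdot = 2.0 * Rp / (3.0 * t)
    # Flat (E = 0) radial null geodesics, Eqs. (20)-(21).
    return [1.0 / ((1 + z) * Rpdot), -Rp / ((1 + z) * Rpdot)]

zs = np.linspace(1e-3, 0.3, 100)
sol = solve_ivp(rhs, (0.0, 0.3), [1e-8, t0], t_eval=zs, rtol=1e-8)
r, t = sol.y
dL = (1 + zs)**2 * (4.5 * M(r))**(1/3) * t**(2/3)   # Eq. (19)
mu = 5.0 * np.log10(dL) + 25.0   # distance modulus, Eq. (44), d_L in Mpc
\end{verbatim}
The resulting \texttt{mu} can then be compared with the Pantheon distance moduli, as in the statistical analysis below.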
A similar approach was developed in \cite{Ruffini17}, but the difference is that they fix the transition scale at about $r=2300$ Mpc, whereas for us it is a free parameter to test. As a first approach, we assume $\Omega_{\Lambda}=0$ and the universe to be just EdS beyond the transition. We use low redshift type Ia supernova data to estimate the transition scale for fractality. We also use the full sample \footnote{We can understand this approach as a tool to estimate the fractal transition in low redshift data, but also as a possible explanation for dark energy with fractal structure using the full sample. However, this last approach is questionable, as there is no clear evidence for fractal structure at scales $r>100$ Mpc.}, in which case we have to add another parameter to be fixed. Following \cite{Alexander09} and considering an EdS universe for $r>L$, we choose $\Phi_{out}$ to be in agreement with the age of the universe in an EdS universe: \begin{eqnarray}\label{eq34} \Phi_{out}=\frac{4 \pi}{3} \rho_0 = \frac{H_0^2}{2}, \end{eqnarray} where $H_0 \sim h_{out}/3000$ Mpc$^{-1}$ is the Hubble parameter outside the fractal behaviour, in the FLRW limit. With this scale we have the radial coordinate in Mpc and just two free parameters: $D$ and $L$. The function $R(r,t)$ can then be written as: \begin{eqnarray}\label{eq35-36} R_{in}(r,t) &=& \left(\frac{9\Phi_{in}}{2}\right)^{\frac{1}{3}}L\left (\frac{r}{L}\right)^{D/3}t^{2/3},\>\>r<L \\ R_{out}(r,t) &=& \left(\frac{9\Phi_{out}}{2}\right)^{\frac{1}{3}} rt^{2/3},\>\>r>L. \end{eqnarray} It is important to note that in this model the density is surprisingly homogeneous. Using equation (\ref{eq5}) we obtain the same density profile as in an EdS universe: \begin{eqnarray}\label{eq37} \rho_0(r)=\frac{3\Phi_{out}}{4\pi}, \end{eqnarray} but the integrated density at $t_0$ is: \begin{eqnarray}\label{eq38} \rho_0^H(r) \propto r^{D-3}=r^{-\gamma},\>\>r<L, \end{eqnarray} which is constant for $r>L$. Using the continuity condition at $L$ we obtain: \begin{equation}\label{eq39-40} \rho_0^H(r)= \left\{ \begin{array}{lr} \frac{3\Phi_{out}}{4\pi}\left(\frac{r}{L}\right)^{-\gamma} &, r<L \\ \frac{3\Phi_{out}}{4\pi} &, r>L \end{array} \right. \end{equation} The behaviour of $M(r)$ and the densities are displayed in Fig.~\ref{fig:1} and Fig.~\ref{fig:2}. It is interesting to note that the fractal behaviour can also be defined as: \begin{eqnarray}\label{eq41-42} \rho_0^H(r)&=&\rho_0 \left(\frac{r}{L}\right)^{-\gamma} ,\>\>r<L \\ \rho_0^H(r)&=&\rho_0 ,\>\>r>L. \end{eqnarray} The purely fractal and the fractal-EdS universes are plotted in Fig.~\ref{fig:3}. \begin{figure}[ht] \centering \includegraphics[scale=0.35]{fractalMass.png} \caption{$M(r)$ profiles. The black line shows the fractal model with a transition to EdS; the red dotted line shows EdS and the yellow dotted line a purely fractal universe. We use $D=3.36$, $L=100$ Mpc and $h_{out} \sim 0.73$. The vertical gray line marks the transition length $L$. } \label{fig:1} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.35]{fractalprofile.png} \caption{Integrated density profiles for different values of $D$. We use $L=100$ Mpc and $h_{out} \sim 0.73$. The vertical gray line marks the transition length $L$. Note that profiles with $D>3$ are similar to an over-dense region and those with $D<3$ to an under-dense region. $D=3$ corresponds to the usual constant integrated density profile of an EdS universe.
} \label{fig:2} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.35]{LuminosityFractal.png} \caption{Luminosity distance for the different universes. Same color coding as in Fig.~\ref{fig:1}. Note that the purely fractal universe uses the scale $\Phi_{in}$ for illustration.} \label{fig:3} \end{figure} \section{Statistical analysis with SNIa data} To test our model, we use the Pantheon sample, comprising 1048 SNIa in the redshift range $z \in (0.01, 2.3)$. As we are interested in studying a possible transition between fractal behaviour and a FLRW universe, we consider two approaches. First, using only low redshift SNIa, with a cutoff $z<0.3$ corresponding approximately to $\sim 800$ Mpc. Second, using the full sample. We select $h_{out}\sim 0.73$ in the first case to match local Hubble parameter measurements \cite{Dainotti_2021}, and $h_{out}\sim 0.45$ in the second case, in order to converge to the low value of the Hubble parameter that fits the CMB with an EdS universe \cite{Alnes}. The full Tripp formula for the distance modulus is \begin{equation}\label{eq43} \mu_{obs}=m_b^*-M=m_b+\alpha x-\beta c+\Delta_M-M, \end{equation} where $m_b$ corresponds to the peak apparent magnitude in the B-band, $M$ is the absolute B-band magnitude of a fiducial SNIa, $x$ and $c$ are light curve shape and color parameters, $\alpha$ is a coefficient of the relation between luminosity and stretch, $\beta$ is a coefficient of the relation between luminosity and colour, and $\Delta_M$ is a correction based on the mass of the host galaxy. All of these parameters, usually called \textit{nuisance parameters}, are needed to standardize the SNIa data, leading to the corrected magnitude $m_b^*$. Those corrections are provided in the Pantheon catalogue, and the parameter $M$ can be marginalized over. This relation can be used to perform a statistical analysis with the theoretical distance modulus of any cosmological model, given by Eq.~(\ref{eq44}). \section{Results} We use the code \texttt{EMCEE} \cite{2013PASP..125..306F} to test the model against the SNIa data. This is a pure Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo proposed by Goodman and Weare \cite{2010CAMCS...5...65G}. From this analysis we obtain the best fit values for $D$ and $L$ in our model. We used the corrected magnitudes directly from the catalogue, but also considered another approach allowing the nuisance parameters $\alpha$ and $\beta$ to vary along with the cosmological parameters. In this last approach, we ignore the mass step correction, as it does not significantly impact the cosmological fit. The results and details are shown in Tables \ref{Table:Corrected} and \ref{Table:Nuisance}. We note that for the low redshift supernovae we can get a relatively good fit, but not for the full sample. Overall, also fitting the $\alpha$ and $\beta$ parameters improves the fit slightly in the case of the full sample. We also note large errors in the parameter estimation when we do not use tight priors. Best fits are plotted in Figures \ref{fig:4} and \ref{fig:5}. To look for insights into the value of the fractal dimension $D$, we also consider priors on the parameter $L$, which is our fractal transition scale. Instead of fixing it at a convenient value, as previous works have done, here we introduce Gaussian priors for $L$ using different values coming from large-scale-structure studies of the transition scale to homogeneity.
For example, in \cite{2010MNRAS.405.2009Y} the authors concluded that the scale of homogeneity should be less than $260h^{-1}$ Mpc. Also, in \cite{2005MNRAS.364..601Y} and \cite{2005ApJ...624...54H} the authors have suggested an even smaller scale of $L=60h^{-1}$ Mpc. Let us consider four values for $L$: $L=60h^{-1}$, $L=100h^{-1}$, $L=150h^{-1}$ and $L=200h^{-1}$. Considering Gaussian priors on these values with a 5\% error, we find the results shown in Table~\ref{Table:prior0.3} for low redshift supernovae at $z<0.3$. Again, the error on $D$ is very high. This means that an LTB fractal model with no cosmological constant does not fit the SNIa data well. \begin{table}[ht] \caption{\label{Table:Corrected} Best fit values for $D$ and $L$ using the corrected magnitudes.} \begin{ruledtabular} \begin{tabular}{ccccc} $h_{out}$ &$z_{cut}$ & $D$ & $L$ (Mpc) &$\chi^2_{red}$ \\ \hline $0.73$ & $z<0.3$ & $3.16^{+0.36}_{-0.63}$ & $696.83^{+113.01}_{-298.50}$ & $1.15$ \\ $0.45$ & Full Sample & $3.47^{+1.04}_{-1.38}$ & $1354.17^{+232.08}_{-313.73}$ & 1.43 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[ht] \caption{\label{Table:Nuisance} Best fit values for $D$ and $L$ when also fitting the nuisance parameters.} \begin{ruledtabular} \begin{tabular}{c c c c c} $h_{out}$ &$z_{cut}$ & $D$ & $L$ (Mpc) &$\mathcal{L}_{min}$\\ \hline $0.73$ & $z<0.3$ & $3.30^{+0.27}_{-1.77}$ & $844.55^{+95.34}_{-41.85}$ & $-2962.11$ \\ $0.45$ & Full Sample & $1.61^{+2.11}_{-0.17}$ & $999.75^{+55.16}_{-9.44}$& $-6580.89$ \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[ht] \caption{\label{Table:prior0.3} Best fit values for $D$ and $L$ using the Pantheon sample within the redshift range $z<0.3$. } \begin{ruledtabular} \begin{tabular}{cccc} $L$ Prior & $ D $ & $L$ (Mpc) & $\chi^2_{red}$ \\ \hline $60h^{-1}$ & $2.97^{+1.26}_{-1.07}$ & $85.02^{+0.53}_{-0.52}$ & $1.15$ \\ $100h^{-1}$ & $2.74^{+1.24}_{-1.09}$ & $137.02^{+0.54}_{-0.51}$ & $1.15$ \\ $150h^{-1}$ & $2.73^{+1.2}_{-1.1}$ & $205.02^{+0.54}_{-0.51}$ & $1.16$ \\ $200h^{-1}$ & $2.6^{+1.15}_{-1.13}$ & $273.00^{+0.51}_{-0.5}$ & $1.16$ \\ \end{tabular} \end{ruledtabular} \end{table} \section{Discussion and conclusions} We have demonstrated that a purely fractal model with $M(r)\sim r^D$ is no different from a usual EdS universe with a rescaled radial coordinate. As an alternative to the fractal model proposed in \cite{Cosmai19}, a theoretically consistent model was developed, allowing for a transition from a fractal regime with $M \sim r^D$ to a homogeneous universe with $M \sim r^3$. Two analyses were performed to test our model against the Pantheon data. We got a good fit for low redshift supernovae, but the model fails to describe the full behaviour of the Pantheon data sample. We conclude that fractal LTB models cannot explain the effects of dark energy. Overall, we looked for insights into the value of the fractal dimension by performing a separate analysis for low redshift supernovae using tight priors on the transition scale; the errors remain high at $z<0.3$. The physical validity of the model could be studied more deeply once we summarize its problems. First, we did not take into account a cosmological constant. This constant could be added to the model in order to fit the full SNIa data, looking for effects of the local fractal structure on the cosmological constant value.
Also, we assumed that the $M$ scale should converge to a fixed experimental value of $H_0$. Two possibilities were used: convergence to the CMB-compatible value $h \sim 0.45$ and to the SNIa estimate $h \sim 0.73$. Those values could be modified, allowing for other definitions like those presented in \cite{Camarena20}, \cite{Valkenburg13}. In future works, we propose to study how a fractal structure in LTB models can impact $\Lambda$CDM and alternative cosmologies. We stress the importance of taking into account the full information about our local large scale structure in the development of a better cosmological description, leading to more stringent constraints on the $\Lambda$CDM model. \\ \\ \\ \section*{ACKNOWLEDGMENTS} EP acknowledges support through the graduate scholarship ANID-Subdirecci\'on de Capital Humano/Doctorado Nacional/2021-21210824. \begin{figure}[ht] \centering \includegraphics[scale=0.33]{fractalpanthz0.3.png} \includegraphics[scale=0.33]{Fullpant.png} \caption{Best fit functions for $z<0.3$ and the full sample using corrected magnitudes from the Pantheon sample. We get a notably worse fit than previous studies of fractal LTB models. } \label{fig:4} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.33]{FRACTALPantheon0.3N.png} \includegraphics[scale=0.33]{FullpantALLN.png} \caption{Best fit functions for $z<0.3$ and the full sample allowing $\alpha$ and $\beta$ to be fitted. We note that the full sample fit improves slightly. } \label{fig:5} \end{figure}
{ "attr-fineweb-edu": 1.982422, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUau7xK2li-LM1PlJ7
\subsection{Constant or Slowly Growing $m$ Regime} We first collect some facts from the $\ell_2$ analysis of the spectral algorithm. \begin{theorem}\label{thm:stationary-error-bound} Suppose that $np^2 \geq \max\{C_2 \frac{12^2 e^{4\kappa}}{\gamma^2} \log m, C_1\log m\} $ for sufficiently large constants (e.g., $C_2 \geq 30$, $C_1 \geq 101$). Then $$ \lVert \pi - \pi^* \rVert_{2} \leq \frac{48/\sqrt 3 e^{3\kappa}}{\gamma}\cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{m\sqrt{np^2}}\, $$ and $$ \lvert P_{ij} - P^*_{ij}\rvert \leq \frac{\gamma np^2 }{e^{2\kappa} 8d} = \frac{\gamma}{12 m e^{2\kappa} } \quad\forall i\neq j$$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{theorem} We also have $$ \frac{1}{m} \leq \pi^*_{\max} = \lVert \pi^* \rVert_{\infty} \leq \frac{e^\kappa\sum_{i=1}^m \pi^*_i}{m} =\frac{e^{\kappa}}{m}\,. $$ ~\\ \textbf{Bounding $I_1$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound}. There exists a constant $\alpha_1$ such that $$ I_1 \leq \alpha_1 \cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{\sqrt{mnp^2}} \cdot \lVert \pi^*\rVert_{\infty} $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{lemma} \begin{proof} \begin{align*} I_1 &= \sum_{j\neq i} (\pi_j - \pi^*_j) P_{ji} \\ &\text{[Using Cauchy-Schwarz]}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt{\sum_{j\neq i} P_{ji}^2} = \lVert \pi - \pi^* \rVert_2 \cdot \sqrt{\sum_{j\neq i} (P_{ji}- P^*_{ij} + P_{ij}^*)^2}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt{\sum_{j\neq i} 2(P_{ji}- P^*_{ij})^2 + 2 (P_{ij}^*)^2}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt 2 \cdot \sqrt{\sum_{j\neq i} \bigg( \frac{\gamma}{12 m e^{2\kappa} }\bigg)^2 + \bigg(\frac{3}{2d} {np^2} \bigg)^2}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt 2 \cdot \sqrt m \cdot \sqrt{\bigg( \frac{\gamma}{12 m e^{2\kappa} }\bigg)^2 + \bigg(\frac{3}{2d} {np^2} \bigg)^2}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt 2 \cdot \sqrt m \cdot \frac{1}{m} \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}\\ &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt 2 \cdot \frac{1}{\sqrt m} \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}\\ &\leq \frac{48/\sqrt 3 e^{3\kappa}}{\gamma}\cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{m\sqrt{np^2}} \cdot \sqrt 2 \cdot \frac{1}{\sqrt m} \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}\\ &\leq \underbrace{\frac{48/\sqrt 3 e^{3\kappa}}{\gamma}\cdot \sqrt 2 \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}}_{\alpha_1} \cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{\sqrt{mnp^2}} \cdot \lVert \pi^*\rVert_{\infty}\\ &= \alpha_1 \cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{\sqrt{mnp^2}} \cdot \lVert \pi^*\rVert_{\infty}\,. \end{align*} \end{proof} ~\\ \textbf{Bounding $I_2$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound}. There exists a constant $\alpha_2 < 1$ such that $$ I_2 \leq \alpha_2 \cdot \lVert \pi - \pi^* \rVert_{\infty} $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{lemma} \begin{proof} It suffices to bound $P_{ii}$ away from 1.
\begin{align*} \lvert P_{ii} - P^*_{ii}\rvert &= \bigg\lvert\sum_{j\neq i} P^*_{ij} - P_{ij}\bigg\rvert \\ &\leq \sum_{j\neq i} \lvert P_{ij}^* - P_{ij}\rvert \\ &\leq \frac{\gamma}{12 e^{2\kappa}}\,. \end{align*} On the other hand, $P_{ii}^* = 1- \frac{1}{d} \cdot\sum_{j\neq i} P_{ij}^* \leq 1- \frac{3}{2d}\gamma mnp^2 = 1- \gamma$. We have $$ P_{ii} \leq 1- \gamma + \frac{\gamma}{12e^{2\kappa}} =: \alpha_2 < 1\,. $$ \end{proof} ~\\ \textbf{Bounding $I_3$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound}. There exists a constant $\alpha_3$ such that $$ I_3 \leq \alpha_3 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np^2} }{\sqrt{np^2}} $$ with probability at least $1- \min\{\exp{-12m}, \frac{3}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{lemma} \begin{proof} \begin{equation*} \begin{aligned} I_3 &= \sum_{j\neq i} \pi_j^* (P_{ji} - P_{ji}^*) = \frac{1}{d} \cdot \sum_{j\neq i}\pi_j^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})] \big] \,.\\ \end{aligned} \end{equation*} Informally speaking, for a fixed $l$, if we change $X_{li}$, the sum above changes by at most $\frac{1}{d} \cdot \sum_{j\neq i} \pi_j^* A_{li} A_{lj} $. For a fixed $l$ and $j\neq i$, if we change $X_{lj}$, the sum above changes by at most $\frac{1}{d} \cdot \pi_j^* A_{li}A_{lj} \leq \frac{1}{d}\cdot \lVert \pi^*\rVert_{\infty} A_{li}A_{lj}$. Note also that by Cauchy-Schwarz, \begin{equation*}\label{eqn:bounded-difference} \sum_{j\neq i} \pi_j^* A_{li} A_{lj} \leq \lVert \pi^*\rVert_{\infty} \cdot \sqrt{m-1} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}} \,. \end{equation*} Ignore the factor $d$ for now. Hoeffding's inequality then tells us that \begin{equation*} \begin{aligned} &\Pr\bigg(\sum_{j\neq i}\pi_j^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big] > t\,\lvert\, \A \bigg) \\ &\leq 2\exp{-\frac{2t^2}{ \lVert \pi^*\rVert^2_{\infty} \cdot (m-1) \cdot \sum_{j\neq i}\sum_{l=1}^n A_{li}A_{lj} + \sum_{j\neq i}\sum_{l=1}^n\lVert \pi^*\rVert^2_{\infty} A_{li}A_{lj} }}\\ &\leq 2\exp{-\frac{2t^2}{ \lVert \pi^*\rVert^2_{\infty} \cdot \big( m \sum_{j\neq i} \sum_{l=1}^n A_{li}A_{lj} \big)}}\\ &\leq 2\exp{-\frac{2t^2}{ m \lVert \pi^*\rVert^2_{\infty} \cdot 3/2mnp^2 }}\\ \end{aligned} \end{equation*} Set $t = \sqrt{12} \cdot \sqrt{\frac{3}{8}} \cdot m \sqrt{np^2} \cdot \sqrt{\log np^2} \cdot \lVert \pi^*\rVert_{\infty} $. We have $$ I_3 \leq \frac{1}{d} \cdot \sqrt{12} \cdot \sqrt{\frac{3}{8}} \cdot m \sqrt{np^2} \cdot \sqrt{\log np^2} \cdot \lVert \pi^*\rVert_{\infty} = \sqrt 2 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np^2} }{\sqrt{np^2}} $$ with probability at least $1 - \frac{2}{(np^2)^{12}}$. \end{proof} ~\\ \textbf{Bounding $I_4$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound}. There exists a constant $\alpha_4$ such that $$ I_4 \leq \alpha_4 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np^2} }{\sqrt{np^2}} $$ with probability at least $1- \min\{\exp{-12m}, \frac{3}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{lemma} \begin{proof} \begin{equation*} \begin{aligned} I_4 &= \sum_{j\neq i} \pi_i^* (P_{ij}^* - P_{ij} ) = \frac{1}{d} \cdot \sum_{j\neq i}\pi_i^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[\E[X_{li}(1-X_{lj})] - X_{li}(1-X_{lj}) \big] \,,\\ \end{aligned} \end{equation*} which has essentially the same form as $I_3$.
Following the same argument as in the proof of the bound on $I_3$, we can bound $I_4$ as $$ I_4 \leq \frac{1}{d} \cdot \sqrt{12} \cdot \sqrt{\frac{3}{8}} \cdot m \sqrt{np^2} \cdot \sqrt{\log np^2} \cdot \lVert \pi^*\rVert_{\infty} = \sqrt 2 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np^2} }{\sqrt{np^2}} $$ with probability at least $1 - \frac{2}{(np^2)^{12}}$. \end{proof} ~\\ \textbf{Putting the terms together}, we have $$ \lvert \pi_i - \pi_i^*\rvert \leq \alpha_2 \cdot \lVert \pi - \pi^*\rVert_{\infty} + (\alpha_1 + \alpha_3 + \alpha_4) \cdot \frac{\sqrt{ \log np^2 } }{\sqrt{np^2}} \cdot \lVert \pi^*\rVert_{\infty}\,. $$ Rearranging the terms and maximizing the left-hand side over $i$ gives $$ \frac{ \lVert \pi - \pi^*\rVert_{\infty} }{ \lVert \pi^*\rVert_{\infty} } \leq \frac{\alpha_1 + \alpha_3 + \alpha_4}{1-\alpha_2} \cdot \frac{\sqrt{ \log np^2 } }{\sqrt{np^2}} \,. $$ \newpage \subsection{Large $m$ Regime} Recall the following results from our $\ell_2$ analysis under the large $m$ regime. \begin{theorem}\label{thm:stationary-error-bound-growing-m} Suppose that $np^2 \geq \max\{C_2 \frac{12^2 e^{4\kappa}}{\gamma^2} \log m, C_1\log m\} $ and that $mp \geq C''\log m$ for sufficiently large constants $C_1, C_2, C''$ (e.g., $C_2 \geq 30$, $C_1, C'' \geq 101 $). Then the output of the spectral algorithm satisfies $$ \lVert \pi - \pi^* \rVert_2 \leq \frac{48/\sqrt{2} e^{3\kappa}}{\gamma} \cdot \frac{1}{\sqrt{mnp}} $$ with probability at least $1- \exp{-12m} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - n^{-9}$. \end{theorem} Note that the bound on $I_2$ does not change. We, however, obtain sharper bounds for $I_1, I_3, I_4$. ~\\ \textbf{Sharper Bound on $I_1$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound-growing-m}. There exists a constant $\alpha'_1$ such that $$ I_1 \leq \alpha'_1 \cdot \frac{1}{\sqrt{mnp}} \cdot \lVert \pi^*\rVert_{\infty} $$ with probability at least $1- \exp{-12m} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - n^{-9}$. \end{lemma} \begin{proof} Following the same decomposition as in the proof of the bound on $I_1$ under the constant (or small) $m$ regime, \begin{align*} I_1 &\leq \lVert \pi - \pi^* \rVert_2 \cdot \sqrt 2 \cdot \frac{1}{\sqrt m} \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}\\ &\leq \frac{48/\sqrt 2 e^{3\kappa}}{\gamma}\cdot \frac{1 }{m\sqrt{np}} \cdot \sqrt 2 \cdot \frac{1}{\sqrt m} \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1}\\ &\leq \underbrace{\frac{48/\sqrt 2 e^{3\kappa}}{\gamma}\cdot \sqrt 2 \cdot \sqrt{\frac{\gamma^2}{12^2 e^{4\kappa}} + 1} }_{\alpha'_1} \cdot \frac{1}{\sqrt{mnp}} \cdot \lVert \pi^*\rVert_{\infty}\\ &= \frac{\alpha'_1}{\sqrt{mnp}} \cdot \lVert \pi^*\rVert_{\infty}\,. \end{align*} \end{proof} ~\\ \textbf{Sharper Bound on $I_3$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound-growing-m}. There exists a constant $\alpha'_3$ such that $$ I_3 \leq \alpha'_3 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np} }{\sqrt{np}} $$ with probability at least $1- \exp{-12m} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - n^{-9}$.
\end{lemma} \begin{proof} \begin{equation*} \begin{aligned} I_3 &= \sum_{j\neq i} \pi_j^* (P_{ji} - P_{ji}^*) = \frac{1}{d} \cdot \sum_{j\neq i}\pi_j^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})] \big] \,.\\ \end{aligned} \end{equation*} Informally speaking, for a fixed $l$, if we change $X_{li}$, the sum above changes by at most $\frac{1}{d} \cdot \sum_{j\neq i} \pi_j^* A_{li} A_{lj} $. For a fixed $l$ and $j\neq i$, if we change $X_{lj}$, the sum above changes by at most $\frac{1}{d} \cdot \pi_j^* A_{li}A_{lj} \leq \frac{1}{d}\cdot \lVert \pi^*\rVert_{\infty} A_{li}A_{lj}$. Note also that by Cauchy-Schwarz and under the growing $m$ regime, \begin{equation*} \begin{aligned} &\sum_{j\neq i} \pi_j^* A_{li} A_{lj} \leq \lVert \pi^*\rVert_{\infty} \cdot \sum_{j\neq i} A_{li}A_{lj} \\ &\leq \lVert \pi^*\rVert_{\infty} \cdot \sqrt{\sum_{j\neq i} A_{lj} } \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj} }\\ &\leq \lVert \pi^*\rVert_{\infty} \cdot \sqrt{3/2} \cdot \sqrt{mp} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj} } \,.\\ \end{aligned} \end{equation*} We have \begin{equation*} \begin{aligned} &\Pr\bigg(\sum_{j\neq i}\pi_j^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big] > t\,\lvert\, \A^+ \bigg) \\ &\leq 2\exp{-\frac{2t^2}{ m \lVert \pi^*\rVert^2_{\infty} \cdot 9/4\cdot mp \cdot np^2 }}\\ \end{aligned} \end{equation*} Set $t = \sqrt{12} \cdot \sqrt{\frac{9}{8}} \cdot m \sqrt{np} \cdot \sqrt{\log np^2} \cdot \lVert \pi^*\rVert_{\infty} $. We have $$ I_3 \leq \frac{1}{d} \cdot \sqrt{12} \cdot \sqrt{\frac{9}{8}} \cdot m \sqrt p \cdot \sqrt{np^2} \cdot \sqrt{\log np^2} \cdot \lVert \pi^*\rVert_{\infty} = \sqrt{27/2} \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np} }{\sqrt{np}} $$ with probability at least $1 - \frac{2}{(np)^{12}}$. \end{proof} ~\\ \textbf{Sharper Bound on $I_4$} \begin{lemma} Consider the setting of Theorem \ref{thm:stationary-error-bound-growing-m}. There exists a constant $\alpha'_4$ such that $$ I_4 \leq \alpha'_4 \cdot \frac{\lVert \pi^*\rVert_{\infty} \sqrt{\log np} }{\sqrt{np}} $$ with probability at least $1- \exp{-12m} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - n^{-9}$. \end{lemma} \begin{proof} The proof is omitted. \end{proof} ~\\ \textbf{Putting the terms together}, we have under the growing $m$ regime $$ \frac{ \lVert \pi - \pi^*\rVert_{\infty} }{ \lVert \pi^*\rVert_{\infty} } \leq \frac{\alpha'_1 + \alpha'_3 + \alpha'_4}{1-\alpha_2} \cdot \frac{\sqrt{ \log np } }{\sqrt{np}} \,. $$ \section{Preliminary analysis} $d_i$ is a normalization factor that ensures we have a valid probability transition matrix; that is, $d_i$ must be sufficiently large such that $$\sum_{j\neq i} \frac{1}{d_i} B_{ij} = \frac{1}{d_i} \sum_{j=1, j\neq i}^m \sum_{l=1}^n A_{li}A_{lj} X_{li}(1-X_{lj}) \leq 1\,.$$ Suppose that each student is given problem $i$ with probability $p$, independently of other students and problems. \begin{lemma} For any $i \neq j \in [m]$, $$ \Pr( \lvert P_{ij} - P^*_{ij}\rvert > t) \leq 2\exp{-\frac{t^2d^2}{ 2\bigg(\sum_{l=1}^n \big[\frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)}\big] + \frac{td}{3} \bigg) }}\,. $$ \end{lemma} \begin{proof} By definition we have: \begin{equation*} \begin{aligned} P_{ij} - P^*_{ij} &= \frac{1}{d} \sum_{l=1}^n \underbrace{A_{li}A_{lj} X_{li}(1-X_{lj})}_{Y_{ij}^{l}\leq 1 } - \E[A_{li}A_{lj} X_{li}(1-X_{lj}) ]\\ \end{aligned} \end{equation*} Note that the above sum is over $n$ independent centered random variables.
Thus, we can use Bernstein's inequality (cf. Theorem 2.10 \cite{boucheron2013concentration}) to obtain a concentration bound. To this end, we first see that $$ Y_{ij}^l = \begin{cases}1 &\text{ with probability }p^2\frac{w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} \\ 0 &\text{ with probability } 1-p^2\frac{w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} \end{cases}\,, $$ $$ \E[{Y_{ij}^l}^2] = \frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)}\,. $$ We can now apply Bernstein's inequality to obtain \begin{equation*} \begin{aligned} \Pr( \lvert P_{ij} - P^*_{ij}\rvert > t) &= \Pr\bigg( \big\lvert \sum_{l=1}^n Y^{l}_{ij} - \E[Y_{ij}^{l}] \big\rvert > td\bigg)\\ &\leq 2\exp{-\frac{t^2d^2}{ 2\bigg(\sum_{l=1}^n \frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} + \frac{td}{3} \bigg) }}\,. \end{aligned} \end{equation*} This completes the proof. \end{proof} We have the following useful results which provide bounds on the total variation distance as well as the $\ell_2$ error of the stationary distribution of a Markov chain from a \emph{reference reversible Markov chain}: ~\\ \textbf{a. Total variation perturbation bound for reversible Markov Chains} \begin{lemma}\label{lem:total-variation-bound-mc} (Theorem 2 of \cite{agarwal2018accelerated}, originally due to \cite{mitrophanov2005sensitivity}) Consider two discrete-time Markov chains $P$ and $P^*$ with finite state space and stationary distributions $\pi$ and $\pi^*$, respectively. If there exist constants $R > 1$ and $0 < \rho < 1$ such that: $$ \lVert P^t(x, \cdot) - \pi^* \rVert_{TV} \leq R\rho^t, t = 1, 2,\ldots $$ then we have: $$ \lVert \pi - \pi^* \rVert_{TV} \lesssim \lVert P - P^* \rVert_{\infty} $$ \end{lemma} ~\\ \textbf{b. $\ell_2$ perturbation bound for reversible Markov Chains} \begin{lemma} (Lemma 2 of \cite{negahban2017rank}) Consider the setting of Lemma \ref{lem:total-variation-bound-mc}. Then: $$ \frac{\lVert \pi - \pi^* \rVert}{\lVert \pi^*\rVert} \lesssim \lVert P - P^* \rVert_{2} $$ \end{lemma} \begin{lemma} (Lemma 8 of \cite{chen2019spectral}) Consider the setting of Lemma \ref{lem:total-variation-bound-mc}. Then: $$ \lVert \pi - \pi^* \rVert_{\pi^*} \leq \frac{\lVert {\pi^*}^\top(P^* - P)\rVert_{\pi^*} }{\mu(P^*) - \lVert P - P^* \rVert_{\pi^*} } $$ where $\mu(P^*)$ is the spectral gap of $P^*$. \end{lemma} ~\\ \subsection{A first analysis attempt} One difficulty of the analysis of the Rasch model compared to the analysis of the BTL model is the dependence between the pairwise estimates $P_{ij}$ and $P_{kj}$. As a first analysis attempt, we will provide a concentration bound on the individual entry error $\lvert P_{ij} - P^*_{ij}\rvert$ and rely on a union bound to provide a bound on $\lVert P - P^* \rVert_{\infty}$. We have the following result which bounds the entrywise error $\lvert P_{ij} - P^*_{ij}\rvert$. \begin{lemma}\label{lem:error-concentration-bound-large-d} (Large self-loop) If we set $d = mnp$, then $$ \Pr(\lVert P - P^* \rVert_{\infty} > t ) \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-\frac{t^2np }{ p\kappa + t }} $$ for some constant $\kappa$ to be determined.
\end{lemma} \begin{proof} \begin{equation*} \begin{aligned} \Pr(\lvert P_{i\cdot} - P_{i\cdot}^*\rvert > t) &= \Pr(\lvert \sum_{j=1}^m P_{ij} - P^*_{ij} \rvert > t )\\ &\leq \Pr(\lvert \sum_{j\neq i} P_{ij} - P^*_{ij} \rvert > t/2 )\\ &\leq \sum_{j\neq i} \Pr(\lvert P_{ij} - P^*_{ij}\rvert > \frac{t}{2(m-1)})\\ &\lesssim \sum_{j\neq i} 2\exp{-\frac{t^2d^2}{ 8m^2\bigg(\sum_{l=1}^n \frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} + \frac{td}{6m} \bigg) }} \end{aligned} \end{equation*} Suppose that for all $l\in [n]$ and $i, j\in [m]$, $\frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} \leq \kappa$ for some constant $\kappa$ to be determined. We have: \begin{equation*} \begin{aligned} \Pr(\lvert P_{i\cdot} - P_{i\cdot}^*\rvert > t) &\lesssim \sum_{j\neq i} \exp{-\frac{t^2d^2}{ 8m^2\bigg(\sum_{l=1}^n \frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} + \frac{td}{6m} \bigg) }}\\ &\lesssim \sum_{j\neq i} \exp{-\frac{t^2d^2}{ m^2\bigg(np^2\kappa + \frac{td}{m} \bigg) }}\\ &\asymp \sum_{j\neq i} \exp{-\frac{t^2d^2}{ m^2np^2\kappa + mtd }} \\ &\text{Substituting $d\asymp mnp$ gives}\\ &\asymp \sum_{j\neq i} \exp{-\frac{t^2m^2n^2p^2 }{ m^2np^2\kappa + tm^2np }} \\ &\lesssim \sum_{j\neq i} \exp{-\frac{t^2np }{ p\kappa + t }} \end{aligned} \end{equation*} \end{proof} \begin{lemma}\label{lem:error-concentration-bound-small-d} (Small self-loop) If we set $d \asymp mnp^2$, then $$ \Pr(\lVert P - P^* \rVert_{\infty} > t ) \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-\frac{t^2np^2 }{ \kappa + t }} $$ for some constant $\kappa$ to be determined. \end{lemma} \begin{proof} The proof is almost identical to that of Lemma \ref{lem:error-concentration-bound-large-d}. \begin{equation*} \begin{aligned} \Pr(\lvert P_{i\cdot} - P_{i\cdot}^*\rvert > t) &\lesssim \sum_{j\neq i} \exp{-\frac{t^2d^2}{ 8m^2\bigg(\sum_{l=1}^n \frac{p^2w^*_l}{(w^*_l+z^*_i)(w^*_l+z^*_j)} + \frac{td}{6m} \bigg) }}\\ &\lesssim \sum_{j\neq i} \exp{-\frac{t^2d^2}{ m^2\bigg(np^2\kappa + \frac{td}{m} \bigg) }}\\ &\asymp \sum_{j\neq i} \exp{-\frac{t^2d^2}{ m^2np^2\kappa + mtd }} \\ &\text{Substituting $d\asymp mnp^2$ gives}\\ &\asymp \sum_{j\neq i} \exp{-\frac{t^2m^2n^2p^4 }{ m^2np^2\kappa + tm^2np^2 }} \\ &\lesssim \sum_{j\neq i} \exp{-\frac{t^2np^2 }{ \kappa + t }} \end{aligned} \end{equation*} \end{proof} ~\\ \textbf{Large self-loop construction with $d=mnp$} ~\\ An immediate consequence of Lemma \ref{lem:error-concentration-bound-large-d} is that by setting $t = \frac{1}{\sqrt{\log m}}$, and so long as we have a sufficiently large number of students (compared to the number of tests) such that $$ p \kappa = \frac{\log m}{n} \lesssim \frac{1}{\sqrt{\log m}} \Leftrightarrow n \gtrsim \poly\log m\,, $$ we have $p\kappa \lesssim t$. The exponential concentration bound simplifies to $$ \Pr(\lVert P - P^* \rVert_{\infty} > t ) \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-\frac{t^2np }{ t }} \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-tnp } \,.$$ $$ \Pr(\lVert P - P^* \rVert_{\infty} > \frac{1}{\sqrt{\log m}} ) \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-\frac{np}{\sqrt{\log m}} } \,.$$ ~\\ So long as $np \gtrsim \poly\log m$, we have $\lVert P - P^* \rVert_{\infty} \lesssim \frac{1}{\sqrt{\log m}}$ with probability $1 - 1/\poly(m)$.
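~\\ As a quick numerical sanity check of the entrywise bounds above (this is only an illustration and is not used anywhere in the analysis), the following Python sketch simulates the sampling scheme and measures the off-diagonal row-wise error of $P$ against $P^*$ under the large self-loop normalization $d = mnp$. The choices $w^*_l = 1$, the uniform grid for $z^*$, and the helper name \verb|max_row_error| are illustrative assumptions, mirroring the synthetic setup used in the experiments below. \begin{lstlisting}[language=Python,basicstyle=\tiny]
import numpy as np

rng = np.random.default_rng(0)

def max_row_error(n, m, p, d):
    """One Monte Carlo draw of the off-diagonal part of ||P - P*||_inf.
    Assumes w*_l = 1 for all students and z* on a uniform grid."""
    w = np.ones(n)                               # student abilities (assumption)
    z = np.linspace(0.05, 2.0, m)                # problem difficulties (assumption)
    S = w[:, None] / (w[:, None] + z[None, :])   # S_li = Pr(X_li = 1)
    A = (rng.random((n, m)) < p).astype(float)   # random assignments, each w.p. p
    X = (rng.random((n, m)) < S) * A             # observed responses (0 if unassigned)
    B = X.T @ (A - X)                 # B_ij = sum_l A_li A_lj X_li (1 - X_lj)
    Bstar = (p ** 2) * (S.T @ (1.0 - S))         # E[B_ij] for i != j
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(Bstar, 0.0)
    return np.abs(B - Bstar).sum(axis=1).max() / d

for n in [1000, 4000, 16000]:
    m = 50
    p = np.log(m) ** 2 / n                       # sparse regime, np = polylog(m)
    print(n, max_row_error(n, m, p, d=m * n * p))
\end{lstlisting}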
\emph{If the spectral gap} of the Markov Chain $P^*$ can be bounded below by a constant, then: $$ \lVert z - z^* \rVert_1 \lesssim \lVert P - P^* \rVert_{\infty} \lesssim \frac{1}{\sqrt{\log m}} $$ ~\\ Additionally, using the fact that $\lVert P - P^* \rVert_{op} \leq \sqrt{\lVert P - P^*\rVert_1 \cdot \lVert P - P^* \rVert_{\infty} }$ and the fact that both $\lVert P - P^*\rVert_1$ and $\lVert P - P^* \rVert_{\infty}$ can be bounded in the same way, we also have $$ \frac{\lVert \pi - \pi^* \rVert_2}{\lVert \pi^* \rVert_2} \lesssim \frac{1}{\sqrt{\log m}} $$ with probability $1 - 1/\poly(m)$ so long as $ np \gtrsim \poly\log m $ and the spectral gap of the Markov Chain $P^*$ can be bounded below by a constant. ~\\ \duc{If the spectral gap scales as $\frac{mnp^2}{d} = p$ then we will be in trouble. Experimentally, when $d=mnp$, the constructed Markov Chain seems to have terrible mixing time (slow convergence). } ~\\ \duc{If $m=n$ and $p \asymp \frac{\log n}{n}$, the probability that a problem is selected by at least one student is $1-(1-\frac{\log n}{n})^n \geq 1- \exp{-\log n} = 1- \frac{1}{n}$} ~\\ \textbf{Small self-loop construction with $d=mnp^2$} An immediate consequence of Lemma \ref{lem:error-concentration-bound-small-d} is that by setting $t = \frac{1}{\sqrt{\log m}}$, the exponential concentration bound simplifies to $$ \Pr(\lVert P - P^* \rVert_{\infty} > \frac{1}{\sqrt{\log m}} ) \lesssim \sum_{i=1}^m \sum_{j\neq i }\exp{-\frac{t^2np^2}{\kappa} } \,.$$ ~\\ So long as $np^2 \gtrsim \poly\log m$, and \emph{if the spectral gap} of the Markov Chain $P^*$ can be bounded below by a constant, then: $$ \lVert z - z^* \rVert_1 \lesssim \lVert P - P^* \rVert_{\infty} \lesssim \frac{1}{\sqrt{\log m}} $$ ~\\ Similarly, we have: $$ \frac{\lVert \pi - \pi^* \rVert_2}{\lVert \pi^* \rVert_2} \lesssim \frac{1}{\sqrt{\log m}} $$ with probability $1 - 1/\poly(m)$ so long as $ np^2 \gtrsim \poly\log m $ and the spectral gap of the Markov Chain $P^*$ can be bounded below by a constant. ~\\ \textbf{Some preliminary experiment results.} Consider the following experiment: Let there be $n$ students and $m$ tests. We set $w^*_l = 1$ for all $l \in [n]$ and generate $z^*$ as a uniform grid from $0.05$ to $2$ (i.e., \verb|np.linspace|). We set $m = n$, vary both quantities, and measure how $\frac{\lVert \pi - \pi^* \rVert_2}{\lVert \pi^* \rVert_2}$ depends on the problem parameters and the sampling regime. \begin{figure}[H] \begin{centering} \begin{subfigure}{1.\textwidth} \centering \includegraphics[scale=0.45]{pics/dense.png} \caption{Performance of the spectral estimator (averaged over 50 trials), sparse sampling regime $p=4\log m/n$. Intuitively, the spectral algorithm should be able to do well in the sparsest sampling regime. However, these experiment results say otherwise.} \end{subfigure}\\ \begin{subfigure}{1.\textwidth} \centering \includegraphics[scale=0.45]{pics/sparse.png} \caption{Performance of the spectral estimator (averaged over 50 trials), dense sampling regime $p=\sqrt{\log m/n}$. We already have the theory to back up this dense sampling regime.} \end{subfigure} \end{centering} \label{fig:performance-sparse-dense} \end{figure} \subsection{Bounding the spectral gap $\mu(P^*)$} \begin{theorem} Suppose that $np^2\gtrsim \log m$. Then $$\mu(P^*) \gtrsim \frac{nmp^2}{d}\,. $$ \end{theorem} \begin{proof} Here we follow similar steps to the proof of Lemma 4 in \citet{negahban2017rank}.
The goal is to construct a reversible Markov chain whose spectral gap is easier to analyze and use a comparison lemma (Lemma 6 of \citet{negahban2017rank}, originally due to \citet{diaconis1993comparison}) to obtain a lower bound on the spectral gap of $P^*$. To this end, consider the following Markov chain: $$ \tilde P_{ij} = \frac{B_{ij}}{d}\quad \forall i\neq j $$ and $$\tilde P_{ii} = 1 - \sum_{j\neq i} \tilde P_{ij}\,. $$ Define the Laplacian $L = D- B$ where $D$ is a diagonal matrix with $D_{ii} = \sum_{j\neq i} B_{ij}$. Then one can verify that: $$ \tilde P = I - \frac{L}{d} $$ As such: $$ \mu(\tilde P) = \mu(I - \frac{L}{d}) = \frac{1}{d}\lambda_{\min, \indep}(L) $$ Following a similar argument as in the proof of Lemma A.3 in \citet{gao2021uncertainty}, we have: $$ \lambda_{\min,\indep}(L) \gtrsim mnp^2 $$ This completes the proof. \end{proof} \subsection{Bounding the error term $\lVert P - P^* \rVert_2$} \begin{theorem} Suppose that $np^2\gtrsim \log m$. Then $$\lVert P - P^* \rVert_2 \lesssim \frac{\epsilon nmp^2}{d} $$ for some small constant $\epsilon$. \end{theorem} \begin{proof} As shown previously, so long as $np^2 \gtrsim \log m$, we have $\lvert P_{ij} - P^*_{ij}\rvert \lesssim \frac{np^2\epsilon}{d}$ with high probability. We have: \begin{equation*} \begin{aligned} \lVert P - P^* \rVert_2 &\leq \lVert \diag(P) - \diag(P^*)\rVert_{op} + \lVert [P - P^*]_{i\neq j}\rVert_{op} \\ &\leq \max_{i} \lvert P_{ii} - P^*_{ii} \rvert + \max_{u, v} \sum_{i\neq j} u_i (P_{ij} - P_{ij}^*)v_j\\ &\leq \max_{i} \lvert \sum_{j\neq i} P_{ij}-P_{ij}^* \rvert + \max_{i\neq j} \lvert P_{ij}-P^*_{ij}\rvert \cdot \sum_{i\neq j} \lvert u_i\rvert \lvert v_j\rvert\\ &\leq 2 m \cdot \max_{i\neq j} \lvert P_{ij} - P_{ij}^* \rvert \\ &\lesssim \frac{\epsilon mnp^2}{d} \end{aligned} \end{equation*} This completes the proof. \end{proof} \subsection{Bounding the projected error term $\lVert {\pi^*}^\top (P - P^*)\rVert_2 $} To make progress we follow a similar argument as in the proof of Lemma 8.4 in \citet{gao2021uncertainty} in turning the norm term into a linear term. Specifically, we have: \begin{equation*} \begin{aligned} \lVert {\pi^*}^\top (P - P^*)\rVert_2 &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j=1}^m \pi_j^*(P_{ji}-P_{ji}^*) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*(P_{ii}-P_{ii}^*) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*\sum_{j\neq i} (P^*_{ij}-P_{ij}) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)^2}\\ &\leq 2 \max_{v\in \mathcal{V}} \sum_{i=1}^m v_i \bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij}) \bigg) \end{aligned} \end{equation*} where $\mathcal{V}$ is a $\frac{1}{2}$-covering of the unit norm ball. Note that $\pi_i^* = \bar z^*_i$, where $\bar z_i^* = \frac{z_i^*}{\sum_{j} z_j^*}$ is the normalized version of the test parameters. \subsubsection{Bounded difference method} We first show that the linear term can be rewritten as follows: \begin{equation*} \begin{aligned} &d\sum_{i=1}^m v_i \bigg( \sum_{j\neq i} \pi_j^* (P_{ji} - P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij}) \bigg)\\ &= \sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} \bigg( (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg) \end{aligned} \end{equation*} Note that (ignoring the expectation terms) this sum is a function $f$ of $m\times n$ independent random variables $\{X_{li}\}$. Let $X$ and $X'$ be identical copies except for $X_{li}\neq X'_{li}$.
\begin{equation*} \begin{aligned} f(X) - f(X') &= \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) [X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*] \\ & \underset{\text{Cauchy-Schwarz}}{\leq} \frac{\kappa}{\sqrt m} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \end{aligned} \end{equation*} Applying Hoeffding's inequality now gives us: \begin{equation*} \begin{aligned} &\sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} \bigg( (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg)\\ &\lesssim \frac{\kappa}{\sqrt m} \cdot \underbrace{\sqrt{\sum_{i\neq j}(v_i - v_j)^2 B_{ij}}}_{\lesssim \sqrt{mnp^2}}\\ &\lesssim \kappa \sqrt{np^2} \end{aligned} \end{equation*} Applying a union bound over all $v\in \mathcal{V}$ gives us: $$ \lVert {\pi^*}^\top (P -P^*)\rVert_2 \lesssim \frac{\kappa\sqrt{mnp^2}}{d} $$ \subsubsection{Slightly tighter bound when $m$ is allowed to grow} Note that one of the key steps in the bounded difference analysis above is the Cauchy-Schwarz inequality. Conditioned on event $\A^+$ (same assumptions as $\A$ and $mp \gtrsim \log n$), we have a slightly better bound. \begin{equation*} \begin{aligned} f(X) - f(X') &= \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) [X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*] \\ &= \sum_{j\neq i} A_{lj}\cdot A_{li}A_{lj}(v_i - v_j) [\underbrace{X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*}_{\lesssim \frac{\kappa}{m}}] \\ & \underset{\text{Cauchy-Schwarz}}{\lesssim} \sqrt{\underbrace{\sum_{j\neq i} A_{lj}}_{\lesssim mp} } \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \cdot \frac{\kappa}{m}\\ &\lesssim \sqrt{mp} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \cdot \frac{\kappa}{m}\\ &\asymp \frac{\kappa\sqrt p}{\sqrt m} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \end{aligned} \end{equation*} The rest of the analysis proceeds in the same way, giving us: $$ \lVert {\pi^*}^\top (P- P^*)\rVert_2 \lesssim \frac{\kappa \sqrt{mnp^3}}{d} $$ \subsection{Summary} ~\\ \textbf{Error bound under small $m$ regime:} Using the bound from the bounded difference method with the previously shown bounds on the spectral gap and the transition matrix error term gives: $$\lVert \pi^* - \pi\rVert_2 \lesssim \frac{\sqrt{mnp^2}}{mnp^2} \asymp \frac{1}{\sqrt{{mnp^2}}} $$ The error on the estimation of the difficulty parameters thus scales as: $$ \lVert z - z^* \rVert_2 \lesssim \frac{\sqrt m}{\sqrt{np^2}} $$ ~\\ \textbf{Error bound when $m$ is allowed to grow:} Conditioned on event $\A^+$ we obtain: $$ \lVert {\pi^*}^\top (P - P^*)\rVert_2 \lesssim \frac{\sqrt {mnp^3}}{d} $$ The error on the estimation is: $$ \lVert z - z^* \rVert_2 \lesssim \frac{\sqrt m}{\sqrt{np}} $$ ~\\ The following table summarizes the results obtained thus far under the two different regimes of $m$. \begin{centering} \begin{table}[H] \begin{tabular}{|l|l|l|} \hline & Constant $m$ regime & $m$ is allowed to grow \\ \hline Assumptions & $np^2 \gtrsim \log m$ & $np^2 \gtrsim \log m, mp \gtrsim \log n$ \\ \hline Error bound on $\lVert z - z^* \rVert_2$ & $\sqrt{\frac{m}{np^2}}$ & $\sqrt{\frac{m}{np}}$ \\ \hline Sufficient conditions for consistent estimation & $np^2 \gtrsim m\log m$ & $np \gtrsim \max\{m\log m, \sqrt n\sqrt{\log m}\}$, $mp \gtrsim \log n$ \\ \hline \end{tabular} \end{table} \end{centering} \section{Proofs for the decomposition approach} Fixing an assignment $A$, we have: \begin{equation*} \begin{aligned} &d\cdot\sum_{i=1}^m v_i \bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij}) \bigg)\\ &= \sum_{i=1}^m v_i \bigg(\sum_{j\neq i} \bar z^*_j \big(\sum_{l=1}^n A_{li}A_{lj}X_{lj}(1-X_{li}) -
\E[ A_{li}A_{lj}X_{lj}(1-X_{li})]\big) + \\ & \quad\quad\quad\sum_{j\neq i} \bar z^*_i \big(\sum_{l=1}^n \E[ A_{li}A_{lj}X_{li}(1-X_{lj})] - A_{li}A_{lj}X_{li}(1-X_{lj})\big) \bigg)\\ &= \sum_{i=1}^m v_i \bigg(\sum_{l=1}^n\sum_{j\neq i} \big( \bar z^*_jA_{li}A_{lj}X_{lj}(1-X_{li}) - \E[ \bar z^*_j A_{li}A_{lj}X_{lj}(1-X_{li})]\big) + \\ & \quad\quad\quad\sum_{l=1}^n\sum_{j\neq i} \big( \E[ \bar z^*_i A_{li}A_{lj}X_{li}(1-X_{lj})] - \bar z^*_iA_{li}A_{lj}X_{li}(1-X_{lj})\big) \bigg)\\ &= \sum_{i=1}^m v_i \bigg(\sum_{l=1}^n\sum_{j\neq i} \big( \bar z^*_jA_{li}A_{lj}X_{lj} - \bar z^*_jA_{li}A_{lj}X_{lj}X_{li} - \bar z^*_j A_{li}A_{lj}\frac{w^*_l z^*_i}{(w_l^*+ z^*_i)(w_l^*+ z^*_j)} \big) + \\ & \quad\quad\quad\sum_{l=1}^n\sum_{j\neq i} \big( \bar z^*_i A_{li}A_{lj}\frac{w^*_l z^*_j}{(w_l^*+ z^*_i)(w_l^*+ z^*_j)} - \bar z^*_iA_{li}A_{lj}X_{li} + \bar z^*_iA_{li}A_{lj}X_{lj}X_{li}\big) \bigg)\\ &= \sum_{i=1}^m v_i \bigg(\sum_{l=1}^n\sum_{j\neq i} \big( \bar z^*_jA_{li}A_{lj}X_{lj} - \bar z^*_jA_{li}A_{lj}X_{lj}X_{li} - A_{li}A_{lj}\frac{w^*_l \bar z^*_j( z^*_i+ w_l^* - w_l^*)}{(w_l^*+ z^*_i)(w_l^*+ z^*_j)} \big) + \\ & \quad\quad\quad\sum_{l=1}^n\sum_{j\neq i} \big( A_{li}A_{lj}\frac{w^*_l\bar z^*_i ( z^*_j+w_l^*-w_l^*) }{(w_l^*+ z^*_i)(w_l^*+ z^*_j)} - \bar z^*_iA_{li}A_{lj}X_{li} + \bar z^*_iA_{li}A_{lj}X_{lj}X_{li}\big) \bigg)\\ &\text{After some rearrangement}\\ &= \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} \bar z^*_j X_{lj} - A_{li}A_{lj} \bar z^*_j \frac{w^*_l}{w_l^* + z^*_j} \bigg) \\ &+ \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} \bar z^*_i \frac{w_l^*}{w_l^* + z^*_i} - A_{li}A_{lj} \bar z^*_iX_{li} \bigg) \\ &+ \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} (\bar z^*_i-\bar z^*_j) X_{li}X_{lj} - A_{li}A_{lj} (\bar z^*_i-\bar z^*_j) \frac{{w_l^*}^2}{(w_l^*+ z^*_i)(w_l^*+ z^*_j)} \bigg) \\ &= \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} \bar z^*_j (X_{lj} - \E[X_{lj}]) \bigg) + \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} \bar z^*_i (\E[X_{li}] - X_{li}) \bigg)\\ &+ \sum_{i=1}^m v_i\bigg( \sum_{l=1}^n\sum_{j\neq i} A_{li}A_{lj} (\bar z^*_i-\bar z^*_j) (X_{li}X_{lj} - \E[X_{li}X_{lj}]) \bigg) \\ &=\sum_{l=1}^n \bigg( \sum_{i=1}^m v_i \sum_{j\neq i} A_{li}A_{lj} \bar z^*_j (X_{lj} - \E[X_{lj}]) \bigg) + \sum_{l=1}^n\bigg( \sum_{i=1}^m v_i\sum_{j\neq i} A_{li}A_{lj} \bar z^*_i (\E[X_{li}] - X_{li}) \bigg) \\ &+ \sum_{l=1}^n\bigg(\sum_{i=1}^m v_i\sum_{j\neq i} A_{li}A_{lj} (\bar z^*_i-\bar z^*_j) (X_{li}X_{lj} - \E[X_{li}X_{lj}]) \bigg) \\ \end{aligned} \end{equation*} ~\\ \textbf{Bounding the first term:} Let us bound the first term \begin{equation*} \begin{aligned} &\sum_{l=1}^n \bigg( \sum_{i=1}^m v_i \sum_{j\neq i} A_{li}A_{lj} \bar z^*_j (X_{lj} - \E[X_{lj}]) \bigg) \\ &=\sum_{l=1}^n \sum_{j=1}^m (X_{lj} - \E[X_{lj}]) \cdot \bigg(\sum_{i\neq j} v_i A_{li}A_{lj} \bar z^*_j \bigg)\\ &= \sum_{l=1}^n \sum_{j=1}^m (X_{lj} - \E[X_{lj}]) \cdot \underbrace{\bigg(A_{lj}\bar z^*_j v^\top A_l \bigg)}_{d_{lj}}\\ \end{aligned} \end{equation*} One can see that conditioned on event $\A$ and for a fixed $v$ and $A$, this is a sum of $mn$ independent random variables.
Thus, we can use Hoeffding to show that (with high probability) \begin{equation*} \begin{aligned} &\sum_{l=1}^n \sum_{j=1}^m (X_{lj} - \E[X_{lj}]) \cdot {\bigg(\sum_{i\neq j} v_i A_{li}A_{lj} \bar z^*_j \bigg)}\\ &\lesssim \sqrt{\sum_{l,j} A_{lj} (\bar z^*_j)^2 \cdot (v^\top A_l)^2 }\\ &\lesssim \bar z_{\max}^* \sqrt{\sum_{l} (v^\top A_l)^2 \cdot \sum_j A_{lj} }\\ &\lesssim \bar z_{\max}^* \sqrt{mp} \sqrt{\sum_{l} v^\top A_lA_l^\top v}\\ &\asymp \bar z_{\max}^* \sqrt{mp} \sqrt{\lambda_{\max}(A^\top A)} \lesssim \frac{\kappa}{m} \sqrt{mp} \sqrt{nmp^2} \end{aligned} \end{equation*} The last inequality comes from $ \lambda_{\max}(A^\top A) = \max_{v} \sum_{ij} v_i v_j B_{ij} \lesssim \max_v \{np^2 v^\top (\mb 1\mb 1^\top)v \} \lesssim mnp^2 $ and $\kappa = z^*_{\max} - z^*_{\min}$. ~\\ \textbf{Bounding the second linear term:} \begin{equation*} \begin{aligned} &\sum_{l=1}^n\bigg( \sum_{i=1}^m v_i\sum_{j\neq i} A_{li}A_{lj} \bar z^*_i (\E[X_{li}] - X_{li}) \bigg) \\ &= \sum_{l=1}^n\sum_{i=1}^m (\E[X_{li}] - X_{li}) \cdot A_{li} \cdot \bigg( v_i \bar z^*_i \sum_{j\neq i} A_{lj} \bigg) \end{aligned} \end{equation*} Again we have a sum over $mn$ independent random variables. To bound the total, we invoke Hoeffding's inequality: \begin{equation*} \begin{aligned} &\sum_{l=1}^n\sum_{i=1}^m (\E[X_{li}] - X_{li}) \cdot A_{li} \cdot \bigg( v_i \bar z^*_i \sum_{j\neq i} A_{lj} \bigg) \\ &\lesssim \sqrt{ \sum_{l=1}^n\sum_{i=1}^m A_{li} \bigg( v_i \bar z^*_i \sum_{j\neq i} A_{lj} \bigg)^2 }\\ &\lesssim \sqrt{ \sum_{l=1}^n\sum_{i=1}^m v_i^2 (\bar z_{\max}^*)^2 \cdot A_{li} \bigg(\sum_{j\neq i} A_{lj} \bigg)^2 }\\ &\asymp \bar z_{\max}^* \sqrt{ \sum_{i=1}^m v_i^2 \sum_{l=1}^n A_{li}\bigg(\underbrace{\sum_{j\neq i} A_{lj}}_{\lesssim mp} \bigg)^2 }\\ &\lesssim \bar z_{\max}^* \sqrt{ \sum_{i=1}^m v_i^2 \bigg(\underbrace{\sum_{l=1}^n A_{li}}_{\lesssim np}\cdot m^2p^2 \bigg) }\\ &\lesssim \bar z_{\max}^* \sqrt{ \sum_{i=1}^m v_i^2 \cdot np \cdot m^2p^2 } \lesssim \frac{\kappa}{m} \sqrt{mp} \cdot \sqrt{mnp^2} \end{aligned} \end{equation*} ~\\ \textbf{Bounding the third quadratic term: }To use the standard Hanson-Wright inequality (cf. \cite{rudelson2013hanson}), which applies to second order chaos of centered random variables, we first observe that: \begin{equation*} \begin{aligned} &\sum_{l=1}^n\bigg(\sum_{i=1}^m v_i\sum_{j\neq i} A_{li}A_{lj} (\bar z^*_i-\bar z^*_j) (X_{li}X_{lj} - \E[X_{li}X_{lj}]) \bigg)\\ &= \sum_{l=1}^n X_l^\top M^l X_l - \E [X_l]^\top M^l \E[X_l]\\ &= \sum_{l=1}^n\bigg( {(X_l-\E[X_l])^\top M^l (X_l-\E[X_l])} + {(X_l - \E [X_l])^\top M^l \E[X_l]} + {\E[X_l]^\top M^l (X_l - \E[X_l])}\bigg) \\ &= \underbrace{ \sum_{l=1}^n(X_l-\E[X_l])^\top M^l (X_l-\E[X_l])}_{S_1} + \underbrace{ \sum_{l=1}^n(X_l - \E [X_l])^\top M^l \E[X_l]}_{S_2} + \underbrace{ \sum_{l=1}^n\E[X_l]^\top M^l (X_l - \E[X_l])}_{S_3} \\ \end{aligned} \end{equation*} where $X_l = (X_{li})_{i=1}^m$ and $M^l_{ij} = v_i A_{li}A_{lj} (\bar z_i^* - \bar z_j^*)$. We now bound these three terms separately. The first term can be bounded using the standard Hanson-Wright inequality; we first rewrite it as: \begin{equation*} \begin{aligned} S_1 = \begin{bmatrix}X_1\\ \vdots \\ X_n \end{bmatrix}^\top \bigg[\diag(M^1,\ldots, M^n) \bigg] \begin{bmatrix}X_1\\ \vdots \\ X_n\end{bmatrix} \end{aligned} \end{equation*} where $M = \bigg[\diag(M^1,\ldots, M^n) \bigg]$ is a block diagonal matrix with the $(l, l)$ blocks being $M^l$ and all other blocks being 0.
Hanson-Wright's inequality gives $$ \Pr(\lvert S_1 \rvert > t) \lesssim \exp{-\min\{\frac{t^2}{\lVert M\rVert_F^2}, \frac{t}{\lVert M\rVert_{op}} \}} $$ The Frobenius norm term can be bounded as \begin{equation*} \begin{aligned} \lVert M\rVert_F^2 &= \sum_{l=1}^n \lVert M^l \rVert_F^2= \sum_{l=1}^n \sum_{i\neq j} v_i^2 A_{li}A_{lj}(\bar z_i^* - \bar z^*_j)^2 \leq \frac{\kappa^2}{m^2} \sum_{l=1}^n\sum_{i\neq j} v_i^2 A_{li}A_{lj}\\ &=\frac{\kappa^2}{m^2}\sum_{i=1}^m v_i^2 \sum_{l=1}^n \sum_{j\neq i} A_{li}A_{lj} = \frac{\kappa^2}{m^2} \sum_{i=1}^m v_i^2\sum_{j\neq i} B_{ij}\\ &\asymp\frac{\kappa^2}{m^2} np^2m \sum_{i=1}^m v_i^2 \asymp\frac{\kappa^2}{m^2} mnp^2 \end{aligned} \end{equation*} Due to the uniform random sampling, it should be the case that $\lVert M\rVert_F \asymp \sqrt n \lVert M^l \rVert_F \gtrsim \sqrt n \lVert M^l \rVert_{op}$ for all $l$. Therefore, the bound is controlled by the exponential quadratic term and we have: $$ S_1 \lesssim \frac{\kappa}{m} \cdot \sqrt{mnp^2} \asymp \kappa\sqrt{\frac{np^2}{m}} $$ ~\\ For the second and third term \begin{equation*} \begin{aligned} S_2 &= \sum_{l=1}^n \bigg( \sum_{i\neq j} v_i A_{li} A_{lj} (\bar z_i^* - \bar z_j^*) (X_{li} - \E[X_{li}]) \E[X_{lj}] \bigg)\\ &= \sum_{l=1}^n \sum_{i=1}^m (X_{li} - \E[X_{li}]) \cdot \bigg(v_i A_{li}\sum_{j\neq i} (\bar z_i^* - \bar z_j^*) A_{lj}\E[X_{lj}] \bigg)\\ &\text{Applying Hoeffding's inequality}\\ &\lesssim \sqrt{\sum_{l=1}^n \sum_{i=1}^m \bigg( v_i A_{li}\sum_{j\neq i} (\bar z_i^* - \bar z_j^*)A_{lj}\E[X_{lj}] \bigg)^2 }\\ &\lesssim \sqrt{\sum_{l=1}^n \sum_{i=1}^m \bigg( v_i A_{li}\sum_{j\neq i}(\underbrace{\bar z_i^* - \bar z_j^*}_{\lesssim \frac{\kappa}{m} }) A_{lj} \bigg)^2 }\\ &\lesssim \frac{\kappa}{m} \sqrt{\sum_{l=1}^n \sum_{i=1}^m v^2_i A_{li} \bigg( \sum_{j\neq i} A_{lj} \bigg)^2 }\\ &\text{Following the same argument as that for bounding the second linear term}\\ &\lesssim \frac{\kappa}{m} \sqrt{mp}\sqrt{mnp^2} \end{aligned} \end{equation*} \\ For the third term: \begin{equation*} \begin{aligned} S_3 &= \sum_{l=1}^n \bigg( \sum_{i\neq j} v_i A_{li} A_{lj} (\bar z_i^* - \bar z_j^*) \E[X_{li}] (X_{lj} - \E[X_{lj}]) \bigg)\\ &= \sum_{l=1}^n \sum_{j=1}^m (X_{lj} - \E[X_{lj}]) \bigg( \sum_{i\neq j} v_i A_{li} A_{lj} (\bar z_i^* - \bar z_j^*) \E[X_{li}] \bigg)\\ &\text{Applying Hoeffding's inequality}\\ &\lesssim \sqrt{\sum_{l=1}^n\sum_{j=1}^m \bigg( \sum_{i\neq j} v_i A_{li} A_{lj} (\bar z_i^* - \bar z_j^*) \E[X_{li}] \bigg)^2 }\\ &\lesssim \frac{\kappa}{m} \sqrt{\sum_{l=1}^n\sum_{j=1}^m A_{lj} \bigg( \sum_{i\neq j} v_i A_{li} \bigg)^2 }\\ &\text{Following the same argument as that for bounding the first linear term}\\ &\lesssim\frac{\kappa}{m} \sqrt{mp}\sqrt{mnp^2} \end{aligned} \end{equation*} \section{Proofs for the bounded difference method} We first rewrite the linear term into a more manageable form: \begin{equation*} \begin{aligned} &d\sum_{i=1}^m v_i \bigg( \sum_{j\neq i} \pi_j^* (P_{ji} - P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij}) \bigg)\\ &=\sum_{i=1}^m v_i \bigg( \sum_{j\neq i} \pi_j^* \big[\sum_{l=1}^n A_{li}A_{lj}\big(X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big) \big] \\ &\quad\quad- \sum_{j\neq i}\pi_i^* \big[\sum_{l=1}^n A_{li}A_{lj}\big(X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big) \big] \bigg)\\ &=\sum_{l=1}^n\sum_{i=1}^m \bigg( \sum_{j\neq i} v_i \pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg) \\ &-\sum_{l=1}^n\sum_{i=1}^m \bigg( \sum_{j\neq i} v_i
\pi_i^*A_{li}A_{lj} \big[X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big]\bigg) \\ &= \sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} \bigg( (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg) \end{aligned} \end{equation*} We will use the method of bounded differences to obtain a concentration inequality on the above sum. Note that this sum is essentially a function $f$ of $m\times n$ independent random variables $\{X_{li}\}$. Let $X$ and $X'$ be identical copies except for $X_{li}\neq X'_{li}$. \begin{equation*} \begin{aligned} \lvert f(X) - f(X')\rvert &= \lvert \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) [X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*] \rvert\\ \end{aligned} \end{equation*} Using Cauchy-Schwarz, we have: \begin{equation*} \begin{aligned} &\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) \underbrace{[X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*]}_{\leq \max\{\pi^*_i, \pi^*_j\} \lesssim \frac{\kappa}{m}}\\ &\leq \frac{\kappa}{m} \cdot \sqrt m \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} = \frac{\kappa}{\sqrt m} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \end{aligned} \end{equation*} Applying Hoeffding's inequality now gives us: \begin{equation*} \begin{aligned} &\sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} \bigg( (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg)\\ &\lesssim \sqrt{\sum_{l=1}^n\sum_{i=1}^m \frac{\kappa^2}{m} \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2 }\\ &\asymp \frac{\kappa}{\sqrt m} \cdot \underbrace{\sqrt{\sum_{i\neq j}(v_i - v_j)^2 B_{ij}}}_{\lesssim \sqrt{mnp^2}}\\ &\lesssim \kappa \sqrt{np^2} \end{aligned} \end{equation*} We thus have $$ \lVert {\pi^*}^\top (P -P^*)\rVert_2 \lesssim \frac{\kappa\sqrt{mnp^2}}{d} $$ \subsection{Entrywise Error Bounds for the Growing $m$ Regime} Consider the case when $mp \gtrsim \log n$. Now follow the same decomposition as in our analysis in the previous section. We claim that when $m$ grows, \begin{itemize} \item $$ I_1 \lesssim \frac{\lVert \pi^*\rVert_{\infty}}{\sqrt{np}}\,, $$ \item $$ I_2 \leq c \cdot \lVert \pi - \pi^*\rVert_{\infty}\,, $$ where $c$ is some constant less than 1, \item $$ I_3, I_4 \lesssim \frac{\lVert \pi^* \rVert_{\infty}}{\sqrt{np}} \,.$$ \end{itemize} Note that similarly to the $\ell_2$ analysis of the spectral method, we also shave off a $\sqrt p$ factor in the three terms $I_1, I_3, I_4$. ~\\ \textbf{Improving term $I_1$:} We have from our $\ell_2$ analysis that when $m$ grows, $$ \lVert \pi - \pi^* \rVert_2 \lesssim \frac{1}{\sqrt{mnp}} \asymp \frac{\sqrt m \cdot \lVert \pi^* \rVert_{\infty} }{\sqrt{np}} \,.$$ We can then bound $I_1$ as $$ I_1 \lesssim \frac{\lVert \pi - \pi^* \rVert_2}{d} \cdot \sqrt m \cdot np^2 \lesssim \frac{\lVert \pi^*\rVert_{\infty} }{\sqrt{np}}\,. $$ ~\\ \textbf{Improving terms $I_3$ and $I_4$:} We use the same trick as in our $\ell_2$ analysis of the spectral algorithm, where we obtain a tighter bound on the bounded difference term in Equation (\ref{eqn:bounded-difference}) by introducing an extra $\sqrt p$ factor.
\begin{equation*} \begin{aligned} &\sum_{j\neq i} \pi_j^* A_{li} A_{lj} \leq \lVert \pi^*\rVert_{\infty} \cdot \sum_{j\neq i} A_{lj} \cdot A_{li}A_{lj} \\ &\leq \lVert \pi^*\rVert_{\infty} \cdot \sqrt{\sum_{j\neq i} A_{lj} } \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj} }\\ &\leq \lVert \pi^*\rVert_{\infty} \cdot \sqrt{mp} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj} }\\ \end{aligned} \end{equation*} We can thus bound $I_3$ more tightly as \begin{equation*} \begin{aligned} I_3 &= \frac{1}{d} \sum_{j\neq i}\pi_j^* \cdot \sum_{l=1}^n A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})] \big] \\ &\lesssim \frac{1}{d} \cdot \sqrt{ \lVert \pi^*\rVert^2_{\infty} \cdot mp \cdot \sum_{j\neq i}\sum_{l=1}^n A_{li}A_{lj} + \sum_{j\neq i}\sum_{l=1}^n\lVert \pi^*\rVert^2_{\infty} A_{li}A_{lj} }\\ &\asymp \frac{\lVert \pi^*\rVert_{\infty} }{d} \cdot \sqrt{mp \sum_{j\neq i} \sum_{l=1}^n A_{li}A_{lj} }\\ &\lesssim \frac{\lVert \pi^*\rVert_{\infty} }{d} \cdot \sqrt{mp} \cdot \sqrt{m np^2} \asymp \frac{\lVert \pi^*\rVert_{\infty} }{\sqrt{np}} \end{aligned} \end{equation*} Since the terms $I_3$ and $I_4$ can be bounded in the same way, we also obtain a tighter rate for $I_4$. ~\\ \textbf{Putting all the terms together,} we have $$ \frac{\lVert \pi - \pi^*\rVert_{\infty}}{\lVert \pi^*\rVert_{\infty}} \lesssim \frac{1}{\sqrt{np}}\, $$ under the regime where $mp \gtrsim \log n$. \subsection{Other Pairwise Methods in the Literature} In this section we describe three methods that are related to our algorithm. As noted before, previous matrix methods in the literature construct an item-item matrix and assume that this matrix is dense. It is unclear how one would generalize these methods to the case where the item-item matrix is sparse, which is quite common in real-life datasets. A common quantity that the previous matrix methods use is $$ f_{ij} = \Pr_{l\in [n]} (X_{li}=1, X_{lj}=0 \,\lvert \, X_{li} + X_{lj}=1 )\,.$$ Intuitively, $f_{ij}$ is the empirical probability with which a user responds $1$ to item $i$ and $0$ to item $j$, conditioned on the event that the user responds $1$ to exactly one of the two items. Suppose that we have collected $f_{ij}$ for all $i\neq j$. Consider a matrix $D$ defined entrywise as $$ D_{ij} = \frac{f_{ji}}{f_{ij}}\,. $$ This is also known as the \emph{positive reciprocal matrix}. \textbf{The Row Sum Approach of Choppin \cite{choppin1982fully}:} Given the matrix $D$, construct a matrix $\ln D$ by taking the $\log$ of every entry of $D$. The row sums of this $\ln D$ matrix (after appropriate normalization) produce an estimate of $\beta$. To see why, it is helpful to first check that in the limit of infinite data, and supposing that we observe all pairs $i, j$, $f_{ij}$ is exact and $$ f_{ij} = \frac{e^{\beta_j^*}}{e^{\beta_i^*} + e^{\beta_j^*} }\,. $$ Then $$ D_{ij} = \frac{e^{\beta_i^*}}{e^{\beta_j^*}}$$ and $$ \ln D_{ij} = \beta_i^* - \beta^*_j\,.$$ It is easy to see that the row sums correspond exactly to $\beta^*$ up to a global shift. \textbf{The Eigenvector Method of Garner \cite{garner2002eigenvector} and Saaty \cite{saaty1987analytic}:} Given the matrix $D$, compute its leading right eigenvector. Modulo an appropriate scaling factor, this eigenvector is an estimate for $e^{\beta}$. Taking the log of this eigenvector recovers $\beta$. To see why, verify that $$ D \begin{bmatrix} e^{\beta^*_1}\\\vdots \\ e^{\beta_m^*} \end{bmatrix} = m \cdot \begin{bmatrix} e^{\beta^*_1}\\\vdots \\ e^{\beta_m^*} \end{bmatrix}\,.
$$ \textbf{Pairwise Maximum Likelihood Estimate (PMLE):} Another pairwise approach used in the literature is the Pairwise Maximum Likelihood Estimate (PMLE) \cite{zwinderman1995pairwise}. Similar to the intuition behind the pairwise methods mentioned above, PMLE uses the fact that the conditional probability $ \Pr_{l\in [n]} (X_{li}=1, X_{lj}=0 \,\lvert \, X_{li} + X_{lj}=1 )$ does not involve the user parameter. PMLE maximizes the pairwise conditional likelihood. We were not able to find any open-source Python implementation of PMLE, so we use the minorization-maximization (MM) algorithm for estimation \cite{hunter2004tutorial} and adapt our implementation from an open-source implementation \cite{choix}. We also implemented a different version of PMLE using Scipy's optimization subroutine \cite{2020SciPy-NMeth}. However, the latter version has very significant numerical issues and gives inaccurate results. We therefore use the MM-based version in our experiments. For completeness, we conduct extra experiments comparing our spectral method with the two previously studied matrix methods on synthetic data with $m=100$ under full observation. The result is presented in Figure \ref{fig:eigen-methods-comparisons}. \begin{figure}[h] \centering \includegraphics[height=60mm, width=80mm]{pics/eigen_vector_methods_p=1_0.pdf} \caption{Comparison between our spectral method and two matrix methods in the literature. Their performance is quite similar. However, our method easily generalizes to the setting where the pairwise comparison matrix contains missing entries, whereas the matrix methods in the literature assume a full comparison matrix. \label{fig:eigen-methods-comparisons}} \end{figure} \textbf{Extra Experiment Results with Pairwise MLE:} For completeness, we also conduct extra experiments comparing the spectral method with PMLE. The result (together with previous results reported in the main paper) is summarized in Table \ref{tbl:all-results-pair}. \begin{table}[h!]
\begin{adjustwidth}{-2.35cm}{} \resizebox{1.25\textwidth}{!}{% \begin{tabular}{|c|ccccc|ccccc|ccccc|ccccc|} \hline & \multicolumn{5}{c|}{AUC} & \multicolumn{5}{c|}{Log likelihood} & \multicolumn{5}{c|}{Top-K accuracy} & \multicolumn{5}{c|}{Inference time} \\ \hline Dataset & \multicolumn{1}{c|}{Spectral} & \multicolumn{1}{c|}{MMLE} & \multicolumn{1}{c|}{CMLE} & \multicolumn{1}{c|}{PMLE} & JMLE & \multicolumn{1}{c|}{Spectral} & \multicolumn{1}{c|}{MMLE} & \multicolumn{1}{c|}{CMLE} & \multicolumn{1}{c|}{PMLE} & JMLE & \multicolumn{1}{c|}{Spectral} & \multicolumn{1}{c|}{MMLE} & \multicolumn{1}{c|}{CMLE} &\multicolumn{1}{c|}{PMLE} & JMLE & \multicolumn{1}{c|}{Spectral} & \multicolumn{1}{c|}{MMLE} & \multicolumn{1}{c|}{CMLE} & \multicolumn{1}{c|}{PMLE} & JMLE \\ \hline {LSAT} & \multicolumn{1}{c|}{$0.707$} & \multicolumn{1}{c|}{$0.707$} & \multicolumn{1}{c|}{$0.707$} & \multicolumn{1}{c|}{$0.707$} & {$0.707$} & \multicolumn{1}{c|}{$-0.487$} & \multicolumn{1}{c|}{$-0.489$} & \multicolumn{1}{c|}{$-0.487$} & \multicolumn{1}{c|}{$-0.487$} & {$-0.485$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & {N/A} & \multicolumn{1}{c|}{$0.028$} & \multicolumn{1}{c|}{$0.159$} & \multicolumn{1}{c|}{$0.154$} & \multicolumn{1}{c|}{$0.011$} & {$0.075$} \\ \hline {UCI} & \multicolumn{1}{c|}{$0.565$} & \multicolumn{1}{c|}{$0.565$} & \multicolumn{1}{c|}{$0.565$} & \multicolumn{1}{c|}{$0.565$} & {$0.565$} & \multicolumn{1}{c|}{$-0.687$} & \multicolumn{1}{c|}{$-0.686$} & \multicolumn{1}{c|}{$-0.692$} & \multicolumn{1}{c|}{$-0.687$} & {$-0.706$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & {N/A} & \multicolumn{1}{c|}{$0.015$} & \multicolumn{1}{c|}{$0.133$} & \multicolumn{1}{c|}{$0.136$} & \multicolumn{1}{c|}{$0.015$} & {$0.034$} \\ \hline {3 GRADES} & \multicolumn{1}{c|}{$0.532$} & \multicolumn{1}{c|}{$0.532$} & \multicolumn{1}{c|}{$0.532$} & \multicolumn{1}{c|}{$0.532$} & {$0.532$} & \multicolumn{1}{c|}{$-0.706$} & \multicolumn{1}{c|}{$-0.692$} & \multicolumn{1}{c|}{$-0.699$} & \multicolumn{1}{c|}{$-0.704$} & {$-0.717$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & {N/A} & \multicolumn{1}{c|}{$0.021$} & \multicolumn{1}{c|}{$0.181$} & \multicolumn{1}{c|}{$0.105$} & \multicolumn{1}{c|}{$0.011$} & {$0.009$} \\ \hline {RIIID} & \multicolumn{1}{c|}{$0.723$} & \multicolumn{1}{c|}{$0.724$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.724$} & {$0.724$} & \multicolumn{1}{c|}{$-0.486$} & \multicolumn{1}{c|}{$-0.49$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$-0.486$} & {$-0.486$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{N/A} & {N/A} & \multicolumn{1}{c|}{$13.1$} & \multicolumn{1}{c|}{$104$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$16.3\text{K}$} & {$61.2$} \\ \hline {HETREC} & \multicolumn{1}{c|}{$0.728$} & \multicolumn{1}{c|}{$0.729$} & \multicolumn{1}{c|}{$0.506$} & \multicolumn{1}{c|}{$0.727$} & {$0.73$} & \multicolumn{1}{c|}{$-0.604$} & \multicolumn{1}{c|}{$-0.603$} & \multicolumn{1}{c|}{$-1.119$} & \multicolumn{1}{c|}{$-0.603$} & {$-0.602$} & \multicolumn{1}{c|}{$0.5 \git 0.64 \git 0.6$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.02$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.0$} & \multicolumn{1}{c|}{$0.5 \git 0.64 \git 0.58$} & {$0.0 \git 0.0 \git 0.02$} & \multicolumn{1}{c|}{$50.1$} & \multicolumn{1}{c|}{$140$} & \multicolumn{1}{c|}{$224\text{K}$} 
& \multicolumn{1}{c|}{$4.25\text{K}$} & {$144$} \\ \hline {ML-100K} & \multicolumn{1}{c|}{$0.662$} & \multicolumn{1}{c|}{$0.659$} & \multicolumn{1}{c|}{$0.498$} & \multicolumn{1}{c|}{$0.662$} & {$0.665$} & \multicolumn{1}{c|}{$-0.646$} & \multicolumn{1}{c|}{$-0.66$} & \multicolumn{1}{c|}{$-1.159$} & \multicolumn{1}{c|}{$-0.645$} & {$-0.653$} & \multicolumn{1}{c|}{$0.4 \git 0.6 \git 0.54$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.0$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.0$} & \multicolumn{1}{c|}{$0.4 \git 0.6 \git 0.5$} & {$0.0 \git 0.0 \git 0.0$} & \multicolumn{1}{c|}{$1.39$} & \multicolumn{1}{c|}{$16.2$} & \multicolumn{1}{c|}{$9.56\text{K}$} & \multicolumn{1}{c|}{$368$} & {$21$} \\ \hline {ML-1M} & \multicolumn{1}{c|}{$0.698$} & \multicolumn{1}{c|}{$0.701$} & \multicolumn{1}{c|}{$0.468$} & \multicolumn{1}{c|}{$0.699$} & {$0.7$} & \multicolumn{1}{c|}{$-0.626$} & \multicolumn{1}{c|}{$-0.632$} & \multicolumn{1}{c|}{$-1.166$} & \multicolumn{1}{c|}{$-0.627$} & {$-0.63$} & \multicolumn{1}{c|}{$0.8 \git 0.72 \git 0.72$} & \multicolumn{1}{c|}{$0.6 \git 0.6 \git 0.62$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.0$} & \multicolumn{1}{c|}{$0.4 \git 0.6 \git 0.6$} & {$0.5 \git 0.64 \git 0.66$} & \multicolumn{1}{c|}{$19.2$} & \multicolumn{1}{c|}{$86.9$} & \multicolumn{1}{c|}{$156\text{K}$} & \multicolumn{1}{c|}{$1.38\text{K}$} & {$194$} \\ \hline {EACH MOVIE} & \multicolumn{1}{c|}{$0.716$} & \multicolumn{1}{c|}{$0.718$} & \multicolumn{1}{c|}{$0.522$} & \multicolumn{1}{c|}{$0.715$} & {$0.716$} & \multicolumn{1}{c|}{$-0.615$} & \multicolumn{1}{c|}{$-0.613$} & \multicolumn{1}{c|}{$-0.946$} & \multicolumn{1}{c|}{$-0.616$} & {$-0.614$} & \multicolumn{1}{c|}{$0.8 \git 0.76 \git 0.82$} & \multicolumn{1}{c|}{$0.8 \git 0.68 \git 0.84$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.02$} & \multicolumn{1}{c|}{$0.7 \git 0.72 \git 0.78$} & {$0.6 \git 0.6 \git 0.72$} & \multicolumn{1}{c|}{$11.3$} & \multicolumn{1}{c|}{$329$} & \multicolumn{1}{c|}{$220\text{K}$} & \multicolumn{1}{c|}{$446$} & {$1.9\text{K}$} \\ \hline {ML-10M} & \multicolumn{1}{c|}{$0.714$} & \multicolumn{1}{c|}{$0.716$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.714$} & {$0.716$} & \multicolumn{1}{c|}{$-0.617$} & \multicolumn{1}{c|}{$-0.619$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$-0.62$} & {$-0.618$} & \multicolumn{1}{c|}{$0.5 \git 0.84 \git 0.7$} & \multicolumn{1}{c|}{$0.1 \git 0.28 \git 0.32$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.5 \git 0.72 \git 0.72$} & {$0.0 \git 0.32 \git 0.36$} & \multicolumn{1}{c|}{$821$} & \multicolumn{1}{c|}{$3.93\text{K}$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$9.53\text{K}$} & {$6.55\text{K}$} \\ \hline {ML-20M} & \multicolumn{1}{c|}{$0.709$} & \multicolumn{1}{c|}{$0.71$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.709$} & {$0.71$} & \multicolumn{1}{c|}{$-0.619$} & \multicolumn{1}{c|}{$-0.619$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$-0.621$} & {$-0.619$} & \multicolumn{1}{c|}{$0.5 \git 0.8 \git 0.64$} & \multicolumn{1}{c|}{$0.3 \git 0.44 \git 0.4$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.4 \git 0.6 \git 0.5$} & {$0.1 \git 0.4 \git 0.4$} & \multicolumn{1}{c|}{$1.58\text{K}$} & \multicolumn{1}{c|}{$5.36\text{K}$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$12.8\text{K}$} & {$4.42\text{K}$} \\ \hline {BX} & \multicolumn{1}{c|}{$0.546$} & \multicolumn{1}{c|}{$0.577$} & \multicolumn{1}{c|}{$0.503$} & \multicolumn{1}{c|}{$0.546$} & {$0.57$} & \multicolumn{1}{c|}{$-0.618$} & \multicolumn{1}{c|}{$-0.612$} & \multicolumn{1}{c|}{$-0.8$} & 
\multicolumn{1}{c|}{$-0.627$} & {$-0.617$} & \multicolumn{1}{c|}{$0.3 \git 0.16 \git 0.16$} & \multicolumn{1}{c|}{$0.3 \git 0.24 \git 0.2$} & \multicolumn{1}{c|}{$0.0 \git 0.0 \git 0.02$} & \multicolumn{1}{c|}{$0.3 \git 0.28 \git 0.3$} & {$0.3 \git 0.2 \git 0.18$} & \multicolumn{1}{c|}{$205$} & \multicolumn{1}{c|}{$2.02\text{K}$} & \multicolumn{1}{c|}{$156\text{K}$} & \multicolumn{1}{c|}{$338$} & {$481$} \\ \hline {BOOK-GENOME} & \multicolumn{1}{c|}{$0.658$} & \multicolumn{1}{c|}{$0.665$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.657$} & {$0.654$} & \multicolumn{1}{c|}{$-0.651$} & \multicolumn{1}{c|}{$-0.645$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$-0.649$} & {$-0.651$} & \multicolumn{1}{c|}{$0.6 \git 0.44 \git 0.42$} & \multicolumn{1}{c|}{$0.3 \git 0.32 \git 0.34$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$0.3 \git 0.44 \git 0.36$} & {$0.2 \git 0.24 \git 0.38$} & \multicolumn{1}{c|}{$2.53\text{K}$} & \multicolumn{1}{c|}{$2.56\text{K}$} & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{c|}{$7.8\text{K}$} & {$4.34\text{K}$} \\ \hline \end{tabular}} \end{adjustwidth} \caption{Results from Table \ref{tbl:all-results-main} with PMLE. PMLE is quite competitive when applied to small datasets. However, similarly to CMLE, it tends to converge quite slowly when applied to large datasets. Overall, both PMLE and the spectral method are quite competitive, but the spectral method is significantly faster. \label{tbl:all-results-pair}} \end{table} \textbf{Extra experiments with a Bayesian method.} All of the estimation algorithms considered so far are point estimation algorithms (i.e., they return a single parameter estimate). In certain applications, one may prefer a Bayesian estimation algorithm that returns a \emph{distribution} over the estimate. Recently, Bayesian algorithms based on variational inference have received considerable attention in the IRT literature. We conduct extra experiments using the algorithm proposed in \cite{natesan2016bayesian}, whose implementation can be found in \cite{lalor2019emnlp,rodriguez2021evaluation}. Table \ref{tbl:experiments-bayesian} summarizes the results on a restricted subset of experiments. One can see that the Bayesian algorithm is somewhat more accurate than the spectral algorithm. However, it is considerably more complicated and as a result runs much slower than the spectral algorithm. \begin{table}[] \centering \resizebox{0.5\textwidth}{!}{% \begin{tabular}{|c|c|c|} \hline \textbf{Dataset} & \textbf{AUC (Bayesian)} & \textbf{AUC (Spectral)} \\ \hline LSAT & 0.706 & 0.707 \\ \hline 3 Grades & 0.5322 & 0.532 \\ \hline UCI & 0.565 & 0.565 \\ \hline ML-100K & 0.695 & 0.662 \\ \hline & \textbf{LogLik (Bayesian)} & \textbf{Loglik (Spectral)} \\ \hline LSAT & -0.487 & -0.487 \\ \hline 3 Grades & -0.681 & -0.687 \\ \hline UCI & -0.693 & -0.706 \\ \hline ML-100K & -0.646 & -0.646 \\ \hline & \textbf{Top-K (Bayesian)} & \textbf{Top-K (Spectral)} \\ \hline ML-100K & 0; 0; 0.04; & 0.4; 0.6; 0.54 \\ \hline & \textbf{Time (Bayesian)} & \textbf{Time (Spectral)} \\ \hline LSAT & 63 & 0.028 \\ \hline 3 Grades & 27 & 0.015 \\ \hline UCI & 26 & 0.021 \\ \hline ML-100K & 6700 & 2 \\ \hline \end{tabular}} \caption{While the Bayesian algorithm is somewhat more accurate than the spectral algorithm, it is considerably slower. In fact, it is significantly slower than CMLE, the slowest method considered in our main experiments.
\label{tbl:experiments-bayesian}} \end{table} \subsection{Datasets Metadata and Experiment Setup} Table \ref{tbl:datasets-metadata} summarizes the metadata for all the real-life datasets used in our experiments. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|} \hline Dataset & $m$ & $n$ & Reference \\ \hline LSAT & 5 & 1000 & \cite{mcdonald2013test} \\ \hline UCI & 4 & 131 & \cite{hussain2018educational} \\ \hline 3 GRADES & 3 & 648 & \cite{cortez2008using} \\ \hline RIIID & 6311 & 22906 & \cite{riiid} \\ \hline HETREC & 10197 & 2113 & \cite{Cantador:RecSys2011} \\ \hline ML-100K & 1682 & 943 & \cite{harper2015movielens} \\ \hline ML-1M & 3952 & 6040 & \cite{harper2015movielens} \\ \hline EACH MOVIE & 1628 & 72916 & \cite{harper2015movielens} \\ \hline ML-10M & 10681 & 71567 & \cite{harper2015movielens} \\ \hline ML-20M & 27278 & 138493 & \cite{harper2015movielens} \\ \hline BX & 6185 & 278858 & \cite{ziegler2005improving} \\ \hline BOOK-GENOME & 9374 & 350332 & \cite{kotkov2022tag}\\ \hline \end{tabular} \caption{Datasets metadata and references. \label{tbl:datasets-metadata}} \end{table} \textbf{Experiment Setup:} For each experiment on real-life datasets, we first partition the data by randomly dividing the set of users into 80\% of users for training and 20\% of users for testing. Within the set of training users, we further partition into 90\% for inference and 10\% for validation. For the prior distribution over the user parameters, we experimented with 10 prior distributions, all normal distributions but with different means and standard deviations. For each method, we run inference on the inference set to obtain an item estimate $\beta$. For ranking metrics evaluation, we compute top-$K$ accuracy with respect to the reference ranking predetermined by average ratings (after removing items with very high average ratings but very few ratings). For the AUC and log-likelihood metrics, we choose the prior distribution over $\theta$ by evaluating the log-likelihood on the validation set. The prior distribution corresponding to the highest validation log-likelihood is used to evaluate log-likelihood on the test set. \newpage \textbf{Python implementation.} For readers reading this paper online, we also include here the Python implementation of our spectral algorithm. \begin{lstlisting}[language=Python,basicstyle=\tiny]
import numpy as np
from scipy.sparse import csc_matrix

INVALID_RESPONSE = -99999

def construct_markov_chain_accelerated(X, lambd=0.1):
    m, _ = X.shape
    D = np.ma.masked_equal(X, INVALID_RESPONSE, copy=False)
    D_compl = 1. - D
    M = np.ma.dot(D, D_compl.T)  # This computes Mij = sum_l Alj Ali Xli (1-Xlj)
    np.fill_diagonal(M, 0)
    M = np.round(M)
    # Add regularization
    M = np.where(np.logical_or((M != 0), (M.T != 0)), M + lambd, M)
    d = []
    # Construct a row stochastic matrix
    for i in range(m):
        di = max(np.sum(M[i, :]), 1)
        d.append(di)
        M[i, :] /= di
        M[i, i] = 1. - np.sum(M[i, :])
    d = np.array(d)
    return M, d

def spectral_estimate(X, max_iters=10000, lambd=1, eps=1e-6):
    """Estimate the hidden parameters according to the Rasch model, either for
    the tests' difficulties or the students' abilities. We follow the convention
    in Girth (https://eribean.github.io/girth/docs/quickstart/quickstart/):
    the response matrix X has shape (m, n) where m is the number of items and
    n is the number of users. The algorithm returns the item estimates.
    X: np.array of size (m, n) where missing entries have value INVALID_RESPONSE
    max_iters: int, maximum number of iterations to compute the stationary
        distribution of the Markov chain
    lambd: float, regularization parameter
    eps: tolerance for convergence checking
    """
    M, d = construct_markov_chain_accelerated(X, lambd=lambd)
    M = csc_matrix(M)
    m = X.shape[0]
    pi = np.ones(m)
    # Power iteration to compute the stationary distribution of M
    for _ in range(max_iters):
        pi_next = (pi @ M)
        pi_next /= np.sum(pi_next)
        if np.linalg.norm(pi_next - pi) < eps:
            pi = pi_next
            break
        pi = pi_next
    # Undo the row normalization and map back to the item parameters
    pi = pi / d
    beta = np.log(pi)
    beta = beta - np.mean(beta)
    return beta
\end{lstlisting} \section{Conclusion} We propose a new spectral algorithm for the item estimation problem under the celebrated Rasch model. Our algorithm is theoretically well-founded, practically performant, and should be added to the statistician's quiver of estimation methods when analyzing binary response data. Extending our algorithm to more expressive IRT models such as 2PL or 3PL is an open avenue. In the future, we also hope to generalize the method to more complicated response data types such as ordinal or rating data, as well as to incorporate ancillary information (user and item features). \title{A Spectral Approach to Item Response Theory} \author{% Duc Nguyen\\ Department of Computer and Information Science\\ University of Pennsylvania \\ \texttt{[email protected]} \\ \And Anderson Y. Zhang \\ Department of Statistics and Data Science \\ University of Pennsylvania \\ \texttt{[email protected]} \\ } \begin{document} \maketitle \vspace{-1em} \begin{abstract} The Rasch model is one of the most fundamental models in \emph{item response theory} and has wide-ranging applications from education testing to recommendation systems. In a universe with $n$ users and $m$ items, the Rasch model assumes that the binary response $X_{li} \in \{0,1\}$ of a user $l$ with parameter $\theta^*_l$ to an item $i$ with parameter $\beta^*_i$ (e.g., a user likes a movie, a student correctly solves a problem) is distributed as $\Pr(X_{li}=1) = 1/(1 + \exp{-(\theta^*_l - \beta^*_i)})$. In this paper, we propose a \emph{new item estimation} algorithm for this celebrated model (i.e., to estimate $\beta^*$). The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph. We complement our algorithmic contributions with finite-sample error guarantees, the first of their kind in the literature, showing that our algorithm is consistent and enjoys favorable optimality properties. We discuss practical modifications to accelerate and robustify the algorithm that practitioners can adopt. Experiments on synthetic and real-life datasets, ranging from small education testing datasets to large recommendation systems datasets, show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature.
\end{abstract} \vspace{-1em} \section{Introduction} \import{./introduction/}{intro} \subsection{Notations and Problem Formulation} \import{./introduction/}{notations} \section{The Spectral Estimator}\label{sect:algorithm} \import{./spectral_algo/}{algorithm} \section{Theoretical Analysis}\label{sect:analysis} \import{./analysis/}{analysis_overview} \subsection{Finite Sample Error Guarantees}\label{sect:error-guarantees} \import{./analysis/}{upper_bound} \subsection{Cramer-Rao Lower Bound}\label{sect:lower-bounds} \import{./analysis/}{lower_bound} \section{Practical Implementation Aspects}\label{sect:accelerated} \import{./spectral_algo/}{accelerated} \section{Experiments}\label{sect:experiments} \import{./experiments/}{experiments} \section{Related Works} \import{./introduction/}{related_works} \section{Ethical Considerations} Our work proposes an algorithm whose real-life applications often involve actual human data with sensitive information. For example, the Rasch model is often studied in the context of education testing and psychological testing where the subjects of the studies are students and patients. Therefore, deploying our algorithm (or any algorithm in this context) needs to be accompanied by thoughtful and thorough ethical considerations. In this work, we provide the algorithmic tool and the theoretical guarantees that lay its foundation. We believe that a socially constructive application of our algorithm should always be accompanied by a detailed explanation of its fundamental limitations and assumptions, and decision makers need to take these aspects into account when interpreting the results returned by the algorithm. \import{./introduction/}{conclusion} \section{Acknowledgement} The authors would like to thank William Zhang for proofreading and leaving helpful comments on earlier drafts of this paper. We would also like to thank the anonymous reviewers for their thoughtful suggestions that have been incorporated to improve this paper. D.N. and A.Z. are supported by NSF Grant DMS-2112988. A.Z. acknowledges financial support from the Alfred H. Williams Faculty Scholar award. Any opinions expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation. \section{Problem setup}\label{sect:problem-setup} Suppose that in a final exam, there are $n$ students and $m$ total problems. Each student has a latent ability parameter $w^*_l > 0$ for $l = 1, \ldots, n$. Each problem has a latent difficulty parameter $z^*_i>0$ for $i = 1,\ldots, m$. If student $l$ is given problem $i$, she can solve the problem correctly with probability $\frac{w^*_l}{w^*_l + z^*_i}$. Let $X_{li}$ be the random indicator variable for the event that student $l$ solves problem $i$. For each student $l$, let $\calP_l$ be the set of problems assigned to her. We assume that the student attempts all of the problems assigned to her. Per the Rasch model, we assume that $\{X_{li}\}_{i\in \calP_l}$ are independent binary random variables. For two problems $i, j$, let $\Ss_{ij}$ denote the set of students who were assigned both problems. \subsection{Preliminaries} As a convenient shorthand, define $w^*_i = e^{\beta^*_i}$ for $i \in [m]$. With $ \kappa:= \beta_{\max} - \beta_{\min}$, we have $w^*_{\max}/w^*_{\min} = e^{\kappa}$. Let $\gamma = \min_{l\in[n], i, j\in [m]} \E[X_{li}(1-X_{lj})]$.
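To make the generative process concrete, the following Python sketch simulates one exam under this setup and evaluates the quantity $\gamma$ defined above. This is only an illustration: the sizes, the assignment rate, and the lognormal parameter draws are assumptions made here for the example, not choices made elsewhere in the paper. \begin{lstlisting}[language=Python,basicstyle=\tiny]
import numpy as np

rng = np.random.default_rng(1)

n, m, p = 1000, 20, 0.3    # illustrative sizes and assignment rate (assumption)
w = rng.lognormal(size=n)  # latent student abilities w*_l > 0 (assumption)
z = rng.lognormal(size=m)  # latent problem difficulties z*_i > 0 (assumption)

# A_li = 1 iff problem i is assigned to student l, each with probability p
A = rng.random((n, m)) < p
# Student l solves an assigned problem i with probability w_l / (w_l + z_i)
S = w[:, None] / (w[:, None] + z[None, :])
X = (rng.random((n, m)) < S) & A

# gamma = min_{l,i,j} E[X_li (1 - X_lj)]; for a fixed l the minimum over (i, j)
# is S_l,min * (1 - S_l,max)
gamma = np.min(S.min(axis=1) * (1.0 - S.max(axis=1)))
print(gamma)
\end{lstlisting}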
Define $B = A^\top A$ and the following events: $$ \A = \{ \frac{np^2}{2} \leq B_{ij} \leq \frac{3np^2}{2} \,\forall i\neq j\in [m] \}\,,$$ $$ \A^+ = \A \cap \{ \frac{mp}{2} \leq A_{l}^\top \mb 1 \leq \frac{3mp}{2} \,\forall l \in [n] \} \,.$$ Both events happen with high probability under appropriate conditions, as summarized by the two lemmas below. \begin{lemma} Consider the random sampling scheme described in Section \ref{sect:error-guarantees}. We have $$ \Pr(\A) \geq 1 - \exp{-\frac{np^2}{20}} $$ so long as $np^2 \geq C_1 \log m$ for a sufficiently large constant $C_1$ (e.g., $C_1 \geq 60$). \end{lemma} \begin{proof} Invoking the Chernoff bound, we have: \begin{equation*} \begin{aligned} \Pr(\lvert \sum_{l=1}^n A_{li}A_{lj} - \E[A_{li}A_{lj}] \rvert > \frac{1}{2}np^2) \leq 2\exp{-\frac{\frac{1}{4}np^2 }{\frac{1}{2}+2}} = \exp{-\frac{np^2}{10}+\ln 2}\,. \end{aligned} \end{equation*} It can be checked that so long as $np^2 \geq 60\ln m \geq 20\ln 2 + 40\ln m$ for $m \geq 2$, then $\exp{-\frac{np^2}{10}+\ln 2} \leq \exp{-\frac{np^2}{20} -2\ln m } $. The rest of the proof follows by applying a union bound over all pairs $i\neq j$. \end{proof} \begin{lemma} Consider the random sampling scheme described in Section \ref{sect:error-guarantees}. We have $$ \Pr(\A^+) \geq 1 - \exp{-\frac{np^2}{20}} - \frac{1}{n^{9}} $$ so long as $np^2 \geq C_1 \log m$ and $mp \geq C_1 \log n$ for a sufficiently large constant $C_1$ (e.g., $C_1 > 101$). \end{lemma} \begin{proof} The first term $\exp{-\frac{np^2}{20}}$ is obtained using the same argument as in the lemma above. For the second term $\frac{1}{n^{9}}$, we again invoke the Chernoff bound: \begin{equation*} \begin{aligned} \Pr(\lvert \sum_{i=1}^m A_{li} - \E[A_{li}] \rvert > \frac{1}{2}mp) \leq 2\exp{-\frac{\frac{1}{4}mp }{\frac{1}{2}+2}} = \exp{-\frac{mp}{10}+\ln 2}\,. \end{aligned} \end{equation*} So long as $mp \geq 100 \log n$, one can see that $\exp{-\frac{mp}{10}+\ln 2} \leq \frac{1}{n^{10}}$. Applying a union bound over all $n$ users gives us the probability bound. \end{proof} When event $\A$ (or $\A^+$) holds, one can simply set $d = \frac{3mnp^2}{2} $. We state without proof the following useful bounds: $$ w^*_{\min} \geq \frac{\sum_{i=1}^m w^*_i}{m e^\kappa}\, \quad\text{and}\quad \pi^*_{\min} \geq \frac{\sum_{i=1}^m \pi^*_i}{m e^\kappa} = \frac{1}{me^{\kappa}}\,. $$ $$ w^*_{\max} \leq \frac{e^\kappa\sum_{i=1}^m w^*_i}{m}\,\quad\text{and}\quad \pi^*_{\max} \leq \frac{e^\kappa\sum_{i=1}^m \pi^*_i}{m} =\frac{e^{\kappa}}{m}\,. $$ ~\\ \textbf{Reversibility of the idealized Markov chain:} Note that the eigenperturbation bound in Lemma \ref{lem:perturb-bound-mc} requires that the reference Markov chain $P^*$ is reversible. Fortunately for us, this is indeed the case. \begin{repproposition}{lem:reversibility} For a fixed assignment $A$, consider the following idealized Markov chain: $$ P^*_{ij} =\begin{cases} \frac{1}{d} \sum_{l=1}^n A_{li}A_{lj} \E[X_{li}(1-X_{lj})] &\text{for } i\neq j \\ 1 - \sum_{k\neq i} P^*_{ik} &\text{for } i = j \end{cases}\quad , $$ where $d$ is some sufficiently large normalization constant. Then the Markov chain is reversible and its stationary distribution satisfies $\pi^*_i = w_i^*/(\sum_k w_k^*)$ for $i\in[m]$. \end{repproposition} \begin{proof} This boils down to verifying the reversibility condition, i.e., whether $\pi_i^* P_{ij}^* = \pi_j^* P_{ji}^*$.
One can see that \begin{equation*} \begin{aligned} \pi_i^* P^*_{ij} &= \frac{w_i^*}{\sum_{k\in[m]} w_k^* }\cdot \frac{1}{d}\sum_{l\in [n]} A_{li}A_{lj} \frac{e^{\theta^*_l}}{e^{\theta^*_l} + w^*_i} \cdot \frac{w_j^*}{e^{\theta^*_l} + w_j^*} \\ &= \frac{w_j^*}{\sum_{k\in[m]} w_k^*}\cdot \frac{1}{d}\sum_{l\in [n]}A_{li}A_{lj} \frac{e^{\theta^*_l}}{e^{\theta^*_l} + w^*_j} \cdot \frac{w_i^*}{e^{\theta^*_l} + w_i^*} \\ &= \pi_j^* P^*_{ji}\,. \end{aligned} \end{equation*} This completes the proof. \end{proof} ~\\ \textbf{$\ell_2$ eigen-perturbation bound:} The reader might recognize the similarity between the following perturbation bound and Theorem 8 of \cite{chen2019spectral}. Our lemma has been modified to use the $\ell_2$ norm instead of the induced norm in the original theorem. \begin{lemma}\label{lem:perturb-bound-mc} Consider two discrete time Markov chains $P$ and $P^*$ with a finite state space and stationary distributions $\pi$ and $\pi^*$, respectively. Furthermore, assume that the Markov chain $P^*$ is reversible. If $\lVert P - P^* \rVert_2 < \mu^*(P^*)$ where $\mu^*(P^*)$ is the spectral gap of $P^*$, then $$ \lVert \pi - \pi^* \rVert_{2} \leq \frac{\lVert {\pi^*}^\top(P^* - P)\rVert_{2} }{\mu^*(P^*) - \lVert P - P^* \rVert_{2} }\,. $$ \end{lemma} \begin{proof} We have: \begin{equation*} \begin{aligned} {\pi^*}^\top - \pi^\top &= {\pi^*}^\top P^* - \pi^\top P\\ &= {\pi^*}^\top (P^* - P + P) - \pi^\top P \\ &= {\pi^*}^\top (P^* - P) + {\pi^*}^\top P - \pi^\top P \\ &= {\pi^*}^\top (P^* - P) + {(\pi^* - \pi)}^\top P\\ &= {\pi^*}^\top (P^* - P) + {(\pi^* - \pi)}^\top (P- P^* + P^*)\\ &= {\pi^*}^\top (P^* - P) + {(\pi^* - \pi)}^\top (P- P^*) + ({\pi^* - \pi})^\top P^*\\ &= {\pi^*}^\top (P^* - P) + {(\pi^* - \pi)}^\top (P- P^*) + ({\pi^* - \pi})^\top (P^* - \mb 1 {\pi^*}^\top )\,.\\ \end{aligned} \end{equation*} The last equality comes from the simple observation that $(\pi^* - \pi)^\top \mb 1 = 0$. We thus obtain the following normed inequality: $$ \lVert \pi - \pi^* \rVert_2 \leq \lVert{\pi^*}^\top (P^* - P)\rVert_2 + \lVert{\pi^* - \pi}\rVert_2 \cdot \lVert P- P^*\rVert_2 + \lVert{\pi^* - \pi}\rVert_2 \cdot \lVert P^* - \mb 1 {\pi^*}^\top\rVert_2 \,.$$ If we can show that $1-\lVert P^* - \mb 1 {\pi^*}^\top\rVert_2$ is exactly the spectral gap of the reversible Markov chain $P^*$, then the final inequality is readily obtained after a simple rearrangement. We devote the rest of the proof towards this. Because $P^*$ is reversible, i.e., $\pi^*_i P^*_{ij} = \pi_j^* P_{ji}^*$, it can be checked that, with $\Lambda := \diag(\pi^*)$, the matrix $\Lambda^{1/2} P^* \Lambda^{-1/2}$ is symmetric and so is $\Lambda^{1/2} \mb 1{\pi^*}^\top \Lambda^{-1/2}$. Because there is a similarity transformation between $P^* - \mb 1{\pi^*}^\top$ and $\Lambda^{1/2} (P^*- \mb 1{\pi^*}^\top) \Lambda^{-1/2}$, it suffices to analyze the spectrum of the latter symmetric matrix. Let $v = [\sqrt {\pi^*_1}, \ldots, \sqrt {\pi^*_m}]^\top$ (it has unit length). It can be checked that \begin{enumerate} \item $v^\top \Lambda^{1/2}P^*\Lambda^{-1/2} = {\pi^*}^\top P^* \Lambda^{-1/2} = v^\top$. Essentially, $v$ is a left eigenvector associated with eigenvalue 1. \item $\Lambda^{1/2}\mb 1 {\pi^*}^\top \Lambda^{-1/2} = vv^\top$. \end{enumerate} These two observations and the elementary fact that a Markov chain has leading eigenvalue 1 readily imply that $1- \lVert P^* - \mb 1 {\pi^*}^\top \rVert_2$ is exactly the spectral gap of $P^*$.
\end{proof} \subsection{Bounding the spectral gap} Towards bounding the denominator of the eigen-perturbation bound, we first lower bound the spectral gap of the idealized Markov chain $P^*$. We state the following useful comparison lemma, which has been presented as Lemma 2 of \cite{negahban2017rank} and is originally due to \cite{diaconis1993comparison}. \begin{lemma}\label{lem:comparison-lemma} Consider two reversible Markov chains $P^*, Q$ with stationary distributions $\pi^*, \pi$, respectively, that are defined on the same graph $G(V, E)$ of $m$ states. That is, $P^*_{ij} = 0$ and $Q_{ij} =0$ if $(i, j)\notin E$. Define $\alpha = \min_{(i, j)\in E} \frac{\pi^*_i P_{ij}^* }{\pi_i Q_{ij} } $ and $\beta = \max_{i} \frac{\pi_i^*}{\pi_i}$. We have $$ \frac{\mu^*(P^*) }{\mu^*(Q)} \geq \frac{\alpha}{\beta}\,, $$ where $\mu^*(\cdot)$ is the spectral gap operator. \end{lemma} Note that the comparison lemma lower bounds the spectral gap of a reversible Markov chain in terms of that of another reversible Markov chain. Consider a Markov chain whose pairwise transition probabilities are defined as follows: \begin{equation}\label{eqn:reference-mc} Q_{ij} = \begin{cases} \frac{B_{ij}}{d} &\text{for } i\neq j \\ 1 - \frac{1}{d} \sum_{k\neq i} B_{ik} &\text{for } i= j \end{cases} \end{equation} where $d$ is the same normalization constant as in the idealized Markov chain described in Lemma \ref{lem:reversibility}. By design, this is a random walk whose stationary distribution is the uniform distribution, $\pi_i = \frac{1}{m}$. \begin{lemma}\label{lem:spectral-gap-reference} Conditioned on event $\A$, $$ \mu^*(Q) \geq \frac{1}{3}\,. $$ \end{lemma} \begin{proof} Let $\lambda_{\max, \indep}(Q)$ denote the second largest eigenvalue of $Q$ and let $D := \diag(d\, \mb 1)$. We have: \begin{equation*} \begin{aligned} Q&= I - D^{-1} \diag(B^\top \mb 1) + D^{-1}B\\ \Rightarrow \lambda_{\max, \indep} (Q) &= \lambda_{\max, \indep} (I - D^{-1} \diag(B^\top \mb 1) + D^{-1}B)\\ &= \lambda_{\max, \indep} (I - \underbrace{\big[D^{-1} \diag(B^\top \mb 1) - D^{-1}B\big]}_{Laplacian} )\\ &= 1 - \lambda_{\min, \indep} (D^{-1} \diag(B^\top \mb 1) - D^{-1}B)\\ \Rightarrow 1 - \lambda_{\max, \indep}(Q) &= \lambda_{\min, \indep} (D^{-1} \diag(B^\top \mb 1) - D^{-1}B) \end{aligned} \end{equation*} In these derivation steps, we have made use of the fundamental property of the Laplacian of a weighted graph: it has an eigenvalue 0 corresponding to the eigenvector proportional to $\mb 1_m$. We now need to lower bound $\lambda_{\min, \indep} (D^{-1} \diag(B^\top \mb 1) - D^{-1}B)$. Conditioned on event $\A$: \begin{equation*} \begin{aligned} \lambda_{\min, \indep} (D^{-1} \diag(B^\top \mb 1) - D^{-1}B) &= \frac{1}{d} \cdot \lambda_{\min, \indep}( \diag(B^\top \mb 1) -B)\\ &= \frac{1}{d}\cdot\min_{u \indep \mb 1, \lVert u \rVert_2 = 1} \sum_{i<j} (u_i - u_j)^2 B_{ij} \\ &\geq \frac{1}{d}\cdot\frac{1}{2}np^2 \cdot \min_{u \indep \mb 1_m, \lVert u \rVert_2 = 1} \sum_{i<j} (u_i - u_j)^2\\ &= \frac{1}{d}\cdot\frac{1}{2}np^2 \cdot \min_{u \indep \mb 1_m, \lVert u \rVert_2 = 1} \, u^\top \big[m \, I_m - \mb 1_m \mb 1_m^\top \big] u = \frac{1}{2d}mnp^2\,, \end{aligned} \end{equation*} where the sums run over unordered pairs $i<j$. Substituting $d=\frac{3}{2}mnp^2$ completes the proof. \end{proof} \begin{lemma}\label{lem:spectral-gap} Conditioned on event $\A$, $$\mu^*(P^*) \geq \frac{\gamma}{3e^{2\kappa}}\,, $$ where $\gamma = \min_{l\in[n], i, j \in [m]} \E[X_{li}(1-X_{lj})]$.
\end{lemma} \begin{proof} To prove the above lower bound, we combine Lemma \ref{lem:comparison-lemma} and the definition of the reference Markov chain $Q$ in Equation (\ref{eqn:reference-mc}) with a lower bound on $\alpha$ and an upper bound on $\beta$. We have the following lower bound on $\alpha$: \begin{equation*} \begin{aligned} \alpha &= \min_{i, j} \frac{\pi^*_i P^*_{ij}}{\pi_i Q_{ij} }\\ &= \min_{i, j} \frac{\pi^*_i P^*_{ij}}{\frac{1}{m} \cdot \frac{1}{d} \sum_{l=1}^n A_{li} A_{lj} } = \min_{i, j} \frac{\pi^*_i \frac{1}{d} \sum_{l=1}^n A_{li}A_{lj} \E[X_{li}(1-X_{lj})] }{\frac{1}{m} \cdot \frac{1}{d} \sum_{l=1}^n A_{li} A_{lj} }\\ &= \min_{i, j} \frac{\pi^*_i \sum_{l=1}^n A_{li}A_{lj} \E[X_{li}(1-X_{lj})] }{\frac{1}{m} \cdot \sum_{l=1}^n A_{li} A_{lj} } \geq \min_{i, j} \frac{\pi^*_{\min} \gamma B_{ij} }{\frac{1}{m} \cdot B_{ij} } = \frac{\pi^*_{\min}\gamma }{\frac{1}{m}}\\ &\geq \frac{\gamma }{e^{\kappa}}\,. \end{aligned} \end{equation*} The last inequality follows from $ \pi ^*_{\min} \geq \frac{1}{m e^\kappa}$ (stated earlier). On the other hand, we have the following upper bound on $\beta$: \begin{equation*} \begin{aligned} \beta &= \max_{i} \frac{\pi_i^*}{\pi_i} = \max_i \frac{w_i^*/\sum_{k=1}^m w_k^* }{1/m}\\ &\leq m \cdot \frac{e^{\kappa}}{m} = e^{\kappa} \,, \end{aligned} \end{equation*} where the inequality uses $\pi^*_{\max} \leq \frac{e^\kappa}{m}$ (stated earlier). Combining the upper bound on $\beta$ with the lower bound on $\alpha$, we have $$ \mu^*(P^*) \geq \frac{\gamma }{e^{2\kappa} } \cdot \mu^*(Q)\,. $$ The rest of the proof follows from the conclusion of Lemma \ref{lem:spectral-gap-reference}. \end{proof} \subsection{Bounding the matrix error term $\lVert P - P^* \rVert_2$} \begin{lemma}\label{lem:operator-norm-bound} Suppose event $\A$ holds. Fix a small constant $\epsilon < 1$. Suppose further that $np^2\geq \frac{C_2\log m}{\gamma^2\epsilon^2}$ for a sufficiently large constant $C_2$ (e.g., $C_2 \geq 30$). Then $$\lVert P - P^* \rVert_2 \leq 2\epsilon\gamma $$ with probability at least $1-\exp{-\frac{\gamma^2\epsilon^2np^2}{10}}$ over the random responses of the users, where $\gamma = \min_{l\in [n], i\neq j \in [m]} \E[X_{li}(1-X_{lj})]$. \end{lemma} \begin{proof} Fix a pair $i\neq j$ and let $\mu = \frac{1}{B_{ij}} \sum_{l=1}^n A_{li}A_{lj} \E[X_{li}(1-X_{lj})]$ (note that $\gamma \leq \mu \leq 1$). Applying the Chernoff bound gives us \begin{equation*} \begin{aligned} &\Pr(\lvert \sum_{l=1}^n A_{li}A_{lj} \big(X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big) \rvert > \epsilon\gamma B_{ij}\,\lvert \, \A) \\ &=\Pr(\lvert \sum_{l=1}^n A_{li}A_{lj} \big(X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big) \rvert > \frac{\epsilon\gamma}{\mu}\cdot \mu B_{ij} \,\lvert \,\A) \\ &\leq 2\exp{-\frac{\gamma^2\epsilon^2/\mu^2 \cdot \mu B_{ij} }{\frac{1}{2}+2}}\\ &\leq 2\exp{-\frac{\gamma^2\epsilon^2 \cdot \frac{np^2}{2} }{\frac{1}{2}+2}}\\ &= \exp{-\frac{\gamma^2\epsilon^2np^2}{5}+\ln 2}\,, \end{aligned} \end{equation*} where the second inequality uses $\mu \leq 1$ and the fact that $B_{ij} \geq \frac{np^2}{2}$ conditioned on $\A$. One can see that so long as $np^2 \geq \frac{30\ln m}{\gamma^2\epsilon^2}$ (and noting that $30\ln m \geq 20\ln m + 10\ln 2 \,\forall m \geq 2$) then $\exp{-\frac{\gamma^2\epsilon^2np^2}{5}+\ln 2} = \exp{-2\cdot \frac{\gamma^2\epsilon^2np^2}{10} +\ln 2} \leq \exp{-\frac{\gamma^2\epsilon^2np^2}{10}-2\ln m} $. Applying the union bound over all pairs $i\neq j$, we have with probability at least $1-\exp{-\frac{\gamma^2\epsilon^2np^2}{10}}$, $\lvert P_{ij} - P^*_{ij}\rvert \leq \frac{1}{d} \gamma\epsilon B_{ij} \leq \frac{3\gamma\epsilon np^2 }{2d}$ for all pairs $i\neq j$.
We then have \begin{equation*} \begin{aligned} \lVert P - P^* \rVert_{2} &\leq \lVert \diag(P) - \diag(P^*) \rVert_{2} + \lVert [P - P^*]_{i\neq j}\rVert_{2} \\ &\leq \max_{i} \lvert P_{ii} - P^*_{ii} \rvert + \max_{u, v: \lVert u \rVert = \lVert v\rVert = 1 } \sum_{i\neq j} u_i (P_{ij} - P_{ij}^*)v_j\\ &\leq \max_{i} \lvert \sum_{j\neq i} (P_{ij}-P_{ij}^*) \rvert + \max_{i\neq j} \lvert P_{ij}-P^*_{ij}\rvert \cdot \sum_{i\neq j} \lvert u_i\rvert \lvert v_j\rvert\\ &\leq 2 m \cdot \max_{i\neq j} \lvert P_{ij} - P_{ij}^* \rvert \\ &\leq \frac{3\epsilon\gamma mnp^2}{d} = 2\epsilon\gamma\,.\\ \end{aligned} \end{equation*} The conclusion follows from $d = \frac{3mnp^2}{2}$. \end{proof} Before moving on to bounding the projected error term, we first note that the eigen-perturbation bound in Lemma \ref{lem:perturb-bound-mc} holds when $\mu^*(P^*) > \lVert P - P^* \rVert_2$. We solve for $\epsilon$ such that $$ 2\epsilon\gamma = \frac{1}{2} \cdot \frac{\gamma}{3e^{2\kappa}} \Rightarrow \epsilon = \frac{1}{12e^{2\kappa}} \,. $$ We summarize this condition with the following corollary: \begin{coro}\label{coro:spectral-error-difference} Suppose that event $\A$ happens and that $$np^2 \geq \frac{C_2 12^2 e^{4\kappa}}{\gamma^2} \log m $$ for a sufficiently large constant $C_2$ (e.g., $C_2 \geq 30$). Then: $$ \mu^*(P^*) - \lVert P - P^* \rVert_2 \geq \frac{\gamma}{6 e^{2\kappa}} $$ with probability at least $1-\exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}}$ over the random responses of the users. \end{coro} \begin{proof} Substituting $\epsilon = \frac{1}{12e^{2\kappa}}$ into the statement of Lemma \ref{lem:operator-norm-bound} gives: so long as $np^2 \geq \frac{30 \cdot 12^2 e^{4\kappa}}{\gamma^2} \cdot \log m$, then $\lVert P - P^* \rVert_2 \leq 2\epsilon\gamma = \frac{\gamma}{6e^{2\kappa}}$ with probability at least $1-\exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} $. The conclusion then follows from the lower bound $\mu^*(P^*) \geq \frac{\gamma}{3e^{2\kappa}}$ of Lemma \ref{lem:spectral-gap}. \end{proof} \subsection{Bounding the projected error term $\lVert {\pi^*}^\top (P - P^*)\rVert_2 $} \begin{lemma}\label{lem:projected-error-bound} Conditioned on event $\A$, $$ \lVert {\pi^*}^\top (P - P^*)\rVert_2 \leq \frac{2e^\kappa \sqrt{16 \cdot \max\{m, \log np^2 \} }}{m \sqrt{np^2}}\, $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} $ over the random responses of the users. \end{lemma} \begin{proof} We follow a similar argument as in the proof of Lemma 8.4 in \cite{chen2020partial} in turning the normed term into a linear term. For completeness we reproduce this argument here. We first have \begin{equation*} \begin{aligned} \lVert {\pi^*}^\top (P - P^*)\rVert_2 &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j=1}^m \pi_j^*(P_{ji}-P_{ji}^*) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*(P_{ii}-P_{ii}^*) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*\sum_{j\neq i} (P^*_{ij}-P_{ij}) \bigg)^2}\\ &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)^2}\,. \\ \end{aligned} \end{equation*} Let $\B$ denote the unit norm ball in $\R^m$ and $\mathcal{V}$ denote a $1/2$-net of $\B$. That is, for every $u \in \B$, there exists $v\in \mathcal{V}$ such that $\lVert u - v\rVert_2 \leq \frac{1}{2}$.
For any $u\in \B$ and any corresponding $v$, we have \begin{equation*} \begin{aligned} &\sum_{i=1}^m u_i\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)\\ &= \sum_{i=1}^m v_i\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg) + \sum_{i=1}^m (u_i-v_i)\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)\\ &\leq \sum_{i=1}^m v_i\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg) + \frac{1}{2} \cdot \sqrt{\sum_{i=1}^m \bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)^2}\,. \\ \end{aligned} \end{equation*} Maximizing both sides of the above inequality with respect to $u$ and rearranging the terms gives \begin{equation*} \begin{aligned} &\sqrt{\sum_{i=1}^m \bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)^2}\\ &\leq 2 \max_{v\in \mathcal{V}} \sum_{i=1}^m v_i\bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^* (P^*_{ij}-P_{ij})\big] \bigg)\,. \end{aligned} \end{equation*} In summary, we can upper bound the normed term by a more manageable linear term as follows: \begin{equation}\label{eqn:norm-linear} \begin{aligned} \lVert {\pi^*}^\top (P - P^*)\rVert_2 &= \sqrt{\sum_{i=1}^m\bigg(\sum_{j=1}^m \pi_j^*(P_{ji}-P_{ji}^*) \bigg)^2}\\ &\leq 2 \max_{v\in \mathcal{V}} \sum_{i=1}^m v_i \bigg(\sum_{j\neq i} \big[\pi_j^*(P_{ji}-P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij})\big] \bigg)\,. \end{aligned} \end{equation} We now expand on the linear term: \begin{equation*} \begin{aligned} &\sum_{i=1}^m v_i \bigg( \sum_{j\neq i} \pi_j^* (P_{ji} - P_{ji}^*) + \pi_i^*(P_{ij}^* - P_{ij}) \bigg)\\ &= \frac{1}{d} \sum_{i=1}^m v_i \bigg( \sum_{j\neq i} \pi_j^* \big[\sum_{l=1}^n A_{li}A_{lj}\big(X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big) \big] \\ &\quad\quad- \sum_{j\neq i}\pi_i^* \big[\sum_{l=1}^n A_{li}A_{lj}\big(X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big) \big] \bigg)\\ &=\frac{1}{d} \sum_{l=1}^n\sum_{i=1}^m \bigg( \sum_{j\neq i} v_i \pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg) \\ &\quad-\frac{1}{d}\sum_{l=1}^n\sum_{i=1}^m \bigg( \sum_{j\neq i} v_i \pi_i^*A_{li}A_{lj} \big[X_{li}(1-X_{lj}) - \E[X_{li}(1-X_{lj})]\big]\bigg) \\ &=\frac{1}{d} \sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} \bigg( (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big]\bigg)\,. \end{aligned} \end{equation*} We will use the method of bounded differences to obtain a concentration inequality on the above sum. Note that this sum is essentially a function $f$ of $n\times m$ independent Bernoulli random variables $\{X_{li}\}$. Let $X$ and $X'$ be identical copies except for $X_{li}\neq X'_{li}$. \begin{equation}\label{eqn:absolute-diff} \begin{aligned} \lvert f(X) - f(X')\rvert &= \frac{1}{d} \lvert \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) [X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*] \rvert \,.\\ \end{aligned} \end{equation} Ignore the normalization factor $d$ for now. Using Cauchy-Schwarz, we can upper bound the absolute difference term as \begin{equation*} \begin{aligned} &\bigg\lvert\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) \underbrace{[X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*]}_{\lvert\cdot\rvert \leq \max\{\pi^*_i, \pi^*_j\} \leq \frac{e^\kappa}{m}}\bigg\rvert\\ &\leq \frac{e^\kappa}{m} \cdot \sqrt m \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} = \frac{e^\kappa}{\sqrt m} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} \,.\\ \end{aligned} \end{equation*} At this point we can invoke a concentration inequality based on bounded differences (McDiarmid's inequality).
Fixing a $v \in \mathcal{V}$, we have \begin{equation*} \begin{aligned} &\Pr\bigg(\sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big] > t\,\lvert\, \A \bigg) \\ &\leq 2\exp{-\frac{2t^2}{\sum_{l=1}^n \sum_{i=1}^m \frac{e^{2\kappa} }{m} \cdot \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2 }}\\ &= 2\exp{-\frac{2t^2}{ \frac{e^{2\kappa} }{m} \sum_{i\neq j} B_{ij} (v_i - v_j)^2 }}\\ &[\text{Conditioned on $\A$, $B_{ij} \leq \frac{3np^2}{2} $}]\\ &\leq 2\exp{-\frac{2t^2}{\frac{e^{2\kappa} }{m} \sum_{i\neq j} \frac{3}{2}np^2 \cdot (v_i - v_j)^2 }}\\ &\leq 2\exp{-\frac{2t^2}{ \frac{e^{2\kappa}}{m} \frac{3}{2}mnp^2 }}\\ &= 2\exp{-\frac{4t^2}{3e^{2\kappa} np^2 }} \,.\\ \end{aligned} \end{equation*} Note that our $\frac{1}{2}$-net has cardinality at most $(\frac{2}{\frac{1}{2}} + 1)^m = 5^m$ (cf. Corollary 4.2.13 \cite{vershynin2018high}) and we are interested in the probability that a large deviation does not occur for any $v\in \mathcal{V}$. Applying the union bound over all $v\in \mathcal{V}$, we have \begin{equation*} \begin{aligned} &\Pr\bigg(\sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big] > t \ \text{ for some } v \in \mathcal{V} \,\lvert \,\A \bigg) \\ &\leq 2\cdot 5^m \cdot \exp{ -\frac{4t^2}{3e^{2\kappa}np^2} }\\ &\leq \exp{-\frac{4t^2}{3e^{2\kappa}np^2} + 4m }\,. \end{aligned} \end{equation*} Set $$t = e^\kappa \sqrt{\frac{3np^2}{4}} \cdot \sqrt{4m + 12\max\{m, \log{np^2}\} } .$$ Then $ \exp{-\frac{4t^2}{3e^{2\kappa}np^2} + 4m } \leq \min\{\exp{-12m}, \frac{1}{(np^2)^{12}}\}$. Consequently, $$ \lVert {\pi^*}^\top (P - P^*)\rVert_2 \leq \frac{ 2e^\kappa\sqrt{\frac{3np^2}{4}} \cdot \sqrt{16 \cdot \max\{m, \log np^2 \}}}{d} \leq \frac{2e^\kappa \sqrt{np^2} \cdot \sqrt{12 \cdot \max\{m, \log np^2 \} }}{d} $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} $. Note that the factor of $2$ comes from Equation (\ref{eqn:norm-linear}). Substituting $d = \frac{3mnp^2}{2}$ into the bound above finishes the proof. \end{proof} \subsection{Putting it all together} The results of the previous sections provide bounds on the numerator and denominator of the eigenperturbation bound in Lemma \ref{lem:perturb-bound-mc}. We can now combine all of them towards obtaining a bound on $\lVert \beta - \beta^*\rVert_2$. As an intermediate step, we first obtain the following bound on $\lVert \pi - \pi^* \rVert_2$: \begin{theorem}\label{thm:stationary-error-bound} Consider the random sampling scheme described in Section \ref{sect:error-guarantees}. Suppose that $np^2 \geq \max\{C_2 \frac{12^2 e^{4\kappa}}{\gamma^2} \log m, C_1\log m\} $ for sufficiently large constants (e.g., $C_2 \geq 30$, $C_1 \geq 101$). Then $$ \lVert \pi - \pi^* \rVert_{2} \leq \frac{(48/\sqrt 3)\, e^{3\kappa}}{\gamma}\cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{m\sqrt{np^2}} $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{theorem} \begin{proof} We first assume that $\A$ holds. The probability bound in the theorem statement can then be obtained following a simple union bound argument. From the conclusions of Lemma \ref{lem:projected-error-bound}, we have $$ \Pr\bigg(\lVert {\pi^*}^\top (P - P^*)\rVert_2 > \frac{2e^\kappa \sqrt{16 \cdot \max\{m, \log np^2 \} }}{m \sqrt{np^2}} \,\lvert \, \A \bigg) \leq \min\{\exp{-12m}, \frac{1}{(np^2)^{12}}\} \,.
$$ From the conclusion of Corollary \ref{coro:spectral-error-difference}, we have $$ \Pr\bigg(\mu^*(P^*) - \lVert P - P^* \rVert_2 < \frac{\gamma}{6 e^{2\kappa}} \,\lvert \, \A \bigg) \leq \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} \,. $$ Applying Lemma \ref{lem:perturb-bound-mc} conditioned on $\A$, together with a union bound over the two rare events above, the following holds with probability at least $1- \min\{\exp{-12m}, \frac{1}{{(np^2)}^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}}$: $$ \lVert \pi - \pi^* \rVert_{2} \leq \frac{\lVert {\pi^*}^\top(P^* - P)\rVert_{2} }{\mu^*(P^*) - \lVert P - P^* \rVert_{2} } \leq \frac{(48/\sqrt 3)\, e^{3\kappa}}{\gamma}\cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{m\sqrt{np^2}}\,. $$ Let the above good event be $\mathcal{E}$. \begin{equation*} \begin{aligned} \Pr(\mathcal{E}^c) &= \Pr(\mathcal{E}^c, \A) + \Pr(\mathcal{E}^c, \A^c)\\ &\leq \Pr(\mathcal{E}^c\,\lvert\,\A) \cdot \Pr(\A) + \Pr(\A^c)\\ &\leq \Pr(\mathcal{E}^c\,\lvert\,\A) + \Pr(\A^c)\\ &\leq \min\{\exp{-12m}, \frac{1}{{(np^2)}^{12}} \} + \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} + \exp{-\frac{np^2}{20}}\,. \end{aligned} \end{equation*} This completes the proof. \end{proof} With these results, we are finally ready to prove the main theorem providing error bounds on the parameters returned by our spectral algorithm. \begin{reptheorem}{thm:parameters-error-bound} Consider the random sampling scheme described in Section \ref{sect:error-guarantees}. Suppose that $np^2 \geq \max\{C_2 \frac{12^2 e^{4\kappa}}{\gamma^2} \log m, C_1\log m\} $ for sufficiently large constants (e.g., $C_2 \geq 30$, $C_1 \geq 101$). Then the output of the spectral algorithm (Algorithm \ref{alg:spectral}) satisfies $$ \lVert \beta - \beta^* \rVert_2 \leq \frac{96/\sqrt{3}\cdot e^{4\kappa}}{\gamma} \cdot \frac{\sqrt{ \max\{m, \log np^2 \} } }{\sqrt{np^2}} $$ with probability at least $1- \min\{\exp{-12m}, \frac{1}{(np^2)^{12}} \} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{np^2}{20}} $. \end{reptheorem} \begin{proof} Suppose for now that there is a factor $L$ such that $\lvert \log x - \log x' \rvert \leq L \lvert x - x'\rvert$ for all $x, x' \in [\frac{w^*_{\min}}{\sum_j w^*_j}, \frac{w^*_{\max}}{\sum_j w^*_j}]$. One can see that the output of the spectral algorithm is essentially $$ \beta_i = \log \pi_i - \frac{1}{m} \sum_{k=1}^m \log \pi_k \,. $$ And $\beta^*$ is related to $\pi^*$ via the same transformation: $$ \beta^*_i = \log \pi^*_i - \frac{1}{m} \sum_{k=1}^m \log \pi^*_k \,. $$ The goal is to relate $\lVert \beta-\beta^*\rVert_2$ to $\lVert \pi -\pi^*\rVert_2$. \begin{equation*} \begin{aligned} \lVert \beta - \beta^* \rVert_2^2 &= \sum_{i=1}^m (\beta_i - \beta^*_i)^2 = \sum_{i=1}^m (\log \pi_i - \frac{1}{m}\sum_k \log \pi_k - \log \pi_i^* + \frac{1}{m}\sum_k \log \pi^*_k )^2 \\ &= \sum_{i=1}^m (\log \pi_i -\log \pi^*_i + \frac{1}{m}\sum_k [\log \pi^*_k - \log \pi_k] )^2\\ &\leq 2\sum_{i=1}^m \bigg((\log \pi_i -\log \pi^*_i)^2 + (\frac{1}{m}\sum_k [\log \pi^*_k - \log \pi_k] )^2\bigg)\\ &= 2\sum_{i=1}^m (\log \pi_i -\log \pi^*_i)^2 + 2m \cdot \frac{1}{m^2} \big(\sum_k [\log \pi^*_k - \log \pi_k] \big)^2\\ &\leq 2L^2 \sum_{i=1}^m \lvert \pi_i - \pi^*_i \rvert^2 + \frac{2}{m} \cdot m \cdot \sum_{k=1}^m (\log \pi^*_k - \log \pi_k)^2\\ &\leq 2L^2 \lVert \pi - \pi^* \rVert_ 2^2 + 2L^2 \sum_{k=1}^m \lvert \pi_k - \pi^*_k \rvert^2\\ &= 4L^2 \lVert \pi - \pi^* \rVert_2^2\,.
\end{aligned} \end{equation*} Taking the square root of both sides of the inequality gives $$ \lVert \beta - \beta^* \rVert_2 \leq 2L \lVert \pi - \pi^* \rVert_2\,. $$ Observe that $w^*_{\min}/\sum_j w^*_j \geq \frac{1}{me^{\kappa}}$ (stated earlier). One can thus easily see that the $\log$ function within the dynamic range has gradient absolutely bounded by $me^{\kappa}$. Therefore $L \leq me^{\kappa}$. Substituting this upper bound on $L$ into the inequality obtained above and combining with the conclusion of Theorem \ref{thm:stationary-error-bound} completes the proof. \end{proof} \begin{repcoro}{coro:consistency} Consider the setting of Theorem \ref{thm:parameters-error-bound}. For a fixed $m$ and $p = 1$, the spectral algorithm is a consistent estimator of $\beta^*$. That is, its output $\beta$ satisfies $\lim_{n\rightarrow \infty} \Pr(\lVert \beta - \beta^* \rVert_2 < \epsilon) = 1 \,, \forall \epsilon > 0\,.$ \end{repcoro} \begin{proof} It is easy to see from the conclusion of Theorem \ref{thm:parameters-error-bound} that as $n\rightarrow \infty$, for a fixed $m$ and $p=1$ (or any constant $p$ for that matter), $\lVert \beta - \beta^* \rVert_2 \rightarrow 0$ and $$1- \min\{\exp{-12m}, \frac{1}{n^{12}} \} - \exp{-\frac{\gamma^2n}{10 \cdot 12^2 \cdot e^{4\kappa}}} - \exp{-\frac{n}{20}} \rightarrow 1\,. $$ \end{proof} We now prove the error bounds when $m$ is allowed to grow. The proof of Theorem \ref{coro:parameters-error-bound-growing-m} is almost identical to that of Theorem \ref{thm:parameters-error-bound}. The key difference is that under event $\A^+$ (which happens with probability at least $1-n^{-9}-\exp{-\frac{np^2}{20}}$ given that $mp \geq C''\log n$ for a sufficiently large constant $C''$), one can obtain a stronger bound on the projected error term $\lVert {\pi^*}^\top(P - P^*)\rVert_2$. The difference in the projected term is summarized by the lemma below. \begin{lemma}\label{lem:projected-error-bound-growing-m} Conditioned on event $\A^+$, $$ \lVert {\pi^*}^\top (P - P^*)\rVert_2 \leq e^\kappa \sqrt{\frac{32}{mnp}} $$ with probability at least $1- \exp{-12m}$ over the random responses of the users. \end{lemma} \begin{proof} The proof here is almost identical to that of Lemma \ref{lem:projected-error-bound}. The key difference is that, conditioned on $\A^+$, we can obtain a sharper bound, again using the bounded difference method.
Namely, one can invoke Cauchy-Schwarz on the absolute difference term in Equation (\ref{eqn:absolute-diff}) as follows: \begin{equation*} \begin{aligned} &\bigg\lvert\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j) \underbrace{[X_{lj}(\pi^*_i - \pi_j^*) - \pi_i^*]}_{\lvert\cdot\rvert \leq \max\{\pi^*_i, \pi^*_j\} \leq \frac{e^\kappa}{m}}\bigg\rvert\\ &\leq \frac{e^\kappa}{m}\cdot \sum_{j\neq i} A_{li}A_{lj}\lvert v_i - v_j\rvert = \frac{e^\kappa}{m}\cdot \sum_{j\neq i} A_{lj}\cdot A_{li}A_{lj}\lvert v_i - v_j\rvert\\ &\leq \frac{e^\kappa}{m} \cdot \sqrt{\underbrace{\sum_j A_{lj}}_{\leq \frac{3}{2}mp}} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2} = \sqrt{3/2} \cdot \frac{e^\kappa\sqrt p}{\sqrt m} \cdot \sqrt{\sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2}\,.\\ \end{aligned} \end{equation*} Continuing the same procedure as in the proof of Lemma \ref{lem:projected-error-bound} gives \begin{equation*} \begin{aligned} &\Pr\bigg(\sum_{l=1}^n \sum_{i=1}^m \sum_{j\neq i} (v_i - v_j)\pi_j^*A_{li}A_{lj} \big[X_{lj}(1-X_{li}) - \E[X_{lj}(1-X_{li})]\big] > t \ \text{ for some } v \in \mathcal{V} \,\lvert \,\A^+ \bigg) \\ &\leq 2\cdot 5^m \cdot \exp{-\frac{2t^2}{\sum_{l=1}^n \sum_{i=1}^m \frac{3e^{2\kappa}p }{2m} \cdot \sum_{j\neq i} A_{li}A_{lj}(v_i - v_j)^2 }}\\ &= 2\cdot 5^m\cdot\exp{-\frac{2t^2}{ \frac{3e^{2\kappa} p }{2m} \sum_{i\neq j} B_{ij} (v_i - v_j)^2 }}\\ &\leq 2\cdot 5^m \cdot \exp{ -\frac{8t^2}{9e^{2\kappa}np^3} }\\ &\leq \exp{-\frac{8t^2}{9e^{2\kappa}np^3} + 4m }\,. \end{aligned} \end{equation*} Set $$t = e^\kappa \sqrt{\frac{9np^3}{8}} \cdot \sqrt{16m} .$$ Then $ \exp{-\frac{8t^2}{9e^{2\kappa}np^3} + 4m } \leq \exp{-12m}$. Consequently, $$ \lVert {\pi^*}^\top (P - P^*)\rVert_2 \leq \frac{ 2e^\kappa\sqrt{ 18 mnp^3 } }{d} = e^\kappa\sqrt{\frac{32}{mnp}} $$ with probability at least $1- \exp{-12m} $, where the factor of $2$ again comes from Equation (\ref{eqn:norm-linear}) and the last equality follows by substituting $d = \frac{3mnp^2}{2}$. This finishes the proof. \end{proof} With the numerator of the eigenperturbation bound in Lemma \ref{lem:perturb-bound-mc} updated, we now have the proof of Theorem \ref{coro:parameters-error-bound-growing-m}. \begin{reptheorem}{coro:parameters-error-bound-growing-m} Consider the random sampling scheme described in Section \ref{sect:error-guarantees}. Suppose that $np^2 \geq \max\{C_2 \frac{12^2 e^{4\kappa}}{\gamma^2} \log m, C_1\log m\} $ and $mp \geq C''\log n$ for sufficiently large constants $C_1, C_2, C''$ (e.g., $C_2 \geq 30$, $C_1, C'' \geq 101 $). Then the output of the spectral algorithm (Algorithm \ref{alg:spectral}) satisfies $$ \lVert \beta - \beta^* \rVert_2 \leq \frac{96/\sqrt{2}\cdot e^{4\kappa}}{\gamma} \cdot \frac{\sqrt m}{\sqrt{np}} $$ with probability at least $1- \exp{-12m} - \exp{-\frac{\gamma^2np^2}{10 \cdot 12^2 \cdot e^{4\kappa}}} - n^{-9}$. \end{reptheorem} \begin{proof} Applying Lemma \ref{lem:perturb-bound-mc} with Lemma \ref{lem:projected-error-bound-growing-m} and Corollary \ref{coro:spectral-error-difference} gives $$ \lVert \pi - \pi^* \rVert_2 \leq \frac{48/\sqrt{2}\cdot e^{3\kappa}}{\gamma} \cdot \frac{1}{\sqrt{mnp}} \,.$$ Following the same proof as that of Theorem \ref{thm:parameters-error-bound} with minor changes to the constant factor completes the proof. \end{proof}
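For completeness, the following sketch assembles the full pipeline analyzed in this section --- empirical Markov chain, stationary distribution, centered logarithm. It is only an illustration of the estimator (see Section \ref{sect:accelerated} for practical aspects); it assumes the synthetic \texttt{A}, \texttt{X} from the earlier snippet and uses the normalization $d = \frac{3}{2}mnp^2$ adopted throughout the analysis.
\begin{verbatim}
import numpy as np

def spectral_rasch(A, X, d):
    # Empirical chain: P[i, j] = #{l : A_li = A_lj = 1, X_li = 1, X_lj = 0} / d.
    n, m = A.shape
    P = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                P[i, j] = np.sum(A[:, i] * A[:, j] * X[:, i] * (1 - X[:, j])) / d
        P[i, i] = 1.0 - P[i].sum()
    # Stationary distribution: leading left eigenvector of P.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    beta = np.log(pi)
    return beta - beta.mean()   # center, matching the normalization of beta*

# Usage with the synthetic data generated earlier:
# beta_hat = spectral_rasch(A, X, d=1.5 * m * n * p**2)
\end{verbatim}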
{ "attr-fineweb-edu": 1.155273, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUawnxK02iP4Y2sqcX
\section{Introduction} Fractional diffusion equations (FDEs) generalize classical partial differential equations (PDEs). Their recent success is due to the non-local behavior of fractional operators resulting in an appropriate modeling of anomalous diffusion phenomena that appear in several applicative fields, like imaging or electrophysiology \cite{imag,orovio}. In particular, a standard diffusion equation can be ``fractionalized'', either by replacing the derivative in time with a fractional one whose fractional order ranges from $0$ to $1$, or by introducing a fractional derivative in space with order between $1$ and $2$. The two approaches can also be combined and lead to similar computational issues. \begin{sloppypar} The improved physical description of the considered phenomenon obtained by ``fractionalizing'' the derivatives, however, translates into a \cmag{more challenging} numerical treatment of the corresponding discretized problems. Indeed, \cmag{the evaluation/approximation of a fractional operator is numerically more expensive (and often less stable). Moreover,} even when standard local discretization methods are adopted, the non-locality of the fractional operators causes a loss of sparsity in the discretization matrices. This makes FDEs computationally more demanding than PDEs. \end{sloppypar} \cmag{Various numerical discretization methods for FDE problems (e.g., finite differences, finite volumes, finite elements, spectral methods) can be found in the literature. We refer the reader to \cite{ervin,Moroney,LW,Mao-sinum-18,Meer1,TZD,WD,Mao-sisc-17} and references therein. In the case of regular spatial domain subdivisions,} the discretization matrices inherit a Toeplitz-like structure from the space-invariant property of the underlying operators that can be exploited for the design of ad hoc iterative schemes of multigrid and preconditioned Krylov type (see, e.g., \cite{mazza2016,sisc,LS,mazza2017,PNW,PS}). In the context of finite difference/volume discretizations, we mention the structure-preserving preconditioning and the algebraic multigrid methods presented in \cite{mazza2016,sisc}. Both strategies are based on the spectral analysis of the coefficient matrices via their symbol, a function that provides an approximation of their eigenvalues/singular values. \cmag{A similar symbol-based approach has also been successfully employed in the context of isogeometric analysis (IgA) for the discretization of integer-order differential problems;} see, e.g., \cite{cmame2,sinum,MMRSS}. In these papers, the spectral information provided by the symbol has been leveraged for the design of effective preconditioners and fast multigrid/multi-iterative solvers whose convergence speed is independent of the fineness parameters and the approximation parameters. \cmag{The present work aims at uncovering the structure and studying the symbol of the discretization matrices obtained by IgA collocation for FDE problems.
As a first step towards the spectral treatment of general differential problems involving fractional diffusion operators, we consider here} the following fractional diffusion boundary value problem with absorbing boundary conditions: \begin{equation}\label{eq:FDE} \begin{cases} \frac{{\rm d}^\alpha u(x)}{{\rm d}|x|^\alpha}=s(x), & x\in\Omega,\\ u(x)=0, & x\in\mathbb R\backslash\Omega, \end{cases} \end{equation} where $\Omega:=(0,1)$, $\alpha\in(1,2)$, and \[\frac{{\rm d}^\alpha u(x)}{{\rm d}|x|^\alpha}:=\frac{1}{2\cos({\pi\alpha}/{2})}\left(\Ddlname{\alpha}{RL}u(x)+\Ddrname{\alpha}{RL}u(x)\right)\] is the so-called Riesz fractional operator, while $\Ddlname{\alpha}{RL}u(x)$, $\Ddrname{\alpha}{RL}u(x)$ are the left and right Riemann-Liouville fractional derivatives of $u$ (see Section~\ref{sub:fracder} for their definition). More precisely, we are interested in a polynomial B-spline collocation-based discretization of \eqref{eq:FDE} where the so-called Greville abscissae are chosen as collocation points. Collocation methods based on polynomial splines were applied to fractional problems for the first time in \cite{blank} and further developed in \cite{pedas}. Polynomial B-spline bases have been used for solving time-fractional problems in \cite{Pitolli} and (left-sided) space-fractional problems in \cite{Pitolli2}. Among non-polynomial spline collocation methods for fractional problems, we mention the work \cite{PP} in which the authors explore the application of fractional B-splines. Our choice of classical polynomial B-splines is motivated by the fact that, in contrast to their fractional counterparts, they have compact support and naturally fulfill boundary and/or initial conditions. Furthermore, they possess good approximation properties. Seminal results concerning the structure of the quadratic spline collocation matrices can be found in \cite{QSC2}. Therein, the authors recognize the Toeplitz-like structure of the coefficient matrices and use a classical circulant preconditioner to solve the corresponding linear systems by means of Krylov methods. To the best of our knowledge, this is the first time that the structure and the spectral properties of polynomial B-spline collocation matrices are investigated for an arbitrary polynomial degree $p$. We show that the coefficient matrices retain the Toeplitz-like structure and we study their spectral properties via their symbol. It turns out that the symbol: \begin{enumerate} \item[{(a)}] has a single zero at $0$ of order $\alpha$; \item[{(b)}] presents an exponential decay to zero at $\pi$ for increasing $p$, a so-called numerical zero, that becomes faster as $\alpha$ approaches $1$; \item[{(c)}] is bounded in the proximity of $\pi$. \end{enumerate} This translates into a mitigated conditioning in the low frequencies and a deterioration in the high frequencies when compared to second-order problems (see \cite{DGMSS}). \cmag{The symbol, and so the (asymptotic) spectral properties of the involved matrices, do not change if reaction and/or advection terms are added to \eqref{eq:FDE}. As a side result of the symbol computation,} we propose a new way of expressing both a left and a right fractional derivative of a cardinal B-spline as inner products of two fractional derivatives of cardinal B-splines. Furthermore, we provide a numerical study of the approximation behavior of polynomial B-spline collocation for an arbitrary degree $p$.
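For readers who wish to experiment, the following Python sketch numerically illustrates properties (a)--(c) by evaluating a truncated version of the series expression for the symbol $f^{p,\alpha}$ derived later in Theorem~\ref{thm:symbol}; the truncation level and the sampled values of $p$ and $\alpha$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def symbol(theta, p, alpha, L=500):
    # Truncated bi-infinite sum: f(theta) ~ sum over |l| <= L of
    # |theta + 2*l*pi|^alpha * (sin(theta/2 + l*pi)/(theta/2 + l*pi))^(p+1).
    val = np.zeros_like(theta)
    for l in range(-L, L + 1):
        t = theta / 2 + l * np.pi
        val += np.abs(theta + 2 * l * np.pi) ** alpha \
               * np.sinc(t / np.pi) ** (p + 1)
    return val

theta = np.linspace(1e-3, np.pi, 500)
alpha = 1.5
for p in (2, 3, 4, 5):
    f = symbol(theta, p, alpha)
    # (a) zero of order alpha at 0;  (b)-(c) small but nonzero value at pi.
    print(p, f[0] / theta[0] ** alpha, f[-1] / f.max())
\end{verbatim}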
\cmag{It turns out that the approximation order for smooth solutions is $p+2-\alpha$ for even $p$, and $p+1-\alpha$ for odd $p$. This is again in agreement with the approximation results known for standard (non-fractional) diffusion problems \cite{ABHRS}. We refer the reader to \cite{Cai-2019,Hao-sinum-20,Mao-sinum-18} for a smoothness analysis of the solution in (weighted) Sobolev spaces.} The paper is organized as follows. Section~\ref{sec:preliminaries} is devoted to notations, definitions, and preliminary results. In Section~\ref{sec:frac-cardinal} we present a new way of writing the fractional derivative of a cardinal B-spline. In Section~\ref{sec:iga} we describe the IgA collocation approximation of the problem reported in \eqref{eq:FDE}, while in Sections~\ref{sec:symbol-properties} and \ref{sec:symbol} we perform a detailed spectral analysis of the resulting coefficient matrices. We validate our theoretical spectral findings with a selection of numerical experiments in Section~\ref{sec:numerics}, and we also carry out a numerical study of the approximation order of the polynomial B-spline collocation method. We end with some concluding remarks in Section~\ref{sec:conclusions}. \section{Preliminaries}\label{sec:preliminaries} In this section we collect some preliminary tools on fractional derivatives, spectral analysis and IgA discretizations. Firstly, we give two definitions of fractional derivatives (Section~\ref{sub:fracder}). Secondly, after introducing the definition of spectral distribution of general matrix-sequences, we summarize the essentials of Toeplitz sequences (Section~\ref{sub:tools}). Finally, we recall the definition of B-splines and cardinal B-splines (Section~\ref{sub:bsplines}). \subsection{Fractional derivatives}\label{sub:fracder} A common definition of fractional derivatives is given by the Riemann-Liouville formula. For a given function $u$ with absolutely continuous $(m-1)$-th derivative on $[a,b]$, the left and right Riemann-Liouville fractional derivatives of order $\alpha$ are defined by \begin{align*} \begin{split} \Ddlaname{\alpha}{RL}u(x)&:=\frac{1}{\Gamma(m-\alpha)}\frac{{{\rm d}}^m}{{\rm d} x^m}\int_{a}^x(x-y)^{m-\alpha-1}u(y)\,{\rm d} y,\\ \Ddrbname{\alpha}{RL}u(x)&:=\frac{(-1)^m}{\Gamma(m-\alpha)}\frac{{{\rm d}}^m}{{\rm d} x^m}\int_{x}^{b}(y-x)^{m-\alpha-1}u(y)\,{\rm d} y, \end{split} \end{align*} with $m$ the integer such that $m-1\le\alpha<m$ and $\Gamma$ the Euler gamma function. Note that the left fractional derivative of the function $u$ computed at $x$ depends on all function values to the left of $x$, while the right fractional derivative depends on the ones to the right. Another common definition of fractional derivative was proposed by Caputo: \begin{align}\label{eq:caputo} \begin{split} \Ddlaname{\alpha}{C}u(x)&:=\frac{1}{\Gamma(m-\alpha)}\int_{a}^x(x-y)^{m-\alpha-1}u^{(m)}(y)\,{\rm d} y,\\ \Ddrbname{\alpha}{C}u(x)&:=\frac{(-1)^m}{\Gamma(m-\alpha)}\int_{x}^{b}(y-x)^{m-\alpha-1}u^{(m)}(y)\,{\rm d} y. \end{split} \end{align} Note that \eqref{eq:caputo} requires the $m$-th derivative of $u$ to be absolutely integrable. Higher regularity of the solution is typically imposed in time rather than in space. As a consequence, the Caputo formulation is mainly used for fractional derivatives in time, while Riemann-Liouville's is preferred for fractional derivatives in space. The use of Caputo's derivative provides some advantages in the treatment of boundary conditions when applying the Laplace transform method (see \cite[Chapter~2.8]{Podlubny}).
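As a concrete illustration of the left-sided definition above, the following Python sketch approximates the left Riemann--Liouville derivative by a Gr\"unwald--Letnikov sum, which is known to agree with it for sufficiently smooth functions; the step size and the monomial test function are illustrative choices.
\begin{verbatim}
import math

def gl_left(u, x, a, alpha, h=1e-4):
    # Gruenwald-Letnikov sum: h^(-alpha) * sum_k (-1)^k binom(alpha, k) u(x - k*h),
    # with coefficients generated by g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1)/k).
    K = int((x - a) / h)
    g, acc = 1.0, u(x)
    for k in range(1, K + 1):
        g *= 1.0 - (alpha + 1.0) / k
        acc += g * u(x - k * h)
    return acc / h ** alpha

# Test: the left derivative of u(x) = x^2 with a = 0 is
# Gamma(3) / Gamma(3 - alpha) * x^(2 - alpha).
alpha, x = 1.5, 0.7
exact = math.gamma(3.0) / math.gamma(3.0 - alpha) * x ** (2.0 - alpha)
print(gl_left(lambda t: t * t, x, 0.0, alpha), exact)
\end{verbatim}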
The Riemann-Liouville derivatives are related to the Caputo ones as follows \begin{align}\label{eq:relRLC} \begin{split} \Ddlaname{\alpha}{RL}u(x)&=\Ddlaname{\alpha}{C}u(x)+ \sum_{k=0}^{m-1} \frac{(x-a)^{k-\alpha}}{\Gamma(k-\alpha+1)}u^{(k)}(a^+),\\ \Ddrbname{\alpha}{RL}u(x)&=\Ddrbname{\alpha}{C}u(x)+ \sum_{k=0}^{m-1} \frac{(-1)^k(b-x)^{k-\alpha}}{\Gamma(k-\alpha+1)}u^{(k)}(b^-), \end{split} \end{align} and the two coincide if $u$ satisfies homogeneous conditions, i.e., $u^{(k)}(a^+)=u^{(k)}(b^-)=0$ for $k=0,\ldots,m-1$. \begin{remark} \cmag{Throughout the paper, whenever we write $\Ddlaname{\alpha}{RL}u(\xi)$ or $\Ddrbname{\alpha}{RL}u(\xi)$ for a fixed $\xi$ we mean $\Ddlaname{\alpha}{RL}u(x)$ or $\Ddrbname{\alpha}{RL}u(x)$, where $x=\xi$, respectively.} \end{remark} \subsection{Spectral tools}\label{sub:tools} We begin with the formal definition of spectral distribution in the sense of the eigenvalues for a general matrix-sequence. \begin{definition} Let $f:G\to\mathbb C$ be a measurable function, defined on a measurable set $G\subset\mathbb R^k$ with $k\ge 1$ and Lebesgue measure $0<\mu_k(G)<\infty$. Let $\mathcal C_0(\mathbb C)$ be the set of continuous functions with compact support over $\mathbb C$, and let $A_n$ be a matrix of size $d_n$ with eigenvalues $\lambda_j(A_n)$, $j=1,\ldots,d_n$. The matrix-sequence $\{A_n\}_n$ (with $d_n<d_{n+1}$) is \emph{distributed as the pair $(f,G)$ in the sense of the eigenvalues}, denoted by $$\{A_n\}_n\sim_\lambda(f,G),$$ if the following limit relation holds for all $F\in\mathcal C_0(\mathbb C)$: \begin{align}\label{distribution:sv-eig} \lim_{n\to\infty}\frac{1}{d_n}\sum_{j=1}^{d_n}F(\lambda_j(A_n))= \frac1{\mu_k(G)}\int_G F(f(t))\, {\rm d} t. \end{align} We say that $f$ is the \emph{(spectral) symbol} of the matrix-sequence $\{A_{n}\}_{n}$. \end{definition} \begin{remark} Throughout the paper, when it is not of crucial importance to know what is the domain of $f$, we replace the notation $\{A_n\}_n\sim_\lambda(f,G)$ with $\{A_n\}_n\sim_\lambda f$. \end{remark} \begin{remark} When $f$ is continuous, an informal interpretation of the limit relation \eqref{distribution:sv-eig} is that when the matrix-size is sufficiently large, the eigenvalues of $A_n$ can be approximated by a sampling of $f$ on a uniform equispaced grid of the domain $G$. \end{remark} The following result allows us to determine the spectral distribution of a Hermitian matrix-sequence plus a correction (see \cite{BarSer}). \begin{theorem}\label{thm:quasi-herm} Let $\{X_n\}_n$ and $\{Y_n\}_n$ be two matrix-sequences, with $X_n,Y_n\in\mathbb C^{d_n\times d_n}$, and assume that \begin{itemize} \item[{(a)}] $X_n$ is Hermitian for all $n$ and $\{X_n\}_n\sim_{\lambda}f$; \item[{(b)}] $\|Y_n\|_F=o(\sqrt{d_n})$ as $n\rightarrow\infty$, with $\|\cdot\|_{F}$ the Frobenius norm. \end{itemize} Then, $\{X_n+Y_n\}_n\sim_{\lambda}f$. \end{theorem} For a given matrix $X\in\mathbb C^{m\times m}$, let us denote by $\|X\|_{1,\ast}$ the trace norm defined by $\|X\|_{1,\ast}:=\sum_{j=1}^{m}\sigma_j(X)$, where $\sigma_j(X)$ are the $m$ singular values of $X$. \begin{corollary}\label{cor:quasi-herm} Let $\{X_n\}_n$ and $\{Y_n\}_n$ be two matrix-sequences, with $X_n,Y_n\in\mathbb C^{d_n\times d_n}$, and assume that (a) in Theorem~\ref{thm:quasi-herm} is satisfied. Moreover, assume that either of the following two conditions is met: \begin{itemize} \item $\|Y_n\|_{1,\ast}=o(\sqrt{d_n})$; \item $\|Y_n\|_2=o(1)$, with $\|\cdot\|_2$ the spectral norm. \end{itemize} Then, $\{X_n+Y_n\}_n\sim_{\lambda}f$.
\end{corollary} \begin{remark} In \cite{BarSer} the authors conjecture that Theorem~\ref{thm:quasi-herm} is also valid when (b) is replaced by the weaker condition $\|Y_n\|_{1,\ast}=o(d_n)$. \end{remark} We now recall the definition of Toeplitz sequences generated by univariate functions in $L^1([-\pi, \pi])$. \begin{definition} Let $f \in L^1([-\pi, \pi])$ and let $ f_k$ be its Fourier coefficients, \begin{equation}\label{eq:toeplitz-coeff} f_k:=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\textup{e}^{-\i(k\theta)}\,{\rm d}\theta,\quad k\in\mathbb Z. \end{equation} The $n$-th Toeplitz matrix associated with $f$ is the $n \times n$ matrix defined by \begin{equation}\label{eq:toeplitz} T_{n}(f):=\begin{bmatrix} f_0 & f_{-1} & \cdots & \cdots & f_{-(n-1)} \\ f_1 & \ddots & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots\\ \vdots & & \ddots & \ddots & f_{-1}\\ f_{n-1} & \cdots & \cdots & f_1 & f_0 \end{bmatrix}\in\mathbb C^{n\times n}. \end{equation} The matrix-sequence $\{T_n(f)\}_{n}$ is called the \emph{Toeplitz sequence generated by $f$}. \end{definition} For real-valued Toeplitz matrix-sequences, the following theorem holds (see, e.g., \cite{GSz}). \begin{theorem}\label{szego} Let $f\in L^1([-\pi,\pi])$ be a real-valued function. Then, $$\{T_n(f)\}_n\sim_\lambda(f,[-\pi,\pi]).$$ \end{theorem} \subsection{B-splines and cardinal B-splines}\label{sub:bsplines} For $p\ge0$ and $n\ge1$, consider the following uniform knot sequence \begin{equation*} \xi_1=\cdots=\xi_{p+1}:=0<\xi_{p+2}<\cdots<\xi_{p+n}<1=:\xi_{p+n+1}=\cdots=\xi_{2p+n+1}, \end{equation*} where \begin{equation*} \xi_{i+p+1}:=\frac{i}{n}, \quad i=0,\ldots,n. \end{equation*} This knot sequence allows us to define $n+p$ B-splines of degree $p$. \begin{definition} The B-splines of degree $p$ over a uniform mesh of $[0,1]$, consisting of $n$ intervals, are denoted by \begin{equation*} N_{i}^p:[0,1]\rightarrow \mathbb R, \quad i=1,\ldots,n+p, \end{equation*} and defined recursively as follows: for $1 \le i\le n+2p$, \begin{equation*} N_i^0(x):=\begin{cases} 1, & x \in [\xi_i,\xi_{i+1}), \\ 0, & \text{otherwise}; \end{cases} \end{equation*} for $1\le k\le p$ and $1\le i\le n+2p-k$, \begin{equation*} N_i^k(x):=\frac{x-\xi_i}{\xi_{i+k}-\xi_i}N_{i}^{k-1}(x)+\frac{\xi_{i+k+1}-x}{\xi_{i+k+1}-\xi_{i+1}}N_{i+1}^{k-1}(x), \end{equation*} where a fraction with zero denominator is assumed to be zero. \end{definition} It is well known that the B-splines $N_{i}^p$, $i=1,\ldots,n+p$, are linearly independent and they enjoy the following list of properties (see, e.g.,~\cite{deBoor,LycheMS2018}). \begin{itemize} \item Local support: \begin{equation}\label{eq:spline-support} \textup{supp}(N_{i}^p)=[\xi_i,\xi_{i+p+1}], \quad i=1,\ldots,n+p; \end{equation} \item Smoothness: \begin{equation*} N_{i}^p \in \mathcal{C}^{p-1}(0,1), \quad i=1,\ldots,n+p; \end{equation*} \item Differentiation: \begin{equation}\label{eq:spline-diff} \left(N_{i}^p(x)\right)' = p\left(\frac{N_{i}^{p-1}(x)}{\xi_{i+p}-\xi_i}- \frac{N_{i+1}^{p-1}(x)}{\xi_{i+p+1}-\xi_{i+1}}\right), \quad i=1,\ldots,n+p, \quad p \geq 1; \end{equation} \item Non-negative partition of unity: \begin{equation*} N_{i}^p(x)\ge0, \quad i=1,\ldots,n+p, \qquad \sum_{i=1}^{n+p}N_{i}^p(x)=1; \end{equation*} \item Vanishing at the boundary: \begin{equation} \label{eq:spline-boundary} N_{i}^p(0)=N_{i}^p(1)=0, \quad i=2,\ldots,n+p-1; \end{equation} \item Bound for the second derivatives: \begin{equation} \label{eq:spline-diff-bound} |(N_{i}^p(x))''|\le 4p(p-1)n^2, \quad x\in(0,1).
\end{equation} \end{itemize} We also add a property concerning fractional derivatives, which follows from \eqref{eq:relRLC} and \eqref{eq:spline-diff}--\eqref{eq:spline-boundary}. \begin{itemize} \item The Riemann-Liouville and the Caputo derivatives of interior B-splines coincide: \begin{equation}\label{eq:spline-frac} \begin{array}{c} \Ddlname{\alpha}{RL}N^p_{i}=\Ddlname{\alpha}{C}N^p_{i}\\ \Ddrname{\alpha}{RL}N^p_{i}=\Ddrname{\alpha}{C}N^p_{i} \end{array}, \quad i=m+1,\ldots,n+p-m. \end{equation} \end{itemize} From now onwards, we will denote the left and right Riemann-Liouville derivatives simply by $\Ddl{\alpha}$ and $\Ddr{\alpha}$. In view of the last B-spline property, these also stand for the left and right Caputo derivatives in case of interior B-splines. The B-splines $N_i^p$, $i=p+1,\ldots,n$, are uniformly shifted and scaled versions of a single shape function, the so-called cardinal B-spline $\phi_p:\mathbb R\rightarrow \mathbb R$, \begin{equation}\label{eq:phi_0} \phi_0(t) := \begin{cases} 1, & t \in [0, 1), \\ 0, & \text{otherwise}, \end{cases} \end{equation} and \begin{equation}\label{eq:phi_recurrence} \phi_p (t) := \frac{t}{p} \phi_{p-1}(t) + \frac{p+1-t}{p} \phi_{p-1}(t-1), \quad p \geq 1. \end{equation} More precisely, we have \begin{equation*} N^{p}_i(x) =\phi_{p}(nx-i+p+1), \quad i=p+1,\ldots,n, \end{equation*} and \begin{equation*} \left(N^{p}_i(x)\right)' =n\phi'_{p}(nx-i+p+1), \quad i=p+1,\ldots,n. \end{equation*} The cardinal B-spline $\phi_p$ belongs to $\mathcal{C}^{p-1}(\mathbb R)$ and is supported on the interval $[0,p+1]$. It is a symmetric function with respect to $\frac{p+1}{2}$, the midpoint of its support. The left Caputo derivative of $\phi_p$ has the following explicit expression (see \cite{Pitolli}): \begin{equation}\label{eq:cardinal_central} \D{0}{}{t}{\alpha}\phi_p(t)=\frac{1}{\Gamma(p-\alpha+1)}\sum_{j=0}^{p+1}(-1)^j\binom{p+1}{j}(t-j)^{p-\alpha}_+, \quad 0\leq\alpha<p, \end{equation} where $(\cdot)^q_+$ is the truncated power function of degree $q$. Note that the function in \eqref{eq:cardinal_central} is a fractional spline, i.e., a spline with fractional degree \cite{siamREV}. For other common properties of cardinal B-splines, we refer the reader to \cite[Section~3.1]{GMPSS}. \section{Fractional derivatives of cardinal B-splines} \label{sec:frac-cardinal} The aim of this section is to write the fractional derivative of a cardinal B-spline as the inner product of fractional derivatives of cardinal B-splines (see Theorem~\ref{thm:inner-product-mix}). This result will be used in Section~\ref{sec:symbol-properties} to derive an explicit expression of the symbol of the coefficient matrices of interest. All the results in this section refer to fractional derivatives on the half-axes. More precisely, for a given compactly supported function $u$ with absolutely continuous $(m-1)$-th derivative on $\mathbb R$, we consider \begin{align}\label{eq:fractional-infty} \begin{split} \Dl{\alpha}u(x)&:=\frac{1}{\Gamma(m-\alpha)}\frac{{{\rm d}}^m}{{\rm d} x^m}\int_{-\infty}^x(x-y)^{m-\alpha-1}u(y)\,{\rm d} y,\\ \Dr{\alpha}u(x)&:=\frac{(-1)^m}{\Gamma(m-\alpha)}\frac{{{\rm d}}^m}{{\rm d} x^m}\int_{x}^{+\infty}(y-x)^{m-\alpha-1}u(y)\,{\rm d} y, \end{split} \end{align} with $m$ the integer such that $m-1\leq\alpha<m$. For functions $u$ that are solutions of problem \eqref{eq:FDE} and $m=2$, these derivatives reduce to $\Ddl{\alpha}u(x)$ and $\Ddr{\alpha}u(x)$ since the adopted boundary conditions ensure $u$ to be identically zero on $\mathbb R\backslash(0,1)$.
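Before proceeding, we note that the explicit expression \eqref{eq:cardinal_central} is straightforward to implement. The following Python sketch evaluates it and, as a sanity check, uses the fact that for $\alpha=0$ it must reproduce the truncated-power representation of $\phi_p$ itself, here compared against the recurrence (\ref{eq:phi_0})--(\ref{eq:phi_recurrence}).
\begin{verbatim}
import math

def frac_deriv_phi(t, p, alpha):
    # Left Caputo derivative of the cardinal B-spline phi_p for 0 <= alpha < p:
    # sum_j (-1)^j C(p+1, j) (t - j)_+^(p - alpha) / Gamma(p - alpha + 1).
    s = sum((-1) ** j * math.comb(p + 1, j) * max(t - j, 0.0) ** (p - alpha)
            for j in range(p + 2))
    return s / math.gamma(p - alpha + 1.0)

def phi(t, p):
    # Cardinal B-spline phi_p via the recurrence.
    if p == 0:
        return 1.0 if 0.0 <= t < 1.0 else 0.0
    return (t * phi(t, p - 1) + (p + 1 - t) * phi(t - 1, p - 1)) / p

# Sanity check: alpha = 0 reproduces phi_p itself.
for t in (0.5, 1.3, 2.7, 3.9):
    assert abs(frac_deriv_phi(t, 3, 0.0) - phi(t, 3)) < 1e-9
\end{verbatim}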
Let $\fourier{f}$ denote the Fourier transform of $f\in L_2(\mathbb R)$, i.e., \begin{equation*} \widehat{f}(\theta):=\int_\mathbb R f(x) \textup{e}^{-\i\,\theta x}\,{\rm d} x. \end{equation*} We start with a lemma addressing the Fourier transform of the derivatives in \eqref{eq:fractional-infty} for cardinal B-splines. \begin{lemma}\label{lem:fractional-fourier} Let $\phi_{p}$ be the cardinal B-spline as defined in (\ref{eq:phi_0})--(\ref{eq:phi_recurrence}). Then, for $0\leq\alpha<p$ we have \begin{align} \label{eq:fractional-l-fourier} \fourier{{\Dl{\alpha}\phi_{p}}}(\theta)=(\i\theta)^\alpha\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{p+1}, \end{align} and \begin{align} \label{eq:fractional-r-fourier} \fourier{{\Dr{\alpha}\phi_{p}}}(\theta)=(-\i\theta)^\alpha\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{p+1}. \end{align} \end{lemma} \begin{proof} From \cite{Podlubny} we know that \begin{equation*} \fourier{{\Dl{\alpha}f}}(\theta)=(\i\theta)^\alpha\fourier{f}(\theta), \quad \fourier{{\Dr{\alpha}f}}(\theta)=(-\i\theta)^\alpha\fourier{f}(\theta), \end{equation*} and from \cite{Chui,LycheMS2018}, \begin{equation*} \fourier{\phi_{p}}(\theta) =\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{p+1}. \end{equation*} Combining these results immediately gives \eqref{eq:fractional-l-fourier} and \eqref{eq:fractional-r-fourier}. \end{proof} In the following $\alpha_1,\alpha_2$ stand for real numbers. \begin{lemma}\label{lem:im-power-mix} Let $\conj{z}$ denote the conjugate of the complex number $z$. Then, for any real number $\theta$ we have \begin{equation*} (\i\theta)^{\alpha_1}\conj{(-\i\theta)^{\alpha_2}}=(\i\theta)^{\alpha_1+\alpha_2}. \end{equation*} \end{lemma} \begin{proof} Let us consider the polar form of the complex number $(\i\theta)^\alpha$, i.e., \begin{equation*} (\i\theta)^\alpha=|\theta|^\alpha\textup{e}^{\i\frac{\pi}{2}\textup{sign}(\theta)\alpha}. \end{equation*} Hence, \begin{align*} (\i\theta)^{\alpha_1}\conj{(-\i\theta)^{\alpha_2}} &= |\theta|^{\alpha_1}\textup{e}^{\i\frac{\pi}{2}\textup{sign}(\theta)\alpha_1}|\theta|^{\alpha_2}\textup{e}^{\i\frac{\pi}{2}\textup{sign}(\theta)\alpha_2} =|\theta|^{\alpha_1+\alpha_2}\textup{e}^{\i\frac{\pi}{2}\textup{sign}(\theta)(\alpha_1+\alpha_2)}, \end{align*} which completes the proof. \end{proof} We are now ready for the main result of this section. \begin{theorem} \label{thm:inner-product-mix} Let $\phi_{p}$ be the cardinal B-spline as defined in (\ref{eq:phi_0})--(\ref{eq:phi_recurrence}). Then, for $0\leq\alpha_1<p_1$ and $0\leq\alpha_2<p_2$ we have \begin{align} \int_\mathbb R \Dl{\alpha_1}\phi_{p_1}(x)\,{\Dr{\alpha_2}\phi_{p_2}(x+k)}\,{\rm d} x &= \Dl{\alpha_1+\alpha_2}\phi_{p_1+p_2+1}(p_2+1-k), \label{eq:inner-product-mix} \\ \int_\mathbb R \Dr{\alpha_1}\phi_{p_1}(x)\,{\Dl{\alpha_2}\phi_{p_2}(x+k)}\,{\rm d} x &= \Dr{\alpha_1+\alpha_2}\phi_{p_1+p_2+1}(p_2+1-k). \label{eq:inner-product-mix2} \end{align} \end{theorem} \begin{proof} We first recall the Parseval identity for Fourier transforms, i.e., \begin{equation*} \int_\mathbb R \varphi(x)\conj{\psi(x)}\,{\rm d} x = \frac{1}{2\pi} \int_\mathbb R \fourier{\varphi}(\theta)\conj{\fourier{\psi}(\theta)}\,{\rm d} \theta,\quad\varphi,\,\psi\in L_2(\mathbb R), \end{equation*} and the translation property of the Fourier transform, i.e., \begin{equation*} \fourier{\psi(\cdot+k)}(\theta) = \fourier{\psi(\theta)}\,\textup{e}^{\i k\theta},\quad\psi\in L_1(\mathbb R),\ k\in\mathbb R. 
\end{equation*} Starting from the above equalities, and using Lemmas~\ref{lem:fractional-fourier} and \ref{lem:im-power-mix}, we get \begin{align*} &\int_\mathbb R \Dl{\alpha_1}\phi_{p_1}(x)\,{\Dr{\alpha_2}\phi_{p_2}(x+k)}\,{\rm d} x \\ &\quad=\frac{1}{2\pi}\int_\mathbb R \fourier{\Dl{\alpha_1} \phi_{p_1}}(\theta)\,\conj{\fourier{\Dr{\alpha_2}\phi_{p_2}}(\theta)\textup{e}^{\i k\theta}}\,{\rm d} \theta \\ &\quad=\frac{1}{2\pi}\int_\mathbb R (\i\theta)^{\alpha_1+\alpha_2}\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{p_1+1}\left(\frac{\textup{e}^{\i\theta}-1}{\i\theta}\right)^{p_2+1}\textup{e}^{-\i k\theta}\,{\rm d} \theta \\ &\quad=\frac{1}{2\pi}\int_\mathbb R (\i\theta)^{\alpha_1+\alpha_2}\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{p_1+p_2+2}\textup{e}^{\i(p_2+1-k)\theta}\,{\rm d} \theta. \end{align*} By taking the inverse Fourier transform of the right-hand side we arrive at~\eqref{eq:inner-product-mix}. The proof of \eqref{eq:inner-product-mix2} is analogous. \end{proof} \begin{remark} Theorem~\ref{thm:inner-product-mix} is a generalization of a known explicit formula for inner products of integer derivatives of cardinal B-splines (see \cite{GMPSS,LycheMS2018} and also \cite{Speleers2015}). \end{remark} \section{IgA collocation discretization of the fractional Riesz operator} \label{sec:iga} From now onwards, we assume that $\alpha$ is fixed in the open interval $(1,2)$. Let $\mathcal{W}$ be a finite dimensional vector space of sufficiently smooth functions defined on the closure of $\Omega$ and vanishing at its boundary, and let $N:={\rm dim}(\mathcal{W})$. Applying the collocation method to \eqref{eq:FDE} means looking for a function $u_{\mathcal{W}}\in \mathcal{W}$ such that \begin{equation}\label{eq:colloc-system} \frac{{\rm d}^\alpha u_{\mathcal{W}}(x_i)}{{\rm d}|x|^\alpha}=s(x_i), \quad i=1,\ldots,N, \end{equation} with $x_i\in\Omega$, the so-called collocation points. Given a basis $\{\varphi_j : j=1,\ldots,N\}$ of $\mathcal{W}$, problem \eqref{eq:colloc-system} can be rewritten in matrix form as follows: \begin{equation*} A_{\rm col}{\bf u}={\bf b}_{\rm col}, \end{equation*} with \begin{equation*} A_{\rm col}:=\left[\frac{1}{2\cos({\pi\alpha}/{2})}(\Ddl{\alpha}+\Ddr{\alpha})\varphi_j(x_i)\right]_{i,j=1}^{N}, \quad {\bf b}_{\rm col}:=[s(x_i)]_{i=1}^N, \end{equation*} and ${\bf u}:=[u_1,\ldots,u_N]^T$ such that $u_{\mathcal{W}}(x)=\sum_{j=1}^{N}u_j\varphi_j(x)$. In this paper, we choose $\mathcal{W}$ as the space of splines of degree $p\geq2$ that vanish at the boundary, and the collocation points as the Greville abscissae. More precisely, we take \begin{itemize} \item the approximation space as the space spanned by the B-splines of degree $p\geq2$ that are zero at the boundary (see \eqref{eq:spline-boundary}), i.e., \begin{equation}\label{eq:spline-space} \mathbb{S}^{p}_{n}:={\rm span}\{N^p_i : i=2,\ldots,n+p-1\}; \end{equation} \item the collocation points as the Greville abscissae corresponding to the B-splines in \eqref{eq:spline-space}, i.e., \begin{equation*} \eta_i:=\frac{\xi_{i+1}+\cdots+\xi_{i+p}}{p}, \quad i=2,\ldots,n+p-1.
\end{equation*} \end{itemize} Thus \eqref{eq:colloc-system} translates into the following linear system \begin{equation*} A^{p,\alpha}_n{\bf u}_n={\bf b}_n, \end{equation*} where \begin{equation*} A^{p,\alpha}_n:=\frac{1}{2\cos({\pi\alpha}/{2})}(A_n^L+A_n^R), \quad {\bf b}_n:=[s(\eta_{i+1})]_{i=1}^{n+p-2}, \end{equation*} with \begin{equation}\label{eq:ALR} A_n^{L}:=\left[\Ddl{\alpha}N^p_{j+1}(\eta_{i+1})\right]_{i,j=1}^{n+p-2}, \quad A_n^R:=\left[\Ddr{\alpha}N^p_{j+1}(\eta_{i+1})\right]_{i,j=1}^{n+p-2}, \end{equation} and ${\bf u}_n:=[u_1,\dots,u_{n+p-2}]^T$, the vector of the coefficients of $u$ with respect to the B-spline basis functions in the space $\mathbb{S}_n^p$. In order to assemble the matrices $A_n^L$ and $A_n^R$, we need to compute the left and right fractional derivatives of any B-spline. By using \eqref{eq:cardinal_central}, for the B-splines $N_{i}^p$ corresponding to the indexes $i=p+1,\ldots,n$, we have \begin{align*} \Ddl{\alpha}N_i^p(x)&=n^\alpha \D{0}{}{nx}{\alpha}\phi_p(nx-i+p+1)\\ &=\frac{n^\alpha}{\Gamma(p-\alpha+1)}\sum_{j=0}^{p+1}(-1)^j\binom{p+1}{j}\left(nx-i+p+1-j\right)^{p-\alpha}_+. \end{align*} Thanks to this relation, and recalling that the Greville abscissae for $i=p+1,\ldots,n$ reduce to \begin{equation*} \eta_i=\frac{i}{n}-\frac{p+1}{2n}, \quad i=p+1,\ldots,n, \end{equation*} or equivalently, \begin{equation*} n\eta_i+p+1=i+\frac{p+1}{2}, \quad i=p+1,\ldots,n, \end{equation*} we can immediately recognize that the central part of the matrix $A_n^L$ corresponding to the indexes $p+1,\ldots,n$ has a Toeplitz structure. In other words, we have \[A_n^L=n^\alpha (T^L_n+R^L_n),\] where \[T^L_n:=\left[ \D{0}{}{nx}{\alpha}\phi_p\left(\frac{p+1}{2}+i-j\right)\right]_{i,j=1}^{n+p-2},\] and $R^L_n$ is a matrix whose rank is bounded by $4(p-1)$. A similar reasoning can be applied to the matrix $A_n^R$, and we have \begin{equation*} A_n^R=n^\alpha (T^R_n+R^R_n), \end{equation*} where \begin{equation*} T^R_n:=\left[ \D{nx}{}{n}{\alpha}\phi_p\left(\frac{p+1}{2}+i-j\right)\right]_{i,j=1}^{n+p-2}, \end{equation*} and $R^R_n$ is a matrix whose rank is bounded by $4(p-1)$. As a consequence, the coefficient matrix $A_n^{p,\alpha}$ inherits the Toeplitz plus rank correction structure and can be written as follows: \begin{equation}\label{eq:coeff_matrix} A_n^{p,\alpha}=\frac{1}{2\cos({\pi\alpha}/{2})}(A_{n}^L+A_n^R)=n^\alpha (T_n^{p,\alpha}+R_n^{p,\alpha}), \end{equation} with \begin{equation}\label{eq:toeplitz-part} T_n^{p,\alpha}:=\frac{1}{2\cos({\pi\alpha}/{2})}(T^L_n+T^R_n), \quad R_n^{p,\alpha}:=\frac{1}{2\cos({\pi\alpha}/{2})}(R^L_n+R^R_n). \end{equation} In Section~\ref{sec:symbol} we will show that the symbol of $\{n^{-\alpha}A_n^{p,\alpha}\}_n$ coincides with the symbol of $\{T_n^{p,\alpha}\}_n$, denoted by $f^{p,\alpha}$, but first we discuss some properties of this function in the next section. \section{Properties of the function $f^{p,\alpha}$}\label{sec:symbol-properties} We start with a theorem that provides an explicit expression of the generating function $f^{p,\alpha}$ of the Toeplitz matrix $T_n^{p,\alpha}$, and whose proof uses the results obtained in Section~\ref{sec:frac-cardinal}. \begin{theorem}\label{thm:symbol} Let $T_n^{p,\alpha}$ be defined as in \eqref{eq:toeplitz-part}. Then, $T_n^{p,\alpha}=T_{n+p-2}(f^{p,\alpha})$ with \begin{equation} \label{eq:frac-symbol} f^{p,\alpha}(\theta) = \sum_{l\in\mathbb Z} |\theta+2l\pi|^{\alpha} \left(\frac{\sin(\theta/2+l\pi)}{\theta/2+l\pi}\right)^{p+1}.
\end{equation} \end{theorem} \begin{proof} From its construction it is clear that $T_n^{p,\alpha}$ is a Toeplitz matrix of dimension $n+p-2$. According to the definition in \eqref{eq:toeplitz}, the entries $f_k$ of this matrix are given by \begin{align*} f_k &= \frac{1}{2\cos(\pi \alpha/2)} \left(\Dl{\alpha}\phi_{p}\left(\frac{p+1}{2}-k\right)+\Dr{\alpha}\phi_{p}\left(\frac{p+1}{2}-k\right)\right). \end{align*} We distinguish the cases of odd and even degree $p$. We start by proving the expression \eqref{eq:frac-symbol} of the generating function $f^{p,\alpha}$ for $p = 2q+1$. Using Theorem~\ref{thm:inner-product-mix} (and its proof) with $\alpha=\alpha_1+\alpha_2$ and $q=p_1=p_2$, we have \begin{align*} 2\cos(\pi \alpha/2) f_k &= \Dl{\alpha}\phi_{2q+1}\left(q+1-k\right)+\Dr{\alpha}\phi_{2q+1}\left(q+1-k\right) \\ &= \frac{1}{2\pi}\int_\mathbb R \left[(\i\theta)^{\alpha}+(-\i\theta)^{\alpha}\right]\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{q+1}\left(\frac{\textup{e}^{\i\theta}-1}{\i\theta}\right)^{q+1}\textup{e}^{-\i k\theta}\,{\rm d} \theta \\ &= \frac{1}{2\pi}\int_\mathbb R 2|\theta|^{\alpha}\cos(\pi \alpha/2) \left|\frac{1-\textup{e}^{-\i\theta}}{\theta}\right|^{2q+2}\textup{e}^{-\i k\theta}\,{\rm d} \theta. \end{align*} Set $$ w(\theta):=|\theta|^{\alpha}\left|\frac{1-\textup{e}^{-\i\theta}}{\theta}\right|^{2q+2} =|\theta|^{\alpha}\left(\frac{\sin(\theta/2)}{\theta/2}\right)^{2q+2}. $$ Then, \begin{align*} f_k &= \frac{1}{2\pi}\int_\mathbb R w(\theta)\textup{e}^{-\i k\theta}\,{\rm d} \theta =\sum_{l\in\mathbb Z}\frac{1}{2\pi}\int_{(2l-1)\pi}^{(2l+1)\pi} w(\theta)\textup{e}^{-\i k\theta}\,{\rm d} \theta \\ &=\sum_{l\in\mathbb Z}\frac{1}{2\pi}\int_{-\pi}^{\pi} w(\theta+2l\pi)\textup{e}^{-\i k\theta}\,{\rm d} \theta =\frac{1}{2\pi}\int_{-\pi}^{\pi} \left[\sum_{l\in\mathbb Z}w(\theta+2l\pi)\right]\textup{e}^{-\i k\theta}\,{\rm d} \theta. \end{align*} The expression \eqref{eq:frac-symbol} of the generating function $f^{p,\alpha}$ follows from \eqref{eq:toeplitz-coeff} for $p = 2q+1$. Now we consider even degree $p = 2q$. Using again Theorem~\ref{thm:inner-product-mix} (and its proof) with $\alpha=\alpha_1+\alpha_2$ and $q=p_1+1=p_2$, we have \begin{align*} 2\cos(\pi \alpha/2) f_k &= \Dl{\alpha}\phi_{2q}\left(q+1-k-1/2\right)+\Dr{\alpha}\phi_{2q}\left(q+1-k-1/2\right) \\ &= \frac{1}{2\pi}\int_\mathbb R \left[(\i\theta)^{\alpha}+(-\i\theta)^{\alpha}\right]\left(\frac{1-\textup{e}^{-\i\theta}}{\i\theta}\right)^{q}\left(\frac{\textup{e}^{\i\theta}-1}{\i\theta}\right)^{q+1}\textup{e}^{-\i (k+1/2)\theta}\,{\rm d} \theta \\ &= \frac{1}{2\pi}\int_\mathbb R 2|\theta|^{\alpha}\cos(\pi \alpha/2) \left|\frac{1-\textup{e}^{-\i\theta}}{\theta}\right|^{2q}\left(\frac{\textup{e}^{\i\theta/2}-\textup{e}^{-\i\theta/2}}{\i\theta}\right)\textup{e}^{-\i k\theta}\,{\rm d} \theta. \end{align*} Here, \begin{equation*} w(\theta):=|\theta|^{\alpha}\left|\frac{1-\textup{e}^{-\i\theta}}{\theta}\right|^{2q}\left(\frac{\textup{e}^{\i\theta/2}-\textup{e}^{-\i\theta/2}}{\i\theta}\right) =|\theta|^{\alpha}\left(\frac{\sin(\theta/2)}{\theta/2}\right)^{2q+1}. \end{equation*} Then, following a similar argument as in the odd degree case, we arrive at the expression \eqref{eq:frac-symbol} of the generating function $f^{p,\alpha}$ for $p = 2q$. \end{proof} \begin{remark} The proof of Theorem~\ref{thm:symbol} remains valid for $\alpha\in[0,1)\cup(1,2]$.
\end{remark} Starting from \eqref{eq:frac-symbol} and applying the same line of arguments as in the proofs of \cite[Lemmas~3.4 and~3.6]{DGMSS}, we obtain the following results for $f^{p,\alpha}(\theta)$. \begin{theorem}\label{thm:bounds-frac-symb} Let $f^{p,\alpha}$ be as in \eqref{eq:frac-symbol}. Then, $f^{p,\alpha}(\theta)=f^{p,\alpha}(-\theta)$, and for $p>\alpha$, \begin{equation} |\theta|^\alpha \left(\frac{\sin(\theta/2)}{\theta/2}\right)^{p+1}\leq f^{p,\alpha}(\theta)\leq|\theta|^\alpha\left(\frac{\sin(\theta/2)}{\theta/2}\right)^{p+1}+ C_{p,\alpha}\left(\sin(\theta/2)\right)^{p+1}, \label{eq:bounds-frac-symb} \end{equation} for $\theta \in [0,\pi]$, where $C_{p,\alpha}$ is a constant depending only on $p$ and $\alpha$. Moreover, \begin{equation} \label{eq:val-pi-frac-symb} \frac{f^{p,\alpha}(\pi)}{\max_\theta f^{p,\alpha}(\theta)}\leq \frac{f^{p,\alpha}(\pi)}{f^{p,\alpha}(\pi/2)}\leq 2^{\frac{2\alpha+1-p}{2}}. \end{equation} \end{theorem} From the bounds in \eqref{eq:bounds-frac-symb} we can immediately deduce the vanishing properties of $f^{p,\alpha}$. \begin{corollary}\label{cor:zero_of_f} Let $f^{p,\alpha}$ be as in \eqref{eq:frac-symbol}. Then, $f^{p,\alpha}$ is non-negative for $\theta\in[-\pi,\pi]$, and it only vanishes at $\theta=0$ where it has a zero of order $\alpha$. \end{corollary} Let $\tilde{f}^{p,\alpha}:=f^{p,\alpha}/{\max_\theta f^{p,\alpha}(\theta)}$ be the normalized version of $f^{p,\alpha}$. The inequality in \eqref{eq:val-pi-frac-symb} shows that $\tilde{f}^{p,\alpha}(\theta)$ converges exponentially to zero at $\theta=\pm\pi$ for increasing $p$. Hence, we say that $f^{p,\alpha}$ has a numerical zero at $\pm\pi$ for large $p$. \begin{remark}\label{rem:alpha} The upper bound in \eqref{eq:val-pi-frac-symb} depends not only on $p$ but also on $\alpha$. In this view, the decay at $\pm\pi$ of $\tilde{f}^{p,\alpha}$ is expected to become faster as $\alpha$ approaches 1. \end{remark} In the following propositions we bound $f^{p,\alpha}(\theta)$ in terms of $f^{p,0}(\theta)$ and $f^{p,2}(\theta)$ for sufficiently large values of $|\theta|$. \begin{proposition}\label{prop:bound_odd} For $p$ odd, we have \begin{equation} \label{eq:bound-high-freq} f^{p,0}(\theta)\leq f^{p,\alpha}(\theta)\leq f^{p,2}(\theta), \quad |\theta|\in [1,\pi]. \end{equation} \end{proposition} \begin{proof} Since \begin{equation*} 1=|\theta+2l\pi|^0\leq|\theta+2l\pi|^{\alpha}\leq|\theta+2l\pi|^2, \quad l\in\mathbb Z, \quad |\theta|\geq1, \end{equation*} and \begin{equation*} \left(\frac{\sin(\theta/2+l\pi)}{\theta/2+l\pi}\right)^{2q+2}\geq0, \quad l\in\mathbb Z, \end{equation*} it is clear from the definition of $f^{p,\alpha}$ in \eqref{eq:frac-symbol} that \eqref{eq:bound-high-freq} holds for $p=2q+1$. \end{proof} \begin{proposition}\label{prop:bound_even} For $p$ even and $p>\alpha$, we have \begin{equation} \label{eq:bound-high-freq-even} f^{p,0}(\theta)\leq f^{p,\alpha}(\theta), \quad |\theta|\in [a,\pi], \end{equation} where $$ a:=\left(\frac{\pi^4}{48}\right)^{1/\alpha}. $$ \end{proposition} \begin{proof} Let $p=2q>\alpha$. It is easy to check that \begin{equation*} f^{p,\alpha}(\theta) = |\theta|^\alpha\left(\frac{\sin(\theta/2)}{\theta/2}\right)^{p+1} + \left(2\sin(\theta/2)\right)^{p+1}r^{p,\alpha}(\theta), \end{equation*} where \begin{equation*} r^{p,\alpha}(\theta) := \sum_{k=1}^{\infty}(-1)^k\left[\frac{1}{(2k\pi+\theta)^{p+1-\alpha}} -\frac{1}{(2k\pi-\theta)^{p+1-\alpha}}\right].
\end{equation*} With the same line of arguments as in the proof of \cite[Lemma~A.2]{DGMSS} we deduce that $r^{p,\alpha}(\theta)$ is a strictly increasing function, which implies that $r^{p,\alpha}(\pi)\geq r^{p,\alpha}(\theta)>r^{p,\alpha}(0)=0$ for $\theta\in(0,\pi]$. Moreover, from the same lemma we know \begin{equation*} r^{p,0}(\theta) \leq \left(\frac{\pi^4}{48}-1\right)\frac{1}{\pi^{p+1}}, \quad\theta\in[0,\pi]. \end{equation*} From the above bounds we get \begin{align*} f^{p,\alpha}(\theta)-f^{p,0}(\theta) &= (|\theta|^\alpha-1)\left(\frac{\sin(\theta/2)}{\theta/2}\right)^{p+1} + \left(2\sin(\theta/2)\right)^{p+1}(r^{p,\alpha}(\theta)-r^{p,0}(\theta)) \\ &\geq \left(2\sin(\theta/2)\right)^{p+1}\left[\frac{|\theta|^\alpha-1}{\theta^{p+1}}-\left(\frac{\pi^4}{48}-1\right)\frac{1}{\pi^{p+1}}\right] \\ &\geq \left(\frac{2\sin(\theta/2)}{\pi}\right)^{p+1}\left[|\theta|^\alpha-1-\left(\frac{\pi^4}{48}-1\right)\right], \end{align*} for $\theta\in[1,\pi]$. Hence, \begin{equation*} f^{p,\alpha}(\theta)-f^{p,0}(\theta)\geq0, \quad \theta\geq\left(\frac{\pi^4}{48}\right)^{1/\alpha}, \end{equation*} which concludes the proof. \end{proof} In our final proposition we explicitly state that $f^{p,\alpha}$ is the symbol of the matrix-sequence $\{T_n^{p,\alpha}\}_n$. \begin{proposition} \label{prop:distr-T} The Toeplitz matrix $T_n^{p,\alpha}$ defined in \eqref{eq:toeplitz-part} is symmetric and \begin{equation}\label{eq:distr-T} \{T_n^{p,\alpha}\}_n\sim_\lambda(f^{p,\alpha},[-\pi,\pi]), \end{equation} where $f^{p,\alpha}$ is given in \eqref{eq:frac-symbol}. \end{proposition} \begin{proof} From Theorem~\ref{thm:bounds-frac-symb} we know that $f^{p,\alpha}$ is an even real-valued function, so the matrix $T_n^{p,\alpha}=T_{n+p-2}(f^{p,\alpha})$ is symmetric. The spectral distribution of $\{T_n^{p,\alpha}\}_n=\{T_{n+p-2}(f^{p,\alpha})\}_n$ follows from Theorem~\ref{szego}. \end{proof} We end this section by summarizing all the discussed properties of the symbol $f^{p,\alpha}$ and highlighting their role in the design of an ad hoc solver for a linear system associated with $T_n^{p,\alpha}$ (see Remark~\ref{rem:mim}). We have shown that $f^{p,\alpha}$ is equipped with the following three properties: \begin{enumerate} \item[{(a)}] it has a single zero at $0$ of order $\alpha$ (Corollary~\ref{cor:zero_of_f}); \item[{(b)}] it presents an exponential decay to zero at $\pi$ for increasing $p$ that becomes faster as $\alpha$ approaches $1$ (Theorem~\ref{thm:bounds-frac-symb} and Remark~\ref{rem:alpha}); \item[{(c)}] it is bounded from below in the proximity of $\pi$ by $f^{p,0}$ (Propositions~\ref{prop:bound_odd} and~\ref{prop:bound_even}). \end{enumerate} Properties (a)--(b) give us a clear picture of the conditioning peculiarities of the matrix $T_n^{p,\alpha}$. Specifically, they say that $T_n^{p,\alpha}$ is poorly conditioned both in the low frequencies (with a conditioning that grows as $n^\alpha$) and in the high frequencies (with a deterioration that is driven both by $p$ and $\alpha$). Moreover, property (c) ``isolates'' the source of ill-conditioning in the high frequencies induced by $p$, meaning that near $\pi$ the symbol behaves like $f^{p,0}$, a positive function well-separated from zero. \begin{remark}\label{rem:mim} Based on what has been done in \cite{cmame2,sinum,mazza2016,sisc}, all this knowledge can be used for the design of an ad hoc solver for a linear system associated with $T_n^{p,\alpha}$.
For instance, from (a) we can infer that a multigrid method with a standard choice of both prolongator and restrictor is able to cope with the standard ill-conditioning in the low frequency subspace, while from (c) we get hints on how to define a smoother that works in the subspace of high frequencies where the ill-conditioning induced by $p$ occurs. \end{remark}
\section{Spectral symbol of $\{n^{-\alpha}A_n^{p,\alpha}\}_n$}\label{sec:symbol} This section is devoted to the computation of the symbol of the matrix-sequence $\{n^{-\alpha}A_n^{p,\alpha}\}_n$. As we have already anticipated, it turns out that the symbol of $\{n^{-\alpha}A_n^{p,\alpha}\}_n$ coincides with the symbol of the Toeplitz part $\{T_n^{p,\alpha}\}_n$. The spectral distribution of $\{n^{-\alpha}A_n^{p,\alpha}\}_n$ is given in Theorem~\ref{thm:spectral-A}. Its proof uses Corollary~\ref{cor:quasi-herm} and needs several preliminary results. \begin{sloppypar} For a given matrix $X:=[x_{ij}]_{i,j=1}^m\in\mathbb C^{m\times m}$, we denote by $\|X\|_1:=\max_{j=1,\ldots,m}\sum_{i=1}^m |x_{ij}|$ and $\|X\|_\infty:=\max_{i=1,\ldots,m}\sum_{j=1}^m |x_{ij}|$ the induced 1- and infinity-norms, respectively. \end{sloppypar} \begin{lemma}\label{lem:elem-bound-AL} Let $A_{n}^L$ be defined as in \eqref{eq:ALR}. For $i,j=2,\dots, n+p-1$ we have \begin{equation}\label{eq:elem-bound-AL} |(A_n^{L})_{i-1,j-1}|\leq \begin{cases} 0, & \eta_{i}\leq \xi_j, \\[0.2cm] c_{p,\alpha}^{L}\, n^\alpha, & \xi_j<\eta_i\leq \xi_{j+p+1}+\frac{1}{n}, \\[0.2cm] c_{p,\alpha}^{L}\, (\eta_i-\xi_{j+p+1})^{-\alpha}, & \xi_{j+p+1}+\frac{1}{n} < \eta_i, \end{cases} \end{equation} where $c_{p,\alpha}^{L}$ is a constant depending on $p$ and $\alpha$. \end{lemma} \begin{proof} From the properties of fractional derivatives \eqref{eq:caputo}--\eqref{eq:relRLC} and the B-spline properties \eqref{eq:spline-support}--\eqref{eq:spline-frac} it follows that for $j=2$, \begin{equation} \label{first-matrix-element} (A_n^{L})_{i-1,1}= \frac{1}{\Gamma(2-\alpha)}\int_{0}^{\min{(\eta_i, \xi_{p+3})}}(\eta_i-y)^{1-\alpha}(N^p_2)''(y)\,{\rm d} y + \frac{pn}{\Gamma(2-\alpha)}(\eta_i)^{1-\alpha}, \end{equation} and for $j=3,\ldots,n+p-1$, \begin{equation} \label{matrix-elements} (A_n^{L})_{i-1,j-1}=\begin{cases} 0, & \eta_{i}\leq \xi_j, \\ \displaystyle\frac{1}{\Gamma(2-\alpha)}\int_{\xi_j}^{\min{(\eta_i, \xi_{j+p+1})}}(\eta_i-y)^{1-\alpha}(N^p_j)''(y)\,{\rm d} y, & \text{otherwise}. \end{cases} \end{equation} We remark that $\eta_i\in(0,1)$, $\eta_i<\eta_{i+1}$, and $\xi_{j+p+1}-\xi_j\leq \frac{p+1}{n}$. In the following, we address the three different cases in \eqref{eq:elem-bound-AL} separately. If $\eta_i\leq \xi_j$, then it is clear that $(A_n^{L})_{i-1,j-1}=0$ for $j=3,\dots, n+p-1$. Note that $j=2$ is not involved in this case for any $i$ because $\xi_2=0<\eta_i$. If $\xi_j<\eta_i\leq \xi_{j+p+1}+\frac{1}{n}$, then \begin{equation*} (\eta_i-y)\leq \frac {p+2}{n}, \quad y\in [\xi_j, \min{(\eta_i, \xi_{j+p+1})}]. \end{equation*} Using \eqref{eq:spline-diff-bound}, from \eqref{matrix-elements} we get for $j=3,\dots, n+p-1$, \begin{align*} |(A_n^{L})_{i-1,j-1}|&\leq \frac{4p(p-1)n^2}{\Gamma(2-\alpha)}\int_{\xi_j}^{\min{(\eta_i, \xi_{j+p+1})}}(\eta_i-y)^{1-\alpha}\,{\rm d} y\\ &\leq \frac{4p(p-1)n^2}{\Gamma(3-\alpha)}\left(\frac{p+2}{n}\right)^{2-\alpha}. \end{align*} When $j=2$ we have $\xi_2=0$ and $\frac{1}{pn}=\eta_2\leq\eta_i\leq \xi_{p+3}+\frac{1}{n}\leq\frac{3}{n}$.
Then, we find in a similar way that for $j=2$, \begin{equation*} |(A_n^{L})_{i-1,1}|\leq \frac{4p(p-1)n^2}{\Gamma(3-\alpha)}\left(\frac{3}{n}\right)^{2-\alpha}+ \frac{pn}{\Gamma(2-\alpha)}\left(\frac{1}{pn}\right)^{1-\alpha}. \end{equation*} We now look at the case $\xi_{j+p+1}+\frac{1}{n}< \eta_i$. This case can only happen for $2\leq j< n$ because when $j\geq n$ we have $\xi_{j+p+1}=1>\eta_i$. Given $y\in [\xi_j, \xi_{j+p+1}]$, we consider the Taylor expansion of $(\eta_i-y)^{1-\alpha}$ at $\xi_j$, producing \begin{equation} \label{Taylor-1} (\eta_i-y)^{1-\alpha} = (\eta_i-\xi_j)^{1-\alpha}-(1-\alpha)(\eta_i-\omega_{i,j}(y))^{-\alpha}(y-\xi_j), \end{equation} for some $\omega_{i,j}(y)\in (\xi_j, \xi_{j+p+1})$. Substituting \eqref{Taylor-1} in \eqref{matrix-elements} results in \begin{align*} |(A_n^{L})_{i-1,j-1}| &\leq \frac{1}{\Gamma(2-\alpha)}\left |(\eta_i-\xi_j)^{1-\alpha}\int_{\xi_j}^{\xi_{j+p+1}}(N^p_j)''(y)\,{\rm d} y\right| \\ &\quad+\frac{\alpha-1}{\Gamma(2-\alpha)}\int_{\xi_j}^{\xi_{j+p+1}}(\eta_i-\omega_{i,j}(y))^{-\alpha}(y-\xi_j)|(N^p_j)''(y)|\,{\rm d} y. \end{align*} Observe that $(N^p_j)'(\xi_{j})=(N^p_j)'(\xi_{j+p+1})=0$ for $3\leq j\leq n+p-2$, and $(\eta_i-\omega_{i,j}(y))>(\eta_i-\xi_{j+p+1})$. Then, recalling the bound in \eqref{eq:spline-diff-bound}, we obtain for $j=3,\dots,n-1$, \begin{align*} |(A_n^{L})_{i-1,j-1}| &\leq \frac{1}{\Gamma(2-\alpha)}\left |(\eta_i-\xi_j)^{1-\alpha}((N^p_j)'(\xi_{j+p+1})-(N^p_j)'(\xi_j))\right| \\ &\quad +\frac{\alpha-1}{\Gamma(2-\alpha)}4p(p-1)n^2\int_{\xi_j}^{\xi_{j+p+1}}(y-\xi_j)\,{\rm d} y\,(\eta_i-\xi_{j+p+1})^{-\alpha} \\ &\leq \frac{\alpha-1}{\Gamma(2-\alpha)}2p(p-1)n^2\left(\frac{p+1}{n}\right)^2(\eta_i-\xi_{j+p+1})^{-\alpha}. \end{align*} Substituting \eqref{Taylor-1} in \eqref{first-matrix-element} and observing that $(N^p_2)'(0)=np$, $(N^p_2)'(\xi_{p+3})=0$, we find with a similar argument that for $j=2$, \begin{align*} |(A_n^{L})_{i-1,1}| &\leq \frac{1}{\Gamma(2-\alpha)}\left |(\eta_i)^{1-\alpha}((N^p_2)'(\xi_{p+3})-(N^p_2)'(0))+np(\eta_i)^{1-\alpha}\right| \\ &\quad+\frac{\alpha-1}{\Gamma(2-\alpha)}4p(p-1)n^2\int_{0}^{\xi_{p+3}}y\,{\rm d} y\, (\eta_i-\xi_{p+3})^{-\alpha} \\ &\leq \frac{\alpha-1}{\Gamma(2-\alpha)}2p(p-1)n^2\left(\frac{2}{n}\right)^2 (\eta_i-\xi_{p+3})^{-\alpha}. \end{align*} This concludes the proof. \end{proof} \begin{lemma}\label{lem:AL_norm} Let $A_{n}^L$ be defined as in \eqref{eq:ALR}. We have \begin{equation*} \|n^{-\alpha}A_{n}^L\|_{q}\leq C_{p,\alpha}^{L}, \quad q\in\{1,2,\infty\}, \end{equation*} where $C_{p,\alpha}^{L}$ is a constant depending on $p$ and $\alpha$. \end{lemma} \begin{proof} We first consider the infinity-norm \begin{align*} \|n^{-\alpha}A_{n}^L\|_\infty &=n^{-\alpha}\max_{i=2,\ldots,n+p-1}\sum_{j=2}^{n+p-1}|(A_n^{L})_{i-1,j-1}|. \end{align*} The entries $|(A_n^{L})_{i-1,j-1}|$, $i,j=2,\dots, n+p-1$, can be bounded thanks to the results of Lemma~\ref{lem:elem-bound-AL}. We observe that for any fixed $i$, \begin{itemize} \item the number of indices in $\{j: \xi_j<\eta_i\leq \xi_{j+p+1}+\frac{1}{n} \}$ is bounded by $p+2$; \item for $j=n,\dots, n+p-1$ we have $\xi_{j+p+1}=1$, thus either $\eta_i\leq \xi_j$ or $ \xi_j<\eta_i\leq \xi_{j+p+1}+\frac{1}{n}$; \item if $\eta_i> \xi_{j+p+1}+\frac{1}{n}$, then $2\leq j\leq n-1$ and \begin{equation*} \eta_i-\xi_{j+p+1}=\eta_i-\frac{j}{n}\geq \frac{\ell_{i,j}}{n}, \end{equation*} where $\ell_{i,j}:=\lfloor n\eta_i-j\rfloor$.
Note that $\ell_{i,j}\geq 1$ and $\ell_{i,j}=\ell_{i,j+1}+1$, so $$ \sum_{j:\,\eta_i> \xi_{j+p+1}+\frac{1}{n}} (\ell_{i,j})^{-\alpha}\leq \sum_{\ell=1}^\infty\ell^{-\alpha}=\zeta(\alpha), $$ with $\zeta(\alpha)$ the Riemann zeta function evaluated at $\alpha$. The series $\sum_{\ell=1}^\infty\ell^{-\alpha}$ is convergent for $\alpha\in (1,2)$. \end{itemize} As a consequence, taking into account Lemma~\ref{lem:elem-bound-AL}, for any fixed $i$ we have \begin{align*} n^{-\alpha}\sum_{j=2}^{n+p-1}|(A_n^{L})_{i-1,j-1}| &\\ &\hspace{-40pt}\leq n^{-\alpha}\left[ \sum_{j:\, \xi_j<\eta_i\leq\xi_{j+p+1}+\frac{1}{n}}|(A_n^{L})_{i-1,j-1}| + \sum_{j:\, \eta_i>\xi_{j+p+1}+\frac{1}{n}}|(A_n^{L})_{i-1,j-1}| \right] \\ &\hspace{-40pt}\leq n^{-\alpha} \,c_{p,\alpha}^{L}\left[ (p+2) n^\alpha+\sum_{j:\, \eta_i>\xi_{j+p+1}+\frac{1}{n}}\left(\frac{\ell_{i,j}}{n}\right)^{-\alpha}\right]\\ &\hspace{-40pt}\leq c_{p,\alpha}^{L}\left[ p+2 +\zeta(\alpha)\right]. \end{align*} The bound for the 1-norm $$ \|n^{-\alpha}A_{n}^L\|_1 = n^{-\alpha}\max_{j=2,\ldots,n+p-1}\sum_{i=2}^{n+p-1}|(A_n^{L})_{i-1,j-1}| $$ can be shown with a similar line of arguments, by observing that for any fixed $j$, \begin{itemize} \item the number of indices in $\{i: \xi_j<\eta_i\leq \xi_{j+p+1}+\frac{1}{n} \}$ is bounded by $2p$; \item if $\eta_i> \xi_{j+p+1}+\frac{1}{n}$, then \begin{equation*} \eta_i-\xi_{j+p+1}\geq \frac{\ell_{i,j}}{n}, \end{equation*} where $\ell_{i,j}:=\lfloor n(\eta_i-\xi_{j+p+1})\rfloor$. Note that $\ell_{i,j}\geq 1$ for all $i$ and $\ell_{i+1,j}=\ell_{i,j}+1$ for $p+1\leq i \leq n-1$, so $$ \sum_{i:\,\eta_i> \xi_{j+p+1}+\frac{1}{n}} (\ell_{i,j})^{-\alpha}\leq 2(p-1)+\sum_{\ell=1}^\infty\ell^{-\alpha}=2(p-1)+\zeta(\alpha). $$ \end{itemize} Finally, the bound for the spectral norm follows from the inequality \begin{equation*} \|n^{-\alpha}A_n^L\|_2\leq \sqrt{\|n^{-\alpha}A_n^L\|_\infty\|n^{-\alpha}A_n^L\|_1} \end{equation*} and the above results for the infinity-norm and 1-norm. \end{proof} A similar reasoning to the one adopted in the previous lemmas brings us to the following result. \begin{lemma}\label{lem:AR_norm} Let $A_{n}^R$ be defined as in \eqref{eq:ALR}. We have \begin{equation*} \|n^{-\alpha}A_{n}^R\|_q \leq C_{p,\alpha}^{R}, \quad q\in\{1,2,\infty\}, \end{equation*} where $C_{p,\alpha}^{R}$ is a constant depending on $p$ and $\alpha$. \end{lemma} \begin{lemma}\label{lem:normR} Let $R_n^{p,\alpha}$ be defined as in \eqref{eq:toeplitz-part}. We have \begin{equation*} \|R_n^{p,\alpha}\|_2,\,\|R_n^{p,\alpha}\|_{1,\ast}\leq \widetilde{C}_{p,\alpha}, \end{equation*} where $\widetilde{C}_{p,\alpha}$ is a constant depending on $p$ and $\alpha$. \end{lemma} \begin{proof} The relation in \eqref{eq:coeff_matrix} implies $$ \|R_n^{p,\alpha}\|_2=\|n^{-\alpha}A_n^{p,\alpha}-T_n^{p,\alpha}\|_2\leq\|n^{-\alpha}A_n^{p,\alpha}\|_2+\|T_n^{p,\alpha}\|_2,$$ and we recall from Section~\ref{sec:symbol-properties} that $$ \|T_n^{p,\alpha}\|_2=\|T_{n+p-2}(f^{p,\alpha})\|_2\leq\|f^{p,\alpha}\|_\infty<+\infty. $$ Then, by Lemmas \ref{lem:AL_norm} and \ref{lem:AR_norm}, we arrive at $$ \|R_n^{p,\alpha}\|_2\leq \frac{1}{2\cos({\pi\alpha}/{2})}(C^L_{p,\alpha}+C^R_{p,\alpha})+\|f^{p,\alpha}\|_\infty.$$ In addition, $\|R_n^{p,\alpha}\|_{1,\ast}\leq\text{rank}(R_n^{p,\alpha})\|R_n^{p,\alpha}\|_2$ and $\text{rank}(R_n^{p,\alpha})\le4(p-1)$. This completes the proof. \end{proof} We are now in a position to discuss the spectral distribution of $\{n^{-\alpha}A_n^{p,\alpha}\}_n$.
\begin{theorem}\label{thm:spectral-A} Given $\{n^{-\alpha}A_n^{p,\alpha}\}_n$ with $A_n^{p,\alpha}$ as in \eqref{eq:coeff_matrix}, we have \begin{equation}\label{eq:distr-A} \{n^{-\alpha}A_n^{p,\alpha}\}_n\sim_\lambda(f^{p,\alpha},[-\pi,\pi]), \end{equation} where $f^{p,\alpha}$ is given in \eqref{eq:frac-symbol}. \end{theorem} \begin{proof} We prove this result by applying Corollary~\ref{cor:quasi-herm} with $X_n=T_n^{p,\alpha}$ and $Y_n=R_n^{p,\alpha}$. We first note that, because of Proposition~\ref{prop:distr-T}, condition (a) of Theorem~\ref{thm:quasi-herm} is satisfied. The other conditions in Corollary~\ref{cor:quasi-herm} hold by Lemma~\ref{lem:normR}, which proves the result \eqref{eq:distr-A}. \end{proof} \begin{remark} \label{rem:mim-A} Thanks to Theorem~\ref{thm:spectral-A}, the matrices $n^{-\alpha}A_n^{p,\alpha}$ and $T_n^{p,\alpha}$ are asymptotically spectrally equivalent, possibly up to a few outliers. As a consequence, the arguments given in Remark~\ref{rem:mim} apply unchanged when the aim is solving a linear system whose coefficient matrix is $n^{-\alpha}A_n^{p,\alpha}$ instead of $T_n^{p,\alpha}$. \end{remark} \begin{remark}\label{rem:adv-rea} \cmag{Let $A_n^{p,\alpha,\gamma,\rho}$ be the coefficient matrix corresponding to the B-spline collocation discretization of the advection-diffusion-reaction problem \begin{equation*} \begin{cases} \frac{{\rm d}^\alpha u(x)}{{\rm d}|x|^\alpha}+\gamma u'(x)+\rho u(x)=s(x), & x\in\Omega,\\ u(x)=0, & x\in\mathbb R\backslash\Omega, \end{cases} \end{equation*} with $\rho>0$ and $\gamma\in\mathbb R$. From Theorem~\ref{thm:spectral-A}, in combination with Theorem~\ref{thm:quasi-herm} and \cite[Lemma~4.1]{DGMSS}, we immediately deduce that} \begin{equation*} \{n^{-\alpha}A_n^{p,\alpha,\gamma,\rho}\}_n\sim_\lambda(f^{p,\alpha},[-\pi,\pi]). \end{equation*} \end{remark}
\section{Numerical experiments} \label{sec:numerics} In the following, we verify the spectral results obtained in Sections \ref{sec:symbol-properties} and \ref{sec:symbol} through several numerical experiments. We also provide a numerical study of the approximation behavior of the proposed polynomial B-spline collocation method for an arbitrary degree $p$. Let us start by illustrating that \begin{itemize} \item the symbol $f^{p,\alpha}$ has a single zero at $0$ of order $\alpha$ and it presents an exponential decay to zero at $\pi$ for increasing $p$; \item the symbol $f^{p,\alpha}$ satisfies the bounds in \eqref{eq:bound-high-freq} for odd $p$, and the bound in \eqref{eq:bound-high-freq-even} for even $p$; \item relations \eqref{eq:distr-T} and \eqref{eq:distr-A} hold. \end{itemize} \cmag{Note that it suffices to consider the interval $[0,\pi]$ due to the symmetry of $f^{p,\alpha}$; see Theorem~\ref{thm:bounds-frac-symb}.} Figure~\ref{fig:symbol} shows that, independently of $p$, the symbol $f^{p,\alpha}$ has a single zero at $0$ and the order of this zero increases up to 2 as $\alpha$ tends to 2. On the other hand, $f^{p,\alpha}$ presents a decay at $\pi$ as $p$ increases. We observe that such decay becomes faster when $\alpha$ decreases to 1, in accordance with Remark \ref{rem:alpha}.
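As a complement to the plots in Figure~\ref{fig:symbol}, the zero at $0$ and the decay at $\pi$ can also be checked directly from \eqref{eq:frac-symbol} by truncating the series; since its terms decay like $|2l\pi|^{\alpha-p-1}$ for $p>\alpha$, a moderate truncation level suffices. The following minimal sketch (in Python with NumPy; both the language and the truncation level $L=200$ are illustrative choices of ours, not taken from the experiments reported below) evaluates such a truncation:

\begin{verbatim}
import numpy as np

def symbol(theta, p, alpha, L=200):
    # Truncation of f^{p,alpha}(theta): the terms of the series decay
    # like |2*l*pi|^(alpha - p - 1), so L = 200 is ample for p > alpha.
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    l = np.arange(-L, L + 1)[:, None]
    x = theta[None, :] / 2 + l * np.pi          # theta/2 + l*pi
    safe = np.where(x == 0.0, 1.0, x)           # avoid 0/0 at theta = 0, l = 0
    s = np.where(x == 0.0, 1.0, np.sin(x) / safe)
    return np.sum(np.abs(theta[None, :] + 2 * l * np.pi) ** alpha
                  * s ** (p + 1), axis=0)

# zero of order alpha at 0 and decay at pi for increasing p:
th = np.linspace(0.0, np.pi, 7)
for p in (3, 5, 8):
    print(p, np.round(symbol(th, p, alpha=1.5), 6))
\end{verbatim}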
\begin{figure}[t] \centering \begin{subfigure}[$p=3$]{ \includegraphics[width=0.34\linewidth]{symbolplotp3}} \end{subfigure}\hspace*{-0.7cm} \begin{subfigure}[$p=5$]{ \includegraphics[width=0.34\linewidth]{symbolplotp5}} \end{subfigure}\hspace*{-0.7cm} \begin{subfigure}[$p=8$]{ \includegraphics[width=0.34\linewidth]{symbolplotp8}} \end{subfigure} \caption{Plot of $f^{p,\alpha}$ for $p=3,5,8$ and $\alpha=1.2,1.5,1.8,2$.} \label{fig:symbol} \end{figure} In Figure~\ref{fig:bounds} we show that, fixing $\alpha=1.3$, the bounds in \eqref{eq:bound-high-freq} hold for $p=3$, and the one in \eqref{eq:bound-high-freq-even} holds for $p=4$. Observe that, although relation \eqref{eq:bound-high-freq-even} is theoretically proven only for $\theta\in[a,\pi]$, $a=\left(\frac{\pi^4}{48}\right)^{1/\alpha}$, it actually also holds for all $\theta\in[1,a]$, i.e., for all values on the left of the black vertical line $\theta=a$ shown in Figure \ref{fig:boundsb}. \begin{figure}[t] \centering \begin{subfigure}[$p=3$]{ \includegraphics[width=0.46\linewidth]{bounds_podd_alpha1dot3}} \end{subfigure} \begin{subfigure}[$p=4$\label{fig:boundsb}]{ \includegraphics[width=0.46\linewidth]{bounds_peven_alpha1dot3}} \end{subfigure} \caption{(a) Check of the bound in \eqref{eq:bound-high-freq} which is valid for odd $p$, and (b) check of the bound in \eqref{eq:bound-high-freq-even} which is valid for even $p$. In both cases $\alpha$ has been fixed to $1.3$.}\label{fig:bounds} \end{figure} In order to numerically verify that relations \eqref{eq:distr-T} and \eqref{eq:distr-A} hold, for fixed $n$, $p$, we define the following equispaced grid on $[0,\pi]$: \begin{equation*} \varGamma:=\left\{\theta_{k}:=\frac{k\pi}{n+p-2}: k=1, \dots,n+p-2\right\}. \end{equation*} Then, we compare the sampling of $f^{p,\alpha}$ on $\varGamma$ with the eigenvalues of both $T_n^{p,\alpha}$ and $n^{-\alpha}A_{n}^{p,\alpha}$. Both eigenvalues and sampling values have been sorted in ascending order. In all the numerical experiments, the entries of the coefficient matrix $A_n^{p,\alpha}$ have been computed using the Gauss-Jacobi-type quadrature rules introduced in \cite{Pang}. In Figure~\ref{fig:compeig1dot8_100} we fix $p=3$, $n=63$ and vary $\alpha\in\{1.2,1.8\}$. For both $T_n^{p,\alpha}$ and $n^{-\alpha}A_{n}^{p,\alpha}$ we observe a very good match, which is in accordance with Proposition~\ref{prop:distr-T} and Theorem~\ref{thm:spectral-A}. However, we observe that in the case of $n^{-\alpha}A_{n}^{p,\alpha}$ there are a few large eigenvalues that do not behave like the symbol; these are the \cmag{outliers and their number is independent of $n$.} As a further confirmation of Proposition~\ref{prop:distr-T} and Theorem~\ref{thm:spectral-A}, we obtained similar results also for $p=4$, $n=62$, and $\alpha\in\{1.2,1.8\}$; see Figure~\ref{fig:compeig100}. We end this section by checking how the approximation order of the considered polynomial B-spline collocation method behaves with respect to $p$ for \cmag{smooth solutions} of problem \eqref{eq:FDE}. More precisely, in Tables \ref{tab:pol33}--\ref{tab:sin} we fix the source function $s(x)$ such that the exact solution of \eqref{eq:FDE} is given by \begin{itemize} \item $u(x)=x^3(1-x)^3$, and \item $u(x)=\sin(\pi x^2)$, \end{itemize} respectively. Then, by doubling $n$ repeatedly, we show the infinity-norm of the corresponding errors and the convergence orders for varying $p$ and $\alpha$.
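The convergence orders reported in Tables~\ref{tab:pol33} and~\ref{tab:sin} are the standard $\log_2$ ratios of consecutive errors under doubling of $n$. A minimal sketch of this post-processing (in Python with NumPy; an illustrative language choice of ours), applied to the $p=4$, $\alpha=1.2$ column of Table~\ref{tab:pol33}, reads:

\begin{verbatim}
import numpy as np

def orders(errors):
    # convergence orders log2(e_n / e_{2n}) from errors under doubling of n
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# errors for p = 4, alpha = 1.2 and u(x) = x^3 (1 - x)^3:
errs = [2.6802e-04, 1.0317e-05, 4.1887e-07, 1.6226e-08, 5.9986e-10]
print(np.round(orders(errs), 2))   # [4.7  4.62 4.69 4.76]
\end{verbatim}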
The infinity-norm of the error is computed by taking the maximum value of the error sampled in 1024 points uniformly distributed over $[0,1]$. \cmag{In the case of standard (non-fractional) diffusion problems, we know that the approximation order for smooth solutions is $p$ for even $p$, and $p-1$ for odd $p$; see \cite{ABHRS}. In the fractional case, we observe a dependency of the approximation order on $\alpha$ that seems to vary as $p+2-\alpha$ for even $p$, and as $p+1-\alpha$ for odd $p$.} \begin{figure}[t] \vspace*{-0.1cm} \centering \begin{subfigure}[$T_n^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigT_alpha1dot2p3n63}} \end{subfigure} \begin{subfigure}[$n^{-\alpha}A_{n}^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigA_alpha1dot2p3n63}} \end{subfigure} \\ \begin{subfigure}[$T_n^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigT_alpha1dot8p3n63}} \end{subfigure} \begin{subfigure}[$n^{-\alpha}A_{n}^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigA_alpha1dot8p3n63}} \end{subfigure} \caption{Comparison of the eigenvalues of $T_n^{p,\alpha}$ and $n^{-\alpha}A_{n}^{p,\alpha}$ (\textcolor{red}{${\text{\small o}}$}) with a uniform sampling of $f^{p,\alpha}$ on $\varGamma$, sorted in ascending order (\textcolor{blue}{$\ast$}), for $\alpha=1.2$ (top row) and $\alpha=1.8$ (bottom row), $n=63$, $p=3$.} \label{fig:compeig1dot8_100} \end{figure} \begin{figure}[t] \vspace*{-0.1cm} \centering \begin{subfigure}[$T_n^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigT_alpha1dot2p4n62}} \end{subfigure} \begin{subfigure}[$n^{-\alpha}A_{n}^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigA_alpha1dot2p4n62}} \end{subfigure} \\ \begin{subfigure}[$T_n^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigT_alpha1dot8p4n62}} \end{subfigure} \begin{subfigure}[$n^{-\alpha}A_{n}^{p,\alpha}$]{ \includegraphics[width=0.46\linewidth]{compeigA_alpha1dot8p4n62}} \end{subfigure} \caption{Comparison of the eigenvalues of $T_n^{p,\alpha}$ and $n^{-\alpha}A_{n}^{p,\alpha}$ (\textcolor{red}{${\text{\small o}}$}) with a uniform sampling of $f^{p,\alpha}$ on $\varGamma$, sorted in ascending order (\textcolor{blue}{$\ast$}), for $\alpha=1.2$ (top row) and $\alpha=1.8$ (bottom row), $n=62$, $p=4$.} \label{fig:compeig100} \end{figure} \begin{table}[htb] \scriptsize \centering \begin{tabular}{|c||c|cc|cc|cc|cc|} \hline & & & & & & &&& \\ \multirow{3}{*}{$\alpha$} & \multirow{3}{*}{$n$}& \multicolumn{2}{|c|}{$p=2$} &\multicolumn{2}{c|}{$p=3$} &\multicolumn{2}{c|}{$p=4$} &\multicolumn{2}{c|}{$p=5$} \\ \cline{3-10} & & & & & & &&& \\ & & Error & Order & Error & Order &Error & Order &Error & Order \\ \hline\hline & & & & & & &&& \\ \multirow{6}{*}{1.2} & 4 & 1.3146e-03 & & 1.1197e-03 & & 2.6802e-04 & & 4.8556e-05 & \\ & 8 & 1.5675e-04 & 3.07 & 9.8810e-05 & 3.50 & 1.0317e-05 & 4.70 & 3.4230e-06 & 3.83\\ & 16 & 2.4941e-05 & 2.65 & 1.5622e-05 & 2.66 & 4.1887e-07 & 4.62 & 1.3853e-07 & 4.63\\ & 32 & 3.5227e-06 & 2.82 & 2.4433e-06 & 2.68 & 1.6226e-08 & 4.69 & 5.1403e-09 & 4.75\\ & 64 & 5.0507e-07 & 2.80 & 3.6711e-07 & 2.73 & 5.9986e-10 & 4.76 & 2.1670e-10 & 4.57\\ & & & $\approx$2.8 & & $\approx$2.8 & & $\approx$4.8 & & $\approx$4.8\\ \hline & & & & & & &&& \\ \multirow{5}{*}{1.5} & 4 & 1.6170e-03 & & 1.8701e-03 & & 3.4358e-04 & & 8.0567e-05 & \\ & 8 & 1.7117e-04 & 3.24 & 2.0365e-04 & 3.20 & 1.9552e-05 & 4.14 & 7.4745e-06 & 3.43\\ & 16 & 3.1719e-05 & 2.43 & 2.8530e-05 & 2.84 & 1.0245e-06 & 4.25 & 3.7577e-07 & 4.31\\ & 32 & 5.8828e-06 & 2.43 & 5.7869e-06 & 2.30 &
4.9498e-08 & 4.37 & 1.7183e-08 & 4.45\\ & 64 & 1.0458e-06 & 2.49 & 1.0661e-06 & 2.44 & 2.2995e-09 & 4.43 & 8.0703e-10 & 4.41\\ & & & $\approx$2.5 & & $\approx$2.5& & $\approx$4.5 & & $\approx$4.5\\ \hline & & & & & & &&& \\ \multirow{5}{*}{1.8} & 4 & 1.9908e-03 & & 3.1774e-03 & & 4.3396e-04 & & 1.3425e-04 & \\ & 8 & 2.5091e-04 & 2.99 & 4.4181e-04 & 2.85 & 3.6073e-05 & 3.59 & 1.5905e-05 & 3.08\\ & 16 & 4.2953e-05 & 2.55 & 6.8611e-05 & 2.69 & 2.4045e-06 & 3.91 & 9.8386e-07 & 4.01\\ & 32 & 9.3400e-06 & 2.20 & 1.3336e-05 & 2.36 & 1.4401e-07 & 4.06 & 5.5249e-08 & 4.15\\ & 64 & 2.0702e-06 & 2.17 & 3.0292e-06 & 2.14 & 8.2251e-09 & 4.13 & 3.0499e-09 & 4.18\\ & & & $\approx$2.2 & &$\approx$2.2 & & $\approx$4.2 & & $\approx$4.2\\ \hline \hline \end{tabular} \caption{Errors and convergence orders of the proposed B-spline collocation method for problem \eqref{eq:FDE} when $u(x)=x^3(1-x)^3$.}\label{tab:pol33} \end{table} \begin{table}[htb] \scriptsize \centering \begin{tabular}{|c||c|cc|cc|cc|cc|} \hline & & & & & & &&& \\ \multirow{3}{*}{$\alpha$} & \multirow{3}{*}{$n$}& \multicolumn{2}{|c|}{$p=2$} &\multicolumn{2}{c|}{$p=3$} &\multicolumn{2}{c|}{$p=4$} &\multicolumn{2}{c|}{$p=5$} \\ \cline{3-10} & & & & & & &&& \\ & & Error & Order & Error & Order &Error & Order &Error & Order \\ \hline\hline & & & & & & &&& \\ \multirow{6}{*}{1.2} & 4 & 4.0099e-02 & & 1.5948e-02 & & 6.1393e-03 & & 1.9341e-03 & \\ & 8 & 8.4523e-03 & 2.25 & 4.6043e-03 & 1.79 & 2.5317e-04 & 4.60 & 1.0271e-04 & 4.23\\ & 16 & 1.1497e-03 & 2.88 & 7.8372e-04 & 2.55 & 7.9503e-06 & 4.99 & 2.5175e-06 & 5.35\\ & 32 & 1.6423e-04 & 2.81 & 1.1786e-04 & 2.73 & 2.5619e-07 & 4.96 & 7.1641e-08 & 5.14\\ & 64 & 2.3468e-05 & 2.81 & 1.7096e-05 & 2.79 & 9.5594e-09 & 4.74 & 1.0289e-08 & 2.80\\ & & & $\approx$2.8 & & $\approx$2.8& &$\approx$ 4.8 & & $\approx$4.8\\ \hline & & & & & & &&& \\ \multirow{5}{*}{1.5} & 4 & 4.2457e-02 & & 2.4735e-02 & & 7.7604e-03 & & 2.6753e-03 & \\ & 8 & 1.0378e-02 & 2.03 & 7.9809e-03 & 1.63 & 4.3612e-04 & 4.15 & 1.9027e-04 & 3.81\\ & 16 & 1.7932e-03 & 2.53 & 1.7304e-03 & 2.21 & 1.7374e-05 & 4.65 & 6.3744e-06 & 4.90\\ & 32 & 3.1466e-04 & 2.51 & 3.1905e-04 & 2.44 & 7.0999e-07 & 4.61 & 2.2599e-07 & 4.82\\ & 64 & 5.5887e-05 & 2.49 & 5.7202e-05 & 2.48 & 2.9859e-08 & 4.57 & 7.5065e-09 & 4.91\\ & & & $\approx$2.5 & &$\approx$2.5 & & $\approx$4.5 & & $\approx$4.5\\ \hline & & & & & & &&& \\ \multirow{5}{*}{1.8} & 4 & 4.2801e-02 & & 3.8129e-02 & & 9.6792e-03 & & 3.8393e-03 & \\ & 8 & 1.2259e-02 & 1.80 & 1.4094e-02 & 1.44 & 7.5244e-04 & 3.69 & 3.6381e-04 & 3.40\\ & 16 & 2.7540e-03 & 2.15 & 3.8466e-03 & 1.87 & 3.9382e-05 & 4.26 & 1.6023e-05 & 4.50\\ & 32 & 6.0215e-04 & 2.19 & 8.8181e-04 & 2.13 & 2.0021e-06 & 4.30 & 7.1827e-07 & 4.48\\ & 64 & 1.3172e-04 & 2.19 & 1.9414e-04 & 2.18 & 1.0435e-07 & 4.26 & 3.2796e-08 & 4.45\\ & & & $\approx$2.2 & &$\approx$2.2 & & $\approx$4.2 & & $\approx$4.2\\ \hline \hline \end{tabular} \caption{Errors and convergence orders of the proposed B-spline collocation method for problem \eqref{eq:FDE} when $u(x)=\sin(\pi x^2)$.}\label{tab:sin} \end{table} \section{Conclusion and future perspective}\label{sec:conclusions} We focused on a fractional differential equation in Riesz form discretized by a polynomial B-spline collocation method and we showed that, for an arbitrary degree $p$, the resulting coefficient matrices possess a Toeplitz-like structure. 
We computed the corresponding spectral symbol and we proved that it has a single zero at $0$ of order $\alpha$, with $\alpha$ the fractional derivative order that ranges from $1$ to $2$, and it presents an exponential decay to zero at $\pi$ for increasing $p$ that becomes faster as $\alpha$ approaches $1$. This translates into a mitigated conditioning in the low frequencies and into a deterioration in the high frequencies when compared to second order problems. Moreover, we showed that the behavior of the symbol at $\pi$ is well captured by the symbol corresponding to $\alpha=0$ which is a trigonometric polynomial bounded in the neighborhood of $\pi$. As a side result of the symbol computation, we ended up with a new way to express the central entries of the coefficient matrix as inner products of two fractional derivatives of cardinal B-splines. In addition, we performed a numerical study of the approximation behavior of polynomial B-spline collocation. This study suggests that the approximation order \cmag{for smooth solutions in the fractional case is $p+2-\alpha$ for even $p$, and $p+1-\alpha$ for odd $p$, which is in line with approximation results known for standard (non-fractional) diffusion problems \cite{ABHRS}.} \cmag{The investigation presented here is intended as a first step towards the use of collocation methods based on high-order polynomial B-splines for FDE problems. In particular, (locally) non-uniform knot sequences could be considered to improve accuracy for non-smooth solutions. In this perspective, B-spline collocation methods provide a robust, problem-independent tool to face FDE problems. This robustness makes them an appealing alternative to state-of-the-art methods, such as the elegant collocation/Galerkin spectral methods for approximating the solution of \eqref{eq:FDE} obtained by exploiting the connection between Jacobi polynomials and pseudo eigenfunctions of the Riesz fractional operator; see \cite{Mao-sinum-18,Mao-sisc-17} and references therein. The spectral analysis in the present work will provide strong guidance for forthcoming research. Indeed, the result in Theorem~\ref{thm:spectral-A} is a key ingredient for studying the symbol of matrices arising from B-spline collocation methods for more general FDE problems. In particular, additional reaction and advection terms do not modify the symbol of the corresponding matrices; see Remark~\ref{rem:adv-rea}. Furthermore, FDE problems involving non-constant coefficients can be addressed by applying the framework of GLT (Generalized Locally Toeplitz) sequences~\cite{GS}. Following the results in \cite{cmame2,sinum,mazza2016,sisc}, all the information provided by the symbol can be leveraged for the design of effective preconditioners and fast multigrid/multi-iterative solvers whose convergence speed is independent of the fineness parameters and the approximation parameters as well as of the fractional derivative order; see also Remarks~\ref{rem:mim} and~\ref{rem:mim-A}. } \section*{Acknowledgements} All authors are members of the INDAM research group GNCS. The first author was partly supported by the GNCS-INDAM Young Researcher Project 2020 titled \lq\lq Numerical methods for image restoration and cultural heritage deterioration''.
The last two authors are partially supported by the Beyond Borders Program of the University of Rome Tor Vergata through the project ASTRID (CUP E84I19002250005) and by the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata (CUP E83C18000100006). \bibliographystyle{amsplain}
{ "attr-fineweb-edu": 1.851562, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUb-fxK4sA-5fmv_We
\section{Introduction}\label{s1} To illustrate the reason behind the title of this paper, we briefly recall a celebrated result of Jost and Pais \cite{JP51}, who proved in 1951 a spectacular reduction of the Fredholm determinant associated with the Birman--Schwinger kernel of a one-dimensional Schr\"odinger operator on a half-line, to a simple Wronski determinant of distributional solutions of the underlying Schr\"odinger equation. This Wronski determinant also equals the so-called Jost function of the corresponding half-line Schr\"odinger operator. In this paper we prove a certain multi-dimensional variant of this result. To describe the result due to Jost and Pais \cite{JP51}, we need a few preparations. Denoting by $H_{0,+}^D$ and $H_{0,+}^N$ the one-dimensional Dirichlet and Neumann Laplacians in $L^2((0,\infty);dx)$, and assuming \begin{equation} V\in L^1((0,\infty);dx), \label{1.1} \end{equation} we introduce the perturbed Schr\"odinger operators $H_{+}^D$ and $H_{+}^N$ in $L^2((0,\infty);dx)$ by \begin{align} &H_{+}^Df=-f''+Vf, \notag \\ &f\in \text{\rm{dom}}\big(H_{+}^D\big)=\{g\in L^2((0,\infty); dx) \,|\, g,g' \in AC([0,R]) \text{ for all $R>0$}, \\ & \hspace*{4.95cm} g(0)=0, \, (-g''+Vg)\in L^2((0,\infty); dx)\}, \notag \\ &H_{+}^Nf=-f''+Vf, \notag \\ &f\in \text{\rm{dom}}\big(H_{+}^N\big)=\{g\in L^2((0,\infty); dx) \,|\, g,g' \in AC([0,R]) \text{ for all $R>0$}, \\ & \hspace*{4.85cm} g'(0)=0, \, (-g''+Vg)\in L^2((0,\infty); dx)\}. \notag \end{align} Thus, $H_{+}^D$ and $H_{+}^N$ are self-adjoint if and only if $V$ is real-valued, but since the latter restriction plays no special role in our results, we will not assume real-valuedness of $V$ throughout this paper. A fundamental system of solutions $\phi_+^D(z,\cdot)$, $\theta_+^D(z,\cdot)$, and the Jost solution $f_+(z,\cdot)$ of \begin{equation} -\psi''(z,x)+V\psi(z,x)=z\psi(z,x), \quad z\in{\mathbb{C}}\backslash\{0\}, \; x\geq 0, \label{1.4} \end{equation} are then introduced via the standard Volterra integral equations \begin{align} \phi_+^D(z,x)&=z^{-1/2}\sin(z^{1/2}x)+\int_0^x dx' \, z^{-1/2}\sin(z^{1/2}(x-x')) V(x')\phi_+^D(z,x'), \\ \theta_+^D(z,x)&=\cos(z^{1/2}x)+\int_0^x dx' \, z^{-1/2}\sin(z^{1/2}(x-x')) V(x')\theta_+^D (z,x'), \\ f_+(z,x)&=e^{iz^{1/2}x}-\int_x^\infty dx' \, z^{-1/2}\sin(z^{1/2}(x-x')) V(x')f_+(z,x'), \label{1.7} \\ &\hspace*{3.85cm} z\in{\mathbb{C}}\backslash\{0\}, \; \Im(z^{1/2})\geq 0, \; x\geq 0. \notag \end{align} In addition, we introduce \begin{equation} u=\exp(i\arg(V))\abs{V}^{1/2}, \quad v=\abs{V}^{1/2}, \, \text{ so that } \, V=u\, v, \end{equation} and denote by $I_+$ the identity operator in $L^2((0,\infty); dx)$. Moreover, we denote by \begin{equation} W(f,g)(x)=f(x)g'(x)-f'(x)g(x), \quad x \geq 0, \end{equation} the Wronskian of $f$ and $g$, where $f,g \in C^1([0,\infty))$. We also use the standard convention to abbreviate (with a slight abuse of notation) the operator of multiplication in $L^2((0,\infty);dx)$ by an element $f\in L^1_{\text{\rm{loc}}}((0,\infty);dx)$ (and similarly in the higher-dimensional context later) by the same symbol $f$ (rather than $M_f$, etc.). For additional notational conventions we refer to the paragraph at the end of this introduction. Then, the following results hold: \begin{theorem} \label{t1.1} Assume $V\in L^1((0,\infty);dx)$ and let $z\in{\mathbb{C}}\backslash [0,\infty)$ with $\Im(z^{1/2})>0$. 
Then, \begin{equation} \overline{u\big(H_{0,+}^D-z I_+\big)^{-1}v}, \, \overline{u\big(H_{0,+}^N-z I_+\big)^{-1}v} \in {\mathcal B}_1(L^2((0,\infty);dx)) \end{equation} and \begin{align} \det\Big(I_+ +\overline{u\big(H_{0, +}^D-z I_+\big)^{-1}v}\,\Big) &= 1+z^{-1/2}\int_0^\infty dx\, \sin(z^{1/2}x)V(x)f_+(z,x) \notag \\ &= W(f_+(z,\cdot),\phi_+^D(z,\cdot)) = f_+(z,0), \label{1.11} \\ \det\Big(I_+ +\overline{u\big(H_{0, +}^N-z I_+\big)^{-1}v}\,\Big) &= 1+ i z^{-1/2} \int_0^\infty dx\, \cos(z^{1/2}x)V(x)f_+(z,x) \notag \\ &= - \frac{W(f_+(z,\cdot),\theta_+^D (z,\cdot))}{i z^{1/2}} = \frac{f_+'(z,0)}{i z^{1/2}}. \label{1.12} \end{align} \end{theorem} Equation \eqref{1.11} is the modern formulation of the classical result due to Jost and Pais \cite{JP51} (cf.\ the detailed discussion in \cite{GM03}). Performing calculations similar to Section 4 in \cite{GM03} for the pair of operators $H_{0,+}^N$ and $H_+^N$, one obtains the analogous result \eqref{1.12}. For similar considerations in the context of finite interval problems, we refer to Dreyfus and Dym \cite{DD78} and Levit and Smilansky \cite{LS77}. We emphasize that \eqref{1.11} and \eqref{1.12} exhibit the remarkable fact that the Fredholm determinant associated with trace class operators in the infinite-dimensional space $L^2((0,\infty); dx)$ is reduced to a simple Wronski determinant of ${\mathbb{C}}$-valued distributional solutions of \eqref{1.4}. This fact goes back to Jost and Pais \cite{JP51} (see also \cite{GM03}, \cite{Ne72}, \cite{Ne80}, \cite[Sect.\ 12.1.2]{Ne02}, \cite{Si00}, \cite[Proposition 5.7]{Si05}, and the extensive literature cited in these references). The principal aim of this paper is to explore the extent to which this fact may generalize to higher dimensions $n\in{\mathbb{N}}$, $n\geq 2$. While a straightforward generalization of \eqref{1.11}, \eqref{1.12} appears to be difficult, we will next derive a formula for the ratio of such determinants which indeed permits a direct extension to higher dimensions. For this purpose we introduce the boundary trace operators $\gamma_D$ (Dirichlet trace) and $\gamma_N$ (Neumann trace) which, in the current one-dimensional half-line situation, are just the functionals, \begin{equation} \gamma_D \colon \begin{cases} C([0,\infty)) \to {\mathbb{C}}, \\ \hspace*{1.3cm} g \mapsto g(0), \end{cases} \quad \gamma_N \colon \begin{cases}C^1([0,\infty)) \to {\mathbb{C}}, \\ \hspace*{1.43cm} h \mapsto - h'(0). \end{cases} \end{equation} In addition, we denote by $m_{0,+}^D$, $m_+^D$, $m_{0,+}^N$, and $m_+^N$ the Weyl--Titchmarsh $m$-functions corresponding to $H_{0,+}^D$, $H_{+}^D$, $H_{0,+}^N$, and $H_{+}^N$, respectively, that is, \begin{align} m_{0,+}^D(z) &= i z^{1/2}, \qquad m_{0,+}^N (z)= -\frac{1}{m_{0,+}^D(z)} = i z^{-1/2}, \label{1.14} \\ m_{+}^D(z) &= \frac{f_+'(z,0)}{f_+(z,0)}, \quad m_{+}^N (z)= -\frac{1}{m_{+}^D(z)} = -\frac{f_+(z,0)}{f_+'(z,0)}. \label{1.15} \end{align} We briefly recall the spectral theoretic significance of $m_{+}^D$ in the special case where $V$ is real-valued: It is a Herglotz function (i.e., it maps the open complex upper half-plane ${\mathbb{C}}_+$ analytically into itself) and the measure $d\rho^D_+$ in its Herglotz representation is then the spectral measure of the operator $H_{+}^D$ and hence encodes all spectral information of $H_{+}^D$. 
Similarly, $m_{+}^D$ also encodes all spectral information of $H_{+}^N$ since $-1/m_{+}^D = m_{+}^N$ is also a Herglotz function and the measure $d\rho^N_+$ in its Herglotz representation represents the spectral measure of the operator $H_{+}^N$. In particular, $d\rho^D_+$ (respectively, $d\rho^N_+$) uniquely determines $V$ a.e.\ on $(0,\infty)$ by the inverse spectral approach of Gelfand and Levitan \cite{GL55} or Simon \cite{Si99}, \cite{GS00} (see also Remling \cite{Re03} and Section 6 in the survey \cite{Ge07}). Then we obtain the following result for the ratio of the perturbation determinants in \eqref{1.11} and \eqref{1.12}: \begin{theorem} \label{t1.2} Assume $V\in L^1((0,\infty);dx)$ and let $z\in{\mathbb{C}}\backslash\sigma(H_+^D)$ with $\Im(z^{1/2})>0$. Then, \begin{align} & \frac{\det\Big(I_+ +\overline{u\big(H_{0, +}^N-z I_+\big)^{-1} v}\,\Big)} {\det\Big(I_+ +\overline{u\big(H_{0, +}^D-z I_+\big)^{-1}v}\,\Big)} \notag \\ &\quad = 1 - \Big(\,\overline{\gamma_N(H_+^D-z I_+)^{-1}V \big[\gamma_D(H_{0,+}^N-\overline{z}I_+)^{-1}\big]^*}\,\Big) \label{1.16} \\ & \quad = - \frac{W(f_+(z,\cdot),\theta_+^D(z,\cdot))}{i z^{1/2}W(f_+(z,\cdot),\phi_+^D(z,\cdot))} = \frac{f'_+(z,0)}{i z^{1/2}f_+(z,0)} = \frac{m_+^D(z)}{m_{0,+}^D(z)} = \frac{m_{0,+}^N(z)}{m_+^N(z)}. \label{1.17} \end{align} \end{theorem} At first sight it may seem unusual to even attempt to derive \eqref{1.16} in the one-dimensional context since \eqref{1.17} already yields the reduction of a Fredholm determinant to a simple Wronski determinant. However, we will see in Section \ref{s4} (cf.\ Theorem \ref{t4.1}) that it is precisely \eqref{1.16} that permits a natural extension to dimensions $n\in{\mathbb{N}}$, $n\geq 2$. Moreover, the latter is also instrumental in proving the analog of \eqref{1.17} in terms of Dirichlet-to-Neumann maps (cf.\ Theorem \ref{t4.2}). In the proper multi-dimensional generalization to Schr\"odinger operators in $L^2(\Omega;d^n x)$, corresponding to an open set $\Omega \subset {\mathbb{R}}^n$ with compact, nonempty boundary $\partial\Omega$, the operator-valued generalization of the Weyl--Titchmarsh function $m_+^D(z)$ is then given by the Dirichlet-to-Neumann map, denoted by $M_{\Omega}^D(z)$. This operator-valued map indeed plays a fundamental role in our extension of \eqref{1.17} to the higher-dimensional case. In particular, under Hypothesis \ref{h2.6} on $\Omega$ and $V$ (which regulates smoothness properties of $\partial\Omega$ and $L^p$-properties of $V$), we will prove the following multi-dimensional extension of \eqref{1.16} and \eqref{1.17} in Section \ref{s4}: \begin{theorem} \label{t1.3} Assume Hypothesis \ref{h2.6} and let $k\in{\mathbb{N}}$, $k\geq p$ and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$. Then, \begin{align} & \frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} \notag \\ & \quad = \det{}_k\Big(I_{{\partial\Omega}} - \overline{\gamma_N\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1} V \big[\gamma_D(H_{0,\Omega}^N-\overline{z}I_{\Omega})^{-1}\big]^*}\,\Big) e^{\text{\rm{tr}}(T_k(z))} \label{1.18} \\ & \quad = \det{}_k\big(M_{\Omega}^{D}(z)M_{0,\Omega}^{D}(z)^{-1}\big) e^{\text{\rm{tr}}(T_k(z))}.
\label{1.19} \end{align} \end{theorem} Here, ${\det}_k(\cdot)$ denotes the modified Fredholm determinant in connection with ${\mathcal B}_k$ perturbations of the identity and $T_k(z)$ is some trace class operator. In particular, $T_2(z)$ is given by \begin{equation} T_2(z)=\overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1} V \big(H_{\Omega}^D-zI_{\Omega}\big)^{-1} V \big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*}, \end{equation} where $I_{\Omega}$ and $I_{\partial\Omega}$ represent the identity operators in $L^2(\Omega; d^n x)$ and $L^2(\partial\Omega; d^{n-1} \sigma)$, respectively (with $d^{n-1}\sigma$ denoting the surface measure on $\partial\Omega$). The sudden appearance of the term $\exp(\text{\rm{tr}}(T_k(z)))$ in \eqref{1.18} and \eqref{1.19}, when compared to the one-dimensional case, is due to the necessary use of the modified determinant ${\det}_k(\cdot)$ in Theorem \ref{t1.3}. We note that the multi-dimensional extension \eqref{1.18} of \eqref{1.16}, under the stronger hypothesis $V\in L^2(\Omega; d^n x)$, $n=2,3$, first appeared in \cite{GLMZ05}. However, the present results in Theorem \ref{t1.3} go decidedly beyond those in \cite{GLMZ05} in the following sense: $(i)$ the class of domains $\Omega$ permitted by Hypothesis \ref{h2.6} (actually, Hypothesis \ref{h2.1}) is greatly enlarged as compared to \cite{GLMZ05}; $(ii)$ the multi-dimensional extension \eqref{1.19} of \eqref{1.17} invoking Dirichlet-to-Neumann maps is a new (and the most significant) result in this paper; $(iii)$ while \cite{GLMZ05} focused on dimensions $n=2,3$, we now treat the general case $n\in{\mathbb{N}}$, $n\geq 2$; $(iv)$ we provide an application involving eigenvalue counting functions at the end of Section \ref{s4}; $(v)$ we study a representation of the product formula for modified Fredholm determinants, which should be of independent interest, at the beginning of Section \ref{s4}. The principal reduction in Theorem \ref{t1.3} reduces (a ratio of) modified Fredholm determinants associated with operators in $L^2(\Omega; d^n x)$ on the left-hand side of \eqref{1.18} to modified Fredholm determinants associated with operators in $L^2(\partial\Omega; d^{n-1} \sigma)$ on the right-hand side of \eqref{1.18} and especially, in \eqref{1.19}. This is the analog of the reduction described in the one-dimensional context of Theorem \ref{t1.2}, where $\Omega$ corresponds to the half-line $(0,\infty)$ and its boundary $\partial\Omega$ corresponds to the one-point set $\{0\}$. As a result, the ratio of determinants on the left-hand side of \eqref{1.16} associated with operators in $L^2((0,\infty); dx)$ is reduced to ratios of Wronskians and Weyl--Titchmarsh functions on the right-hand side of \eqref{1.16} and in \eqref{1.17}. Finally, we briefly list most of the notational conventions used throughout this paper. Let ${\mathcal H}$ be a separable complex Hilbert space, $(\cdot,\cdot)_{{\mathcal H}}$ the scalar product in ${\mathcal H}$ (linear in the second factor), and $I_{{\mathcal H}}$ the identity operator in ${\mathcal H}$. Next, let $T$ be a linear operator mapping (a subspace of) a Banach space into another, with $\text{\rm{dom}}(T)$ and $\text{\rm{ran}}(T)$ denoting the domain and range of $T$. The closure of a closable operator $S$ is denoted by $\overline S$. The kernel (null space) of $T$ is denoted by $\ker(T)$. The spectrum and resolvent set of a closed linear operator in ${\mathcal H}$ will be denoted by $\sigma(\cdot)$ and $\rho(\cdot)$. 
The Banach spaces of bounded and compact linear operators in ${\mathcal H}$ are denoted by ${\mathcal B}({\mathcal H})$ and ${\mathcal B}_\infty({\mathcal H})$, respectively. Similarly, the Schatten--von Neumann (trace) ideals will subsequently be denoted by ${\mathcal B}_k({\mathcal H})$, $k\in{\mathbb{N}}$. Analogous notation ${\mathcal B}({\mathcal H}_1,{\mathcal H}_2)$, ${\mathcal B}_\infty ({\mathcal H}_1,{\mathcal H}_2)$, etc., will be used for bounded, compact, etc., operators between two Hilbert spaces ${\mathcal H}_1$ and ${\mathcal H}_2$. In addition, $\text{\rm{tr}}(T)$ denotes the trace of a trace class operator $T\in{\mathcal B}_1({\mathcal H})$ and $\det_{k}(I_{{\mathcal H}}+S)$ represents the (modified) Fredholm determinant associated with an operator $S\in{\mathcal B}_k({\mathcal H})$, $k\in{\mathbb{N}}$ (for $k=1$ we omit the subscript $1$). Moreover, ${\mathcal X}_1 \hookrightarrow {\mathcal X}_2$ denotes the continuous embedding of the Banach space ${\mathcal X}_1$ into the Banach space ${\mathcal X}_2$. For general references on the theory of (modified) Fredholm determinants we refer, for instance, to \cite[Sect.\ XI.9]{DS88}, \cite[Chs.\ IX--XI]{GGK00}, \cite[Sect.\ 4.2]{GK69}, \cite[Sect.\ XIII.17]{RS78}, \cite{Si77}, and \cite[Ch.\ 9]{Si05}.
\section{Schr\"odinger Operators with Dirichlet and Neumann boundary conditions} \label{s2} In this section we primarily focus on various properties of Dirichlet, $H^D_{0,\Omega}$, and Neumann, $H^N_{0,\Omega}$, Laplacians in $L^2(\Omega;d^n x)$ associated with open sets $\Omega\subset {\mathbb{R}}^n$, $n\in{\mathbb{N}}$, $n\geq 2$, introduced in Hypothesis \ref{h2.1} below. In particular, we study mapping properties of $\big(H^{D,N}_{0,\Omega}-zI_{\Omega}\big)^{-q}$, $q\in [0,1]$ (with $I_{\Omega}$ the identity operator in $L^2(\Omega; d^n x)$) and trace ideal properties of the maps $f \big(H^{D,N}_{0,\Omega}-zI_{\Omega}\big)^{-q}$, $f\in L^p(\Omega; d^n x)$, for appropriate $p\geq 2$, as well as of $\gamma_N \big(H^{D}_{0,\Omega}-zI_{\Omega}\big)^{-r}$ and $\gamma_D \big(H^{N}_{0,\Omega}-zI_{\Omega}\big)^{-s}$, for appropriate $r>3/4$, $s>1/4$, with $\gamma_N$ and $\gamma_D$ being the Neumann and Dirichlet boundary trace operators defined in \eqref{2.2} and \eqref{2.3}. At the end of this section we then introduce the Dirichlet and Neumann Schr\"odinger operators $H^D_{\Omega}$ and $H^N_{\Omega}$ in $L^2(\Omega;d^nx)$, that is, perturbations of the Dirichlet and Neumann Laplacians $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$ by a potential $V$ satisfying Hypothesis \ref{h2.6}. We start by introducing our assumptions on the set $\Omega$: \begin{hypothesis} \label{h2.1} Let $n\in{\mathbb{N}}$, $n\geq 2$, and assume that $\Omega\subset{{\mathbb{R}}}^n$ is an open set with a compact, nonempty boundary $\partial\Omega$. In addition, we assume that one of the following three conditions holds: \\ $(i)$ \, $\Omega$ is of class $C^{1,r}$ for some $1/2 < r <1$; \\ $(ii)$ \hspace*{.0001pt} $\Omega$ is convex; \\ $(iii)$ $\Omega$ is a Lipschitz domain satisfying a {\it uniform exterior ball condition} $($UEBC\,$)$. \end{hypothesis} We note that while ${\partial\Omega}$ is assumed to be compact, $\Omega$ may be unbounded in connection with conditions $(i)$ or $(iii)$. For more details in this context we refer to Appendix \ref{sA}. First, we introduce the boundary trace operator $\gamma_D^0$ (Dirichlet trace) by \begin{equation} \gamma_D^0\colon C(\overline{\Omega})\to C({\partial\Omega}), \quad \gamma_D^0 u = u|_{\partial\Omega}.
\end{equation} Then there exists a bounded, linear operator $\gamma_D$ (cf.\ \cite[Theorem 3.38]{Mc00}), \begin{equation} \gamma_D\colon H^{s}(\Omega)\to H^{s-(1/2)}({\partial\Omega}) \hookrightarrow L^2(\dOm;d^{n-1}\si), \quad 1/2<s<3/2, \label{2.2} \end{equation} whose action is compatible with that of $\gamma_D^0$. That is, the two Dirichlet trace operators coincide on the intersection of their domains. We recall that $d^{n-1}\sigma$ denotes the surface measure on ${\partial\Omega}$ and we refer to Appendix \ref{sA} for our notation in connection with Sobolev spaces. Next, we introduce the operator $\gamma_N$ (Neumann trace) by \begin{align} \gamma_N = \nu\cdot\gamma_D\nabla \colon H^{s+1}(\Omega)\to L^2(\dOm;d^{n-1}\si), \quad 1/2<s<3/2, \label{2.3} \end{align} where $\nu$ denotes the outward pointing unit normal vector to $\partial\Omega$. It follows from \eqref{2.2} that $\gamma_N$ is also a bounded operator. Given Hypothesis \ref{h2.1}, we introduce the self-adjoint and nonnegative Dirichlet and Neumann Laplacians $H_{0,\Omega}^D$ and $H_{0,\Omega}^N$ associated with the domain $\Omega$ as follows, \begin{align} H^D_{0,\Omega} = -\Delta, \quad \text{\rm{dom}}\big(H^D_{0,\Omega}\big) = \{u\in H^{2}(\Omega) \,|\, \gamma_D u = 0\}, \label{2.4} \\ H^N_{0,\Omega} = -\Delta, \quad \text{\rm{dom}}\big(H^N_{0,\Omega}\big) = \{u\in H^{2}(\Omega) \,|\, \gamma_N u = 0\}. \label{2.5} \end{align} A detailed discussion of $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$ is provided in Appendix \ref{sA}. \begin{lemma} \label{l2.2} Assume Hypothesis \ref{h2.1}. Then the operators $H_{0,\Omega}^D$ and $H_{0,\Omega}^N$ introduced in \eqref{2.4} and \eqref{2.5} are nonnegative and self-adjoint in $L^2(\Om;d^nx)$ and the following boundedness properties hold for all $q\in [0,1]$ and $z\in{\mathbb{C}}\backslash[0,\infty)$, \begin{align} \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-q},\, \big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-q} \in{\mathcal B}\big(L^2(\Om;d^nx),H^{2q}(\Omega)\big). \label{2.6} \end{align} \end{lemma} The fractional powers in \eqref{2.6} (and in subsequent analogous cases) are defined via the functional calculus implied by the spectral theorem for self-adjoint operators. As explained in Appendix \ref{sA} (cf.\ particularly Lemma \ref{lA.2}), the key ingredients in proving Lemma \ref{l2.2} are the inclusions \begin{equation} \text{\rm{dom}}\big(H_{0,\Omega}^D\big) \subset H^2(\Omega), \quad \text{\rm{dom}}\big(H^N_{0,\Omega}\big) \subset H^2(\Omega) \end{equation} and methods based on real interpolation spaces. For the remainder of this paper we agree to the simplified notation that the operator of multiplication by the measurable function $f$ in $L^2(\Om;d^nx)$ is again denoted by the symbol $f$. The next result is an extension of \cite[Lemma 6.8]{GLMZ05} and aims at an explicit discussion of the $z$-dependence of the constant $c$ appearing in estimate (6.48) of \cite{GLMZ05}. \begin{lemma} \label{l2.3} Assume Hypothesis \ref{h2.1} and let $p\geq2$, $(n/2p)<q\leq1$, $f\in L^p(\Omega;d^nx)$, and $z\in{\mathbb{C}}\backslash[0,\infty)$.
Then, \begin{align} \label{2.7} f\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-q}, \, f\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-q} \in{\mathcal B}_p\big(L^2(\Om;d^nx)\big), \end{align} and for some $c>0$ $($independent of $z$ and $f$\,$)$ \begin{align} \begin{split} &\big\| f\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-q}\big\|_{{\mathcal B}_p(L^2(\Om;d^nx))}^2 \\ & \qquad \leq c\bigg(1+\frac{\abs{z}^{2q}+1}{\text{\rm{dist}}\big(z,\sigma\big(H_{0,\Omega}^D\big)\big)^{2q}}\bigg) \|(\abs{\cdot}^2-z)^{-q}\|_{L^p({\mathbb{R}}^n;d^nx)}^2 \|f\|_{L^p(\Omega;d^nx)}^2, \\ &\big\| f\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-q}\big\|_{{\mathcal B}_p(L^2(\Om;d^nx))}^2 \\ & \qquad \leq c\bigg(1+\frac{\abs{z}^{2q}+1}{\text{\rm{dist}}\big(z,\sigma\big(H_{0,\Omega}^N\big)\big)^{2q}}\bigg) \|(\abs{\cdot}^2-z)^{-q}\|_{L^p({\mathbb{R}}^n;d^nx)}^2 \|f\|_{L^p(\Omega;d^nx)}^2. \end{split} \label{2.8} \end{align} \end{lemma} \begin{proof} We start by noting that under the assumption that $\Omega$ is a Lipschitz domain, there is a bounded extension operator ${\mathcal E}$, \begin{equation} {\mathcal E}\in{\mathcal B}\big(H^{s}(\Omega),H^{s}({\mathbb{R}}^n)\big) \, \text{ such that } \, ({\mathcal E} u){|_\Omega} = u, \quad u\in H^{s}(\Omega), \label{2.9} \end{equation} for all $s\in{\mathbb{R}}$ (see, e.g., \cite{Ry99}). Next, for notational convenience, we denote by $H_{0,\Omega}$ either one of the operators $H_{0,\Omega}^D$ or $H_{0,\Omega}^N$ and by ${\mathcal R}_{\Omega}$ the restriction operator \begin{equation} {\mathcal R}_{\Omega}\colon \begin{cases} L^2({\mathbb{R}}^n;d^nx) \to L^2(\Om;d^nx), \\ \hspace*{1.65cm}u \mapsto u|_{\Omega}. \end{cases} \end{equation} Moreover, we introduce the following extension $\tilde f$ of $f$, \begin{equation} \tilde f(x) = \begin{cases}f(x), & x\in\Omega, \\ 0, & x\in{\mathbb{R}}^n\backslash\Omega, \end{cases} \quad \tilde f\in L^p({\mathbb{R}}^n;d^nx). \end{equation} Then, \begin{equation} \label{2.12} f (H_{0,\Omega}-zI_{\Omega})^{-q}= {\mathcal R}_\Omega {\tilde f} (H_{0}-zI)^{-q}(H_{0}-zI)^{q}{\mathcal E} (H_{0,\Omega}-zI_{\Omega})^{-q}, \end{equation} where (for simplicity) $I$ denotes the identity operator in $L^2({\mathbb{R}}^n;d^nx)$ and $H_0$ denotes the nonnegative self-adjoint operator \begin{equation} H_0 = -\Delta, \quad \text{\rm{dom}}(H_0)=H^{2}({\mathbb{R}}^n) \end{equation} in $L^2({\mathbb{R}}^n;d^nx)$. Let $g\in L^2(\Om;d^nx)$ and define $h=(H_{0,\Omega}-zI_{\Omega})^{-q}g$. Then, by Lemma \ref{lA.2}, $h\in H^{2q}(\Omega)\subset L^2(\Om;d^nx)$.
Using the spectral theorem for the nonnegative self-adjoint operator $H_{0,\Omega}$ in $L^2(\Om;d^nx)$, one computes, \begin{align} \label{2.14} \norm{h}_{L^2(\Om;d^nx)}^2 &= \norm{(H_{0,\Omega}-zI_{\Omega})^{-q}g}_{L^2(\Om;d^nx)}^2 \notag \\ &= \int_{\sigma(H_{0,\Omega})} \abs{\lambda-z}^{-2q}\big(dE_{H_{0,\Omega}}(\lambda)g,g\big)_{L^2(\Om;d^nx)} \\ &\leq \text{\rm{dist}}(z,\sigma(H_{0,\Omega}))^{-2q}\norm{g}_{L^2(\Om;d^nx)}^2 \notag \end{align} and since $(H_{0,\Omega}+I_{\Omega})^{-q}\in{\mathcal B}(L^{2}(\Omega;d^nx),H^{2q}(\Omega))$, \begin{align} \label{2.15} \norm{h}_{H^{2q}(\Omega)}^2 &= \norm{(H_{0,\Omega}+I_{\Omega})^{-q}(H_{0,\Omega}+I_{\Omega})^{q}h}_{H^{2q}(\Omega)}^2 \leq c \norm{(H_{0,\Omega}+I_{\Omega})^{q}h}_{L^{2}(\Omega;d^nx)}^2 \notag \\ &= c\int_{\sigma(H_{0,\Omega})} \abs{\lambda+1}^{2q}\big(dE_{H_{0,\Omega}}(\lambda)h,h\big)_{L^2(\Om;d^nx)} \notag \\ &\leq 2c\int_{\sigma(H_{0,\Omega})} \big(\abs{\lambda-z}^{2q}+\abs{z+1}^{2q}\big) \big(dE_{H_{0,\Omega}}(\lambda)h,h\big)_{L^2(\Om;d^nx)} \\ &= 2c\big(\norm{(H_{0,\Omega}-zI_{\Omega})^{q}h}_{L^2(\Om;d^nx)}^2 + \abs{z+1}^{2q}\norm{h}_{L^2(\Om;d^nx)}^2 \big)\notag \\ &\leq 2c\big(1+\abs{z+1}^{2q}\text{\rm{dist}}(z,\sigma(H_{0,\Omega}))^{-2q}\big) \norm{g}_{L^2(\Om;d^nx)}^2, \notag \end{align} where $E_{H_{0,\Omega}}(\cdot)$ denotes the family of spectral projections of $H_{0,\Omega}$. Moreover, utilizing the representation of $(H_{0}-zI)^{q}$ as the operator of multiplication by $\big(\abs{\xi}^2-z\big)^{q}$ in the Fourier space $L^2({\mathbb{R}}^n;d^n\xi)$, and the fact that by \eqref{2.9} \begin{equation} {\mathcal E}\in{\mathcal B}\big(H^{2q}(\Omega),H^{2q}({\mathbb{R}}^n)\big) \cap {\mathcal B}\big(L^2(\Om;d^nx),L^2({\mathbb{R}}^n;d^nx)\big), \end{equation} one computes \begin{align} \label{2.17} \begin{split} \norm{(H_{0}-zI)^{q}{\mathcal E} h}_{L^2({\mathbb{R}}^n;d^nx)}^2 &= \int_{{\mathbb{R}}^n} d^n\xi\, \big|\abs{\xi}^2-z\big|^{2q} \, \abs{(\ha{{\mathcal E} h})(\xi)}^2 \\ &\leq 2\int_{{\mathbb{R}}^n} d^n \xi \big(\abs{\xi}^{4q}+\abs{z}^{2q}\big)\abs{(\ha{{\mathcal E} h})(\xi)}^2 \\ &\leq 2\big(\norm{{\mathcal E} h}_{H^{2q}({\mathbb{R}}^n)}^2 + \abs{z}^{2q}\norm{{\mathcal E} h}_{L^2({\mathbb{R}}^n;d^nx)}^2\big) \\ &\leq 2c\big(\norm{h}_{H^{2q}(\Omega)}^2 + \abs{z}^{2q}\norm{h}_{L^2(\Om;d^nx)}^2\big). \end{split} \end{align} Combining the estimates \eqref{2.14}, \eqref{2.15}, and \eqref{2.17}, one obtains \begin{align} (H_{0}-zI)^{q}{\mathcal E} (H_{0,\Omega}-zI_{\Omega})^{-q} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2({\mathbb{R}}^n;d^nx)\big) \label{2.18} \end{align} and the following norm estimate with some constant $c>0$, \begin{align} \norm{(H_{0}-zI)^{q}{\mathcal E} (H_{0,\Omega}-zI_{\Omega})^{-q}}_{ {\mathcal B}(L^2(\Om;d^nx),L^2({\mathbb{R}}^n;d^nx)) }^2 \leq c+\frac{c(\abs{z}^{2q}+1)}{\text{\rm{dist}}(z,\sigma(H_{0,\Omega}))^{2q}}. \label{2.19} \end{align} Next, by \cite[Theorem 4.1]{Si05} (or \cite[Theorem XI.20]{RS79}) one obtains \begin{align} \tilde f(H_{0}-zI)^{-q}\in{\mathcal B}_p\big(L^2({\mathbb{R}}^n;d^nx)\big) \label{2.20} \end{align} and \begin{align}\begin{split} \big\|\tilde f(H_{0}-zI)^{-q}\big\|_{{\mathcal B}_p(L^2({\mathbb{R}}^n;d^nx))} &\leq c\, \|(\abs{\cdot}^2-z)^{-q}\|_{L^p({\mathbb{R}}^n;d^nx)} \|\tilde f\|_{L^p({\mathbb{R}}^n;d^nx)} \\&= c\, \|(\abs{\cdot}^2-z)^{-q}\|_{L^p({\mathbb{R}}^n;d^nx)} \|f\|_{L^p(\Omega;d^nx)}.
\label{2.21} \end{split}\end{align} Thus, \eqref{2.7} follows from \eqref{2.12}, \eqref{2.18}, and \eqref{2.20}, while \eqref{2.8} follows from \eqref{2.12}, \eqref{2.19}, and \eqref{2.21}. \end{proof} Next, we recall certain mapping properties of powers of the resolvents of Dirichlet and Neumann Laplacians multiplied by the Neumann and Dirichlet boundary trace operators, respectively: \begin{lemma} \label{l2.4} Assume Hypothesis \ref{h2.1} and let $\varepsilon > 0$, $z\in{\mathbb{C}}\backslash[0,\infty)$. Then, \begin{align} \gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-(3+\varepsilon)/4}, \gamma_D\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-(1+\varepsilon)/4} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big). \label{2.22} \end{align} \end{lemma} As in \cite[Lemma 6.9]{GLMZ05}, Lemma \ref{l2.4} follows from Lemma \ref{l2.2} and from \eqref{2.2} and \eqref{2.3}. \begin{corollary} \label{c2.5} Assume Hypothesis \ref{h2.1} and let $f_1\in L^{p_1}(\Omega;d^nx)$, $p_1\geq 2$, $p_1>2n/3$, $f_2\in L^{p_2}(\Omega;d^nx)$, $p_2 > 2n$, and $z\in{\mathbb{C}}\backslash[0,\infty)$. Then, denoting by $f_1$ and $f_2$ the operators of multiplication by the functions $f_1$ and $f_2$ in $L^2(\Om;d^nx)$, respectively, one has \begin{align} \overline{\gamma_D\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}f_1} &\in {\mathcal B}_{p_1}\big(L^2(\Omega;d^nx),L^2({\partial\Omega};d^{n-1}\sigma)\big), \label{2.25} \\ \overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}f_2} &\in {\mathcal B}_{p_2}\big(L^2(\Omega;d^nx),L^2({\partial\Omega};d^{n-1}\sigma)\big) \label{2.26} \end{align} and for some $c_j(z)>0$ $($independent of $f_j$$)$, $j=1,2$, \begin{align} \Big\| \, \overline{\gamma_D\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}f_1}\, \Big\|_ {{\mathcal B}_{p_1}(L^2(\Omega;d^nx),L^2({\partial\Omega};d^{n-1}\sigma))} &\leq c_1(z) \norm{f_1}_{L^{p_1}(\Omega;d^nx)}, \label{2.27} \\ \Big\| \, \overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}f_2}\, \Big\|_ {{\mathcal B}_{p_2}(L^2(\Omega;d^nx),L^2({\partial\Omega};d^{n-1}\sigma))} &\leq c_2(z) \norm{f_2}_{L^{p_2}(\Omega;d^nx)}. \label{2.28} \end{align} \end{corollary} As in \cite[Corollary 6.10]{GLMZ05}, Corollary \ref{c2.5} follows from Lemmas \ref{l2.3} and \ref{l2.4}. Finally, we turn to our assumptions on the potential $V$ and the corresponding definition of Dirichlet and Neumann Schr\"odinger operators $H^D_{\Omega}$ and $H^N_{\Omega}$ in $L^2(\Omega; d^n x)$: \begin{hypothesis} \label{h2.6} Suppose that $\Omega$ satisfies Hypothesis \ref{h2.1} and assume that $V\in L^p(\Omega;d^nx)$ for some $p$ satisfying $p>4/3$ in the case $n=2$, and $p>n/2$ in the case $n\geq3$. \end{hypothesis} Assuming Hypothesis \ref{h2.6}, we next introduce the perturbed operators $H_{\Omega}^D$ and $H_{\Omega}^N$ in $L^2(\Om;d^nx)$ by alluding to abstract perturbation results summarized in Appendix \ref{sB} as follows: Let $V$, $u$, and $v$ denote the operators of multiplication by the functions $V$, $u=\exp(i\arg(V))\abs{V}^{1/2}$, and $v=\abs{V}^{1/2}$ in $L^{2}(\Omega;d^nx)$, respectively.
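Here $uv=V$ pointwise and $\abs{u}=\abs{v}=\abs{V}^{1/2}$, so that $u,v\in L^{2p}(\Omega;d^nx)$ precisely when $V\in L^{p}(\Omega;d^nx)$; for real-valued $V$ this is just the familiar symmetrized factorization $V=\text{\rm{sign}}(V)\abs{V}^{1/2}\cdot\abs{V}^{1/2}$.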
Since $u,v\in L^{2p}(\Omega;d^nx)$, Lemma \ref{l2.3} yields \begin{align} u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1/2}, \, \overline{\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1/2}v} &\in{\mathcal B}_{2p}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\backslash [0,\infty), \label{2.31} \\ u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1/2}, \, \overline{\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1/2}v} &\in{\mathcal B}_{2p}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\backslash [0,\infty), \label{2.32} \end{align} and hence, in particular, \begin{align} & \text{\rm{dom}}(u)=\text{\rm{dom}}(v) \supseteq H^{1}(\Omega) \supset H^{2}(\Omega) \supset \text{\rm{dom}}\big(H^N_{0,\Omega}\big), \\ & \text{\rm{dom}}(u)=\text{\rm{dom}}(v) \supseteq H^{1}(\Omega) \supseteq H^{1}_0(\Omega) \supset \text{\rm{dom}}\big(H^D_{0,\Omega}\big). \end{align} Thus, the operators $H_{0,\Omega}^D$, $H_{0,\Omega}^N$, $u$, and $v$ satisfy Hypothesis \ref{hB.1}\,$(i)$. Moreover, \eqref{2.31} and \eqref{2.32} imply \begin{equation} \overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}, \, \overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v} \in{\mathcal B}_p\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\backslash [0,\infty), \label{2.35} \end{equation} which verifies Hypothesis \ref{hB.1}\,$(ii)$ for $H_{0,\Omega}^D$ and $H_{0,\Omega}^N$. Utilizing \eqref{2.8} in Lemma \ref{l2.3} with $-z>0$ sufficiently large, such that the ${\mathcal B}_{2p}$-norms of the operators in \eqref{2.31} and \eqref{2.32} are less than 1, and hence the ${\mathcal B}_p$-norms of the operators in \eqref{2.35} are less than 1, one also verifies Hypothesis \ref{hB.1}\,$(iii)$. Thus, applying Theorem \ref{tB.2} one obtains the densely defined, closed operators $H_{\Omega}^D$ and $H_{\Omega}^N$ (which are extensions of $H_{0,\Omega}^D+V$ on $\text{\rm{dom}}\big(H_{0,\Omega}^D\big)\cap\text{\rm{dom}}(V)$ and $H_{0,\Omega}^N+V$ on $\text{\rm{dom}}\big(H_{0,\Omega}^N\big)\cap\text{\rm{dom}}(V)$, respectively). In particular, the resolvent of $H_{\Omega}^D$ (respectively, $H_{\Omega}^N$) is explicitly given by the analog of \eqref{B.5} in terms of the resolvent of $H_{0, \Omega}^D$ (respectively, $H_{0, \Omega}^N$) and the factorization $V=uv$. We note in passing that \eqref{2.6}--\eqref{2.8}, \eqref{2.22}, \eqref{2.25}--\eqref{2.28}, \eqref{2.31}, \eqref{2.32}, \eqref{2.35}, etc., of course extend to all $z$ in the resolvent set of the corresponding operators $H_{0,\Omega}^D$ and $H_{0,\Omega}^N$. \section{Dirichlet and Neumann boundary value problems \\ and Dirichlet-to-Neumann maps} \label{s3} This section is devoted to Dirichlet and Neumann boundary value problems associated with the Helmholtz differential expression $-\Delta - z$ as well as the corresponding differential expression $-\Delta + V - z$ in the presence of a potential $V$, both in connection with the open set $\Omega$. In addition, we provide a detailed discussion of Dirichlet-to-Neumann, $M^D_{0,\Omega}$, $M^D_{\Omega}$, and Neumann-to-Dirichlet maps, $M^N_{0,\Omega}$, $M^N_{\Omega}$, in $L^2(\partial\Omega; d^{n-1}\sigma)$.
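Before describing the general setup, we illustrate these objects in the standard model case of the unit disk (a textbook separation of variables computation, recorded here for orientation only and not used in the sequel): for $\Omega=\{x\in{\mathbb{R}}^2 \,|\, \abs{x}<1\}$ and $z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big)$, the unique solution of the Helmholtz problem $(-\Delta-z)u=0$ on $\Omega$ with Dirichlet boundary datum $e^{ik\theta}$, $k\in{\mathbb{Z}}$, is given in polar coordinates by \begin{equation} u(r,\theta)=\frac{J_{\abs{k}}\big(z^{1/2}r\big)}{J_{\abs{k}}\big(z^{1/2}\big)}\,e^{ik\theta}, \quad 0\leq r<1, \end{equation} with $J_\nu(\cdot)$ the Bessel functions of the first kind, and hence the Dirichlet-to-Neumann map $M^D_{0,\Omega}(z)$ introduced in \eqref{3.20} below acts diagonally in the orthogonal basis $\big\{e^{ik\theta}\big\}_{k\in{\mathbb{Z}}}$ of $L^2({\partial\Omega};d\sigma)$, \begin{equation} M^D_{0,\Omega}(z)\,e^{ik\theta} = -z^{1/2}\,\frac{J_{\abs{k}}'\big(z^{1/2}\big)}{J_{\abs{k}}\big(z^{1/2}\big)}\,e^{ik\theta}, \quad k\in{\mathbb{Z}} \end{equation} (both right-hand sides are independent of the chosen branch of $z^{1/2}$). In particular, the poles of $M^D_{0,\Omega}(\cdot)$ occur precisely at the Dirichlet eigenvalues of the disk, that is, at the squares of the zeros of the $J_{\abs{k}}$'s, in the harmonic case $z=0$ one recovers $M^D_{0,\Omega}(0)e^{ik\theta}=-\abs{k}\,e^{ik\theta}$, and the corresponding Neumann-to-Dirichlet map is then obtained from relation \eqref{3.28} below.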
Denote by \begin{equation} \widetilde\gamma_N : \big\{u\in H^1(\Omega) \,\big|\, \Delta u \in \big(H^1(\Omega)\big)^*\big\} \to H^{-1/2}({\partial\Omega}) \label{3.0} \end{equation} a weak Neumann trace operator defined by \begin{align} \label{3.1a} \langle\widetilde\gamma_N u, \phi\rangle = \int_\Omega d^n x \, \nabla u(x)\cdot\nabla \Phi(x) + \langle\Delta u, \Phi\rangle \end{align} for all $\phi\in H^{1/2}({\partial\Omega})$ and $\Phi\in H^1(\Omega)$ such that $\gamma_D\Phi = \phi$. We note that this definition is independent of the particular extension $\Phi$ of $\phi$, and that $\widetilde\gamma_N$ is a bounded extension of the Neumann trace operator $\gamma_N$ defined in \eqref{2.3}. For more details we refer to equations \eqref{A.11}--\eqref{A.16}. We start with the Helmholtz Dirichlet and Neumann boundary value problems: \begin{theorem} \label{t3.1} Suppose $\Omega$ is an open Lipschitz domain with a compact nonempty boundary ${\partial\Omega}$. Then for every $f \in H^1({\partial\Omega})$ and $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^D\big)$ the following Dirichlet boundary value problem, \begin{align} \label{3.1} \begin{cases} (-\Delta - z)u_0^D = 0 \text{ on }\, \Omega, \quad u_0^D \in H^{3/2}(\Omega), \\ \gamma_D u_0^D = f \text{ on }\, {\partial\Omega}, \end{cases} \end{align} has a unique solution $u_0^D$ satisfying $\widetilde\gamma_N u_0^D \in L^2(\dOm;d^{n-1}\si)$. Moreover, there exists a constant $C^D=C^D(\Omega,z)>0$ such that \begin{equation} \|u_0^D\|_{H^{3/2}(\Omega)} \leq C^D \|f\|_{H^1(\partial\Omega)}. \label{3.3a} \end{equation} Similarly, for every $g\in L^2(\dOm;d^{n-1}\si)$ and $z\in{\mathbb{C}}\backslash\sigma\big(H_{0,\Omega}^N\big)$ the following Neumann boundary value problem, \begin{align} \label{3.2} \begin{cases} (-\Delta - z)u_0^N = 0 \text{ on }\,\Omega,\quad u_0^N \in H^{3/2}(\Omega), \\ \widetilde\gamma_N u_0^N = g\text{ on }\,{\partial\Omega}, \end{cases} \end{align} has a unique solution $u_0^N$. Moreover, there exists a constant $C^N=C^N(\Omega,z)>0$ such that \begin{equation} \|u_0^N\|_{H^{3/2}(\Omega)} \leq C^N \|g\|_{L^2(\dOm;d^{n-1}\si)}. \label{3.4a} \end{equation} In addition, \eqref{3.1}--\eqref{3.4a} imply that the following maps are bounded \begin{align} \big[\gamma_N\big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* &: H^1({\partial\Omega}) \to H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.4b} \\ \big[\gamma_D \big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* &: L^2(\dOm;d^{n-1}\si) \to H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.4c} \end{align} Finally, the solutions $u_0^D$ and $u_0^N$ are given by the formulas \begin{align} u_0^D (z) &= -\big(\gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}\big)^*f, \label{3.3} \\ u_0^N (z) &= \big(\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}\big)^*g.
\label{3.4} \end{align} \end{theorem} \begin{proof} It follows from Theorem 9.3 in \cite{Mi96} that the boundary value problems, \begin{align} &\begin{cases} \label{3.5} (\Delta + z)u_0^D = 0 \text{ on } \Omega, \quad {\mathcal N}(\nabla u_0^D)\in L^2(\dOm;d^{n-1}\si), \\ \gamma_D u_0^D = f\in H^1({\partial\Omega}) \text{ on } {\partial\Omega} \end{cases} \intertext{and} &\begin{cases} \label{3.6} (\Delta + z)u_0^N = 0 \text{ on } \Omega, \quad {\mathcal N}(\nabla u_0^N)\in L^2(\dOm;d^{n-1}\si), \\ \widetilde\gamma_N u_0^N = g\in L^2(\dOm;d^{n-1}\si) \text{ on } {\partial\Omega}, \end{cases} \end{align} have unique solutions for all $z\in{\mathbb{C}}\backslash\sigma\big(H_{0,\Omega}^D\big)$ and $z\in{\mathbb{C}}\backslash\sigma\big(H_{0,\Omega}^N\big)$, respectively, satisfying natural estimates. Here ${\mathcal N}(\cdot)$ denotes the non-tangential maximal function (cf.\ \cite{JK95}, \cite{Mi96}) \begin{equation} ({\mathcal N} w)(x) = \sup_{y\in\Gamma(x)}|w(y)|, \quad x\in\partial\Omega, \end{equation} where $w$ is a locally bounded function and $\Gamma(x)$ is a nontangential approach region with vertex at $x$, that is, for some fixed constant $C>1$ one has \begin{equation} \Gamma(x)=\{ y\in\Omega \;|\; |x-y| < C \, {\rm dist}(y,\partial\Omega) \}. \end{equation} In the case of a bounded domain $\Omega$, it follows from Corollary 5.7 in \cite{JK95} that for any harmonic function $v$ in $\Omega$, \begin{align} {\mathcal N}(\nabla v)\in L^2(\dOm;d^{n-1}\si) \, \text{ if and only if } \, v\in H^{3/2}(\Omega), \label{3.7} \end{align} accompanied by natural estimates. For any solution $u$ of the Helmholtz equation $(\Delta+z)u=0$ on a bounded domain $\Omega$, one can introduce the harmonic function \begin{equation} v(x)=u(x)+z\int_\Omega d^n y\, E_n(x-y) u(y), \quad x\in\Omega, \end{equation} such that ${\mathcal N}(\nabla u)\in L^2(\dOm;d^{n-1}\si)$ if and only if ${\mathcal N}(\nabla v)\in L^2(\dOm;d^{n-1}\si)$, and $u\in H^{3/2}(\Omega)$ if and only if $v\in H^{3/2}(\Omega)$. (Again, natural estimates are valid in each case.) Here $E_n$ denotes the fundamental solution of the Laplace equation in ${\mathbb{R}}^n$, $n\in{\mathbb{N}}$, $n\geq 2$, \begin{equation} E_n(x) =\begin{cases} \frac{1}{2\pi} \ln(|x|), & n=2, \\ \frac{1}{(2-n)\omega_{n-1}}|x|^{2-n}, & n\geq 3, \end{cases}, \quad x\in{\mathbb{R}}^n\backslash\{0\}, \end{equation} with $\omega_{n-1}$ denoting the area of the unit sphere in ${\mathbb{R}}^n$. The equivalence in \eqref{3.7} extends from harmonic functions to all functions $u$ satisfying the Helmholtz equation, $(\Delta+z)u=0$ on a bounded domain $\Omega$, \begin{align} {\mathcal N}(\nabla u)\in L^2(\dOm;d^{n-1}\si) \, \text{ if and only if } \, u\in H^{3/2}(\Omega). \label{3.8} \end{align} Thus, in the case of a bounded domain $\Omega$, \eqref{3.1} and \eqref{3.2} follow from \eqref{3.5}, \eqref{3.6}, and \eqref{3.8}. Moreover, one has the chain of estimates \begin{equation} \|u^D_0\|_{H^{3/2}(\Omega)} \leq C_1 \big[\big\|{\mathcal N}\big(\nabla u^D_0\big)\big\|_{L^2(\dOm;d^{n-1}\si)} + \|u^D_0\|_{L^2(\Omega; d^n x)}\big] \leq C_2 \|f\|_{H^1({\partial\Omega})} \end{equation} for some constants $C_k>0$, $k=1,2$. In the case of an unbounded domain $\Omega$, one first obtains \eqref{3.8} for $\Omega\cap B$, where $B$ is a sufficiently large ball containing ${\partial\Omega}$.
Then, since $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^D\big) = {\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^N\big) = {\mathbb{C}}\backslash[0,\infty)$ (because now $\Omega$ contains the exterior of a ball in ${\mathbb{R}}^n$), one exploits the exponential decay of solutions of the Helmholtz equation to extend \eqref{3.8} from $\Omega\cap B$ to $\Omega$. This, together with \eqref{3.5} and \eqref{3.6}, yields \eqref{3.1} and \eqref{3.2}. Next, we turn to the proof of \eqref{3.3} and \eqref{3.4}. We note that by Lemma \ref{l2.4}, \begin{align} \gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}, \gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \end{align} and hence \begin{align} \big(\gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}\big)^*, \big(\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}\big)^* \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si),L^2(\Om;d^nx)\big). \label{3.21a} \end{align} Then, denoting by $u_0^D$ and $u_0^N$ the unique solutions of \eqref{3.1} and \eqref{3.2}, respectively, and using Green's formula, one computes \begin{align} \big(u_0^D,v\big)_{L^2(\Om;d^nx)} &= \big(u_0^D,(-\Delta-\overline{z})\big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\Om;d^nx)} \notag \\ &= \big((-\Delta-z)u_0^D,\big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\Om;d^nx)} \notag \\ &\quad + \big(\widetilde\gamma_N u_0^D, \gamma_D \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &\quad - \big(\gamma_D u_0^D, \gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &= -\big(f, \gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &= -\big(\big(\gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}\big)^*f,v\big)_{L^2(\Om;d^nx)} \end{align} and \begin{align} \big(u_0^N,v\big)_{L^2(\Om;d^nx)} &= \big(u_0^N,(-\Delta-\overline{z})\big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\Om;d^nx)} \notag \\ &= \big((-\Delta-z)u_0^N,\big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\Om;d^nx)} \notag \\ &\quad + \big(\widetilde\gamma_N u_0^N, \gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &\quad - \big(\gamma_D u_0^N, \gamma_N \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &= \big(g, \gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}v\big)_{L^2(\dOm;d^{n-1}\si)} \notag \\ &= \big(\big(\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_\Omega\big)^{-1}\big)^*g,v\big)_{L^2(\Om;d^nx)} \end{align} for any $v\in L^2(\Om;d^nx)$. This proves \eqref{3.3} and \eqref{3.4} with the operators involved understood in the sense of \eqref{3.21a}. Granted \eqref{3.3a} and \eqref{3.4a}, one finally obtains \eqref{3.4b} and \eqref{3.4c}. \end{proof} We temporarily strengthen our hypothesis on $V$ and introduce the following assumption: \begin{hypothesis} \label{h3.2} Suppose the set $\Omega$ satisfies Hypothesis \ref{h2.1} and assume that $V\in L^p(\Omega;d^nx)$ for some $p>2$ if $n=2,3$ and $p\geq 2n/3$ if $n\geq4$. \end{hypothesis} By employing a perturbative approach, we now extend Theorem \ref{t3.1} in connection with the Helmholtz differential expression $-\Delta - z$ on $\Omega$ to the case of a Schr\"odinger differential expression $-\Delta + V - z$ on $\Omega$. \begin{theorem} \label{t3.3} Assume Hypothesis \ref{h3.2}.
Then for every $f \in H^1({\partial\Omega})$ and $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big)$ the following Dirichlet boundary value problem, \begin{align} \label{3.9} \begin{cases} (-\Delta + V - z)u^D = 0 \text{ on }\, \Omega, \quad u^D \in H^{3/2}(\Omega), \\ \gamma_D u^D = f \text{ on }\, {\partial\Omega}, \end{cases} \end{align} has a unique solution $u^D$ satisfying $\widetilde\gamma_N u^D \in L^2(\dOm;d^{n-1}\si)$. Moreover, there exists a constant $C^D=C^D(\Omega,z)>0$ such that \begin{equation} \|u^D\|_{H^{3/2}(\Omega)} \leq C^D \|f\|_{H^1(\partial\Omega)}. \label{3.9a} \end{equation} Similarly, for every $g\in L^2(\dOm;d^{n-1}\si)$ and $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big)$ the following Neumann boundary value problem, \begin{align} \label{3.10} \begin{cases} (-\Delta + V - z)u^N = 0 \text{ on }\, \Omega,\quad u^N \in H^{3/2}(\Omega), \\ \widetilde\gamma_N u^N = g\text{ on }\, {\partial\Omega}, \end{cases} \end{align} has a unique solution $u^N$. Moreover, there exists a constant $C^N=C^N(\Omega,z)>0$ such that \begin{equation} \|u^N\|_{H^{3/2}(\Omega)} \leq C^N \|g\|_{L^2(\dOm;d^{n-1}\si)}. \label{3.10a} \end{equation} In addition, \eqref{3.9}--\eqref{3.10a} imply that the following maps are bounded \begin{align} \big[\gamma_N\big(\big(H^D_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* &: H^1({\partial\Omega}) \to H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{\Omega}\big), \label{3.10b} \\ \big[\gamma_D \big(\big(H^N_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* &: L^2(\dOm;d^{n-1}\si) \to H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{\Omega}\big). \label{3.10c} \end{align} Finally, the solutions $u^D$ and $u^N$ are given by the formulas \begin{align} u^D (z) &= -\big[\gamma_N \big(\big(H_{\Omega}^D-zI_\Omega\big)^{-1}\big)^*\big]^*f, \label{3.11} \\ u^N (z) &= \big[\gamma_D \big(\big(H_{\Omega}^N-zI_\Omega\big)^{-1}\big)^*\big]^*g. \label{3.12} \end{align} \end{theorem} \begin{proof} We temporarily assume that $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big)$ in the case of the Dirichlet problem and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{\Omega}^N\big)\big)$ in the context of the Neumann problem. Uniqueness of solutions follows from the fact that $z\notin\sigma(H_\Omega^D)$ and $z\notin\sigma(H_\Omega^N)$, respectively. Next, we will show that the functions \begin{align} u^D (z) &= u_0^D (z) - \big(H_\Omega^D-zI_\Omega\big)^{-1} V u_0^D (z), \label{3.13} \\ u^N (z)&= u_0^N (z) - \big(H_\Omega^N-zI_\Omega\big)^{-1} V u_0^N (z), \label{3.14} \end{align} with $u_0^D, u_0^N$ given by Theorem \ref{t3.1}, satisfy \eqref{3.11} and \eqref{3.12}, respectively. Indeed, it follows from Theorem \ref{t3.1} that $u_0^D,u_0^N\in H^{3/2}(\Omega)$ and $\widetilde\gamma_N u_0^D \in L^2(\dOm;d^{n-1}\si)$. Using the Sobolev embedding theorem \begin{align*} H^{3/2}(\Omega) \hookrightarrow L^q(\Omega;d^nx) \text{ for all } q\geq2 \text{ if } n=2,3 \text{ and } 2\leq q \leq 2n/(n-3) \text{ if } n\geq4, \end{align*} and the fact that $V\in L^p(\Omega;d^nx)$, $p>2$ if $n=2,3$ and $p\geq 2n/3$ if $n\geq4$, one concludes that $Vu_0^D, Vu_0^N\in L^2(\Om;d^nx)$, and hence \eqref{3.13} and \eqref{3.14} are well-defined.
Moreover, it follows from Lemma \ref{l2.3} that $V\big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}$, $V\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\in{\mathcal B}_p\big(L^2(\Om;d^nx)\big)$, and hence \begin{align} \big[I_{\Omega}+V\big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}\big]^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big), \label{3.15} \\ \big[I_{\Omega}+V\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big]^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{\Omega}^N\big)\big), \label{3.16} \end{align} by applying Theorem \ref{tB.3}. Thus, by \eqref{2.4} and \eqref{2.5}, \begin{align} \big(H_\Omega^D-zI_\Omega\big)^{-1} V u_0^D &= \big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}\big[I_{\Omega}+V\big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}\big]^{-1}Vu_0^D \in H^2(\Omega), \\ \big(H_\Omega^N-zI_\Omega\big)^{-1} V u_0^N &= \big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big[I_{\Omega}+V\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big]^{-1}Vu_0^N \in H^2(\Omega), \end{align} and hence $u^D,u^N\in H^{3/2}(\Omega)$ and $\widetilde\gamma_N u^D \in L^2(\dOm;d^{n-1}\si)$. Moreover, \begin{align} (-\Delta+V-z)u^D &= (-\Delta-z)u_0^D + Vu_0^D - (-\Delta+V-z)\big(H_{\Omega}^D-zI_\Omega\big)^{-1}Vu_0^D \notag \\ &= Vu_0^D - I_\Omega Vu_0^D = 0, \\ (-\Delta+V-z)u^N &= (-\Delta-z)u_0^N + Vu_0^N - (-\Delta+V-z)\big(H_{\Omega}^N-zI_\Omega\big)^{-1}Vu_0^N \notag \\ &= Vu_0^N - I_\Omega Vu_0^N = 0, \end{align} and by \eqref{2.4}, \eqref{2.5} and \eqref{3.15}, \eqref{3.16} one also obtains, \begin{align} \gamma_D u^D &= \gamma_D u_0^D - \gamma_D\big(H_{\Omega}^D-zI_\Omega\big)^{-1}Vu_0^D \notag \\&= f - \gamma_D \big(H_{0,\Omega}^D-zI_\Omega\big)^{-1} \big[I_{\Omega}+V\big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}\big]^{-1} Vu_0^D=f, \\ \widetilde\gamma_N u^N &= \widetilde\gamma_N u_0^N - \widetilde\gamma_N\big(H_{\Omega}^N-zI_\Omega\big)^{-1}Vu_0^N \notag \\&= g - \gamma_N \big(H_{0,\Omega}^N-zI_\Omega\big)^{-1} \big[I_{\Omega}+V\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big]^{-1} Vu_0^N=g. \end{align} Finally, \eqref{3.11} and \eqref{3.12} follow from \eqref{3.3}, \eqref{3.4}, \eqref{3.13}, \eqref{3.14}, and the resolvent identity, \begin{align} u^D (z) &= \big[I_\Omega - \big(H_\Omega^D-zI_\Omega\big)^{-1} V\big] \big[-\gamma_N \big(\big(H_{0,\Omega}^D-zI_\Omega\big)^{-1}\big)^*\big]^*f \notag \\ &= -\big[\gamma_N \big(\big(H_{0,\Omega}^D- zI_\Omega\big)^{-1}\big)^* \big[I_\Omega - \big(H_\Omega^D-zI_\Omega\big)^{-1} V\big]^*\big]^*f \notag \\ &= -\big[\gamma_N \big(\big(H_\Omega^D-zI_\Omega\big)^{-1}\big)^*\big]^*f, \\[1mm] u^N (z) &= \big[I_\Omega - \big(H_\Omega^N-zI_\Omega\big)^{-1} V\big] \big[\gamma_D \big(\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big)^*\big]^*g \notag \\ &= \big[\gamma_D \big(\big(H_{0,\Omega}^N-zI_\Omega\big)^{-1}\big)^* \big[I_\Omega - \big(H_\Omega^N-zI_\Omega\big)^{-1} V\big]^*\big]^*g \notag \\ &= \big[\gamma_D \big(\big(H_\Omega^N-zI_\Omega\big)^{-1}\big)^*\big]^*g. \end{align} Analytic continuation with respect to $z$ then permits one to remove the additional condition $z \notin \sigma\big(H_{0,\Omega}^D\big)$ in the case of the Dirichlet problem, and the additional condition $z \notin \sigma\big(H_{0,\Omega}^N\big)$ in the context of the Neumann problem. 
\end{proof} Assuming Hypothesis \ref{h2.1}, we now introduce the Dirichlet-to-Neumann map $M_{0,\Omega}^{D}(z)$ associated with $(-\Delta-z)$ on $\Omega$, as follows, \begin{align} M_{0,\Omega}^{D}(z) \colon \begin{cases} H^1({\partial\Omega}) \to L^2(\dOm;d^{n-1}\si), \\ \hspace*{10mm} f \mapsto -\widetilde\gamma_N u_0^D, \end{cases} \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^D\big), \label{3.20} \end{align} where $u_0^D$ is the unique solution of \begin{align} (-\Delta-z)u_0^D = 0 \,\text{ on }\Omega, \quad u_0^D\in H^{3/2}(\Omega), \quad \gamma_D u_0^D = f \,\text{ on }{\partial\Omega}. \end{align} Similarly, assuming Hypothesis \ref{h3.2}, we introduce the Dirichlet-to-Neumann map $M_\Omega^{D}(z)$, associated with $(-\Delta+V-z)$ on $\Omega$, by \begin{align} M_\Omega^{D}(z) \colon \begin{cases} H^1({\partial\Omega}) \to L^2(\dOm;d^{n-1}\si), \\ \hspace*{10mm} f \mapsto -\widetilde\gamma_N u^D, \end{cases} \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big), \label{3.22} \end{align} where $u^D$ is the unique solution of \begin{align} (-\Delta+V-z)u^D = 0 \,\text{ on }\Omega, \quad u^D \in H^{3/2}(\Omega), \quad \gamma_D u^D= f \,\text{ on }{\partial\Omega}. \end{align} By Theorems \ref{t3.1} and \ref{t3.3} one obtains \begin{equation} M_{0,\Omega}^{D}(z), M_\Omega^{D}(z) \in {\mathcal B}\big(H^1(\partial\Omega), L^2(\dOm;d^{n-1}\si) \big). \end{equation} In addition, assuming Hypothesis \ref{h2.1}, we introduce the Neumann-to-Dirichlet map $M_{0,\Omega}^{N}(z)$ associated with $(-\Delta-z)$ on $\Omega$, as follows, \begin{align} M_{0,\Omega}^{N}(z) \colon \begin{cases} L^2(\dOm;d^{n-1}\si) \to H^1({\partial\Omega}), \\ \hspace*{20.5mm} g \mapsto \gamma_D u_0^N, \end{cases} \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^N\big), \label{3.24} \end{align} where $u_0^N$ is the unique solution of \begin{align} (-\Delta-z)u_0^N = 0 \,\text{ on }\Omega, \quad u_0^N\in H^{3/2}(\Omega), \quad \widetilde\gamma_N u_0^N = g \,\text{ on }{\partial\Omega}. \end{align} Similarly, assuming Hypothesis \ref{h3.2}, we introduce the Neumann-to-Dirichlet map $M_\Omega^{N}(z)$ associated with $(-\Delta+V-z)$ on $\Omega$ by \begin{align} M_\Omega^{N}(z) \colon \begin{cases} L^2(\dOm;d^{n-1}\si) \to H^1({\partial\Omega}), \\ \hspace*{20.5mm} g \mapsto \gamma_D u^N, \end{cases} \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big), \label{3.26} \end{align} where $u^N$ is the unique solution of \begin{align} (-\Delta+V-z)u^N = 0 \,\text{ on }\Omega, \quad u^N \in H^{3/2}(\Omega), \quad \widetilde\gamma_N u^N= g \,\text{ on }{\partial\Omega}. \end{align} Again, by Theorems \ref{t3.1} and \ref{t3.3} one obtains \begin{equation} M_{0,\Omega}^{N}(z), M_\Omega^{N}(z) \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si), H^1(\partial\Omega) \big).
\end{equation} Moreover, under the assumption of Hypothesis \ref{h2.1} for $M_{0,\Omega}^D(z)$ and $M_{0,\Omega}^N(z)$, and under the assumption of Hypothesis \ref{h3.2} for $M_{\Omega}^D(z)$ and $M_{\Omega}^N(z)$, one infers the following equalities: \begin{align} M_{0,\Omega}^{N}(z) &= - M_{0,\Omega}^{D}(z)^{-1}, \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{0,\Omega}^N\big)\big), \label{3.28} \\ M_{\Omega}^{N}(z) &= - M_{\Omega}^{D}(z)^{-1}, \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{\Omega}^N\big)\big), \label{3.29} \intertext{and} M^{D}_{0,\Omega}(z) &= \widetilde\gamma_N\big[\gamma_N \big(\big(H^D_{0,\Omega} - zI_\Omega\big)^{-1}\big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^D\big), \label{3.30} \\ M^{D}_{\Omega}(z) &= \widetilde\gamma_N\big[\gamma_N \big(\big(H^D_{\Omega} - zI_\Omega\big)^{-1}\big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big), \label{3.31} \\ M^{N}_{0,\Omega}(z) &= \gamma_D\big[\gamma_D \big(\big(H^N_{0,\Omega} - zI_\Omega\big)^{-1} \big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{0,\Omega}^N\big), \label{3.32} \\ M^{N}_{\Omega}(z) &= \gamma_D\big[\gamma_D \big(\big(H^N_{\Omega} - zI_\Omega\big)^{-1}\big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big). \label{3.33} \end{align} The representations \eqref{3.30}--\eqref{3.33} provide a convenient point of departure for proving the operator-valued Herglotz property of $M^{D}_{\Omega}$ and $M^{N}_{\Omega}$. We will return to this topic in a future paper. Next, we note that the above formulas \eqref{3.30}--\eqref{3.33} may be used as alternative definitions of the Dirichlet-to-Neumann and Neumann-to-Dirichlet maps. In particular, we will next use \eqref{3.31} and \eqref{3.33} to extend the above definition of the operators $M^{D}_{\Omega}(z)$ and $M^{N}_{\Omega}(z)$ to a more general setting. This is done in the following two lemmas. \begin{lemma} \label{l3.4} Assume Hypothesis \ref{h2.6}. Then the following boundedness properties hold: \begin{align} & \gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} \in{\mathcal B}\big(L^2(\Om;d^nx), L^2(\dOm;d^{n-1}\si)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big), \label{3.38a} \\ & \gamma_D \big(H^N_{\Omega}-zI_\Omega\big)^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx), H^1({\partial\Omega})\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big), \label{3.39a} \\ &\big[\gamma_N \big(\big(H^D_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* \in {\mathcal B}\big(H^1({\partial\Omega}), H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big), \label{3.40a} \\ & \big[\gamma_D \big(\big(H^N_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si), H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big). \label{3.41a} \end{align} Moreover, the operators $M^{D}_{\Omega}(z)$ in \eqref{3.31} and $M^{N}_{\Omega}(z)$ in \eqref{3.33} remain well-defined and satisfy \begin{align} & M^{D}_{\Omega}(z) \in {\mathcal B}\big(H^{1}({\partial\Omega}), L^2(\dOm;d^{n-1}\si)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^D\big), \label{3.42a} \\ & M^{N}_{\Omega}(z) \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si), H^{1}({\partial\Omega})\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big). 
\label{3.43a} \end{align} In particular, $M^{N}_{\Omega}(z)$, $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big)$, are compact operators in $L^2(\partial\Omega; d^{n-1}\sigma)$. \end{lemma} \begin{proof} We temporarily assume that $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big)$ in the case of the Dirichlet Laplacian and that $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{\Omega}^N\big)\big)$ in the context of the Neumann Laplacian. Next, let $u,v$ and $\widetilde u, \widetilde v$ denote the following factorizations of the perturbation $V$, \begin{align} V(x) = u(x)v(x),\quad u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{1/2},\quad v(x)=\abs{V(x)}^{1/2}, \label{3.44a} \\ V(x) = \widetilde u(x) \widetilde v(x),\quad \widetilde u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{p/p_1},\quad \widetilde v(x)=\abs{V(x)}^{p/p_2}, \label{3.45a} \end{align} where \begin{align} \label{3.46a} p_1=\begin{cases} 3p/2, & n=2, \\ 4p/3, & n\geq3,\end{cases} \qquad p_2=\begin{cases}3p, &n=2, \\ 4p, & n\geq3.\end{cases} \end{align} We note that Hypothesis \ref{h2.6} and \eqref{3.44a}, \eqref{3.45a} imply \begin{align} \widetilde u \in L^{p_1}(\Omega;d^nx), \; \widetilde v\in L^{p_2}(\Omega;d^nx), \;\text{ and }\; u,v\in L^{2p}(\Omega;d^nx). \label{3.46b} \end{align} It follows from the definition of the operators $H_{\Omega}^D$ and $H_{\Omega}^N$ and, in particular, from \eqref{B.5} that \begin{align} \big(H^D_{\Omega}-zI_\Omega\big)^{-1} &= \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} - \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}v \Big[I_\Omega+\overline{u\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}v}\,\Big]^{-1} u\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \notag \\ &= \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} - \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \Big[I_\Omega+ \overline{\widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v}\,\Big]^{-1} \widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}, \label{3.47a} \\ \big(H^N_{\Omega}-zI_\Omega\big)^{-1} &= \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} - \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}v \Big[I_\Omega+\overline{u\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}v}\,\Big]^{-1} u\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \notag \\&= \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} - \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \Big[I_\Omega+ \overline{\widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v}\,\Big]^{-1} \widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}. \label{3.48a} \end{align} Next, we establish a number of boundedness properties that will imply \eqref{3.38a}--\eqref{3.43a}. First, note that it follows from Hypothesis \ref{h2.6} and \eqref{3.46a} that $p_1=\frac32p>2>2n/3$, $p_2=3p>4$ for $n=2$ and $p_1=\frac43p>2n/3$, $p_2=4p>2n$ for $n\geq3$.
Then, utilizing Lemma \ref{l2.3}, one obtains \begin{align} & \widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.51a} \\ & \widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big), \label{3.52a} \\ & \big(H^D_{0,\Omega}-zI_\Omega\big)^{-\frac{1-\varepsilon}{4}} \widetilde v \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.53a} \\ & \big(H^N_{0,\Omega}-zI_\Omega\big)^{-\frac{1-\varepsilon}{4}} \widetilde v \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big), \label{3.54a} \end{align} and, utilizing Lemma \ref{l2.2} and the inclusion \eqref{incl-xxx}, one obtains for $\varepsilon\in(0,1-2n/p_2)$, \begin{align} & \big(H^D_{0,\Omega}-zI_\Omega\big)^{-\frac{3+\varepsilon}{4}} \in {\mathcal B}\big(L^2(\Om;d^nx),H^{\frac{3+\varepsilon}{2}}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.54b} \\ & \big(H^N_{0,\Omega}-zI_\Omega\big)^{-\frac{3+\varepsilon}{4}} \in {\mathcal B}\big(L^2(\Om;d^nx),H^{\frac{3+\varepsilon}{2}}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.54c} \end{align} In addition, \begin{align} & \big(H^D_{0,\Omega}-zI_\Omega\big)^{-\frac{3+\varepsilon}{4}} : L^2(\Om;d^nx) \to H^{\frac{3+\varepsilon}{2}}(\Omega) \hookrightarrow H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.55a} \\ & \big(H^N_{0,\Omega}-zI_\Omega\big)^{-\frac{3+\varepsilon}{4}} : L^2(\Om;d^nx) \to H^{\frac{3+\varepsilon}{2}}(\Omega) \hookrightarrow H^{3/2}(\Omega), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.56a} \end{align} In particular, one concludes from \eqref{3.53a}--\eqref{3.56a} that \begin{align} & \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \in {\mathcal B}\big( L^2(\Om;d^nx),H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.57a} \\ & \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \in {\mathcal B}\big(L^2(\Om;d^nx),H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.58a} \end{align} In addition, it follows from \eqref{3.53a}--\eqref{3.56a}, the definition of $\gamma_N$ \eqref{2.3}, inclusion \eqref{incl-xxx}, and Lemma \ref{lA.6} that \begin{align} & \gamma_N \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.59a} \\ & \gamma_D \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\widetilde v \in {\mathcal B}\big(L^2(\Om;d^nx),H^1({\partial\Omega})\big), \quad z\in{\mathbb{C}}\backslash\sigma(H^N_{0,\Omega}\big). 
\label{3.60a} \end{align} Next, it follows from Theorem \ref{t3.1} that \begin{align} & \big[\gamma_N\big(H^D_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big]^* \in {\mathcal B}\big(H^1({\partial\Omega}),H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.61a} \\ & \big[\gamma_D \big(H^N_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big]^* \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si),H^{3/2}(\Omega)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.62a} \end{align} Then, employing the Sobolev embedding theorem \begin{align} H^{3/2}(\Omega)\hookrightarrow L^q(\Omega;d^nx) \end{align} with $q$ satisfying $1/q=(1/2)-(1/p_1)>(1/2)-3/(2n)$, $n\geq2$, and the fact that $\widetilde u \in L^{p_1}(\Omega;d^nx)$, one obtains the following boundedness properties from \eqref{3.61a} and \eqref{3.62a}, \begin{align} & \widetilde u\big[\gamma_N \big(H^D_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big]^* \in {\mathcal B}\big(H^1({\partial\Omega}),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^D_{0,\Omega}\big), \label{3.65a} \\ & \widetilde u\big[\gamma_D \big(H^N_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big]^* \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H^N_{0,\Omega}\big). \label{3.66a} \end{align} Moreover, it follows from Theorem \ref{tB.3} that the operators $\big[I_\Omega+ \widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v\big]$ and $\big[I_\Omega+ \widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v\big]$ are boundedly invertible on $L^2(\Om;d^nx)$ for $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big)$ and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{\Omega}^N\big)\big)$, respectively, that is, the following operators are bounded, \begin{align} & \big[I_\Omega+ \widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v\big]^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big), \label{3.67a} \\ & \big[I_\Omega+ \widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v\big]^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx),L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{\Omega}^N\big)\big). \label{3.68a} \end{align} Finally, combining \eqref{3.47a}--\eqref{3.68a}, one obtains the assertions of Lemma \ref{l3.4} as follows: \eqref{3.38a} follows from \eqref{3.47a}, \eqref{3.51a}, \eqref{3.59a}, \eqref{3.67a}; \eqref{3.39a} follows from \eqref{3.48a}, \eqref{3.52a}, \eqref{3.60a}, \eqref{3.68a}; \eqref{3.40a} follows from \eqref{3.47a}, \eqref{3.57a}, \eqref{3.65a}, \eqref{3.67a}; \eqref{3.41a} follows from \eqref{3.48a}, \eqref{3.58a}, \eqref{3.66a}, \eqref{3.68a}. Thus, by \eqref{3.20}, \eqref{3.59a}, \eqref{3.65a}, and \eqref{3.67a}, we may introduce the operator \begin{equation} M^D_{\Omega}(z) = M^D_{0,\Omega}(z) - \gamma_N \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \Big[I_\Omega+ \overline{\widetilde u \big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v}\,\Big]^{-1} \widetilde u \big(\gamma_N\big(H^D_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big)^*, \label{3.49a} \end{equation} and observe that it satisfies \eqref{3.42a}. In addition, \eqref{3.47a} shows that \eqref{3.31} remains in effect under Hypothesis \ref{h2.6}.
Similarly, by \eqref{3.24}, \eqref{3.60a}, \eqref{3.66a}, and \eqref{3.68a}, we may introduce the operator \begin{equation} M^N_{\Omega}(z) = M^N_{0,\Omega}(z) - \gamma_D \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v \Big[I_\Omega+ \overline{\widetilde u \big(H^N_{0,\Omega}-zI_\Omega\big)^{-1} \widetilde v}\,\Big]^{-1} \widetilde u \big(\gamma_D \big(H^N_{0,\Omega}-\overline{z}I_\Omega\big)^{-1}\big)^*, \label{3.50a} \end{equation} and observe that it satisfies \eqref{3.43a}. In addition, \eqref{3.48a} shows that \eqref{3.33} remains in effect under Hypothesis \ref{h2.6}. Moreover, since $H^1(\partial\Omega)$ embeds compactly into $L^2(\partial\Omega; d^{n-1}\sigma)$ (cf.\ \eqref{EQ1} and \cite[Proposition~2.4]{MM07}), $M^{N}_{\Omega}(z)$, $z\in{\mathbb{C}}\big\backslash\sigma\big(H_{\Omega}^N\big)$, are compact operators in $L^2(\partial\Omega; d^{n-1}\sigma)$. Finally, formulas \eqref{3.31} and \eqref{3.33} together with analytic continuation with respect to $z$ then permit one to remove the additional restrictions $z\notin\sigma\big(H_{0,\Omega}^D\big)$ and $z\notin\sigma\big(H_{0,\Omega}^N\big)$, respectively. \end{proof} Actually, one can go a step further and allow an additional perturbation $V_1\in L^\infty(\Omega;d^nx)$ of $H^D_{\Omega}$ and $H^N_{\Omega}$, \begin{align} H^D_{1,\Omega} &= H^D_{\Omega} + V_1, \quad \text{\rm{dom}}(H^D_{1,\Omega})=\text{\rm{dom}}(H^D_{\Omega}), \label{3.70a} \\ H^N_{1,\Omega} &=H^N_{\Omega} + V_1, \quad \text{\rm{dom}}(H^N_{1,\Omega})=\text{\rm{dom}}(H^N_{\Omega}). \label{3.70b} \end{align} Defining the Dirichlet-to-Neumann and Neumann-to-Dirichlet operators $M^{D}_{1,\Omega}$ and $M^{N}_{1,\Omega}$ in a fashion analogous to \eqref{3.31} and \eqref{3.33}, \begin{align} &M^{D}_{1,\Omega}(z) = \widetilde\gamma_N\big[\gamma_N \big(\big(H^D_{1,\Omega} - zI_\Omega\big)^{-1}\big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{1,\Omega}^D\big), \label{3.71a} \\ & M^{N}_{1,\Omega}(z) = \gamma_D\big[\gamma_D \big(\big(H^N_{1,\Omega} - zI_\Omega\big)^{-1}\big)^*\big]^*, \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{1,\Omega}^N\big), \label{3.72a} \end{align} one can then prove the following result: \begin{lemma} Assume Hypothesis \ref{h2.6} and let $V_1\in L^\infty(\Omega;d^nx)$. Then the operators $M^{D}_{1,\Omega}(z)$ and $M^{N}_{1,\Omega}(z)$ defined by \eqref{3.71a} and \eqref{3.72a} satisfy the following boundedness properties, \begin{align} M^{D}_{1,\Omega}(z) \in {\mathcal B}\big(H^{1}({\partial\Omega}), L^2(\dOm;d^{n-1}\si)\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{1,\Omega}^D\big), \label{3.73a} \\ M^{N}_{1,\Omega}(z) \in {\mathcal B}\big(L^2(\dOm;d^{n-1}\si), H^{1}({\partial\Omega})\big), \quad z\in{\mathbb{C}}\big\backslash\sigma\big(H_{1,\Omega}^N\big). \label{3.74a} \end{align} \end{lemma} \begin{proof} We temporarily assume that $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{1,\Omega}^D\big)\big)$ in the case of $M^{D}_{1,\Omega}$ and that $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^N\big)\cup\sigma\big(H_{1,\Omega}^N\big)\big)$ in the context of $M^{N}_{1,\Omega}$.
Next, using resolvent identities and \eqref{3.70a}, \eqref{3.70b}, one computes \begin{align} \big(H^D_{1,\Omega}-zI_\Omega\big)^{-1} &= \big(H^D_{\Omega}-zI_\Omega\big)^{-1} - \big(H^D_{\Omega}-zI_\Omega\big)^{-1} \Big[I_\Omega+ V_1 \big(H^D_{\Omega}-zI_\Omega\big)^{-1}\,\Big]^{-1} V_1 \big(H^D_{\Omega}-zI_\Omega\big)^{-1}, \label{3.75a} \\ \big(H^N_{1,\Omega}-zI_\Omega\big)^{-1} &= \big(H^N_{\Omega}-zI_\Omega\big)^{-1} - \big(H^N_{\Omega}-zI_\Omega\big)^{-1} \Big[I_\Omega+ V_1 \big(H^N_{\Omega}-zI_\Omega\big)^{-1}\,\Big]^{-1} V_1 \big(H^N_{\Omega}-zI_\Omega\big)^{-1}, \label{3.76a} \end{align} and hence, \begin{align} M^D_{1,\Omega}(z) &= M^D_{\Omega}(z) - \gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} \Big[I_\Omega+ V_1 \big(H^D_{\Omega}-zI_\Omega\big)^{-1}\,\Big]^{-1} V_1 \big[\gamma_N\big(\big(H^D_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*, \label{3.77a} \\ M^N_{1,\Omega}(z) &= M^N_{\Omega}(z) - \gamma_D \big(H^N_{\Omega}-zI_\Omega\big)^{-1} \Big[I_\Omega+ V_1 \big(H^N_{\Omega}-zI_\Omega\big)^{-1}\,\Big]^{-1} V_1 \big[\gamma_D\big(\big(H^N_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*. \label{3.78a} \end{align} The assertions \eqref{3.73a} and \eqref{3.74a} now follow from \eqref{3.38a}--\eqref{3.43a} and the fact that by Theorem \ref{tB.3}, the operators $\big[I_\Omega+ V_1 \big(H^D_{\Omega}-zI_\Omega\big)^{-1}\big]$ and $\big[I_\Omega+V_1 \big(H^N_{\Omega}-zI_\Omega\big)^{-1}\big]$ are boundedly invertible on $L^2(\Om;d^nx)$ for all $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{1,\Omega}^D\big)\big)$ and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^N\big)\cup\sigma\big(H_{1,\Omega}^N\big)\big)$, respectively. Formulas \eqref{3.71a} and \eqref{3.72a} together with analytic continuation with respect to $z$ then permit one to remove the additional restrictions $z\notin\sigma\big(H_{\Omega}^D\big)$ and $z\notin\sigma\big(H_{\Omega}^N\big)$, respectively. \end{proof} Weyl--Titchmarsh operators, in a spirit close to ours, have recently been discussed by Amrein and Pearson \cite{AP04} in connection with the interior and exterior of a ball in ${\mathbb{R}}^3$ and potentials $V\in L^\infty({\mathbb{R}}^3;d^3x)$. For additional literature on Weyl--Titchmarsh operators, relevant in the context of boundary value spaces (boundary triples, etc.), we refer, for instance, to \cite{ABMN05}, \cite{BL06}, \cite{BMN06}, \cite{BMN00}, \cite{BMN02}, \cite{BM04}, \cite{DM91}, \cite{DM95}, \cite{GKMT01}, \cite[Ch.\ 3]{GG91}, \cite{MM06}, \cite{Ma04}, \cite{MPP07}, \cite{Pa87}, \cite{Pa02}. For applications of the Dirichlet-to-Neumann map to Borg--Levinson-type inverse spectral problems we refer to \cite{Ch90}, \cite{NSU88}, \cite{PS02}, \cite{Sa05}, \cite{SU86}, \cite{SU87} (see also \cite{KLW05} for an alternative approach based on the boundary control method). The inverse problem of detecting the number of connected components (i.e., the number of holes) in $\partial\Omega$ using the high-energy spectral asymptotics of the Dirichlet-to-Neumann map is studied in \cite{HL01}. Next, we prove the following auxiliary result, which will play a crucial role in Theorem \ref{t4.2}, the principal result of this paper. \begin{lemma} \label{l3.5} Assume Hypothesis \ref{h2.6}.
Then the following identities hold, \begin{align} M_{0,\Omega}^D(z) - M_\Omega^D(z) &= \overline{\widetilde\gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} V \big[\gamma_N \big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*}, \notag \\ &\hspace*{3.1cm} z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big), \label{3.35} \\ M_\Omega^D(z) M_{0,\Omega}^D(z)^{-1} &= I_{\partial\Omega} - \overline{\widetilde\gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} V \big[\gamma_D \big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*}, \notag \\ &\hspace*{2.45cm} z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big) \cup\sigma\big(H_{0,\Omega}^N\big)\big). \label{3.36} \end{align} \end{lemma} \begin{proof} Let $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big)\big)$. Then \eqref{3.35} follows from \eqref{3.30}, \eqref{3.31}, and the resolvent identity \begin{align} M_{0,\Omega}^D(z) - M_\Omega^D(z) &= \widetilde\gamma_N\big[\gamma_N\big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} - \big(H^D_{\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* \notag \\ &= \overline{\widetilde\gamma_N\big[\gamma_N \big( \big(H^D_{\Omega}-zI_\Omega\big)^{-1}V\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1} \big)^*\big]^*} \\ &= \overline{\widetilde\gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} V \big[\gamma_N \big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*}. \notag \end{align} Next, let $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{0,\Omega}^D\big)\cup\sigma\big(H_{\Omega}^D\big) \cup\sigma\big(H_{0,\Omega}^N\big)\big)$. Then it follows from \eqref{3.28}, \eqref{3.32}, and \eqref{3.35} that \begin{align} \label{3.40} M_\Omega^D(z) M_{0,\Omega}^D(z)^{-1} &= I_{\partial\Omega} + \big(M_\Omega^D(z) - M_{0,\Omega}^D(z)\big)M_{0,\Omega}^D(z)^{-1} \notag \\ &= I_{\partial\Omega} + \big(M_{0,\Omega}^D(z) - M_{\Omega}^D(z)\big)M_{0,\Omega}^N(z) \notag \\ &= I_{\partial\Omega} + \overline{\widetilde\gamma_N \big(H^D_{\Omega}-zI_\Omega\big)^{-1} V \big[\gamma_N \big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*} \\ &\quad \times \gamma_D\big[\gamma_D \big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*. \notag \end{align} Let $g\in L^2(\dOm;d^{n-1}\si)$. Then by Theorem \ref{t3.1}, \begin{align} u=\big[\gamma_D\big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*g \label{3.41} \end{align} is the unique solution of \begin{align} (-\Delta-z)u = 0 \,\text{ on }\Omega, \quad u\in H^{3/2}(\Omega), \quad \widetilde\gamma_N u = g \,\text{ on }{\partial\Omega}. \end{align} Setting $f=\gamma_D u \in H^1({\partial\Omega})$ and utilizing Theorem \ref{t3.1} once again, one obtains \begin{align} u &= -\big[\gamma_N \big(H_{0,\Omega}^D-\overline{z}I_\Omega\big)^{-1}\big]^*f \notag \\ &= -\big[\gamma_N \big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* \gamma_D\big[\gamma_D \big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*g. \label{3.43} \end{align} Thus, it follows from \eqref{3.41} and \eqref{3.43} that \begin{align} \big[\gamma_N \big(\big(H^D_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* \gamma_D\big[\gamma_D \big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^* = -\big[\gamma_D\big(\big(H^N_{0,\Omega}-zI_\Omega\big)^{-1}\big)^*\big]^*. \label{3.44} \end{align} Finally, insertion of \eqref{3.44} into \eqref{3.40} yields \eqref{3.36}.
\end{proof} It follows from \eqref{4.24}--\eqref{4.29a} that $\widetilde \gamma_N$ can be replaced by $\gamma_N$ on the right-hand side of \eqref{3.35} and \eqref{3.36}. We note that the right-hand side (and hence the left-hand side) of \eqref{3.36} permits an analytic continuation to $z\in \sigma\big(H_{0,\Omega}^D\big)$ as long as $z\notin \big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{0,\Omega}^N\big)\big)$. \section{A Multi-Dimensional Variant of a Formula due to Jost and Pais} \label{s4} In this section we prove our multi-dimensional variants of the Jost and Pais formula as discussed in the introduction. We start with an elementary comment on determinants which, however, lies at the heart of the matter of our multi-dimensional variant of the one-dimensional Jost and Pais result. Suppose $A \in {\mathcal B}({\mathcal H}_1, {\mathcal H}_2)$, $B \in {\mathcal B}({\mathcal H}_2, {\mathcal H}_1)$ with $A B \in {\mathcal B}_1({\mathcal H}_2)$ and $B A \in {\mathcal B}_1({\mathcal H}_1)$. Then, \begin{equation} \det (I_{{\mathcal H}_2}-AB) = \det (I_{{\mathcal H}_1}-BA). \label{4.0} \end{equation} Equation \eqref{4.0} follows from the fact that all nonzero eigenvalues of $AB$ and $BA$ coincide including their algebraic multiplicities. The latter fact, in turn, can be derived from the formula \begin{equation} A(BA - z I_{{\mathcal H}_1})^{-1} B= I_{{\mathcal H}_2} + z(AB - z I_{{\mathcal H}_2})^{-1}, \quad z \in {\mathbb{C}} \backslash (\sigma(AB)\cup\sigma(BA)) \end{equation} (and its companion with $A$ and $B$ interchanged), as discussed in detail by Deift \cite{De78}. In particular, ${\mathcal H}_1$ and ${\mathcal H}_2$ may have different dimensions; especially, one of them may be infinite-dimensional and the other finite-dimensional, in which case one of the two determinants in \eqref{4.0} reduces to a finite determinant. This situation indeed occurs in the original one-dimensional case studied by Jost and Pais \cite{JP51}, as described in detail in \cite{GM03} and the references therein. In the proof of Theorem \ref{t4.1} below, the role of ${\mathcal H}_1$ and ${\mathcal H}_2$ will be played by $L^2(\Omega; d^n x)$ and $L^2(\partial\Omega;d^{n-1} \sigma)$, respectively. In the context of KdV flows and reflectionless (i.e., generalizations of soliton-type) potentials represented as Fredholm determinants, a reduction of such determinants (in some cases to finite determinants) has also been studied by Kotani \cite{Ko04}, relying on certain connections to stochastic analysis. We start with an auxiliary lemma which is of independent interest in the area of modified Fredholm determinants. \begin{lemma} \label{l4.1} Let ${\mathcal H}$ be a separable, complex Hilbert space, and assume $A,B\in{\mathcal B}_k({\mathcal H})$ for some fixed $k\in{\mathbb{N}}$. Then there exists a polynomial $T_k(\cdot,\cdot)$ in $A$ and $B$ with $T_k(A,B)\in{\mathcal B}_1({\mathcal H})$, such that the following formula holds \begin{align} \label{4.3a} \det{}_k((I_{{\mathcal H}}-A)(I_{{\mathcal H}}-B)) = \det{}_k(I_{{\mathcal H}}-A)\det{}_k(I_{{\mathcal H}}-B)e^{\text{\rm{tr}}(T_k(A,B))}.
\end{align} Moreover, $T_k(\cdot,\cdot)$ is unique up to cyclic permutations of its terms, and an explicit formula for $T_k$ may be derived from the representation \begin{align} \label{4.4a} T_k(A,B) = \sum_{m=k}^{2k-2} P_m(A,B), \end{align} where $P_m(\cdot,\cdot)$, $m=1,\dots,2k-2$, denote homogeneous polynomials in $A$ and $B$ of degree $m$ $($i.e., each term of $P_m(A,B)$ contains precisely the total number $m$ of $A$'s and $B$'s$)$ that one obtains after rearranging the following expression in powers of $t$, \begin{align} \label{4.5a} \sum_{j=1}^{k-1}\frac{1}{j}\big((tA+tB-t^2AB)^j-(tA)^j-(tB)^j\big) = \sum_{m=1}^{2k-2} t^m P_m(A,B), \quad t\in{\mathbb{R}}. \end{align} In particular, computing $T_k(A,B)$ from \eqref{4.4a} and \eqref{4.5a}, and subsequently using cyclic permutations to simplify the resulting expressions, yields the following expressions for $T_k(A,B)$ in \eqref{4.3a} \begin{align} \label{4.6a} T_1(A,B) =& \;0, \notag \\ T_2(A,B) =& - AB, \notag \\ T_3(A,B) =& - A^2B - AB^2 + \frac12 ABAB, \notag \\ T_4(A,B) =& - A^3B - AB^3 -\frac12 ABAB - A^2B^2 + A^2BAB + AB^2AB - \frac13 ABABAB, \\ T_5(A,B) =& - A^4B - AB^4 - A^3B^2 - A^2B^3 - A^2BAB - AB^2AB + A^3BAB + AB^3AB \notag \\ &+ A^2B^2AB + A^2BAB^2 + \frac23ABABAB + \frac12 A^2BA^2B + \frac12 AB^2AB^2 \notag \\ & - A^2BABAB -AB^2ABAB + \frac14 ABABABAB, \, \text{ etc.} \notag \end{align} \end{lemma} \begin{proof} Suppose temporarily that $A,B\in{\mathcal B}_1({\mathcal H})$. Then it follows from \cite[Theorem 9.2]{Si05} that \begin{align} \det{}_k((I_{{\mathcal H}}-A)(I_{{\mathcal H}}-B)) = \det{}_k(I_{{\mathcal H}}-A)\det{}_k(I_{{\mathcal H}}-B)e^{\text{\rm{tr}}(\widetilde T_k(A,B))}, \end{align} where \begin{align} \widetilde T_k(A,B) = \sum_{j=1}^{k-1}\frac{1}{j}\big((A+B-AB)^j-(A)^j-(B)^j\big), \end{align} and hence, by \eqref{4.5a} \begin{align} \label{4.9a} \widetilde T_k(A,B) = \sum_{m=1}^{2k-2} P_m(A,B). \end{align} Since $\text{\rm{tr}}(\cdot)$ is linear and invariant under cyclic permutation of its argument, it remains to show that $T_k(A,B)$ in \eqref{4.4a} and $\widetilde T_k(A,B)$ in \eqref{4.9a} are equal up to cyclic permutations of their terms, that is, to show that the polynomials $P_m(A,B)$ vanish for $m=1,\dots,k-1$ after a finite number of cyclic permutations of their terms. Let $\widetilde P_m(\cdot,\cdot)$, $m\geq1$, denote a sequence of polynomials in $A$ and $B$, obtained after rearranging the following expression in powers of $t\in{\mathbb{C}}$, \begin{align} \begin{split} &\ln((I_{{\mathcal H}}-tA)(I_{{\mathcal H}}-tB))-\ln(I_{{\mathcal H}}-tA)-\ln(I_{{\mathcal H}}-tB) \\ &\quad = \sum_{j=1}^{\infty}\frac{1}{j}\big((tA+tB-t^2AB)^j-(tA)^j-(tB)^j\big) = \sum_{m=1}^{\infty} t^m \widetilde P_m(A,B) \, \text{ for $|t|$ sufficiently small.} \end{split}\label{4.10a} \end{align} Then it follows from \eqref{4.5a} and \eqref{4.10a} that $P_m(A,B)=\widetilde P_m(A,B)$ for $m=1,\dots,k-1$, and hence, it suffices to show that $\widetilde P_m(A,B)$ vanish for $m=1,\dots,k-1$ after a finite number of cyclic permutations of their terms. The latter fact now follows from the Baker--Campbell--Hausdorff (BCH) formula as follows: First, assume $D, E \in {\mathcal B}({\mathcal H})$. Then, \begin{equation} \label{4.12a} e^{t D}e^{t E} = e^{t D+t E+F(t)} \, \text{ for $|t|$ sufficiently small,} \end{equation} where $F(t)$ is given by a norm convergent infinite sum of certain repeated commutators involving $D$ and $E$, as discussed, for instance, in \cite{Su77} (cf.\ also \cite{BC04}).
Explicitly, $F$ is of the form \begin{equation} F(t)=\sum_{\ell=2}^{\infty} t^\ell F_\ell, \quad F_p=\frac{1}{p!}\bigg[\frac{d^p}{dt^p}\ln\bigg(\sum_{j=0}^\infty \sum_{k=0}^\infty \frac{t^{j+k}}{j! k!} D^j E^k \bigg)\bigg]\bigg|_{t=0}, \quad p\in{\mathbb{N}}, \; p\geq 2, \end{equation} where \begin{equation} F_2=\frac{1}{2}[D,E], \; F_3=\frac{1}{6}[F_2,E-D], \; F_4=\frac{1}{12}[[F_2,D],E], \, \text{ etc.} \end{equation} That each $F_\ell$, $\ell\geq2$, is indeed at most a finite sum of commutators follows from a formula derived by Dynkin (cf., e.g., \cite[eqs.\ (1)--(4)]{Bo89}, \cite[eqs.\ (2.5), (2.6), (3.7), (3.8)]{Ot91}). If, in addition, $D,E \in{\mathcal B}_1({\mathcal H})$, the expression for $F(t)$ is actually convergent in the ${\mathcal B}_1({\mathcal H})$-norm for $|t|$ sufficiently small. Thus, $F(t)$ vanishes after a finite number of cyclic permutations of each of its coefficients $F_\ell$. Next, setting $D=\ln(I_{{\mathcal H}}-tA)$, $E=\ln(I_{{\mathcal H}}-tB)$ and taking the natural logarithm in \eqref{4.12a} then implies \begin{equation} \ln((I_{{\mathcal H}}-tA)(I_{{\mathcal H}}-tB))-\ln(I_{{\mathcal H}}-tA)-\ln(I_{{\mathcal H}}-tB) = F(t) \end{equation} and hence \begin{equation} \ln((I_{{\mathcal H}}-tA)(I_{{\mathcal H}}-tB))-\ln(I_{{\mathcal H}}-tA)-\ln(I_{{\mathcal H}}-tB) = 0 \end{equation} after a finite number of cyclic permutations in each of the coefficients $F_\ell$ in $F(t)=\sum_{\ell=2}^\infty t^\ell F_\ell $. Thus, by \eqref{4.10a}, each $\widetilde P_m(A,B)$, $m\geq1$, vanishes after a finite number of cyclic permutations of its terms. Consequently, $P_m(A,B)$ vanish for $m=1,\dots,k-1$ after a finite number of cyclic permutations of their terms. Finally, to remove the assumption $A,B\in{\mathcal B}_1({\mathcal H})$, one uses a standard approximation argument of operators in ${\mathcal B}_k({\mathcal H})$ by operators in ${\mathcal B}_1({\mathcal H})$, together with the fact that both sides of \eqref{4.3a} are well-defined for $A,B\in{\mathcal B}_k({\mathcal H})$. \end{proof} Next, we prove an extension of a result in \cite{GLMZ05} to arbitrary space dimensions: \begin{theorem} \label{t4.1} Assume Hypothesis \ref{h2.6}, let $k\in{\mathbb{N}}$, $k\geq p$, and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$. Then, \begin{align} \overline{\gamma_N\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1}V \big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*} \in{\mathcal B}_p\big(L^2(\dOm;d^{n-1}\si)\big)\subset{\mathcal B}_k\big(L^2(\dOm;d^{n-1}\si)\big) \label{4.2} \end{align} and \begin{align} & \frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} \notag \\ &\quad = \det{}_k\Big(I_{{\partial\Omega}} - \overline{\gamma_N\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1}V \big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*} \, \Big) \exp\big(\text{\rm{tr}}(T_k(z))\big).
\label{4.3} \end{align} Here $T_k(z)\in {\mathcal B}_1\big(L^2(\partial\Omega; d^{n-1}\sigma)\big)$ denotes one of the cyclic permutations of the polynomial $T_k(\cdot,\cdot)$ defined in Lemma \ref{l4.1}, with $A=A_0(z)$ and $B=B_0(z)$ given by \begin{align} \begin{split} \label{4.4} A_0(z) &= \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{\widetilde u}}\,\Big]^* \, \overline{\gamma_N \big(H_{\Omega}^D -zI_{\Omega}\big)^{-1}\widetilde v}\in{\mathcal B}_p\big(L^2(\Om;d^nx)\big)\subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big), \\ B_0(z) &= -\overline{\widetilde u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v} \in{\mathcal B}_p\big(L^2(\Om;d^nx)\big)\subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big), \end{split} \end{align} and the functions $u$, $v$, $\widetilde u$, and $\widetilde v$ are given by \begin{align} u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{1/2},\quad v(x)=\abs{V(x)}^{1/2}, \\ \widetilde u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{p/p_1},\quad \widetilde v(x)=\abs{V(x)}^{p/p_2}, \end{align} with \begin{align} \label{4.6} p_1=\begin{cases} 3p/2,&n=2, \\ 4p/3, &n\geq3,\end{cases} \qquad p_2=\begin{cases} 3p,&n=2, \\ 4p, & n\geq3, \end{cases} \end{align} and $V=uv=\widetilde u \widetilde v$. In particular, \begin{align} T_2(z) &= \overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}V \big(H_{\Omega}^D-zI_{\Omega}\big)^{-1}V \Big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\Big]^*} \in{\mathcal B}_1\big(L^2(\dOm;d^{n-1}\si)\big). \label{4.5} \end{align} \end{theorem} \begin{proof} From the outset we note that the left-hand side of \eqref{4.3} is well-defined by \eqref{2.35}. Let $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$ and note that $\f1{p_1}+\frac{1}{p_2}=\f1p$ for all $n\geq2$, and hence $V=uv=\widetilde u \widetilde v$. Next, we introduce \begin{equation} \label{4.7} K_D(z)=-\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}, \quad K_N(z)=-\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v} \end{equation} (cf.\ \eqref{B.4}) and note that by Theorem \ref{tB.3} \begin{align} [I_{\Omega}-K_D(z)]^{-1} \in{\mathcal B}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{0,\Omega}^D\big)\big). \end{align} Then Lemma \ref{l4.1} with $A=\widetilde A_0(z)$ and $B=\widetilde B_0(z)$ defined by \begin{align} \widetilde A_0(z) &= I_\Omega - (I_\Omega-K_N(z))[I_{\Omega}-K_D(z)]^{-1} = (K_N(z)-K_D(z))[I_{\Omega}-K_D(z)]^{-1}, \label{4.10} \\ \widetilde B_0(z) &= K_D(z) = -\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}, \label{4.10A} \end{align} yields \begin{align} &\frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} = \frac{\det{}_k\big(I_{\Omega}-K_N(z)\big)}{\det{}_k\big(I_{\Omega}-K_D(z)\big)} \notag \\ &\quad = \det{}_k\big(I_{\Omega}-(K_N(z)-K_D(z))[I_{\Omega}-K_D(z)]^{-1}\big) \exp\big(\text{\rm{tr}}(T_k(\widetilde A_0(z),\widetilde B_0(z)))\big), \label{4.12} \end{align} where $T_k(\cdot,\cdot)$ is the polynomial defined in \eqref{4.4a}. Explicit formulas for the first few $T_k$ are computed in \eqref{4.6a}. Next, temporarily suppose that $V\in L^p(\Omega;d^nx)\cap L^\infty(\Omega;d^nx)$.
Using Lemma \ref{lA.3} (an extension of a result of Nakamura \cite[Lemma 6]{Na01}) and Remark \ref{rA.5} (cf.\ \eqref{Na1-bis}), one finds \begin{align} \begin{split} K_N(z)-K_D(z) &= \overline{u\big[\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}- \big(H_{0,\Omega}^N -zI_{\Omega}\big)^{-1}\big]v} \\ &= \overline{u\big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*}\, \overline{\gamma_N \big(H_{0,\Omega}^D -zI_{\Omega}\big)^{-1}v} \\ &= \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^* \, \overline{\gamma_N \big(H_{0,\Omega}^D -zI_{\Omega}\big)^{-1}v}. \end{split}\label{4.13} \end{align} Inserting \eqref{4.13} into \eqref{4.10} and utilizing \eqref{4.7} and the following resolvent identity which follows from \eqref{B.5}, \begin{align} \label{4.13a} \overline{\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1} v} = \overline{\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1} v} \Big[I_{\Omega}+\overline{ u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1} v}\,\Big]^{-1}, \end{align} one obtains the following equality for $\widetilde A_0(z)$, \begin{align} \begin{split} \label{4.4A} \widetilde A_0(z) &= \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^* \, \overline{\gamma_N \big(H_{\Omega}^D -zI_{\Omega}\big)^{-1}v}. \end{split} \end{align} Moreover, insertion of \eqref{4.13} into \eqref{4.12} yields \begin{align} \label{4.14} &\frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} \notag \\ &\quad = \det{}_k\Big(I_{\Omega} - \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^* \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v} \Big[I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big]^{-1}\Big) \\ &\qquad \times \exp\big(\text{\rm{tr}}(T_k(\widetilde A_0(z),\widetilde B_0(z)))\big). \notag \end{align} Utilizing Corollary \ref{c2.5} with $p_1$ and $p_2$ as in \eqref{4.6}, one finds \begin{align} \overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}} &\in{\mathcal B}_{p_1}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \\ \overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v} &\in{\mathcal B}_{p_2}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \end{align} and hence, \begin{align} \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^* \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v} &\in {\mathcal B}_p\big(L^2(\Om;d^nx)\big) \subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big), \\ \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v} \Big[\,\overline{\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^* &\in {\mathcal B}_p\big(L^2(\dOm;d^{n-1}\si)\big) \subset{\mathcal B}_k\big(L^2(\dOm;d^{n-1}\si)\big). 
\end{align} Then, using the fact that \begin{align} \Big[I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big]^{-1} \in {\mathcal B}\big(L^2(\Om;d^nx)\big), \quad z\in{\mathbb{C}}\big\backslash \big(\sigma\big(H_{\Omega}^D\big)\cup\sigma\big(H_{0,\Omega}^D\big)\big), \end{align} one applies the idea expressed in formula \eqref{4.0} and rearranges the terms in \eqref{4.14} as follows: \begin{align} \label{4.20} &\frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} \\ &\quad = \det{}_k\Big(I_{{\partial\Omega}} - \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v} \Big[I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big]^{-1} \Big[\, \overline{ \gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{u}}\,\Big]^*\Big) \notag \\ &\qquad \times \exp\big(\text{\rm{tr}}(T_k(\widetilde A_0, \widetilde B_0))\big). \notag \end{align} Similarly, using the cyclicity property of $\text{\rm{tr}}(\cdot)$, one rearranges $T_k \big(\widetilde A_0(z),\widetilde B_0(z)\big)$ to get an operator on $L^2(\dOm;d^{n-1}\si)$, which in the following we denote by $T_k(z)$. This is always possible since each term of $T_k \big(\widetilde A_0(z),\widetilde B_0(z)\big)$ has at least one factor of $\widetilde A_0(z)$. Then using equalities \eqref{4.4}, \eqref{4.10A}, \eqref{4.4A}, and $uv=\widetilde u \widetilde v$, one concludes that $T_k(z)$ is a cyclic permutation of $T_k(A_0,B_0)$ with $A_0(z)$ and $B_0(z)$ given by \eqref{4.4}. In particular, rearranging $T_2 \big(\widetilde A_0(z),\widetilde B_0(z)\big)=-\widetilde A_0(z) \widetilde B_0(z)$ or equivalently $T_2(A_0(z),B_0(z))=-A_0(z)B_0(z)$, one obtains $T_2(z)=-\widetilde B_0(z) \widetilde A_0(z) = -B_0(z) A_0(z)$, and hence equality \eqref{4.5}. Thus, \eqref{4.3}, subject to the extra assumption $V\in L^p(\Omega;d^n x)\cap L^\infty(\Omega;d^n x)$, follows from \eqref{4.13a} and \eqref{4.20}. Finally, assuming only $V\in L^p(\Omega;d^n x)$ and utilizing Theorem \ref{tB.3}, Lemma \ref{l2.3}, and Corollary \ref{c2.5} once again, one obtains \begin{align} \Big[I_{\Omega}+\overline{\widetilde u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1} \widetilde v}\,\Big]^{-1} &\in {\mathcal B}\big(L^2(\Om;d^nx)\big), \label{4.24} \\ \widetilde u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-p/p_1} &\in {\mathcal B}_{p_1}\big(L^2(\Om;d^nx)\big), \label{4.25} \\ \widetilde v\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-p/p_2} &\in {\mathcal B}_{p_2}\big(L^2(\Om;d^nx)\big), \label{4.26} \\ \overline{\gamma_D \big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}\widetilde u} &\in {\mathcal B}_{p_1}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \label{4.27} \\ \overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v} &\in {\mathcal B}_{p_2}\big(L^2(\Om;d^nx),L^2(\dOm;d^{n-1}\si)\big), \label{4.28} \end{align} and thus, \begin{equation} \overline{\widetilde u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v} \in {\mathcal B}_p\big(L^2(\Om;d^nx)\big)\subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big).
\label{4.29} \end{equation} Relations \eqref{4.24}--\eqref{4.29} together with the following resolvent identity that follows from \eqref{B.5}, \begin{align} \overline{\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v} = \overline{\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v} \Big[I_{\Omega}+\overline{\widetilde u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}\widetilde v}\,\Big]^{-1}, \label{4.29a} \end{align} prove the ${\mathcal B}_k$-property \eqref{4.2}, \eqref{4.4}, and \eqref{4.5}, and hence the left- and right-hand sides of \eqref{4.3} are well-defined for $V\in L^p(\Omega;d^nx)$. Thus, using \eqref{2.8}, \eqref{2.27}, \eqref{2.28}, the continuity of $\det{}_k(\cdot)$ with respect to the ${\mathcal B}_k$-norm $\|\cdot\|_{{\mathcal B}_k\big(L^2(\Om;d^nx)\big)}$, the continuity of $\text{\rm{tr}}(\cdot)$ with respect to the trace norm $\|\cdot\|_{{\mathcal B}_1\big(L^2(\Om;d^nx)\big)}$, and an approximation of $V\in L^p(\Omega;d^nx)$ by a sequence of potentials $V_j \in L^p(\Omega;d^nx)\cap L^\infty(\Omega;d^nx)$, $j\in{\mathbb{N}}$, in the norm of $L^p(\Omega;d^nx)$ as $j\uparrow\infty$, one then extends the result from $V\in L^p(\Omega;d^nx)\cap L^\infty(\Omega;d^nx)$ to $V\in L^p(\Omega;d^nx)$. \end{proof} Given these preparations, we are ready for the principal result of this paper, the multi-dimensional analog of Theorem \ref{t1.2}: \begin{theorem} \label{t4.2} Assume Hypothesis \ref{h2.6}, let $k\in{\mathbb{N}}$, $k\geq p$, and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^D\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$. Then, \begin{equation} M_{\Omega}^{D}(z)M_{0,\Omega}^{D}(z)^{-1} - I_{\partial\Omega} = - \overline{\gamma_N\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1} V \Big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\Big]^*} \in {\mathcal B}_k\big(L^2(\partial\Omega; d^{n-1}\sigma)\big) \end{equation} and \begin{align} & \frac{\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}v}\,\Big)} \notag \\ & \quad = \det{}_k\Big(I_{{\partial\Omega}} - \overline{\gamma_N\big(H_{\Omega}^D-zI_{\Omega}\big)^{-1} V \big[\gamma_D\big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*}\,\Big) \exp\big(\text{\rm{tr}}(T_k(z))\big) \label{4.30} \\ & \quad = \det{}_k\big(M_{\Omega}^{D}(z)M_{0,\Omega}^{D}(z)^{-1}\big) \exp\big(\text{\rm{tr}}(T_k(z))\big) \label{4.31} \end{align} with $T_k(z)$ defined in Theorem \ref{t4.1}. \end{theorem} \begin{proof} The result follows from combining Lemma \ref{l3.5} and Theorem \ref{t4.1}. \end{proof} \begin{remark} \label{r4.4} Assume Hypothesis \ref{h2.6}, let $k\in{\mathbb{N}}$, $k\geq p$, and $z\in{\mathbb{C}}\big\backslash\big(\sigma\big(H_{\Omega}^N\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$.
Then, \begin{equation} M_{0,\Omega}^{N}(z)^{-1}M_{\Omega}^{N}(z) - I_{\partial\Omega} = \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}V \Big[\gamma_D \big(\big(H_{\Omega}^N-z I_{\Omega}\big)^{-1}\big)^*\Big]^*} \in {\mathcal B}_k\big(L^2(\partial\Omega; d^{n-1}\sigma)\big) \label{4.32} \end{equation} and one can also prove the following analog of \eqref{4.30}: \begin{align} &\frac{\det{}_k\Big(I_{\Omega}+\overline{u(H_{0,\Omega}^D-zI_{\Omega})^{-1}v}\,\Big)} {\det{}_k\Big(I_{\Omega}+\overline{u(H_{0,\Omega}^N-zI_{\Omega})^{-1}v}\,\Big)} \notag \\ &\quad = \det{}_k\Big(I_{{\partial\Omega}} + \overline{\gamma_N(H_{0,\Omega}^D-zI_{\Omega})^{-1}V \big[\gamma_D((H_{\Omega}^N-z I_{\Omega})^{-1})^*\big]^*}\,\Big) \exp\big(\text{\rm{tr}}(T_k(z))\big) \label{4.33} \\ & \quad = \det{}_k\big(M_{0,\Omega}^{N}(z)^{-1} M_{\Omega}^{N}(z)\big) \exp\big(\text{\rm{tr}}(T_k(z))\big), \label{4.34} \end{align} where $T_k(z)$ denotes one of the cyclic permutations of the polynomial $T_k(A,B)$ defined in Lemma \ref{l4.1} with the following choice of $A=A_1(z)$ and $B=B_1(z)$, \begin{align} \begin{split} \notag A_1(z) &= -\Big[\,\overline{\gamma_D \big(H_{\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\overline{\widetilde u}}\,\Big]^* \, \overline{\gamma_N \big(H_{0,\Omega}^D -zI_{\Omega}\big)^{-1}\widetilde v}\in{\mathcal B}_p\big(L^2(\Om;d^nx)\big)\subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big), \\ B_1(z) &= -\overline{\widetilde u\big(H_{0,\Omega}^N-zI_{\Omega}\big)^{-1}\widetilde v} \in{\mathcal B}_p\big(L^2(\Om;d^nx)\big)\subset{\mathcal B}_k\big(L^2(\Om;d^nx)\big), \end{split} \end{align} and the functions $u$, $v$, $\widetilde u$, and $\widetilde v$ are given by \begin{align} u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{1/2},\quad v(x)=\abs{V(x)}^{1/2}, \\ \widetilde u(x) &= \exp(i\arg(V(x)))\abs{V(x)}^{p/p_1},\quad \widetilde v(x)=\abs{V(x)}^{p/p_2}, \end{align} with \begin{align} p_1=\begin{cases} 3p/2, &n=2, \\ 4p/3, &n\geq3,\end{cases} \qquad p_2=\begin{cases}3p, &n=2, \\ 4p, & n\geq3, \end{cases} \end{align} and $V=uv=\widetilde u \widetilde v$. In particular, \begin{align} T_2(z) &= -\overline{\gamma_N\big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}V \big(H_{\Omega}^N-zI_{\Omega}\big)^{-1}V \Big[\gamma_D\big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\Big]^*}. \end{align} \end{remark} \begin{remark} \label{r4.5} It seems tempting at this point to turn to an abstract version of Theorem \ref{t4.2} using the notion of boundary value spaces (see, e.g., \cite{BL06}, \cite{DM91}, \cite{DM95}, \cite[Ch.\ 3]{GG91} and the references therein). However, the analogs of the necessary mapping and trace ideal properties as recorded in Sections \ref{s2} and \ref{s3} do not seem to be available at the present time for general self-adjoint extensions of a densely defined, closed symmetric operator (respectively, maximal accretive extensions of closed accretive operators) in a separable complex Hilbert space. For this reason we decided to start with the special, but important case of multi-dimensional Schr\"odinger operators. \end{remark} \vspace*{.5mm} A few comments are in order at this point: The sudden appearance of the exponential term $\exp(\text{\rm{tr}}(T_k(z)))$ in \eqref{4.30}, \eqref{4.31}, \eqref{4.33}, and \eqref{4.34}, when compared to the one-dimensional case, is due to the necessary use of the modified determinant ${\det}_k(\cdot)$, $k\geq 2$, in Theorems \ref{t4.1} and \ref{t4.2}.
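To illustrate how this exponential factor arises in the simplest modified case $k=2$, suppose temporarily that $A, B \in {\mathcal B}_1({\mathcal H})$, so that the unmodified determinant and all traces below exist; combining the elementary identity $\det{}_2(I_{{\mathcal H}}-C) = \det(I_{{\mathcal H}}-C)\,e^{\text{\rm{tr}}(C)}$, $C\in{\mathcal B}_1({\mathcal H})$, with the multiplicativity of $\det(\cdot)$ then yields \begin{align} \det{}_2((I_{{\mathcal H}}-A)(I_{{\mathcal H}}-B)) &= \det(I_{{\mathcal H}}-A)\det(I_{{\mathcal H}}-B)\, e^{\text{\rm{tr}}(A+B-AB)} \notag \\ &= \det{}_2(I_{{\mathcal H}}-A)\det{}_2(I_{{\mathcal H}}-B)\, e^{-\text{\rm{tr}}(AB)}, \notag \end{align} in accordance with $T_2(A,B)=-AB$ in \eqref{4.6a}; for $k\geq3$ the additional terms in \eqref{4.6a} originate in the same manner from the higher-order corrections in \eqref{4.5a}.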
As mentioned in the introduction, the multi-dimensional extension \eqref{4.30} of \eqref{1.16}, under the stronger hypothesis $V\in L^2(\Omega; d^n x)$, $n=2,3$, first appeared in \cite{GLMZ05}. However, the present results in Theorem \ref{t4.2} go decidedly beyond those in \cite{GLMZ05} in the following sense: \\ $(i)$ The class of domains $\Omega$ permitted by Hypothesis \ref{h2.1} is substantially expanded as compared to \cite{GLMZ05}. \\ $(ii)$ For $n=2,3$, the conditions on $V$ satisfying Hypothesis \ref{h2.6} are now nearly optimal by comparison with the Sobolev inequality (cf.\ Cheney \cite{Ch84}, Reed and Simon \cite[Sect.\ IX.4]{RS75}, Simon \cite[Sect.\ I.1]{Si71}). \\ $(iii)$ The multi-dimensional extension \eqref{4.31} of \eqref{1.17} invoking Dirichlet-to-Neumann maps is a new (and the most significant) result in this paper. \\ $(iv)$ While the results in \cite{GLMZ05} were confined to dimensions $n=2,3$, all results in this paper are now derived in the general case $n\in{\mathbb{N}}$, $n\geq 2$. The principal reduction in Theorem \ref{t4.2} reduces (a ratio of) modified Fredholm determinants associated with operators in $L^2(\Omega; d^n x)$ on the left-hand side of \eqref{4.30} to modified Fredholm determinants associated with operators in $L^2(\partial\Omega; d^{n-1} \sigma)$ on the right-hand side of \eqref{4.30} and, especially, in \eqref{4.31}. This is the analog of the reduction described in the one-dimensional context of Theorem \ref{t1.2}, where $\Omega$ corresponds to the half-line $(0,\infty)$ and its boundary $\partial\Omega$ thus corresponds to the one-point set $\{0\}$. In the context of elliptic operators on smooth $k$-dimensional manifolds, the idea of reducing a ratio of zeta-function regularized determinants to a calculation over the $(k-1)$-dimensional boundary has been studied by Forman \cite{Fo87}. He also pointed out that if the manifold consists of an interval, the special case of a pair of boundary points then permits one to reduce the zeta-function regularized determinant to the determinant of a finite-dimensional matrix. The latter case is of course an analog of the one-dimensional Jost and Pais formula mentioned in the introduction (cf.\ Theorems \ref{t1.1} and \ref{t1.2}). Since then, this topic has been further developed in various directions and we refer, for instance, to Burghelea, Friedlander, and Kappeler \cite{BFK91}, \cite{BFK92}, \cite{BFK93}, \cite{BFK95}, Carron \cite{Ca02}, Friedlander \cite{Fr05}, Guillarmou and Guillop\'e \cite{GG07}, M\"uller \cite{Mu98}, Okikiolu \cite{Ok95}, \cite{Ok95a}, Park and Wojciechowski \cite{PW05}, \cite{PW05a}, and the references therein. \medskip Combining Theorems \ref{t4.2} and \ref{tB.3} yields the following applications of \eqref{4.30} and \eqref{4.33}: \begin{theorem} \label{t4.6} Assume Hypothesis \ref{h2.6} and $k\in{\mathbb{N}}$, $k\geq p$. \\ $(i)$ One infers that \begin{align} \begin{split} &\text{for all } \, z\in{\mathbb{C}}\big\backslash \big(\sigma\big(H_{\Omega}^D\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big), \text{ one has } z\in\sigma\big(H_{\Omega}^N\big) \\ &\quad \text{if and only if } \, \det{}_k\Big( I_{{\partial\Omega}} - \overline{\gamma_N \big(H_{\Omega}^D-zI_{\Omega}\big)^{-1}V \big[\gamma_D \big(H_{0,\Omega}^N-\overline{z}I_{\Omega}\big)^{-1}\big]^*}\, \Big)=0.
\label{4.49} \end{split} \end{align} $(ii)$ Similarly, one infers that \begin{align} \begin{split} & \text{for all } \, z \in {\mathbb{C}} \big\backslash \big(\sigma\big(H_{\Omega}^N\big)\cup\sigma\big(H_{0,\Omega}^N\big)\cup\sigma\big(H_{0,\Omega}^D\big)\big), \text{ one has } z\in\sigma\big(H_{\Omega}^D\big) \\ &\quad \text{if and only if }\, \det{}_k\Big(I_{{\partial\Omega}} + \overline{\gamma_N \big(H_{0,\Omega}^D-zI_{\Omega}\big)^{-1}V \big[\gamma_D \big(\big(H_{\Omega}^N-z I_{\Omega}\big)^{-1}\big)^*\big]^*} \,\Big)=0. \label{4.50} \end{split} \end{align} \end{theorem} \begin{proof} By the Birman--Schwinger principle, as discussed in Theorem \ref{tB.3}, for any $k\in{\mathbb{N}}$ such that $k\geq p$ and $z\in{\mathbb{C}}\big\backslash \big(\sigma\big(H_{\Omega}^D\big)\cup \sigma\big(H_{0,\Omega}^D\big) \cup \sigma\big(H_{0,\Omega}^N\big)\big)$, one has \begin{equation} z\in\sigma\big(H_{\Omega}^N\big) \, \text{ if and only if } \, \det{}_k\Big(I_{\Omega}+ \overline{u\big(H_{0,\Omega}^N -zI_{\Omega}\big)^{-1}v}\,\Big)=0. \end{equation} Thus, \eqref{4.49} follows from \eqref{4.30}. In the same manner, \eqref{4.50} follows from \eqref{4.33}. \end{proof} We conclude with another application to eigenvalue counting functions in the case where $H^D_{\Omega}$ and $H^N_{\Omega}$ are self-adjoint and have purely discrete spectra (i.e., empty essential spectra). To set the stage we introduce the following assumptions: \begin{hypothesis} \label{h4.7} In addition to assuming Hypothesis \ref{h2.6}, suppose that $V$ is real-valued and that $H^D_{\Omega}$ and $H^N_{\Omega}$ have purely discrete spectra. \end{hypothesis} \begin{remark} \label{r4.8} ${}$ \\ $(i)$ Real-valuedness of $V$ implies self-adjointness of $H^D_{\Omega}$ and $H^N_{\Omega}$ as noted in \eqref{B.11}. \\ $(ii)$ Since $\partial\Omega$ is assumed to be compact, purely discrete spectra of $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$, that is, compactness of their resolvents (cf.\ \cite[Sect.\ XIII.14]{RS78}), are equivalent to $\Omega$ being bounded. Indeed, if $\Omega$ had an unbounded component, then one could construct Weyl sequences which would yield nonempty essential spectra of $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$. On the other hand, $H^D_{0,\Omega}$ has empty essential spectrum for any bounded open set $\Omega \subset {\mathbb{R}}^n$ as discussed in the Corollary to \cite[Theorem XIII.73]{RS78}. Similarly, $H^N_{0,\Omega}$ has empty essential spectrum for any bounded open set $\Omega$ satisfying the segment property as discussed in Corollary 1 to \cite[Theorem XIII.75]{RS78}. Since any bounded Lipschitz domain satisfies the segment property (cf.\ \cite[Sect.\ 1.2.2]{Gr85}), any bounded domain $\Omega$ satisfying Hypothesis \ref{h2.1} yields a purely discrete spectrum of $H^N_{0,\Omega}$. \\ $(iii)$ We recall that $V$ is relatively form compact with respect to $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$, that is, \begin{equation} v\big(H^D_{0,\Omega} - z I_{\Omega}\big)^{-1/2}, \, v\big(H^N_{0,\Omega} - z I_{\Omega}\big)^{-1/2} \in {\mathcal B}_{\infty}\big(L^2(\Omega; d^n x)\big) \end{equation} for all $z$ in the resolvent sets of $H^D_{0,\Omega}$, respectively, $H^N_{0,\Omega}$ (in fact, much more is true as recorded in \eqref{2.31} and \eqref{2.32} since ${\mathcal B}_\infty$ can be replaced by ${\mathcal B}_{2p}$). By \eqref{3.47a} and \eqref{3.48a} this yields that the difference of the resolvents of $H^D_{\Omega}$ and $H^N_{\Omega}$ is compact (in fact, it even lies in ${\mathcal B}_{p}\big(L^2(\Omega; d^n x)\big)$).
By a variant of Weyl's theorem (cf., e.g., \cite[Theorem XIII.14]{RS78}), one concludes that $H^D_{\Omega}$ and $H^N_{\Omega}$ have empty essential spectrum if and only if $H^D_{0,\Omega}$ and $H^N_{0,\Omega}$ have (cf.\ \cite[Problem 39, p.\ 369]{RS78}). Thus, by part $(ii)$ of this remark, the assumption that $H^D_{\Omega}$ and $H^N_{\Omega}$ have purely discrete spectra in Hypothesis \ref{h4.7} can equivalently be replaced by the assumption that $\Omega$ is bounded (still assuming Hypothesis \ref{h2.6} and that $V$ is real-valued). \end{remark} \medskip Assuming Hypothesis \ref{h4.7}, $k\in{\mathbb{N}}$, $k\geq p$, we introduce (cf.\ also \cite{Ya07}) \begin{equation} \xi_k(\lambda)= \begin{cases} \pi^{-1} \Im\Big(\ln\Big(\det{}_k\Big(I_{\Omega} +\overline{u (H_{0,\Omega} - \lambda I_{\Omega})^{-1}v}\,\Big)\Big)\Big), & \lambda \in (e_0,\infty) \backslash (\sigma(H_{\Omega})\cup \sigma(H_{0,\Omega})), \\ 0, & \lambda < e_0, \end{cases} \label{4.57} \end{equation} where \begin{equation} e_0 = \inf(\sigma(H_{\Omega})\cup \sigma(H_{0,\Omega})), \end{equation} and $H_{\Omega}$ and $H_{0,\Omega}$ temporarily abbreviate $H^D_{\Omega}$ and $H^D_{0, \Omega}$ in the case of Dirichlet boundary conditions on $\partial\Omega$ and $H^N_{\Omega}$ and $H^N_{0, \Omega}$ in the case of Neumann boundary conditions on $\partial\Omega$. Moreover, we subsequently agree to write $\xi^D_k(\cdot)$ and $\xi^N_k(\cdot)$ for $\xi_k(\cdot)$ in the case of Dirichlet and Neumann boundary conditions in $H_{\Omega}, H_{0, \Omega}$. The branch of the logarithm in \eqref{4.57} has been fixed by putting $\xi_k(\lambda)=0$ for $\lambda$ in a neighborhood of $-\infty$. This is possible since \begin{equation} \lim_{\lambda\downarrow -\infty} \det{}_k\Big(I_{\Omega} +\overline{u (H_{0,\Omega}- \lambda I_{\Omega})^{-1}v}\,\Big) = 1. \label{4.58} \end{equation} Equation \eqref{4.58} in turn follows from Lemma \ref{l2.3} since \begin{equation} \lim_{\lambda\downarrow -\infty} \Big\|\overline{u(H_{0,\Omega} - \lambda I_{\Omega})^{-1} v}\Big\|_{{\mathcal B}_k(L^2(\Omega; d^n x))} = 0 \end{equation} by applying the dominated convergence theorem to $\|(\abs{\cdot}^2-\lambda)^{-1/2}\|_{L^{2p}({\mathbb{R}}^n;d^nx)}^2$ as $\lambda\downarrow-\infty$ in \eqref{2.8} (replacing $p$ by $2p$, $q$ by $1/2$, $f$ by $u$ and $v$, etc.). Since $H_{0,\Omega}$ is self-adjoint in $L^2(\Omega; d^n x)$ with purely discrete spectrum, for any $\lambda_0 \in{\mathbb{R}}$, we obtain the norm convergent expansion \begin{equation} (H_{0,\Omega} - z I_{\Omega})^{-1} \underset{z\to\lambda_0}{=} P_{0,\Omega, \lambda_0} (\lambda_0 -z)^{-1} + \sum_{j=0}^{\infty} (-1)^j S_{0,\Omega, \lambda_0}^{j+1} (\lambda_0 -z)^{j}, \label{4.63} \end{equation} where $P_{0,\Omega,\lambda_0}$ denotes the Riesz projection associated with $H_{0,\Omega}$ and the point $\lambda_0$, and $S_{0,\Omega,\lambda_0}$ is given by \begin{equation} S_{0,\Omega, \lambda_0} = \lim_{z\to\lambda_0} (H_{0,\Omega} - z I_{\Omega})^{-1} (I_{\Omega} - P_{0,\Omega, \lambda_0}), \end{equation} with the limit taken in the topology of ${\mathcal B}(L^2(\Omega;d^n x))$. Hence, $S_{0,\Omega, \lambda_0} P_{0,\Omega, \lambda_0} = P_{0,\Omega,\lambda_0} S_{0,\Omega,\lambda_0} = 0$.
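For instance, in the purely illustrative special case where $H_{0,\Omega}$ has simple spectrum $\{\lambda_j\}_{j\in{\mathbb{N}}}$ with associated orthonormal eigenfunctions $\{\varphi_j\}_{j\in{\mathbb{N}}}$, the expansion \eqref{4.63} becomes explicit, \begin{equation} (H_{0,\Omega} - z I_{\Omega})^{-1} = \sum_{j\in{\mathbb{N}}} \frac{(\varphi_j,\cdot)_{L^2(\Omega;d^nx)}\,\varphi_j}{\lambda_j - z}, \qquad S_{0,\Omega,\lambda_0} = \sum_{\substack{j\in{\mathbb{N}} \\ \lambda_j\neq\lambda_0}} \frac{(\varphi_j,\cdot)_{L^2(\Omega;d^nx)}\,\varphi_j}{\lambda_j - \lambda_0}, \end{equation} where $(\cdot,\cdot)_{L^2(\Omega;d^nx)}$ denotes the scalar product in $L^2(\Omega;d^nx)$.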
If, in fact, $\lambda_0$ is a (necessarily discrete) eigenvalue of $H_{0,\Omega}$, then $P_{0,\Omega,\lambda_0}$ is the projection onto the corresponding eigenspace of $H_{0,\Omega}$ and the dimension of its range equals the multiplicity of the eigenvalue $\lambda_0$, denoted by \begin{equation} n_{0,\lambda_0} = \text{\rm{dim}}(\text{\rm{ran}}(P_{0,\Omega, \lambda_0})). \end{equation} We recall that all eigenvalues of $H_{0,\Omega}$ are semisimple, that is, their geometric and algebraic multiplicities coincide, since $H_{0,\Omega}$ is assumed to be self-adjoint. If $\lambda_0$ is not in the spectrum of $H_{0,\Omega}$ then, of course, $P_{0,\Omega,\lambda_0} =0$ and $n_{0,\lambda_0} =0$. In exactly the same manner, and in obvious notation, one then also obtains \begin{equation} (H_{\Omega} - z I_{\Omega})^{-1} \underset{z\to\lambda_0}{=} P_{\Omega, \lambda_0} (\lambda_0 -z)^{-1} + \sum_{j=0}^{\infty} (-1)^j S_{\Omega, \lambda_0}^{j+1} (\lambda_0 -z)^{j} \label{4.66} \end{equation} and \begin{equation} n_{\lambda_0} = \text{\rm{dim}}(\text{\rm{ran}}(P_{\Omega, \lambda_0})). \end{equation} In the following we denote half-sided limits by \begin{equation} f(x_+)= \lim_{\varepsilon\downarrow 0} f(x+\varepsilon), \quad f(x_-) = \lim_{\varepsilon\uparrow 0} f(x-\varepsilon), \quad x\in{\mathbb{R}}. \end{equation} Moreover, we denote by $N_{H_{\Omega}}(\lambda)$ (respectively, $N_{H_{0,\Omega}}(\lambda)$), $\lambda\in{\mathbb{R}}$, the right-continuous function on ${\mathbb{R}}$ which counts the number of eigenvalues of $H_{\Omega}$ (respectively, $H_{0,\Omega}$) less than or equal to $\lambda$, counting multiplicities. \begin{lemma} \label{l4.8} Assume Hypothesis \ref{h4.7} and let $k\in{\mathbb{N}}$, $k\geq p$. Then $\xi_k$ equals a fixed integer on any open interval in ${\mathbb{R}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$. Moreover, for any $\lambda \in {\mathbb{R}}$, \begin{equation} \xi_k(\lambda_+) - \xi_k(\lambda_-) = - (n_{\lambda} - n_{0,\lambda}), \label{4.69} \end{equation} and hence $\xi_k$ is piecewise integer-valued on ${\mathbb{R}}$ and normalized to vanish on $(-\infty, e_0)$ such that \begin{equation} \xi_k(\lambda) = -[N_{H_{\Omega}}(\lambda) - N_{H_{0,\Omega}}(\lambda)], \quad \lambda \in {\mathbb{R}} \backslash (\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega})). \label{4.70} \end{equation} \end{lemma} \begin{proof} Introducing the unitary operator $S$ in $L^2(\Omega; d^n x)$ of multiplication by the function $\sgn(V)$, \begin{equation} (Sf)(x) = \sgn(V(x)) f(x), \quad f \in L^2(\Omega; d^n x) \label{4.71} \end{equation} such that $Su = uS = v$, $Sv = vS = u$, $S^2 = I_{\Omega}$, one computes for $\lambda \in {\mathbb{R}}\backslash \sigma(H_{0,\Omega})$, \begin{align} \overline{\det{}_k\Big(I_{\Omega}+\overline{u(H_{0,\Omega} - \lambda I_{\Omega})^{-1}v}\,\Big)} &= \det{}_k\Big(I_{\Omega} +\overline{v(H_{0,\Omega} - \lambda I_{\Omega})^{-1}u}\,\Big) \notag \\ &= \det{}_k\Big(I_{\Omega}+\overline{Su(H_{0,\Omega} - \lambda I_{\Omega})^{-1}vS}\,\Big) \notag \\ &= \det{}_k\Big(I_{\Omega}+\overline{u(H_{0,\Omega} - \lambda I_{\Omega})^{-1}v}\,\Big), \label{4.72} \end{align} that is, $\det{}_k\Big(I_{\Omega}+\overline{u(H_{0,\Omega} - \lambda I_{\Omega})^{-1}v}\,\Big)$ is real-valued for $\lambda \in {\mathbb{R}}\backslash \sigma(H_{0,\Omega})$. (Here the bars either denote complex conjugation, or the operator closure, depending on the context in which they are used.)
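For completeness we recall that the first equality in \eqref{4.72} rests on the conjugation property of modified determinants, \begin{equation} \overline{\det{}_k(I_{{\mathcal H}} - C)} = \det{}_k\big(I_{{\mathcal H}} - C^*\big), \quad C\in{\mathcal B}_k({\mathcal H}), \end{equation} combined with the self-adjointness of $H_{0,\Omega}$, the real-valuedness of $u$ and $v$, and hence $\big[\,\overline{u(H_{0,\Omega} - \lambda I_{\Omega})^{-1}v}\,\big]^* = \overline{v(H_{0,\Omega} - \lambda I_{\Omega})^{-1}u}$ for $\lambda \in {\mathbb{R}}\backslash \sigma(H_{0,\Omega})$.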
Together with the Birman--Schwinger principle as expressed in Theorem \ref{tB.3}, this proves that $\xi_k$ equals a fixed integer on any open interval in ${\mathbb{R}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$. Next, we note that for $z\in {\mathbb{C}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$, \begin{align} \begin{split} & - \frac{d}{dz} \ln \Big(\det{}_k\Big(I_{\Omega} +\overline{u\big(H_{0,\Omega}- z I_{\Omega}\big)^{-1}v}\,\Big)\Big) = \text{\rm{tr}}\bigg((H_{\Omega} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1} \\ & \hspace*{1.9cm} - \sum_{\ell=1}^{k-1} (-1)^{\ell} \overline{(H_{0,\Omega} - z I_{\Omega})^{-1}v} \Big[\overline{u(H_{0,\Omega} - z I_{\Omega})^{-1}v} \Big]^{\ell-1} u (H_{0,\Omega} - z I_{\Omega})^{-1}\bigg), \label{4.73} \end{split} \end{align} which represents just a slight extension of the result recorded in \cite{Ya07}. Insertion of \eqref{4.63} and \eqref{4.66} into \eqref{4.73} then yields that for any $\lambda_0\in{\mathbb{R}}$, \begin{align} - \frac{d}{dz} \ln \Big(\det{}_k\Big(I_{\Omega} +\overline{u(H_{0,\Omega}- z I_{\Omega})^{-1}v}\,\Big)\Big) & \underset{z\to\lambda_0}{=} \text{\rm{tr}}(P_{\Omega, \lambda_0} - P_{0,\Omega, \lambda_0}) (\lambda_0 -z)^{-1} + \sum_{\ell=-k}^{\infty} c_{\ell} (\lambda_0 -z)^{\ell} \notag \\ & \underset{z\to\lambda_0}{=} [n_{\lambda_0} - n_{0,\lambda_0}] (\lambda_0 -z)^{-1} + \sum_{\ell=-k}^{\infty} c_{\ell} (\lambda_0 -z)^{\ell}, \label{4.74} \end{align} where \begin{equation} c_{\ell} \in {\mathbb{R}}, \;\; \ell \in{\mathbb{Z}}, \, \ell \geq -k, \, \text{ and } \, c_{-1} =0. \label{4.75} \end{equation} That $c_{\ell} \in {\mathbb{R}}$ is clear from the real-valuedness of $V$ and the self-adjointness of $H_{\Omega}$ and $H_{0,\Omega}$ by expanding the $(\ell -1)$th power of $\overline{u(H_{0,\Omega} - z I_{\Omega})^{-1}v}$ in \eqref{4.73}. To demonstrate that $c_{-1}$ actually vanishes, that is, that the term proportional to $(\lambda_0 -z)^{-1}$ cancels in the sum $\sum_{\ell=-k}^{\infty} c_{\ell} (\lambda_0 -z)^{\ell}$ in \eqref{4.74}, we temporarily introduce $u_m = P_m u$, $v_m = v P_m$, where $\{P_m\}_{m\in{\mathbb{N}}}$ is a family of orthogonal projections in $L^2(\Omega; d^n x)$ satisfying \begin{equation} P_m^2=P_m=P_m^*, \quad \text{\rm{dim}}(\text{\rm{ran}}(P_m))=m, \quad \text{\rm{ran}}(P_m) \subset \text{\rm{dom}}(v), \; \; m\in{\mathbb{N}}, \quad \slim_{m\uparrow\infty} P_m = I_{\Omega}, \end{equation} where $\slim$ denotes the limit in the strong operator topology. (E.g., it suffices to choose $P_m$ as appropriate spectral projections associated with $H_{0,\Omega}$.) In addition, we introduce $V_m = v_m u_m$ and the operator $H_{\Omega,m}$ in $L^2(\Omega; d^n x)$ by replacing $V$ by $V_m$ in $H_{\Omega}$. Since \begin{equation} V_m= (vP_m) P_m (u P_m)^*, \end{equation} one obtains that $V_m$ is a trace class (in fact, finite rank) operator, that is, \begin{equation} V_m \in {\mathcal B}_1\big(L^2(\Omega; d^n x)\big), \quad m\in{\mathbb{N}}.
\label{4.77} \end{equation} Moreover, since by \eqref{2.31} and \eqref{2.32}, \begin{equation} u(H_{0,\Omega} - z I_{\Omega})^{-1/2}, \overline{(H_{0,\Omega} - z I_{\Omega})^{-1/2} v} \in {\mathcal B}_{2p}\big(L^2(\Omega; d^n x)\big), \quad z\in {\mathbb{C}}\backslash\sigma(H_{0,\Omega}), \end{equation} one concludes that $\overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m} = P_m \overline{u (H_{0,\Omega} - z I_{\Omega})^{-1} v} P_m$, $m\in{\mathbb{N}}$, satisfies \begin{align} \lim_{m\uparrow\infty} \big\|\overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m} - u (H_{0,\Omega} - z I_{\Omega})^{-1} v \big\|_{{\mathcal B}_{p}(L^2(\Omega; d^n x))} &= 0, \quad z\in {\mathbb{C}}\backslash\sigma(H_{0,\Omega}), \label{4.78} \\ \lim_{m\uparrow\infty} \big\|\overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-2} v P_m} - u (H_{0,\Omega} - z I_{\Omega})^{-2} v \big\|_{{\mathcal B}_{p}(L^2(\Omega; d^n x))} &= 0, \quad z\in {\mathbb{C}}\backslash\sigma(H_{0,\Omega}). \label{4.79} \end{align} Applying the formula (cf.\ \cite[p.\ 44]{Ya92}) \begin{equation} \frac{d}{dz} \ln({\det}_k (I_{{\mathcal H}} - A(z))) = - \text{\rm{tr}}\big((I_{{\mathcal H}} - A(z))^{-1} A(z)^{k-1} A'(z) \big), \quad z\in{\mathcal D}, \end{equation} where $A(\cdot)$ is analytic in some open domain ${\mathcal D}\subseteq{\mathbb{C}}$ with respect to the ${\mathcal B}_k({\mathcal H})$-norm, ${\mathcal H}$ a separable complex Hilbert space, one obtains for $z\in {\mathbb{C}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$, \begin{align} & - \frac{d}{dz} \ln \Big({\det}_k \Big(I_{\Omega} + \overline{u (H_{0,\Omega} - z I_{\Omega})^{-1} v} \, \Big)\Big) \notag \\ & \quad = (-1)^k \text{\rm{tr}}\Big(\Big[I_{\Omega} + \overline{u (H_{0,\Omega} - z I_{\Omega})^{-1} v} \, \Big]^{-1} \Big[\,\overline{u (H_{0,\Omega} - z I_{\Omega})^{-1} v} \, \Big]^{k-1} \overline{u (H_{0,\Omega} - z I_{\Omega})^{-2} v} \, \Big), \label{4.81} \\ & - \frac{d}{dz} \ln\Big({\det}_k \Big(I_{\Omega} + \overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m} \, \Big)\Big) \notag \\ & \quad = (-1)^k \text{\rm{tr}} \Big(\Big[I_{\Omega} + \overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m} \, \Big]^{-1} \Big[\overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m}\,\Big]^{k-1} \label{4.82} \\ & \hspace*{8cm} \times \overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-2} v P_m } \, \Big), \quad m\in{\mathbb{N}}. \notag \end{align} Combining equations \eqref{4.78}, \eqref{4.79} and \eqref{4.81}, \eqref{4.82} then yields \begin{align} & \lim_{m\uparrow\infty} \frac{d}{dz} \ln\Big({\det}_k \Big(I_{\Omega} + \overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1} v P_m} \Big)\Big) = \frac{d}{dz} \ln\Big({\det}_k \Big(I_{\Omega} + \overline{u (H_{0,\Omega} - z I_{\Omega})^{-1} v} \Big)\Big), \notag \\ & \hspace*{9.5cm} z\in{\mathbb{C}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega})).
\label{4.83} \end{align} Because of \eqref{4.83}, to prove that $c_{-1} =0$ in \eqref{4.74} (as claimed in \eqref{4.75}), it suffices to replace $V$ in \eqref{4.74} by $V_m$ and prove that $c_{m,-1} =0$ for all $m\in{\mathbb{N}}$ in the following equation analogous to \eqref{4.74}, \begin{align} \begin{split} & - \frac{d}{dz} \ln \Big(\det{}_k\Big(I_{\Omega} + P_m \overline{u (H_{0,\Omega}- z I_{\Omega})^{-1}v} P_m \Big)\Big) \\ & \quad \underset{z\to\lambda_0}{=} \text{\rm{tr}}(P_{\Omega, m, \lambda_0} - P_{0,\Omega, \lambda_0}) (\lambda_0 -z)^{-1} + \sum_{\ell=-k}^{\infty} c_{m,\ell} (\lambda_0 -z)^{\ell}, \quad m\in{\mathbb{N}}, \label{4.84} \end{split} \end{align} where \begin{equation} c_{m,\ell} \in {\mathbb{R}}, \;\; \ell \in{\mathbb{Z}}, \, \ell \geq -k, \;\; m\in{\mathbb{N}}, \label{4.85} \end{equation} and $P_{\Omega, m, \lambda_0}$ denotes the corresponding Riesz projection associated with $H_{\Omega,m}$ (obtained by replacing $V$ by $V_m$ in $H_{\Omega}$) and the point $\lambda_0$. Applying the analog of formula \eqref{4.73} to $H_{\Omega,m}$ (cf.\ again \cite{Ya07}), and noting that $P_m$ has rank $m\in{\mathbb{N}}$, one concludes that for $z\in {\mathbb{C}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$, \begin{align} & - \frac{d}{dz} \ln \Big(\det{}_k\Big(I_{\Omega} + P_m\overline{u (H_{0,\Omega}- z I_{\Omega})^{-1}v} P_m\,\Big)\Big) = - \frac{d}{dz} \ln \Big(\det{}_k\Big(I_{\Omega} + \overline{P_m u (H_{0,\Omega}- z I_{\Omega})^{-1}v P_m}\,\Big)\Big) \notag \\ & \quad = \text{\rm{tr}}\bigg((H_{\Omega,m} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1} \notag \\ & \hspace*{1.4cm} - \sum_{\ell=1}^{k-1} (-1)^{\ell} \overline{(H_{0,\Omega} - z I_{\Omega})^{-1}v P_m} \Big[\overline{P_m u(H_{0,\Omega} - z I_{\Omega})^{-1}v P_m} \Big]^{\ell-1} P_m u (H_{0,\Omega} - z I_{\Omega})^{-1}\bigg) \notag \\ & \quad = \text{\rm{tr}}\big((H_{\Omega,m} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1}\big) - \sum_{\ell=1}^{k-1} \frac{(-1)^{\ell}}{\ell} \frac{d}{dz} \text{\rm{tr}}\bigg(\Big[\overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-1}v P_m}\Big]^{\ell}\bigg) \label{4.86} \\ & \quad = \text{\rm{tr}}\big((H_{\Omega,m} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1}\big) \notag \\ & \qquad - \sum_{\ell=1}^{k-1} (-1)^{\ell} \text{\rm{tr}}\bigg( \Big[\overline{P_m u(H_{0,\Omega} - z I_{\Omega})^{-1}v P_m} \Big]^{\ell-1} \, \overline{P_m u (H_{0,\Omega} - z I_{\Omega})^{-2} v P_m} \bigg), \quad m\in{\mathbb{N}}. \notag \end{align} Here we have used the fact that by \eqref{4.77}, \begin{equation} - \frac{d}{dz} \ln \Big(\det{}\Big(I_{\Omega} + \overline{P_m u (H_{0,\Omega}- z I_{\Omega})^{-1}v P_m}\,\Big)\Big) = \text{\rm{tr}}\big((H_{\Omega,m} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1}\big), \end{equation} for $z\in {\mathbb{C}}\backslash(\sigma(H_{\Omega})\cup\sigma(H_{0,\Omega}))$, and that (cf.\ \cite[Theorem 9.2]{Si05}) \begin{align} \begin{split} \frac{d}{dz} \ln ({\det}_k(I_{{\mathcal H}} - B(z))) & = \frac{d}{dz} \ln ({\det} (I_{{\mathcal H}} - B(z))) + \sum_{\ell=1}^{k-1} \frac{1}{\ell} \frac{d}{dz} \text{\rm{tr}} \big(B(z)^\ell\big) \\ & = \frac{d}{dz} \ln ({\det} (I_{{\mathcal H}} - B(z))) + \sum_{\ell=1}^{k-1} \text{\rm{tr}} \big(B(z)^{\ell-1} B'(z)\big), \quad z\in{\mathcal D}, \end{split} \end{align} where $B(\cdot)$ is analytic in some open domain ${\mathcal D} \subseteq {\mathbb{C}}$ with respect to the ${\mathcal B}_1({\mathcal H})$-norm (with ${\mathcal H}$ a separable complex Hilbert space).
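For instance, for $k=2$ the latter identity is immediate from $\det{}_2(I_{{\mathcal H}}-B(z)) = \det(I_{{\mathcal H}}-B(z))\, e^{\text{\rm{tr}}(B(z))}$, which yields \begin{equation} \frac{d}{dz} \ln ({\det}_2 (I_{{\mathcal H}} - B(z))) = \frac{d}{dz} \ln ({\det} (I_{{\mathcal H}} - B(z))) + \text{\rm{tr}}(B'(z)), \quad z\in{\mathcal D}. \end{equation}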
The presence of the $d/dz$-term under the sum in \eqref{4.86} proves that the only $(\lambda_0 - z)^{-1}$-term in \eqref{4.84}, respectively, \eqref{4.86}, as $z\to\lambda_0$, must originate from the trace of the resolvent difference \begin{equation} \text{\rm{tr}}\big((H_{\Omega,m} - z I_{\Omega})^{-1} - (H_{0,\Omega} - z I_{\Omega})^{-1}\big) \underset{z\to\lambda_0}{=} \text{\rm{tr}}(P_{\Omega, m, \lambda_0} - P_{0,\Omega, \lambda_0}) (\lambda_0 -z)^{-1} + O(1), \quad m\in{\mathbb{N}}. \end{equation} Thus we have proved that \begin{equation} c_{m,-1} = 0, \quad m\in{\mathbb{N}}, \end{equation} in \eqref{4.84}. By \eqref{4.83} this finally proves \begin{equation} c_{-1} = 0 \end{equation} in \eqref{4.74}. Equations \eqref{4.74} and \eqref{4.75} then prove \eqref{4.69}. Together with the paragraph following \eqref{4.72}, this also proves \eqref{4.70}. \end{proof} Given Lemma \ref{l4.8}, Theorem \ref{t4.2} yields the following application to differences of Dirichlet and Neumann eigenvalue counting functions: \begin{theorem} \label{t4.9} Assume Hypothesis \ref{h4.7} and let $k\in{\mathbb{N}}$, $k\geq p$. Then, for all $\lambda\in{\mathbb{R}}\backslash \big(\sigma\big(H^D_{\Omega}\big) \cup \sigma\big(H^D_{0,\Omega}\big) \cup \sigma\big(H^N_{0,\Omega}\big)\big)$, \begin{align} & \xi_k^N(\lambda) - \xi_k^D(\lambda) = [N_{H^D_{\Omega}} (\lambda) - N_{H^D_{0, \Omega}} (\lambda)] - [N_{H^N_{\Omega}} (\lambda) - N_{H^N_{0, \Omega}} (\lambda)] \notag \\ & \quad = \pi^{-1} \Im\Big(\ln\Big({\det}_k \Big(I_{\partial\Omega} - \overline{\gamma_N \big(H^D_{\Omega} - \lambda I_{\Omega}\big)^{-1} V \big[\gamma_D \big(H^N_{0,\Omega} - \lambda I_{\Omega}\big)^{-1}\big]^*}\,\Big)\Big)\Big) + \pi^{-1} \Im(\text{\rm{tr}}(T_k(\lambda))) \notag \\ & \quad = \pi^{-1} \Im\big(\ln\big({\det}_k \big(M^D_{\Omega}(\lambda) M^D_{0,\Omega}(\lambda)^{-1} \big)\big)\big) + \pi^{-1} \Im(\text{\rm{tr}}(T_k(\lambda))) \end{align} with $T_k$ defined in Theorem \ref{t4.1}. \end{theorem} \begin{proof} This is now an immediate consequence of \eqref{4.30}, \eqref{4.31}, \eqref{4.57}, and \eqref{4.70}. \end{proof}
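As an elementary consistency check of Theorem \ref{t4.9}, consider the trivial case $V=0$ a.e.\ on $\Omega$: then $u=v=0$, $T_k(\lambda)=0$, $H^D_{\Omega}=H^D_{0,\Omega}$, and $H^N_{\Omega}=H^N_{0,\Omega}$, and hence \begin{equation} {\det}_k \big(M^D_{\Omega}(\lambda) M^D_{0,\Omega}(\lambda)^{-1}\big) = {\det}_k(I_{\partial\Omega}) = 1, \quad N_{H^D_{\Omega}}(\lambda) = N_{H^D_{0,\Omega}}(\lambda), \quad N_{H^N_{\Omega}}(\lambda) = N_{H^N_{0,\Omega}}(\lambda), \end{equation} so that all terms in Theorem \ref{t4.9} vanish identically.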
\section{Introduction} \label{sec:intro} Compressible multicomponent flows are of significance to many applications, such as inertial confinement fusion (ICF), core-collapse supernova explosions, underwater explosions (UNDEX), and so forth. These physical processes include Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) hydrodynamic instabilities that rapidly develop in the presence of small initial perturbations. Up to now, understanding the development of these nonlinear instabilities still heavily relies on numerical simulations. The present research is motivated by the need to simulate the laser-driven plasma instability developing at the interface between dissimilar materials within ICF capsules. In such applications, the transport phenomena (of mass, momentum and energy) accompanying the hydrodynamic process play a significant role. First of all, the laser energy deposited in the plasma is transported through the heat conduction process. Moreover, at small spatial scales the effects of viscous dissipation and mass diffusion begin to affect the instability growth \cite{robey2004effects}. Numerically, the transport process is vital for achieving a grid-converged DNS (Direct Numerical Simulation) \cite{vold2021plasma}. To correctly simulate the dissipative processes in multicomponent flows, the governing model should satisfy two important criteria, i.e., physical admissibility and numerical consistency. The former stipulates that the model should respect the first and second laws of thermodynamics. The latter dictates that the closure relations and numerics do not cause non-physical spurious oscillations near the material interface. Several works in the literature have made attempts in this direction \cite{Thornber2018,Cook2009Enthalpy}. The present work is performed in the framework of the diffuse interface model (DIM). We aim to incorporate various dissipative transport phenomena (including mass diffusion, viscous dissipation and heat conduction) into this framework with the above two criteria being maintained. The fully conservative four-equation DIM (i.e., the Euler equations supplemented with one conservation equation for the partial density) is notorious for triggering spurious oscillations in pressure and velocity (at the hydrodynamic stage) when solved with the Godunov finite volume method (FVM). Analyses of this phenomenon have been performed from both physical and numerical viewpoints \cite{abgrall1996prevent,abgrall2001computations,saurel1999multiphase}. In this model there exists only one temperature and one pressure, implicitly assuming that the components are in thermal and mechanical equilibrium. {\color{black}This assumption is commonly used for combustion and boiling problems; however, it may be too strong to be valid for interface problems.} Pressure and temperature disequilibria are diminished by compaction (pressure relaxation) and heat transfer (temperature relaxation) between components, respectively. However, these two processes may take place at very different time scales. For example, for the deflagration-to-detonation transition (DDT) in granular materials, the characteristic time scales for pressure relaxation and temperature relaxation are 0.03$\mu s$ and 18000$\mu s$ after sufficient combustion \cite{kapila2001two}, respectively. It is evident that forcing thermodynamical equilibrium to be reached at the same time scale may lead to physical inconsistency.
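To put these numbers in perspective, the disparity between the two relaxation processes in this example spans roughly six orders of magnitude, \begin{equation} \frac{\tau_p}{\tau_T} \approx \frac{0.03\ \mu s}{18000\ \mu s} \approx 1.7\times10^{-6}, \end{equation} where $\tau_p$ and $\tau_T$ denote the characteristic pressure and temperature relaxation times quoted above.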
To understand the temperature closure relations within the computational cell, let us look at the resolved interface problem in immiscible multicomponent flows (\Cref{fig:Tjump}). According to the Rankine-Hugoniot relation, the jump conditions across the interface (the line $C$ in \Cref{fig:Tjump}) along its normal direction read: \begin{subequations} \begin{align} \llbracket \rho Y_1 (u_n-\mathcal{U})\rrbracket = 0, \\ \llbracket\rho Y_2 (u_n-\mathcal{U})\rrbracket=0,\\ \llbracket p + \rho u_n(u_n-\mathcal{U}) \rrbracket =0, \label{eq:mom_jump} \\ \llbracket (u_n-\mathcal{U})\left(\rho e+\frac{\rho |\vc{u}-\mathcal{U}\vc{n}|^{2}}{2} + p\right) + J_{qn} \rrbracket =0, \label{eq:en_jump} \end{align} \end{subequations} where $\rho$, $Y_k$, $e$ are the mixture density, the mass fraction of component $k$, and the mixture internal energy. Here, $u_n$ and $J_{qn}$ denote the components of the particle velocity $\vc{u}$ and of the heat flux normal to the interface $C$, and $\mathcal{U}$ is the normal interface velocity. The operator $\llbracket \phi \rrbracket = \phi_2 - \phi_1$ with the subscripts $1/2$ denoting the variables on the left/right of the discontinuity, i.e., the states of fluid 1 and fluid 2. \begin{figure}[htbp] \centering \subfloat[Equilibrium temperatures]{\label{Tjump:eq}\includegraphics[width=0.25\textwidth]{./FIGS/T_eq.jpg}} \quad\quad \subfloat[Disequilibrium temperatures]{\label{Tjump:ineq}\includegraphics[width=0.25\textwidth]{./FIGS/T_ineq.jpg}}\quad\quad \subfloat[Miscible interface]{\label{fig:mass_diff_interface}\includegraphics[width=0.25\textwidth]{./FIGS/miscible.jpg}} \caption{Interfaces on computational grid.} \label{fig:Tjump} \end{figure} The interface represents a contact discontinuity, across which the normal velocity is continuous, i.e., $ u_{n1} = u_{n2} = \mathcal{U}$. From \cref{eq:mom_jump,eq:en_jump} follow $\llbracket p \rrbracket = 0$ and $\llbracket J_{qn} \rrbracket = 0$, respectively. With Fourier's law for the heat flux, the latter can be expressed as: \begin{equation}\label{eq:heat_flux_jump} \left( \lambda_1 \nabla{T_1} \right) \dpr \vc{n} = \left( \lambda_2 \nabla{T_2} \right) \dpr \vc{n}, \end{equation} where $\lambda_k$ is the heat conduction coefficient of component $k$. In the framework of the DIM, the material interface contained within a computational cell is not tracked and the properties of fluids are diffused within the interfacial zone. Thermal conductivity imposes temperature continuity across the interface. If the model is equipped with only one temperature, then the two materials inside a computational cell share the same temperature (i.e., $T_1 = T_2 = T$, \Cref{Tjump:eq}), which may be inconsistent with \cref{eq:heat_flux_jump}. This inconsistency results in numerical errors of the FVM or physical inconsistency in the vicinity of the interface. In fact, the equilibrium temperature assumption means that the temperature relaxation rate is infinitely large, so that phase temperature equilibrium is reached instantaneously. This assumption deprives the model of the ability to deal with physically finite relaxation rates. To get rid of the above-mentioned temperature inconsistency, we turn to the temperature-disequilibrium models. In such models each cell state is characterized by a temperature for each component (\Cref{Tjump:ineq}), and thus the temperature equilibrium is not enforced. One representative of such models is the Baer-Nunziato (BN) model \cite{BAER1986861} and its variant \cite{saurel1999multiphase} for compressible multiphase flows.
Formally, the BN model includes balance equations for the partial density, the phase momentum and the phase total energy, and is augmented with an evolution equation for the volume fraction. The latter is vital for maintaining the oscillation-free property at the interface. In this model, each component is described with a full set of parameters (density, velocity, pressure and temperature) and governed by the single-phase Navier-Stokes (NS) equations away from the interface. The interactions between components only happen in the neighbourhood of the interface. These interactions include the kinetic, mechanical and thermal relaxations that strive to erase the disequilibria in velocity, pressure and temperature. The temperature equilibrium (imposed in one-temperature DIMs) is reached only after the complete temperature relaxation. The characteristic relaxation rates depend on the particular physical problem. The model is unconditionally hyperbolic and respects the laws of thermodynamics. Moreover, in DIM the phase temperatures exist all through the computational domain since even the pure fluid is approximated as a fluid with a negligible amount of the other components. The disequilibrium temperature closure in each computational cell does not introduce inconsistency with the heat flux jump conditions. Although the BN model is physically complete, it involves a complicated wave structure and stiff relaxation processes, making its numerical implementation quite cumbersome. For many application scenarios, the model can be simplified to a large extent. For example, for the DDT process, Kapila et al. have derived two reduced models in the limit of instantaneous mechanical relaxations \cite{kapila2001two}. The first model is obtained in the limit of instantaneous velocity relaxation and consists of six equations. Only one equilibrium velocity exists in this model. This formulation is used for solving Kapila's five-equation model in \cite{saurel2009simple} for improving robustness. The second is the well-known five-equation model, derived in the limit that both pressure and velocity relaxation rates approach infinity (\Cref{fig:model_schematic}). \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{./FIGS/model_shematic.jpg} \caption{Schematic of reduced models.} \label{fig:model_schematic} \end{figure} Most current DIM works in the literature focus on the resolved interface problem; however, the components may be miscible and penetrate into each other (\Cref{fig:mass_diff_interface}). Under such circumstances, there is no longer a definite sharp discontinuous material interface, but a physically diffused zone of finite thickness. This diffused zone develops as a result of mass diffusion and enthalpy diffusion. The significance of the latter to maintain the thermodynamical consistency has been demonstrated in \cite{Cook2009Enthalpy}. In fact, the enthalpy diffusion appears as a result of replacing the phase velocities with the mass-weighted one for any multi-velocity model, including the BN model. From a microscopic point of view, mass diffusion is the result of random molecular motions, which drive the molecular distribution toward uniformity. In macroscopic continuum mechanics, where each spatial element contains an enormous number of molecules, the mass diffusion flux is characterized by the velocity difference between different species. Thus, the one-velocity model of Kapila is incapable of modelling this process.
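To make the last point quantitative, note that in a velocity-disequilibrium model the diffusive mass flux of a component is carried by the drift of that component relative to the mass-weighted mixture velocity. A typical closure, written here for illustration in Fick form with a binary diffusion coefficient $D$, reads \begin{equation} \vc{J}_k = \alpha_k \rho_k \left( \vc{u}_k - \overline{\vc{u}} \right) = -\rho D \nabla Y_k, \qquad \overline{\vc{u}} = \sum_k \frac{\alpha_k \rho_k}{\rho}\, \vc{u}_k, \end{equation} where $\alpha_k$, $\rho_k$ and $Y_k = \alpha_k\rho_k/\rho$ denote the volume fraction, phase density and mass fraction of component $k$, and $\rho$ is the mixture density; note that $\sum_k \vc{J}_k = 0$ by construction. A one-velocity model enforces $\vc{u}_k = \overline{\vc{u}}$ and hence $\vc{J}_k \equiv 0$.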
When the mass diffusion effect is strong, the velocity relaxation time scale is comparable to that of the problem considered, and the assumption of instantaneous velocity relaxation is inappropriate. Therefore, to model the mass diffusion, we have to retain the velocity disequilibrium and then close it with diffusion laws (\Cref{fig:model_schematic}), which is one of the major contributions of the present paper. The reduction procedures in \cite{murrone2005five,perigaud2005compressible} assume the same relaxation time scale for velocity and pressure. This approach inevitably leads to velocity-equilibrium models in which no mass diffusion survives. Instead, weighing simplicity against thermodynamic consistency, we assume different time scales for the velocity and pressure relaxations: the pressure relaxation time scale is much smaller than that of the velocity. Such an assumption is in fact supported by much physical evidence; see, for example, \cite{kapila2001two,bilicki1996evaluation,guillard2006numerical,zein2010}. With this assumption and the diffusion laws, the seven-equation BN model can be reduced to a five-equation one without losing the ability to model the mass diffusion process. In reducing the velocity-disequilibrium model, terms of second order in the velocity relaxation time are abandoned. The resulting model formally contains one velocity. However, this velocity is the mass-weighted mixture velocity rather than the equilibrium velocity in Kapila's model. The component velocities can be recovered by invoking the diffusion laws. The resulting model satisfies both criteria (physical admissibility and numerical consistency) proposed above. We propose numerical methods for solving the model with the fractional step method. In the numerical implementation, the model is split into five physical parts, i.e., the hydrodynamic part, the viscous part, the temperature relaxation part, the heat conduction part, and the mass diffusion part. The first part formally coincides with Kapila's original formulation; however, the velocity has a different physical meaning (as discussed in the last paragraph). Various Godunov FVM methods in the literature (for example, see \cite{Coralic2014Finite,perigaud2005compressible,kreeft2010new,Zhangchao2020}) can be used to solve the governing equations for the hydrodynamic part. In the first two parts the component temperatures are in disequilibrium. The temperature relaxation takes place on a much larger time scale and is solved separately after these parts. The diffusion processes (including the viscous dissipation, heat conduction, and mass diffusion processes) are governed by parabolic PDEs, which are solved with the local iteration method based on Chebyshev parameters \cite{Zhukov2010,zhukov2018}. The heat conduction and mass diffusion equations are solved maintaining the temperature equilibrium, i.e., assuming an instantaneous temperature relaxation. Note that finite temperature relaxation can also be considered straightforwardly. The rest of the present paper is organized as follows. In \Cref{sec:model} we derive the reduced model by performing an asymptotic analysis of the BN-type model in the limit of instantaneous mechanical relaxations. In \Cref{sec:numer_meth} we develop numerical methods for solving the proposed model. In \Cref{sec:numer_res} we present numerical results for several multicomponent problems with diffusion and apply the model and numerical methods to the laser ablation problem in the field of ICF.
\section{Model formulation} \label{sec:model} \subsection{The BN-type seven-equation model} The starting point of the following model formulation is the complete BN-type seven-equation model \cite{BAER1986861,saurel1999multiphase,petitpas2014,perigaud2005compressible}, which can be derived by using the averaging procedure of \cite{drew1983mathematical}. It reads: \begin{subequations} \label{eq:bn} \begin{align} \label{eq:bn:mass} \dudx{\alpha_k\rho_k}{t} + \nabla\dpr(\alpha_k\rho_k\vc{u}_k) = 0, \\ \label{eq:bn:mom} \dudx{\alpha_k\rho_k\vc{u}_k}{t} + \nabla\dpr\left(\alpha_k\rho_k\vc{u}_k\tpr\vc{u}_k - \alpha_k \overline{\overline{T}}_k \right) = - \overline{\overline{T}}_I \dpr \nabla {\alpha_k} + \mathcal{M}_k,\\ \label{eq:bn:en} \dudx{\alpha_k \rho_k E_k}{t} + \nabla\dpr\left( \alpha_k \rho_k E_k \vc{u}_k - \alpha_k \overline{\overline{T}}_k \dpr \vc{u}_k \right) = - \vc{u}_{I} \dpr \left( \overline{\overline{T}}_I \dpr \nabla {\alpha_k} \right) + \vc{u}_I \mathcal{M}_k - p_I \mathcal{F}_k + \mathcal{Q}_k + q_k + \mathcal{I}_k,\\ \label{eq:bn:vol} \dudx{\alpha_k }{t} + \vc{u}_I \dpr \nabla\alpha_k = \mathcal{F}_k, \end{align} \end{subequations} where the notations used are standard: $\alpha_k, \; \rho_k, \; \vc{u}_k, \; p_k, \; \overline{\overline{T}}_k, \; E_k$ are the volume fraction, phase density, velocity, pressure, stress tensor, and total energy of phase $k$. For the sake of clarity we restrict our discussion to two-phase flows, $k=1,2$. The phase density $\rho_k$ is defined as the mass per unit volume occupied by the $k$-th phase. The mixture density $\rho$ is the sum of the partial densities $\alpha_k \rho_k$, i.e., $\rho = \sum {\alpha_k \rho_k}$. The last equation \cref{eq:bn:vol} is written for only one component thanks to the saturation constraint for the volume fractions, $\sum_{k=1}^2 \alpha_k = 1$. The total energy is $E_k = e_k + \mathcal{K}_k$, where $e_k$ and $\mathcal{K}_k = \frac{1}{2}\vc{u}_k \dpr \vc{u}_k$ are the internal energy and kinetic energy, respectively. The variables with the subscript ``I'' represent the variables at interfaces, for which there are several possible definitions \cite{saurel1999multiphase,perigaud2005compressible,saurel2018diffuse}. Here we choose the following: \begin{equation} \vc{u}_I = \overline{\vc{u}} = {\sum y_k \vc{u}_k }, \quad p_I = \sum \alpha_k p_k, \quad \overline{\overline{T}}_{I} = - p_I \overline{\overline{I}} + \overline{\overline{\tau}}_I, \end{equation} where $y_k$ denotes the mass fraction $y_k = \alpha_k \rho_k / \rho$, and $\overline{\vc{u}}$ is the mass-fraction-weighted mean velocity. The interfacial stress $\overline{\overline{\tau}}_I$ is defined in such a way that the thermodynamical laws are respected. The inter-phase exchange terms include the velocity relaxation $\mathcal{M}_k$, the pressure relaxation $\mathcal{F}_k$, and the temperature relaxation $\mathcal{Q}_k$. They read: \begin{equation}\label{eq:relaxations} \begin{split} \mathcal{M}_k=\vartheta\left( \vc{u}_{k^*}-\vc{u}_k \right), \quad \mathcal{F}_k=\eta\left( {p}_{k} - {p}_{k^*} \right), \quad \mathcal{Q}_k=\varsigma\left( T_{k^*} - T_{k} \right), \end{split} \end{equation} where $k^{*}$ denotes the conjugate component of the $k$-th component, i.e., $k=1,\; k^{*} =2$ or $k=2,\; k^{*} =1$. The relaxation rates are all positive: $\vartheta > 0, \; \eta > 0, \; \varsigma > 0$.
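To make the closure \cref{eq:relaxations} concrete, the following minimal sketch (in Python; the function and variable names are ours and illustrative only, not part of any established solver) evaluates the inter-phase relaxation sources for a two-phase cell:
\begin{verbatim}
# Hedged sketch of the inter-phase relaxation sources, eq. (relaxations).
def relaxation_sources(u1, u2, p1, p2, T1, T2, theta, eta, varsigma):
    M1 = theta * (u2 - u1)       # velocity relaxation acting on phase 1
    F1 = eta * (p1 - p2)         # pressure relaxation acting on phase 1
    Q1 = varsigma * (T2 - T1)    # temperature relaxation acting on phase 1
    return M1, F1, Q1            # phase-2 terms are -M1, -F1, -Q1
\end{verbatim}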
The phase stress tensor, $\overline{\overline{T}}_k$, can be written as \begin{equation} \overline{\overline{T}}_k = - p_k \overline{\overline{I}} + \overline{\overline{\tau}}_k. \end{equation} For the viscous part we use the Newtonian approximation \begin{equation}\label{eq:newton_vis} \overline{\overline{\tau}}_k = 2\mu_k \overline{\overline{D}}_k + \left(\mu_{b,k} - \frac{2}{3}\mu_k \right) \left( \nabla \dpr \vc{u}_k \right) \overline{\overline{I}}, \end{equation} where $\mu_k > 0$ is the coefficient of shear viscosity and $\mu_{b,k} > 0$ is the coefficient of bulk viscosity. The deformation rate $\overline{\overline{D}}_k$ is \[ \overline{\overline{D}}_k = \frac{1}{2} \left[ \nabla \vc{u}_k + \left( \nabla \vc{u}_k \right)^{\text{T}} \right] = \overline{\overline{D}}_a + \overline{\overline{D}}_{wk}, \] where the average part is \[ \overline{\overline{D}}_a = \frac{1}{2} \left[ \nabla \overline{\vc{u}} + \left( \nabla \overline{\vc{u}} \right)^{\text{T}} \right], \] and the diffusion part is \[ \overline{\overline{D}}_{wk} = \frac{1}{2} \left[ \nabla \vc{w}_k + \left( \nabla \vc{w}_k \right)^{\text{T}} \right], \] with the diffusion velocity $\vc{w}_k$ defined as \begin{equation} \vc{w}_k = \vc{u}_k - \overline{\vc{u}} . \end{equation} With the definitions of $\overline{\overline{D}}_{a}$ and $\overline{\overline{D}}_{wk}$, we can further split $\overline{\overline{\tau}}_k$ into the average and diffusion parts: \begin{subequations}\label{eq:stress_disp} \begin{align} \overline{\overline{\tau}}_{ak} = 2\mu_k \overline{\overline{D}}_a + \left(\mu_{b,k} - \frac{2}{3}\mu_k \right) \left( \nabla \dpr \overline{\vc{u}} \right) \overline{\overline{I}},\\ \overline{\overline{\tau}}_{wk} = 2\mu_k \overline{\overline{D}}_{wk} + \left(\mu_{b,k} - \frac{2}{3}\mu_k \right) \left( \nabla \dpr {\vc{w}}_k \right) \overline{\overline{I}}. \end{align} \end{subequations} The heat conduction term is \begin{equation}\label{eq:q} q_k = - \nabla \dpr \vc{J}_{qk}, \end{equation} where the Fourier heat flux is \begin{equation}\label{eq:fourier_flux} \vc{J}_{qk} = - \alpha_k \lambda_k \nabla T_k. \end{equation} By performing the averaging procedure of Drew~\cite{drew1983mathematical}, one can derive the external energy source \begin{equation}\label{eq:I} \mathcal{I}_k = \alpha_k I_k, \end{equation} where $I_k$ denotes the intensity of the external heat source released in the $k$-th phase, $I_k (\vc{x},t) \geq 0$. Without the diffusion and relaxation processes, the seven-equation model is unconditionally hyperbolic with the following set of wave speeds: $u_k \pm a_k, \; u_k, \; u_I$, where $a_k$ is the sound speed \begin{equation} a_k ^2 = \left( \dudx{p_k}{\rho_k} \right)_{s_k} = \frac{\frac{p_k}{\rho_k^2} - \left( \dudx{e_k}{\rho_k} \right)_{p_k} }{\left( \dudx{e_k}{p_k} \right)_{\rho_k}} > 0. \end{equation} \begin{remark} For the sake of objectivity, the constitutive relation for the interfacial stress $\overline{\overline{\tau}}_I$ may depend on the following list of frame-invariant variables \[ \alpha, \;\; \text{D}_{k} \alpha / \text{D} t, \;\; \nabla \alpha, \;\; \mathbf{u}_{1}-\mathbf{u}_{2}, \;\; \text{D}_{2} \mathbf{u}_{1} / \text{D} t-\text{D}_{1} \mathbf{u}_{2} / \text{D} t, \;\; \overline{\overline{D}}_{1}, \; \overline{\overline{D}}_{2}, \;\; \nabla\left(\mathbf{u}_{1}-\mathbf{u}_{2}\right), \] where $\text{D}_k \cdot / \text{D} t$ is the material derivative defined in \cref{eq:mat_der}.
We postulate that $\overline{\overline{\tau}}_I$ takes the following form \begin{equation}\label{eq:tauI} \overline{\overline{\tau}}_I = \mathcal{B} \left( \vc{u}_{k}-\vc{u}_{k^*} \right) \nabla \alpha_k, \end{equation} where $\mathcal{B}>0$ is a function of the above objective variables. The term $\nabla \alpha_k$ acts as a ``Delta-function-like'' vector that picks out the diffused interface zone. We will show that this definition of $\overline{\overline{\tau}}_I$ is consistent with the second law of thermodynamics under the temperature equilibrium. In fact, the reduced model to be derived below only includes the mixture momentum equation, in which $\overline{\overline{\tau}}_I$ cancels out. In general, $\overline{\overline{\tau}}_I$ has an impact on the variation of the volume fraction in the mass diffusion process. \end{remark} \begin{remark} In \cref{eq:bn} we neglect the ``viscous pressure'' terms due to the pulsation damping of bubbles \cite{saurel2003multiphase,perigaud2005compressible}. These terms do not impact our model reduction in the limit of the instantaneous pressure relaxation. \end{remark} \subsubsection{Equations for the primitive variables} In this section, we derive equations for the primitive variables, which are used in the subsequent analysis. We introduce the material derivatives related to the phase velocity $\vc{u}_k$ and the interfacial velocity $\vc{u}_I$, \begin{equation}\label{eq:mat_der} \frac{\text{D}_g \Phi}{\text{D} t} = \dudx{\Phi}{t} + \vc{u}_g \cdot \nabla{\Phi}, \;\; g = k, I. \end{equation} We also present some thermodynamical relations to be used below: \begin{equation}\label{eq:gibbs} T_k \frac{\text{D}_k s_k}{\text{D} t} = \frac{\text{D}_k e_k}{\text{D} t} - \frac{p_k}{\rho_k^2} \frac{\text{D}_k \rho_k}{\text{D} t}, \;\;\frac{\text{D}_k e_k}{\text{D} t} = \chi_k \frac{\text{D}_k \rho_k}{\text{D} t} + \xi_k \frac{\text{D}_k p_k}{\text{D} t}, \;\; \frac{\text{D}_k p_k}{\text{D} t} = a_k^2 \frac{\text{D}_k \rho_k}{\text{D} t} + \omega_k \frac{\text{D}_k s_k}{\text{D} t}, \end{equation} where the first expression is the Gibbs relation, $s_k$ is the phase entropy, and \[\chi_k = \dudx{e_k}{\rho_k}\Big|_{p_k}, \; \xi_k = \dudx{e_k}{p_k}\Big|_{\rho_k}, \;\omega_k = \dudx{p_k}{s_k}\Big|_{\rho_k}.\] Simple manipulations of \cref{eq:gibbs} lead to $\chi_k = p_k/\rho_k^2 - \xi_k a_k^2$. The Mie-Gr{\"u}neisen coefficient $\Gamma_k$ is defined as \begin{equation}\label{eq:Gam} \Gamma_k = \frac{1}{\rho_k} \dudx{p_k}{e_k}\Big|_{\rho_k} = \frac{1}{\rho_k \xi_k}. \end{equation} With the aid of \cref{eq:Gam}, we reformulate the second relation in \cref{eq:gibbs} as follows \begin{equation}\label{eq:dedrhodp} \Gamma_k \frac{\text{D}_k e_k}{\text{D} t} = \left( \overline{\gamma}_k - \Gamma_k \right) p_k \frac{\text{D}_k v_k}{\text{D} t} + v_k \frac{\text{D}_k p_k}{\text{D} t}, \end{equation} where the specific volume $v_k = 1/\rho_k$, the adiabatic exponent $\overline{\gamma}_k = A_k / p_k$, and $A_k = \rho_k a_k^2$.
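For the ideal gas (gamma-law) EOS used in the numerical tests below, these coefficients have simple closed forms. The following sketch (illustrative only; the variable names are ours) evaluates them and numerically verifies the identity $\chi_k = p_k/\rho_k^2 - \xi_k a_k^2$:
\begin{verbatim}
# Hedged sketch: thermodynamic coefficients for an ideal gas, e = p/((g-1) rho).
gamma, rho, p = 1.4, 1.2, 1.0e5
xi  = 1.0 / ((gamma - 1.0) * rho)      # xi_k  = (de/dp)|rho
chi = -p / ((gamma - 1.0) * rho**2)    # chi_k = (de/drho)|p
a2  = gamma * p / rho                  # squared sound speed
Gam = 1.0 / (rho * xi)                 # Mie-Grueneisen coefficient = gamma - 1
assert abs(chi - (p / rho**2 - xi * a2)) < 1e-10 * abs(chi)
\end{verbatim}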
By performing a procedure similar to that in \cite{murrone2005five,zhangPHD2019}, we can deduce the equations for the primitive variables as follows: \begin{subequations} \label{eq:bn_prim} \begin{align} \alpha_{k} \rho_{k} T_{k} \frac{\mathrm{D}_{k} s_{k}}{\mathrm{D} t} = \left(\boldsymbol{u}_{I}-\boldsymbol{u}_{k}\right) \cdot \mathcal{M}_{k} + \left(p_{k}-p_{I}\right) {\mathcal{F}}_{k}+\left(p_{I}-p_{k}\right)\left(\boldsymbol{u}_{I}-\boldsymbol{u}_{k}\right) \cdot \nabla \alpha_{k} \nonumber\\ + \left( \vc{u}_k -\vc{u}_I \right) \cdot \left( \overline{\overline{\tau}}_I \cdot \nabla \alpha_k \right) + \mathcal{G}_k, \label{eq:sk}\\ \alpha_{k} \rho_{k} \frac{\mathrm{D}_{k} \boldsymbol{u}_{k}}{\mathrm{D} t} = \nabla\cdot \left( \alpha_k \overline{\overline{T}}_k \right) - \overline{\overline{T}}_I \cdot \nabla \alpha_{k} + \mathcal{M}_{k}, \label{eq:uk}\\ \frac{\mathrm{D}_{k} p_{k}}{\mathrm{D} t}= -\frac{\rho_{k} a_{Ik}^{2}}{\alpha_{k}} \mathcal{F}_{k} + \frac{\vc{u}_k - \vc{u}_I}{\alpha_k \rho_k \xi_k} \left[ \left( \overline{\overline{\tau}}_I - \xi_k \rho_k^2 a_{Ik}^2 \overline{\overline{I}} \right) \cdot \nabla\alpha_k - \mathcal{M}_k \right] + \frac{\mathcal{G}_k }{ \alpha_k \rho_k \xi_k} - \rho_k a_k^2 \nabla\cdot \vc{u}_k, \label{eq:pk}\\ \frac{\mathrm{D}_{I} \alpha_{k}}{\mathrm{D} t} = \mathcal{F}_{k}, \label{eq:alpk} \end{align} \end{subequations} where \begin{equation}\label{eq:Gk} \frac{\rho_{k} a_{Ik}^{2}}{\alpha_{k}} = \frac{\rho_{k} a_{k}^{2}}{\alpha_{k}} + \frac{p_{I}-p_{k}}{\alpha_{k} \rho_{k} \xi_{k}}, \;\; \mathcal{G}_k = \alpha_k \overline{\overline{\tau}}_k : \overline{\overline{D}}_k + \mathcal{Q}_k + q_k + \mathcal{I}_k. \end{equation} One can check that, omitting the dissipative terms, the above equations (\cref{eq:sk,eq:uk,eq:pk,eq:alpk}) reduce to those in \cite{murrone2005five}. \subsubsection{Equations for the mixture}\label{eq:mixeqns} In what follows, we derive the average balance equations by replacing the phase velocities with the mass-fraction-weighted one. \paragraph{The mixture continuity equation} \Cref{eq:bn:mass} can be rewritten as: \begin{equation}\label{eq:mass_diff} \dudx{\alpha_k\rho_k}{t} + \nabla\dpr(\alpha_k\rho_k \overline{\vc{u}}) = - \nabla\dpr \vc{J}_k, \end{equation} where the diffusion flux $\vc{J}_k$ is defined as \begin{align}\label{eq:Jk} \vc{J}_k = \alpha_k \rho_k \vc{w}_k. \end{align} Note that \begin{equation}\label{eq:sumJ} \sum \vc{J}_k = \sum\alpha_k \rho_k \vc{w}_k = 0. \end{equation} In the literature there exist simplified closure relations for the diffusion velocity $\vc{w}_k$ of binary mixtures \cite{williams1985}, for example: \begin{enumerate} \item[(1)] Fick's law \begin{equation}\label{eq:fick_law} \vc{w}_k = - D \frac{\nabla y_k}{y_k}, \end{equation} \item[(2)] the Stefan-Maxwell equation \begin{equation}\label{eq:stefan} \vc{\nabla} X_{k}=\sum_{j=1}^{2} \frac{X_{k} X_{j}}{D}\left(\vc{w}_{j}-\vc{w}_{k}\right). \end{equation} \end{enumerate} Here, $X_{k}$ is the mole fraction of component $k$. One can solve for the diffusion velocities in \cref{eq:stefan} by using \cref{eq:sumJ}.
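As a minimal illustration of the Fick closure \cref{eq:fick_law}, the sketch below (assuming a binary mixture with $0 < y_1 < 1$ on a uniform 1D grid; the function and variable names are ours) evaluates the diffusion velocities and checks the constraint \cref{eq:sumJ}:
\begin{verbatim}
import numpy as np

# Hedged sketch: binary Fick diffusion velocities, eqs. (fick_law), (sumJ).
def fick_velocities(y1, rho, D, dx):
    # assumes 0 < y1 < 1 everywhere (mixture cells), uniform spacing dx
    dy1dx = np.gradient(y1, dx)
    y2 = 1.0 - y1
    w1 = -D * dy1dx / y1
    w2 =  D * dy1dx / y2                   # since dy2/dx = -dy1/dx
    J1, J2 = rho * y1 * w1, rho * y2 * w2  # J_k = alpha_k rho_k w_k
    assert np.allclose(J1 + J2, 0.0)       # constraint (sumJ)
    return w1, w2
\end{verbatim}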
The parameter $D$ is the binary diffusion coefficient; for ideal gases, \begin{equation} D = D_{ij} = \frac{3 \overline{W} k^{0} T}{16 \rho \mu_{ij} \Omega_{ij}}, \end{equation} where $\overline{W}$ is the average molecular weight of the mixture, $k^{0}$ is the Boltzmann constant, $\mu_{ij} = M_i M_j / \left( M_i + M_j \right)$ is the reduced mass, $M_i$ and $M_j$ are the masses of the colliding molecules, and $\Omega_{ij} = \sigma_{ij} v_{ij} /4$ is the collision integral, with $v_{ij} = \sqrt{8k^{0}T/(\pi \mu_{ij})}$ the mean relative speed of the Maxwellian velocity distribution. From this equation it follows that $D \sim T^{3/2}/p$. Summing \cref{eq:mass_diff} leads to the equation for the mixture density \begin{equation} \dudx{\rho}{t} + \nabla\dpr(\rho \overline{\vc{u}}) = 0. \end{equation} \paragraph{The mixture momentum equation} Summing \cref{eq:bn:mom}, one can obtain the equation for the mixture momentum \begin{equation}\label{eq:mix_mom} \dudx{\rho \overline{\vc{u}}}{t} + \nabla\dpr\left(\rho \overline{\vc{u}} \tpr \overline{\vc{u}} - \overline{\overline{T}} \right) = - \nabla\dpr \sum \left(\vc{J}_k \tpr {\vc{w}_k} \right), \end{equation} where $\overline{\overline{T}} = \sum \alpha_k \overline{\overline{T}}_k = - \overline{P} \; \overline{\overline{I}} + \sum \alpha_k {\overline{\overline{\tau}}}_k, \;\; \overline{P} = \sum \alpha_k p_k $. We further separate the stress tensor into average and velocity-disequilibrium parts in the following way \begin{subequations}\label{eq:newton_vis_disp} \begin{align} \sum \alpha_k {\overline{\overline{\tau}}}_k = {\overline{\overline{\tau}}} + \sum \alpha_k {\overline{\overline{\tau}}}_{wk}, \label{eq:newton_vis_av}\\ {\overline{\overline{\tau}}} = \sum \alpha_k \overline{\overline{\tau}}_{ak} = 2\mu \overline{\overline{D}}_a + \left(\mu_{b} - \frac{2}{3}\mu \right) \left( \nabla \dpr \overline{\vc{u}} \right) \overline{\overline{I}}, \\ \mu = \sum \alpha_k \mu_k, \;\; \mu_b = \sum \alpha_k \mu_{b,k}. \end{align} \end{subequations} Thus, \Cref{eq:mix_mom} can be recast as follows: \begin{equation}\label{eq:mix_mom1} \dudx{\rho \overline{\vc{u}}}{t} + \nabla\dpr\left(\rho \overline{\vc{u}} \tpr \overline{\vc{u}} + \overline{P}\; \overline{\overline{I}} - \overline{\overline{\tau}} \right) = \sum \left[ \nabla\dpr \left( \alpha_k \overline{\overline{\tau}}_{wk} \right) - \nabla\dpr\left(\vc{J}_k \tpr {\vc{w}_k} \right) \right]. \end{equation} \paragraph{The mixture energy equation} The summation of \cref{eq:bn:en} leads to \begin{equation}\label{eq:mix_en} \dudx{ \rho E}{t} + \nabla\dpr\left( \rho E \overline{\vc{u}} - \overline{\overline{T}} \dpr \overline{\vc{u}} \right) = - \nabla \dpr \sum E_k \vc{J}_k + \nabla \dpr \sum \alpha_k \overline{\overline{T}}_k \dpr \vc{w}_k + \sum q_k + \sum {\mathcal{I}}_k, \end{equation} where the mixture total energy is \begin{equation}\label{eq:rhoE} \rho E = \sum \alpha_k \rho_k E_k =\rho e + \rho \frac{{|\overline{\vc{u}}}|^2}{2} + \sum \alpha_k \rho_k \frac{|\vc{w}_k |^2}{2}, \quad \rho e = \sum \alpha_k \rho_k e_k.
\end{equation} Note that \begin{equation} \sum E_k \vc{J}_k = \sum \left( e_k + \frac{ \left( \overline{\vc{u}} + \vc{w}_k \right) \dpr \left( \overline{\vc{u}} + \vc{w}_k \right) }{2} \right)\vc{J}_k = \sum e_k \vc{J}_k + \sum \vc{J}^{ww}_{k} + \sum \vc{J}^{uw}_{k}, \end{equation} \[\vc{J}^{ww}_{k} = \frac{1}{2} |\vc{w}_k|^2 \vc{J}_k, \;\; \vc{J}^{uw}_{k} = \left( \overline{\vc{u}} \dpr \vc{w}_k \right) \vc{J}_k,\] and \begin{align} \sum \alpha_k \overline{\overline{T}}_k \dpr \vc{w}_k = \sum \alpha_k \overline{\overline{\tau}}_k \dpr \vc{w}_k - \sum \frac{p_k}{\rho_k} \vc{J}_k. \label{eq:Jk_vis} \end{align} Thus, with the aid of \cref{eq:newton_vis_disp}, \cref{eq:mix_en} can be recast as \begin{equation}\label{eq:mix_en1} \dudx{ \rho E}{t} + \nabla\dpr\left( \rho E \overline{\vc{u}} + \overline{P} \; \overline{\vc{u}} - \overline{\overline{\tau}} \dpr \overline{\vc{u}} \right) = - \nabla \dpr \sum \left( \vc{J}^{h}_{k} + \vc{J}^{vis}_k + \vc{J}^{ww}_{k} + \vc{J}^{uw}_{k}\right) + \sum q_k + \sum {\mathcal{I}}_k, \end{equation} where $h_k = e_k + p_k/\rho_k$ is the phase enthalpy and \begin{equation}\label{eq:Jhk} \vc{J}^{h}_{k} = h_k \vc{J}_k \end{equation} is hereafter termed the enthalpy diffusion flux. The flux \begin{equation} \vc{J}^{vis}_k = - \alpha_k \overline{\overline{\tau}}_{wk} \dpr \overline{\vc{u}} - \alpha_k \overline{\overline{\tau}}_{ak} \dpr \vc{w}_k - \alpha_k \overline{\overline{\tau}}_{wk} \dpr \vc{w}_k \end{equation} is the viscous diffusion flux. Note that its first term comes from the term $-\overline{\overline{T}}\dpr\overline{\vc{u}}$ on the left-hand side of \cref{eq:mix_en}, and the last two terms come from the decomposition of the viscous part of \cref{eq:Jk_vis}. In summary, we have obtained the mixture balance equations for mass (\cref{eq:mass_diff}), momentum (\cref{eq:mix_mom1}), and energy (\cref{eq:mix_en1}). The left-hand sides of these equations take the same form as the single-phase NS equations. \paragraph{The mixture entropy equation} We define the mixture entropy by assuming the additivity of the phase entropies, \begin{equation} \rho s = \sum \alpha_k \rho_k s_k, \end{equation} and the mixture material derivative along the phase streamlines \begin{equation}\label{eq:mixture_mat_der} \rho \frac{\mathrm{D}_{m} \Phi}{\mathrm{D}_{m} t} = \sum \left[ \dudx{\alpha_k \rho_k \Phi_k}{t} + \nabla \dpr \left(\alpha_k \rho_k \vc{u}_k \Phi_k \right) \right] = \sum \alpha_k \rho_k \frac{\mathrm{D}_{k} \Phi_k}{\mathrm{D}_{k} t}. \end{equation} With \cref{eq:mixture_mat_der}, the mixture material derivative of the mixture entropy is \begin{equation}\label{eq:mixture_mat_der1} \rho \frac{\mathrm{D}_{m} s}{\mathrm{D}_{m} t} = \sum \alpha_k \rho_k \frac{\mathrm{D}_{k} s_k}{\mathrm{D}_{k} t} = \rho \frac{\mathrm{D}^{ex}_{m} s}{\mathrm{D}_{m} t} + \rho \frac{\mathrm{D}^{in}_{m} s}{\mathrm{D}_{m} t}, \end{equation} where $\rho {\mathrm{D}^{ex}_{m} s}/{\mathrm{D}_{m} t}$ and $\rho {\mathrm{D}^{in}_{m} s}/{\mathrm{D}_{m} t}$ represent the external entropy flux and the internal entropy production, respectively. The entropy flux is \begin{equation} \rho \frac{\mathrm{D}^{ex}_{m} s}{\mathrm{D}_{m} t} = - \sum \nabla\dpr \left( \frac{\vc{J}_{qk}}{T_k} \right). \end{equation} For an irreversible process, the entropy production should be non-negative. \subsection{Reduction of the seven-equation model} In the present work, we do not attempt to solve the complete seven-equation model, which involves a complicated wave structure and stiff relaxations.
Instead, we derive a simplified version of the seven-equation model in a manner similar to the derivation of the one-velocity one-pressure Kapila model \cite{kapila2001two}. However, Kapila's five-equation model assumes instantaneous velocity equilibrium and pressure equilibrium. The former assumption strips the model of the capability to describe the mass diffusion that is characterized by the velocity difference. To restore this ability, the velocity non-equilibrium is retained in the model presented below. In our approach, the velocity non-equilibrium is retained by assuming different time scales for the velocity relaxation and the pressure relaxation. We assume that the velocity relaxation time is $\varepsilon_{u} = \varepsilon$ and the pressure relaxation time scale is $\varepsilon_{p} \leq \mathcal{O}(\varepsilon^2)$. The corresponding relaxation rates are $\vartheta = \frac{1}{\varepsilon}$ and $\eta \geq \mathcal{O} (\frac{1}{\varepsilon ^ 2})$, respectively. This assumption is adopted on the basis of the following arguments: \begin{enumerate} \item[(1)] We are interested in problems in the presence of strong shocks, such as detonation and ICF, where the surface tension is negligible. \item[(2)] As estimated in \cite{kapila2001two} for the DDT problem, the time scales for the velocity relaxation, the pressure relaxation, and the temperature relaxation are 0.1$\mu$s, 0.03$\mu$s, and 18 ms, respectively. In this problem, the time scale for the pressure relaxation is approximately one order of magnitude smaller than that of the velocity relaxation. Moreover, a large body of physical evidence shows that in common cases $ 0 \sim \varepsilon_p < \varepsilon_u < \varepsilon_T < \varepsilon_g$ \cite{guillard2006numerical,bilicki1996evaluation,petitpas2009modelling,zein2010}, where $\varepsilon_T$ and $\varepsilon_g$ are the relaxation times for the temperature and the chemical potential, respectively. \end{enumerate} The reduction of the seven-equation model amounts to deriving a limit model when $\varepsilon \to 0$, i.e., $\vartheta \to \infty$ and $\eta \to \infty$. The difference of our approach from Kapila's consists in the following two aspects: \begin{enumerate} \item[(1)] Different time scales are assumed for the pressure relaxation and the velocity relaxation, while the same time scale is used in deriving Kapila's model \cite{murrone2005five,kapila2001two}. \item[(2)] The reduced model is an approximation of the seven-equation model obtained by retaining terms up to order $\mathcal{O}(\varepsilon)$ and abandoning smaller ones, while Kapila's model keeps only terms of order $\mathcal{O}(1)$. \end{enumerate} Such assumptions and manipulations allow velocity disequilibrium and thus make it possible to model the mass diffusion. \subsubsection{The pressure relaxation} In this section we drive the phase pressures into equilibrium with the condition $\varepsilon_p \to 0$. The primitive form of the BN model (\cref{eq:bn_prim}) can be recast in the following vector form: \begin{equation} \frac{\partial \boldsymbol{U}}{\partial t}+\sum_{d=1}^{3} \vc{A}_{d}(\vc{U}) \frac{\partial \vc{U}}{\partial x_{d}}=\frac{1}{\varepsilon _p} \vc{H}(\vc{U})+\vc{R}(\vc{U}), \end{equation} where $\vc{U}=\left[ s_{1} \;\; s_{2} \;\; \vc{u}_{1} \;\; \vc{u}_{2} \;\; p_{1} \;\; p_{2} \;\; \alpha_{1} \right]$, $\vc{H}(\vc{U})$ is a vector containing the pressure relaxation terms, $\vc{R}(\vc{U})$ contains the right-hand-side velocity relaxation and diffusion terms, and $d$ is the dimension index.
Without loss of generality, we consider the following one-dimensional split form: \begin{equation}\label{eq:1D_vecForm} \frac{\partial \vc{U}}{\partial t}=\vc{L}(\vc{U})+\frac{1}{\varepsilon_p} \vc{H}(\vc{U}), \quad \vc{L}(\vc{U})=-\vc{A}(\vc{U}) \frac{\partial \vc{U}}{\partial x}+\vc{R}(\vc{U}). \end{equation} Let us assume the following asymptotic expansion of the solution $\vc{U}$ in the vicinity of the equilibrium one $\vc{U}^{(0)}$: \begin{equation}\label{eq:asympP} \vc{U}=\vc{U}^{(0)}+ \varepsilon_p \vc{U}^{(1)} + \mathcal{O}\left(\varepsilon_p^{2}\right), \end{equation} where $ \varepsilon_p \vc{U}^{(1)}$ represents a fluctuation of order $\varepsilon_p$ in the neighbourhood of $\vc{U}^{(0)}$. The functions $\vc{L}(\vc{U})$ and $\vc{H}(\vc{U})$ are assumed regular enough to allow a Taylor series expansion, with the aid of which \cref{eq:1D_vecForm} becomes \begin{equation}\label{eq:asymP_expansion} \left[\frac{\partial \vc{U}^{(0)}}{\partial t}-\vc{L}\left(\vc{U}^{(0)}\right)-\frac{\partial \vc{H}(\vc{U})}{\partial \vc{U}}\left(\vc{U}^{(0)}\right) \cdot \vc{U}^{(1)}\right] -\frac{1}{\varepsilon_p} \vc{H}\left(\vc{U}^{(0)}\right)+\mathcal{O}(\varepsilon_p)=0. \end{equation} At order $\mathcal{O}(1/\varepsilon_p)$, we have \begin{equation}\label{eq:orderep} \vc{H}\left(\vc{U}^{(0)}\right)=0, \end{equation} which gives \begin{equation}\label{eq:epP} p_1^{(0)} = p_2^{(0)}. \end{equation} Neglecting terms of order $\mathcal{O}(\varepsilon_p)$ and smaller, we obtain \begin{equation}\label{eq:order1} \frac{\partial \vc{U}^{(0)}}{\partial t}-\vc{L}\left(\vc{U}^{(0)}\right)-\frac{\partial \vc{H}(\vc{U})}{\partial \vc{U}}\left(\vc{U}^{(0)}\right) \cdot \vc{U}^{(1)}=0. \end{equation} Combination of \cref{eq:orderep} and \cref{eq:order1} leads to \begin{subequations} \label{eq:bn_prim_1} \begin{align} \alpha_{k} \rho_{k} T_{k} \frac{\mathrm{D}_{k} s_{k}}{\mathrm{D} t} = \left(\vc{u}_{I}-\vc{u}_{k}\right) \cdot \mathcal{M}_{k} + \left( \vc{u}_k -\vc{u}_I \right) \cdot \left( \overline{\overline{\tau}}_I \cdot \nabla \alpha_k \right) + \mathcal{G}_k, \label{eq:sk_1}\\ \alpha_{k} \rho_{k} \frac{\mathrm{D}_{k} \vc{u}_{k}}{\mathrm{D} t} = \nabla\cdot \left( \alpha_k \overline{\overline{T}}_k \right) - \overline{\overline{T}}_I \cdot \nabla \alpha_{k} + \mathcal{M}_{k}, \label{eq:uk_1}\\ \frac{\mathrm{D}_{k} p_{k}}{\mathrm{D} t}= -\frac{\rho_{k} a_{Ik}^{2}}{\alpha_{k}} \mathcal{F}_{k}^{(1)} + \mathcal{W}_k + \frac{\mathcal{G}_k }{ \alpha_k \rho_k \xi_k} - \rho_k a_k^2 \nabla\cdot \vc{u}_k, \label{eq:pk_1}\\ \frac{\mathrm{D}_{I} \alpha_{k}}{\mathrm{D} t} = \mathcal{F}_{k}^{(1)}, \label{eq:alpk_1} \end{align} \end{subequations} where \begin{equation}\label{eq:Wk} \mathcal{W}_k = \frac{\vc{u}_k - \vc{u}_I}{\alpha_k \rho_k \xi_k} \left[ \left( \overline{\overline{\tau}}_I - \xi_k \rho_k^2 a_{Ik}^2 \overline{\overline{I}} \right) \cdot \nabla\alpha_k - \mathcal{M}_k \right]. \end{equation} For simplicity, the superscript ``(0)'' over the variables in \cref{eq:bn_prim_1,eq:Wk} is omitted. The terms including $\vc{U}^{(1)}$ are lumped into $\mathcal{F}_{k}^{(1)}$. Summing \cref{eq:pk_1} over $k$, one can obtain \begin{equation} \frac{\text{D}_{m} p}{\text{D}_{m} t} + A \nabla \dpr \vc{u} = A \sum_{k} \frac{\Gamma_k \mathcal{G}_k}{A_k} + A \sum_{k} \frac{\alpha_k \mathcal{W}_k - \alpha_k \vc{w}_k \dpr \nabla p}{A_k} - A \sum \alpha_k \nabla\dpr \vc{w}_k, \end{equation} where $1/A = \sum \alpha_k /A_k$.
With \cref{eq:pk_1,eq:epP} and $\mathcal{F}_{1}^{(1)} + \mathcal{F}_{2}^{(1)} = 0$, one can solve \begin{subequations} \begin{align}\label{eq:Fk0} \mathcal{F}_{1}^{(1)} &= \mathcal{F}_{1,Kap}^{(1)} + \mathcal{F}_{1,Diff}^{(1)}, \\ \mathcal{F}_{1,Kap}^{(1)} &= \alpha_1 \alpha_2 \frac{ (A_2 - A_1) \nabla \dpr \vc{u} - (\mathcal{G}_{a2} \Gamma_2 / \alpha_2 - \mathcal{G}_{a1} \Gamma_1 / \alpha_1 ) }{A_1 \alpha_2 + A_2 \alpha_1}, \\ \mathcal{F}_{1,Diff}^{(1)} &= \alpha_1 \alpha_2 \frac{ (A_2 \nabla \dpr \vc{w}_2 - A_1 \nabla \dpr \vc{w}_1) + (\vc{w}_2 - \vc{w}_1)\dpr \nabla p + ( \mathcal{W}_2 - \mathcal{W}_1) }{A_1 \alpha_2 + A_2 \alpha_1}\\ & + \alpha_1 \alpha_2 \frac{ (\alpha_1 \overline{\overline{\tau}}_{1} : \overline{\overline{D}}_1) \Gamma_1 / \alpha_1 - (\alpha_2 \overline{\overline{\tau}}_{2} : \overline{\overline{D}}_2) \Gamma_2 / \alpha_2 }{A_1 \alpha_2 + A_2 \alpha_1} - \mathcal{F}_{1,vis}^{(1)}. \end{align} \end{subequations} Note that the term $\mathcal{F}_{1,Kap}^{(1)}$ coincides with the corresponding result of Kapila's model (in the absence of heat conduction and viscosity); it represents the volume fraction variation due to the compaction effect. The term $\mathcal{G}_{ak}$ is defined by replacing the component viscous dissipation in \cref{eq:Gk} with the average one: \[\mathcal{G}_{ak} = \alpha_k \overline{\overline{\tau}}_{ak} : \overline{\overline{D}}_a + \mathcal{Q}_k + q_k + \mathcal{I}_k.\] The term $\mathcal{F}_{1,Diff}^{(1)}$ is new and arises from the velocity non-equilibrium effect (i.e., the mass diffusion process); all velocity-disequilibrium terms are included in $\mathcal{F}_{1,Diff}^{(1)}$. For the definition of $\mathcal{F}_{1,vis}^{(1)}$, see \cref{eq:F1vis}. The term $\mathcal{F}_{1,Kap}^{(1)}$ can be further split into five parts according to the contribution of each physical process \begin{equation} \mathcal{F}_{1,Kap}^{(1)} = \mathcal{F}_{1,hd}^{(1)} + \mathcal{F}_{1,vis}^{(1)} + \mathcal{F}_{1,ht}^{(1)} + \mathcal{F}_{1,hc}^{(1)} + \mathcal{F}_{1,ex}^{(1)}, \end{equation} where the terms due to the hydrodynamic process, the viscous dissipation, the inter-phase heat transfer, the heat conduction, and the external heat source are as follows: \begin{subequations} \begin{align} \mathcal{F}_{1,hd}^{(1)} = \alpha_1 \alpha_2 \frac{ (A_2 - A_1) \nabla \dpr \vc{u} }{A_1 \alpha_2 + A_2 \alpha_1},\\\label{eq:F1vis} \mathcal{F}_{1,vis}^{(1)} = \alpha_1 \alpha_2 \frac{ (\alpha_1 \overline{\overline{\tau}}_{a1} : \overline{\overline{D}}_a) \Gamma_1 / \alpha_1 - (\alpha_2 \overline{\overline{\tau}}_{a2} : \overline{\overline{D}}_a) \Gamma_2 / \alpha_2 }{A_1 \alpha_2 + A_2 \alpha_1}, \\ \mathcal{F}_{1,ht}^{(1)} = \alpha_1 \alpha_2 \frac{ \mathcal{Q}_1 \Gamma_1 / \alpha_1 - \mathcal{Q}_2 \Gamma_2 / \alpha_2 }{A_1 \alpha_2 + A_2 \alpha_1}, \\ \mathcal{F}_{1,hc}^{(1)} = \alpha_1 \alpha_2 \frac{ q_1 \Gamma_1 / \alpha_1 - q_2 \Gamma_2 / \alpha_2 }{A_1 \alpha_2 + A_2 \alpha_1},\\ \mathcal{F}_{1,ex}^{(1)} = \alpha_1 \alpha_2 \frac{ \mathcal{I}_1 \Gamma_1 / \alpha_1 - \mathcal{I}_2 \Gamma_2 / \alpha_2 }{A_1 \alpha_2 + A_2 \alpha_1}. \end{align} \end{subequations} \subsubsection{The velocity relaxation} We continue to perform the asymptotic analysis of \cref{eq:bn_prim_1} with respect to the velocity relaxation time $\varepsilon_u = \varepsilon \to 0$.
In a similar way we can express the solution in the following asymptotic expansion: \begin{subequations} \begin{align}\label{eq:asympU} {\vc{U}}^{\prime} &= {\vc{U}}^{\prime(0)} + \varepsilon_u {\vc{U}}^{\prime(1)} + \mathcal{O}\left(\varepsilon_u^{2}\right), \end{align} \end{subequations} where ${\vc{U}}^{\prime}$ is the state vector of the pressure-equilibrium reduced system \cref{eq:bn_prim_1}, $\vc{U}^{\prime} = \left[ s_{1}^{\prime} \;\; s_{2}^{\prime} \;\; \vc{u}_{1}^{\prime} \;\; \vc{u}_{2}^{\prime} \;\; p^{\prime} \;\; \alpha_{1}^{\prime}\right]$. Similar to the analysis in the previous section, one can deduce \begin{equation} \vc{u}_{1}^{\prime(0)} = \vc{u}_{2}^{\prime(0)} = \vc{u}^{\prime(0)}. \end{equation} Then we have \begin{equation} \overline{\vc{u}}^{\prime} = \vc{u}^{\prime(0)} + \varepsilon_{u} \left( y_1 \vc{u}_{1}^{\prime(1)} + y_2 \vc{u}_{2}^{\prime(1)}\right) + \mathcal{O}(\varepsilon_u^2), \end{equation} \begin{equation}\label{eq:wk_ep} \vc{w}_k^{\prime} = \vc{u}_{k}^{\prime} - \overline{\vc{u}}^{\prime} = \varepsilon_u y_{k*} (\vc{u}_{k}^{\prime(1)} - \vc{u}_{k*}^{\prime(1)}) + \mathcal{O} (\varepsilon_u^2). \end{equation} From \cref{eq:wk_ep}, we deduce \begin{equation} \left| \vc{w}_k^{\prime} \right|^2 = \mathcal{O}(\varepsilon_u^2) = \mathcal{O}(\varepsilon_p). \end{equation} At this stage the mixture equations derived in \Cref{eq:mixeqns} still hold. In the reduced model we only retain terms up to order $\mathcal{O}(\varepsilon_u)$. To be consistent with \cref{eq:asymP_expansion}, terms of order $\left| \vc{w}_k \right|^2$ should be abandoned in the reduction. Thus, the right-hand sides of \cref{eq:mix_mom,eq:mix_en1} can be simplified as follows: \begin{equation} \nabla\dpr\left(\vc{J}_k \tpr {\vc{w}_k} \right) \approx \vc{0}, \end{equation} \begin{equation} \vc{J}^{ww}_{k} \approx \vc{0}, \;\; \vc{J}^{uw}_{k} \approx \vc{0}. \end{equation} The definition of the mixture total energy \cref{eq:rhoE} is reduced to \begin{equation} \rho E = \sum \alpha_k \rho_k E_k \approx \rho e + \frac{1}{2} \rho \, {\overline{\vc{u}}} \dpr \overline{\vc{u}}. \end{equation} Moreover, $\vc{J}^{vis}_k$ becomes \begin{equation}\label{eq:Jk_vis1} \vc{J}^{vis}_k \approx - \alpha_k \overline{\overline{\tau}}_{wk} \dpr \overline{\vc{u}} - \alpha_k \overline{\overline{\tau}}_{ak} \dpr \vc{w}_k. \end{equation} \subsubsection{The complete model} Combining \cref{eq:mass_diff,eq:mix_mom,eq:mix_en,eq:alpk_1}, we summarize the final model in the limit $\varepsilon \to 0$ as follows: \begin{subequations}\label{eq:final_model} \begin{align} \dudx{\alpha_k\rho_k}{t} + \nabla\dpr(\alpha_k\rho_k \overline{\vc{u}}) = - \nabla\dpr \vc{J}_k, \\ \dudx{\rho \overline{\vc{u}}}{t} + \nabla\dpr\left(\rho \overline{\vc{u}} \tpr \overline{\vc{u}} + p \overline{\overline{I}} - \overline{\overline{\tau}} \right) = \nabla \dpr \sum \alpha_k \overline{\overline{\tau}}_{wk},\\ \dudx{ \rho E}{t} + \nabla\dpr\left( \rho E \overline{\vc{u}} + p \overline{\vc{u}} - \overline{\overline{\tau}} \dpr \overline{\vc{u}} \right) = - \nabla \dpr \sum \left( \vc{J}^{h}_{k} + \vc{J}^{vis}_k \right) + \sum q_k + \sum {\mathcal{I}}_k,\\ \dudx{\alpha_k}{t} + \overline{\vc{u}} \dpr \nabla\alpha_k = \mathcal{F}_{k}^{(1)}, \end{align} \end{subequations} where the terms $\vc{J}_k$, $\vc{J}^{h}_{k}$, $\vc{J}^{vis}_k$, $\mathcal{F}_{k}^{(1)}$ are defined in \cref{eq:Jk}, \cref{eq:Jhk}, \cref{eq:Jk_vis1}, \cref{eq:Fk0}, respectively.
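To make the leading-order closure concrete, the following sketch (illustrative only; the function name is ours) evaluates the hydrodynamic contribution $\mathcal{F}_{1,hd}^{(1)}$ to the volume fraction source for a given cell state; as discussed in \Cref{sec:numer_meth}, this is the only contribution retained at the hydrodynamic step:
\begin{verbatim}
# Hedged sketch: hydrodynamic (compaction) part of the volume fraction
# source, F_{1,hd}^{(1)} = alpha1*alpha2*(A2 - A1)*div(u) / (A1*alpha2 + A2*alpha1).
def F1_hd(alpha1, rho1, rho2, a1, a2, div_u):
    alpha2 = 1.0 - alpha1
    A1, A2 = rho1 * a1**2, rho2 * a2**2    # A_k = rho_k a_k^2
    return alpha1 * alpha2 * (A2 - A1) * div_u / (A1 * alpha2 + A2 * alpha1)
\end{verbatim}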
\begin{remark} It appears that the RHS (right-hand side) term of the volume fraction equation, $\mathcal{F}_{k}^{(1)}$, is very complicated in comparison with that of Kapila's one-velocity model. However, in the scenario of interest here, where mass diffusion proceeds under temperature equilibrium, it can be significantly simplified, as we demonstrate below. \end{remark} \begin{remark} In the above model, the mixture stress tensor is a volume-fraction-weighted average of the component stress tensors. It depends on the diffusion velocity $\vc{w}_k$, which is different from the formulation of \cite{Cook2009Enthalpy}. Our formulation is consistent with the analysis of \cite{GEURST1986455,gouin2008dissipative}. \end{remark} \subsubsection{Thermodynamical consistency} \begin{proposition} The reduced model satisfies the entropy condition \begin{equation} \rho {\mathrm{D}^{in}_{m} s}/{\mathrm{D}_{m} t} \geq 0. \end{equation} \end{proposition} \begin{proof} Since no terms of order $\mathcal{O}\left( {\varepsilon_u}^2 \right)$ participate in \cref{eq:sk_1}, it still holds after the velocity relaxation. The entropy production can then be written as follows: \begin{equation} \rho {\mathrm{D}^{in}_{m} s}/{\mathrm{D}_{m} t} = \sum \frac{\mathcal{G}_k}{T_k} + \sum \nabla\dpr \left( \frac{\vc{J}_{qk}}{T_k} \right) + \sum \frac{1}{T_k} \left( \overline{\vc{u}}-\vc{u}_{k}\right) \cdot \mathcal{M}_{k} + \sum \frac{1}{T_k} \left( \vc{u}_{k} - \overline{\vc{u}} \right) \cdot \overline{\overline{\tau}}_I \cdot \nabla \alpha_k. \end{equation} By using \cref{eq:relaxations,eq:tauI,eq:newton_vis,eq:fourier_flux}, one can prove after some algebraic manipulations, omitted here, that the right-hand side is non-negative. \end{proof} \section{Numerical method} \label{sec:numer_meth} The model (\ref{eq:final_model}) can be split into five distinct physical processes: the inviscid hydrodynamic process, the viscous process, the temperature relaxation (inter-phase heat transfer) process, the heat conduction process, and the mass diffusion process. The splitting and solution procedures are performed on the basis of the following physical concerns and assumptions. First, the pressure relaxation takes place much faster than the thermal processes. Second, heat conduction and mass diffusion proceed under temperature and pressure equilibrium. In the first four steps, only the mass-fraction-averaged velocity is involved; the velocity disequilibrium that leads to the mass diffusion appears only in the mass diffusion process. The last two stages are accompanied by inter-phase heat transfer to maintain the temperature equilibrium.
We write the split processes as follows: \begin{enumerate} \item[(a)] The inviscid hydrodynamic process \begin{subequations}\label{eq:HD} \begin{align} \dudx{\alpha_k\rho_k}{t} + \nabla\dpr(\alpha_k\rho_k \overline{\vc{u}}) = 0, \\ \dudx{\rho \overline{\vc{u}}}{t} + \nabla\dpr\left(\rho \overline{\vc{u}} \tpr \overline{\vc{u}} + p\overline{\overline{I}} \right) = 0,\\ \dudx{ \rho E}{t} + \nabla\dpr\left( \rho E \overline{\vc{u}} + p \overline{\vc{u}} \right) = 0,\\ \dudx{\alpha_k}{t} + \overline{\vc{u}} \dpr \nabla\alpha_k = \mathcal{F}_{k,hd}^{(1)}, \end{align} \end{subequations} \item[(b)] The viscous process \begin{subequations}\label{eq:VIS} \begin{align} \dudx{\alpha_k\rho_k}{t} = 0, \label{eq:VIS_mk}\\ \dudx{\rho \overline{\vc{u}}}{t} = \nabla\dpr\left( \overline{\overline{\tau}} \right), \label{eq:VIS_mom}\\ \dudx{ \rho E}{t} = \nabla\dpr\left( \overline{\overline{\tau}} \dpr \overline{\vc{u}} \right), \label{eq:VIS_en}\\ \dudx{\alpha_k}{t} = \mathcal{F}_{k,vis}^{(1)}, \end{align} \end{subequations} \item[(c)] The temperature relaxation (heat transfer) process \begin{subequations}\label{eq:HT} \begin{align} \dudx{\alpha_k\rho_k}{t} = 0, \\ \dudx{\rho \overline{\vc{u}}}{t} = 0,\\ \dudx{ \rho E}{t} = 0,\\ \dudx{\alpha_k}{t} = \mathcal{F}_{k,ht}^{(1)}, \end{align} \end{subequations} \item[(d)] The heat conduction process \begin{subequations}\label{eq:HC} \begin{align} \dudx{\alpha_k\rho_k}{t} = 0, \\ \dudx{\rho \overline{\vc{u}}}{t} = 0,\\ \dudx{ \rho E}{t} = \sum q_k + \sum \mathcal{Q}_k^{HC},\\ \dudx{\alpha_k}{t} = \mathcal{F}_{k,hc}^{(1)} + \mathcal{F}_{k,hcht}^{(1)},\label{eq:HC_alp} \end{align} \end{subequations} where the term $\mathcal{Q}_k^{HC}$ represents the heat transfer between the two components in the course of heat conduction, which drives the phase temperatures towards equilibrium. Although this heat transfer does not impact the mixture energy equation ($\sum \mathcal{Q}_k^{HC} = 0$), it leads to the variation of the volume fraction through the term $\mathcal{F}_{k,hcht}^{(1)}$. \item[(e)] The mass diffusion process \begin{subequations}\label{eq:MD} \begin{align} \dudx{\alpha_k\rho_k}{t} = - \nabla\dpr \vc{J}_k, \label{eq:MD_par_mass} \\ \dudx{\rho \overline{\vc{u}}}{t} = \nabla \dpr \sum \alpha_k \overline{\overline{\tau}}_{wk}, \label{eq:MD_mom}\\ \dudx{ \rho E}{t} = - \nabla \dpr \sum \vc{J}^{h}_{k} + \nabla \dpr \sum \alpha_k \overline{\overline{\tau}}_{wk} \dpr \overline{\vc{u}}+ \nabla \dpr \sum \alpha_k \overline{\overline{\tau}}_{ak} \dpr {\vc{w}}_k + \sum \mathcal{Q}_{k}^{MD}, \label{eq:MD_en}\\ \dudx{\alpha_k}{t} = \mathcal{F}_{k,Diff}^{(1)} + \mathcal{F}_{k,mdht}^{(1)}, \label{eq:MD_vol} \end{align} \end{subequations} \end{enumerate} where the term $\mathcal{Q}_{k}^{MD}$ is the heat transfer between the components in the process of mass diffusion, with $\sum \mathcal{Q}_k^{MD} = 0$; $\mathcal{F}_{k,mdht}^{(1)}$ is the volume fraction variation caused by the heat transfer $\mathcal{Q}_{k}^{MD}$. For the solution of the model (\ref{eq:final_model}), we implement the fractional step method, i.e., the split governing equations for the physical processes are solved one by one in order; a sketch of the resulting step sequence is given below. The solution obtained at each step serves as the initial condition for the next step. In the numerical implementation, non-linear parabolic PDEs for the velocity, the temperature, and the mass fraction have to be solved at the viscous step, the heat conduction step, and the mass diffusion step, respectively.
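The overall step sequence can be sketched as follows (Python pseudocode; every function name is a placeholder for a solver of the corresponding subsystem \cref{eq:HD}--\cref{eq:MD}, not an existing library routine):
\begin{verbatim}
# Hedged sketch of the fractional step loop; each sub-step starts from the
# output of the previous one. All function names are hypothetical.
def advance(state, dt):
    state = hydrodynamic_step(state, dt)      # eq. (HD): Godunov/HLLC update
    state = viscous_step(state, dt)           # eq. (VIS): parabolic solve
    state = temperature_relaxation(state)     # eq. (HT): algebraic update
    state = heat_conduction_step(state, dt)   # eq. (HC): parabolic solve
    state = mass_diffusion_step(state, dt)    # eq. (MD): parabolic solve
    return state
\end{verbatim}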
For the solution of these parabolic equations, we implement an efficient explicit local iteration method (LIM), which is described in \Cref{subsec:numer_met_para}. \subsection{Hydrodynamic part} The hydrodynamic part (i.e., \cref{eq:HD}) in fact coincides with Kapila's original model, whose jump conditions, Riemann invariants, and numerical solution have been studied extensively in the literature \cite{murrone2005five,kapila2001two}. It is established that one should solve the non-conservative advection equation for the volume fraction in DIM for preserving the pressure-velocity equilibrium. Most attempts to use a conservative reformulation with the aid of mass conservation fail, as summarized in \cite{Abgrall2001,abgrall1996prevent}. To implement the Godunov method, we reformulate the volume fraction equation as follows: \begin{equation}\label{eq:reduced_five_hyper:vol1} \frac{\partial \alpha_{1}}{\partial t}+ \nabla \cdot \left( \alpha_{1} \vc{u} \right) = \alpha_1 \nabla\cdot \vc{u} + \mathcal{F}_{k,hd}^{(1)} = \frac{A}{A_1}\alpha_1 \nabla\cdot \vc{u}. \end{equation} The hydrodynamic subsystem can be written in the vector form as follows: \begin{equation}\label{eq:reduced_five_hyper_conv} \dudx{\vc{U}}{t} + \nabla\dpr \vc{F}\left( \vc{U}\right) = \vc{S}\left( \vc{U} \right) \nabla\dpr \vc{u}, \end{equation} where \[ \vc{U} = \left[ \alpha_1 \rho_1 \;\; \alpha_2 \rho_2 \;\; \rho u \;\; \rho v \;\; \rho E \;\; \alpha_1 \right]^{\text{T}}, \quad \vc{F}\left(\vc{U}\right) = u \vc{U} + p \vc{D}, \] \[ \vc{D}\left(\vc{U}\right) = \left[ 0 \;\; 0 \;\; 1 \;\; 0 \;\; u \;\; 0 \right]^{\text{T}}, \quad \vc{S}\left(\vc{U}\right) = \left[ 0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; \frac{A}{A_1}\alpha_1 \right]^{\text{T}}. \] We use the Godunov method with the HLLC approximate Riemann solver \cite{Toro2009Riemann} to evaluate the numerical flux of the conservative part of \cref{eq:reduced_five_hyper_conv} (i.e., temporarily omitting the right-hand side). High-order accuracy is achieved by using the fifth-order WENO scheme \cite{Coralic2014Finite,Johnsen2006Implementation,JIANG1996202} for the spatial reconstruction of the local characteristic variables on cell faces, or the MUSCL scheme \cite{Toro2009Riemann,leveque2002finite} for the spatial reconstruction of the primitive physical variables. The two-stage Heun method (i.e., the modified Euler method) is used for the time integration. The non-conservative term $\vc{S}\left( \vc{U} \right) \nabla\dpr \vc{u}$ is calculated as follows \begin{equation} \frac{1}{V_{i j k}} \int_{V_{i j k}} \frac{A}{A_1}\alpha_1 \nabla \cdot \vc{u} \mathrm{d} V \approx \frac{1}{V_{i j k}} \left( \frac{A}{A_1}\alpha_1 \right)_{i j k} \int_{{\sigma}_{i j k}} \vc{u} \cdot \vc{n} \mathrm{d} {\sigma}, \end{equation} where the subscript $_{ijk}$ denotes the index of the considered cell, and ${V_{i j k}}$, ${{\sigma}_{i j k}}$, and $\vc{n}$ are the cell volume, its surface, and the surface normal, respectively. The variables $A, \; A_1, \; \alpha_1$ are taken to be the cell-averaged values, as in \cite{Tiwari2013A,Johnsen2006Implementation,Coralic2014Finite}. \subsection{Viscous part} From \cref{eq:VIS_mk} it can be seen that $m_k = \alpha_k \rho_k$ does not vary at this stage, nor does the mixture density $\rho = \sum m_k$ or the mass fraction $y_k$.
The momentum equation (\ref{eq:VIS_mom}) and the energy equation (\ref{eq:VIS_en}) can be rewritten in the following form \begin{subequations} \begin{align} \rho \dudx{ \overline{\vc{u}}}{t} = \nabla\dpr \overline{\overline{\tau}}, \label{eq:vis_mom}\\ \rho \dudx{ E}{t} = \nabla \dpr \left( \overline{\overline{\tau}} \dpr \overline{\vc{u}} \right). \label{eq:vis_en} \end{align} \end{subequations} \Cref{eq:vis_mom} forms a parabolic PDE set, which reduces to the following form in 1D: \begin{equation}\label{eq:viscous_para_pde} \rho \frac{\partial \overline{u}}{\partial t}=\frac{\partial}{\partial x}\left(\frac{4}{3} \mu \frac{\partial \overline{u}}{\partial x}\right), \end{equation} where the mixture dynamic viscosity $\mu=\sum \alpha_k \mu_k$. In general, the parabolic PDE set is non-linear due to the dependence of the coefficients on the unknowns. For some application scenarios, the phase viscosity depends on the phase density $\rho_k$, the pressure $p$, and the temperature $T_k$, i.e., $\mu_k = \mu_k (\rho_k, p, T_k)$, and the mixture viscosity $\mu = \mu (\alpha_k, \rho_k, p, T_k)$. The temperature is affected by the viscous terms, while the viscosity itself varies with the temperature and the pressure. Such non-linearity issues are handled by using the method of iterations, where the coefficients are frozen in each iteration. By doing so, \cref{eq:viscous_para_pde} represents a linear PDE in each iteration. The linearised parabolic PDE is solved with the LIM algorithm \cite{Zhukov2010}. The numerical methods for solving such parabolic equations are summarized in \Cref{subsec:numer_met_para}. \subsection{Temperature relaxation part}\label{subsec:TR} Simple algebraic manipulations of \cref{eq:HT} give \begin{equation}\label{eq:mkdek} m_k = m_k^{(0)} = const, \;\; {\overline{\vc{u}}} ={\overline{\vc{u}}}^{(0)} = const, \;\; \rho E = (\rho E)^{(0)} = const, \end{equation} where the superscript ``${(0)}$'' represents the variables at the beginning of the current stage. Combination of the first three equations in \cref{eq:HT} leads to \begin{equation}\label{eq:dedt_HT} \dudx{e}{t} = 0. \end{equation} The reduced model is in pressure equilibrium, which means: \begin{equation}\label{eq:pres_eq} p_1\left( T_1, \rho_1 \right) = p_2\left( T_2, \rho_2 \right) = p. \end{equation} The saturation condition for the volume fractions leads to \begin{equation}\label{eq:vol_saturation} \frac{m_1}{\rho_1} + \frac{m_2}{\rho_2} = 1. \end{equation} With \cref{eq:pres_eq,eq:vol_saturation}, the phase density can be expressed as \begin{equation}\label{eq:rhok_fun} \rho_k = \rho_k \left(m_1, m_2, T_1, T_2 \right). \end{equation} Further, we obtain \begin{equation}\label{eq:ek_mk_Tk} e_k = e_k \left( \rho_k, T_k \right) = e_k \left(m_1, m_2, T_1, T_2 \right), \end{equation} and \begin{equation}\label{eq:e_fun} e = \sum y_k e_k = \sum \frac{m_k}{m_1 + m_2} e_k \left(m_1, m_2, T_1, T_2 \right) = e \left(m_1, m_2, T_1, T_2 \right). \end{equation} Combination of \cref{eq:mkdek,eq:e_fun,eq:dedt_HT} gives \begin{align}\label{eq:tr_eq1} \mathcal{A}_{1}\dudx{T_1}{t} + \mathcal{A}_{2} \dudx{T_2}{t} = 0, \end{align} where \[\mathcal{A}_{1} = \dudx{e}{T_1}, \;\; \mathcal{A}_{2} = \dudx{e}{T_2}.\] The time derivative is approximated as \begin{equation}\label{eq:dTk_dis} \dudx{T_k}{t} = \frac{T_k^{\prime} - T_k^{(0)}}{\Delta t}, \end{equation} where here and below the superscripts ``$(0)$'' and ``$\prime$'' represent the variables at the beginning and the end of the current stage, respectively.
We assume the heat transfer is strong enough to reach temperature equilibrium by the end of the current time step; thus, we have \begin{equation}\label{eq:Teq} T_1^{\prime} = T_2^{\prime} = T^{\prime}. \end{equation} By using \cref{eq:tr_eq1,eq:dTk_dis,eq:Teq} one can obtain: \begin{equation}\label{eq:tr_temp_av} T^{\prime} = \frac{\mathcal{A}_1 T_1^{(0)} + \mathcal{A}_2 T_2^{(0)}}{\mathcal{A}_1 + \mathcal{A}_2}. \end{equation} Having $T^{\prime}$, we can solve for $\rho_k^{\prime}$ with \cref{eq:rhok_fun}, and then for $p^{\prime}$ with \cref{eq:pres_eq}. Since the partial density does not vary, i.e., $m_k^{\prime} = m_k^{(0)}$, the volume fractions can be evaluated as $\alpha_k^{\prime} = m_k^{\prime}/\rho_k^{\prime}$. In this way, we determine the temperature-relaxed state in each cell. \begin{remark} The above manipulations for the temperature relaxation are based on the infinite relaxation rate assumption. In this case the temperature relaxation term $\mathcal{Q}_k$ does not appear explicitly. To deal with finite temperature relaxation, the governing equation for the internal energy of each phase is useful. We can obtain these equations by following a procedure similar to the derivation of \cref{eq:dekdt_hc} in the next subsection: \begin{subequations} \begin{align} \alpha_1 \rho_1 \dudx{e_1}{t} = {\mathcal{Q}}_1 - p\dudx{\alpha_1}{t},\\ \alpha_2 \rho_2 \dudx{e_2}{t} = {\mathcal{Q}}_2 - p\dudx{\alpha_2}{t}. \end{align} \end{subequations} The temperature relaxation term $\mathcal{Q}_k$ is prescribed according to specific physical laws. \end{remark} \subsection{Heat conduction part}\label{subsec:HC} The procedure for the heat conduction is fully analogous to that of the temperature relaxation. From \cref{eq:HC}, one can deduce \begin{equation}\label{eq:dmkdt0} \dudx{m_k}{t} = 0, \;\; \dudx{y_k}{t} = 0, \;\; \dudx{\overline{\vc{u}}}{t} = 0. \end{equation} Invoking equation (\ref{eq:e_fun}), one can write the equation for the mixture internal energy as \begin{equation}\label{eq:e_hc} \rho \dudx{e}{t} = \rho \left( \mathcal{A}_{1}\dudx{T_1}{t} + \mathcal{A}_{2} \dudx{T_2}{t} \right) = \sum_k q_k + \sum_k {\mathcal{Q}}_k^{HC}. \end{equation} By using the definition of the mixture internal energy, $e = \sum_k y_k e_k$, from \cref{eq:e_hc} one can deduce \begin{equation}\label{eq:diff_e} \alpha_1 \rho_1 \dudx{e_1}{t} + \alpha_2 \rho_2 \dudx{e_2}{t} = q_1 + q_2 + {\mathcal{Q}}_1^{HC} + {\mathcal{Q}}_2^{HC}. \end{equation} Aided by \cref{eq:dedrhodp}, differentiation of \cref{eq:pres_eq} yields \begin{equation}\label{eq:diff_pres} \frac{p\Gamma_1 - A_1}{\alpha_1} \dudx{\alpha_1}{t} + \rho_1 \Gamma_1 \dudx{e_1}{t} = \frac{p\Gamma_2 - A_2}{\alpha_2} \dudx{\alpha_2}{t} + \rho_2 \Gamma_2 \dudx{e_2}{t}. \end{equation} Solution of \cref{eq:diff_e,eq:diff_pres} with respect to $\alpha_k \rho_k \dudx{e_k}{t}$ gives \begin{subequations} \begin{align}\label{eq:dekdt_hc} \alpha_1 \rho_1 \dudx{e_1}{t} = q_1 + {\mathcal{Q}}_1^{HC} - p\dudx{\alpha_1}{t},\\ \alpha_2 \rho_2 \dudx{e_2}{t} = q_2 + {\mathcal{Q}}_2^{HC} - p\dudx{\alpha_2}{t}, \end{align} \end{subequations} where the last term $p\dudx{\alpha_k}{t}$ represents the thermodynamical work due to the motion of the interface. The term $\dudx{\alpha_k}{t}$ is defined in \cref{eq:HC_alp} and depends on $q_k$ and $\mathcal{Q}_k^{HC}$. The temperature relaxation term $\mathcal{Q}_k^{HC}$ represents the heat exchange between the phases, which drives the phase temperatures towards equilibrium.
Here we choose the model for $\mathcal{Q}_k^{HC}$ such that the phase temperature equilibrium is maintained in the course of the multicomponent heat conduction, i.e., \begin{equation}\label{eq:temp_equi} \dudx{T_1}{t} = \dudx{T_2}{t} = \dudx{T}{t}. \end{equation} Note that the condition \cref{eq:temp_equi} is in fact an implicit condition assumed in one-temperature models, for example, the conservative model used in \cite{Lemartelot2014}. By using \cref{eq:dmkdt0,eq:temp_equi,eq:dekdt_hc,eq:ek_mk_Tk}, one can deduce \begin{equation}\label{eq:deT} \sum m_k \dudx{e_k}{T_k} \dudx{T}{t} = \sum q_k, \end{equation} where $\dudx{e_k}{T_k} = C_{v,k}$ for the ideal gas EOS considered in the current work. Solving \cref{eq:deT}, one can obtain the temperature $T^{\prime}$ at the end of this stage. Having $T^{\prime}$, the other variables are computed in the same way as in the temperature relaxation stage (\Cref{subsec:TR}). \subsection{Mass diffusion part} In contrast to the previous stages, where the partial densities $m_k$ remain constant, at this stage the mass diffusion leads to the variation of the partial densities in time, as can be seen from \cref{eq:MD_par_mass}. Since $\sum\vc{J}_k = \vc{0}$, summing \cref{eq:MD_par_mass} over $k$ leads to \begin{equation}\label{eq:sumrhok} \dudx{\rho}{t} = 0. \end{equation} From \cref{eq:sumrhok,eq:MD_mom} it follows that \begin{equation}\label{eq:MD_vel} \rho \dudx{\overline{\vc{u}}}{t} = \sum \nabla \dpr \overline{\overline{\tau}}_{wk}. \end{equation} To compute the RHS of \cref{eq:MD_vel}, we need $\vc{w}_k$, which is evaluated explicitly according to \cref{eq:fick_law}. Combination of \cref{eq:sumrhok,eq:MD_par_mass,eq:fick_law} leads to \begin{equation}\label{eq:md_para_pde} \dudx{m_k}{t} = \rho \dudx{y_k}{t} = \nabla \dpr \left( \rho D \nabla y_k \right). \end{equation} Solving the parabolic PDE (\ref{eq:md_para_pde}) yields the parameters at the end of the mass diffusion stage: $y_k^{\prime}$, $m_k^{\prime}$. Explicit solution of \cref{eq:MD_en} yields $({\rho E})^{\prime}$. In defining $p^{\prime}$ and $\alpha_k^{\prime}$, we use the temperature equilibrium condition, which is assumed in Fick's law and the Stefan-Maxwell law. With \cref{eq:sumrhok,eq:MD_vel,eq:MD_en} one can deduce \begin{equation}\label{eq:MD_diff_e} \rho \dudx{e}{t} = \alpha_1 \rho_1 \dudx{e_1}{t} + \alpha_2 \rho_2 \dudx{e_2}{t} + \mathcal{C}_1 e_1 + \mathcal{C}_2 e_2 = \mathcal{E}^{c}_1 + \mathcal{E}^{c}_2, \end{equation} where for simplicity we introduce \begin{subequations} \begin{align*} \mathcal{C}_k = - \nabla \dpr \vc{J}_k,\\ \mathcal{E}^{c}_k = -\nabla \cdot \boldsymbol{J}_{k}^{h} + \alpha_k \overline{\overline{\tau}}_{wk}:\overline{\overline{D}}_a +\nabla \cdot \alpha_{k} \overline{\overline{\tau}}_{a k} \cdot \boldsymbol{w}_{k}.
\end{align*} \end{subequations} Further, from \cref{eq:MD_diff_e,eq:diff_pres} it follows that \begin{subequations}\label{eq:MD_dekdt} \begin{align} \alpha_1 \rho_1 \dudx{e_1}{t} = - \widetilde{p} \dudx{\alpha_1}{t} + \widetilde{\mathcal{E}}_1,\\ \alpha_2 \rho_2 \dudx{e_2}{t} = - \widetilde{p} \dudx{\alpha_2}{t} + \widetilde{\mathcal{E}}_2, \end{align} \end{subequations} where \begin{subequations} \begin{align} \widetilde{p} &= p - \frac{A_1/\alpha_1 + A_2/\alpha_2}{\Gamma_1/\alpha_1 + \Gamma_2/\alpha_2}, \\ \widetilde{\mathcal{E}}_1 &= \frac{\mathcal{E}_s \Gamma_2 /\alpha_2 + \mathcal{H}_2 - \mathcal{H}_1}{\Gamma_1/\alpha_1 + \Gamma_2/\alpha_2}, \\ \widetilde{\mathcal{E}}_2 &= \frac{\mathcal{E}_s \Gamma_1 /\alpha_1 + \mathcal{H}_1 - \mathcal{H}_2}{\Gamma_1/\alpha_1 + \Gamma_2/\alpha_2} , \\ \mathcal{E}_s &= - \mathcal{C}_1 e_1 - \mathcal{C}_2 e_2 + \mathcal{E}^{c}_1 + \mathcal{E}^{c}_2,\\ \mathcal{H}_k &= \frac{\mathcal{C}_k \left( A_k - \Gamma_k p \right)}{\rho_k}. \end{align} \end{subequations} The heat transfer leads to the variation of the volume fraction. This variation can be determined from \cref{eq:MD_dekdt,eq:ek_mk_Tk} as follows: \begin{equation}\label{eq:MD_dalpdt} \dudx{\alpha_1}{t} = \frac{m_1 m_2 \left( e_{1,T} e_{2,m} - e_{2,T} e_{1,m} \right) + \left( m_2 e_{2,T} \widetilde{\mathcal{E}}_1 - m_1 e_{1,T} \widetilde{\mathcal{E}}_2 \right)}{ \left( m_1 e_{1,T} + m_2 e_{2,T}\right) \widetilde{p}}, \end{equation} where \begin{subequations} \begin{align} e_{k,T} = \dudx{e_k}{T_1} + \dudx{e_k}{T_2},\\ e_{k,m} = \dudx{e_k}{m_1} \mathcal{C}_1 + \dudx{e_k}{m_2} \mathcal{C}_2. \end{align} \end{subequations} For the ideal gas, we have \begin{equation}\label{eq:dekdTj} \dudx{e_k}{T_j} = \delta_{kj} C_{v,k}, \;\; \dudx{e_k}{m_j} = 0, \end{equation} where $\delta_{kj}$ is the Kronecker delta. In this case, \cref{eq:MD_dalpdt} simplifies considerably: \begin{equation}\label{eq:MD_dalpdt1} \dudx{\alpha_1}{t} = \frac{ m_2 C_{v,2} \widetilde{\mathcal{E}}_1 - m_1 C_{v,1} \widetilde{\mathcal{E}}_2 }{ \left( m_1 C_{v,1} + m_2 C_{v,2}\right) \widetilde{p}}. \end{equation} Under the temperature equilibrium assumption we solve \cref{eq:MD_dalpdt1} instead of \cref{eq:MD_vol}. The solution of \cref{eq:MD_par_mass,eq:MD_mom,eq:MD_en,eq:MD_dalpdt} provides the full set of conservative variables at the end of this stage: $[m_1^{\prime}, \;\;m_2^{\prime}, \;\;(\rho \overline{\vc{u}})^{\prime}, \;\;(\rho E)^{\prime}, \;\; \alpha_1^{\prime} ]$. An alternative approach to determine $\dudx{\alpha_k}{t}$ is as follows: (1) by using \cref{eq:md_para_pde,eq:MD_diff_e,eq:e_fun,eq:temp_equi}, one can obtain $\dudx{T}{t}$; (2) having $\dudx{T}{t}$, by using \cref{eq:rhok_fun,eq:md_para_pde}, one can obtain $\dudx{\rho_k}{t}$; (3) having $\dudx{\rho_k}{t}$, by using \cref{eq:md_para_pde}, one can obtain $\dudx{\alpha_k}{t}$. We find that these two approaches lead to numerical results with negligible differences. \subsection{Numerical methods for the parabolic diffusion PDEs}\label{subsec:numer_met_para} The dissipation equations (\cref{eq:viscous_para_pde,eq:deT,eq:md_para_pde}) can be written as the following quasi-linear parabolic PDE (in 1D): \begin{equation}\label{eq:nonlin_para_pde} \frac{\partial v}{\partial t}=L [v]+f(x, t), \quad x \in \Lambda \subset \mathbb{R}, \end{equation} where the operator $L[\cdot]$ is a quasi-linear elliptic operator that is positive definite and takes the following form \begin{equation} L[v] = \dudx{}{x}\left( k(v) \dudx{v}{x} \right).
\end{equation} To solve such non-linear parabolic equations, we use the following iterative method: \begin{align} &v^{(0)} = v^n, \\ &\frac{\partial v^{(s+1)}}{\partial t} = \dudx{}{x}\left( k(v^{(s)}) \dudx{v^{(s+1)}}{x} \right) +f(x, t),\label{eq:lin_para_pde} \end{align} where the non-linear coefficient $k(v^{(s)})$ is linearised by evaluating it at the solution of the previous iteration. One can also use more advanced methods such as the Newton-Raphson method to speed up the convergence. The sequence (\ref{eq:lin_para_pde}) is iterated until convergence, which is declared when $||v^{(s+1)} - v^{(s)}|| < \chi$ ($\chi$ is a small positive number). To solve the linearised parabolic PDE (\ref{eq:lin_para_pde}) for the unknown $v^{(s+1)}$, one can use various implicit or explicit methods. Here, we use a monotonicity-preserving explicit local iteration method (LIM) \cite{Zhukov2010}. This scheme has a stable time step of order $\mathcal{O}(P^2)$ ($P$ represents the stencil size), and thus alleviates the stiffness of the explicit implementation. It has advantages in computational efficiency and parallel scalability over implicit schemes when the parabolic Courant number is not too large (less than $10^4$); see \cite{zhukov2018,zhukov2018development}. \subsection{Preservation of the pressure-velocity-temperature equilibrium} To preserve the pressure-velocity-temperature equilibrium in the pure translation of an isolated interface, two different mixture EOSs are used in \cite{Alahyari2015,Johnsen2012}. However, this may result in incompatibility with the second law of thermodynamics, since one cannot define a mixture entropy due to the ambiguity in the EOS definition. In their approach only one temperature is involved, meaning that the temperature equilibrium is always reached. The equilibrium temperature can be regarded as a specific average of the phase temperatures. On thermodynamic grounds, we have explicitly introduced the temperature relaxation mechanism to reach the temperature equilibrium. Our approach ensures the entropy production with one uniquely defined mixture EOS. Moreover, it maintains the pressure-velocity-temperature equilibrium in the pure translation of an isolated material interface. Let us consider the following Riemann problem: \begin{align}\label{eq:initial1} u^{L} =u^{R}=u > 0, \;\; \rho_{k}^{L} =\rho_{k}^{R}=\rho_{k}, \;\; e_{k}^{L} = e_{k}^{R} = e_{k}, \;\; {\alpha_2}^{L} \neq {\alpha_2}^{R}, \;\; p_{L} = p_{R} = p, \;\; T_{L} = T_{R} = T, \;\; k=1,2. \end{align} The phase temperatures on both sides of the interface are in equilibrium at the initial moment, i.e., \begin{equation}\label{eq:initial2} T_{1,L} = T_{2,L} = T_{L}, \; T_{1,R} = T_{2,R} = T_{R}. \end{equation} \begin{proposition} The solution to the proposed model equations (in the absence of mass diffusion) maintains the pressure-velocity-temperature equilibrium with the initial discontinuity (\ref{eq:initial1}) and (\ref{eq:initial2}). \end{proposition} \begin{proof}\label{proof:1} The solutions are obtained in the framework of the Godunov FVM. We use a Riemann solver that restores the isolated contact discontinuity. We shall check the variables in the cell downstream of the given discontinuity after a time step, which are denoted as $\vc{U}^{*}$. We have $\mathcal{F}_{k,hd}^{(1)} = 0$ since the velocity is uniform across the computational domain. In the absence of diffusion or external energy sources, the model (\ref{eq:final_model}) is equivalent to that of \cite{allaire2002five}.
Therefore, we can directly use the theorem proved in \cite{allaire2002five}: \begin{equation} u^{*} = u, \quad \rho_{k}^{*} =\rho_{k}, \quad e_{k}^{*} =e_{k}, \quad p^{*} =p. \end{equation} According to the EOS, we have $T_k^{*} = T_k (\rho_k^{*}, p^{*})$, which leads to \begin{equation} T_k^{*} = T. \end{equation} Thus, the pressure-velocity-temperature equilibrium is maintained at the hydrodynamic stage. Formally, the temperature relaxation process described in \cref{eq:tr_temp_av} can be treated as an averaging of the phase temperatures; thus, the equilibrium temperature remains unchanged at this stage. Moreover, the partial densities $m_k$ are also constant in the course of the temperature relaxation. By using the relations \cref{eq:rhok_fun}, one can conclude that the phase densities $\rho_k$ and volume fractions $\alpha_k$ also remain constant. According to the EOS, the pressure does not change, either. Due to the uniformity of the velocity and temperature in space, the viscous dissipation and heat conduction do not alter the above solution. To summarize, the pressure-velocity-temperature equilibrium is maintained. \end{proof} \section{Numerical results} \label{sec:numer_res} In this section, we perform several numerical tests to verify the proposed model and numerical methods. Unless otherwise mentioned, we use the following default settings: (a) CFL = 0.2, (b) the fifth-order WENO scheme to reconstruct the characteristic variables that are linearized on the cell interface \cite{Coralic2014Finite}, (c) the two-stage Heun method for time integration. The considered tests demonstrate the effects of temperature relaxation, mass diffusion, viscosity and heat conduction. Some of the numerical results of the proposed model with heat conduction are compared to those of the conservative four-equation model \cite{Cook2009Enthalpy,larrouturou1991preserve}, demonstrating the ability of the present model to treat miscible interface problems without spurious oscillations. We also apply the model to simulate the laser ablative RM instability problem in the ICF field. The results obtained with the two models demonstrate noticeable differences in the flow structures. \subsection{The pure transport problem}\label{subsec:pureTRS} We first consider the pure translation of a smeared mass fraction profile, which mimics a miscible interface. The pressure, temperature and velocity are uniformly $1\times 10^4\,$Pa, $5\,$K and $100\,$m/s across the computational domain, respectively. The two gases are characterized by the IG EOS with $\rho_1 = 20\,\text{kg}/\text{m}^3, \gamma_1 = 2.0$ and $\rho_2 = 1\,\text{kg}/\text{m}^3, \gamma_2 = 1.4$, respectively. The parameter $C_{v,k}$ should be chosen such that the initial phase temperatures are in equilibrium, i.e. $(\gamma_1 - 1) \rho_1 C_{v,1} = (\gamma_2 - 1) \rho_2 C_{v,2}$. The initial mass fraction is smeared as \cite{Thornber2018,Kokkinakis2015}: \begin{subequations}\label{eq:thornber_analy} \begin{align} {\rho}=\frac{1}{2}\left({\rho}_{1}+{\rho}_{2}\right) - \frac{1}{2}\left({\rho}_{1}-{\rho}_{2}\right) \operatorname{erf}(z), \\ {\rho Y_1}=\frac{1}{2}{\rho}_{1} - \frac{1}{2}{\rho}_{1} \operatorname{erf}(z), \\ z=\frac{x - x_0}{\sqrt{4 D t+h_{0}^{2}}}, \end{align} \end{subequations} where $h_0 = 0.02\,$m is the initial interface diffuseness, $x_0$ is the center of the computational domain $x\in \left[ 0\,\text{m}, 1\,\text{m} \right]$, and $D = 0.01\,\text{m}^2$/s is the diffusivity. Periodic boundary conditions are imposed on both sides of the computational domain.
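For concreteness, the initialization \cref{eq:thornber_analy} can be evaluated as in the following minimal Python sketch (our own illustration, not part of the solver; the grid resolution is an arbitrary choice):
\begin{verbatim}
import numpy as np
from scipy.special import erf

rho1, rho2 = 20.0, 1.0          # phase densities [kg/m^3]
h0, D, x0 = 0.02, 0.01, 0.5     # diffuseness [m], diffusivity [m^2/s], centre [m]

def smeared_profile(x, t):
    """Density and partial density of phase 1 from the smeared profile."""
    z = (x - x0) / np.sqrt(4.0 * D * t + h0 ** 2)
    rho = 0.5 * (rho1 + rho2) - 0.5 * (rho1 - rho2) * erf(z)
    rho_y1 = 0.5 * rho1 - 0.5 * rho1 * erf(z)
    return rho, rho_y1

x = np.linspace(0.0, 1.0, 401)            # domain [0 m, 1 m]
rho0, rho_y10 = smeared_profile(x, 0.0)   # initial condition (t = 0)
\end{verbatim}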
After $\Delta t = 5 / u$ (five periods) the solutions should return to the initial state. The numerical results obtained with the reduced five-equation model are shown in \Cref{fig:pureTRS}. It can be seen that the pressure and temperature equilibrium is well preserved. \begin{figure}[ht] \centering \subfloat[Temperature]{\label{fig:pureTRS:vel}\includegraphics[width=0.5\textwidth]{./FIGS/smoothAdvTemp-eps-converted-to.pdf}} \subfloat[Pressure]{\label{fig:pureTRS:vel1}\includegraphics[width=0.5\textwidth]{./FIGS/smoothAdvPres-eps-converted-to.pdf}} \caption{Numerical results for the pure translation of a diffused interface.} \label{fig:pureTRS} \end{figure} \subsection{The pure mass diffusion problem}\label{subsec:pureMD} To verify the mass diffusion part of the model, we consider a problem with a diffusion velocity that is small in comparison with the sound speed. In this case, the mass diffusion problem can be approximately treated as an incompressible one. The densities, adiabatic coefficients and specific heat capacities are the same as those in the last test. It is assumed that the mass diffusion occurs on such a small scale that the pressure and temperature are nearly uniform. In the incompressible limit of such a problem the governing equations reduce to \cite{Livescu2013,Kokkinakis2015,Thornber2018}: \begin{equation}\label{eq:analytic_diff} \frac{\partial {\rho}}{\partial t}=\frac{\partial}{\partial x}\left(D \frac{\partial {\rho}}{\partial x}\right), \quad \frac{\partial}{\partial x}\left({u}+\frac{D}{{\rho}} \frac{\partial {\rho}}{\partial x}\right)=0. \end{equation} With the zero-gradient boundary condition for the density, the solution to \cref{eq:analytic_diff} is given by \cref{eq:thornber_analy}. To investigate the convergence performance of the models, the initial conditions are obtained by integrating \cref{eq:thornber_analy} and averaging within each cell \cite{Thornber2018}. The initial pressure is uniformly set to $1\times10^5\,\text{Pa}$, which results in a Mach number small enough that compressibility effects can be neglected. Computations are performed on a series of successively refined grids. The corresponding numerical results are shown in \Cref{fig:figMD}. It can be seen that the numerical results converge to the analytical solution (in the incompressible limit) under grid refinement, with the second-order MUSCL scheme used for spatial reconstruction in the hydrodynamic stage. The convergence order of the numerical algorithm is approximately 2, as demonstrated in \Cref{figMD1:conv_rate}. We also compare the temperature error with the results in \cite{Thornber2018} (\Cref{figMD1:T_err}).
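The observed order quoted above is estimated in the standard way from the errors on successively refined grids; a minimal sketch (assuming refinement by a factor of two; the error values below are placeholders, not our data):
\begin{verbatim}
import numpy as np

# L1 errors on grids refined by a factor of 2 (placeholder values)
errors = np.array([4.0e-3, 1.1e-3, 2.9e-4, 7.4e-5])

# observed order between consecutive grids: log2(e_h / e_{h/2})
orders = np.log2(errors[:-1] / errors[1:])
print(orders)   # values near 2 indicate second-order accuracy
\end{verbatim}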
\begin{figure}[htbp] \centering \subfloat[Velocity]{\label{figMD:vel}\includegraphics[width=0.5\textwidth]{./FIGS/conv_Vel-eps-converted-to.pdf}} \subfloat[Velocity, locally enlarged]{\label{figMD:vel1}\includegraphics[width=0.5\textwidth]{./FIGS/conv_Vel1-eps-converted-to.pdf}}\\ \subfloat[Density]{\label{figMD:pres}\includegraphics[width=0.5\textwidth]{./FIGS/conv_Dens-eps-converted-to.pdf}} \subfloat[Density, locally enlarged]{\label{figMD:temp}\includegraphics[width=0.5\textwidth]{./FIGS/conv_Dens1-eps-converted-to.pdf}} \caption{Numerical results for the pure mass diffusion problem.} \label{fig:figMD} \end{figure} \begin{figure}[htbp] \centering \subfloat[Accuracy order]{\label{figMD1:conv_rate}\includegraphics[width=0.5\textwidth]{./FIGS/md_order_line-eps-converted-to.pdf}} \subfloat[Temperature error]{\label{figMD1:T_err}\includegraphics[width=0.5\textwidth]{./FIGS/Redu_vs_thornber_temp-eps-converted-to.pdf}} \caption{Numerical results of the reduced model for the pure mass diffusion problem. Left: convergence rate. Right: temperature error on the 128-cell grid when $P = 1\times 10^4$.} \label{fig:figMD1} \end{figure} \subsection{The convergence of the viscous part} Having validated the convergence of the mass diffusion part, we now check the convergence performance of the viscous part. For this purpose, we manufacture an exact solution as follows \begin{equation} \left\{\begin{array}{l} (\alpha_k \rho_k)(x,t) = 0.5 \text{kg}/\text{m}^3, \\ \alpha_1(x,t) = 0.7, \\ p(x,t) = 100 \text{Pa}, \\ u(x,t) = \frac{1+(2 a-1) \exp ((1-a) \xi / \nu)}{1+\exp ((1-a) \xi / \nu)}, \quad \xi=x-a t-x_{0}. \end{array}\right. \end{equation} Note that the manufactured solution for $u(x,t)$ is the analytical solution to the viscous Burgers equation \[\dudx{u}{t} + \frac{\partial}{\partial x} \frac{u^2}{2} = \nu \frac{\partial^2 u}{\partial x^2}.\] The properties of the materials are given as $\gamma_1 = 5.0$, $C_{v,1} = 40.0 \text{J}/(\text{kg} \cdot \text{K})$, $\gamma_2 = 1.4$, $C_{v,2} = 400.0 \text{J}/(\text{kg} \cdot \text{K})$ so that the initial temperature equilibrium is satisfied. We use the constants $a = 0.4$, $x_0 = 0.1\text{m}$ and $\nu = 0.01$; the exact solution at $t = 0.05\text{s}$ is taken as the initial value. We perform the computation up to $t = 0.15\text{s}$. The numerical results converge to the exact solution with grid refinement (\Cref{figvis:2}). The dependence of the error on the spatial resolution is shown in \Cref{figvis:1}, demonstrating that the accuracy is approximately second order. \begin{figure}[htbp] \centering \subfloat[Accuracy order]{\label{figvis:1}\includegraphics[width=0.5\textwidth]{./FIGS/visconvergenceRate-eps-converted-to.pdf}} \subfloat[Velocity on different grids]{\label{figvis:2}\includegraphics[width=0.5\textwidth]{./FIGS/visconverge-eps-converted-to.pdf}} \caption{Convergence performance of the viscous part.} \label{fig:visconvergence} \end{figure} \subsection{The multi-component shock tube problem}\label{subsec:shock_interf_interaction} To verify and compare the different models, we consider a multicomponent shock tube problem with a resolved interface.
The initial condition is given as follows \begin{equation} \left(\rho, u, p, \gamma, C_v\right)= \begin{cases}\left(1000 \mathrm{~kg} \cdot \mathrm{m}^{-3}, 0 \mathrm{~m} \cdot \mathrm{s}^{-1}, 10^9 \mathrm{~Pa}, 4.4, 1606 \text{J}/(\text{kg}\cdot\text{K}) \right) & \text { for } x<0.7, \\ \left(50 \mathrm{~kg} \cdot \mathrm{m}^{-3}, 0, 10^5 \mathrm{~Pa}, 1.4, 714 \text{J}/(\text{kg}\cdot\text{K}) \right) & \text { for } 0.7 < x < 1. \end{cases} \end{equation} We use two grids: a coarse one of 1200 uniform cells and a fine one of 12000 cells. The numerical results obtained with the different models are shown in \Cref{fig:figSHOCK}. The exact solutions in this figure are those of the multi-component Euler equations without any relaxation and without (numerical or physical) diffusion. Therefore, the solution of the hydrodynamic part (\cref{eq:HD}) without any relaxation agrees better with the exact solutions (\Cref{figSHOCK:dens1}). The numerical results of the one-temperature four-equation model and the temperature-disequilibrium model with the complete temperature relaxation deviate from the exact solution in the vicinity of the interface. Moreover, the one-temperature model introduces an overshoot in temperature (\Cref{figSHOCK:temp1}). \begin{figure} \centering \subfloat[Density]{\label{figSHOCK:dens}\includegraphics[width=0.5\textwidth]{./FIGS/Shocktube_dens-eps-converted-to.pdf}} \subfloat[Density, locally enlarged]{\label{figSHOCK:dens1}\includegraphics[width=0.5\textwidth]{./FIGS/Shocktube_dens1-eps-converted-to.pdf}}\\ \subfloat[Temperature]{\label{figSHOCK:temp}\includegraphics[width=0.5\textwidth]{./FIGS/Shocktube_Temp-eps-converted-to.pdf}} \subfloat[Temperature, locally enlarged]{\label{figSHOCK:temp1}\includegraphics[width=0.5\textwidth]{./FIGS/Shocktube_Temp1-eps-converted-to.pdf}} \caption{Numerical results obtained with different models on different grids for the shock tube problem.} \label{fig:figSHOCK} \end{figure} \subsection{Shock passage through a smeared interface} Mass diffusion creates a physically smeared interface. The interaction between such a smeared interface and a shock has been investigated experimentally in the literature, for example, in the gas curtain experiment \cite{balakumar2008simultaneous}. The current test is a 1D analogue of this problem. The initial condition is shown in \Cref{fig:figGC_initial}. A Mach 5 shock in air impacts the smeared SF$_6$ zone, within which the volume fraction is distributed as \cite{mikaelian1996numerical} \begin{equation} \alpha_{\text{SF}_6} = \frac{C}{\text{exp}{\left[ (836 (x-0.02))^2\right]}}. \end{equation} Here we take the constant $C$ to be 0.95. The incident shock is transmitted and reflected in its interaction with the smeared interface. It compresses the smeared interface to form a thin spike in density. We compare the solutions obtained with the one-temperature four-equation model and the reduced model with/without the temperature relaxation. The numerical results for the density, temperature and mass fraction are compared in \Cref{figGC:dens,figGC:dens1}, \Cref{figGC:temp,figGC:temp1} and \Cref{figGC:Y,figGC:Y1}, respectively. We see that the solutions of the reduced model without the temperature relaxation deviate from those with the temperature relaxation included implicitly (for the four-equation model) or explicitly (for the reduced model). From \Cref{figGC:temp1} one can observe obvious oscillations in the solutions of the four-equation model.
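For reference, the smeared SF$_6$ initialization used in this test can be evaluated as in the following minimal sketch (our own illustration; the grid is an arbitrary choice):
\begin{verbatim}
import numpy as np

C = 0.95                          # peak SF6 volume fraction

def alpha_sf6(x):
    """Smeared volume fraction, alpha = C * exp(-(836 (x - 0.02))^2)."""
    return C * np.exp(-(836.0 * (x - 0.02)) ** 2)

x = np.linspace(0.0, 0.04, 801)   # illustrative 1D grid [m]
alpha = alpha_sf6(x)
alpha_air = 1.0 - alpha           # volume fraction of air
\end{verbatim}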
\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{./FIGS/GC_initial-eps-converted-to.pdf} \caption{Initial condition for the problem of shock passage through a smeared interface.} \label{fig:figGC_initial} \end{figure} \begin{figure}[htbp] \centering \subfloat[Density]{\label{figGC:dens}\includegraphics[width=0.5\textwidth]{./FIGS/GC_dens-eps-converted-to.pdf}} \subfloat[Density, locally enlarged]{\label{figGC:dens1}\includegraphics[width=0.5\textwidth]{./FIGS/GC_dens1-eps-converted-to.pdf}}\\ \subfloat[Temperature]{\label{figGC:temp}\includegraphics[width=0.5\textwidth]{./FIGS/GC_T-eps-converted-to.pdf}} \subfloat[Temperature, locally enlarged]{\label{figGC:temp1}\includegraphics[width=0.5\textwidth]{./FIGS/GC_T1-eps-converted-to.pdf}}\\ \subfloat[Mass fraction]{\label{figGC:Y}\includegraphics[width=0.5\textwidth]{./FIGS/GC_Y-eps-converted-to.pdf}} \subfloat[Mass fraction, locally enlarged]{\label{figGC:Y1}\includegraphics[width=0.5\textwidth]{./FIGS/GC_Y1-eps-converted-to.pdf}} \caption{Numerical results for the shock passage through a smeared interface.} \label{fig:figGC} \end{figure} \subsection{The shock wave passage through a helium bubble}\label{subsec:HeB_temp} In this section, we consider the interaction of a shock (Mach number 1.22) with a cylindrical helium bubble \cite{Haas1987,2018Capuano}. The computational domain is of size $22.25 \mathrm{~cm} \times 8.90 \mathrm{~cm}$. The bubble, with a diameter of 5 cm, is initially located at $(13.80 \mathrm{~cm}, 4.45 \mathrm{~cm})$. The initial data is given as: \begin{equation} \left(\rho, u, p, \gamma, C_v\right)= \begin{cases}\left(1.66 \mathrm{~kg} \cdot \mathrm{m}^{-3},-114 \mathrm{~m} \cdot \mathrm{s}^{-1}, 159080.98 \mathrm{~Pa}, 1.4, 2430.35 \text{J}/(\text{kg}\cdot\text{K}) \right) & \text { for } x>x_{s}, \\ \left(1.2062 \mathrm{~kg} \cdot \mathrm{m}^{-3}, 0,101325 \mathrm{~Pa}, 1.4, 2430.35 \text{J}/(\text{kg}\cdot\text{K}) \right) & \text { in air, for } x \leq x_{s}, \\ \left(0.2204 \mathrm{~kg} \cdot \mathrm{m}^{-3}, 0,101325 \mathrm{~Pa}, 1.6451, 717.50 \text{J}/(\text{kg}\cdot\text{K})\right) & \text { inside the helium bubble, }\end{cases} \end{equation} where $x_s = 16.80\,$cm is the initial position of the left-going shock wave. We compute this problem including viscosity, heat conduction and mass diffusion. The viscosity coefficient is determined with Sutherland's equation \cite{Sutherland}. The heat conduction coefficient is determined in the same way as in \cite{2018Capuano}. As for the mass diffusivity, since its dependence on temperature and pressure is $D \sim {T^{3/2}}/{p}$, we calculate it with \begin{equation} D = D_0 \frac{T^{3/2}/p}{T_0^{3/2}/p_0}, \end{equation} where $D_0$ is a reference value at $(p_0, T_0)$. We use $D_0 = 73.35\times 10^{-6}$m$^2$/s at $p=1\,$atm and $T=298\,$K \cite{wasik1969measurements}. To verify the numerical results, we compare the numerically obtained interface motion of the bubble along the horizontal centreline with that of \cite{2018Capuano} (\Cref{fig:interfPos}). Note that the computations of \cite{2018Capuano} are performed without mass diffusivity. Since the shock-interface interaction time is very short, the impact of the mass diffusivity on the interface motion is marginal. However, it does modify the small-scale flow structures (\Cref{fig:diff_vs_nodiff}). From the 1D slice along the horizontal centreline, one can see that the diffusivity smears the density profile.
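The scaling of the mass diffusivity described above can be implemented as in the following minimal sketch (the post-shock state in the example call is purely illustrative):
\begin{verbatim}
# Mass diffusivity scaling D ~ T^(3/2)/p, anchored at a reference state
D0, T0, p0 = 73.35e-6, 298.0, 101325.0    # m^2/s at 1 atm, 298 K

def diffusivity(T, p):
    return D0 * (T / T0) ** 1.5 * (p0 / p)

D_example = diffusivity(600.0, 2.0e5)     # illustrative post-shock state
\end{verbatim}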
\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{./FIGS/HeB_interfPos-eps-converted-to.pdf} \caption{The time evolution of the interface position along the horizontal centreline. The lines on the top (bottom) correspond to the positions of the right (left) interface.} \label{fig:interfPos} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{./FIGS/Hbub8.jpg} \caption{The numerical Schlieren obtained with the four-equation model (top) and the five-equation model (bottom) compared with the experimental shadowgraph (inside the rectangular area). The dashed circle is the initial position of the bubble.} \label{fig:HeB_sh} \end{figure} \begin{figure}[htbp] \centering \subfloat[The solutions for density in the cases of diffusivity being excluded (left) and included (right)]{\includegraphics[width=0.7\textwidth]{./FIGS/5eqn_dens_diff_vs_nodiff.png}}\\ \subfloat[The density distribution along the horizontal centreline]{\includegraphics[width=0.5\textwidth]{./FIGS/Diff_VS_Nodiff-eps-converted-to.pdf}} \caption{The effect of mass diffusivity in the shock-helium-bubble interaction problem.} \label{fig:diff_vs_nodiff} \end{figure} The solutions for the temperature obtained with the different models in the neighbourhood of the bubble are compared in \Cref{fig:HeB_temp}. Serious non-physical oscillations arise across the interface in the solutions of the four-equation model, which is clearer in the 1D distribution along the horizontal centreline in \Cref{HeB_temp:1D}. When the temperature diffusivity is significant enough, this non-physical error may have a major impact on the convergence of the model. \begin{figure}[htbp] \centering \subfloat[Temperature distribution (at the time moment $t=427\mu$s, timed from the beginning of the shock-interface interaction) obtained with the four-equation model (left) and the reduced five-equation model (right).]{\label{HeB_temp:2D}\includegraphics[width=0.7\textwidth]{./FIGS/Temp_4_vs_5.png}}\\ \subfloat[The temperature distribution along the centreline of the bubble]{\label{HeB_temp:1D}\includegraphics[width=0.5\textwidth]{./FIGS/Temp_4_vs_5_1Dslice-eps-converted-to.pdf}} \caption{Numerical results for temperature.} \label{fig:HeB_temp} \end{figure} \subsection{The laser-driven RM instability problem}\label{subsec:ABLRM} In this section we consider the laser-driven RM instability problem conducted on OMEGA \cite{Robey2003The,miles2004numerical}. The schematic of the experiment is shown in \Cref{fig:ABLsetup}. The multi-material target is assembled into a beryllium shock tube of diameter 800$\mu$m. The target is made up of two sections: the pusher section of length 150$\mu$m on the right and the payload section of length 19mm on the left. A strong shock is generated by laser ablation of the pusher section, which consists of the polyimide (C$_{22}$H$_{10}$N$_{2}$O$_{4}$) and the polystyrene (C$_{500}$H$_{457}$Br$_{43}$). This section is modeled as a homogeneous material with density 1.41$\text{g}/\text{cm}^3$ and $\gamma = 5/3$. The remainder of the target is the carbon foam payload (C-foam, $\rho = 0.1\text{g}/\text{cm}^3, \; \gamma = 1.4$). The interface between the two sections is initially perturbed as a cosine function with wavelength 50$\mu$m and amplitude 2$\mu$m. The laser has a wavelength of 0.351$\mu$m and an average intensity of $6\times10^{14}$W/cm$^2$. The shock in the central area has good planarity, and is thus approximated as a planar one.
Note that there are many complicated experimental uncertainties that are difficult to account for in numerical simulations, for example, the pre-heating state of the target and the laser energy loss. Moreover, with the polytropic equation of state, the true state of the materials is described with limited accuracy. In addition, the experimental diagnostics also introduce some error. Due to these uncertainties, numerical simulations can hardly reproduce the experimental conditions exactly. We set the initial temperature to 290K based on a trial-and-error approach. Our simulation focuses on one period of the perturbation, with periodic boundary conditions imposed on the sides perpendicular to the incident shock. The present simulation includes the complete set of physical processes: laser energy deposition, heat conduction, viscosity and mass diffusion. The laser energy is deposited in a 20$\mu$m region to the right of the critical density $\rho_{crt}$ of the ablator. According to the inverse bremsstrahlung absorption theory, the critical density is $\rho_{crt} = 2.78\times 10^{-2}$g/cm$^{3}$. The heat conduction coefficient of the plasma is calculated with the Spitzer-Harm model \cite{Spitzer1953}. The plasma viscosity is modeled with Clerouin's model \cite{clerouin1998viscosity}. As for the mass diffusivity, we use the estimates in \cite{robey2004effects}: prior to the interaction of the shock with the interface, the materials are in the solid state and the mass diffusivity is negligible. After the shock arrival (at $t \approx 2$ns), the Schmidt number ($Sc = \nu / D$, where $\nu$ is the average kinematic viscosity) is almost constant and close to 1. In this estimation, the mass diffusivity is determined with the model of Paquette \cite{paquette1986diffusion}. Computations are performed on a grid of $2880 \times 60$ cells with the proposed reduced model and the conservative four-equation model. The numerical results for the density, temperature and mass fraction are displayed in \Cref{fig:figABL_comp}. It can be seen that the solutions of both models have similar flow structures. The shock wave and the interface move slightly faster in the solutions of the reduced model than in those of the four-equation model. \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{./FIGS/ABLsetup.png} \caption{The schematic of the laser-driven RM instability experiment on OMEGA.} \label{fig:ABLsetup} \end{figure} \begin{figure}[htbp] \centering \subfloat[Density]{\label{figABL_comp:dens}\includegraphics[width=0.98\textwidth]{./FIGS/MILES_4VS5_DENS_.png}}\\ \subfloat[Temperature]{\label{figABL_comp:temp}\includegraphics[width=0.98\textwidth]{./FIGS/MILES_4VS5_T_.png}}\\ \subfloat[Mass fraction]{\label{figABL_comp:y}\includegraphics[width=0.98\textwidth]{./FIGS/MILES_4VS5_Y_.png}} \caption{Numerical results for the density, the temperature and the mass fraction at $t = 12$ns for the laser-driven RM instability problem. The top figures are the results of the four-equation model, the bottom ones those of the reduced model. The units for length, density and temperature are cm, g/cm$^3$ and MK, respectively. } \label{fig:figABL_comp} \end{figure} In \Cref{fig:ABL_STA} we compare the numerically obtained interface evolution parameters with the experimental ones. \Cref{ABL_STA:interfPos} shows the evolution of the leftmost interface position (i.e., the distance from its initial position) with time. Good agreement with the experimental results is observed.
\Cref{ABL_STA:amp} shows the time evolution of the half peak-to-valley amplitude. Both models give results that lie within the measurement range. \begin{figure}[htbp] \centering \subfloat[Interface position]{\label{ABL_STA:interfPos}\includegraphics[width=0.5\textwidth]{./FIGS/MilesInterfLoc-eps-converted-to.pdf}} \subfloat[Amplitude]{\label{ABL_STA:amp}\includegraphics[width=0.5\textwidth]{./FIGS/MilesAmp-eps-converted-to.pdf}} \caption{The time evolution of the interface position and amplitude for the laser-driven RM instability problem.} \label{fig:ABL_STA} \end{figure} We define the Reynolds number $Re = u L / \nu$, where $u$ is the characteristic velocity defined by $\frac{1}{2}\rho u^2 = E$, with $E$ the total deposited laser energy and $\rho$ the average mixture density; the characteristic length $L$ is taken to be the wavelength of the initial perturbation, and $\nu$ is the initial maximum mixture kinematic viscosity. To investigate the impact of the diffusivities, we vary $Re$ through the viscosity. The numerical solutions for the mass fraction in the case of different diffusivities are compared in \Cref{fig:ABLRM_DIFF_NO_DIFF}. We can see that the transport processes tend to wipe out the small-scale flow structures. The last two figures compare the numerical results with and without the mass diffusion, whose effect in smearing the mass fraction is evident. \begin{figure}[htbp] \centering \includegraphics[width=0.6\textwidth]{./FIGS/MILES_4VS5_DIFFDIFF.png} \caption{Numerical solutions for the mass fraction of the C-foam in the case of different diffusivities. The Reynolds number for the experimental condition is used as the reference $Re_{exp}$. From top to bottom: (1) $Re = Re_{exp}, \; Sc = 1$, (2) $Re = Re_{exp}/10, \; Sc = 1$, (3) $Re = Re_{exp}/50, \; Sc = 1$, (4) $Re = Re_{exp}, \; 1/Sc \to 0$. } \label{fig:ABLRM_DIFF_NO_DIFF} \end{figure} \section{Conclusion} \label{sec:conclusion} In the present paper we have presented a diffuse-interface model for compressible multicomponent flows with interphase heat transfer, an external energy source, and diffusion processes (viscous dissipation, heat conduction, mass diffusion and enthalpy diffusion). The model is reduced from the Baer-Nunziato model in the limit of instantaneous mechanical relaxations. The difference between the time scales of the velocity, pressure and temperature relaxations has been accounted for. The reduction procedure results in a temperature-disequilibrium, velocity-disequilibrium, and pressure-equilibrium five-equation model. The proposed model is free of the spurious oscillation problem in the vicinity of the interface and respects the laws of thermodynamics. Numerical methods for its solution have been proposed on the basis of the fractional step method. The model is split into five parts: the hydrodynamic part, the viscous part, the temperature relaxation part, the heat conduction part and the mass diffusion part. The hyperbolic equations involved are solved with the Godunov finite volume method, and the parabolic ones with the local iteration method based on Chebyshev parameters. The developed model and numerical methods have been used for solving several multicomponent problems and verified against analytical and experimental results. Moreover, we have applied our model to simulate the laser ablation process of a multicomponent target, where the Richtmyer-Meshkov instability can be clearly observed. Comparison with experimental results demonstrates that our model captures the physical phenomena of this process.
\bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.84375, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbEvxK02iP5rWIyyn
\section{Introduction} \setcounter{equation}{0} One of the central problems of the theory and practice of electrical impedance tomography is the problem of estimating the volume of the inclusions in terms of boundary measurements, either voltage measurements when currents are applied around the boundary of the body or current measurements when voltages are applied. The problem can be described in rigorous terms as follows: Let $D$ be an inclusion inside a body $\Omega$, and suppose that the conductivities of $D$ and $\Omega \setminus D$ are $\sigma_1$ and $\sigma_2$ ($\sigma_1 \neq \sigma_2$), respectively. Let $\sigma= \sigma_1 \chi(D) + \sigma_2 \chi(\Omega \setminus D)$ where $\chi(D)$ is the characteristic function of $D$ and the potential $V$ be the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\ V=V^0 \quad \mbox{on } \partial\Omega \end{array} \right. \eeq{In1} for some Dirichlet data (voltage) $V^0$ on $\partial\Omega$. Then the measurement of current (the Neumann data) is $q:= \sigma \frac{\partial V}{\partial {\bf n}}$ on $\partial\Omega$. (Throughout this paper $\frac{\partial}{\partial{\bf n}}$ denotes the normal derivative.) The problem is to estimate the volume $|D|$ of the inclusion using the boundary data $(V^0, q)$ for finitely many voltages, say $V^0=V_1^0, \ldots, V_n^0$. If the Neumann boundary condition $\sigma \frac{\partial V}{\partial {\bf n}} =q$ is prescribed on $\partial\Omega$ instead of the Dirichlet condition, then the measurement is $V^0:= V|_{\partial\Omega}$. The purpose of this paper is to consider this problem and derive optimal upper and lower bounds for the volume fraction of inclusions in two dimensions. In fact, we deal with a more general situation where $\Omega$ is a two-phase mixture in which phase 1 has conductivity $\sigma_1$ and phase 2 has conductivity $\sigma_2$ ($\sigma_1 > \sigma_2$), so that the conductivity distribution $\sigma$ of $\Omega$ is given by $\sigma({\bf x}) = \sigma_1 \chi_1({\bf x}) + \sigma_2 \chi_2({\bf x})$ where $\chi_j$ is the characteristic function of phase $j$ for $j=1,2$, {\it i.e.}, \begin{equation} \chi_1 ({\bf x})= 1- \chi_2({\bf x}) = \left\{ \begin{array}{l} 1 \quad \mbox{in phase 1}, \\ 0 \quad \mbox{in phase 2}. \end{array} \right. \eeq{In2} We derive optimal upper and lower bounds for the volume fraction $f_1$ of phase 1 ($f_1= \frac{1}{|\Omega|} \int_{\Omega} \chi_1 ({\bf x})$) using boundary measurements corresponding to either a pair of Dirichlet data ($V_1^0$ and $V_2^0$) or a pair of Neumann data ($q_1$ and $q_2$) on $\partial\Omega$. The bounds are optimal in the sense that they are attained by some inclusions or configurations. The bounds can be easily computed from the boundary measurements. In fact, they are given by two quantities: the measurement (or response) matrix $A=(a_{ij})_{i,j=1,2}$ where \begin{equation} a_{ij} := \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j \eeq{In3} and \begin{equation} b_D := \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}} \eeq{In4} if the Dirichlet data are used. Here and throughout this paper, $\frac{\partial}{\partial {\bf t}}$ denotes the tangential derivative along $\partial\Omega$ in the positive orientation. If the Neumann data are used, then $b_D$ is replaced with \begin{equation} b_N := \frac{1}{|\Omega|} \int_{\partial\Omega} q_1({\bf x}) (\int_{{\bf x}_0}^{{\bf x}} q_2).
\eeq{In5} where ${\bf x}_0\in\partial\Omega$ and the last integral is taken along the boundary $\partial\Omega$. See Theorems \ref{thm:LB1} and \ref{thm:UB1}. Some significant results on the problem of estimating the volume of an inclusion using boundary measurements are as follows. Kang-Seo-Sheen \cite{KSS97}, Alessandrini-Rosset \cite{AR98}, and Alessandrini-Rosset-Seo \cite{ARS00} obtained upper and lower bounds for the volume of the inclusion. However, their bounds involve constants which are not easy to determine, and hence it is not possible to compare them with the bounds of this paper. It is worth emphasizing that these results use only a single measurement. Another important result on volume estimation is that of Capdeboscq-Vogelius \cite{CV022, CV03}. They found, using the Lipton bounds on polarization tensors \cite{Lipton93}, upper and lower estimates for the volume of inclusions occupying a low volume fraction, which are optimal bounds in the asymptotic limit as the volume fraction tends to zero. Recently it was recognised by Milton \cite{milt11} that bounds on the response of two-phase periodic composites could be easily used to bound the multi-measurement response of two-phase bodies when special boundary conditions are imposed (see \eq{HS6} and \eq{HS10} below) and that these could be used in an inverse fashion to bound the volume fraction. As shown here, those bounds coincide exactly with the Capdeboscq-Vogelius bounds in the asymptotic limit as the volume fraction tends to zero. The bounds obtained in this paper allow for more general boundary conditions and we emphasize that they are optimal for any volume fraction. They reduce to those of Milton for the special boundary conditions, but have the advantage of being able to utilize the same set of measurements for both the upper and lower volume fraction bounds. We derive the bounds using the translation method which in its simplest form is based on classical variational principles with null Lagrangians added, {\it i.e.}, non-linear functions of fields which may be integrated by parts and expressed in terms of boundary measurements. The translation method, developed by Murat and Tartar \cite{tar79,tar85,mutar85} and independently by Lurie and Cherkaev \cite{lucherk82,lucherk84}, is a powerful method for deriving bounds on effective tensors of composites. As shown by Murat and Tartar, it can be extended using the method of compensated compactness to allow for functions more general than null Lagrangians, namely quasiconvex functions. It is reviewed in the books \cite{cherk,milton,allaire,tartar}. The use of classical variational principles to determine information about the conductivity distribution inside a body from electrical impedance measurements was pioneered by Kohn and Berryman \cite{kohnber}. We continue our investigation by looking for necessary and sufficient conditions for the bounds to be attained. These are the exact analogs of the condition found by Grabovsky \cite{grab} for attainability of the translation bounds for composites (see also section 25.6 of \cite{milton}). It turns out that the upper bound is attained if and only if the field in phase 1 is uniform and the lower bound is attained if and only if the field in phase 2 is uniform. This means that if phase 1 is an inclusion, the upper bound is attained if the field inside the inclusion is uniform. However, the lower bound can only be approached, since no boundary data generate a nonzero uniform field outside the inclusion.
The lower bound (for $f_1$) can be attained for the configuration where phase 2 is an inclusion. There are plenty of inclusions inside which the field is uniform for some boundary conditions. We call such inclusions E$_\Omega$-inclusions. They include E-inclusions, which were named in \cite{ljl07}. An inclusion $E$ is called an E-inclusion if the field inside $E$ is uniform for any uniform loading at infinity. More precisely, E-inclusions are such that if $V$ is the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot (\sigma_1 \chi(E) + \sigma_2 \chi({\mathbb R}^2 \setminus E)) \nabla V=0 \quad \mbox{in } {\mathbb R}^2, \\ V({\bf x}) - {\bf a} \cdot {\bf x} = O(|{\bf x}|^{-1}) \ \ \mbox{as } |{\bf x}| \to \infty, \end{array} \right. \eeq{In6} then $-\nabla V$ is constant in $E$ for any direction ${\bf a}$. If an E-inclusion $E$ is simply connected, then $E$ must be an ellipse (an ellipsoid in three dimensions). This was known as Eshelby's conjecture \cite{esh61} and was resolved by Sendeckyj in two dimensions \cite{sen70} (see also \cite{KM06, liu07} for different proofs), and by Kang-Milton \cite{KM07} and Liu \cite{liu07} in three dimensions. There are E-inclusions with multiple components \cite{che74, liu07, KKM07}. There are also inclusions other than E-inclusions inside which the field is uniform. For example, if $\Omega$ contains a connected component, say $E$, of an E-inclusion with multiple components, then $E$ is an E$_\Omega$-inclusion. More generally, if $E$ is an E$_\Omega$-inclusion and $\Psi\subset\Omega$, then the field in $E\cap\Psi$ will be uniform when appropriate boundary conditions are imposed at the boundary of $\Psi$. We perform some numerical experiments to demonstrate how good the bounds are for inclusions. Special attention is paid to how the bounds vary with certain parameters, such as the conductivity, the volume fraction and the distance from the boundary. We also look at the role of the boundary data. This paper is organized as follows. In the next section we derive the lower and upper bounds on the volume fraction. In section 3, we obtain conditions for these bounds to be attained, and then in section 4, we show that if the field is uniform in phase 1 then the upper bound is attained and if the field is uniform in phase 2 then the lower bound is attained. In section 5, we obtain different sufficient conditions for the bounds to be attained. Section 6 is devoted to the asymptotic analysis of the bounds when the volume fraction tends to zero. Numerical results are presented in section 7. In section 8 we show how to construct a wide variety of simply connected E$_\Omega$-inclusions, following the approach outlined in section 23.9 of \cite{milton}. We emphasize that the method of this paper (the translation method) works in three dimensions as well. The results in three dimensions will be presented in a forthcoming paper. \section{Translation bounds in two dimensions} \setcounter{equation}{0} In this section we derive upper and lower bounds on $f_1$ (the volume fraction of the phase with higher conductivity) using pairs of Cauchy data. Each bound requires two pairs of Cauchy data. The derivation in this section is based on the translation method, and parallels the treatment given by Murat and Tartar \cite{tar79,tar85,mutar85} and Lurie and Cherkaev \cite{lucherk82,lucherk84}. \subsection{Lower bound} Consider two potentials satisfying \begin{equation} \nabla \cdot \sigma \nabla V_j =0 \quad \mbox{in } \Omega, \quad j=1,2.
\eeq{LB1} Let \begin{equation} {\bf j}_j ({\bf x}) = - \sigma({\bf x}) \nabla V_j({\bf x}), \quad j=1,2. \eeq{LB2} We want to use information about two pairs of Cauchy data $(V_1^0, q_1:=-{\bf j}_1 \cdot {\bf n})$ and $(V_2^0, q_2:=-{\bf j}_2 \cdot {\bf n})$ on $\partial\Omega$ to generate a lower bound on $f_1$. Using the boundary data we can compute \begin{equation} \langle {\bf j}_i \rangle := \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_i = -\frac{1}{|\Omega|} \int_{\partial\Omega} {\bf x} q_i , \quad i=1,2. \eeq{LB3} We assume that $\langle {\bf j}_1 \rangle$ and $\langle {\bf j}_2 \rangle$ are linearly independent. Then, by taking linear combinations of the old potentials if necessary we may assume \begin{equation} \langle {\bf j}_1 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \langle {\bf j}_2 \rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix} . \eeq{LB4} With \begin{equation} R_\bot = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \eeq{LB5} let us introduce a $4 \times 4$ matrix \begin{equation} L_c({\bf x}):= \begin{bmatrix} \sigma^{-1} & c R_\bot \\ - c R_\bot & \sigma^{-1} \end{bmatrix} \eeq{LB6} where the constant $c$ is chosen so that $L_c({\bf x}) \ge 0$ for all ${\bf x}$. Here we assume that $\sigma$ is an anisotropic conductivity (matrix). With the constants $k_1$, $k_2$, $k_3$, $k_4$, define a 4-dimensional vector $J$ by \begin{equation} J:= \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} . \eeq{LB7} We then consider \begin{equation} W_c := \frac{1}{|\Omega|} \int_\Omega J \cdot L_c ({\bf x}) J. \eeq{LB8} Define a $2 \times 2$ matrix $A=(a_{ij})$, which we call the response (or measurement) matrix, by \begin{equation} a_{ij} := \frac{1}{|\Omega|} \int_\Omega {\bf j}_i \cdot \sigma^{-1} {\bf j}_j = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j, \quad i,j=1,2, \eeq{LB9} and \begin{equation} b:= \frac{1}{2|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2 - {\bf j}_2 \cdot R_\bot {\bf j}_1 = \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2 . \eeq{LB10} Since \begin{equation} \int_{\Omega} {\bf j}_i \cdot R_\bot {\bf j}_i =0, \quad i=1,2, \eeq{LB11} one can see that \begin{equation} W_c = \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} \cdot D_c \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} \eeq{LB12} where \begin{equation} D_c = \begin{bmatrix} a_{11} & a_{12} & 0 & cb \\ a_{12} & a_{22} & -cb & 0 \\ 0 & -cb & a_{11} & a_{12} \\ cb & 0 & a_{12} & a_{22} \end{bmatrix} . \eeq{LB13} We emphasize that $W_c$ can be computed from the boundary measurements. In fact, since $\nabla \times R_{\bot} {\bf j}_i =0$, there are potentials $\varphi_i$ such that \begin{equation} R_{\bot} {\bf j}_i = \nabla \varphi_i. \eeq{LB14} Moreover, if ${\bf t}$ is the unit tangent vector field on $\partial\Omega$ in the positive orientation, then \begin{equation} {\bf t} \cdot \nabla \varphi_i = R_\bot^T {\bf t} \cdot {\bf j}_i = - {\bf j}_i \cdot {\bf n} = q_i \quad \mbox{on } \partial\Omega \eeq{LB15} (T for the transpose), and hence the boundary value of $\varphi_i$ which we denote by $\varphi_i^0$ is given by \begin{equation} \varphi_i^0 ({\bf x})= \int_{{\bf x}_0}^{{\bf x}} q_i \eeq{LB16} where the integration is along $\partial\Omega$ in the positive orientation (counterclockwise). Hence \begin{equation} b = - \frac{1}{|\Omega|} \int_{\partial\Omega} q_1 \varphi_2^0 = \frac{1}{|\Omega|} \int_{\partial\Omega} q_2 \varphi_1^0. 
\eeq{LB17} Since \begin{align} W_c & = \frac{1}{|\Omega|} \int_{\Omega} (k_1 {\bf j}_1 + k_2 {\bf j}_2) \sigma^{-1} (k_1 {\bf j}_1 + k_2 {\bf j}_2) + (k_3 {\bf j}_1 + k_4 {\bf j}_2) \sigma^{-1} (k_3 {\bf j}_1 + k_4 {\bf j}_2) \nonumber \\ &\quad + 2c (k_1k_4 -k_2k_3) b, \label{LB18} \end{align} we have the variational principle \begin{equation} W_c= \min_{\displaystyle \nabla \cdot \underline{{\bf j}_1} = \nabla \cdot\underline{{\bf j}_2}=0 \atop \displaystyle \underline{{\bf j}_1} \cdot {\bf n}= -q_1 , \ \underline{{\bf j}_2} \cdot {\bf n}= -q_2} \frac{1}{|\Omega|} \int_{\Omega} \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} . \eeq{LB19} One can easily see from the constraints that \begin{equation} \langle \underline{{\bf j}_i} \rangle =\frac{1}{|\Omega|} \int_{\partial\Omega} -{\bf x} q_i = \langle {\bf j}_i \rangle . \eeq{LB20} So if we replace the constraints by the weaker constraint that \begin{equation} \langle \underline{{\bf j}_i} \rangle = \langle {\bf j}_i \rangle , \quad i=1,2, \eeq{LB21} then we get \begin{equation} W_c \ge \min_{\displaystyle \underline{{\bf j}_1}, \underline{{\bf j}_2} \atop \displaystyle \langle \underline{{\bf j}_i} \rangle = \langle {\bf j}_i \rangle} \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle . \eeq{LB22} In order to find the minimum, we first observe that at the minimum \begin{equation} \int_{\Omega} \begin{bmatrix} k_1 \psi_1 + k_2 \psi_2 \\ k_3 \psi_1 + k_4 \psi_2 \end{bmatrix} \cdot L_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} =0 \eeq{LB23} for any (vector-valued) functions $\psi_1, \psi_2$ satisfying $\langle \psi_1 \rangle = \langle \psi_2 \rangle =0$, which implies \begin{equation} \displaystyle L_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} = \mu \ (\mbox{a constant vector}).
\eeq{LB24} We then have \begin{equation} \left\langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle = \langle L_c^{-1} \rangle \mu \eeq{LB25} Thus the minimum is given by \begin{align} & \left\langle \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \cdot L_c \begin{bmatrix} k_1 \underline{{\bf j}_1} + k_2 \underline{{\bf j}_2} \\ k_3 \underline{{\bf j}_1} + k_4 \underline{{\bf j}_2} \end{bmatrix} \right\rangle \nonumber \\ & = \langle \mu \cdot L_c^{-1} \mu \rangle = \left \langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle \cdot \langle L_c^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {\bf j}_1 + k_2 {\bf j}_2 \\ k_3 {\bf j}_1 + k_4 {\bf j}_2 \end{bmatrix} \right\rangle, \label{LB26} \end{align} which implies, thanks to (\ref{LB4}), that \begin{equation} W_c \ge \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} \cdot \langle L_c^{-1} \rangle^{-1} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} . \eeq{LB27} Thus we have \begin{equation} D_c \ge \langle L_c^{-1} \rangle^{-1} . \eeq{LB28} Let us now assume that $\sigma$ is isotropic so that \begin{equation} L_c= \begin{bmatrix} \sigma^{-1} & 0 & 0 & c \\ 0 & \sigma^{-1} & -c & 0 \\ 0 & -c & \sigma^{-1} & 0 \\ c & 0 & 0 & \sigma^{-1} \end{bmatrix} , \eeq{LB29} and \begin{equation} \langle L_c^{-1} \rangle = \left\langle \frac{1}{(\sigma^{-2}-c^2)} \begin{bmatrix} \sigma^{-1} & 0 & 0 & -c \\ 0 & \sigma^{-1} & c & 0 \\ 0 & c & \sigma^{-1} & 0 \\ -c & 0 & 0 & \sigma^{-1} \end{bmatrix} \right\rangle . \eeq{LB30} Since \begin{equation} \begin{bmatrix} Q^T & 0 \\ 0 & Q^T \end{bmatrix} \langle L_c^{-1} \rangle^{-1} \begin{bmatrix} Q & 0 \\ 0 & Q \end{bmatrix} = \langle L_c^{-1} \rangle^{-1} \eeq{LB31} for any rotation $Q$, we obtain from (\ref{LB28}) that \begin{equation} \begin{bmatrix} Q^T & 0 \\ 0 & Q^T \end{bmatrix} D_c \begin{bmatrix} Q & 0 \\ 0 & Q \end{bmatrix} \ge \langle L_c^{-1} \rangle^{-1} . \eeq{LB32} In particular, we may choose $Q$ so that \begin{equation} Q^T \begin{bmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{bmatrix} Q = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \eeq{LB33} where $\lambda_1 \ge \lambda_2$ are eigenvalues of the response matrix $(a_{ij})$. Then by taking the inverse of both sides of (\ref{LB32}) we get \begin{equation} \frac{1}{(\lambda_1 \lambda_2 - c^2 b^2)} \begin{bmatrix} \lambda_2 & 0 & 0 & -cb \\ 0 & \lambda_1 & cb & 0 \\ 0 & cb & \lambda_2 & 0 \\ -cb & 0 & 0 & \lambda_1 \end{bmatrix} \le \langle L_c^{-1} \rangle . \eeq{LB34} So we get the inequality \begin{equation} \frac{1}{(\lambda_1 \lambda_2 - c^2 b^2)} {\bf v} \cdot \begin{bmatrix} \lambda_2 & -cb \\ -cb & \lambda_1 \end{bmatrix} {\bf v} \le \left\langle \frac{1}{(\sigma^{-2} - c^2)} {\bf v} \cdot \begin{bmatrix} \sigma^{-1} & -c \\ -c & \sigma^{-1} \end{bmatrix} {\bf v} \right \rangle \eeq{LB35} for any vector ${\bf v}$. Now suppose that the medium is 2-phase, with $\sigma_1 > \sigma_2$. In this case $L_c({\bf x}) > 0$ as long as $c < \sigma_1^{-1}$. We take the limit as $c$ approaches $\sigma_1^{-1}$. 
Then \begin{equation} \frac{{\bf v}}{(\sigma_1^{-2} - c^2)} \cdot \begin{bmatrix} \sigma_1^{-1} & -c \\ -c & \sigma_1^{-1} \end{bmatrix} {\bf v} \eeq{LB36} becomes infinite unless ${\bf v}$ is proportional to $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and when ${\bf v}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ \begin{equation} \frac{{\bf v}}{(\sigma^{-2} - c^2)} \cdot \begin{bmatrix} \sigma^{-1} & -c \\ -c & \sigma^{-1} \end{bmatrix} {\bf v} = \frac{2(\sigma^{-1} -c)}{\sigma^{-2}-c^2}= \frac{2}{\sigma^{-1} + c} \eeq{LB37} approaches $\sigma_1$ in phase 1 and $2/(\sigma_1^{-1}+\sigma_2^{-1})$ in phase 2. Hence the bound in (\ref{LB35}) reduces to \begin{align} \frac{\lambda_1 + \lambda_2 - 2b/\sigma_1}{\lambda_1 \lambda_2 - b^2/\sigma_1^2} & \le f_1 \sigma_1 + \frac{2f_2}{1/\sigma_1 + 1/\sigma_2} \\ & = f_1 \sigma_1 + \frac{2f_2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \\ & = f_1 \frac{\sigma_1 (\sigma_1 -\sigma_2)}{(\sigma_1 +\sigma_2)} + \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2}, \label{LB37a} \end{align} which gives the desired lower bound on the volume fraction: \begin{equation} f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\lambda_1 + \lambda_2 - 2b/\sigma_1}{\lambda_1 \lambda_2 - b^2/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right], \eeq{LB38} or \begin{equation} f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits A - 2b/\sigma_1}{\det A - b^2/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right], \eeq{LB39} where the matrix $A$ is defined by (\ref{LB9}). We emphasize that the righthand side of (\ref{LB39}) can be computed by the boundary measurements. In fact, $A$ is computed by using (\ref{LB9}) and $b$ using (\ref{LB17}) under the condition (\ref{LB4}). In general, if Neumann data $q_1$ and $q_2$ do not satisfy \eq{LB4}, then let \begin{equation} P_N:= \begin{bmatrix} \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} q_1 {\bf x} & \ \ \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} q_2 {\bf x} \end{bmatrix}^{-1} . \eeq{LB40} Then $\tilde{{\bf j}}_1$ and $\tilde{{\bf j}}_2$ defined by \begin{equation} \tilde{{\bf j}}_i=\sum_{m=1}^2\left[ P_N\right]_{im} {\bf j}_m,\quad i=1,2 \eeq{LB41} satisfy \eq{LB4}. Since \begin{equation} \left[ \frac{1}{|\Omega|} \int_{\Omega} \tilde{{\bf j}_i} \cdot \sigma^{-1} \tilde{{\bf j}_j} \right]_{i,j=1,2} = P_N A P_N^T \eeq{LB42} and \begin{equation} \frac{1}{|\Omega|} \int_{\Omega} \tilde{{\bf j}}_1 \cdot R_\bot \tilde{{\bf j}}_2 = \frac{\det P_N}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2, \eeq{LB43} we obtain the following theorem from \eq{LB39}. \begin{theorem}\label{thm:LB1} Let $P_N$ be given by \eq{LB40} and \begin{equation} b_N := \frac{1}{|\Omega|} \int_{\Omega} {\bf j}_1 \cdot R_\bot {\bf j}_2 = \frac{1}{|\Omega|} \int_{\partial\Omega} q_1({\bf x}) (\int_{{\bf x}_0}^{{\bf x}} q_2). \eeq{LB44} Then, \begin{equation} f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits (P_N A P_N^T) - 2 (\det P_N)b_N /\sigma_1}{(\det P_N)^2 (\det A - b_N^2/\sigma_1^2)} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right]. \eeq{LB45} \end{theorem} \subsection{Upper bound} We now derive the upper bound on $f_1$. Let us introduce a $4 \times 4$ matrix \begin{equation} L'_c ({\bf x}):= \begin{bmatrix} \sigma & c R_\bot \\ - c R_\bot & \sigma \end{bmatrix} \eeq{UB1} where the constant $c$ is chosen so that $L'_c({\bf x}) \ge 0$ for all ${\bf x}$. 
With the constants $k_1$, $k_2$, $k_3$, $k_4$ and \begin{equation} {\bf e}_j ({\bf x}) = - \nabla V_j({\bf x}), \quad j=1,2, \eeq{UB2-1} define a 4-dimensional vector $E$ by \begin{equation} E:= \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} . \eeq{UB2} We then consider \begin{equation} W'_c:= \langle E \cdot L'_c E \rangle. \eeq{UB3} The minimization problem in this case is \begin{equation} W'_c \ge \min_{\displaystyle \underline{{\bf e}_1}, \underline{{\bf e}_2} \atop \displaystyle \langle \underline{{\bf e}_i} \rangle = \langle {\bf e}_i \rangle} \left\langle \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \cdot L'_c \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \right\rangle . \eeq{UB3-1} As with \eq{LB24}, one can show that at the minimum of the right hand side of \eq{UB3-1} \begin{equation} \displaystyle L'_c ({\bf x}) \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} = \mu \ (\mbox{a constant vector}) \eeq{UB3-2} and the minimum is given by \begin{align} \left\langle \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \cdot L'_c \begin{bmatrix} k_1 \underline{{\bf e}_1} + k_2 \underline{{\bf e}_2} \\ k_3 \underline{{\bf e}_1} + k_4 \underline{{\bf e}_2} \end{bmatrix} \right\rangle = \left\langle \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} \right\rangle \cdot \langle (L'_c)^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {\bf e}_1 + k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {\bf e}_2 \end{bmatrix} \right\rangle. \label{UB3-3} \end{align} Proceeding in exactly the same way as in the previous subsection (with $c$ approaching $\sigma_2$), we can derive `dual bounds': \begin{equation} \frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \le \frac{f_2}{\sigma_2} + \frac{2f_1}{\sigma_1+\sigma_2} \eeq{UB4} where \begin{equation} b' := \langle {\bf e}_1 \cdot R_\bot {\bf e}_2 \rangle \eeq{UB5} and \begin{equation} A = \begin{bmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{bmatrix} \eeq{UB6} in which \begin{equation} a_{ij} := \langle {\bf e}_i \cdot \sigma {\bf e}_j \rangle = \langle {\bf e}_i \cdot {\bf j}_j \rangle, \eeq{UB7} and linear combinations of the potentials have been chosen so that \begin{equation} \langle {\bf e}_1 \rangle = \frac{1}{|\Omega|} \int_{\partial\Omega} -V_1^0 {\bf n} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad \langle {\bf e}_2 \rangle = \frac{1}{|\Omega|} \int_{\partial\Omega} -V_2^0 {\bf n} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \eeq{UB8} Apart from this constraint, ${\bf e}_1$ and ${\bf e}_2$ are any fields solving \begin{equation} \nabla \cdot \sigma \nabla V_j =0 \quad \mbox{in } \Omega, \quad {\bf e}_j = - \nabla V_j. \eeq{UB9} One can obtain from (\ref{UB4}) the upper bound on $f_1$: \begin{equation} f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \right].
\eeq{UB10} We emphasize that $A$ and $b'$ can be computed from the boundary measurements: \begin{equation} a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j, \eeq{UB11} and \begin{equation} b' = \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf n} \cdot R_\bot {\bf e}_2 = - \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf t} \cdot {\bf e}_2 = \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}}. \eeq{UB12} More generally, if $V_1^0$ and $V_2^0$ do not satisfy \eq{UB8}, then we obtain the following theorem in the same way as before. \begin{theorem}\label{thm:UB1} Let \begin{equation} P_D:= \begin{bmatrix} \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} V_1^0 {\bf n} & \ \ \displaystyle \frac{-1}{|\Omega|} \int_{\partial\Omega} V_2^0 {\bf n} \end{bmatrix}^{-1} \eeq{UB20} and \begin{equation} b_D := \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}}. \eeq{UB21} Then \begin{equation} f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits (P_D A P_D^T) - 2 (\det P_D) b_D \sigma_2}{(\det P_D)^2 (\det A - b_D^2 \sigma_2^2)} \right]. \eeq{UB19} \end{theorem} \subsection{Special boundary data} In the special case where the Neumann data are given by \begin{equation} q_1 = - {\bf n} \cdot {\bf j}_1 = - {\bf n} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad q_2 = - {\bf n} \cdot {\bf j}_2 = - {\bf n} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \eeq{HS1} we have \begin{equation} b=1 \quad\mbox{and}\quad A=\bfm\sigma_N^{-1} \eeq{HS2} where $\bfm\sigma_N$ is the Neumann tensor, which is defined via the relation \begin{equation} \langle {\bf e} \rangle = \bfm\sigma_N^{-1} \langle {\bf j} \rangle, \eeq{HS3} when the Neumann data is given by $q=-{\bf n}\cdot{\bf v}$ for some constant vector ${\bf v}$. In fact, we have from \eq{LB14} and (\ref{LB17}) \begin{equation} b= \frac{1}{|\Omega|} \int_{\partial\Omega} ({\bf j}_1 \cdot {\bf n}) \varphi_2^0 = \frac{1}{|\Omega|} {\bf j}_1 \cdot \int_{\partial\Omega} {\bf n} \varphi_2^0 =- \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \langle \nabla \varphi_2 \rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot R_\bot \begin{bmatrix} 0 \\ 1 \end{bmatrix} =1, \eeq{HS4} and from (\ref{LB9}) \begin{equation} a_{ij} = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j = \frac{1}{|\Omega|} {\bf j}_j \cdot \int_{\partial\Omega} V_i^0 {\bf n} = \langle {\bf j}_i \rangle \cdot \langle {\bf e}_j \rangle = \langle {\bf j}_i \rangle \cdot \bfm\sigma_N^{-1} \langle {\bf j}_j \rangle . \eeq{HS5} So, the bound (\ref{LB39}) reduces to the bound \begin{equation} f_1 \ge \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_N^{-1} - 2/\sigma_1}{\det \bfm\sigma_N^{-1} - 1/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right] \eeq{HS6} of Milton \cite{milt11}. If the Dirichlet data take the special affine form \begin{equation} V_1^0 = - \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot {\bf x}, \quad V_2^0 = - \begin{bmatrix} 0 \\ 1 \end{bmatrix} \cdot {\bf x}, \eeq{HS7} one can prove in the same way that \begin{equation} b'=1 \quad\mbox{and}\quad A=\bfm\sigma_D \eeq{HS8} where $\bfm\sigma_D$ is the Dirichlet tensor, which is defined via the relation \begin{equation} \bfm\sigma_D \langle {\bf e} \rangle = \langle {\bf j} \rangle \eeq{HS9} when the Dirichlet data $V^0$ is given by $-{\bf v} \cdot {\bf x}$ for some constant vector ${\bf v}$.
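Indeed, for the affine data \eq{HS7} one has $V_1^0=-x$ and $\partial V_2^0/\partial {\bf t} = -t_2$, so \eq{UB12} and Green's theorem give
\[
b' = \frac{1}{|\Omega|} \int_{\partial\Omega} V_1^0 \frac{\partial V_2^0}{\partial {\bf t}} = \frac{1}{|\Omega|} \int_{\partial\Omega} x \, t_2 \, ds = \frac{1}{|\Omega|} \int_{\partial\Omega} x \, dy = 1,
\]
while $A=\bfm\sigma_D$ follows similarly from \eq{UB11}.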
Thus the bound (\ref{UB10}) reduces to the other bound \begin{equation} f_1 \le \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_D - 2 \sigma_2}{\det \bfm\sigma_D - \sigma_2^2} \right] \eeq{HS10} of Milton \cite{milt11}. \section{Attainability conditions of the bounds} In this section we derive conditions on the fields for the bounds in \eq{LB39} and \eq{UB10} to be attained. We will show in the next section that the bounds are actually attained by certain inclusions. The derivation of the lower bound on $f_1$, and in particular \eq{LB24} and \eq{LB25}, suggests that if there is no column vector ${\bf K}=(k_1,k_2,k_3,k_4)^T$, with say $|{\bf K}|^2=k_1^2+k_2^2+k_3^2+k_4^2=1$, such that \begin{equation} L_{\sigma_1^{-1}} \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} = \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle , \eeq{ACB1} then the lower bound will not be attained. Here, $\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ is understood as the limit of $\langle L_{c}^{-1} \rangle^{-1}$ as $c$ tends to $\sigma_1^{-1}$. To prove this, fix $c_0 < \sigma_1^{-1}$ and let \begin{equation} {\bf F}_c ({\bf x}):= L_{c}({\bf x}) \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} - \langle L_{c}^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle \eeq{ACB2} for $c$ such that $c_0 \le c < \sigma_1^{-1}$. Then, we have \begin{align} & \langle {\bf F}_c \cdot L_{c_0}^{-1} {\bf F}_c \rangle \le \langle {\bf F}_c \cdot L_{c}^{-1} {\bf F}_c \rangle \nonumber \\ & = \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \cdot L_{c} \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle - \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle \cdot \langle L_{c}^{-1} \rangle^{-1} \left\langle \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} \right\rangle \nonumber \\ &={\bf K}\cdot D_{c}{\bf K}-{\bf K}\cdot\langle L_{c}^{-1}\rangle^{-1}{\bf K}. \label{ACB3} \end{align} Letting $c\to \sigma_1^{-1}$, we see that if ${\bf F}_{\sigma_1^{-1}}$ is non-zero (in the $L^2$ norm) for all ${\bf K}$ with $|{\bf K}|=1$, then the right-hand side of \eq{ACB3} is non-zero in this limit, or equivalently $D_{\sigma_1^{-1}} > \alpha\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ for some $\alpha>1$. It follows that equality is not achieved in \eq{LB35} and hence in \eq{LB39}, {\it i.e.}, the lower bound on the volume fraction is not attained. Conversely, suppose we have equality in \eq{ACB1} for some ${\bf K}\ne 0$. Then, \begin{equation} {\bf K}\cdot D_{\sigma_1^{-1}}{\bf K}={\bf K}\cdot\langle (L_{\sigma_1^{-1}})^{-1}\rangle^{-1}{\bf K}, \eeq{ACB4} and as $D_{\sigma_1^{-1}} \ge \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ it follows that $D_{\sigma_1^{-1}}-\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ must have zero determinant.
A simple calculation shows that \begin{equation} \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} = \frac{1}{g} \begin{bmatrix} I & R_\bot \\ -R_\bot & I \end{bmatrix} \eeq{ACB5} where \begin{equation} g:= f_1 \frac{\sigma_1(\sigma_1 - \sigma_2)}{\sigma_1 + \sigma_2}+ \frac{2 \sigma_1\sigma_2}{\sigma_1 + \sigma_2}. \eeq{ACB6} Hence the matrix \begin{equation} \begin{bmatrix} Q^T & 0 \\ 0 & Q^T \end{bmatrix}[D_{\sigma_1^{-1}}-\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}]\begin{bmatrix} Q & 0 \\ 0 & Q \end{bmatrix} = \begin{bmatrix} \lambda_1-1/g & 0 & 0 & b/\sigma_1-1/g \\ 0 & \lambda_2-1/g & -b/\sigma_1+1/g & 0 \\ 0 & -b/\sigma_1+1/g & \lambda_1-1/g & 0 \\ b/\sigma_1-1/g & 0 & 0 & \lambda_2-1/g \end{bmatrix} \eeq{ACB7} must have zero determinant, which implies \begin{equation} \lambda_1\lambda_2-(\lambda_1+\lambda_2)/g-b^2/\sigma_1^2+2b/(g\sigma_1)=0. \eeq{ACB8} Thus equality holds in \eq{LB37a} and the lower bound on $f_1$ is attained. In summary, the attainability condition is that for some $k_1$, $k_2$, $k_3$ and $k_4$, \begin{equation} L_{\sigma_1^{-1}} {\bf J} = \langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1} \langle {\bf J} \rangle \eeq{AC8} where \begin{equation} {\bf J}= \begin{bmatrix} k_1 {{\bf j}_1} + k_2 {{\bf j}_2} \\ k_3 {{\bf j}_1} + k_4 {{\bf j}_2} \end{bmatrix} . \eeq{AC9} From \eq{ACB5} a vector ${\bf U}$ in the range of $\langle (L_{\sigma_1^{-1}})^{-1} \rangle^{-1}$ takes the form \begin{equation} {\bf U}= \begin{bmatrix} a_1 \\ a_2 \\ -a_2 \\ a_1\end{bmatrix} \eeq{AC12} for some $a_1$ and $a_2$. We have the following theorem. \begin{theorem} The attainability condition \eq{AC8} for the lower bound holds if and only if \begin{equation} L_{\sigma_1^{-1}} {\bf J} = {\bf U} \eeq{AC13} for some ${\bf U}$ of the form \eq{AC12}. \end{theorem} \proof The `only if' part is trivial. Suppose that \eq{AC13} holds. We write $L=L_{\sigma_1^{-1}}$ for ease of notation. One can see from the definition \eq{LB6} of $L$ that $L_1$ (=$L$ on phase 1) and $L_2$ (=$L$ on phase 2) can be simultaneously diagonalized. Thus in that basis \eq{AC13} reads \begin{equation} \left[ \lambda_1^{(j)} \chi_1({\bf x}) + \lambda_2^{(j)} \chi_2({\bf x}) \right] J^{(j)}({\bf x}) = U^{(j)}, \quad j=1,2,3,4. \eeq{AC14} Here $\lambda_1^{(j)}$ and $\lambda_2^{(j)}$ are the eigenvalues of $L_1$ and $L_2$, respectively, and $J^{(j)}({\bf x})$ and $U^{(j)}$ are the $j$-th components of ${\bf J}$ and ${\bf U}$ in the new basis. Since $L_1$ has rank 2, two of the eigenvalues $\lambda_1^{(j)}$ are zero, say $\lambda_1^{(3)}$ and $\lambda_1^{(4)}$, and hence $\chi_1 J^{(3)}({\bf x})$ and $\chi_1 J^{(4)}({\bf x})$ may depend on ${\bf x}$. However, $J^{(j)}({\bf x})$ for $j=1,2$ is piecewise constant, and by \eq{AC14}, \begin{equation} J^{(j)}({\bf x}) = \left\{ \begin{array}{l} U^{(j)}/\lambda_1^{(j)} \quad \mbox{in phase 1} \\ U^{(j)}/\lambda_2^{(j)} \quad \mbox{in phase 2} . \end{array} \right. \eeq{AC16} Thus we have \begin{equation} \langle J^{(j)} \rangle = \left[ f_1 /\lambda_1^{(j)} + f_2 /\lambda_2^{(j)} \right] U^{(j)} = \langle L^{-1} \rangle_{jj} U^{(j)}, \quad j=1,2. \eeq{AC17} Here $\langle L^{-1} \rangle_{jj}$ is the $(j,j)$-entry of the diagonal matrix $\langle L^{-1} \rangle$. So, \begin{equation} U^{(j)}= (\langle L^{-1} \rangle^{-1})_{jj} \langle J^{(j)} \rangle, \quad j=1,2. \eeq{AC18} If $j=3,4$, then $(\langle L^{-1} \rangle^{-1})_{jj}=0$ and ${\bf U}$, which belongs to the range of $\langle L^{-1} \rangle^{-1}$, satisfies $U^{(3)}=U^{(4)}=0$, and hence \eq{AC18} holds for all $j$.
Therefore \begin{equation} {\bf U}= \langle L^{-1} \rangle^{-1} \langle {\bf J} \rangle, \eeq{AC19} and hence \eq{AC8} holds. \qed Similarly one can show that the attainability condition for the upper bound is that for some $k_1$, $k_2$, $k_3$ and $k_4$ \begin{equation} L'_{\sigma_2} {\bf E} = \langle (L'_{\sigma_2})^{-1} \rangle^{-1} \langle {\bf E} \rangle \eeq{AC20} where \begin{equation} {\bf E} = \begin{bmatrix} k_1 {{\bf e}_1}+ k_2 {\bf e}_2 \\ k_3 {\bf e}_1 + k_4 {{\bf e}_2} \end{bmatrix}, \eeq{AC21} and it is equivalent to \begin{equation} L'_{\sigma_2} {\bf E} = {\bf U} \eeq{AC22} for some ${\bf U}$ of the form \eq{AC12}. We emphasize that the attainability conditions \eq{AC13} and \eq{AC22} are precisely analogous to those found by Grabovsky \cite{grab} for composites. \section{Attainability and uniformity} We now investigate the attainability condition more closely. \eq{AC22} says that the field ${\bf E}$ is uniform in phase 1. This condition alone guarantees that the upper bound is attained. In fact, we show in this section that even more is true: if the field is uniform in phase 1 for a single boundary data $V^0=V^0_1$ then there is a $V^0_2$ such that the upper bound is attained. \begin{theorem}\label{thm:AU1} Suppose that phases 1 and 2 have finitely many connected (possibly multiply connected) components and the interfaces are Lipschitz continuous. Let $V$ be the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\ V=V^0 \quad \mbox{on } \partial\Omega. \end{array} \right. \eeq{AU1} If $-\nabla V$ is constant (the field is uniform) in phase 1 for some boundary data $V^0=V^0_1 \neq 0$, then there is a $V^0_2$ such that the upper bound is attained. \end{theorem} \proof Phase 1 can be broken into connected components $\Psi_1^{(\alpha)}$, $\alpha= 1, 2, \ldots, m$, and phase 2 can be broken into connected components $\Psi_2^{(\beta)}$, $\beta= 1, 2, \ldots, n$. If $\Psi_2^{(\beta)}$ has a boundary in common with $\Psi_1^{(\alpha)}$, we denote the common boundary by $\Gamma^{\alpha\beta}$. Let $V_\beta({\bf x})$ denote the potential $V({\bf x})$ inside $\Psi_2^{(\beta)}$. If $-\nabla V= \begin{bmatrix} e_1 \\ e_2 \end{bmatrix}$ in phase 1 for some constants $e_1$ and $e_2$, then \begin{equation} V({\bf x}) = -e_1 x - e_2 y + c_\alpha \eeq{AU2} for some constant $c_\alpha$ inside $\Psi_1^{(\alpha)}$ (where $c_\alpha=c_\gamma$ if $\Psi_1^{(\alpha)}$ touches $\Psi_1^{(\gamma)}$ at a common point), and the continuity of the potential on $\Gamma^{\alpha\beta}$ implies \begin{equation} V_\beta ({\bf x}) = -e_1 x - e_2 y + c_\alpha \quad \mbox{on } \Gamma^{\alpha\beta}. \eeq{AU3} Since $\nabla \cdot {\bf j}({\bf x})=0$ in $\Omega$, there is a continuous potential $W({\bf x})$ such that \begin{equation} {\bf j} ({\bf x}) = - \sigma_2 R_\bot \nabla W({\bf x}) \quad \mbox{in } \Omega. \eeq{AU4} In phase 1, inside $\Psi_1^{(\alpha)}$, we have \begin{equation} \nabla W({\bf x}) = \frac{\sigma_1}{\sigma_2} \begin{bmatrix} e_2 \\ -e_1 \end{bmatrix}, \eeq{AU5} and hence \begin{equation} W({\bf x}) = \frac{\sigma_1}{\sigma_2} (e_2 x - e_1 y) + d_\alpha \eeq{AU6} for some constant $d_\alpha$ (where, by continuity of the potential $W$, $d_\alpha=d_\gamma$ if $\Psi_1^{(\alpha)}$ touches $\Psi_1^{(\gamma)}$ at a common point). Let $W_\beta({\bf x})$ denote the potential $W({\bf x})$ inside $\Psi_2^{(\beta)}$.
Since $W({\bf x})$ is continuous, \begin{equation} W_\beta ({\bf x}) = \frac{\sigma_1}{\sigma_2} (e_2 x - e_1 y) + d_\alpha \quad\mbox{on } \Gamma^{\alpha\beta}. \eeq{AU8} Note that inside $\Psi_2^{(\beta)}$, \begin{equation} \nabla V({\bf x})= -\frac{1}{\sigma_2} {\bf j}({\bf x}) = R_\bot \nabla W({\bf x}), \eeq{AU9} {\it i.e.}, $V_{\beta, x}=W_{\beta,y}$ and $V_{\beta, y}=-W_{\beta,x}$, which are the Cauchy-Riemann equations. Thus $V_\beta+i W_\beta$ is an analytic function of $z=x+iy$. Now consider \begin{align} V'_\beta({\bf x}) &:= -W_\beta({\bf x}) + (\sigma_1/\sigma_2+1)(e_2 x - e_1 y) \label{AU10} \\ W'_\beta({\bf x}) &:= V_\beta({\bf x}) + (\sigma_1/\sigma_2+1)(e_1 x + e_2 y). \label{AU11} \end{align} Clearly \begin{equation} V'_\beta + i W'_\beta = i (V_\beta + i W_\beta) + (\sigma_1/\sigma_2+1)(e_2+ie_1)(x+iy) \eeq{AU12} is an analytic function of $z$. On $\Gamma^{\alpha\beta}$, we have \begin{align} V'_\beta &= - (\sigma_1/\sigma_2) (e_2 x- e_1 y) - d_\alpha + (\sigma_1/\sigma_2+1)(e_2 x - e_1 y) = e_2 x - e_1 y - d_\alpha, \label{AU13} \\ W'_\beta &= - e_1 x- e_2 y +c_\alpha + (\sigma_1/\sigma_2+1)(e_1 x + e_2 y)= (\sigma_1/\sigma_2) (e_1 x + e_2 y) + c_\alpha. \label{AU14} \end{align} So the conductivity equation $\nabla \cdot \sigma \nabla V=0$ is satisfied with potentials $V'$ and $W'$ defined by $V'=V'_\beta$, $W'=W'_\beta$ in $\Psi_2^{(\beta)}$, and \begin{equation} V'({\bf x})= e_2 x - e_1 y - d_\alpha, \quad W'({\bf x})= (\sigma_1/\sigma_2) (e_1 x + e_2 y) + c_\alpha \eeq{AU15} in $\Psi_1^{(\alpha)}$. Note that \begin{equation} -\nabla V'= \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix}. \eeq{AU16} We then have, in $\Psi_2^{(\beta)}$ \begin{align} L'_{\sigma_2} {\bf E} & = \begin{bmatrix} \sigma_2 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_2 I \end{bmatrix} \begin{bmatrix} -\nabla V \\ -\nabla V' \end{bmatrix} = \begin{bmatrix} \sigma_2 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_2 I \end{bmatrix} \begin{bmatrix} -\nabla V_\beta \\ \nabla W_\beta - (\sigma_1/\sigma_2 +1) \begin{bmatrix} e_2 \\ - e_1 \end{bmatrix} \end{bmatrix} \nonumber \\ & = \begin{bmatrix} - \sigma_2 (\nabla V_\beta - R_\bot \nabla W_\beta) + (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \\ \sigma_2 (R_\bot \nabla V_\beta + \nabla W_\beta) + (\sigma_1 + \sigma_2) \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix} \end{bmatrix} = (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \\ -e_2 \\ e_1 \end{bmatrix}, \label{AU17} \end{align} and in phase 1 \begin{align} L'_{\sigma_2} {\bf E} & = \begin{bmatrix} \sigma_1 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_1 I \end{bmatrix} \begin{bmatrix} -\nabla V \\ -\nabla V' \end{bmatrix} = \begin{bmatrix} \sigma_1 I & \sigma_2 R_\bot \\ - \sigma_2 R_\bot & \sigma_1 I \end{bmatrix} \begin{bmatrix} \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} \\ \begin{bmatrix} -e_2 \\ e_1 \end{bmatrix} \end{bmatrix} \nonumber \\ & = (\sigma_1 + \sigma_2) \begin{bmatrix} e_1 \\ e_2 \\ -e_2 \\ e_1 \end{bmatrix}. \label{AU18} \end{align} Thus $L'_{\sigma_2} {\bf E}={\bf U}$ where ${\bf U}$ is of the form \eq{AC12}. Hence the upper bound is attained when we take boundary data $V^0_1=V^0$ and $V^0_2=V'^0$. \qed Observe that the Dirichlet condition in \eq{AU1} may be replaced with the Neumann condition. One can prove in exactly the same way that the lower bound is attained if the field is uniform in phase 2.
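As an elementary sanity check of the signs in \eq{AU17} and \eq{AU18}, the phase-1 identity can be verified symbolically. The following sketch (in Python with SymPy) is illustrative only and assumes the convention $R_\bot = \left[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right]$, which is the one consistent with \eq{HS4}:
\begin{verbatim}
# Symbolic check of (AU18):
# L'_{sigma_2} E = (sigma_1+sigma_2)(e1, e2, -e2, e1)^T in phase 1,
# assuming the convention R_perp = [[0, 1], [-1, 0]].
import sympy as sp

s1, s2, e1, e2 = sp.symbols('sigma1 sigma2 e1 e2', real=True)
R  = sp.Matrix([[0, 1], [-1, 0]])   # 90-degree rotation R_perp
I2 = sp.eye(2)

# L'_{sigma_2} evaluated in phase 1 (sigma = sigma_1, c = sigma_2)
Lp = sp.Matrix(sp.BlockMatrix([[s1*I2, s2*R], [-s2*R, s1*I2]]))

E = sp.Matrix([e1, e2, -e2, e1])    # the uniform field in phase 1
U = (s1 + s2)*sp.Matrix([e1, e2, -e2, e1])

assert sp.simplify(Lp*E - U) == sp.zeros(4, 1)
\end{verbatim}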
\section{Attainability and analyticity} We have seen that uniformity and independence of the fields ${\bf e}_1=-\nabla V_1$ and ${\bf e}_2=-\nabla V_2$ in phase 1 are necessary and sufficient to ensure that the upper bound is attained. Now we will see that there is a condition on the potentials $V_1$ and $V_2$ in phase 2 which is also necessary and sufficient to ensure that the upper bound is attained. We assume that phase 2 is connected and completely surrounds each inclusion of phase 1. First suppose that the upper bound is attained. Then, given a constant $k$, there exist potentials $V$ and $V'$, which are linear combinations of the potentials $V_1$ and $V_2$, such that in phase 1 $-\nabla V= \begin{bmatrix} k \\ 0 \end{bmatrix}$ and $-\nabla V'= \begin{bmatrix} 0 \\ k \end{bmatrix}$. Thus the analysis of the previous section holds with $e_1=k$ and $e_2=0$. In particular, we may choose $k=1/(\sigma_1/\sigma_2+1)$ and, since in phase 2 $V+iW$ is an analytic function of $z$, it follows from \eq{AU10} that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 2. Conversely suppose there exist potentials $V$ and $V'$, which are linear combinations of the potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z$ in phase 2. Then the harmonic conjugate to $V$ in phase 2 is $-V'-y$ and the harmonic conjugate to $V'$ in phase 2 is $V+x$. Since by \eq{AU9} these harmonic conjugates can be identified with the potentials $W$ and $W'$, we have in phase 2 \begin{equation} W=-V'-y,\quad W'=V+x, \eeq{AE12} and in particular these identities hold on the boundary of an inclusion of phase 1. By \eq{AU4} inside that inclusion $V+i(\sigma_2/\sigma_1)W$ and $V'+i(\sigma_2/\sigma_1)W'$ are analytic functions of $z$. Therefore $$ V'+i(\sigma_2/\sigma_1)W'-i(\sigma_1/\sigma_2)(V+i(\sigma_2/\sigma_1)W)-i(x+iy) $$ is also an analytic function of $z$ inside the inclusion and from \eq{AE12} takes the value $$ i(\sigma_2/\sigma_1-1)[(\sigma_1/\sigma_2+1)V+x] $$ at the boundary of the inclusion. Since the only function which has zero real part on a closed curve, and which is analytic in the region it encloses, is an imaginary constant, we deduce that $(\sigma_1/\sigma_2+1)V+x$ is constant around the boundary of the inclusion and hence constant inside, {\it i.e.}, in the inclusion $-\nabla V= \begin{bmatrix} k \\ 0 \end{bmatrix}$ with $k=1/(\sigma_1/\sigma_2+1)$. The harmonic conjugate to $V$ inside the inclusion is then $-ky$ which can be identified with $(\sigma_2/\sigma_1)W$ (to within an additive constant). Then from the first condition in \eq{AE12} it follows that (to within an additive constant) $V'$ takes the value $-ky$ around the boundary of the inclusion and hence in its interior too, {\it i.e.}, in the inclusion $-\nabla V'= \begin{bmatrix} 0 \\ k \end{bmatrix}$. Hence the uniform field attainability condition is met, and the upper bound is attained. We summarize our findings as a theorem. \begin{theorem} Provided the body $\Omega$ consists of inclusions of phase 1 completely surrounded by phase 2, the upper bound is attained if and only if there exist potentials $V$ and $V'$, which are linear combinations of the potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 2. \end{theorem} Similarly we have the following theorem for the lower bound.
\begin{theorem} Provided the body $\Omega$ consists of inclusions of phase 2 completely surrounded by phase 1, the lower bound is attained if and only if there exist potentials $V$ and $V'$, which are linear combinations of the potentials $V_1$ and $V_2$, such that $V-iV'+x$ is an analytic function of $z=x+iy$ in phase 1. \end{theorem} \section{Asymptotic bounds for small volume fraction} Suppose that phase 1 occupies a region $\omega \subset \Omega$ satisfying \begin{equation} \mbox{dist} (\omega, \partial\Omega) \ge c \eeq{AB1} for some $c>0$. The purpose of this section is to compare the bounds \eq{LB39} and \eq{UB10} with the bounds obtained in \cite{CV03} when the volume $|\omega|$ of $\omega$ tends to $0$. Let $q$ be a function in $L^2(\partial\Omega)$ satisfying $\int_{\partial\Omega} q=0$. Let $V$ be the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\ \noalign{\smallskip} \displaystyle \sigma \frac{\partial V}{\partial {\bf n}} = q \quad \mbox{on } \partial \Omega, \quad (\int_{\partial\Omega} V =0), \end{array} \right. \eeq{AB2} and let $U$ be the solution to \eq{AB2} with $\sigma$ replaced with $\sigma_2$. It is proved in \cite{CV022} that, given a sequence $\omega_n$ satisfying \eq{AB1} and such that $|\omega_n| \to 0$, there is a subsequence, still denoted $\omega_n$, a probability measure $d\mu$ supported in the set $\{ x ~|~ \mbox{dist} (x, \partial\Omega) \ge c \}$, and a (pointwise) polarization tensor field $M({\bf x})$ such that if $V_n$ is the solution to \eq{AB2} when $\omega=\omega_n$, then \begin{equation} V_n ({\bf x}) - U({\bf x}) = -|\omega_n| \int_{\Omega} \nabla U({\bf z}) \cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d\mu({\bf z}) + o(|\omega_n|), \quad {\bf x} \in \partial\Omega, \eeq{AB3} where $N({\bf x},{\bf z})$ is the Neumann function for $\Omega$, {\it i.e.}, $U$ is given by \begin{equation} U({\bf z})= \int_{\partial\Omega} N({\bf x}, {\bf z}) q({\bf x}) ds({\bf x}). \eeq{AB4} Note that we have absorbed a factor of $\sigma_1-\sigma_2$ into the definition of $M$ given by Capdeboscq and Vogelius to be consistent with the conventional definition of polarization tensors. Let $V'_n$ be the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\ V = V^0 \quad \mbox{on } \partial \Omega \end{array} \right. \eeq{AB5} with $\omega=\omega_n$ and let $U'$ be the solution to \eq{AB5} with $\sigma$ replaced with $\sigma_2$. Then we have \begin{equation} \sigma_2 \frac{\partial V'_n}{\partial {\bf n}} ({\bf x}) - \sigma_2 \frac{\partial U'}{\partial {\bf n}} ({\bf x}) = |\omega_n| \int_{\Omega} \nabla U'({\bf z}) \cdot M({\bf z}) \nabla_z \frac{\partial }{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}) d \mu({\bf z}) + o(|\omega_n|), \quad {\bf x} \in \partial\Omega, \eeq{AB6} where $G({\bf x}, {\bf z})$ is the Green function for $\Omega$, {\it i.e.}, $U'$ is given by \begin{equation} U'({\bf z})= \int_{\partial\Omega} \frac{\partial}{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}) V^0({\bf x}) ds({\bf x}). \eeq{AB7} To see \eq{AB6}, let us define the Neumann-to-Dirichlet (NtD) map $\Lambda_\sigma$ by \begin{equation} \Lambda_\sigma [q]:= V|_{\partial\Omega} \eeq{AB8} where $V$ is the solution to \eq{AB2}. Let $\Lambda_{\sigma_2}$ be the NtD map when $\sigma$ is replaced with $\sigma_2$.
Observe that because of \eq{AB4}, we have \begin{align} \int_{\Omega} \nabla U({\bf z}) \cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d \mu({\bf z}) &= \int_{\partial\Omega} \left[ \int_{\Omega} \nabla_z N({\bf y}, {\bf z})\cdot M({\bf z}) \nabla_z N({\bf x}, {\bf z}) d \mu({\bf z}) \right] q({\bf y}) ds({\bf y}) \nonumber \\ & := K[q]({\bf x}). \label{AB9} \end{align} So \eq{AB3} can be rewritten as \begin{equation} \Lambda_\sigma[q]= \Lambda_{\sigma_2}[q] - |\omega_n| K[q] + o(|\omega_n|). \eeq{AB10} Then the Dirichlet-to-Neumann map $\Lambda_\sigma^{-1}$ is given by \begin{equation} \Lambda_\sigma^{-1} = (I - |\omega_n| \Lambda_{\sigma_2}^{-1} K)^{-1} \Lambda_{\sigma_2}^{-1} + o(|\omega_n|) = \Lambda_{\sigma_2}^{-1} + |\omega_n| \Lambda_{\sigma_2}^{-1} K \Lambda_{\sigma_2}^{-1} + o(|\omega_n|). \eeq{AB11} Observe that \begin{equation} \Lambda_{\sigma_2}^{-1} [N(\cdot, {\bf z})]({\bf x}) = \frac{\partial}{\partial {\bf n}_{\bf x}} G({\bf x}, {\bf z}), \quad {\bf x} \in \partial\Omega, \ \ {\bf z} \in \Omega. \eeq{AB12} In fact, \begin{equation} \int_{\partial\Omega} \Lambda_{\sigma_2}^{-1} [N(\cdot, {\bf z})]({\bf x}) V^0({\bf x}) = \int_{\partial\Omega} N({\bf x}, {\bf z}) \sigma_2 \frac{\partial U'}{\partial{\bf n}} ({\bf x}) = U'({\bf z}) \quad \mbox{for all } {\bf z} \in \Omega, \eeq{AB13} and hence \eq{AB12} follows. We now obtain \eq{AB6} from \eq{AB11}. Let $U_1({\bf x})= -\sigma_2^{-1} x$ and $U_2({\bf x})= -\sigma_2^{-1} y$, and let $V_j$ be the solution to \eq{AB2} with $q=q_j$ for $j=1,2$, where $q_j=\sigma_2 \frac{\partial U_j}{\partial {\bf n}}=-n_j$. Then, we have \begin{equation} [\bfm\sigma_N^{-1}]_{ij}= a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i q_j = \frac{1}{|\Omega|} \int_{\partial\Omega} (V_i - U_i) q_j + \sigma_2^{-1} \delta_{ij} \eeq{AB14} where $\delta_{ij}$ is Kronecker's delta. Since \begin{equation} \int_{\partial\Omega} N({\bf x}, {\bf z}) q_j({\bf x}) ds({\bf x}) = U_j({\bf z}), \eeq{AB15} we have from \eq{AB3} that \begin{equation} \frac{1}{|\Omega|} \int_{\partial\Omega} (V_i - U_i) q_j = - |\omega_n| \frac{1}{\sigma_2^2 |\Omega|} \int_{\Omega} M_{ij} ({\bf x}) d\mu({\bf x}) + o(|\omega_n|). \eeq{AB16} Thus we have \begin{equation} \bfm\sigma_N^{-1}= - f_1 \sigma_2^{-2} M + \sigma_2^{-1} I + o(f_1) \eeq{AB17} where \begin{equation} M:= \int_{\Omega} M_{ij} ({\bf x}) d\mu({\bf x}).
\eeq{AB18} We then have \begin{align} \frac{\mathop{\rm Tr}\nolimits\bfm\sigma_N^{-1} - 2 \sigma_1^{-1}}{\det\bfm\sigma_N^{-1} - \sigma_1^{-2}} &= \frac{-\frac{f_1}{\sigma_2^2} \mathop{\rm Tr}\nolimits M + \frac{2}{\sigma_2} - \frac{2}{\sigma_1} + o(f_1)}{-\frac{f_1}{\sigma_2^3} \mathop{\rm Tr}\nolimits M + \frac{1}{\sigma_2^2} - \frac{1}{\sigma_1^2} + o(f_1)} \nonumber \\ &= \frac{ \frac{2(\sigma_1-\sigma_2)}{\sigma_1 \sigma_2} \left[ 1 -\frac{f_1 \sigma_1}{2\sigma_2(\sigma_1 - \sigma_2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1)}{ \frac{(\sigma_1^2 -\sigma_2^2)}{\sigma_1^2 \sigma_2^2} \left[ 1 -\frac{f_1 \sigma_1^2}{\sigma_2(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1)} \nonumber \\ &= \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \left[ 1 + \frac{f_1 \sigma_1}{2\sigma_2(\sigma_1 + \sigma_2)} \mathop{\rm Tr}\nolimits M \right] + o(f_1), \label{AB19} \end{align} and hence \begin{equation} \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits \bfm\sigma_N^{-1} - 2/\sigma_1}{\det \bfm\sigma_N^{-1} - 1/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right] = \frac{f_1 \sigma_1}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M + o(f_1). \eeq{AB20} The lower bound \eq{HS6} now reads \begin{equation} \frac{f_1 \sigma_1}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits M \le f_1, \eeq{AB21} or equivalently \begin{equation} \frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1}) \le f_1 \eeq{AB22} up to $o(f_1)$ terms by \eq{AB17}. We now consider the upper bound. Let $U_1({\bf x})=-x$ and $U_2({\bf x})=-y$, and let $V_i$ be the solution to \eq{AB5} with $V^0=U_i$ on $\partial\Omega$ for $i=1,2$. Then, defining $q_j=\sigma_2 \frac{\partial V_j}{\partial {\bf n}}$ on $\partial\Omega$, we have \begin{equation} [\bfm\sigma_D]_{ij}= a_{ij}= \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 q_j = \frac{1}{|\Omega|} \int_{\partial\Omega} V_i^0 (q_j - \sigma_2\frac{\partial U_j}{\partial {\bf n}}) + \sigma_2 \delta_{ij}. \eeq{AB23} One can use \eq{AB6} and the fact that \begin{equation} \int_{\partial\Omega} \frac{\partial}{\partial {\bf n}_{{\bf x}}} G({\bf x}, {\bf z}) V_i^0({\bf x}) ds({\bf x})= U_i({\bf z}) \eeq{AB24} to derive that \begin{equation} \bfm\sigma_D= f_1 M + \sigma_2 I + o(f_1)=\bfm\sigma_N + o(f_1). \eeq{AB25} Thus we obtain \begin{equation} \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits\bfm\sigma_D - 2 \sigma_2}{\det\bfm\sigma_D - \sigma_2^2} = \frac{f_1}{\sigma_2^2} \frac{\det M}{\mathop{\rm Tr}\nolimits M} + o(f_1). \eeq{AB26} Since $\det M = (\mathop{\rm Tr}\nolimits M^{-1})^{-1} \mathop{\rm Tr}\nolimits M$, \eq{HS10} reads \begin{equation} f_1 \le \frac{f_1 (\sigma_1+\sigma_2)}{\sigma_2(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits M^{-1})^{-1} , \eeq{AB27} or equivalently \begin{equation} f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (-I+\sigma_2^{-1}\bfm\sigma_D)^{-1})^{-1} \eeq{AB28} up to $o(f_1)$ terms. By \eq{AB17} and \eq{AB25}, we have \begin{equation} -I+ \sigma_2^{-1} \bfm\sigma_D = I - \sigma_2\bfm\sigma_N^{-1} \eeq{AB29} modulo $o(f_1)$.
Hence by putting \eq{AB22} and \eq{AB28} together, we have \begin{equation} \frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1}) \le f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (I-\sigma_2\bfm\sigma_N^{-1})^{-1})^{-1} \eeq{AB30} modulo $o(f_1)$, where $\bfm\sigma_N^{-1}$ is determined from the boundary measurements with special Neumann conditions, via \eq{AB14}. We emphasize that these asymptotic bounds for small volume fraction were found in \cite{CV022, CV03}. From \eq{AB29} we also have the bounds \begin{equation} \frac{\sigma_1 \sigma_2}{(\sigma_1^2 - \sigma_2^2)} \mathop{\rm Tr}\nolimits (\sigma_2^{-1}\bfm\sigma_D-I) \le f_1 \le \frac{(\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} (\mathop{\rm Tr}\nolimits (\sigma_2^{-1}\bfm\sigma_D-I)^{-1})^{-1} \eeq{AB35} modulo $o(f_1)$, where $\bfm\sigma_D$ is obtained from the boundary measurements with special Dirichlet conditions. It is interesting to observe that the translation bounds also yield the Lipton bounds for the polarization tensor: we obtain from \eq{AB21} and \eq{AB27} that \begin{equation} \mathop{\rm Tr}\nolimits M \le \frac{(\sigma_1^2 - \sigma_2^2)}{\sigma_1} \quad\mbox{and}\quad \mathop{\rm Tr}\nolimits (M^{-1}) \le \frac{(\sigma_1+\sigma_2)}{\sigma_2(\sigma_1-\sigma_2)} . \eeq{AB36} We refer to \cite{book, book2, milton} for properties of polarization tensors. If phase 1 is an inclusion (or a cluster of inclusions) of the form \begin{equation} D= \varepsilon B + {\bf z} \eeq{AB37} where $\varepsilon$ is a small parameter representing the diameter of $D$, $B$ is a reference domain containing $0$, and ${\bf z}$ indicates the location of $D$ inside $\Omega$, then $M({\bf x})=|B|^{-1} M(B)$ (a constant matrix) and $d\mu = \lim_{n \to \infty} |\omega_n|^{-1} \chi(\omega_n) d{\bf x}=\delta({\bf x}-{\bf z})d{\bf x}$. Here $M(B)$ is the polarization tensor associated with $B$. Therefore we have $M=|B|^{-1} M(B)$, and hence \begin{equation} \mathop{\rm Tr}\nolimits(M(B))\leq |B| \frac{(\sigma_1^2 - \sigma_2^2)}{\sigma_1}, \eeq{AB38} and the lower bound is given by \begin{equation} \mathop{\rm Tr}\nolimits (M(B)^{-1})\leq\frac{\sigma_1+\sigma_2}{\sigma_2(\sigma_1-\sigma_2) |B|}. \eeq{AB39} The bounds in \eq{AB38} and \eq{AB39} were obtained by Lipton \cite{Lipton93} and later by Capdeboscq and Vogelius \cite{CV022, CV03} in a more general setting. They can also easily be derived from the bounds of Lurie and Cherkaev \cite{lucherk82} and Tartar and Murat \cite{mutar85,tar85} using the observation made by Milton \cite{milt81} that the low volume fraction limit of bounds on effective tensors of periodic arrays of well-separated inclusions yields bounds on polarization tensors. We also mention that if the lower bound in \eq{AB39} is attained for $B$ and $B$ is simply connected, then $B$ is an ellipse. This was known as the P\'olya-Szeg\"o conjecture and was resolved by Kang and Milton \cite{KM06, KM07} (see also the review paper \cite{Kang09}). \section{Numerical results} \subsection{Forward solutions} We implement an integral equation solver in FORTRAN in order to generate forward solutions of the Neumann and Dirichlet problems of the equation $\nabla \cdot \sigma \nabla V=0$ in $\Omega$ when $D$ is an inclusion and $\sigma = \sigma_1 \chi(D) + \sigma_2 \chi(\Omega\setminus D)$. We set $\sigma_2=1$ throughout this section. We compute the forward solutions $V$ with $N=64,80,96,120,160,192,240,320$ and $480$ equi-spaced points on $\partial D$ and $N$ points on $\partial\Omega$.
The errors are then computed by comparison with the solution on the finer discretization with $N=960$. Figure \ref{conv.1} shows the convergence of the forward solver for the Neumann problem as a function of the number of discretization points $N$, and Figure \ref{conv.2} shows the same for the Dirichlet problem, with $\sigma_1=10$. \begin{figure}[htb] \begin{center} \epsfig{figure=conv1.eps,width=10cm} \end{center} \caption{Convergence error of the forward solver with 64-480 discretization points. The solid line represents the convergence error of $V$ on $\partial\Omega$ for the Neumann problem.}\label{conv.1} \end{figure} \begin{figure}[htb] \begin{center} \epsfig{figure=conv2.eps,width=10cm} \end{center} \caption{Convergence error of the forward solver with 64-480 discretization points. The solid line represents the convergence error of $\frac{\partial V}{\partial{\bf n}}$ for the Dirichlet problem.}\label{conv.2} \end{figure} \subsection{Numerical Experiments} We perform numerical simulations to judge the performance of the bounds as the relevant parameters are varied. The parameters under consideration are the conductivity contrast $\sigma_1/\sigma_2$, the volume fraction $f_1$, and the distance between the inclusion and $\partial\Omega$. We also investigate the role of the boundary data in deriving bounds. We use boundary data of the special forms: $q_1=-\begin{bmatrix} 1 \\ 0 \end{bmatrix}\cdot{\bf n}$ and $q_2=-\begin{bmatrix} 0 \\ 1 \end{bmatrix}\cdot{\bf n}$ as Neumann data for the lower bound, and $V_1=-\begin{bmatrix} 1 \\ 0 \end{bmatrix}\cdot {\bf x}$ and $V_2=-\begin{bmatrix} 0 \\ 1 \end{bmatrix}\cdot {\bf x}$ as Dirichlet data for the upper bound, in all examples except Examples \ref{7.4} and \ref{7.5}. Thus, except in these examples, the bounds correspond to those derived by Milton \cite{milt11}. Let \begin{equation} L(\sigma_1):= \frac{(\sigma_1 + \sigma_2)}{\sigma_1 (\sigma_1 - \sigma_2)} \left[ \frac{\mathop{\rm Tr}\nolimits A - 2b/\sigma_1}{\det A - b^2/\sigma_1^2} - \frac{2 \sigma_1 \sigma_2}{\sigma_1 + \sigma_2} \right], \end{equation} denote the lower bound on $f_1$ and let \begin{equation} U(\sigma_1) := \frac{\sigma_2 (\sigma_1+\sigma_2)}{(\sigma_1-\sigma_2)} \left[ \frac{1}{\sigma_2} - \frac{\mathop{\rm Tr}\nolimits A - 2b' \sigma_2}{\det A - b'^2 \sigma_2^2} \right] \end{equation} denote the upper bound on $f_1$. \begin{Exa} {\bf (variation of $\sigma_1$)}. We compute the bounds varying $\sigma_1$, keeping $\sigma_2=1$, when the inclusion is a disk or an ellipse inside a disk or a rectangle (with corners rounded). Figures \ref{LU_DYDY1}, \ref{LU_DYDY3}, and \ref{LU_DYDY2} show the numerical results. Figure \ref{LU_DYDN} shows the case when the inclusion is simply connected and of general shape, and Figure \ref{ALU} the case when the inclusion is not simply connected. The results show that the lower bound deteriorates seriously as the conductivity ratio $\sigma_1$ increases, while the upper bound remains relatively good even for large $\sigma_1$.
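For reference, the ratios $L(\sigma_1)/f_1$ and $U(\sigma_1)/f_1$ reported in the tables are obtained by direct evaluation of the formulas above. A minimal post-processing sketch (in Python; illustrative only --- the inputs \texttt{trA}, \texttt{detA}, \texttt{b}, \texttt{bp} are assumed to have been precomputed from the boundary data via \eq{LB17}, \eq{UB11}, and \eq{UB12}, while the forward solutions themselves come from the FORTRAN solver described above) is:
\begin{verbatim}
# Evaluate the bounds from the measured quantities; sigma2 = 1 here.
# trA, detA, b come from the Neumann data (lower bound); trAe, detAe,
# bp come from the Dirichlet data (upper bound).
def lower_bound(trA, detA, b, s1, s2=1.0):
    # L(sigma_1), cf. (LB39)
    return (s1 + s2)/(s1*(s1 - s2)) * (
        (trA - 2.0*b/s1)/(detA - b**2/s1**2) - 2.0*s1*s2/(s1 + s2))

def upper_bound(trAe, detAe, bp, s1, s2=1.0):
    # U(sigma_1), cf. (UB10)
    return s2*(s1 + s2)/(s1 - s2) * (
        1.0/s2 - (trAe - 2.0*bp*s2)/(detAe - bp**2*s2**2))
\end{verbatim}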
\end{Exa} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_DYDY4.3.eps,width=8cm}\\ \epsfig{figure=LU_DYDY3.4.eps,width=8cm}\vskip 0.5cm \begin{tiny} \title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\ \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9979&1.0000\\ 1.2&0.9925&1.0000\\ 1.5&0.9635&1.0000\\ 2 &0.8979&1.0000\\ 3 &0.7673&1.0000\\ 5 &0.5787&1.0000\\ 10 &0.3518&1.0000\\ 20 &0.1958&1.0000\\\hline \end{tabular}\hskip 0.5cm \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9904&1.0077\\ 1.2&0.9783&1.0149\\ 1.5&0.9340&1.0337\\ 2 &0.8532&1.0583\\ 3 &0.7115&1.0917\\ 5 &0.5237&1.1287\\ 10 &0.3113&1.1659\\ 20 &0.1710&1.1889\\\hline \end{tabular} \end{tiny} \end{center} \caption{The bounds with increasing $\sigma_1$ when the inclusion is a disk and $\Omega$ is a circle, and $f_1=0.09$. We take Neumann and Dirichlet data of the special forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. The values for the bounds are given in the table.}\label{LU_DYDY1} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_DYDN6.3.eps,width=8cm}\\ \epsfig{figure=LU_DYDN5.3.eps,width=8cm}\vskip 0.5cm \begin{tiny} \title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\ \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9982&1.0000\\ 1.2&0.9934&1.0000\\ 1.5&0.9677&1.0000\\ 2 &0.9091&1.0001\\ 3 &0.7895&1.0001\\ 5 &0.6099&1.0001\\ 10 &0.3818&1.0002\\ 20 &0.2170&1.0002\\\hline \end{tabular}\hskip 0.5cm \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9921 &1.0062\\ 1.2&0.9819&1.0119\\ 1.5&0.9435&1.0268\\ 2 &0.8712&1.0459\\ 3 &0.7396&1.0714\\ 5 &0.5569&1.0988\\ 10 &0.3395&1.1257\\ 20 &0.1896&1.1420\\\hline \end{tabular} \end{tiny} \end{center} \caption{The bounds with increasing $\sigma_1$ when the inclusion is an ellipse and $\Omega$ is a circle and $f_1=0.08$. We take Neumann and Dirichlet data of the special forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. The values for the bounds are given in the table.}\label{LU_DYDY3} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_DNDY4.4.eps,width=8cm}\\ \epsfig{figure=LU_DNDY3.3.eps,width=8cm}\vskip 0.5cm \begin{tiny} \title{first diagram\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ second diagram}\\ \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9976&1.0003\\ 1.2&0.9917&1.0006\\ 1.5&0.9614&1.0013\\ 2 &0.8939&1.0022\\ 3 &0.7608&1.0033\\ 5 &0.5708&1.0044\\ 10 &0.3449&1.0054\\ 20 &0.1912&1.0060\\\hline \end{tabular}\hskip 0.5cm \begin{tabular}{|c ||c |c|}\hline $\sigma_1$ & $L(\sigma_1)/f_1$ & $U(\sigma_1)/f_1$ \\\hline 1.1&0.9915&1.0065\\ 1.2&0.9803&1.0125\\ 1.5&0.9376&1.0281\\ 2 &0.8576&1.0480\\ 3 &0.7155&1.0744\\ 5 &0.5262&1.1027\\ 10 &0.3122&1.1302\\ 20 &0.1713&1.1468\\\hline \end{tabular} \end{tiny} \end{center} \caption{The bounds with increasing $\sigma_1$ when the inclusion is a disk and $\Omega$ is a square, and $f_1=0.0699$. We take Neumann and Dirichlet data of the special forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis. 
The values for the bounds are given in the table.}\label{LU_DYDY2} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_DNDN4.3.eps,width=8cm} \end{center} \caption{The bounds with increasing $\sigma_1$ when the inclusion is not a disk or an ellipse and $\Omega$ is a square, and $f_1=0.0673$. We take Neumann and Dirichlet data of the special forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis.}\label{LU_DYDN} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=ALU_DYDY4.3.eps,width=8cm}\\ \epsfig{figure=ALU_DYDY3.4.eps,width=8cm}\\ \epsfig{figure=ALU_DNDY4.4.eps,width=8cm}\\ \epsfig{figure=ALU_DNDY3.3.eps,width=8cm} \end{center} \caption{The bounds with increasing $\sigma_1$ when the inclusion is an annulus. We take Neumann and Dirichlet data of the special forms. The second and third columns are graphs of the same data; the third column is with a log-scale for the $\sigma_1$-axis.}\label{ALU} \end{figure} \begin{Exa}{\bf (variation of $f_1$)}. We compute the bounds for various volume fractions. Figure \ref{LU_area} shows the numerical results. It clearly shows that the lower bound works better for higher volume fractions. \end{Exa} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_area1.4.eps,width=7cm}\\ \epsfig{figure=LU_area4.3.eps,width=7cm} \end{center} \caption{$\sigma_1=5$. The bounds as the volume fraction varies. We take Neumann and Dirichlet data of the special forms. }\label{LU_area} \end{figure} \begin{Exa}{\bf (variation of distance from $\partial\Omega$)}. We compute the lower and upper bounds changing the distance between the inclusion and $\partial\Omega$. Figure \ref{LU_dist1} shows the numerical results when $\sigma_1=2$. It shows that the further the inclusion is from $\partial\Omega$, the better the bounds are. \end{Exa} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_d3.6.eps,width=8cm} \end{center} \caption{$\sigma_1=2$ and $f_1=0.0262$. The bounds as the distance between the inclusion and $\partial\Omega$ varies. We take Neumann and Dirichlet data of the special forms. }\label{LU_dist1} \end{figure} \begin{Exa}\label{7.4} {\bf (boundary data)}. In this example we compute the bounds using other boundary data. We use as Neumann data for the lower bound $q_1=-n_1 - n_1 n_2$ and $q_2=-n_2 - n_1 n_2$, and as Dirichlet data for the upper bound $V_1=-x-xy$ and $V_2=-y-xy$. Figure \ref{LU_gDYDY} shows that the special boundary data work much better. \end{Exa} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_gDYDY1.3.eps,width=8cm} \end{center} \caption{The bounds as $\sigma_1$ varies in the case where we take Neumann data $q_1=-n_1-n_1 n_2$ and $q_2=-n_2-n_1 n_2$ and Dirichlet data $V_1=-x-xy$ and $V_2=-y-xy$. }\label{LU_gDYDY} \end{figure} \begin{Exa}\label{7.5} When we use the special Neumann data $q_1=-n_1$ and $q_2=-n_2$, a pair of Dirichlet data is measured on $\partial\Omega$. We may use these data to compute the upper bound using the formula \eq{UB19}. Likewise, we may use the measured Neumann data corresponding to the Dirichlet data $V_1=-x$ and $V_2=-y$ to compute the lower bound using the formula \eq{LB45}. Figure \ref{LU_same} shows numerical results when the volume fraction varies. It clearly shows that the bounds using the measured data are better than those using the given data.
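Concretely, the computation in this example amounts to assembling $P_D$, $A$, and $b_D$ by boundary quadrature of \eq{UB20}, \eq{UB11}, and \eq{UB21} and inserting them into \eq{UB19}; the lower bound \eq{LB45} is evaluated analogously from $P_N$, $A$, and $b_N$. A minimal sketch of the upper-bound evaluation (in Python with NumPy; illustrative only, with the inputs assumed already assembled from the measured data) is:
\begin{verbatim}
import numpy as np

def upper_bound_measured(PD, A, bD, s1, s2=1.0):
    # Theorem UB1, eq. (UB19): PD and A are 2x2 arrays, bD a scalar,
    # all assembled from the measured boundary data.
    B = PD @ A @ PD.T
    d = np.linalg.det(PD)
    num = np.trace(B) - 2.0*d*bD*s2
    den = d**2 * (np.linalg.det(A) - bD**2 * s2**2)
    return s2*(s1 + s2)/(s1 - s2) * (1.0/s2 - num/den)
\end{verbatim}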
\end{Exa} \begin{figure}[htbp] \begin{center} \epsfig{figure=LU_same_1.eps,width=8cm} \epsfig{figure=LU_same_2.eps,width=8cm} \end{center} \caption{$\sigma_1=5$. $L_N(f_1)$ is the lower bound using the Neumann data corresponding to the special Dirichlet data and $U_D(f_1)$ is the upper bound using the Dirichlet data corresponding to the special Neumann data. }\label{LU_same} \end{figure} \section{Construction of E$_\Omega$-inclusions} Following the method outlined in Section 23.9 of \cite{milton}, we look for a simply connected inclusion inside which the field is uniform for some boundary condition assigned on the outer boundary. More precisely, we look for an inclusion $E$ contained in a domain $\Omega$ (bounded or unbounded) such that $-\nabla V$ is uniform inside $E$, where $V$ is the solution to \begin{equation} \left\{ \begin{array}{l} \nabla \cdot \sigma \nabla V=0 \quad \mbox{in } \Omega, \\ V=V^0 \quad \mbox{on } \partial\Omega \end{array} \right. \eeq{CE1} for some boundary data $V^0$ with $\sigma=\sigma_1 \chi(E) + \sigma_2 \chi(\Omega \setminus E)$ ($\sigma_1 \neq \sigma_2$). We may suppose, without loss of generality, that ${\bf e}=-\nabla V = (-1, 0)^T$. We also suppose that the coordinates have been positioned and scaled so that $y_{\mbox{max}}=1$ and $y_{\mbox{min}}=-1$, where $y_{\mbox{max}}=\max \{ y ~ | ~(x,y) \in E \mbox{ for some } x \}$ and $y_{\mbox{min}}=\min \{ y ~| ~(x,y) \in E \mbox{ for some } x \}$. Let $W$ be a harmonic conjugate of $V$ in $\Omega \setminus \overline{E}$ so that $V+iW$ is an analytic function of $z=x+iy$ in $\Omega \setminus \overline{E}$. Then we have \begin{equation} V=x, \quad W= \frac{\sigma_1}{\sigma_2} y, \quad \mbox{on } \partial E. \eeq{CE2} Define new potentials $u$ and $v$ by \begin{equation} u+ i v = \frac{i(V+iW-z)}{(1-\sigma_1/\sigma_2)}. \eeq{CE3} Then, $u+ i v$ is still an analytic function of $z=x+iy$ in $\Omega \setminus \overline{E}$, and on $\partial E$ \begin{equation} u = \frac{-W+y}{1-\sigma_1/\sigma_2} = y, \quad v = \frac{V-x}{1-\sigma_1/\sigma_2} = 0. \eeq{CE4} Now assume $u+ i v$ is a univalent function of $z=x+iy$ in $\Omega \setminus \overline{E}$, and consider $x+iy$ as an analytic function of $u+ i v$ (hodograph transformation). Because of \eq{CE4}, the image of $\partial E$ under $u+ i v$ is the slit $S= [y_{\mbox{min}}, y_{\mbox{max}}] = [-1,1]$ on the $u$-axis, and $y= u$ on $S$. The problem is now to construct a function $z=x+iy=f(u+iv)$ such that \begin{itemize} \item[(i)] $f$ is analytic and univalent in $U \setminus S$ for some neighborhood $U$ of $S$, \item[(ii)] $\mathop{\rm Im}\nolimits f=u$ on $S$, \item[(iii)] $\mathop{\rm Re}\nolimits f|_{+} - \mathop{\rm Re}\nolimits f|_{-} >0$ on $S$ except at $\pm 1$ where it is $0$. \end{itemize} Here $|_{+}$ and $|_{-}$ indicate the limits from above and below $S$, respectively. One can see that the conditions (i), (ii), and (iii) guarantee that $f$ maps $U \setminus S$ onto $\Omega \setminus \overline{E}$ for some simply connected domain $E$ and some domain $\Omega$ containing $\overline{E}$. In fact, (ii) and (iii) imply that $f$ maps $S$ onto $\partial E$ and that the orientation is preserved. Since $f$ is conformal, it maps $U \setminus S$ to the outside of $\overline{E}$. We have the following lemma for univalence. \begin{lemma} \label{lemma21} Let $\gamma$ be a simple closed curve which consists of two curves $\gamma^+$ and $\gamma^-$.
Let $U$ be an open neighborhood of $S$ and let $B_1(\delta)$ and $B_{-1}(\delta)$ be open balls of radius $\delta$ centered at $w=1$ and $w=-1$ respectively. Let $f$ be an analytic function in $U\setminus S$ which maps $U\setminus S$ to the outside of $\gamma$ and which has the form \begin{equation} f(w)=iw + g(w) \eeq{CE5} where $\mathop{\rm Im}\nolimits g=0$ on $S$. Suppose that the mapping $u \mapsto \lim_{v \to 0^+} f(u+iv)$ is one-to-one from $S$ onto $\gamma^+$, and $u \mapsto \lim_{v \to 0^+} f(u-iv)$ is one-to-one from $S$ onto $\gamma^-$. If there is $\delta>0$ such that $f$ is univalent in $B_1(\delta)\setminus S$ and in $B_{-1}(\delta)\setminus S$, then there is an open neighborhood $U_0$ of $S$ such that $f$ is univalent in $U_0 \setminus S$. \end{lemma} \proof Let $\varphi(z)=(z+\frac{1}{z})/2$ for $|z| \ge 1$. Then $\varphi$ maps $|z|>1$ onto $\mathbb{C} \setminus S$ and $|z|=1$ onto $S$. Let $G(z)= g(\varphi(z))$. Since $\mathop{\rm Im}\nolimits G(z)=0$ on $|z|=1$, $G$ can be extended so that it is analytic in $1-\varepsilon <|z| < 1+ \varepsilon$ for some $\varepsilon>0$. Let $F(z)= f(\varphi(z))$. Then $F$ is analytic in $1-\varepsilon <|z| < 1+ \varepsilon$ and univalent in neighborhoods of $z=1$ and $z=-1$. Moreover, $F$ is one-to-one from $|z|=1$ onto $\gamma$. We claim that $F$ is univalent in $1-\varepsilon_0 <|z| < 1+ \varepsilon_0$ for some $\varepsilon_0>0$. In fact, if not, then for each $n$ there are $z_{1,n}$ and $z_{2,n}$ such that $1- \frac{1}{n} < |z_{j,n}| < 1+ \frac{1}{n}$, $z_{1,n} \neq z_{2,n}$, and $F(z_{1,n})=F(z_{2,n})$. For $j=1,2$, the sequence $z_{j,n}$ has a subsequence which converges to a point on $|z|=1$, say $z_j$. Since $F$ is one-to-one on $|z|=1$, $z_1=z_2$. But this implies that $F'(z_1)=0$, where \begin{equation} F'(z)=f'(\varphi(z))\varphi'(z)=[i+g'(\varphi(z))](1-z^{-2})/2, \eeq{CM0} and since $g'(\varphi(z_1))$ is real we conclude that $z_1=1$ or $z_1=-1$, which is a contradiction since $F$ is univalent in neighborhoods of these points. Thus $F$ is univalent in $1-\varepsilon_0 <|z| < 1+ \varepsilon_0$ for some $\varepsilon_0>0$. This completes the proof. \qed We now construct $f$ satisfying (i), (ii), and (iii) using conformal mappings. Let $w=u+iv$ and define \begin{equation} g(w)=f(w)-iw \eeq{CM1} so that $\mathop{\rm Im}\nolimits g=0$ on $S$. Let \begin{equation} \xi = \frac{1-w}{1+w}, \eeq{CM2} which maps $S$ onto the positive real axis. Let $\zeta=\sqrt{\xi}$ with the branch cut along the positive real axis and define \begin{equation} F(\zeta) = g \left( \frac{1-\zeta^2}{1+\zeta^2} \right). \eeq{CM3} Then $\mathop{\rm Im}\nolimits F=0$ on the whole real axis. Thus, by defining $F(\zeta^*)=F(\zeta)^*$, where $*$ denotes the complex conjugate, $F$ can be extended as an analytic function in a tubular neighborhood of the real axis. Moreover, since $g$ is analytic in a neighborhood of $-1$ except on the part of the slit there, and the map $w \mapsto \zeta$ sends a neighborhood of $-1$ onto the exterior of a compact set, $F$ must be analytic in $\mathbb{C} \setminus (K \cup K^*)$ where $K$ is a compact set in the upper half plane and $K^*$ is its symmetric part with respect to the real axis, {\it i.e.}, $K^*=\{z^* ~|~ z \in K\}$. $F$ satisfies \begin{itemize} \item[(i)$^\prime$] $F$ is analytic in $\mathbb{C} \setminus (K \cup K^*)$ for a compact set $K$ in the upper half plane. \item[(ii)$^\prime$] $\mathop{\rm Im}\nolimits F=0$ on the real axis, \item[(iii)$^\prime$] $F(\zeta) - F(-\zeta) >0$ for real positive $\zeta$.
\end{itemize} The function $f$ is now given by \begin{equation} f(w)=iw + F\left( \sqrt{\frac{1-w}{1+w}} \right). \eeq{CM4} Note that $y=u$ on the slit and hence $\partial E$ is given by \begin{equation} x = F\left( \pm \sqrt{\frac{1-y}{1+y}} \right). \eeq{CM5} In addition to (i)$^\prime$, (ii)$^\prime$, and (iii)$^\prime$, $F$ needs to be univalent inside a sufficiently small ball around the origin, and outside a sufficiently large ball. The first condition is satisfied if $F'(0)\ne 0$. Since $F$ maps $\infty$ to a point in $\mathbb{C}$ and is analytic and univalent outside a sufficiently large ball, it has the series expansion \begin{equation} F(\zeta)= \sum_{j=0}^\infty \frac{\beta_j}{\zeta^j} \eeq{CM6} as $\zeta \to \infty$, where $\beta_1 \ne 0$ (and $\beta_1$ is real and positive from conditions (ii)$^\prime$ and (iii)$^\prime$). We record these conditions: \begin{itemize} \item[(iv)$^\prime$] The derivative $F'(0)$ is non-zero, and $F(\zeta)$ has the asymptotic expansion \begin{equation} F(\zeta)= \beta_0 + \frac{\beta_1}{\zeta} + O(|\zeta|^{-2}) \quad\mbox{as } |\zeta| \to \infty, \eeq{CM7} where $\beta_1$ is real and positive. \end{itemize} Good candidates for functions satisfying (i)$^\prime$, (ii)$^\prime$, and (iv)$^\prime$ are rational functions of the form \begin{equation} F(\zeta)= \sum_{\alpha=1}^n \left[ \frac{b_\alpha}{\zeta-a_\alpha} + \frac{b_\alpha^*}{\zeta-a_\alpha^*} \right] + c \eeq{CM8} where the $a_\alpha$'s are complex numbers with positive imaginary parts, the $b_\alpha$'s are complex numbers, $c$ is a real number, and \begin{equation} \sum_{\alpha=1}^n \mathop{\rm Re}\nolimits (b_\alpha)>0,\quad \sum_{\alpha=1}^n \mathop{\rm Re}\nolimits (b_\alpha/a_\alpha^2)\ne 0. \eeq{CM8a} To ensure that (iii)$^\prime$ is satisfied we require that the function \begin{equation} F(\zeta) - F(-\zeta)=2\zeta\sum_{\alpha=1}^n \left[ \frac{b_\alpha}{\zeta^2-a_\alpha^2} + \frac{b_\alpha^*}{\zeta^2-(a_\alpha^*)^2} \right] \eeq{CM8b} has no real roots aside from $\zeta=0$. (The sign of the inequality in (iii)$^\prime$ is guaranteed by the positivity of $\beta_1$.) Let us now characterize those rational functions $F$ which yield ellipses as E$_\Omega$-inclusions. Because $y=u$ on the slit $[-1,1]$, the ellipse takes a shape like that of the first figure in Figure \ref{varIma} (after translation). Let the ellipse be given by $x^2+\alpha y^2 + \beta x y=c$ with $4\alpha > \beta^2$. Solving for $x$ we get \begin{equation} x= \frac{-\beta y \pm \sqrt{\beta^2 y^2 - 4(\alpha y^2-c)}}{2}. \eeq{CM9} Since the discriminant vanishes at $y=\pm 1$, we have $4c=4\alpha-\beta^2$, and hence \begin{equation} x= \frac{-\beta}{2} y \pm \frac{(1+y)}{2} \sqrt{(4\alpha -\beta^2) \frac{1-y}{1+y}}. \eeq{CM10} Letting $\zeta= \sqrt{\frac{1-y}{1+y}}$, we have \begin{equation} x= \frac{\pm \zeta\sqrt{4\alpha -\beta^2} - \beta}{\zeta^2+1} + \frac{\beta}{2} = F(\zeta) \eeq{CM11} for real $\zeta$. This means that ellipses are obtained from $F$'s of the form \begin{equation} F(\zeta)= \frac{b}{\zeta-a} + \frac{b^*}{\zeta-a^*} + c \eeq{CM12} with $a=i$ and $b$ with positive real part. \medskip \noindent{\bf Example}. In this example, we construct some E$_\Omega$-inclusions other than ellipses. We use $F$ in the form \eq{CM12} with $c=0$ (this amounts to a translation of the figure). Then in $\zeta$-coordinates $f$ is given by \begin{equation} f(\zeta)= \frac{2i}{\zeta^2+1} + \frac{b}{\zeta-a} + \frac{b^*}{\zeta-a^*}
\eeq{CM13} where both \eq{CM8a} and the absence of real non-zero roots of \eq{CM8b} will be ensured if we choose $b$ and $-b/a^2$ with positive real parts. We will plot the image of a vicinity of the real axis in the upper half plane under the map $f$. To avoid computational difficulty in dealing with an infinite space, we use the bilinear transform \begin{equation} \zeta= \frac{1-iw}{w-i}, \eeq{CM14} which maps the unit disk onto the upper half plane. Then we plot \begin{equation} f(w)= \frac{2i}{\zeta(w)^2+1} + \frac{b}{\zeta(w)-a} + \frac{b^*}{\zeta(w)-a^*} \eeq{CM15} for $w=r e^{i\theta}$ with $1-\varepsilon \le r \le 1$. From the expansions for $F(\zeta)$ in powers of $\zeta$ and $1/\zeta$ we see that near the bottom and top of the inclusion the boundary is given by \begin{equation} x\approx \mathop{\rm Re}\nolimits(b)\sqrt{2(1+y)}+O(1+y),\quad\quad x \approx -2\mathop{\rm Re}\nolimits(b/a)-\mathop{\rm Re}\nolimits(b/a^2)\sqrt{2(1-y)}+O(1-y). \eeq{CM16} Thus the bottom and top are positioned at $x=0$ and $x=-2\mathop{\rm Re}\nolimits(b/a)$, and the curvature of the boundary there is determined by $\mathop{\rm Re}\nolimits(b)$ and $-\mathop{\rm Re}\nolimits(b/a^2)$, respectively. Figure \ref{varRad} shows various shapes of $\partial\Omega$, which are the images of $|z|=r<1$ under $f$, and the boundary of the E$_\Omega$-inclusion, which is the image of $|z|=1$. Figures \ref{varRea}, \ref{varIma}, \ref{varReb}, \ref{varImb}, \ref{varImb2}, and \ref{varImba^2} show various shapes of E$_\Omega$-inclusions when we vary the complex parameters $a$, $b$, and $b/a^2$. We emphasize that with these values of $a$ and $b$, the univalence of $f$ is guaranteed by Lemma \ref{lemma21}. \begin{figure}[htbp] \begin{center} \epsfig{figure=fig1.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Re $a$ with Im $a=1$ and $b=1$.} \label{varRea} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=fig2.eps,height=4cm} \caption{With $a=0.8+i$ and $b=1$, the innermost curve (the image of $|z|=1$) is the boundary of the E$_\Omega$-inclusion (the rightmost inclusion in Fig. \ref{varRea}). The others are images of $|z|=0.9, \ 0.8, \ 0.7, \ 0.6, \ 0.5$.
These, or any simple closed curve enclosed by them, can be regarded as boundaries of $\Omega$.} \label{varRad} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=fig3.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $a$ with Re $a=0$ and $b=1+i$.}\label{varIma} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=fig4.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Re $b$ with Im $b=1$ and $a=1.3i$.}\label{varReb} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=fig5.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b$ with Re $b=1$ and $a=1.3i$.}\label{varImb} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=figs_MImb.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b$ with Re $b=0.1$ and $b/a^2=-10+2i$.}\label{varImb2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \epsfig{figure=figs_MImba2.eps,height=4cm} \caption{Various shapes of E$_\Omega$-inclusions when varying Im $b/a^2$ with $b=0.1+2i$ and Re $b/a^2=-10$.}\label{varImba^2} \end{center} \end{figure} \section*{Acknowledgements} The authors thank Michael Vogelius for comments on a draft of the manuscript, and for spurring the interest of GWM in this problem through a lecture at the Mathematical Sciences Research Institute. GWM is grateful for support from the Mathematical Sciences Research Institute and from the National Science Foundation through grant DMS-0707978. HK is grateful for support from the National Research Foundation through grants No. 2009-0090250 and 2010-0017532, and from Inha University. The work of EK was supported by the Korea Research Foundation through grant KRF-2008-359-C00004.
\section{Introduction} The recent discovery of the standard-model-like Higgs boson particle with mass about 125\,GeV at the LHC~\cite{Higgs} may indicate relatively high-scale supersymmetry (SUSY) where the SUSY particle masses are of order $100$\,TeV~\cite{Okada:1990gg,Giudice:2011cg}. In particular, the observed Higgs boson mass can be naturally explained in the so-called pure gravity mediation model~\cite{Ibe:2011aa}, where sfermion masses as well as the gravitino mass are $O(100)$\,TeV, whereas gaugino masses are $O(100)$\,GeV generated by the anomaly-mediated SUSY breaking (AMSB) effect~\cite{Giudice:1998xp}. In most of the parameter space, the lightest SUSY particle (LSP) is the Wino. Although the thermal relic density of the Wino is too small to account for the observed dark matter (DM) for the Wino lighter than $\sim 2.7$\,TeV~\cite{Hisano:2006nn}, it is also produced by the decay of the gravitino. Since the gravitino is heavy enough to decay before big-bang nucleosynthesis (BBN), it does not spoil the success of BBN. If the reheating temperature after inflation, $T_{\rm R}$, is around $10^9$--$10^{10}$\,GeV, the non-thermal Wino can explain the present DM abundance. Such a high reheating temperature is also consistent with the thermal leptogenesis scenario~\cite{Fukugita:1986hr}. While this is an attractive scenario, it is not trivial whether it is consistent with known inflation models. In a series of works~\cite{Endo:2006zj,Kawasaki:2006gs,Asaka:2006bv,Dine:2006ii,Endo:2006tf, Endo:2006qk,Endo:2007ih,Endo:2007cu,Endo:2007sz}, it was revealed that the inflaton generally decays into gravitinos and that these non-thermally produced gravitinos severely constrain inflation models. Even if the gravitino is as heavy as $O(100)$\,TeV, too many gravitinos would result in LSP overproduction, which severely restricts inflation models. Another aspect of high-scale SUSY breaking in the context of inflation models is that the inflaton dynamics may be spoiled or significantly modified by the existence of the constant term in the superpotential~\cite{Buchmuller:2000zm,Senoguz:2004vu,Nakayama:2010xf} or by the radiative correction to the inflaton potential~\cite{Nakayama:2011ri}. One way to suppress the gravitino production in inflaton decay is to assign some charge to the SUSY breaking field $z$. Then some of the dangerous terms in the K\"ahler potential, $K\sim |\phi|^2 z,~|\phi|^2 zz$, where $\phi$ denotes the inflaton, can be forbidden. Those operators are indeed suppressed at low energies if $m_z \gg m_{3/2}$, because the vacuum expectation value (VEV) of $z$ is then negligibly small. This is easily achieved in the dynamical SUSY breaking scenario. Interestingly, gaugino masses are successfully generated by the AMSB contribution in the pure gravity mediation model, even if $z$ is charged under a certain symmetry. In fact, since the F-term of $z$ develops a VEV, there is still a mixing between $\phi$ and $z$, which induces the inflaton decay into gravitinos. The rate, however, is significantly suppressed if $m_z \ll m_\phi$, where $m_z$ and $m_\phi$ denote the mass of $z$ and $\phi$, respectively~\cite{Dine:2006ii,Endo:2006tf}. The problem is that if the inflaton mass is larger than the dynamical SUSY breaking scale $\Lambda$, it can decay into hadrons in the hidden sector, which eventually produce many gravitinos~\cite{Endo:2006qk,Endo:2007ih}.
Thus, one expects that the gravitino production is suppressed if the following relation is satisfied: \begin{eqnarray} m_{3/2} \ll m_z \ll m_\phi \lesssim \Lambda. \label{relation} \end{eqnarray} This requires a hierarchy between $m_z$ and $\Lambda$, which can be easily realized in some dynamical SUSY breaking scenarios, as we shall see later. Interestingly enough, the SUSY breaking scale $\Lambda \sim \sqrt{m_{3/2}M_P}$ is close to the inflaton mass in many inflation models for $m_{3/2}\sim O(100)$\,TeV. Thus there is a good chance of suppressing the gravitino overproduction in the high-scale SUSY scenario. We note however that, if $z$ is too light, the gravitino production from the coherent oscillations of $z$ becomes non-negligible. Therefore, it is important to take into account all these contributions to see to what extent the constraints on the inflation models can be relaxed. Lastly, let us clarify how the present paper differs from Ref.~\cite{Endo:2007cu}. In Ref.~\cite{Endo:2007cu}, the relation (\ref{relation}) was assumed to avoid the gravitino overproduction in the gravity and gauge mediation, and the allowed region for the single-field new inflation was studied. In the present work, we shall derive the constraints on the general inflation model parameters for the case of a heavy gravitino. This paper is organized as follows. In Sec.~\ref{sec:grav} we summarize the inflaton decay rate into the gravitino and the resulting gravitino abundance. In Sec.~\ref{sec:Polonyi}, we discuss the Polonyi problem in dynamical SUSY breaking models and show that the gravitino production can be indeed suppressed in an explicit SUSY breaking model. We conclude in Sec.~\ref{sec:conc}. \section{Non-thermal gravitino production from inflaton decay} \label{sec:grav} We assume dynamical SUSY breaking where SUSY is spontaneously broken by the strong dynamics at the scale $\Lambda$. A concrete model will be given later. The discussion in this section does not depend on the details of the dynamical SUSY breaking models. Below the scale $\Lambda$, the SUSY breaking field $z$ has a superpotential of the form \begin{eqnarray} W = \mu^2 z + W_0, \end{eqnarray} where $\mu$ represents the SUSY breaking scale, and the constant $W_0 \simeq m_{3/2} M_P^2$ is fixed so that the cosmological constant almost vanishes. The F-term of $z$ is given by $F_z \simeq - \mu^2 \simeq \sqrt{3} m_{3/2} M_P$, and SUSY is indeed broken. The field $z$ obtains a non-SUSY mass through the following non-renormalizable operator in the K\"ahler potential, \begin{equation} K \;\supset\; -\frac{|z|^4}{\tilde\Lambda^2}. \label{Kz4} \end{equation} Here $\tilde\Lambda$ is some cutoff scale, which is roughly equal to $\Lambda$ if $z$ itself is involved in the strong dynamics, while it can be much larger than $\Lambda$ if $z$ is weakly coupled to the strong sector, as shown explicitly in Sec.~\ref{sec:DSB}. It generates the mass of $z$ as $m_z^2 = 4|F_z|^2/\tilde\Lambda^2$. We assume $m_z \gg m_{3/2}$ so that the VEV of $z$ is suppressed by $m_{3/2}^2/m_z^2$. Hereafter we assume that $z$ is charged under some symmetry, such as a global U(1), which is spontaneously broken by the strong dynamics in the hidden sector.
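As a rough numerical orientation, the scales introduced above can be evaluated directly. A minimal sketch (Python with \texttt{numpy}; the cutoff value $\tilde\Lambda$ is an illustrative assumption, not a prediction of the model):
\begin{verbatim}
import numpy as np

M_P = 2.4e18                     # reduced Planck mass [GeV]
m32 = 1.0e5                      # gravitino mass, 100 TeV [GeV]

F_z = np.sqrt(3.0) * m32 * M_P   # |F_z| = sqrt(3) m_{3/2} M_P
mu = np.sqrt(F_z)                # SUSY breaking scale, mu^2 = |F_z|
Lam_t = 1.0e15                   # cutoff Lambda-tilde [GeV] (assumption)
m_z = 2.0 * F_z / Lam_t          # from m_z^2 = 4 |F_z|^2 / Lambda-tilde^2

print(F_z, mu, m_z)              # ~4e23 GeV^2, ~6e11 GeV, ~8e8 GeV
\end{verbatim}
With these representative numbers the assumed hierarchy $m_{3/2} \ll m_z$ is manifest. Let us now consider the mixing of the inflaton, denoted by $X$ or $\phi$ in the following, with the SUSY breaking field $z$.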
As an example, we consider the following K\"ahler and super-potentials: \bea K &=& |\phi|^2+|X|^2 +|z|^2 -\frac{|z|^4}{\tilde\Lambda^2},\\ W &=& X(g\phi^n - v^2) + \mu^2 z + W_0, \label{spp} \end{eqnarray} where the first term in $W$ corresponds to the inflaton sector with $g$ being the coupling constant and $v$ the constant giving the inflation energy scale. At the potential minimum, $\phi$ develops a VEV, $\la \phi \right\rangle \equiv |v^2/g|^{1/n}$, while $X$ sits near the origin. Note that $\phi^n$ can be replaced with $(\phi\bar\phi)^{n/2}$, but the following discussion does not change due to this choice. This class of inflation models includes the hybrid $(n=2)$~\cite{Copeland:1994vg} and smooth-hybrid inflation~\cite{Lazarides:1995vr} as well as the new inflation model $(n\geq 4)$~\cite{Izawa:1996dv,Asaka:1999jb}. Also, the following arguments can be applied to the chaotic inflation model~\cite{Kawasaki:2000yn} without a discrete symmetry on $X$ and $\phi$. Around the potential minimum, $\phi$ and $X$ get maximally mixed with each other to form mass eigenstates, $\Phi_\pm \equiv ( \phi \pm X^\dag)/\sqrt{2}$, in the presence of $W_0$~\cite{Kawasaki:2006gs}. The inflaton mass is (approximately) given by $m_\phi = ng \langle\phi\rangle^{n-1}$. This mixing is meaningful as long as the decay rates of $\phi$ and $X$ are smaller than $m_{3/2}$, which is assumed in the following.\footnote{Otherwise, too many gravitinos are thermally produced.} From the supergravity scalar potential, we find the mixing of $X$ and $z$ as \begin{equation} V = e^{K/M_P^2}\left[ K_{i\bar j}^{-1}(D_i W)(D_{\bar j}\bar W) -3\frac{|W|^2}{M_P^2}\right] \supset \frac{m_\phi\langle\phi\rangle \mu^2}{M_P^2}Xz^\dagger + {\rm h.c.}. \end{equation} The mixing angle between $X$ and $z$ is approximately given by \begin{equation} \theta \;\simeq\; \left|\frac{m_\phi\langle\phi\rangle F_z}{M_P^2(m_\phi^2-m_z^2)}\right| \simeq \begin{cases} \displaystyle\frac{\sqrt{3}m_{3/2}\langle\phi\rangle }{m_\phi M_P} & {\rm~for~}m_\phi \gg m_z,\\ \displaystyle\frac{\sqrt{3}m_{3/2}m_\phi\langle\phi\rangle }{m_z^2 M_P} & {\rm~for~}m_\phi \ll m_z. \end{cases} \end{equation} Thus, the effective mixing angle between $\Phi_\pm$ and $z$ is given by $\theta/\sqrt{2}$. The inflaton decay into the gravitino is induced by the operator (\ref{Kz4}). It leads to the following term in the Lagrangian \begin{equation} \mathcal L \supset -2\frac{F_z^\dagger}{\tilde\Lambda^2}z^\dagger \tilde z\tilde z +{\rm h.c.}, \end{equation} where $\tilde z$ denotes the goldstino, which is eaten by the gravitino through the super Higgs mechanism. This operator induces the $z$ decay into the goldstino pair with the decay rate \begin{equation} \Gamma(z\to \tilde z\tilde z) \;\simeq\; \frac{1}{96\pi}\frac{m_z^5}{m_{3/2}^2M_P^2}. \label{StoGG} \end{equation} As far as the inflaton mass is much heavier than the gravitino, we can estimate the inflaton decay into gravitinos in the goldstino picture thanks to the equivalence theorem. 
The inflaton decays into a pair of goldstinos via the mixing with $z$, and the rate is given by \begin{equation} \Gamma(\Phi \to \tilde z\tilde z) \;\simeq\; \frac{1}{32\pi} \lrfp{\theta}{\sqrt{2}}{2} \frac{m_z^4}{|F_z|^2}m_\phi = \begin{cases} \displaystyle \frac{1}{64\pi}\left( \frac{m_z}{m_\phi} \right)^4\left( \frac{\langle\phi\rangle}{M_P} \right)^2 \frac{m_\phi^3}{M_P^2} & {\rm~for~}m_\phi \gg m_z,\\ \displaystyle \frac{1}{64\pi}\left( \frac{\langle\phi\rangle}{M_P} \right)^2 \frac{m_\phi^3}{M_P^2} & {\rm~for~}m_\phi \ll m_z, \end{cases} \label{PhiSS} \end{equation} where $\Phi$ collectively denotes the inflaton mass eigenstates $\Phi_\pm$. Therefore, the decay rate is suppressed for $m_\phi \gg m_z$. The precise form of the decay rate is given in the Appendix. Note that $z$ has a charge and hence terms such as $K \supset |\phi|^2 z$ and $ |\phi|^2 zz$ are forbidden, which would otherwise induce gravitino overproduction. However, no symmetry forbids the following non-renormalizable interaction between the inflaton and $z$: \begin{equation} K \;\supset\; -c\frac{|\phi|^2 |z|^2}{M_P^2}, \end{equation} where $c$ is a constant of order unity. This induces the inflaton decay into the scalar component of the SUSY breaking field as \begin{equation} \Gamma(\Phi \to zz^\dagger) = \frac{c^2}{32\pi}\left( \frac{m_z}{m_\phi} \right)^4 \left( \frac{\langle\phi\rangle}{M_P} \right)^2 \frac{m_\phi^3}{M_P^2} \left( 1- \frac{4m_z^2}{m_\phi^2} \right)^{1/2}. \end{equation} Since $z$ predominantly decays into a gravitino pair, this process yields gravitinos of the same order as those from (\ref{PhiSS}). Note also that an operator like $K\sim (|\phi|^2/M_P^2)(|z|^4/\tilde\Lambda^2)$ gives a rate comparable to that given above. See the Appendix for details. If the inflaton is heavier than the dynamical scale $\Lambda$, the inflaton decays into hadrons in the hidden sector, which also poses severe constraints on inflation models. The decay proceeds at both tree level~\cite{Endo:2006qk} and one-loop level~\cite{Endo:2007ih}, but the tree-level process depends on the details of the SUSY breaking models, while the decay via anomalies is more robust. Assuming that the hidden hadron masses are given by $\Lambda$, the decay rate at one-loop level is given by~\cite{Endo:2007ih,Endo:2007sz} \begin{equation} \Gamma(\Phi \to {\rm hadron}) = \begin{cases} \displaystyle \frac{N_g\alpha_h^2}{512\pi^3}(\mathcal T_G-\mathcal T_R)^2\left( \frac{\langle\phi\rangle}{M_P} \right)^2 \frac{m_\phi^3}{M_P^2} & {\rm~for~}m_\phi \gtrsim 2\Lambda,\\ \displaystyle 0 & {\rm~for~}m_\phi \lesssim 2\Lambda, \end{cases} \end{equation} where $\mathcal T_G$ and $\mathcal T_R$ are the Dynkin indices of the adjoint representation and of the matter fields in the representation $R$, $\alpha_h$ is the fine structure constant of the hidden gauge group and $N_g$ the number of generators of the gauge group. We have assumed the minimal coupling between the inflaton sector and the hidden sector in the K\"ahler potential. For simplicity, we take $N_g \alpha_h^2(\mathcal T_G-\mathcal T_R)^2 =1$ in the numerical calculation. If this decay mode is open, the gravitino overproduction problem is severe since each hidden hadron jet finally produces gravitinos. As a result, we obtain the following condition for significantly relaxing the gravitino overproduction problem: \begin{equation} m_{3/2} \ll m_z \ll m_\phi \lesssim \Lambda.
\label{inequality} \end{equation} Actually, this condition is easily satisfied in the dynamical SUSY breaking model explained in Sec.~\ref{sec:DSB} (see Eq.~(\ref{mS})). However, one should note that too light an $m_z$ may lead to the Polonyi problem, as shown later. The gravitino abundance, in terms of the number-to-entropy ratio, $Y_{3/2}\equiv n_{3/2}/s$, is given by \begin{equation} Y_{3/2}^{(\phi)}=\frac{3T_{\rm R}}{4m_\phi} \frac{ 2\Gamma(\Phi\to \tilde z\tilde z)+4\Gamma(\Phi\to zz^\dagger)+2N_{3/2}\Gamma(\Phi\to{\rm hadron}) }{\Gamma_{\rm tot}}, \label{Ygrav} \end{equation} where $\Gamma_{\rm tot}$ is the total decay rate of the inflaton and it is related to the reheating temperature $T_{\rm R}$ as $\Gamma_{\rm tot} \equiv (\pi^2 g_*/90)^{1/2} T_{\rm R}^2/M_P$, and $N_{3/2}$ represents the averaged number of gravitinos per hidden hadron jet. We will take $N_{3/2}=1$ for simplicity. Fig.~\ref{fig:Ygrav} shows the non-thermally produced gravitino abundance, $Y_{3/2}^{(\phi)}$, from inflaton decay as a function of the inflaton mass $m_\phi$ for several values of $m_z$. We have taken $\Lambda = 10^{14}$\,GeV (top panel) and $\Lambda = 10^{15}$\,GeV (bottom panel) for $\langle\phi\rangle=10^{15}$\,GeV and $T_{\rm R}=3\times 10^9$\,GeV. It is clearly seen that the gravitino abundance is significantly reduced in the range $m_z \ll m_\phi < \Lambda$. At large $m_\phi$, the three lines coincide since the gravitino production is dominated by the inflaton decay into hidden hadrons. One can read off the gravitino abundance for other values of $\langle\phi\rangle$ and $T_{\rm R}$ by noting that $Y_{3/2}^{(\phi)}$ simply scales as $\propto T_{\rm R}^{-1}$ and $\propto \langle\phi\rangle^2$. \begin{figure} \begin{center} \includegraphics[scale=1.6]{Ygrav1_.eps} \vskip 1cm \includegraphics[scale=1.6]{Ygrav2_.eps} \caption{ Non-thermally produced gravitino abundance, $Y_{3/2}^{(\phi)}$, from inflaton decay as a function of the inflaton mass $m_\phi$ for several values of the mass of the SUSY breaking field $m_z$. We have taken $\Lambda = 10^{14}$\,GeV (top panel) and $\Lambda = 10^{15}$\,GeV (bottom panel) for $\langle\phi\rangle=10^{15}$\,GeV and $T_{\rm R}=3\times 10^9$\,GeV. Note that $Y_{3/2}^{(\phi)}$ scales as $\propto T_{\rm R}^{-1}$ and $\propto \langle\phi\rangle^2$. The lines for $m_z = 10^{9}$\,GeV are flattened because of the kinetic mixing between $\phi$ and $z$. See the Appendix for details. } \label{fig:Ygrav} \end{center} \end{figure} \section{Constraints on inflation models in dynamical SUSY breaking} \label{sec:Polonyi} \subsection{Polonyi problem in dynamical SUSY breaking} In this section we discuss the Polonyi problem in the dynamical SUSY breaking scenario. Since the SUSY breaking field $z$ obtains a large mass and can have a charge, the cosmological problem associated with the $z$ coherent oscillation is much weaker than the conventional Polonyi problem in gravity-mediation models~\cite{Coughlan:1983ci}. Still, however, there may be significant contributions to the gravitino abundance from the decay of the $z$ coherent oscillations. Let us go into details. Below the dynamical scale $\Lambda$, the potential of the Polonyi field $z$ can be written as\footnote{ In the hybrid inflation, there will be a linear term $\sim H_{\rm inf} \mu^2 \la X \right\rangle_{\rm inf} z /M_P+ {\rm h.c.}$, where $\la X \right\rangle_{\rm inf}$ represents the inflaton field value during inflation. This however does not change the argument. } \begin{equation} V = bH^2|z|^2 + m_z^2|z|^2 - (2m_{3/2}\mu^2 z + {\rm h.c.}).
\end{equation} where $H$ denotes the Hubble parameter and $b$ is a constant of order unity, assumed to be positive. Let us estimate the Polonyi abundance in the two cases: $H_{\rm inf} \gg m_z$ and $H_{\rm inf} \ll m_z$, where $H_{\rm inf}$ denotes the Hubble scale during inflation. First we discuss the case of $H_{\rm inf} \gg m_z$. When $H$ is large enough, the minimum of $z$ is close to the origin. It is expected that $z$ begins to oscillate around the true minimum at $H\simeq m_z$ with an amplitude of \begin{equation} \langle z\rangle = \frac{2\sqrt{3} m_{3/2}^2 M_P}{m_z^2}. \label{SVEV} \end{equation} Thus the Polonyi abundance is given by \begin{equation} \frac{\rho_z}{s} = 3T_{\rm R}\left(\frac{m_{3/2}}{m_z}\right)^4, \end{equation} where $T_{\rm R}$ is the reheating temperature and we have assumed $T_{\rm R} \lesssim \sqrt{m_z M_P}$. Next we consider the opposite case, $H_{\rm inf} \ll m_z$. In this case, $z$ already sits at a position close to the minimum during inflation. The deviation from the true minimum at the end of inflation is estimated as \begin{equation} |\delta z| \simeq \frac{2\sqrt{3} m_{3/2}^2 M_P}{m_z^2}\left( \frac{bH_{\rm inf}^2}{m_z^2} \right) . \end{equation} Since $m_\phi \gg m_z$, the Polonyi field cannot track the change of the potential at the end of inflation and an oscillation of the Polonyi field is induced~\cite{Nakayama:2011wqa}. Then the Polonyi abundance is given by\footnote{ On the other hand, if $m_\phi \ll m_z$, the change of the Polonyi potential is adiabatic with respect to its mass scale and hence no significant oscillation is induced. } \begin{equation} \frac{\rho_z}{s} \simeq 3T_{\rm R}\left(\frac{m_{3/2}}{m_z}\right)^4 \left( \frac{b^2H_{\rm inf}^2}{m_z^2} \right). \end{equation} As shown in (\ref{StoGG}), the Polonyi field dominantly decays into a gravitino pair. The gravitino abundance produced by the Polonyi decay is calculated as \begin{equation} Y_{3/2}^{(z)} = \frac{2}{m_z}\frac{\rho_z}{s} \simeq 6\times 10^{-16} \epsilon \left( \frac{T_{\rm R}}{10^9\,{\rm GeV}} \right) \left( \frac{m_{3/2}}{100\,{\rm TeV}} \right)^4 \left( \frac{10^{9}\,{\rm GeV}}{m_z} \right)^5, \label{Ygrav_z} \end{equation} where \begin{equation} \epsilon = \begin{cases} 1 & {\rm for~~} H_{\rm inf} \gg m_z \\ H_{\rm inf}^2/m_z^2 & {\rm for~~}H_{\rm inf} \ll m_z. \end{cases} \end{equation} Therefore, the contribution to the gravitino abundance from the $z$ coherent oscillations is negligible for $m_z \gtrsim 10^{9}$\,GeV when $m_{3/2} \sim 10^{2}$--$10^3$\,TeV. We assume this in the following. Note that the VEV of $z$ (\ref{SVEV}) is smaller than $\Lambda$ in such a case, hence the discussion so far remains valid. This should be contrasted with the analysis of Ref.~\cite{Endo:2007cu}. \subsection{A model of dynamical SUSY breaking} \label{sec:DSB} Here we give an example of a dynamical SUSY breaking model with the desired structure to suppress the gravitino overproduction: the IYIT model~\cite{Izawa:1996pk}. We introduce chiral superfields $Q_i$ $(i=1-4)$, each of which transforms as a doublet representation under an SP(1) gauge group, which becomes strong at the dynamical scale $\Lambda$. We also introduce six gauge singlets $z_{ij}$ ($z_{ij}=-z_{ji}$) which couple to $Q_i$ as follows: \begin{equation} W = \lambda z_{ij} Q_i Q_j. \label{WSQQ} \end{equation} This form of the coupling is ensured by an SU(4)$_F$ flavor symmetry, under which both $Q_i$ and $z_{ij}$ are charged. The strong dynamics enforces a constraint on the $QQ$ pair as ${\rm Pf}(Q_iQ_j) = \Lambda^4$.
This contradicts the equation of motion of $z_{ij}$, $\partial W/\partial z_{ij}=0$. Hence, SUSY is broken dynamically. As a result, one combination of the $z_{ij}$, which we denote by $z$, obtains an $F$-term as \begin{equation} F_z = \frac{\lambda \Lambda^2}{(4\pi)^2}, \end{equation} where we have relied on naive dimensional analysis~\cite{Luty:1997fk}. Hereafter we assume that $z$ has a charge under some symmetry group. For example, it can have a global U(1) symmetry under which $z$ and $QQ$ transform as $z \to e^{i\theta} z$ and $(QQ) \to e^{-i\theta} (QQ)$.\footnote{ This symmetry is anomalous under the gauge group and broken down to a discrete subgroup, which is spontaneously broken below the scale $\Lambda$. Hence there may be a domain wall problem. This is avoided if SUSY is already broken during inflation so that domain walls are inflated away, or if there are small explicit symmetry breaking terms that destabilize domain walls. } Since $F_z$ is related to the gravitino mass through the relation $F_z=\sqrt{3}m_{3/2}M_P$, we can express the dynamical scale $\Lambda$ as \begin{equation} \Lambda = 8\times 10^{12}\,{\rm GeV} \frac{1}{\sqrt{\lambda}}\left(\frac{m_{3/2}}{100\,{\rm TeV}}\right)^{1/2}. \label{Lambda} \end{equation} Notice that this is close to the inflaton mass scale for many inflation models. The mass of $z$ is generated from the quantum corrected effective K\"ahler potential \begin{equation} K \supset -\frac{\lambda^4}{16\pi^2}\frac{|z|^4}{\Lambda^2}. \label{KS4} \end{equation} Therefore, $\tilde\Lambda$ in (\ref{Kz4}) is related to $\Lambda$ through the relation $\tilde\Lambda=(4\pi/\lambda^2)\Lambda$. This yields \begin{equation} m_z = \frac{2\lambda^3}{(4\pi)^3}\Lambda. \label{mS} \end{equation} Thus $m_z$ is much smaller than the dynamical scale $\Lambda$ for $\lambda \ll 4\pi$, while hadrons in the hidden sector have masses of $\sim \Lambda$. For a fixed gravitino mass, $\Lambda$ becomes larger and $z$ becomes lighter as $\lambda$ decreases, and so the gravitino production rate is suppressed (see Eq.~(\ref{PhiSS})). This hierarchy between $m_z$ and $\Lambda$ has important implications for the gravitino overproduction problem from inflaton decay. Note that the superpotential (\ref{WSQQ}) induces the three-body inflaton decay into $z QQ$. The decay rate is given by~\cite{Endo:2006qk} \begin{equation} \frac{1}{3}\Gamma(\phi \to z QQ) = \frac{1}{2}\Gamma(\phi \to \tilde z\tilde QQ) =\Gamma(\phi \to z \tilde Q \tilde Q) = \frac{\lambda^2}{768\pi^3} \left( \frac{\langle\phi\rangle}{M_P} \right)^2 \frac{m_\phi^3}{M_P^2}, \end{equation} where $Q (\tilde Q)$ represents the scalar (fermionic) component.\footnote{ Three-body decays involving the other heavier components of $z_{ij}$ are also possible for $m_\phi \gg \Lambda$. They will increase the gravitino abundance up to some numerical factor. } Gravitinos are produced by these processes and they should be added to the estimate (\ref{Ygrav}) as \begin{equation} \delta Y_{3/2}^{(\phi)}=\frac{3T_{\rm R}}{4m_\phi} \frac{ (10+12N_{3/2})\Gamma(\phi\to z\tilde Q\tilde Q) }{\Gamma_{\rm tot}}, \end{equation} for $m_\phi > 2\Lambda$. \subsection{Constraint on inflation models} \label{sec:const} Now let us derive constraints on inflation models from the gravitino overproduction.
We consider the following SUSY inflation models: new inflation~\cite{Izawa:1996dv,Asaka:1999jb,Nakayama:2011ri}, hybrid inflation~\cite{Copeland:1994vg,Nakayama:2010xf}, smooth-hybrid inflation~\cite{Lazarides:1995vr} and chaotic inflation~\cite{Kawasaki:2000yn}. Since we are interested in the heavy gravitino scenario, gravitinos decay well before BBN. The constraint comes from the requirement that LSPs produced by the decay of (non-)thermal gravitinos should not exceed the observed DM abundance: $m_{\rm LSP} (Y_{3/2}^{(\phi)} +Y_{3/2}^{\rm (th)}+Y_{\rm LSP}^{\rm (th)})< 4\times 10^{-10}$\,GeV, where $Y_{3/2}^{\rm (th)}$ and $Y_{\rm LSP}^{\rm (th)}$ denote the abundance of thermal gravitinos and the thermal relic abundance of the LSP, respectively~\cite{Bolz:2000fu}, and $m_{\rm LSP}$ the LSP mass. Hereafter we assume the Wino LSP. Then, for a Wino mass lighter than $\sim 2.7$\,TeV, the thermal relic density is too small to account for all the dark matter density. Fig.~\ref{fig:const_lam} shows constraints on inflation models in the $m_\phi$--$\langle\phi\rangle$ plane for several values of $\lambda$ in the IYIT SUSY breaking model. We have taken $m_{3/2}=100$\,TeV and assumed the AMSB relation for the Wino mass $m_{\tilde W} (\simeq 270\,{\rm GeV})$ in the top panel, while in the bottom panel $m_{3/2}=10^3$\,TeV and the Wino mass is set to $1$\,TeV. Note that the gaugino masses do not necessarily satisfy the AMSB relation in the pure gravity mediation~\cite{Ibe:2011aa}. In particular, the Wino mass receives the Higgs-Higgsino loop contribution, and it can be a few times heavier (or lighter) than the mass determined by the AMSB relation. We have fixed $T_{\rm R}$ so that the Winos emitted by the decay of thermally produced gravitinos account for about half of the present DM abundance. The WMAP normalization~\cite{Komatsu:2010fb} on the density perturbation is satisfied for all inflation models. We have included the effect of the constant term in the superpotential, $W_0$, on the inflaton dynamics. It changes the parameter space for the hybrid inflation model between $m_{3/2}=100$\,TeV and $10^3$\,TeV. For the new and smooth-hybrid inflation, the three lines correspond to $n=4,6,8$ from left to right. It is seen that the constraint is significantly relaxed for small $\lambda$ since $m_z$ becomes small and the gravitino production rate gets suppressed by a factor of $\sim (m_z/m_\phi)^4$ for $m_z \ll m_\phi$. It is remarkable that the hybrid inflation model and new inflation with $n>2$, and even the chaotic inflation model without a $Z_2$-symmetry, may be allowed. The abundance of the non-thermal gravitino is proportional to $m_{\tilde W} \la \phi \right\rangle^2/T_{\rm R}$. Thus, for the other parameters fixed, the constraints in the figure shift as $\sqrt{T_{\rm R}/m_{\tilde W}}$, as long as $m_{\tilde W} (Y_{3/2}^{\rm (th)}+Y_{\tilde W}^{\rm (th)})$ does not exceed about half of the observed dark matter abundance. For instance, if we decrease $T_{\rm R}$ by a factor of $10^2$, the constraint on $\la \phi \right\rangle$ becomes more severe by a factor of $10$ for the fixed inflaton mass. Note also that we cannot reduce the value of $\lambda$ further, since it tends to decrease the $z$ mass and correspondingly the Polonyi-induced gravitino problem becomes more severe (see Eq.~(\ref{Ygrav_z})).
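For orientation, the window (\ref{inequality}) can be checked directly from Eqs.~(\ref{Lambda}) and (\ref{mS}). A minimal numerical sketch (Python; the values of $\lambda$ are illustrative):
\begin{verbatim}
import numpy as np

m32 = 1.0e5                               # gravitino mass, 100 TeV [GeV]
for lam in (1.0, 0.3, 0.1):               # Yukawa coupling of Eq. (WSQQ)
    Lam = 8.0e12 / np.sqrt(lam) * np.sqrt(m32 / 1.0e5)   # Eq. (Lambda)
    m_z = 2.0 * lam**3 / (4.0 * np.pi)**3 * Lam          # Eq. (mS)
    print(lam, Lam, m_z)
# lam = 1.0: Lambda ~ 8e12 GeV,   m_z ~ 8e9 GeV
# lam = 0.3: Lambda ~ 1.5e13 GeV, m_z ~ 4e8 GeV
# lam = 0.1: Lambda ~ 2.5e13 GeV, m_z ~ 3e7 GeV
\end{verbatim}
Decreasing $\lambda$ widens the hierarchy $m_z \ll \Lambda$, but, as just noted, it eventually pushes $m_z$ below the $\sim 10^{9}$\,GeV required to keep the Polonyi contribution (\ref{Ygrav_z}) negligible.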
\begin{figure}[] \begin{center} \includegraphics[scale=0.9]{const_lam_m1e5.eps} \vskip 1cm \includegraphics[scale=0.9]{const_lam_m1e6.eps} \caption{ Constraints on inflation models in the $m_\phi$--$\langle\phi\rangle$ plane for several values of $\lambda$. The regions above the lines are excluded. We have taken $m_{3/2}=100$\,TeV and assumed the AMSB relation for the Wino mass ($\simeq 270$\,GeV) in the top panel, while in the bottom panel $m_{3/2}=10^3$\,TeV and the Wino mass is set to $1$\,TeV. We have fixed $T_{\rm R}$ so that the Winos produced by the decay of thermal gravitinos account for about half of the present DM abundance. } \label{fig:const_lam} \end{center} \end{figure} \section{Conclusions} \label{sec:conc} We have revisited the issue of gravitino overproduction in inflaton decay in light of the recent discovery of the 125\,GeV Higgs boson, which implies a relatively heavy gravitino: $m_{3/2}=10^2$--$10^3$\,TeV. It is found that the gravitino production rate is significantly suppressed in a dynamical SUSY breaking scenario if the following conditions are met. (1) The SUSY breaking field $z$ is charged under some symmetry, so that terms such as $|\phi|^2 z$ and $|\phi|^2 zz$ are forbidden. (2) There is a hierarchy among the gravitino mass, the $z$ mass $m_z$, and the dynamical scale $\Lambda$. Then, the gravitino overproduction problem in inflation models with $m_{3/2} \ll m_z \ll m_\phi \lesssim \Lambda$ is greatly relaxed. Thus many inflation models are consistent with the SUSY breaking scenario with $m_{3/2}=10^2$--$10^3$\,TeV. We have obtained the constraints on the inflation models in the pure gravity mediation assuming the IYIT SUSY breaking model. \section*{Acknowledgments} This work was supported by the Grant-in-Aid for Scientific Research on Innovative Areas (No. 21111006 [KN and FT], No.23104008 [FT], No.24111702 [FT]), Scientific Research (A) (No. 22244030 [KN and FT], 21244033 [FT], 22244021 [TTY]), and JSPS Grant-in-Aid for Young Scientists (B) (No.24740135) [FT]. This work was also supported by World Premier International Center Initiative (WPI Program), MEXT, Japan.
\section{The Science Pipelines Software ``stack''} The Large Synoptic Survey Telescope \citep[LSST;][]{2008arXiv0805.2366I} will take about 15\,TB of image data per night and after ten years of operations will have 15\,petabytes of catalog data for the final data release and 0.5\,exabytes of image data\footnote{\url{http://lsst.org/scientists/keynumbers}}. We are writing a suite of software packages to enable these data products to be created with sufficient quality and performance to meet the established science goals \citep{2009arXiv0912.0201L}. The science pipeline software will enable two key components of the data management system. The Alert Production pipelines (also known as \emph{Level 1}) process the data from the telescope and publish alerts to the community within 60 seconds of data acquisition \citep{2014htu..conf...19K}. Data Release Production (\emph{Level 2}) is responsible for the annual data releases which reprocess all the data each year to generate the best possible catalogs. Both these systems will be integrated with the Calibration Products Production that continuously calculates the best calibrations for the pipelines. The software will also provide a toolkit for user-supplied code that can be used to efficiently and effectively analyze LSST data as part of \emph{Level 3} processing or their own pipelines. The data management applications design is described in detail elsewhere \citep{O3-1_adassxxv,LDM-151}. The LSST data management science pipelines software system, commonly referred to as the ``stack'', is a collection of about 40 separate packages providing functionality such as data access libraries, data models representing exposures and catalogs, source detection algorithms, astrometry fitting, and photometry and measurement algorithms. The software is written in a mixture of Python and C++\footnote{Currently we support Python 2.7 but intend to also support Python 3. We are also migrating the C++ codebase to C++11.}, where the latter is used for CPU-intensive algorithms, or when the algorithms require access to complex data structures. The codebase consists of approximately 100,000 lines of Python and 110,000 lines of C++ (not including comment or blank lines and not counting expanded SWIG \citep[see e.g.][]{beazley2003automated} interface code). The science pipeline packages are namespaced (with an \texttt{lsst.}\ root) and grouped by their functionality. The core namespaces are defined as follows: \begin{description} \item[\textbf{daf}] The Data Access Framework is responsible for mediating between the archive resources and the application writer. The pipeline code has a completely abstract view of file I/O and only has to know how to deal with data objects representing fundamental types such as exposures and tables. Currently FITS is the internal format but the system is designed such that the internal format could be changed to HDF5, for example, and no changes would have to be made to the science pipeline code. This abstraction of the files from the code protects us against shifts in community preferences such as those discussed in \citet{2015ASPC..495...11M}. \item[\textbf{afw}] The Astronomy FrameWork provides the core classes for manipulating exposures and catalogs, including detecting sources and world coordinate handling. \item[\textbf{ip}] These are the image processing classes, including packages for instrument signature removal and image differencing.
\item[\textbf{meas}] The measurement packages include code for determining source properties and correcting astrometry and photometry. \item[\textbf{obs}] These classes provide instrument-specific knowledge to the software system, providing information to the data access framework to teach it how to interpret data from a range of optical cameras. The \texttt{obs} packages currently support data from some instruments on Subaru and CFHT, in addition to simulated LSST data. Work is ongoing to add support for DECam. \item[\textbf{pipe}] Pipeline infrastructure and tasks. A task is the name for a core processing component that can be chained with other tasks to build a pipeline. \end{description} More details concerning the history behind the development of the LSST software can be found in \citet{2010SPIE.7740E..15A}. \section{Summer 2015 release} Whilst the software is open source\footnote{\url{https://github.com/lsst}} and can be installed at any time, LSST makes formal releases of the science pipeline software at the end of each six month development cycle in the spring and autumn. The most recent release, labeled \emph{Summer 2015}, covered the summer development cycle and was released in September 2015. Detailed release notes can be found online;\footnote{\url{https://community.lsst.org/t/268}} here we provide a summary. \paragraph{Multi-band processing for coadds} New command-line tasks have been added for consistent multi-band processing of coadds. This new data processing flow carefully combines source measurements taken in multiple bands to guarantee consistent deblending across all bands, including when carrying out forced photometry, thereby enabling reliable color measurement, and ensuring that all sources are measured in each band, regardless of the bands where they are detected. \paragraph{Upgraded astrometry calculation} Previously astrometry was calculated using the astrometry.net code \citep[][ascl:1208.001]{2010AJ....139.1782L} and related catalogs distributed by LSST. To improve flexibility in the code, the astrometry fitter is now pluggable and includes an alternative implementation. \paragraph{Support for PSFEx} PSFEx (ascl:1301.001) is currently the state-of-the-art external package for point spread function (PSF) determination, used in projects such as DES \citep{2011ASPC..442..435B}. LSST wrappers were created such that PSFEx could be used as a plugin in place of the built-in PSF determiner. \paragraph{More efficient handling of large footprints} A footprint defines the pixels associated with a particular source or blended sources. This release saw significant improvements in performance when using very large footprints. \paragraph{Enable use of deblended heavy footprints in coadd forced photometry} Given the new multi-band processing for coadds we now have a reference catalog that is consistent across all bands. This catalog allows the use of the source's heavy footprints\footnote{A heavy footprint is a footprint that includes the pixel values.} to replace neighbors with noise in forced photometry, thus providing deblended forced photometry and consistent deblending across all bands. This provides much better colors for blended objects as well as measurements for drop-out objects that do not get detected in the canonical band. This functionality has been enabled for forced coadd photometry.
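All of the data products above (coadds, calibrated exposures, forced-photometry catalogs) are retrieved and persisted through the Data Access Framework described in the package overview. Schematically, application code asks the butler for a dataset by name instead of opening files; the following is a hedged illustration only (the repository path is hypothetical, and dataset names and dataId keys such as \texttt{visit} and \texttt{ccd} vary between \texttt{obs} packages):
\begin{verbatim}
from lsst.daf.persistence import Butler

# The butler mediates all I/O: callers name a dataset type and a dataId
# and never see the underlying file format (FITS today, perhaps HDF5 later).
butler = Butler("/path/to/repo")                    # hypothetical repository
calexp = butler.get("calexp", visit=1234, ccd=56)   # calibrated exposure
sources = butler.get("src", visit=1234, ccd=56)     # measured source catalog
print(len(sources), calexp.getWcs())
\end{verbatim}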
\paragraph{Significant improvements in the table class} The AFW package has a native C++ implementation of a class for manipulating table data for handling the results of detection and measurement algorithms. This release comes with some major enhancements to the internals of \texttt{afw.table} and, in particular, much better support for compound fields (such as Right Ascension/Declination tuples). \paragraph{Device independent displays} DS9 \citep[][ascl:0003.002]{2011ASPC..442..633J} is no longer hard-wired into the software and the choice of display tool is now user configurable. The intention is for the next release to include support for the Firefly visualization tool \citep{O10-1_adassxxv}. \section{Obtaining the software} The software is known to work on CentOS 6 and 7, recent Debians, and Mac OS X Yosemite and Mavericks (this release is known not to work on Mac OS X El Capitan due to interactions between library path environment variables and the new System Integrity Protection feature; this has been fixed in the current development version), with a C++11-compatible compiler such as GCC 4.8.3 or later, or Apple \texttt{clang} 6. The recommended way to install the software from source is to use the \texttt{eups} distribution installation system \citep{EUPS}. Experimental binary releases have also been made available using a CernVM File System \citep[CernVM-FS;][]{2015JPhCS.608a2031M}. Full details on both these options are available on the release notes page. \acknowledgements The Summer 2015 release of the LSST software stack is the result of the efforts of the many people who are part of the Data Management Team at LSST, as well as outside contributors. This material is based upon work supported in part by the National Science Foundation through Cooperative Support Agreement (CSA) Award No. AST-1227061 under Governing Cooperative Agreement 1258333 managed by the Association of Universities for Research in Astronomy (AURA), and the Department of Energy under Contract No. DE-AC02-76SF00515 with the SLAC National Accelerator Laboratory. Additional LSST funding comes from private donations, grants to universities, and in-kind support from LSSTC Institutional Members.
\section{Introduction} Cataclysmic variables (CVs) are close binary stars in which a white-dwarf (WD) primary accretes material from an accretion disc fed by a red-dwarf secondary (see \cite{HellierBook} and \cite{Warner} for reviews). Theory predicts that the mass transfer from the secondary to the primary is accompanied by a shortening of the orbital period. This is believed to continue until the secondary star drops below the hydrogen-burning limit ($\sim0.08 M_{\odot}$) and becomes a degenerate, brown-dwarf like object, at which point the period begins to increase. Models show that up to 70\% of all CVs in the Galaxy should have brown-dwarf secondary stars at the present time, yet not a single such object had been positively identified until the discovery by \citet{Science1035} of a secondary star of mass $M_2=0.052\pm0.002 M_{\odot}$ in the CV SDSS\,J103533.03+055158.4. Since then, two more brown-dwarf mass donors have been discovered by \citet{Stu1433}: SDSS\,J150137.22+550123.3 and SDSS\,J150722.30+523039.8. It now appears as if the missing population of post-period minimum CVs has finally been identified. The results of \citet{Stu1433} and \citet{Science1035} rely on an eclipse light-curve fitting technique applied to broad-band photometric data. The assumptions underlying this technique appear to be robust (see \cite{Stu1433} for a discussion), but the results are of such significance that it is important they are independently verified. In this paper we present time-resolved spectroscopy of the CV SDSS\,J143317.78+101123.3 (hereafter SDSSJ1433), obtained with the aim of measuring the radial-velocity semi-amplitude of the white dwarf ($K_1$) and comparing this direct, dynamical measurement of $K_1$ with that predicted by the light-curve model of \citet{Stu1433}. The observations were obtained with an electron-multiplying CCD (EMCCD), to the best of our knowledge the first time that such a device has been used for astronomical spectroscopy. \section{Observations and Reduction\label{sec-obs}} SDSSJ1433 is a challenging target as it is faint ($g^{\prime}=18.5$) and has a short period ($P$=78.1 min). This means that spectra taken with a conventional CCD would be swamped by read noise, and for this reason we chose to use an Electron Multiplying CCD \citep{Mackay}. These devices are able to amplify the signal to such an extent that the readout noise is rendered negligible. We observed SDSSJ1433 using QUCAM2\footnote{http://www.ing.iac.es/Engineering/detectors/qucam2.html}, an EMCCD-based camera mounted on the red arm of the ISIS spectrograph\footnote{http://www.ing.iac.es/Astronomy/instruments/isis/index.html} on the 4.2m William Herschel Telescope, La Palma. QUCAM2 employs a 1k $\times$ 1k CCD201 detector manufactured by E2V which has a frame-transfer architecture, providing a dead time of only 12\,ms between each frame. With the low predicted $K_1$ it was decided to use the highest dispersion grating available, the R1200R. This provided a wavelength range of 6480--6700\AA{ }at a spectral resolution of 32 km s$^{-1}$ (0.7\AA) with a slit width of 1\arcsec. On 2008 April 16, we obtained 628 spectra, each of 30s exposure time. The moon was full, the seeing was approximately 0.7--1\arcsec\ and the sky transparency was variable. A nearby comparison star was also placed on the slit to correct for slit losses and changes in transparency. Arc spectra were taken every 40 minutes for wavelength calibration. No flux standard was observed.
This is justified because there was intermittent cloud, we only observed over a narrow wavelength range and we were only interested in extracting velocities from the data. The spectra were extracted using a simple spatial bin across three windows within each frame covering the target, the reference star and the large area of sky in between. The sky-subtracted spectra were cast into 40 phase bins using the ephemeris of \citet{Stu1433}, and then corrected for slit losses and transparency variations by dividing by the integrated flux in the comparison star spectra. \section{Results \label{sec-results}} \subsection{Averaged spectrum} Figure~\ref{fig:averagespectrum} shows the average spectrum of SDSSJ1433. The most prominent feature is the broad, double-peaked emission line of H$\alpha$. The equivalent and velocity widths of the line are given in Table~\ref{results_table}. A weaker feature due to He\,{\sc i} $\lambda$6678 is also visible. The average spectrum is typical of other eclipsing dwarf novae below the period gap, e.g. WZ Sge \citep{Skidmore}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{AverageSpecDiag} \caption{The average spectrum of SDSSJ1433, obtained by combining all 628 spectra.} \label{fig:averagespectrum} \end{center} \end{figure} \begin{table} \begin{center} \caption{Full-width at zero intensity, full-width at half maximum, peak-to-peak separation and equivalent width of the H$\alpha$ emission line shown in Figure~\ref{fig:averagespectrum}. The radial velocity semi-amplitude of the WD and the systemic velocity derived in Sections \ref{sec-diag} and \ref{light-centre} are also listed.} \begin{tabular}{ l l } \hline FWZI & 5000 $\pm$ 500 km/s \\ FWHM & 2200 $\pm$ 200 km/s \\ Peak Separation & 1300 $\pm$ 200 km/s \\ Equivalent Width & 147.2\ $\pm$ 0.5 \AA \\ $K_1$ & 34 $\pm$ 4 km/s \\ $\gamma$ & 75 $\pm$ 10 km/s \\ \hline \end{tabular} \label{results_table} \end{center} \end{table} \subsection{Continuum and H$\alpha$ light curves \label{sec-lc}} We computed the light curve of the continuum by summing the flux in line-free portions of each spectrum. We then fitted and subtracted the continuum of each spectrum, and summed the residual flux in the H$\alpha$ emission line. The resulting light curves are shown in Figure~\ref{fig:lightcurves}. Figure~\ref{fig:lightcurves} shows that H$\alpha$ experiences broader eclipses than the continuum. The eclipse depth for H$\alpha$ is $\sim 50$ \%, while for the continuum it is $\sim 75$ \%. This is consistent with H$\alpha$ being emitted in the optically thin outer parts of the accretion disc and the hotter, optically thick, inner parts of the disc being responsible for the majority of the continuum emission. This picture is confirmed by analysing the light curves of the wings and core of the line separately, which show a deeper eclipse in the former than the latter. An orbital hump prior to eclipse is also apparent in the H$\alpha$ light curve of figure~\ref{fig:lightcurves}, caused by the changing aspect of emission from the bright spot. \begin{figure} \includegraphics[width=0.45\textwidth]{LightCurvesDiag} \caption{Light curves of the continuum only and the H$\alpha$ emission line.} \label{fig:lightcurves} \end{figure} \subsection{Trailed spectra and Doppler tomogram} The phase-binned, continuum-subtracted H$\alpha$ profile is shown as a trail in the lower panel of Figure~\ref{fig:trail}. 
The primary eclipse around phase 0 is clearly seen, as is the rotational disturbance where the blue peak of the emission line is eclipsed prior to the red peak. The bright spot is clearly visible as an S-wave moving between the two peaks. The orbital modulation of the wings of the emission line can arguably just be made out. There is also evidence for a \textquoteleft shadow\textquoteright\ on the blue edge of the S-wave around phase 0.25 which appears remarkably similar to the models of gas stream overflow computed by \citet{HellierGas}. We computed the Doppler tomogram of the H$\alpha$ trail using Fourier-filtered back projection (see \cite{Marsh2001} for a review). As expected, the dominant feature is the ring of emission representing the accretion disc which appears to be centred on the expected position of the WD. The bright spot lies on this ring at a velocity intermediate between the free-fall velocity of the gas stream and the Keplerian disc velocity along its path, as observed in some other CVs, e.g. U Gem \citep{MarshUG} and WZ Sge \citep{WZSgt}. There is no evidence for emission from the secondary star, nor for asymmetries in the disc emission due to, for example, spiral shocks. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{TrailerDiag} \caption{\textit{Bottom:} the H$\alpha$ trailed spectrum, with two cycles plotted for clarity. \textit{Top:} the Doppler map of H$\alpha$. The three crosses represent the centre of mass of the secondary star (upper cross), the system (middle cross) and the WD (lower cross). The Roche lobe of the secondary star, the predicted gas stream trajectory and the Keplerian velocity of the disc along the stream are also shown, calculated assuming the system parameters given in \citet{Stu1433}. The circular tick marks represent steps of 0.1 L$_1$ (the inner Lagrangian distance) towards the WD. The asterisks represent the points of closest approach to the WD.} \label{fig:trail} \end{center} \end{figure} \subsection{The diagnostic diagram \label{sec-diag}} The WD primary star orbits with velocity $K_1$ around the centre of mass of the system, resulting in a periodic Doppler shift of any light it emits. If we imagine the simplified case of an ideal accretion disc centred on the WD with a perfectly symmetric brightness distribution, then the velocity shift of the emission lines generated in the disc will be subject to the same modulation as the WD. If we measure the line centroids and plot them as a function of orbital phase then we should see a sine wave corresponding to the orbital motion of the WD. More realistically, however, if we include the light emitted by the bright spot, we can see that calculated line centroids will, rather than precisely following the WD, be perturbed in the direction of the bright spot. The resultant radial velocity (RV) curve will not only have an excessive amplitude but will also be offset in phase with respect to the true motion of the WD. It is thus important to exclude the bright spot contribution by only examining the light emitted in the wings of the line profile. These emissions correspond to material orbiting within the disc at small radii from the primary star, where they are only minimally contaminated by the bright spot. If the RV curve that we obtain using this technique has the correct phase offset relative to the primary eclipse then we can be confident of an accurate semi-amplitude measurement. 
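The measurement procedure adopted below can be summarized schematically: the radial velocity of the wings is the velocity at which the fluxes transmitted by two Gaussian bandpasses, offset symmetrically from the trial line centre, balance, and the resulting velocities are then fitted with a sine function. A minimal sketch (assuming Python with \texttt{numpy}; an illustration only, not the code actually used):
\begin{verbatim}
import numpy as np

C = 2.9979e5           # speed of light [km/s]
HALPHA = 6562.8        # H-alpha rest wavelength [Angstrom]

def double_gauss_velocity(wave, flux, sep, width):
    """Double-Gaussian centroid of the line wings [km/s]: the velocity
    at which the fluxes seen through two Gaussian bandpasses, offset by
    +/- sep/2 from the trial centre, are equal (zero crossing)."""
    u = C * (wave / HALPHA - 1.0)               # pixel velocities [km/s]
    vgrid = np.arange(-2000.0, 2000.0, 5.0)     # trial velocities [km/s]
    resp = np.array([np.sum(flux * (
        np.exp(-0.5 * ((u - (v - 0.5 * sep)) / width) ** 2) -
        np.exp(-0.5 * ((u - (v + 0.5 * sep)) / width) ** 2)))
        for v in vgrid])
    i = np.flatnonzero(np.diff(np.sign(resp)))[0]   # bracket zero crossing
    return vgrid[i] - resp[i] * (vgrid[i + 1] - vgrid[i]) \
        / (resp[i + 1] - resp[i])

def fit_sine(phase, v):
    """Least-squares v = gamma + A sin(2 pi phase) + B cos(2 pi phase);
    returns gamma, K = sqrt(A^2 + B^2) and the phase offset phi0."""
    M = np.column_stack([np.ones_like(phase),
                         np.sin(2 * np.pi * phase),
                         np.cos(2 * np.pi * phase)])
    gamma, A, B = np.linalg.lstsq(M, v, rcond=None)[0]
    return gamma, np.hypot(A, B), np.arctan2(B, A) / (2.0 * np.pi)
\end{verbatim}
Repeating this for a range of Gaussian separations yields the $(K, \phi_0, \gamma)$ curves that are assembled into the diagnostic diagram described next.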
To measure the radial velocities of the wings of the H$\alpha$ emission line we used the double-Gaussian technique of \citet{schneider+young}. The best signal-to-noise ratio was obtained using a Gaussian width of 350 km~s$^{-1}$ and the separation of the Gaussians was then varied between 1500 and 3000 km~s$^{-1}$ in steps of 100 km~s$^{-1}$ so as to explore annular regions of the accretion disc of decreasing radii. At each Gaussian separation the resulting radial velocities were fitted with a sine function. The phase offset ($\phi_0$), semi-amplitude ($K$), systemic velocity ($\gamma$) and fractional error in the amplitude ($\sigma_K / K$) of the sine fits were then plotted against the Gaussian separation as a diagnostic diagram \citep{Shafter}, as shown in Figure~\ref{fig:diagnostic}. An example of the RV data and sine fit is shown in Figure~\ref{fig:RVcurve}, for a Gaussian separation of 2900 km s$^{-1}$. The standard way of obtaining $K_1$ from a diagnostic diagram is to use the value corresponding to where the $\sigma_K / K$ curve is at a minimum. If we do this we find a value of $K_1=44$~km~s$^{-1}$. However, it can be seen that the corresponding value of $\phi_0$ is non-zero and declines towards zero at higher Gaussian separations. As pointed out by \citet{Marsh88}, this highlights the main drawback of diagnostic diagrams -- the derived $K_1$-value is noise dependent. The best solution is to use those data points corresponding to the highest Gaussian separations to extrapolate to a $K_1$-value whose corresponding $\phi_0=0$. This requires the construction of a \textquoteleft Light Centre\textquoteright\ diagram \citep{Marsh88}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{DiagnosticDiag} \caption{The diagnostic diagram for H$\alpha$ -- see text for details.} \label{fig:diagnostic} \end{center} \end{figure} \begin{figure} \includegraphics[width=0.450\textwidth]{RVcurveDiag} \caption{The measured RV data for a Gaussian separation of 2900 km s$^{-1}$ and the resulting sine curve fit (solid line). Only data points marked by a square were included in the fit, i.e. the rotational disturbance was not taken into account. The dashed line shows the sine curve corresponding to the motion of the WD as derived from the light centre diagram presented in Figure~\ref{fig:lightcentre}.} \label{fig:RVcurve} \end{figure} \subsection{The light centre method \label{light-centre}} At no point on the diagnostic diagram does the sine fit have a zero phase offset relative to the WD, although at the extremes of the H$\alpha$ wings it gets close. The light centre method, described by \citet{Marsh88}, offers a way of extrapolating the data to estimate what the amplitude of the RV curve would be at $\phi_0 = 0$. Just as the Doppler map is a view of an emission line transformed into velocity space, so the light centre diagram, shown in Figure~\ref{fig:lightcentre}, is a velocity space projection of the RV data plotted in the diagnostic diagram. \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth]{LightCentreDiag} \caption{The light centre diagram for H$\alpha$, where $-K \sin\phi_0$ is plotted on the abscissa and $-K \cos\phi_0$ on the ordinate. A linear fit to the points marked by squares is also shown, extrapolated to where it intercepts the $y$-axis. The smallest Gaussian separation corresponds to the top-most point and the largest to the right-most point.
} \label{fig:lightcentre} \end{center} \end{figure} The radial velocity of the WD can be determined from a light centre diagram by extrapolating a linear fit to the RV data points and reading off the $y$-axis intercept. To avoid contamination from the bright spot we only included the nine largest Gaussian separations in the linear fit. The resulting intercept is $K_1 = 34 \pm 4$ km s$^{-1}$, where the quoted error corresponds to a combination of the statistical error and the systematic error due to the exact choice of which points to exclude from the linear fit. The RV curve corresponding to this value of $K_1$ and $\phi_0 = 0$ is shown as the dashed curve in Figure~\ref{fig:RVcurve}. We are confident that our measurement of $K_1$ accurately reflects the motion of the WD in SDSSJ1433 for the following reasons. First, the bottom panel of Figure~\ref{fig:diagnostic} clearly shows a trend to zero phase offset at large Gaussian separations, as one would expect if one is measuring the real radial velocity of the WD in combination with a steadily decreasing contribution from the bright spot. In such instances, the light centre technique has been proven to give reliable results (e.g. IP Peg, \citealt{Marsh88}; U Sco, \citealt{Thoroughgood01}). Second, the top panel of Figure~\ref{fig:diagnostic} shows that the radial velocity semi-amplitude is remarkably independent of the Gaussian separation, indicating that our systematic errors are indeed small. \section{Conclusions \label{sec-conclusion}} We have measured the radial velocity amplitude, $K_1 = 34 \pm 4$~km s$^{-1}$, of the WD in the short-period CV SDSSJ1433 by analysing the RV motion of the extreme wings of the H$\alpha$ emission line. In order to account for the velocity contamination from the conspicuous emission of the bright spot, the light centre technique was used. The measured value of $K_1$ is consistent with the predicted value of $K_1=35 \pm 2$ km s$^{-1}$ from the light-curve fitting technique of \citet{Stu1433}. Our result therefore supports the validity and accuracy of the purely photometric technique of measuring masses and argues in favour of the presence of a brown-dwarf donor in SDSSJ1433, and by implication in SDSSJ1501 and SDSSJ1507 also. The results presented in this paper also demonstrate that the combination of a large-aperture telescope, an intermediate-resolution spectrograph and an EMCCD is uniquely capable of tackling this type of photon-starved observation. \section*{Acknowledgments} We thank Tom Marsh for the use of his \texttt{MOLLY} and \texttt{TRAILER} programs and Stuart Littlefair for useful discussions. The observations were made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'\i sica de Canarias.
\section{FITS serialization} \subsection{Mapping Spectrum to FITS} \paragraph {\bf FITS serialization design:} We define a reference serialization of this data model as a FITS binary table. The table represents a spectrum or photometry point as a single row of a table. This serialization is a special case of an SED (or spectral association) serialization which uses one row per spectral segment; in that case, variable-length arrays may be used to contain the array quantities. In each case below where a `variable length array' is specified, fixed length arrays are suitable for a single spectrum or for multiple spectra where all the arrays are the same length, but readers should be prepared to handle the variable length case. For SEDs, another approach would be to have one FITS HDU per spectrum or photometry point. However this was rejected as unworkable, as the overhead of 5760 bytes (2 FITS blocks) per photometry point would inflate the data for the photometry-only SED case by factors of around 50--100. \paragraph {\bf Standard table keywords:} In Table F.1 we give the mapping of data model fields to FITS columns and keywords. For each column, the standard keywords TTYPEn, TUNITn, TFORMn should be provided. The order of keywords and columns is not significant, except that it is strongly recommended that RA and Dec be in adjacent columns or keywords. Additional keywords and columns which are not part of the model (including other conventions such as TDMINn) are allowed to be present, but are not guaranteed to be propagated by VO software. \paragraph {\bf Keywords and columns: `Greenbank' convention} In Table F.1 we give single metadata items as keywords; arrays of data (members of the Spectrum.Data classes) are stored as columns, and Table F.1 gives the column name, i.e. the value of the keyword TTYPEn. The 'Source' column in Table F.1 indicates if the name (if keyword) or value (if column) is a FITS standard (S), an existing convention (C) such as one of the HEA conventions, or is newly invented (N). In some cases, the column data arrays may have the same value for each data point. In this case we may use the 'Greenbank' convention in which the column is omitted and replaced by a keyword whose name is the same as the column. Further, in SED applications when multiple spectrum data lines are present, some metadata may differ from line to line and be promoted from keyword to column. Therefore, implementors should check both keywords and column names for the appropriate tokens. \paragraph {\bf TUTYP and TUCD keywords:} We map the FITS columns to the model by using TUTYPn keywords. TUTYPn (string-valued) gives the data model field name (UTYPE string) for the data in column n. Thus, the x and y axes (i.e. spectral coordinate and flux-like axes) of the spectrum have TUTYPn values of Spectrum.Data.SpectralAxis.Value and Spectrum.Data.FluxAxis.Value respectively. Different kinds of x and y axis are identified by the Spectrum.Data.SpectralAxis.UCD and Spectrum.Data.FluxAxis.UCD data model fields, which are mapped to TUCDn keywords. TUCDn (string valued) gives the UCD corresponding to the data in column n. Both TUTYPn and TUCDn should be present for any column which corresponds to a Spectrum data model field; they are optional for any additional data columns which are not part of the Spectrum model. The units of spectral coordinate and flux are given in the TUNITn keys of the corresponding data columns.
There is no separate provision for units of Char.SpectralAxis or Char.FluxAxis; these are required to be the same as for the data. The TTYPEn keywords for the x and y columns are free, but it is strongly recommended that (for consistency of style with WCS Paper 3) the values for the x axis have for their first 4 characters 'WAVE', 'FREQ' and 'ENER' for the case of wavelength, frequency and energy respectively. We also recommend the value 'FLUX' for the y axis, where appropriate. Nevertheless, it is the TUTYPn and TUCDn keywords that should be used to interpret the semantics of the file. \paragraph {\bf WAVE, ENER, and FREQ} In the header metadata, such as the Spectrum.Char entries, we use SPEC\_ keywords to denote the spectral axis generically, but in the table columns (Spectrum.Data entries) we use the terms WAVE\_, ENER\_, and FREQ\_ as appropriate. Thus if the Spectrum.Data.SpectralAxis.Value field is WAVE, the SpectralAxis.Accuracy.BinLow field should be WAVE\_LO; if Value is FREQ, BinLow should be FREQ\_LO. We believe the small extra parsing overhead is worth it for the readability and interoperability (since these names have been used in existing FITS files) of the crucial main data table. \paragraph {\bf Char and Data keywords:} The model contains both Characterization metadata, giving overall typical values for quantities such as spectral resolution, and the Data object, which can include such quantities on a per-pixel basis. In some cases, the FITS serialization allows the same token for both Char (as a keyword) and Data (as a column name). The name, unit and UCD fields for Char.FluxAxis and Char.SpectralAxis are required to be the same as for Data.FluxAxis and Data.SpectralAxis. The case of TimeAxis is a little different, since there may be no Data.TimeAxis present, and some HEA conventions already exist for recording TimeAxis characterization, notably the TIMEUNIT keyword. Note that TIMESYS, if present, must be TT. \paragraph {\bf VOCLASS keyword:} We add a new keyword VOCLASS to describe the VO object represented by the FITS table. The value of VOCLASS should be 'SPECTRUM 1.00'. \paragraph {\bf WCS table keywords:} The spectral coordinate may also be identified by optional 1Sn\_1 and 1CTYPn keywords as per WCS Paper 3. Table 9 of that paper implies that each data column which is a function of the spectral coordinate needs a pair of such keywords. Applications which implement the spectrum data model may ignore the WCS keys and interpret the file by recognizing directly (via TUTYPn) which column is the spectral coordinate and that FLUX, etc. are functions of it, but the WCS keys give a general FITS application a chance at making sense of the file. In the example, TTYPE5='ERR\_LO' and TUTYP5='Spectrum.Data.FluxAxis.Accuracy.StatErrLow'; the WCS keyword 1CTYP5='WAVE-TAB' indicates that the data in column 5 is a function of wavelength, and that the wavelengths are in a lookup table. The WCS keyword 1S5\_1='WAVE' indicates that the lookup table for the x-axis of column 5 (in this case, the wavelengths that the ERR\_LO values correspond to) is in the column with TTYPEn='WAVE', in this case column 1. Note that APERTURE has also been used elsewhere as string-valued to indicate a named aperture; this is not allowed here. The mid-exposure value is a required field for the internal data model; however, it can be calculated from TSTART and TSTOP if they are present, and is then optional for the FITS serialization.
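As a purely illustrative sketch (not part of the standard), the following shows how the TUTYPn, TUCDn, VOCLASS and WCS table keywords fit together when writing a minimal one-row spectrum table. It assumes the astropy.io.fits library, a non-normative choice, and the column values are invented for the example.
{\small
\begin{verbatim}
# Illustrative sketch only: writing a minimal Spectrum-style binary
# table with astropy.io.fits (an assumed, non-normative library).
import numpy as np
from astropy.io import fits

wave = np.array([[3200.0, 3210.0, 3220.0]])        # angstrom; 1 row x 3 points
flux = np.array([[1.48e-12, 1.52e-12, 0.38e-12]])  # erg cm**(-2) s**(-1) angstrom**(-1)

cols = fits.ColDefs([
    fits.Column(name='WAVE', format='3E', unit='angstrom', array=wave),
    fits.Column(name='FLUX', format='3E',
                unit='erg cm**(-2) s**(-1) angstrom**(-1)', array=flux),
])
hdu = fits.BinTableHDU.from_columns(cols, name='SPECTRUM')
hdr = hdu.header
hdr['VOCLASS'] = 'SPECTRUM 1.00'
# TUTYPn maps each column to a data model field; TUCDn gives its UCD.
hdr['TUTYP1'] = 'Spectrum.Data.SpectralAxis.Value'
hdr['TUCD1'] = 'em.wl'
hdr['TUTYP2'] = 'Spectrum.Data.FluxAxis.Value'
hdr['TUCD2'] = 'phot.flux.density;em.wl'
# Optional WCS Paper 3 keys: column 2 (FLUX) is a function of the
# spectral coordinate tabulated in the column named WAVE.
hdr['1CTYP2'] = 'WAVE-TAB'
hdr['1S2_1'] = 'WAVE'
hdu.writeto('spectrum.fits', overwrite=True)
\end{verbatim}
}
A reading application would invert the same mapping: locate the spectral and flux columns by scanning the TUTYPn values, then interpret their physics from TUCDn and their units from TUNITn.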
The dataset start and stop wavelength may be provided in standard FITS as TDMINn/TDMAXn where n is the number of the column with the wavelengths. For FITS, CoordSys.SpaceFrame.ucd is required to be the same as Char.SpatialAxis.ucd and CoordSys.TimeFrame.ucd is required to be equal to the default value "time". CoordSys.SpectralFrame.ucd is required to be the same as Data.SpectralAxis.ucd, present as a TUCDn value. To express the CoordSys.RedshiftFrame, we recommend using a FITS WCS system with suffix 'Z' applied to the spectral coordinate axis, when appropriate. CoordSys.RedshiftFrame.DopplerDefinition is represented by the first four characters of TCTYPnZ and should have the values VRAD, VOPT, ZOPT or VELO, as per the convention for spectral CTYPE keywords in Paper III of the FITS WCS system. CoordSys.RedshiftFrame.RefPos is represented by SPECSYSZ and should have values as listed in Paper III of the FITS WCS system. \vskip 0.2in {\small \colorbox{iblue}{ \begin{minipage}[l]{6.5in} \begin{tabular}{llll} \hline \multicolumn{3}{c}{Table F.1: FITS keywords for VO Spectrum}\\ \hline Data model field & FITS keyword & Source & Value if fixed \\ \hline & & \\ DataModel & VOCLASS & N & SPECTRUM 1.0\\ Length & DATALEN & N\\ Type & VOSEGT & N\\ CoordSys.ID &VOCSID & N\\ CoordSys.SpaceFrame.Name &RADECSYS &S & e.g. ICRS or FK5 \\ CoordSys.SpaceFrame.Equinox&EQUINOX &S & e.g. 2000.0 \\ CoordSys.SpaceFrame.ucd &SKY\_UCD & & pos.eq \\ CoordSys.SpaceFrame.RefPos &SKY\_REF &S &\\ CoordSys.TimeFrame.Name &TIMESYS &C & TT \\ CoordSys.TimeFrame.ucd& - &C & time \\ CoordSys.TimeFrame.Zero& MJDREF & C&default 0.0\\ CoordSys.TimeFrame.RefPos & & & (not used) \\ CoordSys.SpectralFrame.RefPos & SPECSYS&S & (see below)\\ CoordSys.SpectralFrame.ucd& TUCDn &C & = Data.SpectralAxis.ucd \\ CoordSys.SpectralFrame.Redshift & REST\_Z & N & \\ CoordSys.SpectralFrame.Name &SPECNAME & & \\ CoordSys.RedshiftFrame.Name &ZNAME & & \\ CoordSys.RedshiftFrame.DopplerDefinition &TCTYPnZ & & \\ CoordSys.RedshiftFrame.RefPos & SPECSYSZ & S & \\ Curation.Publisher &VOPUB &N \\ Curation.Reference& VOREF &N \\ Curation.PublisherID &VOPUBID&N \\ Curation.Version&VOVER&N\\ Curation.ContactName&CONTACT&N \\ Curation.ContactEmail&EMAIL &N \\ Curation.Rights & VORIGHTS &N\\ Curation.Date & VODATE &N\\ Curation.PublisherDID &DS\_IDPUB & N\\ Target.Name & OBJECT &S \\ Target.Description & OBJDESC&N\\ Target.Class & SRCCLASS &N \\ Target.spectralClass & SPECTYPE&N\\ Target.redshift& REDSHIFT & C\\ Target.pos & RA\_TARG, DEC\_TARG& C\\ Target.VarAmpl & TARGVAR & N\\ DataID.Title & TITLE & C\\ DataID.Creator & AUTHOR &S \\ DataID.Collection & COLLECTn& N\\ DataID.DatasetID& DS\_IDENT & N\\ DataID.CreatorDID& CR\_IDENT & N\\ DataID.Date & DATE & S\\ DataID.Version & VERSION& C \\ DataID.Instrument & INSTRUME & S\\ DataID.CreationType & CRETYPE&N\\ DataID.Logo&VOLOGO & N\\ DataID.Contributor&CONTRIBn&N\\ DataID.DataSource&DSSOURCE & N\\ DataID.Bandpass & SPECBAND & N\\ \end{tabular} \end{minipage} } \colorbox{iblue}{ \begin{minipage}[l]{6.5in} \begin{tabular}{llll} \hline Data model field & FITS keyword & Source & Value if fixed \\ \hline & & \\ Derived.SNR & DER\_SNR & N\\ Derived.redshift.value & DER\_Z & N\\ Derived.redshift.statError& DER\_ZERR&N\\ Derived.redshift.Confidence& DER\_ZCNF&N\\ Derived.VarAmpl & DER\_VAR & N\\ TimeSI &TIMESDIM &N &\\ SpectralSI & SPECSDIM & N&\\ FluxSI & FLUXSDIM & N&\\ \\ \multicolumn{4}{c}{Omitted Char fields, values inherited from Spectrum.Data}\\ \\ Char.FluxAxis.Name & - & - & TTYPEn for FLUX \\ 
Char.FluxAxis.Unit & - &- & Same as Data\\ Char.FluxAxis.ucd & - & - & Same as Data\\ Char.SpectralAxis.Name & - & - & Same as Data\\ Char.SpectralAxis.Unit & - & - & Same as Data\\ Char.SpectralAxis.ucd & - & - & Same as Data\\ Char.TimeAxis.Name & - & - & TIME \\ Char.TimeAxis.ucd & - & -& time\\ Char.SpatialAxis.Name & - & & (not used) \\ Char.SpatialAxis.Unit & - & & deg \\ \\ \multicolumn{4}{c}{Char Fields which are the same as for Spectrum.Data}\\ \\ Char.FluxAxis.Accuracy.StatError & STAT\_ERR &C\\ Char.FluxAxis.Accuracy.SysError & SYS\_ERR & C\\ Char.TimeAxis.Accuracy.StatError & TIME\_ERR & N\\ Char.TimeAxis.Accuracy.SysError & TIME\_SYE & N\\ Char.TimeAxis.Resolution & TIME\_RES & N\\ \\ \end{tabular} \end{minipage} } \colorbox{iblue}{ \begin{minipage}[l]{7.0in} \begin{tabular}{lllp{1.5in}} Data model field & FITS keyword& Source & Value if fixed \\ \hline & & \\ \multicolumn{4}{c}{Char Fields which are only present in Char}\\ \\ Char.FluxAxis.Calibration & FLUX\_CAL& N\\ Char.SpectralAxis.Calibration & SPEC\_CAL&N \\ Char.SpectralAxis.Coverage.Location.Value & SPEC\_VAL &N\\ Char.SpectralAxis.Coverage.Bounds.Extent & SPEC\_BW & N\\ Char.SpectralAxis.Coverage.Bounds.Start & TDMINn & \\ Char.SpectralAxis.Coverage.Bounds.Stop & TDMAXn & \\ Char.SpectralAxis.SamplingPrecision. &&\\ SamplingPrecisionRefVal.FillFactor & SPEC\_FIL & N\\ Char.SpectralAxis.SamplingPrecision. &&&\\ \quad SampleExtent& SPEC\_BIN & N & \\ Char.SpectralAxis.Accuracy.BinSize & SPEC\_BIN& N& \\ Char.SpectralAxis.Accuracy.StatError & SPEC\_ERR&N\\ Char.SpectralAxis.Accuracy.SysError & SPEC\_SYE& N\\ Char.SpectralAxis.Resolution & SPEC\_RES & N\\ Char.SpectralAxis.ResPower & SPEC\_RP & N\\ {Char.SpectralAxis.Coverage.Support.Extent } & SPECWID & N\\ Char.TimeAxis.Unit & TIMEUNIT & C\\ Char.TimeAxis.Accuracy.BinSize & TIMEDEL & C \\ Char.TimeAxis.Calibration & TIME\_CAL & N\\ Char.TimeAxis.Coverage.Location.Value & TMID & N\\ Char.TimeAxis.Coverage.Bounds.Extent& TELAPSE & C\\ Char.TimeAxis.Coverage.Bounds.Start & TSTART & C\\ Char.TimeAxis.Coverage.Bounds.Stop & TSTOP & C\\ Char.TimeAxis.Coverage.Support.Extent& EXPOSURE & C\\ Char.TimeAxis.SamplingPrecision. &&\\ SamplingPrecisionRefVal.FillFactor & DTCOR & C\\ Char.TimeAxis.SamplingPrecision. &&&\\ \quad SampleExtent &TIMEDEL & S & \\ Char.SpatialAxis.ucd & SKY\_UCD& N & pos.eq \\ Char.SpatialAxis.Accuracy.StatErr & SKY\_ERR & N\\ Char.SpatialAxis.Accuracy.SysError &SKY\_SYE & N\\ Char.SpatialAxis.Calibration & SKY\_CAL&N\\ Char.SpatialAxis.Resolution & SKY\_RES & N\\ Char.SpatialAxis.Coverage.Location.Value & RA, DEC, etc.& C\\ Char.SpatialAxis.Coverage.Bounds.Extent & APERTURE & C\\ Char.SpatialAxis.Coverage.Support.Area& REGION & N & String value\\ Char.SpatialAxis.Coverage.Support.Extent & AREA &N & \\ Char.SpatialAxis.SamplingPrecision. &&\\ SamplingPrecisionRefVal.FillFactor &SKY\_FILL &N\\ Char.SpatialAxis.SamplingPrecision. &&&\\ \quad SampleExtent& TCDLTn &S &\\ \end{tabular} \end{minipage} } \colorbox{iblue}{ \begin{minipage}[l]{7.0in} \begin{tabular}{lllp{1.5in}} Data model field & FITS keyword& Source & Value if fixed \\ \hline & & \\ \hline \multicolumn{4}{c}{ Per-data-point values }\\ \hline \\ Data.FluxAxis.Value & TTYPEn & S & FLUX\\ UTYPE of above ... 
& TUTYPn & N & 'Spectrum.Data.FluxAxis.Value'\\\ Data.FluxAxis.Unit & TUNITn& S\\ Data.FluxAxis.ucd & TUCDn & N& (same as Char)\\ Data.FluxAxis.Accuracy.StatError & TTYPEn & N & ERR\\ Data.FluxAxis.Accuracy.StatErrLow & TTYPEn & C & ERR\_LO\\ Data.FluxAxis.Accuracy.StatErrHigh& TTYPEn& C & ERR\_HI\\ Data.FluxAxis.Accuracy.SysError & TTYPEn & C & SYS\_ERR \\ Data.FluxAxis.Quality & TTYPEn & C & QUALITY\\ Data.FluxAxis.QualityN & TTYPEn & C & QUALn\\ Data.SpectralAxis.Value & TTYPEn & S & WAVE,ENER,FREQ\\ UTYPE of above ... & TUTYPn & N & 'Spectrum.Data.SpectralAxis.Value'\\ Data.SpectralAxis.Unit & TUNITn & S & (same as Char)\\ Data.SpectralAxis.ucd & TUCDn &N & (same as Char) \\ Data.SpectralAxis.Accuracy.BinSize& TTYPEn & N & WAVE\_BIN,ENER\_BIN, FREQ\_BIN\\ Data.SpectralAxis.Accuracy.BinLow & TTYPEn & N & WAVE\_LO,ENER\_LO, FREQ\_LO \\ Data.SpectralAxis.Accuracy.BinHigh& TTYPEn & N & WAVE\_HI,ENER\_HI, FREQ\_HI \\ Data.SpectralAxis.Accuracy.StatError & TTYPEn& N & WAVE\_ERR,ENER\_ERR, FREQ\_ERR \\ Data.SpectralAxis.Accuracy.StatErrLow& TTYPEn& N & WAVE\_ELO,ENER\_ELO, FREQ\_ELO \\ Data.SpectralAxis.Accuracy.StatErrHigh & TTYPEn& N& WAVE\_EHI,ENER\_EHI, FREQ\_EHI \\ Data.SpectralAxis.Accuracy.SysError & TTYPEn & N & WAVE\_SYE,ENER\_SYE, FREQ\_SYE \\ Data.SpectralAxis.Resolution & TTYPEn & N & WAVE\_RES,ENER\_RES, FREQ\_RES \\ Data.TimeAxis.Value & TTYPEn & C & TIME\\ UTYPE of above ... & TUTYPn & N & 'Spectrum.Data.TimeAxis.Value' \\ Data.TimeAxis.Unit & TUNITn& S & (same as Char)\\ Data.TimeAxis.ucd & TUCDn & N & time \\ Data.TimeAxis.Accuracy.BinLow & TTYPEn & N & TIME\_LO \\ Data.TimeAxis.Accuracy.BinHigh & TTYPEn & N & TIME\_HI \\ Data.TimeAxis.Accuracy.BinSize & TIMEDEL & S & \\ Data.TimeAxis.Resolution& TTYPEn & N & TIME\_RES \\ Data.TimeAxis.Accuracy.StatError & TTYPEn & N & TIME\_ERR \\ Data.TimeAxis.Accuracy.StatErrLow & TTYPEn & N & TIME\_ELO \\ Data.TimeAxis.Accuracy.StatErrHigh& TTYPEn & N & TIME\_EHI \\ Data.TimeAxis.Accuracy.SysError & TTYPEn & N & TIME\_SYE \\ Data.BackgroundModel.Value & TTYPEn & N & BGFLUX \\ UTYPE of above ... & TUTYPn & N & 'Spectrum.Data.BackgroundModel.Value'\\ Data.BackgroundModel.Unit & TUNITn &S& (same as FluxAxis) \\ Data.BackgroundModel.ucd & TUCDn & N & (same as FluxAxis) \\ Data.BackgroundModel.Accuracy.StatError & TTYPEn & N & BG\_ERR \\ Data.BackgroundModel.Accuracy.StatErrLow & TTYPEn & N & BG\_ELO \\ Data.BackgroundModel.Accuracy.StatErrHigh& TTYPEn & N & BG\_EHI \\ Data.BackgroundModel.Accuracy.SysError & TTYPEn & N & BG\_SYE \\ Data.BackgroundModel.Quality & TTYPEn & N & BGQUAL\\ \\ \end{tabular} \end{minipage} } } \vskip 0.2in \clearpage \subsection{Expressing the spectrum spatial coordinates in FITS} FITS has a sophisticated mechanism for expressing celestial coordinates. However, it applies only to image axes or table columns. If you want to express a single celestial position in the header of a FITS binary table, the WCS conventions do not apply. You could add an extra pair of columns to the table giving the same position in each row, but that would be wasteful. Here we propose a local convention leveraging the existing WCS conventions: \begin{itemize} \item The keyword names for the coordinates are those used in the first four characters of the CTYPE values for the WCS paper: e.g. RA, DEC, GLON, GLAT. \item The coordinate system is identified by the keyword SKY\_UCD with values such as pos.eq, etc. \item The RADECSYS and EQUINOX keywords should be used when appropriate. \item Values are always in degrees. 
\item The VOCSID optional keyword is provided to allow VO coordinate system names used in Spectrum and STC to be propagated to FITS. Its value is not relevant in the FITS context.
\end{itemize}
\subsection{The SPECSYS keyword}
We note the allowed values of the SPECSYS keyword from Greisen et al. and the corresponding values from the VO STC:
\vskip 0.1in
\colorbox{iblue}{
\begin{tabular}{lll}
\hline
FITS & STC & Meaning\\
\hline
TOPOCENT &TOPOCENTER & Topocenter\\
GEOCENTR &GEOCENTER & Geocenter\\
BARYCENT &BARYCENTER & Solar System Barycenter\\
HELIOCEN &HELIOCENTER & Heliocenter\\
LSRK &LSRK & Kinematic local standard of rest\\
LSRD &LSRD & Dynamic local standard of rest\\
GALACTOC &GALACTIC\_CENTER & Galactic center\\
LOCALGRP &LOCAL\_GROUP\_CENTER & Local group barycenter\\
CMBDIPOL &- & Frame of the Cosmic Microwave Background dipole\\
SOURCE &- & Source rest frame \\
\hline
\end{tabular}
}
\clearpage
\subsection{An instance example}
We summarize this with a sample FITS extension header.
{ \footnotesize
\begin{verbatim}
XTENSION= 'BINTABLE'           / binary table extension
BITPIX  = 8                    / 8-bit bytes
NAXIS   = 2                    / 2-dimensional binary table
NAXIS1  = 57344                / width of table in bytes
NAXIS2  = 1                    / number of rows in table
PCOUNT  = 0                    / size of special data area
GCOUNT  = 1                    / one data group (required keyword)
TFIELDS = 7                    / number of fields in each row
EXTNAME = 'SPECTRUM '          / name of this binary table extension
VOCLASS = 'Spectrum V1.0'      / VO Data Model
DATALEN = 180                  / Segment size
VOSEGT  = 'Spectrum'           / Segment type
VOCSID  = 'MY-ICRS-TOPO'       / Coord sys ID
RADECSYS= 'FK5 '               / Not default - usually ICRS
EQUINOX = 2.0000000000000E+03  / default
TIMESYS = 'TT '                / Time system
MJDREF  = 0.0                  / [d] MJD zero point for times
SPECSYS = 'TOPOCENT'           / Wavelengths are as observed
VOPUB   = 'CfA Archive'        / VO Publisher authority
VOREF   = '2006ApJ...999...99X' / Bibcode for citation
VOPUBID = 'ivo://cfa.harvard.edu' / VO Publisher ID URI
VOVER   = '1.0'                / VO Curation version
CONTACT = 'Jonathan McDowell, CfA' /
EMAIL   = '[email protected]'    /
VORIGHTS= 'public'             /
VODATE  = '2004-08-30'         /
DS_IDPUB= 'ivo://cfa.harvard.edu/spec#10304' / Publisher DID for dataset
COMMENT DS_IDPUB usually the same as DS_IDENT?
OBJECT  = 'ARP 220 '           / Source name
OBJDESC = 'Merging galaxy Arp 220' / Source desc
SRCCLASS= 'Galaxy'             /
SPECTYPE= 'ULIRG'              /
REDSHIFT= 0.01812              / Emission redshift
RA_TARG = 233.73791700         / [deg] Observer's specified target RA
DEC_TARG= 23.50333300          / [deg] Observer's specified target Dec
TARGVAR = 0.2                  / 20 percent variability amplitude
TITLE   = 'Observations of Merging Galaxies' /
AUTHOR  = 'MMT Archive'        / VO Creator
COLLECT1= 'Misc Pointed Observations' / Collection
DS_IDENT= 'ivo://cfa.harvard.edu/spec#10304' / Publisher DID for dataset
CR_IDENT= 'ivo://cfa.harvard.edu/tdc#MMT4302-102' / Creator internal ID for dataset
DATE    = '2004-08-30T14:18:17' / Date and time of file creation
VERSION = 2                    / Reprocessed 2004 Aug
TELESCOP= 'MMT '               / Telescope [Not part of Spectrum DM]
INSTRUME= 'MMT/BCS '           / Instrument
FILTER  = 'G220 '              / Grating [Not part of Spectrum DM]
CRETYPE = 'Archival'           / Not an on-the-fly dataset
VOLOGO  = 'http://cfa.harvard.edu/vo/cfalogo.jpg' / VO Creator logo
CONTRIB1= 'Jonathan McDowell'  / Contributor
CONTRIB2= 'Wilhelm Herschel'   / Contributor
CONTRIB3= 'Harlow Shapley'     / Contributor
DSSOURCE= 'Pointed'            / Survey or pointed, etc
DER_SNR = 5.0                  / Estimate of signal-to-noise
DER_Z   = 0.01845              / Redshift measured in this spectrum
DER_ZERR= 0.00010              / Error in DER_Z
TIMESDIM= 'T'                  / Time SIDim
SPECSDIM= '10-10 L'            / Spectral SIDim
FLUXSDIM= '10+7 ML-1T-3'       / Flux SDim
SYS_ERR = 0.05                 / Fractional systematic error in flux
FLUX_CAL= 'Calibrated'         /
SPEC_ERR= 0.01                 / Stat error in spec coord, in SPEC units
SPEC_SYE= 0.001                / Frac sys error in spec coord
SPEC_CAL= 'Calibrated'
SPEC_RES= 5.0                  / [angstrom] Spectral resolution
SPECBAND= 'Optical'            / SED.Bandpass
SPEC_RP = 800.0                / Spectral resolving power
SPEC_VAL= 4100.0               / [angstrom] Characteristic spec coord
SPEC_BW = 1800.0               / [angstrom] Width of spectrum
SPEC_FIL= 1.0                  / No gaps between channels
TIME_CAL= 'Calibrated'         /
DATE-OBS= '2004-06-03T21:18:17' / Date and time of observation
EXPOSURE= 1500.015             / [s] Effective exposure time
TSTART  = 52984.301203         / [d] MJD
TSTOP   = 52984.318564         / [d] MJD
TMID    = 52984.309883         / [d] MJD mid exposure
SKY_CAL = 'Calibrated'         /
SKY_RES = 1.0                  / [arcsec] Spatial.Resolution
RA      = 233.73791            / [deg] Pointing position
DEC     = 23.50333             / [deg] Pointing position
APERTURE= 2.0                  / [arcsec] Aperture diameter/Slit width
TIME    = 52984.309883         / [d] MJD of midpoint
COMMENT ---------------------------
COMMENT WCS Paper 3 Keywords
1S4_1   = 'WAVE'               / Column name with spectral coord
1CTYP4  = 'WAVE-TAB'           / Spectral coord is WAVE
1S5_1   = 'WAVE'               / Column name with spectral coord
1CTYP5  = 'WAVE-TAB'           / Spectral coord is WAVE
1S6_1   = 'WAVE'               / Column name with spectral coord
1CTYP6  = 'WAVE-TAB'           / Spectral coord is WAVE
1S7_1   = 'WAVE'               / Column name with spectral coord
1CTYP7  = 'WAVE-TAB'           / Spectral coord is WAVE
COMMENT ---------------------------
TTYPE1  = 'WAVE'               / Wavelength
TFORM1  = '180E'
TUNIT1  = 'angstrom'
TUCD1   = 'em.wl'              /
TDMIN1  = 3195.0               /
TDMAX1  = 5005.0               /
TUTYP1  = 'Spectrum.Data.SpectralAxis.Value'
TTYPE2  = 'WAVE_LO'            /
TFORM2  = '180E'
TUNIT2  = 'angstrom'
TUTYP2  = 'Spectrum.Data.SpectralAxis.Accuracy.BinLow'
TTYPE3  = 'WAVE_HI'            /
TFORM3  = '180E'
TUNIT3  = 'angstrom'
TUTYP3  = 'Spectrum.Data.SpectralAxis.Accuracy.BinHigh'
TTYPE4  = 'FLUX'               /
TFORM4  = '180E'
TUNIT4  = 'erg cm**(-2) s**(-1) angstrom**(-1)'
TUTYP4  = 'Spectrum.Data.FluxAxis.Value'
TUCD4   = 'phot.flux.density;em.wl' / Type of Y axis: F-lambda
TTYPE5  = 'ERR_LO'             /
TFORM5  = '180E'
TUNIT5  = 'erg cm**(-2) s**(-1) angstrom**(-1)'
TUTYP5  = 'Spectrum.Data.FluxAxis.Accuracy.StatErrLow'
TTYPE6  = 'ERR_HI'             /
TFORM6  = '180E'
TUNIT6  = 'erg cm**(-2) s**(-1) angstrom**(-1)'
TUTYP6  = 'Spectrum.Data.FluxAxis.Accuracy.StatErrHigh'
TTYPE7  = 'QUALITY'            /
TFORM7  = '180I'
TUTYP7  = 'Spectrum.Data.FluxAxis.Quality'
\end{verbatim}
}
The data would look like
{\small
\begin{verbatim}
WAVE    WAVE_LO  WAVE_HI  FLUX      ERR_LO    ERR_HI   QUALITY
3200.0  3195.0   3205.0   1.48E-12  2.0E-14   2.0E-14  0
3210.0  3205.0   3215.0   1.52E-12  3.0E-14   3.0E-14  0
3220.0  3215.0   3225.0   0.38E-12  0.38E-12  0.0      0
3230.0  3225.0   3235.0   1.62E-12  3.0E-14   3.0E-14  0
...
5000.0  4995.0   5005.0   1.33E-11  3.0E-13   3.0E-13  1
\end{verbatim}
}
\subsection*{ Status of this document }
This document has been produced by the Data Model Working Group. It has been reviewed by IVOA Members and other interested parties, and has been endorsed by the IVOA Executive Committee as an IVOA Recommendation. It is a stable document and may be used as reference material or cited as a normative reference from another document. IVOA's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability inside the Astronomical Community. This document has been developed with support from the National Science Foundation's\footnote{\url{http://www.nsf.gov/}} Information Technology Research Program under Cooperative Agreement AST0122449 with The Johns Hopkins University, from the UK Particle Physics and Astronomy Research Council (PPARC\footnote{\url{http://www.pparc.ac.uk}}) and from the European Commission's Sixth Framework Program\footnote{\url{http://fp6.cordis.lu/fp6/home.cfm}} via the Optical Infrared Coordination Network (OPTICON\footnote{\url{http://www.astro-opticon.org}}). The {\bf Virtual Observatory (VO)} is a general term for a collection of federated resources that can be used to conduct astronomical research, education, and outreach. The {\bf International Virtual Observatory Alliance (IVOA)} (\url{http://www.ivoa.net}) is a global collaboration of separately funded projects to develop standards and infrastructure that enable VO applications.
\clearpage \tableofcontents \newpage \addcontentsline{toc}{part}{Part 1 - Spectrum Data Model} {\Large \vfill \htpart{Part 1: Spectrum Data Model} \vfill } \newpage
\section{Introduction and Motivation}
Spectra are stored in many different ways within the astronomical community. In this document we present a proposed abstraction for spectral data, which can be used for describing spectrum datasets and also reused by other standards. We also provide serializations for the spectrum dataset use case in VOTABLE, FITS, and XML, for use as a standard method of spectral data interchange, and define mandatory, recommended and optional fields for that use case. We distinguish in several places between the implementation proposed in this document, referred to as Version 1, and capabilities proposed for possible later implementation. This version fixes some issues arising from implementation feedback. Most fixes are minor and do not represent any fundamental changes with respect to the previous Recommendation. Modifications to UCDs (Table 1) can have an impact on implementations and/or validators.
\subsection{Change Log}
\begin{verbatim}
2011 Oct 20 V1.1 Mod 8
 - Section 1 (Motivation): added scope of v1.1 changes.
   Section 1.2 (Architecture): dropped references to SpectrumDM1.2.
   References update. (As suggested by Christophe Arviset).
2011 Oct 17 V1.1 Mod 7
 - UCD updates suggested by S. Derriere.
2011 Mar 19 V1.1 Mod 6
 - Architecture section
2011 Mar 3 V1.1 (1.04) Mod 5
 - Typo corrections
2011 Jan 19 V1.04 Mod 4
 - Further minor updates for 2009 Dec B. Rino comments.
2010 Dec 19 V1.04 Mod 3
 - Further text clarifications to the previous changes. Note that
   there have been no changes to the XML schema.
2010 Apr 29 V1.04 Mod 2
   Updated to account for comments by B. Rino.
 - added extra defaults for missing utypes in FITS
   (CoordSys.SpaceFrame.UCD, CoordSys.TimeFrame.UCD,
   CoordSys.SpectralFrame.UCD).
 - added FITS SKY_REF keyword for missing utype
   CoordSys.SpaceFrame.RefPos
 - clarified meaning of '*' shorthand in listings of UCDs.
2010 Apr 21 V1.04
 - Made clear that 'mandatory' is for the spectrum document use case
   and that other use cases can redefine which items are mandatory.
2009 Apr 5 V1.03 Mod 1
 - Fixed typo for ESAC SI dimensional codes and corrected stat.error
   UCDs (stat.error must be primary).
2007 Oct 29 V1.03
 - SpectrumDM 1.03 released as a Recommendation
2007 Oct 28 V1.02 Rev 2
 - Corrected HTML version; corrected author list
2007 Aug 26 V1.01 RC3 Rev 2 = V1.02 RC3 Rev 2
 - Minor edits and corrections to examples; clarification of IVOA
   identifiers; added NORMALIZED calibration option.
   (Date changed to Sep 13 for IVOA Doc submission only; version
   number changed at request of IVOA Doc Coord)
2007 Jul 10 V1.01 RC3 Rev 1
 - Minor edits following RFC
2007 May 15 V1.01 RC2 Rev 15
 - Proposed recommendation.
2007 May 1 V1.01 RC2 Rev 14
 - Trivial cover formatting
2007 Apr 30 V1.01 RC2 Rev 13
 - Added Support.Extent as suggested by A. Micol
 - Improved text in several places
2007 Apr 26 V1.01 RC2 Rev 12
 - UCD time.expo updated to time.duration/stop/start;obs.exposure
 - Updated VOTABLE examples
2007 Apr 25 V1.01 RC2 Rev 11
 - Included the correct file; Rev 10 was bogus
2007 Apr 17 V1.01 RC2 Rev 10
 - Fixed errors in XSD and in text
 - Revised Characterization text
 - Added REST_Z FITS keyword for CoordSys.SpectralFrame.Redshift
2007 Apr 12 V1.01 RC2 Rev 9
 - Incorporate D Tody comments
 - DataID.Title mandatory
 - Changes to recommended case of Utype fields e.g. Redshift not redshift.
 - Utypes involving stat.error changed to put the stat.error first.
2007 Apr 4 V1.01 RC2 Rev 8
 - FITS keyword TMID added; TDMINn/TDMAXn
2007 Apr 1 V1.01 RC2 Rev 7
 - Modifications for compatibility with Char working draft:
   Moved Calibration utype from CharAxis.Accuracy to CharAxis
   Added SamplingPrecision.SampleExtent and
   SamplingPrecision.SamplingPrecisionRefVal.FillFactor
2007 Feb 12 V1.01 RC2 Rev 5
 - Curation.Reference can have multiple instances
2007 Jan 17 V1.01 RC2 Rev 4
 - Changed FITS keyword SIZE to DATALEN (D Tody request)
 - Added text describing use of non standard units.
 - Reformat units in Tables 2,3 to OGIP convention
2006 Dec 11 V1.01 RC2 Rev 3
 - Fixed more typos in XSD and XML example
2006 Dec 6 V1.01 RC2 Rev 2
 - Upgraded UCDs to version 1.21
 - Added SpectralAxis.ResPower and SPEC_RP keyword for resolving
   power; added element to XSD.
 - XSD changed segmentType definition to put Data element at end
   of sequence.
 - XSD corrected type errors in a few cases in Curation type.
 - XSD added missing elements CreatorDID, Bandpass to DataID.
 - XML instance example corrected errors in Characterization axes.
 - FITS keywords changed: CREATOR to AUTHOR; DER_ERR to DER_ZERR
 - FITS added more TUTYPn keyword examples.
 - FITS added comment on VOCSID
 - Corrected mistakes in FITS and VOT examples
 - Clarified role of Aperture
 - Further clarified CreatorDID, PublisherDID, DatasetID distinction.
 - Clarifications and corrections in text
2006 Oct 22 V1.0 RC1 (since V0.98d Rev 4)
 - Added table numbers
 - Changed some defaults in Table 1
 - Added flux UCDs for transmission curves, polarized flux
 - Amplified discussion of RedshiftFrame
 - Added Spectral location and bounds
 - Reorganized order of some sections
 - Further rationalization of FITS keywords, rewrote FITS section
 - Added TUCDn and TUTYPn
\end{verbatim}
\subsection{IVOA Architecture Context} \begin{figure}[h] \colorbox{iblue}{ \psfig{file=sd.eps,width=6.0in} } \caption{Spectrum DM in IVOA architecture} \end{figure} Data Models in the VO aim to define the common elements of astronomical data and metadata collections and to provide a framework for describing their relationships, so that these become interoperable in a transparent manner. The Spectrum Data Model (SpectrumDM) standard presents a data model describing the structure of spectrophotometric datasets with spectral and temporal coordinates and associated metadata. This data model may be used to represent spectra, time series data, segments of SEDs (Spectral Energy Distributions) and other spectral or temporal associations. SpectrumDM is used with the associated Data Access Protocol, SSAP (Simple Spectral Access Protocol). As with most of the VO Data Models, SpectrumDM makes use of STC, Utypes, Units and UCDs. Furthermore, SpectrumDM makes reference to the CharDM (Characterization Data Model). It can be serialized with a VOTable, among other formats. \clearpage \section{Requirements} We need to represent a single 1-dimensional spectrum in sufficient detail to understand the differences between two spectra of the same object and between two spectra of different objects. We need to represent time series photometry, with many photometry points of the same object at different times. Finally, we need to represent associations of spectra, such as the segments of an echelle spectrum, or spectral energy distributions (SED) which consist of multiple spectra and photometry points, usually for a single object. The 'SED' model will be described in a separate document which builds on the structures described here. \section{Spectral Data Model summary} \subsection{Model Components} Our model for a spectrum is a set of one or more data points (photometry), each of which shares the same contextual metadata (aperture, position, etc.). Specifically, a spectrum will have arrays of the following values: \vskip 0.1in \colorbox{ipink}{ \begin{minipage}{0.9\textwidth} \begin{itemize} \item Flux value, with upper and lower statistical (uncorrelated) errors \item Spectral coordinate (e.g. wavelength), central and bin min and max \item (Optionally) Time coordinate, convertible to MJD UTC \item Optional Quality mask \item Optional spectral resolution array \end{itemize} \end{minipage} } \vskip 0.1in and will have associated metadata including, for example, \vskip 0.1in \colorbox{ipink}{ \begin{minipage}{0.9\textwidth} \begin{itemize} \item Data collection and Dataset ID \item Exposure time in seconds \item Position of aperture center, given as ICRS degrees (similar to J2000) \item Aperture size in degrees \item Systematic (correlated) error \item Bibcode \end{itemize} \end{minipage} } \vskip 0.1in In later sections we elaborate these concepts in detail, including some complications that we explicitly do not attempt to handle in this version.
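Purely as an illustration of these components (the class names and layout below are our own invention for exposition, not a normative binding), the core of the model might be sketched as follows; the comments give the corresponding data model fields:
{\small
\begin{verbatim}
# Non-normative sketch of the model components listed above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SpectrumPoint:
    spectral: float                    # Data.SpectralAxis.Value
    flux: float                        # Data.FluxAxis.Value
    bin_low: Optional[float] = None    # Data.SpectralAxis.Accuracy.BinLow
    bin_high: Optional[float] = None   # Data.SpectralAxis.Accuracy.BinHigh
    err_low: Optional[float] = None    # Data.FluxAxis.Accuracy.StatErrLow
    err_high: Optional[float] = None   # Data.FluxAxis.Accuracy.StatErrHigh
    time: Optional[float] = None       # Data.TimeAxis.Value (time series)
    quality: int = 0                   # Data.FluxAxis.Quality, 0 = good

@dataclass
class SpectrumSketch:
    dataset_id: str                    # DataID.DatasetID
    target_name: str                   # Target.Name
    aperture_center: tuple             # aperture center, ICRS degrees
    aperture_size_deg: float           # aperture size in degrees
    exposure_s: float                  # exposure time in seconds
    sys_error: float                   # correlated fractional error
    bibcode: Optional[str] = None      # Curation.Reference
    points: List[SpectrumPoint] = field(default_factory=list)
\end{verbatim}
}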
\begin{figure}[h] \colorbox{iblue}{ \psfig{file=specumln3.eps,width=6.0in} } \colorbox{iblue}{ \begin{minipage}{6.0in} Figure 1: UML class diagram for the spectral data model. The Characterization, Curation, DataID and Derived classes are shown in detail below in diagram form and with further text description in Section 5. \end{minipage} } \end{figure} \subsection{Use Cases and Required Fields} The main use case considered by this standard is the representation of a dataset (document) actually containing a spectrum. For this use case, the data model fields and possible values are listed. We distinguish between optional and required fields in the text, as well as via a column in the tables which has values of MAN (Mandatory, i.e. required), REC (Recommended) and OPT (Optional). Where appropriate we list those values of the physical units which interoperable implementations are required to recognize. We specify fields that are MANDATORY (MUST), RECOMMENDED (SHOULD), or OPTIONAL (MAY). MANDATORY fields are in bold. MANDATORY means that a document {\bf implementing the Spectrum use case}~must provide a value; however, the value may be UNKNOWN (the value exists but is not known) or N/A (not applicable: for example, RA and DEC for a moving object or absolute time for a theory simulation). RECOMMENDED means that a data provider should try to fill the relevant fields if possible, but the document is still compliant if they are omitted. Particular serializations (FITS, VOTABLE, etc) may amend these requirements by specifying default values for the serialization. The same data model can be reused by other IVOA standards in contexts where a spectrum is merely being {\bf described}: notably, in the Simple Spectral Access Protocol where we are describing a {\bf query about a spectrum}. {\bf The list of mandatory, etc., fields does not apply to these other use cases.} The same data model fields and concepts, or a defined subset of them, should be used, but the list of mandatory, recommended and optional fields will be different and should be identified in the standards defining those use cases. The minimal required content for the spectrum dataset use case is: \begin{itemize} \item Spectrum model version \item Target name and dataset title DataID.Title (which may be the same as each other) \item Characterization Coverage.Location and Coverage.Bounds (Extent or Start/Stop range) descriptions of the location and extent of the data in the RA, Dec, time and spectral domains \item The Curation.Publisher field \item The descriptions of the spectral coordinate and flux fields including UCD and units (Spectrum.Char.SpectralAxis, Spectrum.Char.FluxAxis) \item For each point: the values of the spectral coordinate and flux. \end{itemize} Note that each Spectrum instance has only one spectral coordinate axis. If you want to provide *both* flux-vs-wavelength and flux-vs-frequency for a single dataset, you must (in this version of the model) make two separate instances (VO resources). 
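The minimal content list above lends itself to a mechanical check. The sketch below is our own illustration, not part of the standard: the dictionary of UTYPE strings and the exact subset of mandatory fields chosen are assumptions made for the example.
{\small
\begin{verbatim}
# Non-normative sketch: checking a representative subset of the
# mandatory fields for the spectrum dataset use case. `fields` maps
# UTYPE strings to values, however read (FITS, VOTable or XML).
MANDATORY = [
    'Spectrum.Target.Name',
    'Spectrum.DataID.Title',
    'Spectrum.Curation.Publisher',
    'Spectrum.Char.SpatialAxis.Coverage.Location.Value',
    'Spectrum.Char.SpectralAxis.Coverage.Location.Value',
    'Spectrum.Char.SpectralAxis.Coverage.Bounds.Extent',
    'Spectrum.Data.SpectralAxis.Value',
    'Spectrum.Data.FluxAxis.Value',
]

def missing_mandatory(fields):
    """Return the mandatory UTYPEs absent from a candidate instance.
    Note that a present field may still hold UNKNOWN or N/A."""
    return [utype for utype in MANDATORY if utype not in fields]
\end{verbatim}
}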
\begin{figure}[h] \colorbox{iblue}{ \psfig{file=ndataf1.eps,width=6.0in} } { \colorbox{iblue}{ \begin{minipage}{6.0in} Figure 2: Diagram for Data object \end{minipage} } } \colorbox{iblue}{ \psfig{file=ncharf2.eps,width=6.0in} } { \colorbox{iblue}{ \begin{minipage}{6.0in} Figure 3: Diagram for Characterization object \end{minipage} } } \end{figure} \begin{figure}[h] \colorbox{iblue}{ \psfig{file=specstc.eps,width=6.0in} } { \colorbox{iblue}{ \begin{minipage}{6.0in} Figure 4: Diagram for CoordSys object \end{minipage} } } \end{figure} \begin{figure}[h] \colorbox{iblue}{ \psfig{file=specu3f.eps,width=6.0in} } { \colorbox{iblue}{ \begin{minipage}{6.0in} Figure 5: Diagram for remaining metadata: Curation, DataID, Derived, Target objects \end{minipage} } } \end{figure} \clearpage \subsection{Units} We adopt the WCS/OGIP convention for units: Document OGIP 93-001 \\ (http://legacy.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip\_93\_001/ogip\_93\_001.html). Briefly, units are given in the form \begin{verbatim} 10**(-14) erg/cm**2/s/Hz, 10**3 Jy Hz \end{verbatim} i.e. with exponents denoted by **, division by /, multiplication by a space. This format is mostly consistent with the AAS standards for online tables in journals (http://grumpy.as.arizona.edu/$\sim$gschwarz/unitstandards.html) except for the use of space rather than "." for multiplication and the fact that we do not require the use of SI units. SI prefixes for units are to be recognized; for instance, the listing of "m" as a known unit for wavelength implies that "cm", "nm", and "um" (with "u" the OGIP convention for rendering "micro") are also acceptable. Until IVOA generic unit conversion software is mature and widely deployed, it is helpful to interoperable applications to include a representation of the units in "base SI form", including only the base units kg, m, s (and possibly A, sr) with a numeric prefix. Pedro Osuna and Jesus Salgado have proposed a representation in the spirit of dimensional analysis, using the symbols M, L, T to signify kg, m, s respectively and omitting the ** for powers, so that \begin{verbatim} 10**3 Jy Hz \end{verbatim} which is equivalent to \begin{verbatim} 10**-23 kg s**-3 \end{verbatim} is written compactly as \begin{verbatim} 1.E-23 MT-3 \end{verbatim} This alternate representation is supported for the main model fields (time, spectral coordinate and flux) only. Although the spectral model is flexible enough to permit different units for each field, as a matter of style we strongly recommend that whenever possible the same units should be used for compatible fields (e.g. flux and error on flux). \subsection{UCDs} UCDs or Unified Content Descriptors are the IVOA's standardized vocabulary for astronomical concepts. In this document we use UCDs as field attributes (for example, element attributes in XML) to distinguish alternate physics within the same data model roles - for example, to distinguish frequency versus wavelength on the spectral coordinate `X-axis'. In the list of UCDs below, the notation "em.*" is used to indicate "either em.wl, em.freq or em.energy"; exactly one of these must be used for any one instantiation of the model (so you can't put a literal em.* in a UCD field for a spectrum document; you should put whichever of em.wl etc. is appropriate for the actual data). The current list of valid UCDs is http://cdsweb.u-strasbg.fr/UCD/ucd1p-words.txt with syntax defined in the UCD recommendation http://www.ivoa.net/Documents/latest/UCD.html. UCDs should be case insensitive.
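Returning to the unit conventions above: OGIP unit strings can be parsed and reduced to SI base form mechanically. The sketch below uses astropy.units, an assumed and non-normative library choice, and is illustrative only:
{\small
\begin{verbatim}
# Non-normative sketch: reducing OGIP-style units to SI base form.
from astropy import units as u

# The F-lambda flux unit used throughout the examples in this document:
f_lam = u.Unit('erg cm**(-2) s**(-1) angstrom**(-1)', format='ogip')
print(f_lam.decompose())    # 1e+07 kg / (m s3), i.e. 10**7 ML-1T-3

# The nu-F-nu example from the text:
print((1e3 * u.Jy * u.Hz).decompose())    # 1e-23 kg / s3, i.e. 1.E-23 MT-3
\end{verbatim}
}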
\subsection{UTYPEs and model reuse} UTYPE was a concept introduced in VOTABLE to label fields of a hierarchical data model. The word is now used generally to mean a standard identifier for a data model field. UTYPEs are also case-insensitive and are of the form a.b.c.d where the dots indicate a 'has-a' hierarchy, with the leftmost element being the containing model and the rightmost element being the lowest level element referred to. This is quite close to a simple XPATH in an XML schema, but we chose dot rather than slash to emphasize that we are only specifying the element type, not the exact position in an instance (so no sophisticated query syntax). We use the terms 'data model field' and 'UTYPE' interchangeably. In the main text of this standard, the UTYPEs all begin with the prefix ``Spectrum.'' to clarify their membership in the Spectrum data model. This is intended to reflect the main use case of a Spectrum dataset. Other IVOA standards (e.g. SSA) may use a different prefix instead of ``Spectrum.'', as long as they include a mechanism for unambiguously identifying the data model in use, e.g. by the value of the DataModel utype. This represents data model inheritance; we say that SSA inherits the Spectrum model, so that ``SSA.'' utypes overlap with the Spectrum ones. In the main listing of UTYPEs in Table 1, the Spectrum prefix is omitted. Thus, when using Table 1 in conjunction with the Spectrum model (rather than reusing it in another model), UTYPEs such as 'Curation.Contact.Email' should be instantiated as 'Spectrum.Curation.Contact.Email'. \subsection{Packaging model} The simple Packaging model for SSA describes the format of the associated dataset. Allowed values for the format are briefly listed here; detailed serializations for formats 4 to 6 are not specified. The metadata (format 7) is not returned by the standard SSA call; it instead uses a new getCapabilities option. See the SSA protocol definition document for details of this. These packaging values will be part of the SSA protocol response, and are implicit in the individual serializations. We only discuss formats 1 to 3 in this document. \colorbox{ipink}{ \begin{minipage}{0.9\textwidth} \begin{itemize} \item (1) FITS (standard BINTABLE for Spectrum, defined in this document) \item (2) VOTABLE \item (3) XML (native XML for web services and XML tools) \item (4) text (simple text table with columns of data and no markup) \item (5) text/html \item (6) graphics; a JPG, GIF etc. representation of the data \item (7) metadata; only the XML metadata. \end{itemize} \end{minipage} } \subsection{Data Model Fields} The DM fields (or UTYPEs) for the Spectrum DM are tabulated on the following pages. The field names are to be used as the UTYPE values in VOTABLE serializations and in the TUTYPn keys in the FITS serialization. The fields are explained in more detail in the following sections. They should have the prefix "Spectrum." when used in the spectrum dataset use case. In Table 1, as well as the field names, a key to the FITS serialization is given. Where this key is blank, there is no FITS support for the field and it must take its default value. Several of the Char fields are required to be the same as the corresponding Data fields, and this is indicated by ``(as Data)''. Some fields describe properties of a FITS table column and use the values of keywords like TTYPEn, TUCDn for that column. The appropriate value of TTYPEn is indicated.
In the case of columns related to the spectral coordinates, the appropriate TTYPEn value depends on the type of coordinate (WAVE for UCD em.wl, ENER for em.energy, FREQ for em.freq). This is indicated as e.g. "TTYPEn='WAVE\_ELO',etc", meaning that the column name should be WAVE\_ELO, FREQ\_ELO, ENER\_ELO as appropriate. The FITS serialization is described in more detail in a later section. \input{tab.tex} \section{Spectral data model Measurement objects} \subsection{Spectral coordinate} Astronomers use a number of different spectral coordinates to label the electromagnetic spectrum. The cases enumerated by Greisen et al. (2006) are listed below with their UCDs. {\bf MANDATORY: Exactly one Spectrum.Char.SpectralAxis field should be present, with units and one of the UCD values listed below.} We distinguish between the VO data model field name (which might be used for VOTABLE UTYPE), the FITS WCS name (provided for comparison only), and the UCD1+ names. Note 1: For this version, only the first four entries, Wavelength, Frequency, Energy, and spectral channel, should be used for interoperable transmission of data - implementations are not required to understand (convert) the other UCD values. Note 2: For the velocity cases, the UCD uses a spect.dopplerVeloc tree rather than a {\it src.veloc } tree, because the velocity here is really a labelling of a spectral coordinate, and the link to the physical radial velocity of the different emission sources contributing to the spectrum is rather indirect. Note 3: `em.wl;obs.atmos' (AWAV) is provided for air wavelengths. The basic spectral choices em.wl, em.freq, em.energy are understood to be vacuum values. \begin{flushleft} \colorbox{iblue}{\small \begin{minipage}[l]{6.8in} \begin{tabular}{lllll} \hline \multicolumn{4}{c}{Table 2: Spectral coordinate options}\\ \hline \hline Field & FITS WCS & UCD1+ & Meaning & Units \\ \hline PREFERRED CHOICES \\ SpectralAxis.ucd &{ \bf WAVE} & em.wl &Wavelength &{\aa}ngstrom, m \\ SpectralAxis.ucd &{ \bf FREQ} & em.freq &Frequency of photon &Hz \\ SpectralAxis.ucd &{ \bf ENER} & em.energy &Photon energy &erg, eV, J \\ SpectralAxis.ucd &{ \bf -} & instr.pixel;em.wl &Instrumental spectral bin &chan \\ \\ ALTERNATIVE CHOICES \\ \\ SpectralAxis.ucd &{ \bf WAVN} & em.wavenumber &Wavenumber &m**(-1) \\ SpectralAxis.ucd &{ \bf AWAV} & em.wl;obs.atmos &Air wavelength &{\aa}ngstrom, m \\ SpectralAxis.ucd &{ \bf WAVE-LOG} & em.wl &Log wavelength \\ SpectralAxis.ucd &{ \bf FREQ-LOG} & em.freq &Log frequency of photon \\ SpectralAxis.ucd &{ \bf ENER-LOG} & em.energy &Log photon energy \\ SpectralAxis.ucd &{ \bf VELO} &spect.dopplerVeloc &Apparent radial velocity &m/s \\ SpectralAxis.ucd &{ \bf VRAD} &spect.dopplerVeloc.radio &Radio velocity &m/s \\ SpectralAxis.ucd &{ \bf VOPT} &spect.dopplerVeloc.opt &Optical velocity &m/s \\ SpectralAxis.ucd &{ \bf BETA} &spect.dopplerVeloc &Velocity (c=1) &- \\ \end{tabular} \end{minipage} } \end{flushleft} \subsection{Flux (Spectral Intensity) Object} Two instances of the Flux object are supported: Flux and BackgroundModel. The Flux may be either the background-subtracted net flux or the total flux (the source+background), in the latter case hopefully with the BackgroundModel (see below). Net and total flux are distinguished by the `src.net' UCD adjective. For each of these cases, there are many slightly different physical quantities covered by the general concept of Flux; we distinguish them by their UCD. 
The table contains a list of flux quantities that applications should expect to read and handle. If you create a Spectrum instance with a flux quantity or flux unit not in the list below, you should expect that applications will be able to propagate it and recognize it, but not be able to merge it or compare it with other Spectrum instances. (For example, an application trying to measure line wavelengths shouldn't care too much that it doesn't understand what the flux units are). Note in particular the distinction between the unit {\bf count } (an instrumental value) and the unit {\bf photon } (used in the photon number flux, i.e. the number of photons incident; photon number flux = energy flux divided by photon energy). Note: The concept of the "nu L-nu" or "lambda L-lambda" luminosity flux, or equivalently the luminosity per logarithmic energy interval L(log nu), is a distinct concept in the world of spectral energy distributions - and it's a different concept from the bolometric luminosity, which has the same units. The UCD board has not yet approved a UCD expressing this concept; we have to use phys.luminosity and infer the concept from the units. My solution for brightness temperature is also rather questionable. Note: we propose the UCD spect.continuum to represent continuum flux. \begin{flushleft} \colorbox{iblue}{\small \begin{minipage}[l]{7.0in} \begin{tabular}{lp{1.5in}p{1.6in}p{2.2in}} \multicolumn{3}{c}{Table 3: Flux Value options} \\ \hline Field &UCD1+ &Meaning &Unit (OGIP style) \\ \hline FluxAxis.ucd & phot.flux.density;em.wl & Flux density per unit wave. & erg cm**(-2) s**(-1) angstrom**(-1),\\ &&& W m**(-2) m**(-1),\\ &&& keV cm**(-2) s**(-1) angstrom**(-1)\\ FluxAxis.ucd & phot.flux.density;em.freq & Flux density per unit freq. & erg cm**(-2) s**(-1) Hz**(-1),\\ &&& Jy, W m**(-2) Hz**(-1)\\ FluxAxis.ucd & phot.flux.density;em.energy & Flux density per energy interval & keV cm**(-2) s**(-1) kev**(-1)\\ FluxAxis.ucd & phot.flux.density;em.energy; meta.number & Photons per unit area, time, energy & photon cm**(-2) s**(-1) keV**(-1)\\ FluxAxis.ucd & phot.flux.density;em.wl & Flux density per log wave interval ($\nu F(\nu)$) &Jy Hz \\ FluxAxis.ucd & phot.flux.density.sb;em.wl & Surface brightness per unit wavelength & erg cm**(-2) s**(-1) angstrom**(-1) arcsec**(-2)\\ FluxAxis.ucd & phot.flux.density.sb;em.freq & Surface brightness per unit frequency & Jy sr**(-1)\\ FluxAxis.ucd & phot.count & Counts in spectral channel &count \\ FluxAxis.ucd & arith.rate;phot.count & Count rate in spectral channel &count/s \\ FluxAxis.ucd & arith.ratio;phot.flux.density & Flux ratio of two spectra &- \\ FluxAxis.ucd & phys.luminosity;em.wl & Luminosity per unit wave & erg s**(-1) angstrom**(-1), W/m \\ FluxAxis.ucd & phys.luminosity;em.freq & Luminosity per unit freq & erg s**(-1) Hz**(-1), W/Hz\\ FluxAxis.ucd & phys.luminosity;em.energy & Luminosity per unit energy & erg s**(-1) keV**(-1)\\ FluxAxis.ucd & phys.luminosity;em.energy& Luminosity per log frequency & erg s**(-1), W\\ FluxAxis.ucd & phys.energy.density & Radiation energy density per unit volume, per unit wave etc. & erg cm**(-3), J m**(-3)\\ FluxAxis.ucd & phot.fluence;em.wl & Photon number flux per unit wave. 
& photon cm**(-2) s**(-1) angstrom**(-1)\\ FluxAxis.ucd & phot.flux.density;em.wl; phys.polarization &Polarized flux per unit wavelength & erg cm**(-2) s**(-1) angstrom**(-1) \\ FluxAxis.ucd & phys.polarization &Polarized fraction vs spectral coord & (dimensionless)\\ FluxAxis.ucd & phys.luminosity; phys.angArea;em.wl & Flux per unit solid angle (at source) & erg cm**(-2) s**(-1) sr**(-1) angstrom**(-1)\\ FluxAxis.ucd & phot.antennaTemp & Antenna temperature &K \\ FluxAxis.ucd & {phot.flux.density; phys.temperature} & Brightness temperature &K \\ FluxAxis.ucd & phot.mag & Magnitude in defined band &mag \\ FluxAxis.ucd & phot.mag & AB (spectrophotometric) magnitude &mag \\ FluxAxis.ucd & phot.flux.density;instr.beam & Flux per resolution element (e.g. Jy/beam) &Jy/beam \\ FluxAxis.ucd & phot.mag.sb & Surface brightness in magnitudes &mag arcsec**(-2)\\ FluxAxis.ucd & phys.transmission & Filter transmission, 0.0 to 1.0 & (dimensionless)\\ FluxAxis.ucd & phys.area;phys.transmission & Effective area & cm**2\\ FluxAxis.ucd & phot.flux.density;em.wl; spect.continuum & Continuum only & erg cm**(-2) s**(-1) angstrom**(-1) arcsec**(-2)\\ \end{tabular} \end{minipage} } \end{flushleft} \subsection{BackgroundModel Object} We optionally allow a BackgroundModel value for each Flux value. We define NetFlux = TotalFlux - BackgroundModel. The name BackgroundModel, rather than Background, reminds us that it is an estimate: often, the BackgroundModel will be generated by taking a flux measurement at another location and rescaling it for any difference in exposure time or extraction aperture. The BackgroundModel array is required to have the same UCD and units as the Flux array. It represents a model for the expected flux values if the Target had zero flux. {\bf OPTIONAL: There may be at most one BackgroundModel.Value field present. It must have the same UCD as the Flux. } \subsection{Time coordinate} For data with a time-series component, whether regularly sampled or sparse photometry points, the time coordinate is given by an elapsed time in some physical units (e.g. seconds or days) relative to a reference time. This reference time is given in MJD as the field Spectrum.Char.TimeAxis.Coverage.Location, as described in the Characterization section. For a simple spectrum with no time-resolved data, this is the time of the observation (ideally the midpoint). For time-resolved data, the time coordinate Spectrum.Data.TimeAxis.Value refers to the midpoint of the sample interval. See the Space-Time Coordinates document for details of time coordinate complexity. The time unit is specified by a string, and the only valid values for this unit are 's' (seconds) and 'd' (days). \subsection{Position coordinate} In general we may consider position coordinates as part of the measurement (and possibly varying from point to point), but this capability is not included in the current document. The (celestial) position of the aperture for the spectrum is given in the spatial Spectrum.Char.SpatialAxis.Coverage.Location field. The Spectrum.Char.SpatialAxis.Coverage.Location.Value field is in the coordinates of CoordSys.SpaceFrame. The default is ICRS RA,Dec in decimal degrees. \subsection{Accuracy Fields} We include accuracy models for both the coordinates (spectral, spatial and temporal) and the fluxes. The accuracy can appear in two places: in the global characterization, where it represents typical accuracy for the dataset, and in the data points themselves, providing a way to provide per-data-point errors. 
All the Accuracy fields are optional, both in the per-data-point fields and in the Characterization instances; the per-data fields default to the values in Characterization. \subsubsection{Coordinate bins} We express the bandpass for each spectral bin as a low and high value for the spectral coordinate, or as a width. The same is done for photometry points, which amounts to approximating a filter by a rectangular bandpass. Time bins are also given as low and high values or as a width. Note that the width values are suitable for Spectrum.Char (the global accuracy) while the bin low/high values only have meaning for Spectrum.Data (the per-data-point values). Exactly one of the two forms, either BinSize or the pair BinLow and BinHigh, must be present (possibly as a header parameter implying a constant value for each flux point). If absent, the bin limits are assumed to be halfway between the coordinate values and bounded by the range given in Char.*.Coverage.Extent. \subsubsection{Uncertainties} In addition to the binning, we allow the model to express uncertainties (which may be larger than the bin width), both statistical and systematic. We allow one- or two-sided statistical errors but only one-sided systematic errors. You can specify StatErr, or StatErrHigh/StatErrLow, but not both. Statistical errors have the same units as the data, while systematic errors are dimensionless fractions (e.g. a 5 percent systematic error is expressed as 0.05). For position we have a single statistical error; a two-sided error doesn't make sense for a 2D coordinate. Eventually we may want a full error ellipse, but this is too complicated for the present model. We also use a very simple error model for the fluxes: we include plus and minus flux errors, and a quality flag. The errors are understood as 1 sigma Gaussian errors which are uncorrelated for different points in the spectrum. If the data provider has only upper limit information, it should be represented by setting the flux value and the lower error value equal to the limit, and the upper error value equal to zero (e.g. 5 (+0,-5)). In general, applications may choose to render measurements as upper limits if the flux value is less than some multiple (e.g. 3) of the lower error. We also allow a systematic error value, assumed constant across a given spectrum and fully correlated (so that, e.g., it does not enter into estimating spectral slopes). {\bf CLARIFICATION: } the two-sided errors StatErrLow and StatErrHigh are the plus/minus ERRORS, not the (value+error, value-error). In other words, if Value = 10 and there is a symmetric uncertainty of 3, then StatErrLow and StatErrHigh are both +3.0, and NOT 7.0, 13.0. This is different from the sampling description BinLow and BinHigh, which give the VALUES at the low and high end of the bin. Thus if the central wavelength of the bin is 4200.0, and the bin size is 10, then the BinLow and BinHigh values are 4195.0, 4205.0 and NOT 10.0, 10.0. Note that because of this, 0.0 is NOT an acceptable default for BinLow and BinHigh, while it IS acceptable (albeit unlikely) for StatErrLow and StatErrHigh. The StatErrLow, StatErrHigh, SysError fields for SpectralCoord, Time, Sky and Flux are optional; however, omitting these fields indicates that the errors are unknown. Data providers are STRONGLY encouraged to provide explicit error measures whenever possible.
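To make the clarification above concrete, here is a small illustrative sketch (the values are the worked numbers from the text):
{\small
\begin{verbatim}
# Illustrative only: errors are plus/minus OFFSETS from the value,
# bins are the VALUES at the bin edges.
value, err = 10.0, 3.0
stat_err_low, stat_err_high = 3.0, 3.0   # NOT 7.0 and 13.0

center, bin_size = 4200.0, 10.0
bin_low  = center - bin_size / 2.0       # 4195.0, NOT 10.0
bin_high = center + bin_size / 2.0       # 4205.0

# Upper limit of 5: flux value and lower error equal the limit,
# upper error zero, i.e. 5 (+0, -5).
flux, err_lo, err_hi = 5.0, 5.0, 0.0
\end{verbatim}
}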
\subsubsection{Resolution} We also include a trivial resolution model: a single number nominally representing a FWHM spectral or time resolution expressed in the same units as the spectral or time coordinate. The default is to assume that the resolution is equal to the BinSize if defined. The spatial (sky) resolution may be useful to know if it exceeds the aperture size; the default is to assume it is equal to the aperture size. For the spectral characterization, we allow an alternative field called the spectral resolving power, Spectrum.Char.SpectralAxis.ResPower; this is the dimensionless $\lambda/\Delta\lambda$. It is often preferred for spectra because it is often more constant across the spectrum than the resolution. ResPower and Resolution can be interchanged by dividing out Coverage.Location. Similar quantities can't really be defined for temporal and spatial resolving power since there's no absolute time or spatial scale, so we call the spectral one out as a special case. One could define a temporal or spatial frequency using the bounds - i.e. just the number of resolution elements in the spectrum - but that's a slightly different concept. \subsubsection{Quality} The Quality model represents quality by an integer, with the following meanings: 0 is good data, 1 is data which is bad for an unspecified reason (e.g., no data in the sample interval), and other integers greater than 1 may be used to flag data which is bad or dubious for specific reasons. The data provider may also define scalar string-valued metadata fields Quality.2, Quality.3... to define specific quality flags on a per-spectrum basis. Bitmasks, used in some archives such as SDSS, should be remapped to such independent Quality fields. Quality defaults to zero, i.e. good data. \subsubsection{Calibration} We also introduce a Calibration field which can have the values ABSOLUTE, RELATIVE, NORMALIZED or UNCALIBRATED. This is expected to be particularly useful to describe the flux. ABSOLUTE indicates that the values in the data are expected to be correct within the given uncertainty. RELATIVE indicates that although an unknown systematic error is present, the ratio of any two values will be correct. NORMALIZED indicates that the data have been divided by a reference spectrum; the flux UCD in this case should be suffixed by 'arith.ratio;' and the units should be blank (dimensionless). UNCALIBRATED indicates that although the values reflect a measurement of the given UCD, they are modified by an unspecified coordinate-dependent correction. Such values may be useful in the case of a spectrum with ABSOLUTE calibration on the wavelengths but UNCALIBRATED fluxes; the wavelengths of discontinuous features such as spectral lines can be measured on the assumption that the missing calibration function has no sharp discontinuities in the region of interest. The Calibration fields are present in the CharacterizationAxis elements. \clearpage \section{Associated Metadata Fields} Most of the associated metadata are generic observational metadata that can be applied to future data models, and are not specific to spectra. \subsection{CoordSys Fields} The CoordSys object is a simplified instance of the STC CoordSystem object. For XML serializations, it can be replaced by an actual STC CoordSystem instance. CoordSys consists of one or more CoordFrame objects, each of which defines the coordinates for a particular axis. The CoordSys has an overall ID string, which is user-defined and arbitrary.
Each CoordFrame also has a type, a UCD and a ReferencePosition; the ReferencePosition gives the origin of the coordinate system (and thus also its rest frame). For the space, time and spectral axes we define specialized CoordFrames for convenience: SpaceFrame, TimeFrame and SpectralFrame. The CoordFrame names (types) for SpaceFrame and TimeFrame must be from a controlled list; for other frames, the type is an arbitrary string. Note: for compatibility with the Characterization schema, data model elements Spectrum.Char.SpatialAxis.CoordSys, etc., are allowed, but in Spectrum these must be trivial references to the overall Spectrum.CoordSys. \setcounter{table}{3}
\begin{table}[h] \small
\begin{tabular}{|lll|}
\hline
Token & Meaning & Note \\
\hline
UNKNOWN & Unknown origin & \\
RELOCATABLE & Relative origin & Suitable for simulations \\
CUSTOM & Origin specified with respect to another system & \\
TOPOCENTER & Location of the observing device & (telescope) \\
&&\\
BARYCENTER & Solar system barycenter & \\
HELIOCENTER & Center of the Sun & \\
GEOCENTER & Center of the Earth & \\
&&\\
EMBARYCENTER & Earth-Moon barycenter & \\
MOON & Center of the Moon & \\
MERCURY & Center of Mercury & \\
VENUS & Center of Venus & \\
MARS & Center of Mars & \\
JUPITER & Center of Jupiter & \\
SATURN & Center of Saturn & \\
URANUS & Center of Uranus & \\
NEPTUNE & Center of Neptune & \\
PLUTO & Center of Pluto & \\
&&\\
LSRK & Kinematic local standard of rest & Redshift frame only \\
LSRD & Dynamic local standard of rest & Redshift frame only \\
GALACTIC\_CENTER & Center of the Galaxy & \\
LOCAL\_GROUP\_CENTER & Barycenter of the Local Group & \\
\hline
\end{tabular}
\caption{Allowed values for CoordFrame.ReferencePosition}
\end{table}

\subsubsection{SpaceFrame}

The SpaceFrame has an optional Equinox attribute which is used if the frame name is FK4 or FK5. The allowed frame names for SpaceFrame are listed below.
\begin{table}[h] \small
\begin{tabular}{|lll|}
\hline
Token & Meaning & Parameter(s) \\
\hline
UNKNOWN & Unknown frame & \\
CUSTOM & Custom frame & Pole, axis \\
AZ\_EL & Azimuth and elevation & \\
BODY & Generic body (e.g. planet) & \\
&&\\
ICRS & The ICRS frame & \\
FK4 & FK4 & Equinox \\
FK5 & FK5 & Equinox \\
ECLIPTIC & Ecliptic l,b & Equinox \\
GALACTIC\_I & Old galactic LI,BI & \\
GALACTIC\_II & Galactic LII,BII & \\
SUPER\_GALACTIC & SGL, SGB & \\
&&\\
MAG & Geomagnetic ref frame & \\
GSE & Geocentric Solar Ecliptic & \\
GSM & Geocentric Solar Magnetic & \\
SM & Solar Magnetic & \\
HGC & Heliographic & \\
HEE & Heliocentric Earth Ecliptic & \\
HEEQ & Heliocentric Earth Equatorial & \\
HCI & Heliocentric Inertial & \\
HCD & Heliocentric of Date & \\
&&\\
GEO\_C & Geocentric corotating & \\
GEO\_D & Geodetic ref frame & Spheroid \\
MERCURY\_C & Corotating planetocentric & \\
VENUS\_C & Corotating planetocentric & \\
LUNA\_C & Corotating planetocentric & \\
MARS\_C & Corotating planetocentric & \\
JUPITER\_C\_III & Corotating planetocentric & \\
SATURN\_C\_III & Corotating planetocentric & \\
URANUS\_C\_III & Corotating planetocentric & \\
NEPTUNE\_C\_III & Corotating planetocentric & \\
PLUTO\_C & Corotating planetocentric & \\
MERCURY\_G & Corotating planetographic & \\
VENUS\_G & Corotating planetographic & \\
LUNA\_G & Corotating planetographic & \\
MARS\_G & Corotating planetographic & \\
JUPITER\_G\_III & Corotating planetographic & \\
SATURN\_G\_III & Corotating planetographic & \\
URANUS\_G\_III & Corotating planetographic & \\
NEPTUNE\_G\_III & Corotating planetographic & \\
PLUTO\_G & Corotating planetographic & \\
\hline
\end{tabular}
\caption{Allowed values for CoordSys.SpaceFrame.Name}
\end{table}

\subsubsection{TimeFrame}

The TimeFrame is defined by the frame name and the ReferencePosition. Allowed values of the name are given below. One standard reference time in astronomy is the origin of Julian Day Number on the TT (Terrestrial Time) timescale, BC 4713 Nov 24 at 11:59:27.81 (Gregorian). Using TT is preferable to UTC because it does not contain leap seconds, so the elapsed time in days is just equal to the difference in JD values. The ISO-8601 calendar format standard does not support dates before AD 1, so it cannot express this reference time and is therefore not a suitable format for internal representations of such reference times. However, non-default choices of reference time may be specified in external serializations by a date in ISO-8601 format, e.g. "2004-11-30T11:59:00.01". In this version of the model we require the use of MJD as the time type for absolute times. (ISO dates and JD are other possibilities covered by the STC document.) Relative times in a time series may be in other units, relative to the TimeFrame.Zero value. (Note that in the FITS serialization, the MJDREF keyword allows definition of reference times in decimal days relative to MJD 0.0 = JD 2400000.5.)

\begin{table}[h] \small
\begin{tabular}{|lll|}
\hline
Token & Meaning & Note \\
\hline
&&\\
LOCAL & Relocatable (simulation) time & \\
&&\\
TT & Terrestrial Time & \\
UTC & Coordinated Universal Time & \\
ET & Ephemeris Time & \\
TDB & Barycentric Dynamical Time & \\
TCG & Geocentric Coordinate Time & \\
TCB & Barycentric Coordinate Time & \\
TAI & International Atomic Time & \\
LST & Local Sidereal Time & \\
\hline
\end{tabular}
\caption{Allowed values for CoordSys.TimeFrame.Name}
\end{table}

\subsubsection{SpectralFrame}

The spectral frame is defined by its ReferencePosition.
Once the choice of wavelength versus frequency or energy has been made, the only free parameter is the location at which the spectrum would have the given spectral coordinates. For directly observed data this is the topocenter (the location of the observation); spectra may be velocity-corrected to a given velocity frame, which may be defined by the location which is at rest in that velocity frame (e.g. the heliocenter). Strictly, the correction may not be just a velocity shift, but any kind of spectral shift including e.g. gravitational redshifts; it is still true that such a shift corresponds to a location (e.g. the surface of a white dwarf star) that can be quoted as a reference position. Since the frame is defined by its ReferencePosition, the frame name is not important and will not be significant to software. We suggest that it may be filled by the name of the spectral coordinate, using FITS names such as 'WAVE', 'FREQ' or 'ENER'. The spectral frame has an optional Redshift attribute to specify a rest frame; it is used only if the frame's ReferencePosition is "CUSTOM". This redshift is measured in dimensionless units, defined as $\Delta\lambda/\lambda$, and may be negative. No specific interpretation of the shift as a cosmological or velocity shift effect is implied; we note for the record that some co-authors object to using the word `redshift' in this generic sense.

\subsubsection{RedshiftFrame (also the Velocity frame)}

When you convert the spectral coordinate to velocity or redshift (relative to some assumed rest-frame spectral feature), you need to record some other metadata. Our field name containing this metadata is RedshiftFrame, but we emphasize that the name does not imply that blueshifts are excluded, merely that, in both galactic and extragalactic astronomy, when a shift is interpreted as a velocity a positive value indicates a shift to the red. The Redshift frame concept includes both cosmological and local Doppler velocities. Note that RedshiftFrame is used only when things are measured in velocities; a rest-frame spectrum of a redshifted quasar whose spectral axis is in {\aa}ngstroms will be described by a SpectralFrame.

The reason we have BOTH SpectralFrame and RedshiftFrame is to support certain data products, used particularly in spectral-line radioastronomy, in which a spectrum (possibly obtained in piecewise spectral regions) is refactored into a set of separate spectral segments centered on different spectral lines. Each segment is assigned a velocity axis centered on its line, and the same pixel from the original spectrum can appear in multiple segments, each with a different velocity coordinate; the data are then considered as a 2D array with a spectral axis (indexing the segments) and a velocity axis (within each segment). Other coordinate system information needed for velocity spectral coordinates includes the observation-fixed spectral frame, the observatory location, the source redshift, and the velocity zero point (in Greisen et al. 2006: SSYSOBS, OBSGEO, VELOSYS, RESTFRQ/RESTWAV). However, we omit these in the current model. The only metadata we provide is the Doppler Definition: optical, radio or pseudo-relativistic.

\subsubsection{STC}

{\bf Notes on compatibility with, and differences from, STC 1.0:}
\begin{itemize}
\item We add the extra Redshift attribute in the SpectralFrame, instead of the more complex CustomReferencePosition approach used in STC.
\item In STC's XML serialization, the frame types and reference positions are defined as enumerated elements. Here they are strings, and we require that the frame name is the frame type.
\item We do not explicitly include the coordinate flavor.
\end{itemize}

{\bf OPTIONAL:} All CoordSys values are optional, but data providers should take special care to check whether or not the defaults are appropriate for their data. The implications of the defaults are:
\begin{itemize}
\item Positions are given in ICRS RA,Dec in degrees and are heliocentric values (i.e. corrected for annual parallax and aberration, as normally found in source catalogs).
\item Times are given in MJD days and represent times of photon arrival at the telescope.
\item Spectral coordinates are as observed at the telescope, and not corrected for redshift, the motion of the Earth, etc.
\end{itemize}
\clearpage

\subsection{Characterization}

The Characterization metadata in this document are consistent with the IVOA Characterization data model draft as of March 2007. The Characterization model has a set of CharacterizationAxis objects. Each CharacterizationAxis describes the axis and contains a Coverage describing the scope of the data, and optionally a Resolution and a Sampling object. The CharacterizationAxis is identified by its UCD attribute. Spectrum instances should have Spatial, Time and Spectral characterization axes as well as a FluxAxis. To simplify things for the common axes, we define SpatialAxis, SpectralAxis and TimeAxis objects as special cases of CharacterizationAxis. The CoordSystem element in CharacterizationAxis is there for compatibility with the Characterization document and, if present, should be a simple reference to the main Spectrum CoordSystem. The Characterization fields will have a constant value for a given spectrum.

Note: in the SSA protocol/query response, we will restrict the Char units to meters (spectral coord), seconds (time coord) and decimal degrees (spatial), for simplicity and consistency with other parameters. We allow a more general approach for the full Spectrum instance (returned serializations); the units may be as described elsewhere in this document.

\subsubsection{Coverage Fields}

The coverage fields will have a constant value for a given spectrum. They describe the region of space, time and spectrum from which the data were taken. In the Characterization model, we define progressively more accurate descriptions of this region: Location gives a single characteristic point, Bounds gives a range within which the data lie, and Support gives the detailed spatial field-of-view footprint, on/off time ranges (including gaps) and spectral ranges. (A fourth level, Sensitivity, not yet supported, will provide detailed depth information: exposure map, time sensitivity variation, spectral transmission curve.) There is a field for giving the effective exposure time (useful for selecting among multiple spectra from the same instrument). The aperture field is important for determining what part of an extended object is contributing to the spectrum; we allow a simple aperture description (Char.SpatialAxis.Coverage.Bounds.Extent) consisting of a single number representing the aperture size in decimal degrees. For a slit spectrum, the effective aperture on the sky is usually the slit width in the cross-dispersion direction, while for a fiber it may be a circular region. For an accurate description, a full region polygon is allowed in the Area field.
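For example, a one-arcsecond fiber aperture could be recorded in the VOTable serialization of Part 3 as a single PARAM along the following lines (a sketch only; the name and ucd attribute values here are illustrative, not prescribed):

{ \footnotesize
\begin{verbatim}
<PARAM name="Aperture" utype="spec:Char.SpatialAxis.Coverage.Bounds.Extent"
       ucd="instr.fov" datatype="double" unit="deg" value="0.00028"/>
\end{verbatim}
}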
Note that since the goal of the VO Spectrum description is to describe the data as it is now, not to describe where it came from, our 'aperture' is always the effective extraction aperture, not the original instrument aperture if that is different. The units of the spectral Coverage.Bounds.Extent (or Coverage.Bounds.Start/Stop) and Coverage.Support should be the same as those of SpectralCoord.

For time, Coverage.Bounds.Start/Stop is a pair of values giving the start and stop time. Coverage.Bounds.Extent is the total elapsed time (Stop $-$ Start), while Coverage.Support.Extent is the effective exposure time (the total length of all observing intervals times any statistical dead-time filling factor). In the full Characterization model, Coverage.Support provides a whole array of start-stop pairs indicating data accumulated over a series of intervals. We may add this to the Spectrum model in a later revision.

SpatialAxis.Coverage.Location, SpatialAxis.Coverage.Bounds.Extent and TimeAxis.Coverage.Location are required, as is either TimeAxis.Coverage.Bounds.Extent or the pair TimeAxis.Coverage.Bounds.Start and Stop. If Extent is provided, Start and Stop are defined to be (Location $-$ 0.5*Extent, Location $+$ 0.5*Extent). The spectral equivalents, SpectralAxis.Coverage.Location and SpectralAxis.Coverage.Bounds.Start/Stop, are also required in the model; serializations may decide to omit them since they are easily derived from the data.

The SamplingPrecision.SamplingPrecisionRefVal.FillFactor (previously Coverage.Support.Fill) fields give the filling factor, a statistical way of indicating that an axis is only partly sampled. The full IVOA Characterization data model provides a more detailed SamplingPrecision tree; although we fill only part of this, we retain the field names for compatibility. FillFactor is used for dead-time corrections (time axis), statistical corrections for gaps between active pixels (spatial axis), and so on. Its value should be between 0 and 1, with the default being 1. (Although we provide a SpectralAxis FillFactor for symmetry and completeness, we are not aware of any practical application for it.)

\subsubsection{Region definitions}

In the optional Char.SpatialAxis.Coverage.Support.Area we describe the detailed aperture shape in absolute coordinates on the sky. However, we do not allow a full STC region description. Our simplified region model allows for (1) a circle and (2) a polygon, in a string representation: either \begin{quote} circle x0 y0 r \end{quote} or \begin{quote} polygon x1 y1 x2 y2 x3 y3 ... \end{quote} for example
\begin{verbatim}
circle 233.70 -13.32 0.00043
polygon 233.70 -13.32 233.71 -13.30 ...
\end{verbatim}
where the positions and radii are required to be in degrees, in the coordinate system defined by CoordSys.

\subsection{Derived Data Fields}

The Derived (short for Derived Data) object holds useful, and optional, summary information about the spectrum. For now, we include the option of adding signal-to-noise and variability indicators and a measurement of the redshift.

\subsubsection{Signal-to-noise}

The signal-to-noise is provided mainly as a way for searches to exclude data whose quality is insufficient for a particular study. Data providers may use their own definition, as we do not prescribe a uniform method to calculate it.
A suitable method, set forth by the STScI/ST-ECF/CADC Spectral Container Working Group, is to define the signal as the median of the flux values in the spectrum, and the noise as the median absolute third-order difference of flux values spaced two pixels apart, multiplied by $1.482602/\sqrt{6}$. Padded zeros in the flux values are skipped. A detailed description and discussion of the algorithm can be found in issue \#42 of the ST-ECF newsletter\footnote{\url{http://www.spacetelescope.org/about/further_information/newsletters/html/newsletter_42.html}}. Implementations of the algorithm can be obtained from the ST-ECF website\footnote{\url{http://www.stecf.org/software/ASTROsoft/DER_SNR/}}. This method describes the high-spectral-frequency noise but does not take into account intermediate-spectral-frequency background `noise'; projects which are background-dominated may wish to include this in the noise definition. Furthermore, most spectra vary in SNR across their waveband; users should therefore only use this single SNR as a crude selection parameter.

\subsubsection{Redshift measurement model}

One common piece of derived data for a spectrum is the source redshift. We provide fields for both the measured redshift value and its statistical error. As above, we define the redshift to be $\Delta\lambda/\lambda$, and it may be positive or negative. The Derived field represents a measurement of the redshift from the data; a field in the Target object is available to store the redshift of the source as known by other means. We add a further optional measure of accuracy, the Confidence, which expresses a probability between 0 and 1 that the quoted errors apply. This measure is used in the Sloan spectral service to describe the estimated probability that the redshift is completely in error because the lines have been misidentified. Its default value is 1.0. In general, such a Confidence could be useful for any measurement where the error probability distribution has multiple peaks in parameter space, and it could later be added to the standard Accuracy model.

Note that there are two other redshifts in our model: the Target redshift, a useful piece of metadata particularly for extragalactic objects, considered as an externally known property of the target (and so defined even if no lines are visible in the spectrum); and the SpectralFrame redshift, used only if a "rest frame" spectrum is presented, representing the assumed redshift used to shift the spectrum.

\subsubsection{Variability amplitude}

The variability amplitude field allows data providers to supply a characteristic amplitude (a precise value is not required). It is dimensionless; a value of 0.2 implies a 20 percent variation around the mean value.

\subsection{Curation model}

Curation is an object consistent with the Curation information in the document "Resource Metadata for the Virtual Observatory Version 1.01", although some of the fields from RM Curation have been moved to the DataID object, as discussed in the SSAP protocol document. In Curation, we have added a Reference field for a bibliographic or documentation reference (this can occur multiple times), a Rights field (same as Resource.Rights) for public/proprietary status, and a PublisherDID for a publisher-specified IVORN to the data. The Curation.PublisherDID is the same as the Resource Metadata V1.10 Resource.Identifier. Version is provided by the publisher or creator and may be any string. Curation.Publisher is REQUIRED.
All other fields are optional.

\subsection{Data Identification model}

The Data Identification model gives the dataset ID for a particular spectrum, and its membership of larger collections. All DataID fields are optional. There are three dataset identifiers in the model: one under Curation and two here. All of them comply with the description of dataset identifiers as specified by the IVOA\footnote{\url{http://www.ivoa.net/Documents/latest/IDs.html}}, including the use of 'stop' characters to identify specific datasets that are not individually in the registry.

The DataID.CreatorDID is the dataset ID defined internally by the creator, and may be entirely different from the DataID.DatasetID described below. It is used to identify a particular original exposure in an archive and will not necessarily change even if the VO object in question is a cutout or is otherwise further processed. The Curation.PublisherDID is a dataset ID defined by a publisher of the data; it may be an internal ID used by the archive. The DataID.DatasetID may be the same as Curation.PublisherDID; for this field we recommend a journal-based URI such as the IVOA/ADEC/ADS dataset identifier. By agreement between the AAS journals, the ADS and the ADEC (NASA data centers), dataset identifiers\footnote{\url{http://vo.ads.harvard.edu/dv/}} will be used to link journal articles back to the archival datasets containing the relevant observational data. If analogous but independent systems of URI designation are later adopted by other centers (e.g. by European journals) and accepted by the IVOA, they will also be suitable in this field. For example, a dataset held by an archive which curates many missions and telescopes may have an ID allocated by the original mission (CreatorDID), an ID used as an index by the multi-mission archive (PublisherDID), and the ADS-style ID (DatasetID). These may all be different, although we hope that many archives will choose to use the ADS ID as their index. (The three identifiers are illustrated in the sketch at the end of this subsection.)

We introduce the concept of a dataset creation type, which can have one of the following values, described in more detail in the SSAP protocol document.
\begin{itemize}
\item Archival, indicating that it is one of a collection of datasets (in this case spectra) generated in a systematic, homogeneous way and stored statically (or at least versioned). It will be possible to regenerate this dataset at a later date. The remaining types imply on-the-fly manipulation.
\item Cutout, indicating that the dataset was created on the fly by subsetting, but not by modifying values.
\item Filtered, which may involve excluding data prior to binning into samples, also on the fly.
\item Mosaic, combining multiple original datasets on the fly.
\item Spectral extraction, e.g. from a spectral data cube.
\item Catalog extraction [not relevant for the Spectrum model].
\end{itemize}

The dataset is associated with one or more Collections (instrument name, survey name, etc.) indicating some degree of compatibility with other datasets sharing the same Collection properties. Examples of possible Collection values are: "WFC", "Sloan", "BFS Spectrograph", "MSX Galactic Plane Survey". We also include a DataID.Bandpass, which is a string describing the spectral range. It can be one of the strings in the Resource-Service-Metadata Spectral.Coverage (e.g. "Optical") or Spectral.Coverage.Bandpass (e.g. "B"). At the moment there is no fixed list of values for the RSM Spectral.Coverage.Bandpass. For DataSource, see the SSAP protocol document.
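As an illustration of the three identifiers, consider the following hedged VOTable-style sketch; all identifier values here are invented for the example and carry no registry meaning:

{ \footnotesize
\begin{verbatim}
<PARAM name="CreatorDID" utype="spec:DataID.CreatorDID" datatype="char"
       arraysize="*" value="ivo://sao/FLWO#spec/1234"/>
<PARAM name="PublisherDID" utype="spec:Curation.PublisherDID" datatype="char"
       arraysize="*" value="ivo://cfa.harvard.edu/archive#obs/998877"/>
<PARAM name="DatasetID" utype="spec:DataID.DatasetID" datatype="char"
       arraysize="*" value="ivo://ADS/sa.example#1999/obs/4321"/>
\end{verbatim}
}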
\subsection{Target model}

In spectral data it is particularly important to be able to specify the target of the observation, which may be an astronomical source or some other target (calibration, diffuse background, etc.). By explicitly including a target model we can not only facilitate searches on particular types of target, but also support archives of model spectra for which the Coverage fields may not be relevant. The Target.Name field is required; all other Target fields are optional.

The Target.pos field gives a nominal RA and Dec for the target, for example the catalog position of the source; the Coverage.Location fields in the spectrum give the actual telescope pointing position for that spectrum. (An SED might have a single Target object with a known position, but many Spectrum objects with slightly different telescope pointings.) Similarly, Target.redshift is the assumed actual redshift of the astronomical object, if applicable (again, usually from a catalog, NED, etc.), while the redshifts in the Derived objects of the spectrum (segment) indicate a redshift measured from that spectrum. The Target.redshift is normally used to store the cosmological redshift of extragalactic objects, although it may also be used to store the observed redshift of Galactic sources if the data provider considers that information useful.

At the moment there is no international standard list of valid values for the target class and spectral class. Nevertheless, an initial deployment of the VO would gain some benefit from using archive-specific classes, and this would provide a framework for converging on a standard list.

\subsection{Spectrum top level object}

The Spectrum object contains the Data object with the actual data; the Target and Derived objects; and the standard dataset metadata of CoordSys, Characterization, Curation and DataID. We also add a CustomParams field to allow for the propagation of unmodelled, application-specific metadata. In addition, we add an SIDim field for each axis (TimeSI, SpectralSI, FluxSI) giving the SI units of the values in the Osuna-Salgado dimensional format.

In spectral associations (such as SED applications), the Spectrum model is reused for both Spectrum and TimeSeries and is renamed Segment. The Spectrum object is expected to be generalized to a higher-level Dataset object. Each Spectrum (or Segment) may have a Length attribute giving the number of flux points in the data (in some serializations this value is deduced from the size of the data arrays, while in others it is made explicit). Each Spectrum (or Segment) may also have a Type attribute indicating whether the data are intended as a TimeSeries (points at the same spectral coordinate but varying times), Photometry (points at different spectral coordinates with irregular gaps), Spectrum (points at different spectral coordinates in contiguous bins), or Mixed (some mixture of the above). This attribute is optional and defaults to Spectrum. Segments are discussed in more detail in the Spectral Associations document, which describes SEDs and other groupings. \clearpage

\section{Relationship to general VO data models}

The Spectrum model involves objects addressed by the proposed VO Observation and Quantity data models. Although these models have not yet been fully worked out, we may note that a single Spectrum maps to the Observation model, which will include the Curation and Characterization objects.
The Flux and spectral coordinate entries, together with their associated errors and quality, will be special cases of the Quantity model, as will the simpler individual parameters. The field structure presented here is consistent with current drafts of those models.

\subsection{Extensibility}

The model and serializations defined in this document are extensible in the following sense:
\begin{itemize}
\item Future versions of the abstract (UML) model may add attributes or fields, and may deprecate the 'optional' property of existing fields.
\item The Characterization object may have extra 'generic' axes, but these are not considered to be part of the Spectrum model. See the Characterization specification for more.
\item For the FITS serialization, implementors may add arbitrary additional keywords or table columns; readers must be able to handle files containing extra keywords and columns, and are encouraged to propagate such extra information when copying files. This permits local conventions to be layered on the basic definition.
\item For the VOTABLE serialization, implementors may add arbitrary additional GROUP, PARAM or FIELDref/FIELD elements, with the restriction that the layering of existing elements should not be changed (e.g. within the spectrum:DataID GROUP one may add a new GROUP containing newly defined PARAMs, but one may not move the existing Title PARAM inside the new group, because that would change its nesting level). Readers must be able to handle files containing extra elements and are encouraged to propagate such extra information when copying files. This permits local conventions to be layered on the basic definition.
\item For the XML object-based serialization, the CustomParams element at the top level of Spectrum is intended to allow extensibility and is equivalent to the ability to add a new GROUP at the top level. Future versions of the schema could use type extension to back-compatibly include the current schema as a special case, but apart from the CustomParams we have not currently provided for local extensibility within the current schema. (We could improve the schema by allowing the Group element to have arbitrary extra Param elements.)
\end{itemize}

\section*{References}

McDowell, Tody 2011, IVOA Simple Spectral Access Protocol V1.1 \\ http://www.ivoa.net/Documents/SSA/ \vskip 0.1in \par\noindent
Martinez, Derriere 2007, The UCD1+ Controlled Vocabulary Version 1.23 \\ http://www.ivoa.net/Documents/cover/UCDlist-20070402.html \vskip 0.1in \par\noindent
Louys et al. 2008, Data Model for Astronomical DataSet Characterisation \\ http://www.ivoa.net/Documents/latest/CharacterisationDM.html \vskip 0.1in \par\noindent
Rots 2007, Space-Time Coordinate Metadata for the Virtual Observatory \\ http://www.ivoa.net/Documents/latest/STC.html \vskip 0.1in \par\noindent
Ochsenbein et al. 2004, VOTable Format Definition V1.1 \\ http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html \vskip 0.1in \par\noindent
Greisen, E. W., Valdes, F. G., Calabretta, M. R. and Allen, S. L. 2006, A\&A 446, 747. \vskip 0.1in \par\noindent
Hanisch, R. (ed.), Resource Metadata for the VO, Version 1.01, 2004 Apr 26. \\ http://www.ivoa.net/Documents/latest/RM.html \vskip 0.1in \par\noindent
Derriere, S. et al. (eds.), UCD, Moving to UCD 1+, 2004 Oct 26.
\\ http://www.ivoa.net/Documents/latest/UCD.html \clearpage \addcontentsline{toc}{part}{Part 2 - XML schema serialization} {\Large \vfill \htpart{Part 2: XML schema serialization} \vfill } \clearpage \input{specxml2} \clearpage \addcontentsline{toc}{part}{Part 3 - VOTABLE serialization} {\Large \vfill \vskip 5.0in \htpart{Part 3: VOTABLE serialization} \vfill } \clearpage \input{specvot} \clearpage \addcontentsline{toc}{part}{Part 4 - FITS serialization} {\Large \vfill \vskip 5.0in \htpart{Part 4: FITS serialization} \vfill } \clearpage \input{fits2} \end{document}

\section{VOTABLE serialization}

\subsection{Mapping Schema to VOTABLE}

We reproduce below the XML schema instance example as a VOTABLE instance example. To go from the XML instance to the VOTABLE instance, we:
\begin{itemize}
\item map the top-level element to a RESOURCE;
\item map all elements with simple content to PARAM;
\item map all elements with complex content to GROUP;
\item map the element names (with appropriate path) to values of the utype attribute;
\item but handle the FIELDS and Data elements in a special way: the FIELDS element is used to define the table fields, and the Data element is used to define the table data;
\item and, also, map all the second-level elements below RESOURCE except SPECTRUM to an initial TABLE, while SPECTRUM maps to a second TABLE.
\item Most of the elements extend the Param element, to which we have added an optional name attribute that is not used in the instance. If this attribute is used, it can hold the name attributes of the PARAM and FIELD; otherwise the relevant attributes can be filled with the same value as the utype (without the namespace prefix).
\end{itemize}

How can this be generalized to mapping an arbitrary data model schema to VOTABLE? The only tricky parts are:
\begin{itemize}
\item {\bf Spotting where the tabledata parts are.} We could require any DM schema that maps to VOTABLE to include elements called FIELDS and Data (perhaps ROWS would be a better name); otherwise you would get a VOTABLE with no data section.
\item {\bf Spotting where to start the main TABLE (i.e. the fact that SPECTRUM is special).} We could change the schema to have an explicit attribute, annotation or other marker to tell us this.
\end{itemize}
These issues will require further discussion for future models.

\subsection{A VOTABLE instance}

The VOTable version of Spectrum uses a single VOTable {\mbox{$<$}}TABLE{\mbox{$>$}}. (Note that this may appear as one of many tables within an SED VOTable.) The data model fields described above as arrays map to VOTable FIELDs, while the remaining fields map to PARAMs. We use nested GROUP constructs to delimit data model objects within the main object, and PARAM and FIELD tags for attributes. Nesting beyond a single GROUP is optional, since in cases where the utypes are unique within a group, the utypes can be used to infer the data model structure. See http://webtest.aoc.nrao.edu/ivoa-dal for a service returning VOTABLE Spectrum instances with only one level of GROUP. Names of fields and parameters are left to the data provider; the utype and ucd attributes are used to denote data model and UCD tags. The schema and namespace for the utypes is the XML schema given in section 8.4. We have made up arbitrary NAME attributes for the PARAMs and these are not to be considered standard; the name fields are free to be whatever the data provider wants, allowing compatibility with local archive nomenclature.
The NAME attributes for the FIELD elements are also not standardized (of course they must be the same as in the matching FIELDrefs); it is the utype attribute which is standardized. The one departure from the XML schema below is that the `Data' element and the individual `Point' elements are implicitly represented by the table structure itself. Perhaps a UTYPE attribute on the TABLEDATA element could be used to make this explicit. The examples below describe a single SPECTRUM. { \footnotesize \begin{flushleft} \begin{fmpage} \begin{verbatim}
<?xml version="1.0" encoding="UTF-8"?>
<VOTABLE version="1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="http://www.ivoa.net/xml/VOTable/VOTable-1.1.xsd"
  xmlns:spec="http://www.ivoa.net/xml/SpectrumModel/v1.01"
  xmlns="http://www.ivoa.net/xml/VOTable/v1.1">
<RESOURCE utype="spec:Spectrum">
<TABLE utype="spec:Spectrum">
<GROUP utype="spec:Target">
<PARAM name="Target" utype="spec:Target.Name" datatype="char" arraysize="*"
  value="Arp 220"/>
<PARAM name="TargetPos" utype="spec:Target.pos" unit="deg" datatype="double"
  arraysize="2" value="233.737917 23.503330"/>
<PARAM name="z" utype="spec:Target.redshift" datatype="float" value="0.0018"/>
</GROUP>
<!-- SegmentType can be Photometry, TimeSeries or Spectrum -->
<PARAM name="Segtype" utype="spec:SegmentType" datatype="char" arraysize="*"
  value="Photometry" ucd="meta.code"/>
<GROUP name="CoordSys" utype="spec:CoordSys">
<GROUP utype="spec:CoordSys.SpaceFrame">
<PARAM name="System" utype="spec:CoordSys.SpaceFrame.Name" ucd="pos.frame"
  datatype="char" arraysize="*" value="ICRS"/>
<PARAM name="Equinox" utype="spec:CoordSys.SpaceFrame.Equinox"
  ucd="time.equinox;pos.eq" datatype="float" value="2000.0" />
</GROUP>
<GROUP utype="spec:CoordSys.TimeFrame">
<PARAM name="TimeFrame" utype="spec:CoordSys.TimeFrame.Name" ucd="time.scale"
  datatype="char" arraysize="*" value="UTC"/>
</GROUP>
<GROUP utype="spec:CoordSys.SpectralFrame">
<PARAM name="SpectralFrame" utype="spec:CoordSys.SpectralFrame.RefPos"
  ucd="sdm:spect.frame" datatype="char" arraysize="*" value="BARYCENTER"/>
</GROUP>
</GROUP>
\end{verbatim} \end{fmpage} \begin{fmpage} \begin{verbatim}
<GROUP utype="spec:Char">
<GROUP utype="spec:Char.SpatialAxis">
<PARAM name="SpatialAxisName" utype="spec:Char.SpatialAxis.Name" ucd="pos.eq"
  unit="deg" value="Sky"/>
<GROUP utype="spec:Char.SpatialAxis.Coverage">
<GROUP utype="spec:Char.SpatialAxis.Coverage.Location">
<PARAM name="SkyPos" utype="spec:Char.SpatialAxis.Coverage.Location.Value"
  ucd="pos.eq" unit="deg" datatype="double" arraysize="2"
  value="132.4210 12.1232"/>
</GROUP>
<GROUP utype="spec:Char.SpatialAxis.Coverage.Bounds">
<PARAM name="SkyExtent" utype="spec:Char.SpatialAxis.Coverage.Bounds.Extent"
  ucd="pos.angDistance;instr.fov" datatype="double" unit="arcsec" value="20"/>
</GROUP>
</GROUP>
</GROUP>
<GROUP utype="spec:Char.TimeAxis">
<PARAM name="TimeAxisName" utype="spec:Char.TimeAxis.Name" ucd="time" unit="d"
  value="Time"/>
<GROUP utype="spec:Char.TimeAxis.Coverage">
<GROUP utype="spec:Char.TimeAxis.Coverage.Location">
<PARAM name="TimeObs" utype="spec:Char.TimeAxis.Coverage.Location.Value"
  ucd="time.epoch;obs" datatype="double" value="52148.3252"/>
</GROUP>
<GROUP utype="spec:Char.TimeAxis.Coverage.Bounds">
<PARAM name="TimeExtent" utype="spec:Char.TimeAxis.Coverage.Bounds.Extent"
  ucd="time.duration" unit="s" datatype="double" value="1500.0" />
<PARAM name="TimeStart" utype="spec:Char.TimeAxis.Coverage.Bounds.Start"
  ucd="time.start" unit="d" datatype="double" value="52100.000" />
<PARAM name="TimeStop" utype="spec:Char.TimeAxis.Coverage.Bounds.Stop"
  ucd="time.end" unit="d"
datatype="double" value="52300.000" /> </GROUP> <GROUP utype="Char.TimeAxis.Coverage.Support"> <PARAM name="TimeExtent" utype="Char.TimeAxis.Coverage.Support.Extent" ucd="time.duration;obs.exposure" unit="s" datatype="double" value="1500.0" /> <PARAM name="TimeStart" utype="Char.TimeAxis.Coverage.Bounds.Start" ucd="time.start" unit="s" datatype="double" value="52100.000" /> <PARAM name="TimeStop" utype="Char.TimeAxis.Coverage.Bounds.Stop" ucd="time.end" unit="s" datatype="double" value="52300.000" /> </GROUP> </GROUP> </GROUP> <GROUP utype="spec:Char.SpectralAxis"> <PARAM name="SpectralAxisName" utype="Char.SpectralAxis.Name" ucd="em.wl" unit="angstrom" value="Wavelength"/> <GROUP utype="Char.SpectralAxis.Coverage"> <GROUP utype="Char.SpectralAxis.Coverage.Bounds"> <PARAM name="SpectralExtent" utype="Char.SpectralAxis.Coverage.Bounds.Extent" ucd="instr.bandwidth" unit="angstrom" datatype="double" value="3000.0"/> </GROUP> </GROUP> </GROUP> </GROUP> \end{verbatim} \end{fmpage} \begin{fmpage} \begin{verbatim} <GROUP utype="spec:Curation"> <PARAM name="Publisher" utype="spec:Curation.Publisher" ucd="meta.curation" datatype="char" arraysize="*" value="SAO"/> <PARAM name="PubID" utype="spec:Curation.PublisherID" ucd="meta.ref.url;meta.curation" datatype="char" arraysize="*" value="ivo://cfa.harvard.edu"/> <PARAM name="Contact" utype="spec:Curation.Contact.Name" ucd="meta.bib.author;meta.curation" datatype="char" arraysize="*" value="Jonathan McDowell"/> <PARAM name="email" utype="spec:Curation.Contact.Email" ucd="meta.email" datatype="char" arraysize="*" value="[email protected]"/> </GROUP> <GROUP utype="spec:DataID"> <PARAM name="Title" utype="spec:DataID.Title" datatype="char" arraysize="*" value="Arp 220 SED"/> <PARAM name="Creator" utype="spec:Segment.DataID.Creator" ucd="meta.curation" datatype="char" arraysize="*" value="ivo://sao/FLWO"/> <PARAM name="DataDate" utype="spec:DataID.Date" ucd="time.epoch;meta.dataset" datatype="char" arraysize="*" value="2003-12-31T14:00:02Z"/> <PARAM name="Version" utype="spec:DataID.Version" ucd="meta.version;meta.dataset" datatype="char" arraysize="*" value="1"/> <PARAM name="Instrument" utype="spec:DataID.Instrument" ucd="meta.id;instr" datatype="char" arraysize="*" value="BCS"/> <PARAM name="Filter" utype="spec:DataID.Collection" ucd="inst.filter.id" datatype="char" arraysize="*" value="G300"/> <PARAM name="CreationType" utype="spec:DataID.CreationType" datatype="char" arraysize="*" value="Archival"/> <PARAM name="Logo" utype="spec:DataID.Logo" ucd="meta.ref.url" datatype="char" arraysize="*" value="http://cfa-www.harvard.edu/nvo/cfalogo.jpg"/> </GROUP> <GROUP utype="spec:Derived"> <PARAM name="SNR" utype="spec:Derived.SNR" datatype="float" value="3.0"/> </GROUP> <GROUP utype="spec:Data"> <GROUP utype="spec:Data.SpectralAxis"> <FIELDref ref="Coord"/> <GROUP utype="spec:Data.SpectralAxis.Accuracy"> <FIELDref ref="BinLow"/> <FIELDref ref="BinHigh"/> </GROUP> <!-- In this case Resolution is demoted from Field to Param since it is constant --> <PARAM name="Resolution" utype="spec:Data.SpectralAxis.Resolution" ucd="spect.resolution;em.wl" unit="angstrom" datatype="float" value="14.2"/> </GROUP> \end{verbatim} \end{fmpage} \begin{fmpage} \begin{verbatim} <GROUP utype="spec:Data.FluxAxis"> <FIELDref ref="Flux1"/> <GROUP utype="spec:Data.FluxAxis.Accuracy"> <FIELDref ref="ErrorLow"/> <FIELDref ref="ErrorHigh"/> <PARAM name="SysErr" utype="SysErr" unit="" datatype="float" value="0.05"/> </GROUP> <FIELDref ref="Quality"/> </GROUP> </GROUP> <FIELD name="Coord" 
ID="Coord" utype="spec:Data.SpectralAxis.Value" ucd="em.wl" datatype="double" unit="angstrom"/> <FIELD name="BinLow" ID="BinLow" utype="spec:Data.SpectralAxis.BinLow" ucd="em.wl;stat.min" datatype="double" unit="angstrom"/> <FIELD name="BinHigh" ID="BinHigh" utype="spec:Data.SpectralAxis.BinHigh" ucd="em.wl;stat.max" datatype="double" unit="angstrom"/> <FIELD name="Flux" ID="Flux1" utype="spec:Data.FluxAxis.value" ucd="phot.flux.density;em.wl" datatype="double" unit="erg cm**(-2) s**(-1) angstrom**(-1)"/> <FIELD name="ErrorLow" ID="ErrorLow" utype="spec:Data.FluxAxis.Accuracy.StatErrLow" datatype="double" unit="erg cm**(-2) s**(-1) angstrom**(-1)"/> <FIELD name="ErrorHigh" ID="ErrorHigh" utype="spec:Data.FluxAxis.Accuracy.StatErrHigh" datatype="double" unit="erg cm**(-2) s**(-1) angstrom**(-1)"/> <FIELD name="Quality" ID="Quality" datatype="int" utype="spec:Data.FluxAxis.Quality"/> <DATA> <TABLEDATA> <!-- Note slightly nonlinear wavelength solution --> <!-- Second row is upper limit --> <!-- Third row has quality mask set --> <TR><TD>3200.0</TD><TD>3195.0</TD><TD>3205.0</TD><TD>1.38E-12</TD><TD>5.2E-14</TD><TD>6.2E-14</TD> <TD>0</TD></TR> <TR><TD>3210.5</TD><TD>3205.0</TD><TD>3216.0</TD><TD>1.12E-12</TD><TD>1.12E-12</TD> <TD>0</TD><TD>0</TD></TR> <TR><TD>3222.0</TD><TD>3216.0</TD><TD>3228.0</TD><TD>1.42E-12</TD><TD>1.3E-14</TD> <TD>0.2E-14</TD><TD>3</TD></TR> </TABLEDATA> </DATA> </TABLE> </RESOURCE> </VOTABLE> \end{verbatim} \end{fmpage} \end{flushleft} } A second example, based on the reference SSAP proxy service for the JHU SDSS spectrum archive: \input{doug.vot} \section{XML schema serialization} \subsection{XML schema} In the following XML schema, we implement the model fairly directly. Within a spectrum the data points are kept together in objects called Point. Also, we have included a CustomParams element to allow site-specific metadata to be added. The Coverage.Location fields have been collapsed to simple values rather than SEDCoord elements; this should perhaps be extended in a future version. The Flux object is defined as an example of a more general SEDQuantity object, which is also used for the Sloan spectral service's redshift information. A SED aggregation model is also included in the schema, as the top level element. This may be ignored until the SED model has been approved by IVOA. 
{ \footnotesize \begin{flushleft} \begin{fmppage} \begin{verbatim} <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns="http://www.ivoa.net/xml/Spectrum/Spectrum-1.01.xsd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" xmlns:xlink="http://www.w3.org/1999/xlink" targetNamespace="http://www.ivoa.net/xml/Spectrum/Spectrum-1.01.xsd" elementFormDefault="qualified" jxb:version="1.0"> <xs:import namespace="http://www.w3.org/1999/xlink" schemaLocation="http://www.ivoa.net/xml/Xlink/xlink.xsd"/> <!-- Customization for code generation with JAXB: not required otherwise --> <xs:annotation> <xs:appinfo> <jxb:globalBindings generateIsSetMethod="true"/> </xs:appinfo> </xs:annotation> <!-- A single segment corresponding to a spectrum or single point --> <xs:element name="BaseSegment" type="segmentType"/> <xs:element name="Spectrum" type="spectrumType" substitutionGroup="BaseSegment"/> <xs:element name="Segment" type="spectrumType" substitutionGroup="BaseSegment"/> <xs:element name="TimeSeries" type="timeSeriesType" substitutionGroup="BaseSegment"/> <xs:complexType name="spectrumType"> <xs:complexContent mixed="false"> <xs:extension base="segmentType"/> </xs:complexContent> </xs:complexType> <xs:complexType name="timeSeriesType"> <xs:complexContent mixed="false"> <xs:extension base="segmentType"/> </xs:complexContent> </xs:complexType> <xs:complexType name="segmentType"> <xs:complexContent mixed="false"> <xs:extension base="Group"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="1" name="Target" type="targetType" /> <xs:element minOccurs="0" maxOccurs="1" name="Char" type="characterizationType" /> <xs:element minOccurs="0" maxOccurs="1" name="CoordSys" type="coordSysType" /> <xs:element minOccurs="0" maxOccurs="1" name="Curation" type="curationType" /> <xs:element minOccurs="0" maxOccurs="1" name="DataID" type="dataIDType" /> <xs:element minOccurs="0" maxOccurs="1" name="Derived" type="derivedDataType" /> <xs:element minOccurs="0" maxOccurs="1" name="CustomParams" type="arrayOfParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Type" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Length" type="intParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="TimeSI" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="SpectralSI" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="FluxSI" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" ref="Data"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> \end{verbatim} \end{fmppage} \begin{fmppage} \begin{verbatim} <!-- The top level element: an SED with one target and many segments --> <xs:element name="SED" nillable="true" type="sedType" /> <xs:complexType name="sedType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="1" name="Date" type="timeParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Target" type="targetType" /> <xs:element minOccurs="0" maxOccurs="1" name="CustomParams" type="arrayOfParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Type" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="NSegments" type="intParamType" /> <xs:element minOccurs="0" maxOccurs="unbounded" ref="BaseSegment"/> <xs:element minOccurs="0" maxOccurs="1" name="Creator" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="CreatorDID" type="textParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="SpectralMinWavelength" type="doubleParamType" /> <xs:element minOccurs="0" 
maxOccurs="1" name="SpectralMaxWavelength" type="doubleParamType" /> </xs:sequence> </xs:complexType> <!-- Define the UCDs etc for the SED coordinate and the flux coordinate, and include a global to specify accuracy etc which happens to be constant for the entire segment (note that in SEDCoord, value has minOccurs=0 so it can be omitted) --> <!-- A single SEDCoord (time or spectral coord) value, or two values if it is spatial. --> <xs:complexType name="sedBaseCoordType"> <xs:complexContent mixed="false"> <xs:extension base="Group"/> </xs:complexContent> </xs:complexType> <xs:complexType name="sedCoordType"> <xs:complexContent mixed="false"> <xs:extension base="sedBaseCoordType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="1" name="Value" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Accuracy" type="accuracyType" /> <xs:element minOccurs="0" maxOccurs="1" name="Resolution" type="doubleParamType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="sedQuantityType"> <xs:complexContent mixed="false"> <xs:extension base="sedBaseCoordType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="1" name="Value" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Accuracy" type="accuracyType" /> <xs:element minOccurs="0" maxOccurs="1" name="Resolution" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Quality" type="intParamType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> \end{verbatim} \end{fmppage} \begin{fmppage} \begin{verbatim} <!-- A set of useful types to add UCDs and units to base types; like BasicQuantity --> <xs:complexType name="Group" > <xs:attribute name="id" type="xs:ID" use="optional"/> <xs:attribute name="idref" type="xs:IDREF" use="optional"/> </xs:complexType> <xs:complexType name="textParamType"> <xs:simpleContent> <xs:extension base="paramType" /> </xs:simpleContent> </xs:complexType> <xs:complexType name="paramType"> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute name="name" type="xs:string" /> <xs:attribute name="ucd" type="xs:string" /> </xs:extension> </xs:simpleContent> </xs:complexType> <xs:complexType name="dateParamType"> <xs:simpleContent> <xs:extension base="paramType" /> </xs:simpleContent> </xs:complexType> <xs:complexType name="positionParamType"> <xs:sequence> <xs:element minOccurs="2" maxOccurs="2" name="value" type="doubleParamType" /> </xs:sequence> </xs:complexType> <xs:complexType name="doubleParamType"> <xs:simpleContent> <xs:extension base="paramType"> <xs:attribute name="unit" type="xs:string" /> </xs:extension> </xs:simpleContent> </xs:complexType> <xs:complexType name="timeParamType"> <xs:simpleContent> <xs:extension base="paramType"> <xs:attribute name="unit" type="xs:string" /> </xs:extension> </xs:simpleContent> </xs:complexType> <xs:complexType name="intParamType"> <xs:simpleContent> <xs:extension base="paramType"> <xs:attribute name="unit" type="xs:string" /> </xs:extension> </xs:simpleContent> </xs:complexType> \end{verbatim} \end{fmppage} \begin{fmppage} \begin{verbatim} <!-- The error model. 
Bin entries will usually be omitted for the flux coordinate --> <xs:complexType name="accuracyType"> <xs:complexContent mixed="false"> <xs:extension base="Group"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="1" name="BinLow" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="BinHigh" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="BinSize" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="StatError" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="StatErrLow" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="StatErrHigh" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="SysError" type="doubleParamType" /> <xs:element minOccurs="0" maxOccurs="1" name="Confidence" type="doubleParamType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <!-- The Field type allows us to define what our axes are --> <xs:complexType name="arrayOfFieldType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="unbounded" name="Field" nillable="true" type="fieldType" /> </xs:sequence> </xs:complexType> <xs:complexType name="fieldType"> <xs:attribute name="name" type="xs:string" /> <xs:attribute name="unit" type="xs:string" /> <xs:attribute name="ucd" type="xs:string" /> </xs:complexType> <!-- The Point type groups a single set of time, spectral and flux values --> <xs:element name="Data" type="arrayOfGenPointType"/> <xs:complexType name="arrayOfGenPointType"/> <xs:complexType name="arrayOfPointType"> <xs:complexContent mixed="false"> <xs:extension base="arrayOfGenPointType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="unbounded" name="Point" nillable="true" type="pointType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:element name="ArrayOfPoint" type="arrayOfPointType" substitutionGroup="Data"/> <xs:complexType name="pointType"> <xs:sequence> <xs:element name="TimeAxis" minOccurs="0" maxOccurs="1" type="sedCoordType" /> <xs:element name="SpectralAxis" minOccurs="0" maxOccurs="1" type="sedCoordType" /> <xs:element name="FluxAxis" minOccurs="0" maxOccurs="1" type="sedQuantityType" /> <xs:element name="BackgroundModel" minOccurs="0" maxOccurs="1" type="sedQuantityType" /> </xs:sequence> </xs:complexType> \end{verbatim} \end{fmppage} \begin{fmppage} \begin{verbatim} <xs:element name="ArrayOfFlatPoint" type="arrayOfFlatPointType" substitutionGroup="Data"/> <xs:complexType name="arrayOfFlatPointType"> <xs:complexContent mixed="false"> <xs:extension base="arrayOfGenPointType"> <xs:sequence> <xs:element minOccurs="0" maxOccurs="unbounded" name="Point" nillable="true" type="flatPointType" /> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="flatPointType"> <xs:attribute name="T" type="xs:double"/> <xs:attribute name="T_BinL" type="xs:double" /> <xs:attribute name="T_BinH" type="xs:double" /> <xs:attribute name="T_Size" type="xs:double" /> <xs:attribute name="T_Res" type="xs:double" /> <xs:attribute name="SP" type="xs:double" /> <xs:attribute name="SP_BinL" type="xs:double" /> <xs:attribute name="SP_BinH" type="xs:double" /> <xs:attribute name="SP_Size" type="xs:double" /> <xs:attribute name="SP_Res" type="xs:double" /> <xs:attribute name="F" type="xs:double" /> <xs:attribute name="F_ErrL" type="xs:double" /> <xs:attribute name="F_ErrH" type="xs:double" /> <xs:attribute name="F_Sys" type="xs:double" /> <xs:attribute name="F_Qual" type="xs:int" /> <xs:attribute name="BG" type="xs:double" 
/>
  <xs:attribute name="BG_ErrL" type="xs:double" />
  <xs:attribute name="BG_ErrH" type="xs:double" />
  <xs:attribute name="BG_Sys" type="xs:double" />
  <xs:attribute name="BG_Qual" type="xs:int" />
</xs:complexType>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<!-- Now we define the higher level metadata -->
<xs:complexType name="contactType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Name" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Email" type="textParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="curationType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Publisher" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="PublisherID" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Reference" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Version" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Contact" type="contactType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Rights" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Date" type="dateParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="PublisherDID" type="textParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="characterizationType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element name="SpatialAxis" type="characterizationAxisType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="TimeAxis" type="characterizationAxisType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="SpectralAxis" type="spectralCharacterizationAxisType" minOccurs="0" maxOccurs="1"/>
        <xs:element name="FluxAxis" type="characterizationAxisType" minOccurs="0" maxOccurs="1"/>
        <xs:element minOccurs="0" maxOccurs="unbounded" name="CharacterizationAxis" type="characterizationAxisType"/>
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<xs:complexType name="characterizationAxisType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="CoordSystem" type="coordSysType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Coverage" type="coverageType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Resolution" type="doubleParamType"/>
        <xs:element minOccurs="0" maxOccurs="1" name="Accuracy" type="accuracyType" />
        <xs:element minOccurs="0" maxOccurs="1" name="SamplingPrecision" type="samplingPrecisionType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Calibration" type="textParamType" />
      </xs:sequence>
      <xs:attribute name="name" type="xs:string" />
      <xs:attribute name="ucd" type="xs:string" />
      <xs:attribute name="unit" type="xs:string" />
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:element name="CharacterizationAxis" type="characterizationAxisType"/>
<xs:complexType name="coverageType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Location" type="coverageLocationType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Bounds" type="coverageBoundsType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Support" type="coverageSupportType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="spectralCharacterizationAxisType">
  <xs:complexContent mixed="false">
    <xs:extension base="characterizationAxisType">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="ResPower" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="coverageLocationType">
  <xs:complexContent mixed="false">
    <xs:extension base="sedBaseCoordType">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="2" name="Value" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Accuracy" type="accuracyType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Resolution" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<xs:complexType name="coverageBoundsType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Extent" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Range" type="intervalType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="coverageSupportType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Area" type="skyRegionType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Extent" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="unbounded" name="Range" type="intervalType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="intervalType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Min" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Max" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="samplingPrecisionType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="SamplingPrecisionRefVal" type="samplingPrecisionRefValType" />
        <xs:element minOccurs="0" maxOccurs="1" name="SampleExtent" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="samplingPrecisionRefValType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="FillFactor" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<xs:complexType name="skyRegionType">
  <xs:complexContent mixed="false">
    <xs:extension base="textParamType"/>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="derivedDataType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="SNR" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="VarAmpl" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Redshift" type="sedQuantityType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="dataIDType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Title" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Creator" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="unbounded" name="Collection" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="DatasetID" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Date" type="dateParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Version" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Instrument" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="CreationType" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Bandpass" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="CreatorDID" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="unbounded" name="Contributor" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Logo" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="DataSource" type="textParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<xs:complexType name="targetType">
  <xs:complexContent mixed="false">
    <xs:extension base="Group">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Name" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Description" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="TargetClass" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="SpectralClass" type="textParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Redshift" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="Pos" type="positionParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="VarAmpl" type="doubleParamType" />
        <xs:element minOccurs="0" maxOccurs="1" name="CustomParams" type="arrayOfParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="arrayOfParamType">
  <xs:sequence>
    <xs:element minOccurs="0" maxOccurs="unbounded" name="Param" nillable="true" type="paramType" />
  </xs:sequence>
</xs:complexType>
<xs:attributeGroup name="STCReference">
  <xs:annotation>
    <xs:documentation>These four attributes represent the standard IVOA
    referencing system: internal (within the document) referencing through
    "id" and "idref", external referencing through Xlink, using only
    "type=simple" and "href".</xs:documentation>
  </xs:annotation>
  <xs:attribute name="id" type="xs:ID" use="optional"/>
  <xs:attribute name="idref" type="xs:IDREF" use="optional"/>
  <xs:attribute name="ucd" type="xs:string" use="optional"/>
  <xs:attribute ref="xlink:type" use="optional" default="simple"/>
  <xs:attribute ref="xlink:href" use="optional"/>
</xs:attributeGroup>
<xs:complexType name="coordSysType">
  <!--<xs:complexContent> -->
  <xs:sequence maxOccurs="unbounded">
    <xs:element ref="CoordFrame" minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attributeGroup ref="STCReference"/>
  <!-- </xs:complexContent>-->
</xs:complexType>
<xs:element name="CoordFrame" type="coordFrameType" abstract="true"/>
<xs:element name="SpaceFrame" type="spaceFrameType" substitutionGroup="CoordFrame"/>
<xs:element name="RedshiftFrame" type="redshiftFrameType" substitutionGroup="CoordFrame"/>
<xs:element name="SpectralFrame" type="spectralFrameType" substitutionGroup="CoordFrame"/>
<xs:element name="GenericCoordFrame" type="coordFrameType" substitutionGroup="CoordFrame"/>
<xs:element name="TimeFrame" type="timeFrameType" substitutionGroup="CoordFrame"/>
\end{verbatim}
\end{fmppage}
\begin{fmppage}
\begin{verbatim}
<xs:complexType name="coordFrameType">
  <xs:annotation>
    <xs:documentation>Simplification of STC version: RefPos is string</xs:documentation>
  </xs:annotation>
  <xs:sequence>
    <xs:element name="Name" type="xs:string" minOccurs="0"/>
    <xs:element name="ReferencePosition" type="xs:string" minOccurs="0"/>
  </xs:sequence>
  <xs:attribute name="id" type="xs:ID"/>
  <xs:attribute name="ucd" type="xs:string" use="optional"/>
</xs:complexType>
<xs:complexType name="spectralFrameType">
  <xs:complexContent>
    <xs:extension base="coordFrameType">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Redshift" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="timeFrameType">
  <xs:complexContent>
    <xs:extension base="coordFrameType">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Zero" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="redshiftFrameType">
  <xs:complexContent>
    <xs:extension base="coordFrameType">
      <xs:sequence>
        <xs:element name="DopplerDefinition" type="xs:string" nillable="true"/>
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
<xs:complexType name="spaceFrameType">
  <xs:complexContent>
    <xs:extension base="coordFrameType">
      <xs:sequence>
        <xs:element minOccurs="0" maxOccurs="1" name="Equinox" type="doubleParamType" />
      </xs:sequence>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>
</xs:schema>
\end{verbatim}
\end{fmppage}
\end{flushleft}
}
\subsection{Instance example}
\input{ex.xml}
\section{Introduction}

In recent years there has been increasing interest in imaging methods that combine different physical modalities of interrogation, known as hybrid, or coupled-physics, inverse problems. In this paper we concentrate on Photoacoustic Tomography (PAT), which couples high-contrast optical tomography with high-resolution ultrasound waves. The first step of this imaging technique consists of reading the boundary response to an acoustic signal, in order to reconstruct the absorbed energy distribution inside the biological tissue under inspection. We assume that this first step has already been performed, and we focus on the second step of the procedure, which consists of reconstructing the absorption and diffusion coefficients from the measurements of the absorbed energy distribution obtained in the previous step. We refer to \cite{RGZ} for an extended bibliographical review of this problem, known as Quantitative Photoacoustic Tomography (qPAT).

Let us denote by $\Omega\subset\mathbb R^n$ the body enclosing the biological tissue under inspection. Note that the physically significant space dimension is $n=3$, but, since we shall also need to refer to model cases when $n=2$, we shall leave $n\geq 2$ undetermined. We denote by $u(x)$ the photon density at the point $x\in\om$. Then $u(x)$ solves the following boundary value problem \begin{equation} \label{m1} \left\{\begin{array}{ll} -\dive( D\nabla u)+\sigma u=0 & \textrm{in }\Omega,\\[2mm] u=g & \textrm{on }\partial \Omega, \end{array}\right.\end{equation} where $D=D(x)>0$, $\sigma=\sigma(x)$ are the diffusion and absorption coefficients, respectively, and $g$ is the illumination source prescribed on the boundary. The goal of qPAT is to recover information on the coefficients $D$, $\sigma$ from the knowledge of the absorbed energy \[H(x)=\sigma(x)u(x),\qquad x\in\ov\Omega,\] possibly repeating the experiment with different profiles of the illumination $g$. Note, incidentally, that a more accurate model would require the introduction of the additional multiplicative unknown parameter $\Gamma(x)$ called the Gr\"uneisen coefficient. Here we adopt, for simplicity, the commonly used convention of assuming $\Gamma\equiv 1$. See \cite{Ba-Re} for a discussion on this issue.

This problem has been considered in \cite{Ba-Uh}, where a uniqueness result with two measurements is proven. The authors assume $D\in C^{k+2}$, $\sigma\in C^{k+1}$ with $k\geq1$. They also present a Lipschitz stability theorem. This result was later improved in \cite{Ba-Re}, which also provides a numerical reconstruction procedure. More recently, \cite{RGZ} considered the same problem when the prescribed illumination is modeled by a Robin boundary condition, rather than a Dirichlet one. A reconstruction method for the full nonlinear inversion is also treated there.

All the above quoted results rely on a nondegeneracy condition which can be illustrated as follows. If $u_1$, $u_2$ are two solutions to \eqref{m1} corresponding to two different illuminations $g_1$, $g_2>0$, then it is a well-known fact that the quotient \[u=\frac{u_2}{u_1}\] satisfies an elliptic equation in pure divergence form \begin{equation*} \left\{\begin{array}{ll} \dive( a\nabla u)=0 & \textrm{in }\Omega,\\[2mm] u=g & \textrm{on }\partial \Omega, \end{array}\right.\end{equation*} where $g$ is the ratio $\frac{g_2}{g_1}$ (see Proposition \ref{proph-1} below). 
It is also easy to see that the solution of the qPAT problem boils down to solving the inverse problem of finding $a$ given $u$ and the boundary values $a_{|_{\der\om}}$. This is a relatively easy task if $g$ is chosen in such a way that the following nondegeneracy condition holds: \begin{equation}\label{X} |\nabla u|>0,\textrm{ everywhere in }\om. \end{equation} When $n=2$ there exists a well-established criterion which enables one to choose the Dirichlet data $g$ (independently of $a$!) so that \eqref{X} holds (\cite{A}, \cite{AM}). Such a criterion is unimodality; that is, roughly speaking, the graph of $g$ has a single peak of maximum points, a single one of minimum points, and is monotone in between. In dimension $n\geq 3$ complex valued solutions satisfying \eqref{X} can be constructed by the method of Complex Geometrical Optics \cite{Ba-Uh}, but their boundary data do depend on the interior values of the (unknown) coefficient $a$ and thus they cannot be chosen a priori. Real valued solutions which locally satisfy $|\nabla u|>0$ can also be constructed (see \cite{Greene-Wu}, \cite[Theorem 4.7]{Bal-Uhl}), but they too depend on the unknown coefficient $a$. Indeed, there are reasons to believe that, when $n\geq 3$, there exists no Dirichlet data $g$ such that \eqref{X} is satisfied for every $a$. See, for related discussions, \cite{AN}, \cite{Ca}.

The principal aim of the present paper is to treat stability even when the above stated nondegeneracy condition may be violated. We shall show that stability can be obtained with essentially arbitrary illuminations $g_1$, $g_2$, imposing only one constraint on their ratio $g=\frac{g_2}{g_1}$, namely that $g$ satisfies a condition of unimodality adapted to the $(n-1)$-dimensional boundary $\der\om$. This condition shall be made more precise in Definition \ref{qu}.

Our strategy shall be as follows. In dimension $n\geq 3$, for a fixed $g$, we cannot assure the nonvanishing of $|\nabla u|$ throughout $\om$, but, under reasonable assumptions, it is possible to keep the vanishing rate in the interior under control. This will be the content of Lemma \ref{quc}. On the other hand, assuming unimodality we can make sure that $|\nabla u|>0$ on a small neighborhood of $\der\om$ (Lemma \ref{Hopf}). Next we adapt from \cite{A} a weighted stability estimate on the coefficient $a$ in terms of $u$ and of $a_{|_{\der\om}}$. Using the previously mentioned estimates on the vanishing rate of $|\nabla u|$ and a suitable interpolation inequality \cite{S}, we arrive at an (unweighted) stability estimate of H\"older type for $a$ (Theorem \ref{teo2}). The deduction of stability bounds for $D$ and $\sigma$ follows the track of well-known arguments, see for instance \cite{RGZ}.

Let us emphasize that most of the present effort is devoted to two main goals: \begin{enumerate} \item To avoid the nondegeneracy condition \eqref{X}. \item To make precise (but feasible) a-priori assumptions which guarantee a quantitative, concrete, evaluation in our stability estimates. \end{enumerate} It is our belief that the present approach can be useful also for other, more complex, hybrid inverse problems, where analogous issues of nondegeneracy arise.

The paper is organized as follows. In the next Section \ref{Sec2} we provide the main assumptions and we state our main result (Theorem \ref{theoh-2}). Its proof is based on some auxiliary propositions, given in the subsequent Section \ref{Sec3}, and is presented in Section \ref{Sec4}. 
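Before proceeding, we illustrate numerically the role of \eqref{X} with a toy one-dimensional computation (our own sketch, not an algorithm used in this paper; all names and values in it are hypothetical). In one dimension, $\dive(a\nabla u)=0$ means that $a(x)u'(x)$ equals a constant $C$, so $a$ is recovered from $u$ wherever $u'\neq0$:
\begin{verbatim}
import numpy as np

# Toy 1-D illustration: (a u')' = 0  =>  a(x) u'(x) = C  =>  a = C / u'.
x = np.linspace(0.0, 1.0, 2001)
a_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)       # "unknown" coefficient

# Forward problem: u(x) = C * int_0^x ds / a(s), normalized so u(1) = 1.
w = 1.0 / a_true
u = np.concatenate(([0.0],
                    np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
C = 1.0 / u[-1]
u *= C

# Inverse step: recover a from u by finite differences.
a_rec = C / np.gradient(u, x)
print("max |a_rec - a_true| =", np.max(np.abs(a_rec - a_true)))  # ~1e-6
\end{verbatim}
The reconstruction degrades precisely where $|u'|$ becomes small; the estimates on the vanishing rate of $|\nabla u|$ developed in Section \ref{Sec3} are designed to control exactly this effect in higher dimensions.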
\section{Assumptions and Main Result} \label{Sec2}

We assume $\om$ to be a $C^2$-smooth, bounded domain in $\RR^n$ diffeomorphic to the unit ball $B_1(0)$. More precisely, from a quantitative point of view, we assume that there exists a diffeomorphism $F$ of class $C^2$ such that, for given constants $Q_0$, $Q_1>0$, \begin{subequations}\label{diff} \begin{equation} F:B_1(0)\leftrightarrow\om,\end{equation} \begin{equation} \|F\|_{C^2(B_1(0))}\leq Q_0,\end{equation} \begin{equation} \left|F(x)-F(y)\right|\geq \frac{1}{Q_1} |x-y|,\textrm{ for every } x,y\in B_1(0).\end{equation} \end{subequations} The constants $Q_0$, $Q_1$ shall be part of the a-priori information that shall be used in our quantitative stability estimates.

We are interested in recovering the unknown parameters $D$ and $\sigma$ by performing two measurements, that is, prescribing two data $g_1$ and $g_2$ on $\der\om$ and measuring the corresponding internal pressure fields. In particular, we establish a continuous dependence of the unknown parameters on the measured data. Given constants $\lambda_0$, $\lambda_1$, $E_0$, $E_1$, $\mu_0$, $\mu_1$, $\mu_2>0$ (which shall also be part of the a-priori information), we consider unknown coefficients $D$ and $\sigma$ such that \begin{equation} \label{h1-1} D\in W^{1,\infty}(\om),\quad \sigma\in W^{1,\infty}(\om), \end{equation} and \begin{equation} \label{h2-1} \lambda_0^{-1}\leq D\leq \lambda_0, \textrm{ for every }x\in \om, \end{equation} \begin{equation} \label{h4-1} \lambda_1^{-1}\leq \sigma\leq \lambda_1, \textrm{ for every }x\in \om, \end{equation} \begin{equation} \label{h3-1} \|D\|_{W^{1,\infty}(\om)}\leq E_0,\quad \|\sigma\|_{W^{1,\infty}(\om)}\leq E_1. \end{equation} The boundary data we choose are functions $g_1$ and $g_2$ such that \begin{equation} \label{h4.5-1} g_i\in C^2(\der\om), \quad\quad \|g_i\|_{C^2(\der\om)}\leq \mu_0\,\mbox{ for }i=1,2,\end{equation} \begin{equation} \label{h5-1} \mu_1^{-1}\leq g_i(x)\leq \mu_1,\textrm{ for every } x\in\om,\quad i=1,2. \end{equation} Moreover, denoting by $$g=\frac{g_2}{g_1}\quad\textrm{and}\quad\ov g=\frac{1}{|\partial \om|}\int_{\partial\om}g,$$ we assume that \begin{equation} \label{h5.5-1} \|\,g-\ov{g}\,\|_{L^2(\der\om)}\geq \mu_2^{-1}. \end{equation} \begin{rem} Let us emphasize that \eqref{h5.5-1} represents constructively the assumption that the illuminations $g_1$, $g_2$ are linearly independent. \end{rem} \begin{rem} Note that, if $g_1$, $g_2$ satisfy assumptions \eqref{h5-1} and \eqref{h5.5-1}, then the so-called frequency function associated to $g$, \begin{equation} \label{freq} F[g]:=\frac{\|g-\ov{g}\|_{H^{1/2}(\der\om)}}{\|g-\ov{g}\|_{L^{2}(\der\om)}}, \end{equation} is bounded by a constant depending only on $\mu_0$, $\mu_2$ and $Q_1$. \end{rem} We shall see that it is convenient to assume that the ratio $g=\frac{g_2}{g_1}$ of the illuminations has a specific behaviour, which is expressed in the following definition. This is a form of monotonicity assumption which, however, needs to be specified in a quantitative fashion. 
\begin{definition}\label{qu} Given $m$, $M$, $0<m<M$, and a continuous, strictly increasing function $\omega:\RR^+\to\RR^+$ such that $\omega(0)=0$, we say that a function $g\in C^1(\der\om,\RR)$ is \emph{quantitatively unimodal} if \begin{equation}\label{maxmin} m\leq g(x)\leq M,\mbox{ for every }x\in \der\om, \end{equation} the subsets of $\der\om$ \begin{equation}\label{ptimaxmin} \Gamma_m=\{x\in\der\om\,:\,g(x)=m\}\mbox{ and }\Gamma_M=\{x\in\der\om\,:\,g(x)=M\} \end{equation} are connected and non-empty, possibly reduced to single points, and, for every $x\in\der\om\setminus(\Gamma_m\cup\Gamma_M)$ such that $dist(x,\Gamma_m\cup\Gamma_M)\geq \delta$, we have \begin{equation}\label{quantunimod} |\nabla_T g(x)|\geq \omega(\delta), \end{equation} where $\nabla_T$ denotes the tangential gradient. \end{definition} For instance, when $\der\om$ is the unit sphere, $g(x)=2+x_n$ is quantitatively unimodal with $m=1$, $M=3$, $\Gamma_m=\{-e_n\}$, $\Gamma_M=\{e_n\}$ and $\omega(\delta)=\delta/2$, since $|\nabla_T g(x)|^2=1-x_n^2$. Also $m$, $M$, and $\omega$ shall be part of the a-priori information.

In our stability result, we compare two different sets of diffusion and absorption coefficients, which we denote by $D^{(1)},\,\sigma^{(1)} $ and $D^{(2)},\, \sigma^{(2)}$. Let $u_i^{(j)}$, for $i,j=1,2$, be the solution to \begin{equation}\label{h6-1} \left\{ \begin{array}{ll} -\dive\left(D^{(j)}\nabla u_i^{(j)}\right)+\sigma^{(j)}u_i^{(j)} =0& \mbox{in }\om,\\ u_i^{(j)}=g_i& \mbox{on }\der\om. \end{array} \right. \end{equation} We emphasize that, in the notation $u_i^{(j)}$, the superscript is associated with the unknown parameters $D^{(j)}$, $\sigma^{(j)}$, whereas the subscript is associated with the illumination $g_i$. The available measurements, which represent the internal pressure fields generated by the absorption of photon energy, are given by \begin{equation} \label{h8-1} H_i^{(j)}=\sigma^{(j)}u_i^{(j)}. \end{equation} We prove the following (uniqueness and) stability result:

\begin{theo}\label{theoh-2} Let all the assumptions stated above be satisfied. If \begin{equation} \label{h1-2} \left\|H_i^{(1)}-H_i^{(2)}\right\|_{L^2(\om)}\leq \ep,\mbox{ for }i=1,2, \end{equation} and \begin{equation}\label{Dbordo} \left\|D^{(1)}-D^{(2)}\right\|_{L^\infty(\der\om)}\leq \ep^\prime, \end{equation} then we have \begin{equation} \label{h2-2} \left\|D^{(1)}-D^{(2)}\right\|_{L^2(\om)}+\left\|\sigma^{(1)}-\sigma^{(2)}\right\|_{L^2(\om)} \leq C\left(\ep+ \ep^\prime\right)^{\theta}, \end{equation} where $C$ and $\theta\in(0,1)$ only depend on the a-priori information $Q_0$, $Q_1$, $\lambda_0$, $\lambda_1$, $E_0$, $E_1$, $\mu_0$, $\mu_1$, $\mu_2$, $m$, $M$ and $\omega$. \end{theo}

\section{Auxiliary results} \label{Sec3} The proof of Theorem \ref{theoh-2} is based on a result concerning the stable reconstruction of the leading coefficient of a second order elliptic equation from the knowledge of the internal values of one of its nonconstant solutions.

\begin{theo}\label{teo1} Let $\om$ be diffeomorphic to the unit ball. Let $a$ and $b\in W^{1,\infty}(\om)$ be such that \begin{equation} \label{3-1} C_0^{-1}\leq a(x),\,b(x)\leq C_0,\textrm{ for every }x\in \om, \end{equation} and \begin{equation} \label{3.5-1} \left|\nabla a(x)\right|, \left|\nabla b(x)\right|\leq C_1 ,\textrm{ for almost every }x\in \om. \end{equation} Let $g$ and $k\in C^2(\der\om)$, with \begin{equation} \label{4-1} \|g\|_{C^2(\der\om)},\|k\|_{C^2(\der\om)}\leq C_2.\end{equation} Assume $g$ satisfies assumption \eqref{h5.5-1} and \begin{equation} \label{5-1} g(x)\geq C_3^{-1}\mbox{ for every } x\in\om. 
\end{equation} Let $u$ and $v$ in $W^{1,2}(\om)$ be the unique solutions to the boundary value problems \begin{equation}\label{Pu} \left\{ \begin{array}{rl} \dive\left(a(x)\nabla u(x)\right) =0& \mbox{in }\om,\\ u=g& \mbox{on }\der\om, \end{array} \right. \end{equation} and \begin{equation}\label{Pv} \left\{ \begin{array}{rl} \dive\left(b(x)\nabla v(x)\right) =0& \mbox{in }\om,\\ v=k& \mbox{on }\der\om. \end{array} \right. \end{equation} Given $\om^\prime\subset\subset\om$ with $\mbox{dist}\left(\der\om,\om^\prime\right)\geq d_0$ and $\theta\in(0,1/2)$, there are positive constants $\tilde{C}$ and $\beta$, depending only on $C_0$, $C_1$, $C_2$, $\mu_2$, $C_3$, $Q_0$, $Q_1$, $d_0$ and $\theta$, such that \begin{equation} \label{tesiteo1} \|a-b\|_{L^\infty(\om^\prime)}\leq \tilde{C}\left(\|u-v\|_{L^2(\om)}^\theta+\|a-b\|_{L^\infty(\der\om)}\right)^\beta. \end{equation} \end{theo}

In order to prove Theorem \ref{teo1}, we need the following lemmas. For $d>0$ we denote $\om_d=\{x\in\om\,:\, dist(x,\der\om)>d\}$.

\begin{lem}\label{lemma2.2} Under the same assumptions of Theorem \ref{teo1}, for every $\rho>0$ and for every $x\in\om_{4\rho}$, \[\int_{B_\rho(x)}|\nabla u|^2\geq C_\rho \int_{\om}|\nabla u|^2,\] where $C_\rho$ depends on $C_0$, $C_1$, $Q_0$, $Q_1$, $F[g]$ and $\rho$ only. \end{lem}

\begin{proof} This Lemma corresponds to Theorem 4.2 in \cite{AMR2003}, stated there for solutions of the Neumann boundary problem instead of the Dirichlet one. The proof follows the same path, by taking $u_0=u-\ov{u}$ where $\ov{u}=|\om|^{-1}\int_\om u$. The only difference lies in passing from estimate $(4.28)$ in \cite{AMR2003} to an estimate in terms of the frequency function $F[g]$. In this case, by standard elliptic estimates, \[\|\nabla u_0\|_{L^2(\om)}=\|\nabla (u-\ov{g})\|_{L^2(\om)}\leq C \|g-\ov{g}\|_{H^{1/2}(\der\om)},\] where $C$ depends on $Q_0$, $Q_1$ and $C_0$, and \[\|u_0\|_{L^2(\der\om)}=\|g-\ov{u}\|_{L^2(\der\om)}\geq\|g-\ov{g}\|_{L^2(\der\om)},\] hence, from estimate $(4.28)$ in \cite{AMR2003}, we get \[\frac{\|\nabla u_0\|_{L^2(\om)}}{\|u_0\|_{L^2(\om)}}\leq C\frac{\|\nabla u_0\|^2_{L^2(\om)}}{\| u_0\|^2_{L^2(\der\om)}}\leq C\left(\frac{\|g-\ov{g}\|_{H^{1/2}(\der\om)}}{\|g-\ov{g}\|_{L^2(\der\om)}}\right)^2=CF[g]^2.\] The rest of the proof is as in \cite{AMR2003}. \end{proof}

\begin{lem}\label{quc} Under the same assumptions of Theorem \ref{teo1}, given $\om^\prime\subset\subset\om$ with $\mbox{dist}\left(\der\om,\om^\prime\right)\geq d_0$, there exist positive constants $K_1$ and $K_2>1$ and $d_1\leq d_0$, depending only on $C_0$, $C_1$, $C_2$, $\mu_2$, $C_3$ and $d_0$, such that, for every $x_0\in\om^\prime$ and $r\leq d_1$, \begin{equation} \label{1-8} \int_{B_r(x_0)}|\nabla u|^2\geq \frac{r^{K_1}}{K_2}. \end{equation} \end{lem}

\begin{proof} As in the proof of Theorem 4.3 in \cite{AMR2003}, starting from the doubling inequality by Garofalo and Lin (\cite{GL}) and using the Caccioppoli and Poincar\'{e} inequalities, we can show that there is a constant $d_1$, with $2d_1\leq d_0$, depending only on $C_0$ and $C_1$, such that, for $r\leq\rho\leq d_1$, \begin{equation} \label{6-9} \int_{B_\rho(x_0)}|\nabla u|^2\leq C \left(\frac{2\rho}{r}\right)^{K-2} \int_{B_r(x_0)}|\nabla u|^2, \end{equation} where $C$ and $K$ depend on $C_0$ and $C_1$, and $K$ depends also, increasingly, on \begin{equation} \label{3-9} \tilde{N}(d_1)=\frac{d_1^2\int_{B_{2d_1}(x_0)}|\nabla u|^2}{\int_{B_{2d_1}(x_0)}(u-\ov{u}_r)^2}, \end{equation} where $\ov{u}_r=\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}u$. 
Now, in order to estimate $\tilde{N}(d_1)$ from above in terms of the a-priori information, we can use the Caccioppoli inequality to get \begin{equation} \label{1-10} \tilde{N}(d_1)\leq \frac{C_C}{4}\frac{\int_{B_{2d_1}(x_0)}|\nabla u|^2}{\int_{B_{d_1}(x_0)}|\nabla u|^2}. \end{equation} By \eqref{1-10} and Lemma \ref{lemma2.2} we have \begin{equation} \label{3-11} \tilde{N}(d_1)\leq\frac{C_C}{4}\frac{\int_{\om}|\nabla u|^2}{C_{d_1}\int_{\om}|\nabla u|^2}=\frac{C_C}{4C_{d_1}}. \end{equation} By \eqref{6-9} with $\rho=d_1$, \eqref{3-11} and Lemma \ref{lemma2.2} again, we get \begin{equation} \label{4-11} \int_{B_r(x_0)}|\nabla u|^2\geq C^{-1} \left(\frac{r}{d_1}\right)^{K-2} \int_{B_{d_1}(x_0)}|\nabla u|^2\geq \frac{r^{K_1}}{C} \int_\om|\nabla u|^2. \end{equation} Now, by assumption \eqref{h5.5-1}, trace estimates and the Poincar\'{e} inequality, we get \begin{equation}\label{comelolevo} \mu_2^{-2}\leq\|g-\ov{g}\|_{H^{1/2}}^2\leq C\|u-\ov{g}\|^2_{H^1(\om)}\leq C\|\nabla u\|^2_{L^2(\om)}, \end{equation} hence, by combining \eqref{4-11} and \eqref{comelolevo}, we finally get \eqref{1-8}.\end{proof}

\textit{Proof of Theorem \ref{teo1}.} The first step in the proof relies on Lemma 2.1 in \cite{A}. Although that Lemma is stated in a two-dimensional setting, its validity extends in a straightforward manner to any dimension. By Lemma 2.1 of \cite{A}, for any $\theta\in(0,1/2)$ there is a constant $K_0>0$, depending only on $C_0$, $C_1$, $C_2$, $Q_0$, $Q_1$ and $\theta$, such that \begin{equation} \label{1-3} \int_\om|a-b|\left|\nabla u\right|^2\leq K_0\left(\|u-v\|_{L^2(\om)}^\theta+\|a-b\|_{L^\infty(\der\om)}\right). \end{equation} We now reproduce an argument due to Sincich \cite[Proposition 4.9]{S}. Let us set $\phi=a-b$ and let $x_0\in \om^\prime$ be such that \begin{equation} \label{1-12} |\phi(x_0)|=\max_{\overline{\om^\prime}}|\phi(x)|. \end{equation} Since $a$ and $b$ satisfy assumption \eqref{3.5-1}, \begin{equation}\label{2-12} |\phi(x_0)|\leq |\phi(x)| +2C_1r,\textrm{ for every }x\in B_r(x_0),\textrm{ with }0<r\leq d_0. \end{equation} Multiplying \eqref{2-12} by $|\nabla u(x)|^2$ and integrating with respect to $x$ on $B_r(x_0)$, we get \begin{equation}\label{3-12} |\phi(x_0)|\int_{B_r(x_0)}\!\!\!\!\!\!|\nabla u(x)|^2dx\leq \int_{B_r(x_0)} \!\!\!\!\!\!|\phi(x)||\nabla u(x)|^2dx +2C_1r\int_{B_r(x_0)}\!\!\!\!\!\!|\nabla u(x)|^2dx, \end{equation} hence \begin{equation}\label{4-12} |\phi(x_0)|\leq \frac{\int_{B_r(x_0)} |\phi(x)||\nabla u(x)|^2dx}{\int_{B_r(x_0)}|\nabla u(x)|^2dx}+2C_1r. \end{equation} By \eqref{1-3}, \eqref{1-8} and \eqref{1-12} we have \begin{equation}\label{5-12} \max_{\overline{\om^\prime}}|a(x)-b(x)|\leq K_0 K_2 r^{-K_1}\left(\|u-v\|_{L^2(\om)}^\theta+\|a-b\|_{L^\infty(\der\om)}\right)+2C_1r. \end{equation} By choosing an appropriate $r\in (0,d_0)$ we get \eqref{tesiteo1}.\qed

Let us now show that, by choosing a boundary condition with some additional features, we can bound from below the norm of $\nabla u$ in a neighborhood of the boundary. The following Lemma is a variation on themes treated in \cite[Lemma 2.8, Theorem 4.1]{AN}.

\begin{lem}\label{Hopf} Let $\om$ be diffeomorphic to the unit ball and let $g$ be quantitatively unimodal, according to Definition \ref{qu}. 
If $u$ is the unique solution of problem \eqref{Pu} for a coefficient $a$ satisfying assumption \eqref{3-1}, then \begin{equation}\label{tshopf} |\nabla u|\geq C,\textrm{ for every }x\in \om,\textrm{ with }dist(x,\der\om)\leq\rho, \end{equation} where $\rho>0$ and $C>0$ depend only on $C_0$, $Q_0$, $Q_1$, $m$, $M$ and $\omega$. \end{lem}

\begin{proof} The diffeomorphism $F$ in assumption \eqref{diff} transforms the elliptic equation in \eqref{Pu} into a similar elliptic equation in $B_1(0)$ with a $W^{1,\infty}$ leading coefficient. The ellipticity constant and all the constants appearing in assumptions \eqref{maxmin}, \eqref{ptimaxmin} and \eqref{quantunimod} change in a controlled manner, depending only on the a-priori information. For this reason we assume, without loss of generality, that $\om=B_1(0)$. By regularity estimates for solutions of elliptic equations, the $C^{1,\beta}(\overline{B_1(0)})$ norm of $u$ is bounded in terms of the a-priori information; hence, for $x\in B_1(0)$ with $dist(x,\Gamma_M)<\eta$, we have \[u(x)-m\geq M-m-C\eta.\] By choosing $\eta$ small enough, we get \[u(x)-m\geq \frac{M-m}{2}.\] \noindent By the Harnack inequality (\cite[Theorem 8.20, Corollary 8.21]{GT}), \begin{equation} \label{hopf1} u(x)-m\geq C_\eta\frac{M-m}{2},\textrm{ for every }x\in\overline{B_{1-\eta}(0)}. \end{equation} In particular, if we choose $y\in\Gamma_m$ and $x=(1-\eta)y$, we get \[u(x)-u(y)\geq C_\eta\frac{M-m}{2}.\] By the Hopf lemma (\cite[Lemma 3.4]{GT}) we have \[|\nabla u(y)|\geq k>0,\textrm{ for every }y\in\Gamma_m.\] Since we can proceed in the same way on $\Gamma_M$, and by using again the $C^{1,\beta}$ regularity of $u$ up to the boundary, we have \begin{equation}\label{hopf2}|\nabla u(x) |\geq k-C\delta^\beta ,\textrm{ for every }x\in\der\om,\textrm{ with }dist(x,\Gamma_m\cup\Gamma_M)\leq \delta.\end{equation} By choosing $\delta=\overline{\delta}$ so that $C\overline{\delta}^\beta=k/2$, by \eqref{hopf2} and \eqref{quantunimod}, we have \[|\nabla u|\geq \min\{\omega(\overline{\delta}),k/2\},\textrm{ on }\der\om.\] By using again the $C^{1,\beta}$ regularity of $u$ up to the boundary, we get \eqref{tshopf}. \end{proof}

\begin{theo}\label{teo2} Let $\om$, $a$, $b$, $g$ and $k$ be as in Theorem \ref{teo1}. Let us also assume that $g$ is quantitatively unimodal. Let $u$ and $v$ in $W^{1,2}(\om)$ be the unique solutions to the boundary value problems \eqref{Pu} and \eqref{Pv}. Given $\theta\in(0,1/2)$, there are positive constants $\tilde{C}$ and $\beta$, depending only on $C_0$, $C_1$, $C_2$, $\mu_2$, $C_3$, $Q_0$, $Q_1$, $d_0$, $\theta$, $m$, $M$ and $\omega$, such that \begin{equation} \label{tesiteo2} \|a-b\|_{L^\infty(\om)}\leq \tilde{C}\left(\|u-v\|_{L^2(\om)}^\theta+\|a-b\|_{L^\infty(\der\om)}\right)^\beta. \end{equation} \end{theo}

\begin{proof} The proof follows the same steps as the proof of Theorem \ref{teo1}. It suffices to extend estimate \eqref{1-8} to every point $x_0$ in $\om$. This extension is possible by Lemma \ref{Hopf} and by the regularity assumptions on $\der\om$. 
\end{proof}

\section{Proof of Theorem \ref{theoh-2}} \label{Sec4} We proceed as in \cite{RGZ} (and in \cite{Ba-Uh}) and show that, for a fixed set of coefficients, the ratio of two solutions (corresponding to different boundary values) satisfies a partial differential equation in pure divergence form.

\begin{prop}\label{proph-1} For $j=1,2$, the function \begin{equation} \label{h9-1} U^{(j)}=\frac{H_2^{(j)}}{H_1^{(j)}} \end{equation} satisfies the equation \begin{equation} \label{h10-1} -\dive\left(a^{(j)}\nabla U^{(j)}\right)=0\mbox{ in }\om, \end{equation} where \begin{equation}\label{h11-1} a^{(j)}=\frac{D^{(j)}}{\left(\sigma^{(j)}\right)^2}\left(H_1^{(j)}\right)^2= D^{(j)}\left(u_1^{(j)}\right)^2. \end{equation} Moreover, \[ U^{(j)}=\frac{g_2}{g_1}\mbox{ on }\der\om. \] \end{prop}

\textit{Proof of Proposition \ref{proph-1}.} For the sake of simplicity we drop the superscript $(j)$. Since $\nabla\left(\frac{u_2}{u_1}\right)=\frac{u_1\nabla u_2-u_2\nabla u_1}{u_1^2}$ and each $u_i$ solves equation \eqref{h6-1}, we have \[-\dive\left(Du_1^2\nabla \left(\frac{u_2}{u_1}\right)\right)=-\dive\left(D\left(u_1\nabla u_2-u_2\nabla u_1\right)\right)=-u_1\dive\left(D\nabla u_2\right)+u_2\dive\left(D\nabla u_1\right)=-\sigma u_1u_2+\sigma u_2u_1=0, \] hence, by \eqref{h8-1} and \eqref{h11-1}, \[-\dive\left(a\nabla \left(\frac{H_2}{H_1}\right)\right)=-\dive\left(Du_1^2\nabla \left(\frac{u_2}{u_1}\right)\right)=0. \] \qed

Our aim is to apply Theorem \ref{teo2} to the functions $U^{(1)}$ and $U^{(2)}$ introduced in the previous proposition. First of all, let us show the following.

\begin{claim}\label{45} If \eqref{h1-2} holds, then \begin{equation} \label{h1-3} \left\|U^{(1)}-U^{(2)}\right\|_{L^2(\om)}\leq C\ep, \end{equation} where $C$ depends only on $\lambda_0$, $\lambda_1$, $\mu_0$, $E_0$ and $E_1$. \end{claim}

\textit{Proof of Claim \ref{45}.} Let us write \begin{equation} \label{h2-3} U^{(1)}-U^{(2)}=\frac{\left(H_2^{(1)}-H_2^{(2)}\right)H_1^{(2)}+\left(H_1^{(2)}-H_1^{(1)}\right)H_2^{(2)}}{H_1^{(1)}H_1^{(2)}}. \end{equation} We need a lower estimate for the functions $H_1^{(1)}$ and $H_1^{(2)}$. By \eqref{h8-1} and \eqref{h4-1}, it is enough to show that $u_1^{(j)}$ is bounded from below in terms of the a-priori information. By \eqref{h4-1} and \eqref{h5-1}, we can apply the Maximum Principle and get \begin{equation} \label{h1-4} 0\leq u_1^{(j)}(x)\leq \mu_1,\textrm{ for every }x\in\om. \end{equation} By Theorem 8.33 in \cite{GT}, $\nabla u_1^{(j)}$ is bounded in terms of the a-priori information. Since $u_1^{(j)}=g_1\geq \mu_1^{-1}$ on $\der\om$ (by \eqref{h5-1}), there exists a positive constant $d$ such that \begin{equation} \label{h3-4} u_1^{(j)}\geq \frac{\mu_1^{-1}}{2},\textrm{ in }\om\setminus\om_d, \end{equation} where, we recall, $\om_d=\{x\in\om\,:\, dist(x,\der\om)>d\}$. By the Harnack inequality (\cite[Theorem 8.20, Corollary 8.21]{GT}), \begin{equation} \label{h5-4} C_H\inf_{\om_{d/2}}u_1^{(j)}\geq \sup_{\om_{d/2}} u_1^{(j)}\geq \frac{\mu_1^{-1}}{2}, \end{equation} hence, by \eqref{h3-1} and \eqref{h5-1}, \begin{equation} \label{h6-4} \inf_\om u_1^{(j)}=\min\left\{\inf_{\om_{d/2}}u_1^{(j)}, \inf_{\om\setminus\om_d}u_1^{(j)}\right\}\geq \min\left\{\frac{\mu_1^{-1}}{2C_H},\frac{\mu_1^{-1}}{2}\right\}:=\mu_3^{-1}. \end{equation} Inequality \eqref{h6-4} holds for $u_2^{(j)}$ as well. By \eqref{h6-4} and \eqref{h4-1}, \begin{equation} \label{h7-4} H_i^{(j)}=\sigma^{(j)}u_i^{(j)}\geq (\lambda_1\mu_3)^{-1}. \end{equation} Moreover, by \eqref{h4-1} and \eqref{h1-4}, \begin{equation} \label{h8-4} H_i^{(j)}\leq \lambda_1 \mu_1. \end{equation} By \eqref{h2-3}, \eqref{h7-4}, \eqref{h8-4} and \eqref{h1-2} we finally have \begin{equation} \label{h1-5} \|U^{(1)}-U^{(2)}\|_{L^2(\om)}\leq 2 \mu_1\lambda_1^3\mu_3^2\ep. 
\end{equation} \qed

\textit{Proof of Theorem \ref{theoh-2} (Conclusion).} We notice that, by \eqref{h11-1}, \eqref{h2-1}, \eqref{h5-1}, \eqref{h3-1} and by Theorem 8.33 in \cite{GT}, the coefficients $a^{(1)}$ and $a^{(2)}$ satisfy assumptions \eqref{3-1} and \eqref{3.5-1}. Moreover, by \eqref{h11-1}, \eqref{h5-1} and \eqref{Dbordo}, \begin{equation} \label{h3-5} \left\|a^{(1)}-a^{(2)}\right\|_{L^\infty(\der\om)}\leq \mu_1^2 \ep^\prime. \end{equation} Hence, by Theorem \ref{teo2}, \begin{equation} \label{h4-5} \|a^{(1)}-a^{(2)}\|_{L^\infty(\om)}\leq \tilde{C}\left(\ep+\ep^\prime\right)^{\theta}, \end{equation} where $\tilde{C}$ and $\theta$ depend only on the a-priori information. We now proceed as in \cite{RGZ}. By a straightforward calculation it is easy to show that the function $\frac{1}{u_1^{(j)}}$ solves \begin{equation}\label{h1-7} \left\{ \begin{array}{rl} -\dive\left(a^{(j)}\nabla \left(\frac{1}{u_1^{(j)}}\right)\right)=H_1^{(j)}& \mbox{ in }\om,\\[2mm] \displaystyle{\frac{1}{u_1^{(j)}}=\frac{1}{g_1}}& \mbox{ on }\der\om, \end{array} \right. \end{equation} from which we get \begin{equation} \label{h1.5-7} -\dive\left(a^{(1)}\nabla \left(\frac{1}{u_1^{(1)}}-\frac{1}{u_1^{(2)}}\right)\right)=H_1^{(1)}-H_1^{(2)}+ \dive\left(\left(a^{(1)}-a^{(2)}\right)\nabla \left(\frac{1}{u_1^{(2)}}\right)\right). \end{equation} By Theorem 8.34 in \cite{GT}, \begin{equation} \label{h2-7} \left\|\nabla\left(\frac{1}{u_1^{(2)}}\right)\right\|_{L^\infty(\om)}= \left\|\frac{\nabla u_1^{(2)}}{\left(u_1^{(2)}\right)^2}\right\|_{L^\infty(\om)} \leq \mu_3^2\left\|\nabla u_1^{(2)}\right\|_{L^\infty(\om)}\leq C, \end{equation} hence, by \eqref{h4-5}, by assumption \eqref{h1-2} and by Corollary 8.7 in \cite{GT}, since $\frac{1}{u_1^{(1)}}-\frac{1}{u_1^{(2)}}=0$ on $\der\om$, we conclude that \begin{equation} \label{h1-8} \left\|\frac{1}{u_1^{(1)}}-\frac{1}{u_1^{(2)}}\right\|_{W^{1,2}(\om)} \leq C\left(\ep+ \ep^\prime\right)^{\theta}. \end{equation} Finally, by definition \eqref{h8-1}, \begin{equation} \label{h1-9} \sigma^{(1)}-\sigma^{(2)}=\frac{H_1^{(1)}}{u_1^{(1)}}-\frac{H_1^{(2)}}{u_1^{(2)}} =\frac{H_1^{(1)}-H_1^{(2)}}{u_1^{(1)}}+H_1^{(1)}\left(\frac{1}{u_1^{(1)}}-\frac{1}{u_1^{(2)}}\right), \end{equation} hence, by \eqref{h1-2} and \eqref{h1-8}, \begin{equation} \label{h2-9} \|\sigma^{(1)}-\sigma^{(2)}\|_{L^2(\om)}\leq \mu_3\ep+\lambda_1\mu_1C\left(\ep+ \ep^\prime\right)^{\theta}. \end{equation} Since \begin{equation} \label{h3-9} D^{(1)}-D^{(2)}=\frac{a^{(1)}}{\left(u_1^{(1)}\right)^2}-\frac{a^{(2)}}{\left(u_1^{(2)}\right)^2}, \end{equation} by \eqref{h4-5}, \eqref{h6-4} and \eqref{h1-8}, \begin{equation} \label{h4-9} \|D^{(1)}-D^{(2)}\|_{L^2(\om)}\leq C\left(\ep+ \ep^\prime\right)^{\theta}, \end{equation} where $C$ and $\theta$ depend only on the a-priori information.

\section*{Acknowledgements} GA was supported by FRA2014 \textit{Problemi inversi per PDE, unicit\`a, stabilit\`a, algoritmi}, Universit\`a degli Studi di Trieste. MDC, EF and SV were partially supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). EF was partially supported by the Research Project FIR 2013 \textit{Geometrical and qualitative aspects of PDE's}.

\bibliographystyle{plain}
\section{Introduction} Mesons with exotic quantum numbers have long been attractive objects in hadron physics, among which are the $J^{PC}=1^{-+}$ isovector states $\pi_{1}(1400)$, $\pi_{1}(1600)$ and $\pi_{1}(2015)$ identified in experiments \cite{key-2}. The structure of these states is not quite clear; four-quark states \cite{Zhang:2001sb,Zhang:2004nb,General:2007bk,Chen:2008qw} and hybrid states are the most plausible explanations. Theoretical studies via different methods have shown that some of these states can be considered as good light hybrid candidates. In the bag model, the predicted mass of the $1^{-+}$ light hybrid meson is around 1.5\,GeV \cite{key-3}; the mass from the flux tube model is found to be in the range 1.7--1.9\,GeV \cite{key-4-1}; the lattice QCD prediction of the $1^{-+}$ mass is 1.9--2.2\,GeV \cite{key-5-1}. Calculations based on QCD sum rules \cite{key-1-1} have been conducted by different groups \cite{key-6-1,key-7-1,key-8-1,key-8-2,key-9,key-9-1} up to NLO of the $d\leqq 6$ contributions, and the latest versions of the predicted mass are $1.80\pm0.06$\,GeV in \cite{key-10-1} and $1.71\pm0.22$\,GeV in \cite{key-11-1}. Although the hybrid explanation for $\pi_{1}(1600)$ is supported by previous sum rule analyses, the hybrid assignment of $\pi_{1}(2015)$ has also been proposed \cite{key-10-1}. Thus the calculation of the higher power corrections (HPC) of the OPE is interesting and of value: how, and by how much, the HPC affect the mass prediction could lead to quite different conclusions.

In this paper, we focus on the mass prediction of the $1^{-+}$ light hybrid meson using the QCD sum rule method. We will first present our calculation of the coefficients of the dimension-8 condensates and then include these higher dimensional contributions in the numerical analysis. Due to the possible violation of factorization of the $d=6$--$8$ condensates and the variation of the $\langle g^3G^3\rangle$ condensate, we will consider a conservative range of the mass prediction. We shall compare the results in the $d\leqq8$ case with those in the $d\leqq6$ case to show the variation of the mass prediction with the inclusion of the dimension-8 contributions. In order to obtain an objective conclusion, we shall pay special attention to the fixing of the continuum threshold $s_{0}$, which is not rigorously constrained in the original SVZ sum rules and therefore causes uncertainties. To solve this problem, some authors use the stability criterion to fix $s_{0}$ \cite{key-8-2,key-10-1}. In this work, we shall fit the sum rules following the matching procedure introduced by Leinweber in \cite{Leinweber:1995fn} and successfully performed in some other works \cite{Lee:1996dc,Lee:1997ix,Lee:1997ne,Wang:2008vg}, in which the continuum threshold $s_{0}$ is an output parameter and an uncertainty analysis can be provided. Since the explicit consideration of higher power corrections is not often seen in previous sum rule calculations, we will give a slightly more detailed presentation of our calculation and analysis.

\section{OPE for the current-current correlator} We start from the two-point correlator \begin{eqnarray} \Pi_{\mu\nu}(q^{2}) & = & i\int d^{4}xe^{iqx}\left\langle 0\left|T\left[j_{\mu}(x)j_{\nu}^{+}(0)\right]\right|0\right\rangle \label{eq:1}\\ & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\Pi_{v}(q^{2})+q_{\mu}q_{\nu}\Pi_{s}(q^{2})\nonumber \end{eqnarray} where $j_{\mu}(x)=g\bar{q}(x)\gamma_{\nu}iG_{\mu\nu}(x)q(x)$, and the invariants $\Pi_{v}(q^{2})$ and $\Pi_{s}(q^{2})$ correspond respectively to the $1^{-+}$ and $0^{++}$ contributions. 
The correlator obeys the standard dispersion relation \begin{equation} \Pi_{v/s}(q^{2})=\frac{1}{\pi}\int_{0}^{\infty}ds\frac{\textrm{Im}\Pi_{v/s}(s)}{s-q^{2}-i\epsilon}. \label{eq:20} \end{equation} In this paper, we focus on the dimension-8 corrections to the $1^{-+}$ mass. Before showing the higher power results, we need to mention that the coefficients of the dimension-8 quark-related operators of the $1^{-+}$ light hybrid two-point correlator have been calculated in \cite{key-6-1} and \cite{key-7-1}. In \cite{key-6-1} only a factorized form of the total result is given, while a complete result is given in \cite{key-7-1}. We obtain a new complete result which is consistent with the former factorized form but different from the latter one.

As for the dimension-8 gluon operators, IR divergences arise in the calculation of the quark loops as a result of setting $m_q=0$ before calculating the integrals. These IR divergences cancel once operator mixing is taken into account. This process provides a partial check of the calculation of the dimension-8 quark and gluon operators and modifies the finite parts of the coefficients of the gluon condensates. Good examples for the case of the $\bar q q$ scalar and vector currents are given in \cite{key-14,key-15}. According to the number of quark operators in the condensates, the dimension-8 quark condensates can be classified into two groups: two-quark $d=8$ condensates and four-quark $d=8$ condensates. Only the former can mix with the $d=8$ gluon condensates at LO. We use dimensional regularization in $n=4-\epsilon$ space-time dimensions; thus the $O(\epsilon)$ terms of the two-quark $d=8$ condensates can be obtained, which need to be multiplied by the $\frac{1}{\epsilon}$ subtractions to modify the finite parts of the quark loop calculations (see Eq.\eqref{eq:16}). The dimension-8 quark contributions (corresponding to the Feynman diagrams in Figure~\ref{fig:1}) are listed in Appendix A.

\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{fig1-q.eps} \caption{\label{fig:1}Feynman diagrams of dimension-8 quark contributions.} \end{figure}

Dimension-8 contributions of gluon condensates come from the calculation of quark loops. Here we give the quark propagator up to the $O(q^{-5})$ terms needed in the calculation of the quark loops: \begin{eqnarray} S(q) & = & S_{0}(q)+\frac{ig}{2}G_{\rho\mu}S_{0}(q)\gamma_{\mu}\frac{\partial}{\partial q_{\rho}}S_{0}(q)+\frac{g}{3}D_{\alpha}G_{\rho\mu}S_{0}(q)\gamma_{\mu}\frac{\partial}{\partial q_{\alpha}}\frac{\partial}{\partial q_{\rho}}S_{0}(q)\label{eq:8}\\ & - & \frac{ig}{8}D_{\alpha1}D_{\alpha2}G_{\rho\mu}S_{0}(q)\gamma_{\mu}\frac{\partial}{\partial q_{\alpha1}}\frac{\partial}{\partial q_{\alpha2}}\frac{\partial}{\partial q_{\rho}}S_{0}(q)-\frac{g^{2}}{4}G_{\rho\mu}G_{\sigma\nu}S_{0}(q)\gamma_{\mu}\frac{\partial}{\partial q_{\rho}}\left[S_{0}(q)\gamma_{\nu}\frac{\partial}{\partial q_{\sigma}}S_{0}(q)\right],\nonumber \end{eqnarray} where $D_{\mu}=\partial_{\mu}-igA_{\mu}$ and $S_{0}(q)=\frac{1}{\slashed q}$. 
For a massless quark, Eq.\eqref{eq:8} can be rewritten as \begin{eqnarray} S(q) & = & \frac{\slashed q}{q^{2}}+\frac{1}{q^{4}}gq_{\alpha}\tilde{G}_{\alpha\beta}\gamma_{\beta}\gamma_{5}\label{eq:9}\\ & + & \frac{1}{q^{6}}\left[-\frac{2}{3}g\left(q_{\alpha}q_{\rho}D_{\rho} G_{\alpha\beta}\gamma_{\beta}-J_{\mu}q_{\mu}\slashed q+q^{2}\slashed J\right)+2igq_{\alpha}q_{\rho}D_{\rho}\tilde{G}_{\alpha\beta}\gamma_{\beta}\gamma_{5}\right]\nonumber \\ & + & \frac{1}{q^{8}}\{-2igq_{\gamma}D_{\gamma}\left(q^{2}\slashed J-q_{\mu}J_{\mu}\slashed q\right)+\left[-4g\left(q_{\gamma}D_{\gamma}\right)^{2} +gq^{2}D^{2}\right]q_{\alpha}\tilde{G}_{\alpha\beta}\gamma_{\beta}\gamma_{5}+ 2ig\left(q_{\gamma}D_{\gamma}\right)^{2}q_{\alpha}G_{\mu\alpha}\gamma_{\mu}\nonumber \\ & + & 2g^{2}q_{\mu}q_{\alpha}G_{\mu\rho}G_{\alpha\rho}\slashed q+2g^{2}q^{2} q_{\mu}G_{\alpha\rho}G_{\rho\mu}\gamma_{\alpha}+ig^{2}q^{2}q_{\alpha}\left( \tilde{G}_{\mu\beta}G_{\alpha\beta}-G_{\mu\beta}\tilde{G}_{\alpha\beta}\right)\gamma_{\mu}\gamma_{5}\},\nonumber \end{eqnarray} where $\tilde{G}_{\alpha\beta}=\frac{1}{2}\varepsilon_{\alpha\beta\mu\nu}G_{\mu\nu}$, $\gamma_{5}=-\frac{i}{4}\varepsilon_{\alpha\beta\mu\nu}\gamma_{\alpha}\gamma_{\beta}\gamma_{\mu}\gamma_{\nu}$ and $J_{\mu}=D_{\nu}G_{\mu\nu}=g\underset{uds}{\sum}\overline{\psi}\gamma_{\mu}T^{a}\psi T^{a}$. Eq.\eqref{eq:9} can also be found in \cite{key-16} and \cite{key-15}, but the last term of \eqref{eq:9} is missing in \cite{key-16} and is not consistent with \cite{key-15}. We use \eqref{eq:8} rather than \eqref{eq:9} in practical calculations, since \eqref{eq:8} is more convenient for automated (program) calculations. The gluon contributions from the calculations of the quark loops (the corresponding Feynman diagrams are depicted in Figure~\ref{fig:2}) are listed in Appendix A.

\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{fig1-g.eps} \caption{\label{fig:2}Feynman diagrams of dimension-8 gluon contributions.} \end{figure}

\begin{table}[htbp] \caption{\label{tab:1}The independent $d=8$ two-quark condensates and coefficients. 
$(\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma})_{-}$ and $(\gamma_{\mu}\gamma_{\rho}\gamma_{\sigma})_{-}$ are totally anti-symmetric tensors.} \begin{minipage}{0.5\textwidth} \begin{ruledtabular} \begin{tabular}{cccccc} j & $Q_{j}$ & $C_{j}^{V}$ & $D_{j}^{V}$ & $C_{j}^{S}$ & $D_{j}^{S}$\\ \hline 1 & $-ig^{2}\bar{q}[\slashed DG_{\mu\nu},G_{\rho\sigma}](\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma})_{-}q$ & $\frac{1}{72}$ & $\frac{1}{96}$ & $\frac{1}{24}$ & $\frac{5}{288}$\\ 2 & $-ig^{2}\bar{q}[\slashed DG_{\mu\nu},G_{\mu\nu}]q$ & $-\frac{1}{36}$ & $-\frac{1}{48}$ & $-\frac{1}{12}$ & $-\frac{5}{144}$\\ 3 & $-ig^{2}\bar{q}[G_{\mu\nu},G_{\rho\sigma}]D_{\nu}(\gamma_{\mu}\gamma_{\rho}\gamma_{\sigma})_{-}q$ & $\frac{1}{18}$ & $\frac{1}{24}$ & $\frac{1}{6}$ & $\frac{5}{72}$\\ 4 & $-ig^{2}\bar{q}\{G_{\mu\rho},G_{\mu\nu}\}\gamma_{\nu}D_{\rho}q$ & $-\frac{1}{9}$ & $\frac{1}{9}$ & $\frac{2}{3}$ & $\frac{7}{36}$\\ 5 & $-ig^{2}\bar{q}\{D_{\mu}G_{\nu\mu},G_{\rho\sigma}\}(\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma})_{-}q$ & $-\frac{1}{9}$ & $-\frac{5}{144}$ & $-\frac{1}{12}$ & $-\frac{1}{18}$\\ 6 & $-ig^{2}\bar{q}[D_{\mu}G_{\nu\mu},G_{\nu\alpha}]\gamma_{\alpha}q$ & $-\frac{1}{18}$ & $-\frac{1}{24}$ & $-\frac{1}{6}$ & $-\frac{5}{72}$\\ 7 & $-ig^{2}\bar{q}D^{2}D_{\nu}G_{\alpha\nu}\gamma_{\alpha}q$ & 0 & 0 & 0 & 0\\ \end{tabular} \end{ruledtabular} \end{minipage} \end{table} \begin{table}[htbp] \caption{\label{tab:2}The mixing coefficients in \eqref{eq:16} \cite{key-14}.} \begin{minipage}{0.5\textwidth} \begin{ruledtabular} \begin{tabular}{ccccccccccccc} j & 2 & 3 & 3 & 3 & 3 & 3 & 3 & 4 & 4 & 4 & 6 & 7\\ \hline i & 6 & 1 & 2 & 3 & 4 & 5 & 6 & 1 & 3 & 4 & 5 & 7\\ \hline $Z_{i}^{j}$ & -6 & 6 & -6 & -12 & 12 & -6 & -3 & -6 & 12 & 12 & 6 & -6\\ \end{tabular} \end{ruledtabular} \end{minipage} \end{table} The two-quark $d=8$ condensates in \eqref{eq:3} and \eqref{eq:6} can be expanded in the basis $\left\{ Q_{j}\right\} $ listed in Table~\ref{tab:1}, using the equations of motion and charge conjugation transformation and setting $m_{q}=0$. And we also list the expanding coefficients in Table~\ref{tab:1} and the mixing coefficients of quark condensates with gluon condensates in Table~\ref{tab:2} \cite{key-14}. 
After taking operator mixing into account, the corrective terms to the gluon contributions are obtained: \begin{eqnarray} \Pi_{c}^{G}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\sum_{j=1}^{7}\left(C_{j}^{V}+\epsilon D_{j}^{V}\right)\sum_{i=1}^{7}Z_{i}^{j}g^{n_{i}}O_{i}/\left(72\pi^{2}\omega\right)\frac{1}{q^{4}}\label{eq:16}\\ & + & q_{\mu}q_{\nu}\sum_{j=1}^{7}\left(C_{j}^{S}+\epsilon D_{j}^{S}\right)\sum_{i=1}^{7}Z_{i}^{j}g^{n_{i}}O_{i}/\left(72\pi^{2}\omega\right)\frac{1}{q^{4}}\nonumber \\ & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[\left(-\frac{5}{864\pi^{2}}+\frac{1}{72\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{1}+\left(-\frac{1}{288\pi^{2}}-\frac{1}{216\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{2}\nonumber \\ & + & \left(\frac{5}{432\pi^{2}}-\frac{1}{36\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{3}+\left(\frac{11}{432\pi^{2}}-\frac{1}{108\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{4}+\left(-\frac{1}{144\pi^{2}}-\frac{1}{108\pi^{2}}\frac{1}{\omega}\right)g^{3}O_{5}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\left(-\frac{1}{96\pi^{2}}-\frac{1}{24\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{1}+\left(-\frac{5}{864\pi^{2}}-\frac{1}{72\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{2}\nonumber \\ & + & \left(\frac{1}{48\pi^{2}}+\frac{1}{12\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{3}+\left(\frac{19}{432\pi^{2}}+\frac{5}{36\pi^{2}}\frac{1}{\omega}\right)g^{4}O_{4}+\left(-\frac{5}{432\pi^{2}}-\frac{1}{36\pi^{2}}\frac{1}{\omega}\right)g^{3}O_{5}]\frac{1}{q^{4}},\nonumber \end{eqnarray} where $\frac{1}{\omega}=\frac{1}{\epsilon}+\frac{1}{2}\ln4\pi-\frac{\gamma_E}{2}$; these $\frac{1}{\omega}$ poles cancel the IR divergences in \eqref{eq:13}, \eqref{eq:14} and \eqref{eq:15}. Thus the total dimension-8 contributions are the sum of \eqref{eq:16} and \eqref{eq:7}$-$\eqref{eq:15}: \begin{eqnarray} \Pi_{\mu\nu}^{d=8}(q^{2}) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[-\frac{1}{24}g^{3}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle \label{eq:17}\\ & + & \left(-\frac{1}{108\pi^{2}}+\frac{1}{144\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{1}+\left(\frac{1}{216\pi^{2}}-\frac{1}{432\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{2}\nonumber \\ & + & \left(-\frac{1}{108\pi^{2}}-\frac{1}{72\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{3}+\left(\frac{1}{54\pi^{2}}-\frac{1}{216\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{4}\nonumber \\ & + & \left(\frac{1}{864\pi^{2}}-\frac{1}{216\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{3}O_{5}+\frac{1}{288\pi^{2}}g^{2}O_{7}+\frac{1}{288\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[-\frac{11}{27}g^{3}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle \nonumber \\ & + & \left(\frac{7}{576\pi^{2}}-\frac{1}{48\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{1}+\left(-\frac{7}{516\pi^{2}}-\frac{1}{144\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{2}\nonumber \\ & + & \left(-\frac{13}{288\pi^{2}}+\frac{1}{24\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{3}+\left(-\frac{7}{288\pi^{2}}+\frac{5}{72\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{4}O_{4}\nonumber \\ & + & \left(\frac{11}{576\pi^{2}}-\frac{1}{72\pi^{2}}\ln\frac{-q^{2}}{\mu^{2}}\right)g^{3}O_{5}+\frac{1}{192\pi^{2}}g^{2}O_{7}]\frac{1}{q^{4}},\nonumber \end{eqnarray} where $\left\langle \overline{q}Gq\right\rangle =\left\langle \overline{q}\frac{\lambda^{a}}{2}G_{\mu\nu}^{a}\sigma_{\mu\nu}q\right\rangle$ and $\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]$. Notice that the quark condensates have been factorized in \eqref{eq:17} in order to conduct the sum rule analysis. 
As is well known, the factorization hypothesis may have large uncertainties, as observed in some other channels \cite{Bertlmann:1984ih,Bertlmann:1987ty,Launer:1983ib,Narison:1992ru,Narison:1995jr,Narison:2009vy}. Therefore we shall consider the possible violation of factorization for the quark condensates in the numerical analysis. As for the values of the $\left\langle G^{4}\right\rangle$ condensates $\left(O_{1}-O_{4}\right)$, one may also think of using factorization. However, for the reasons given in \cite{key-18}, the factorization hypothesis may not be reliable in the $\left\langle G^{4}\right\rangle$ case either. Therefore we choose to use a modified factorization proposed in \cite{key-19} and supported in \cite{key-13}, which suggests that factorization is an overestimate and is based on two techniques: the factorization of quartic heavy quark condensates and the heavy quark expansion. In the framework of this modified factorization, $O_{1}-O_{4}$ can be expressed in terms of the condensate $\phi=\textrm{Tr}G_{\nu\mu}G_{\mu\rho}\textrm{Tr}G_{\nu\tau}G_{\tau\rho}$, which has been argued in \cite{key-19} to satisfy the factorization approximation reasonably well. Thus, after fitting $\phi$ using factorization, $O_{1}-O_{4}$ can be estimated as follows: \begin{eqnarray} g^{4}O_{1} & = & \frac{1}{12}\left\langle g^{2}G^{2}\right\rangle ^{2},\quad g^{4}O_{2}=-\frac{5}{48}\left\langle g^{2}G^{2}\right\rangle ^{2}+2g^{4}\phi,\label{eq:21}\\ g^{4}O_{3} & = & g^{4}O_{4}=-\frac{1}{192}\left\langle g^{2}G^{2}\right\rangle ^{2}+\frac{1}{2}g^{4}\phi.\nonumber \end{eqnarray} With regard to the other $d=8$ gluon condensates, a scale $M^{2}\approx 0.3\,\textrm{GeV}^{2}$, which characterizes the average off-shellness of the vacuum gluons and quarks, is estimated in \cite{key-13,key-20}: \begin{equation} g^{3}O_{5}=-\frac{3}{2}g^{4}\left\langle \bar{q}q\right\rangle ^{2}M^{2},\quad g^{2}O_{7}=-\frac{4}{3}g^{4}\left\langle \bar{q}q\right\rangle ^{2}M^{2},\quad g^{3}O_{8}=\left\langle g^{3}G^{3}\right\rangle M^{2},\label{eq:22}\end{equation} and we shall also consider the violation of factorization of $O_5$ and $O_7$ in the matching procedure.

\section{QCD Sum Rules for the $1^{-+}$ light hybrid meson} The $d\leqq6$ contributions to $\Pi_{v}(q^{2})$, including the NLO corrections to the perturbative term and to the $\left\langle \alpha_{s}G^{2}\right\rangle$ and $\alpha_{s}\left\langle \overline{q}q\right\rangle ^{2}$ terms, can be found in \cite{key-6-1,key-7-1,key-8-1,key-8-2,key-9,key-9-1}. 
$\Pi_{v}^{d\leqq6}(q^{2})$ can be written as \begin{equation} \Pi_v^{d\leqq6}(q^2)=a_{11} q^4\ln\frac{-q^2}{\mu^2}+a_{12} q^4\ln^2\frac{-q^2}{\mu^2}+b_{11} \ln\frac{-q^2}{\mu^2}+b_{12}\ln^2\frac{-q^2}{\mu^2}+c_{11}\frac{1}{-q^2}+c_{12}\frac{1}{-q^2}\ln\frac{-q^2}{\mu^2} \end{equation} with \begin{gather*} a_{11}=-\frac{\alpha_s(\mu)}{240\pi^3}\left(1+\frac{1301}{240}\frac{\alpha_s(\mu)}{\pi}\right), ~~ a_{12}=\frac{\alpha_s(\mu)}{240\pi^3}\frac{17}{72}\frac{\alpha_s(\mu)}{\pi},\\ b_{11}=-\frac{1}{36\pi}\langle \alpha_s G^2\rangle \left(1-\frac{145}{72}\frac{\alpha_s(\mu)}{\pi}\right)- \frac{2}{9}\frac{\alpha_s(\mu)}{\pi} \langle m_q\bar qq\rangle,\\ b_{12}=-\frac{1}{36\pi}\langle \alpha_s G^2\rangle \frac{8}{9} \frac{\alpha_s(\mu)}{\pi},\\ c_{11}=-\frac{4\pi}{9}k_1 \alpha_s \langle \bar qq\rangle^2\left( 1+\frac{1}{108}\frac{\alpha_s(\mu)}{\pi}\right) -\frac{1}{192\pi^2} \langle g^3 G^3\rangle,\\ c_{12}=-\frac{4\pi}{9}k_1\alpha_s\langle \bar qq\rangle^2\frac{47}{72}\frac{\alpha_s(\mu)}{\pi}, \end{gather*} where $\alpha_s(\mu)=4\pi/(9\ln(\mu^2/\Lambda^2_{\textrm{QCD}}))$ is the running coupling constant for three flavors, and $k_1$ indicates the deviation from vacuum saturation of the $d=6$ quark condensates. In addition, $\Pi_v^{d=8}(q^2)$ can be obtained from \eqref{eq:17}, \eqref{eq:21} and \eqref{eq:22}: \begin{equation} \Pi_v^{d=8}(q^2)=d_{11}\frac{1}{q^4}+d_{12}\frac{1}{q^4}\ln\frac{-q^2}{\mu^2} \end{equation} with \begin{gather*} d_{11}=-\frac{\pi}{6}k_2\alpha_s(\mu)\langle\bar qq\rangle \langle g\bar qGq\rangle-\frac{1}{216}\langle \alpha_s G^2\rangle^2-\frac{11}{108}k_2\alpha_s(\mu)\cdot \alpha_s\langle\bar qq\rangle^2\cdot M^2+\frac{1}{288\pi^2}\langle g^3G^3\rangle M^2,\\ d_{12}=-\frac{1}{648}\langle \alpha_s G^2\rangle^2+\frac{1}{9}k_2\alpha_s(\mu)\cdot \alpha_s\langle \bar qq\rangle^2\cdot M^2, \end{gather*} where $k_2$ indicates the deviation from vacuum saturation of the $d=8$ condensates.

The Borel transformation of $\Pi^{\textrm{OPE}}_{v}(q^2)$ can be written as \begin{equation} \label{eq:ope} \begin{split} \Pi^{\textrm{OPE}}_v(\tau)\equiv&\frac{1}{\tau}\hat B_\tau\Pi^{\textrm{OPE}}_{v}(q^2)= a_{11}\frac{-2}{\tau^3} +a_{12}\frac{2}{\tau^3} (2 \gamma_E-3+2\ln(\tau \mu^2))+b_{11}\frac{-1}{\tau}+b_{12}\frac{2}{\tau}(\gamma_E+\ln(\tau\mu^2))\\ &+c_{11}+c_{12}(-\gamma_E-\ln(\tau\mu^2)) +d_{11}\tau+d_{12}\tau(1-\gamma_E-\ln(\tau\mu^2)). \end{split} \end{equation} By using the single narrow resonance spectral density ansatz $\textrm{Im}\Pi_v^{\textrm{phen}}(s)=\pi f_H^2 m_H^4 \delta(s-m_H^2)+\textrm{Im}\Pi_v^{\textrm{OPE}}(s)\theta(s-s_0)$, where $s_0$ is the continuum threshold and $f_H$ and $m_H$ denote the coupling of the hadron to the current and the mass of the hadron, respectively, we can obtain the phenomenological representation $\Pi_v^{\textrm{phen}}(\tau,s_0,f_H,m_H)$ via the dispersion relation: \begin{equation} \label{eq:phen} \Pi_v^{\textrm{phen}}(\tau,s_0,f_H,m_H)=\frac{1}{\pi}\int_0^\infty{\rm Im}\Pi_v^{\textrm{phen}}(s)e^{-s\tau}ds. \end{equation} Then the master equation for QCDSR can be written as \begin{equation} \label{eq:qcdsr} \Pi_v^{\textrm{OPE}}(\tau)=\Pi_v^{\textrm{phen}}(\tau,s_0,f_H,m_H). \end{equation} The physical properties of the relevant hadron, i.e., $m_H$, $f_H$ and $s_0$, should satisfy Eq.\eqref{eq:qcdsr}. In order to present the influence of the $d=8$ contributions, we will conduct the sum rule analysis in both the $d\leqq6$ and $d\leqq8$ cases. 
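For concreteness, the following minimal Python sketch (our own illustration, not code from this work) evaluates Eq.\eqref{eq:ope} numerically. The condensate-dependent coefficients are taken as inputs; as an example, $d_{11}$ and $d_{12}$ are assembled from the expressions above using the Set I central values of Table~\ref{tab:input} below, while $k_2$ and the choice of $\langle g^3G^3\rangle$ are left as inputs:
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.57721566490153286

def alpha_s(mu2, lam2=0.353**2):
    # three-flavour one-loop coupling used in the text
    return 4.0 * np.pi / (9.0 * np.log(mu2 / lam2))

def pi_ope(tau, mu2, c):
    """Eq. (eq:ope); c = (a11, a12, b11, b12, c11, c12, d11, d12)."""
    a11, a12, b11, b12, c11, c12, d11, d12 = c
    L, g = np.log(tau * mu2), EULER_GAMMA
    return (-2.0 * a11 / tau**3
            + 2.0 * a12 * (2.0 * g - 3.0 + 2.0 * L) / tau**3
            - b11 / tau
            + 2.0 * b12 * (g + L) / tau
            + c11
            - c12 * (g + L)
            + d11 * tau
            + d12 * (1.0 - g - L) * tau)

def d_coeffs(mu2, k2=1.0, asG2=0.07, as_qq2=1.5e-4, M2=0.3,
             g3G3=8.2 * 0.07):
    # d11, d12 from the expressions above; Set I central values assumed
    a_s = alpha_s(mu2)
    qq = -np.sqrt(as_qq2 / a_s)       # <qbar q> < 0          [GeV^3]
    qGq = 0.8 * qq                    # <g qbar G q> = 0.8 GeV^2 <qbar q>
    d11 = (-np.pi / 6.0 * k2 * a_s * qq * qGq
           - asG2**2 / 216.0
           - 11.0 / 108.0 * k2 * a_s * as_qq2 * M2
           + g3G3 * M2 / (288.0 * np.pi**2))
    d12 = -asG2**2 / 648.0 + k2 / 9.0 * a_s * as_qq2 * M2
    return d11, d12
\end{verbatim}
After the RG improvement $\mu^2\to1/\tau$ discussed below, $\ln(\tau\mu^2)$ vanishes and all coefficients are to be re-evaluated at $\mu^2=1/\tau$ at each Borel point.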
Before doing so, we should clarify our criteria for establishing the sum rule window in which the mass prediction is reliable. On the OPE side, we wish the Borel parameter $\tau$ to be as small as possible, so that the power series converges as quickly as possible. On the hadron spectrum side, our wish is the opposite, because a larger $\tau$ better suppresses the contributions of the excited states and continuum. The common procedure, without considering the higher power contributions, is usually as follows: 1. keep the highest dimensional contributions (HDC, normally the dimension-6 contributions) at no more than 10\% (or 15\%) of the total OPE contributions to ensure the convergence of the OPE, which gives the upper bound of $\tau$; 2. make sure that the contributions from the continuum are under 50\% of the total contributions, which ensures the validity of the narrow resonance ansatz and gives the lower bound of $\tau$.

In our case, if we require that the dimension-8 contributions be less than 15\%, we are choosing a window with a larger upper bound compared with the $d\leqq6$ case. This choice enhances the suppression of excited states and continuum, but the convergence of the OPE gets worse, which increases the uncertainties on the OPE side. On the other hand, if we still require that the dimension-6 contributions be less than 15\%, the uncertainties from the truncation of the OPE are indeed decreased (because the dimension-8 contributions are now taken into account), but the validity of the narrow resonance ansatz is not improved. Evidently, a balance between the two should be kept. Our choice is to require both $1\%<$ $d=8$ contributions $<5\%$ and $20\%<$ $d=6$ contributions $<35\%$ (correspondingly, the perturbative and $d<6$ contributions total $120\%$--$140\%$, because the $d=6$ and $d=8$ contributions enter with minus signs), which ensures that the OPE series converges in a proper trend and that a larger upper bound of $\tau$ is obtained compared with the $d\leqq6$ case; thus the uncertainties from both sides of the master equation are reduced.

In the original SVZ sum rules, the continuum threshold $s_{0}$ cannot be rigorously constrained. To overcome this shortcoming and make our conclusion more reliable, we use a weighted-least-squares method following Leinweber \cite{Leinweber:1995fn} to match the two sides of Eq.\eqref{eq:qcdsr} in the sum rule window. By randomly generating 200 sets of Gaussian-distributed phenomenological input parameters with given uncertainties (10\% uncertainties, which are typical in QCDSR) at $\tau_j=\tau_\textrm{min}+(\tau_\textrm{max}-\tau_\textrm{min})\times(j-1)/(n_B-1)$, where $n_B=21$, we can estimate the standard deviation $\sigma_{\textrm{OPE}}(\tau_j)$ of $\Pi_v^{\textrm{OPE}}(\tau_j)$. Then, the phenomenological output parameters $s_0$, $f_H$ and $m_H$ can be obtained by minimizing \begin{equation} \chi^2=\sum_{j=1}^{n_B}\frac{(\Pi^{\textrm{OPE}}(\tau_j)-\Pi^{\textrm{phen}}(\tau_j,s_0,f_H,m_H))^2}{\sigma_{\textrm{OPE}}^2(\tau_j)}. \end{equation} We use two sets of parameters as the central values of the inputs (see Table~\ref{tab:input}) to conduct the matching procedures respectively. The values in Set I are from a recent review of QCD sum rules \cite{key-21}. We choose this set of values to avoid subjective factors in choosing the inputs. We also notice that the value of $g^3\langle G^3\rangle$ in \cite{key-21} is different from the previous one used in \cite{key-9-1,key-10-1,key-11-1}. 
This value changes from $1.2 \,\textrm{GeV}^2\langle \alpha_{s}G^2\rangle$ (from dilute gas instantons \cite{Novikov:1981xi} and lattice calculations \cite{D'Elia:1997ne}) to $8.2 \,\textrm{GeV}^2\langle\alpha_{s}G^2\rangle$ (from charmonium systems \cite{key-13}), which largely affects the mass predictions. To make our conclusions more reliable and to allow a comparison between the $d\leqq6$ results of this work and those from previous analyses, we keep the small value of $g^3\langle G^3\rangle$ in Set II. As in our previous paper \cite{key-11-1}, we generate 2000 sets of Gaussian-distributed input parameters with 10\% uncertainties and minimize $\chi^2$ for each set to obtain a set of phenomenological output parameters; once this procedure is finished, we can estimate the uncertainties of $s_0$, $f_H$ and $m_H$. \begin{table}[htbp] \caption{\label{tab:input} Different input phenomenological parameters (at scale $\mu_0=1$\,GeV).} \begin{ruledtabular} \begin{tabular}{cccccccc} & $\Lambda_{\textrm{QCD}}$ & $\langle \alpha_s G^2\rangle$ & $m_q$ & $\langle g^3 G^3\rangle$ & $\alpha_s\langle\bar qq\rangle^2$ & $\langle g\bar qGq\rangle$\\ \hline Set I & $0.353\,\textrm{GeV}$ & $0.07\,\textrm{GeV}^4$& $0.007\,\textrm{GeV}$ &$8.2\,\textrm{GeV}^2 \langle \alpha_s G^2\rangle$ & $1.5\times10^{-4}\,\textrm{GeV}^4$ & $0.8\,\textrm{GeV}^2 \langle \bar qq\rangle$\\ Set II & $0.353\,\textrm{GeV}$ & $0.07\,\textrm{GeV}^4$& $0.007\,\textrm{GeV}$ &$1.2\,\textrm{GeV}^2 \langle \alpha_s G^2\rangle$ & $1.5\times10^{-4}\,\textrm{GeV}^4$ & $0.8\,\textrm{GeV}^2 \langle \bar qq\rangle$\\ \end{tabular} \end{ruledtabular} \end{table} Finally, before proceeding with the numerical calculations, a renormalization-group (RG) improvement of the sum rules, i.e., the substitution $\mu^2\to1/\tau$ in Eq.~\eqref{eq:qcdsr}, is needed \cite{Narison:1981ts}. In addition, the anomalous dimensions of the condensates $\langle g^3G^3\rangle$ and $\langle \bar qq\rangle \langle g\bar qGq\rangle$ should also be implemented, by multiplying $\langle g^3G^3\rangle$ and $\langle \bar qq\rangle \langle g\bar qGq\rangle$ by the factors $L(\mu_0)^{-23/27}$ and $L(\mu_0)^{10/27}$, respectively, where $L(\mu_0)=[\ln(1/(\tau\Lambda^2_{\textrm{QCD}}))/\ln(\mu^2_0/\Lambda^2_{\textrm{QCD}})]$ and $\mu_0$ is the renormalization scale for the condensates \cite{key-1-1,book2}. The coupling constant $f_H$ should also be multiplied by a factor $L(m)^{-32/81}$; $f_H$ then takes its value at the hybrid mass shell. In this paper, we neglect the anomalous dimensions of the operators $O_1$--$O_8$, which have not yet been calculated and are likely to have only a small effect on the mass prediction. Our matching results with the input parameters of Set I and Set II can be seen in Appendix B. We consider violation of factorization by different factors (up to 3 for the dimension-6 condensates \cite{Bertlmann:1984ih,Bertlmann:1987ty,Launer:1983ib,Narison:1992ru,Narison:1995jr,Narison:2009vy}, and up to 5 for the dimension-8 condensates \cite{Narison:2004vz}). The upper bounds of the sum rule windows in each table are obtained from different demands on $|$HDC$|$/OPE. The matching results, including the medians and the asymmetric standard deviations from the medians for $s_0$, $m_H$ and $f^2_H$, are reported. By inputting Gaussian-distributed input parameters with 10\% uncertainties, we obtain Gaussian-like distributions for $s_0$, $m_H$ and $f_H^2$ with uncertainties below 10\%; this implies that the matching results are very stable under variation of the input parameters.
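For concreteness, the matching procedure just described can be sketched in Python as follows (our own illustrative code; the callables \texttt{pi\_ope\_fn} and \texttt{pi\_phen\_fn}, standing for the two sides of Eq.~\eqref{eq:qcdsr}, and the starting values of the fit are hypothetical placeholders):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def monte_carlo_match(central, tau_grid, pi_ope_fn, pi_phen_fn,
                      n_draws=2000, rel_unc=0.10, seed=0):
    """Weighted-least-squares matching of Eq. (eq:qcdsr) with
    Gaussian-distributed inputs (10% relative uncertainties).
    central is an array of central input parameter values."""
    rng = np.random.default_rng(seed)

    # Standard deviation of Pi_OPE(tau_j) from 200 input draws.
    draws = rng.normal(central, rel_unc * np.abs(central),
                       (200, central.size))
    sigma_ope = np.std([pi_ope_fn(tau_grid, d) for d in draws], axis=0)

    def fit_one(inputs):
        ope = pi_ope_fn(tau_grid, inputs)
        chi2 = lambda p: np.sum(
            (ope - pi_phen_fn(tau_grid, *p, inputs))**2 / sigma_ope**2)
        # p = (s0, fH^2, mH); the starting values are placeholders.
        return minimize(chi2, x0=[8.0, 6e-4, 2.0],
                        method="Nelder-Mead").x

    # One fit per input draw; medians and spreads of the fitted
    # parameters give the uncertainties quoted in Appendix B.
    fits = np.array([fit_one(d) for d in
                     rng.normal(central, rel_unc * np.abs(central),
                                (n_draws, central.size))])
    return np.median(fits, axis=0), np.std(fits, axis=0)
\end{verbatim}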
Following our criteria above for establishing the window, the phenomenological outputs in the fourth column of each table are the most reliable ones (optimal windows) for each case. In fact, we can see that the predictions are not very sensitive to the variation of the range of the window. All output parameters decrease slightly for stronger constraints on the contributions from the HDC. In addition, we also list the results deduced from the $d\leqq6$ contributions in the optimal windows of the $d\leqq8$ cases, to show how the sum rules vary in these regions after the dimension-8 contributions are taken into account. Taking into account the possible violation of factorization and the different values of $\langle g^3G^3\rangle$, we obtain a mass range of 1.88--2.60\,GeV from the optimal windows. Furthermore, we shall also consider effects of a tachyonic gluon mass \cite{Chetyrkin:1998yr,Narison:2001ix,Narison:2009ag} beyond the original OPE, as in \cite{key-10-1}. The lowest order correction due to this effect can be found in \cite{key-10-1}; it leads to a decrease in the hybrid mass predictions. Taking this effect into account, the lower bound of the mass range decreases further, and we therefore arrive at a quite conservative range for the predicted mass, i.e., 1.72--2.60\,GeV, which covers $\pi_{1}(2015)$ and hardly favors $\pi_{1}(1400)$ or $\pi_{1}(1600)$ as hybrids. As a supplement to our analysis, we also consider, as above, a conservative mass range in the $d\leqq6$ case. With the small $\langle g^3G^3\rangle$ value used in \cite{key-9-1,key-10-1}, the range is 1.55--2.29\,GeV, which is consistent with previous predictions within errors and covers $\pi_{1}(1600)$. In the large $\langle g^3G^3\rangle$ case, the predicted range is 1.84--2.46\,GeV. Notice that even in this case the hybrid assignment of $\pi_{1}(1600)$ can hardly be favored. More details of the weighted-least-squares matching method can be found in our previous work on the $1^{-+}$ light hybrid meson \cite{key-11-1}. In that work, we concentrated on the sum rule analysis based on the matching procedure, especially the uncertainty analysis. However, the dimension-8 coefficients used there were not complete (only the factorized quark condensates of \cite{key-6-1} were included). Moreover, we followed our earlier works \cite{key-9,key-9-1} in choosing the inputs and neglected the violation of the saturation hypothesis. Finally, the sum rule window there was established simply by keeping HDC $<$ 10\%, as in the common procedure, without an explicit consideration of the convergence of the OPE. All of this leads to the discrepancy between the predictions. \section{Summary} We have calculated the dimension-8 coefficients of the two-point correlator of the current $g\bar{q}(x)\gamma_{\nu}iG_{\mu\nu}(x)q(x)$. We find that the inclusion of the dimension-8 condensate contributions in the QCDSR analysis increases the predicted mass, as does the violation of factorization of the higher dimensional condensates. Besides, the variation of the value of $\langle g^3G^3\rangle$ also has the effect of increasing the mass prediction. All these new effects therefore suggest that the $1^{-+}$ light hybrid meson may have a larger mass than previous QCDSR predictions indicated. From our analysis, the conservative range of the mass is 1.72--2.60\,GeV, which covers $\pi_{1}(2015)$ and disfavors the hybrid explanations for $\pi_{1}(1600)$ and $\pi_{1}(1400)$. One can also consider the central value of 2.16\,GeV in this range as a very crude estimate of the mass.
As for the effect of the dimension-8 contributions on the determination of the $1^{-+}$ mass, it is hard to draw a definite conclusion, owing to the uncertainties from the violation of factorization. From the data in Appendix B, we find that neglecting the $d=8$ condensate contributions would lead to a 4\%--9\% underestimation in the case of the $1^{-+}$ light hybrid state. \begin{acknowledgments} This work is supported by the NSFC under grants 11175153, 11205093, and 11347020, and by the K. C. Wong Magna Fund in Ningbo University. \end{acknowledgments} \clearpage \begin{appendix} \section*{Appendix A: Results of Calculations of Feynman Diagrams} We list in this appendix the results of the calculations of the Feynman diagrams in Figure~\ref{fig:1} and Figure~\ref{fig:2}. \begin{eqnarray} \pi_{\mu\nu}^{\textrm{I}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[\left(-\frac{1}{18}-\frac{\epsilon}{24}\right)ig^{2}\left\langle \bar{q}D_{\mu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\nu}\gamma_{\beta}q\right\rangle \label{eq:3}\\ & + & \left(-\frac{2}{9}+\frac{\epsilon}{36}\right)ig^{2}\left\langle \bar{q}D_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle +\left(-\frac{1}{18}-\frac{\epsilon}{24}\right)ig^{2}\left\langle \bar{q}D_{\nu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\left(-\frac{1}{6}-\frac{5\epsilon}{72}\right)ig^{2}\left\langle \bar{q}D_{\mu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\nu}\gamma_{\beta}q\right\rangle +\left(\frac{1}{3}+\frac{\epsilon}{18}\right)ig^{2}\left\langle \bar{q}D_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle \nonumber \\ & + & \left(-\frac{1}{6}-\frac{5\epsilon}{72}\right)ig^{2}\left\langle \bar{q}D_{\nu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\textrm{II}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[\frac{5}{18}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\mu\beta}\gamma_{\mu}\gamma_{\alpha}\gamma_{\beta}q\right\rangle \label{eq:4}\\ & - & \frac{1}{18}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\mu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle -\frac{2}{9}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\alpha\beta}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[-\frac{1}{6}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\mu\beta}\gamma_{\mu}\gamma_{\alpha}\gamma_{\beta}q\right\rangle \nonumber \\ & + & \frac{5}{6}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\mu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle -\frac{2}{3}ig^{3}\left\langle \bar{q}\gamma_{\alpha}T^{a}q\bar{q}T^{a}G_{\alpha\beta}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\textrm{III}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\left(\frac{1}{12}g^{3}\left\langle f^{abc}G_{\alpha\beta}^{a}\bar{q}\gamma_{\alpha}T^{b}q\bar{q}T^{c}\gamma_{\beta}q\right\rangle \right)\frac{1}{q^{4}}\\ & + & q_{\mu}q_{\nu}\left(-\frac{3}{4}g^{3}\left\langle f^{abc}G_{\alpha\beta}^{a}\bar{q}\gamma_{\alpha}T^{b}q\bar{q}T^{c}\gamma_{\beta}q\right\rangle \right)\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\textrm{{IV}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[\frac{1}{18}g^{3}\left\langle
\bar{q}G_{\mu\nu}T^{a}\sigma_{\mu\nu}\gamma_{\alpha}q\bar{q}T^{a}\gamma_{\alpha}q\right\rangle \label{eq:5}\\ & + & \frac{1}{36}ig^{3}\left\langle \bar{q}\gamma_{\beta}T^{a}q\bar{q}G_{\alpha\beta}T^{a}\gamma_{\alpha}q\right\rangle +\frac{1}{18}g^{3}\left\langle \bar{q}T^{a}G_{\mu\nu}\gamma_{\alpha}\sigma_{\mu\nu}q\bar{q}T^{a}\gamma_{\alpha}q\right\rangle \nonumber \\ & - & \frac{1}{36}ig^{3}\left\langle \bar{q}\gamma_{\beta}T^{a}q\bar{q}T^{a}G_{\alpha\beta}\gamma_{\alpha}q\right\rangle +\frac{2}{9}g^{2}\left\langle \bar{q}\overleftarrow{D}_{\alpha}T^{a}\gamma_{\beta}\overrightarrow{D}_{\alpha}q\bar{q}T^{a}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\frac{1}{4}g^{3}\left\langle \bar{q}G_{\mu\nu}T^{a}\sigma_{\mu\nu}\gamma_{\alpha}q\bar{q}T^{a}\gamma_{\alpha}q\right\rangle -\frac{1}{4}ig^{3}\left\langle \bar{q}\gamma_{\beta}T^{a}q\bar{q}G_{\alpha\beta}T^{a}\gamma_{\alpha}q\right\rangle \nonumber \\ & + & \frac{1}{4}g^{3}\left\langle \bar{q}T^{a}G_{\mu\nu}\gamma_{\alpha}\sigma_{\mu\nu}q\bar{q}T^{a}\gamma_{\alpha}q\right\rangle +\frac{1}{4}ig^{3}\left\langle \bar{q}\gamma_{\beta}T^{a}q\bar{q}T^{a}G_{\alpha\beta}\gamma_{\alpha}q\right\rangle \nonumber \\ & + & g^{2}\left\langle \bar{q}\overleftarrow{D}_{\alpha}T^{a}\gamma_{\beta}\overrightarrow{D}_{\alpha}q\bar{q}T^{a}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{V}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})[\left(-\frac{2}{9}+\frac{\epsilon}{36}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle \label{eq:6}\\ & + & \left(-\frac{1}{18}-\frac{\epsilon}{24}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\mu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\nu}\gamma_{\beta}q\right\rangle +\left(-\frac{1}{18}-\frac{\epsilon}{24}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\nu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\left(\frac{1}{3}+\frac{\epsilon}{18}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle +\left(-\frac{1}{6}-\frac{5\epsilon}{72}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\mu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\nu}\gamma_{\beta}q\right\rangle \nonumber \\ & + & \left(-\frac{1}{6}-\frac{5\epsilon}{72}\right)ig^{2}\left\langle \bar{q}\overleftarrow{D}_{\nu}G_{\mu\alpha}G_{\nu\beta}\gamma_{\alpha}\gamma_{\mu}\gamma_{\beta}q\right\rangle ]\frac{1}{q^{4}},\nonumber \end{eqnarray} where $\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]$, and the total result can be factorized as follows: \begin{equation} \Pi_{q}^{d=8}(q^{2})=(q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})(-\frac{1}{24}g^{3}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle \frac{1}{q^{4}})+q_{\mu}q_{\nu}(-\frac{11}{27}g^{3}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle \frac{1}{q^{4}}),\label{eq:7}\end{equation} which is consistent with the factorized form in \cite{key-6-1} (the condensate $\left\langle \bar{q}D_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle $, which cannot be factorized, is set to $-\frac{7ig}{72}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle $ based on the formula $\left\langle
\bar{q}D_{\rho}G_{\mu\alpha}G_{\mu\beta}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle =-\frac{7ig}{72}\left\langle \bar{q}q\right\rangle \left\langle \bar{q}Gq\right\rangle -\frac{1}{4}\left\langle \bar{q}d^{abc}D_{\mu}(G_{\alpha\rho}^{a}G_{\mu\beta}^{b})T^{c}\gamma_{\alpha}\gamma_{\rho}\gamma_{\beta}q\right\rangle $+gluon condensates ). \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{VI}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:10}\\ & \times & [-\frac{1}{144\pi^{2}}g^{4}O_{1}-\frac{1}{144\pi^{2}}g^{4}O_{2}-\frac{1}{144\pi^{2}}g^{4}O_{3}+\frac{1}{16\pi^{2}}g^{4}O_{4}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[-\frac{1}{48\pi^{2}}g^{4}O_{1}-\frac{1}{48\pi^{2}}g^{4}O_{2}-\frac{1}{48\pi^{2}}g^{4}O_{3}+\frac{1}{16\pi^{2}}g^{4}O_{4}]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{VII}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:11}\\ & \times & [-\frac{1}{288\pi^{2}}g^{4}O_{1}+\frac{1}{288\pi^{2}}g^{4}O_{2}-\frac{1}{144\pi^{2}}g^{4}O_{3}+\frac{1}{144\pi^{2}}g^{4}O_{4}+\frac{7}{576\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\frac{1}{96\pi^{2}}g^{4}O_{1}-\frac{1}{96\pi^{2}}g^{4}O_{2}-\frac{1}{16\pi^{2}}g^{4}O_{3}+\frac{1}{16\pi^{2}}g^{4}O_{4}+\frac{1}{192\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{VIII}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:12}\\ & \times & [-\frac{1}{288\pi^{2}}g^{4}O_{1}+\frac{1}{288\pi^{2}}g^{4}O_{2}+\frac{1}{54\pi^{2}}g^{4}O_{3}-\frac{1}{54\pi^{2}}g^{4}O_{4}\nonumber \\ & - & \frac{1}{864\pi^{2}}g^{3}O_{5}+\frac{1}{288\pi^{2}}g^{2}O_{7}-\frac{5}{864\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\frac{1}{576\pi^{2}}g^{4}O_{1}-\frac{1}{576\pi^{2}}g^{4}O_{2}+\frac{7}{288\pi^{2}}g^{4}O_{3}-\frac{7}{288\pi^{2}}g^{4}O_{4}\nonumber \\ & - & \frac{1}{576\pi^{2}}g^{3}O_{5}+\frac{1}{192\pi^{2}}g^{2}O_{7}-\frac{1}{288\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{IX}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:13}\\ & \times & [\left(-\frac{1}{72\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{11}{864\pi^{2}}\right)g^{4}O_{1}+\left(\frac{1}{216\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{5}{864\pi^{2}}\right)g^{4}O_{2}\nonumber \\ & + & \left(\frac{5}{72\pi^{2}}\frac{1}{\hat{\epsilon}}-\frac{19}{864\pi^{2}}\right)g^{4}O_{3}+\left(-\frac{7}{216\pi^{2}}\frac{1}{\hat{\epsilon}}-\frac{53}{864\pi^{2}}\right)g^{4}O_{4}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[\left(\frac{1}{24\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{13}{288\pi^{2}}\right)g^{4}O_{1}+\left(\frac{1}{72\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{11}{864\pi^{2}}\right)g^{4}O_{2}\nonumber \\ & + & \left(-\frac{1}{12\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{1}{72\pi^{2}}\right)g^{4}O_{3}+\left(-\frac{5}{36\pi^{2}}\frac{1}{\hat{\epsilon}}-\frac{41}{216\pi^{2}}\right)g^{4}O_{4}]\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} \pi_{\mu\nu}^{\text{\mbox{X}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:14}\\ & \times & [\left(\frac{1}{72\pi^{2}}\frac{1}{\hat{\epsilon}}-\frac{1}{288\pi^{2}}\right)g^{4}O_{3}+\left(-\frac{1}{72\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{1}{288\pi^{2}}\right)g^{4}O_{4}+\left(-\frac{1}{144\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{1}{576\pi^{2}}\right)g^{3}O_{8}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}\left(\frac{1}{48\pi^{2}}g^{4}O_{3}-\frac{1}{48\pi^{2}}g^{4}O_{4}-\frac{1}{96\pi^{2}}g^{3}O_{8}\right)\frac{1}{q^{4}},\nonumber \end{eqnarray} \begin{eqnarray} 
\pi_{\mu\nu}^{\text{\mbox{XI}}}(q) & = & (q_{\mu}q_{\nu}-q^{2}g_{\mu\nu})\label{eq:15}\\ & \times & [-\frac{1}{432\pi^{2}}g^{4}O_{1}+\frac{1}{432\pi^{2}}g^{4}O_{2}-\frac{1}{18\pi^{2}}\frac{1}{\hat{\epsilon}}g^{4}O_{3}+\frac{1}{18\pi^{2}}\frac{1}{\hat{\epsilon}}g^{4}O_{4}\nonumber \\ & + & \left(\frac{1}{108\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{1}{108\pi^{2}}\right)g^{3}O_{5}+\left(\frac{1}{144\pi^{2}}\frac{1}{\hat{\epsilon}}-\frac{1}{216\pi^{2}}\right)g^{3}O_{8}]\frac{1}{q^{4}}\nonumber \\ & + & q_{\mu}q_{\nu}[-\frac{1}{72\pi^{2}}g^{4}O_{1}+\frac{1}{72\pi^{2}}g^{4}O_{2}-\frac{1}{24\pi^{2}}g^{4}O_{3}\nonumber \\ & + & \frac{1}{24\pi^{2}}g^{4}O_{4}+\left(\frac{1}{36\pi^{2}}\frac{1}{\hat{\epsilon}}+\frac{7}{216\pi^{2}}\right)g^{3}O_{5}+\frac{5}{576\pi^{2}}g^{3}O_{8}]\frac{1}{q^{4}},\nonumber \end{eqnarray} where $O_{1}=\textrm{Tr}\left(G_{\mu\nu}G_{\mu\nu}G_{\alpha\beta}G_{\alpha\beta}\right)$, $O_{2}=\textrm{Tr}\left(G_{\mu\nu}G_{\alpha\beta}G_{\mu\nu}G_{\alpha\beta}\right)$, $O_{3}=\textrm{Tr}\left(G_{\mu\nu}G_{\nu\alpha}G_{\alpha\beta}G_{\beta\mu}\right)$, $O_{4}=\textrm{Tr}\left(G_{\mu\nu}G_{\alpha\beta}G_{\nu\alpha}G_{\beta\mu}\right)$, $O_{5}=f_{abc}G_{\mu\nu}^{a}j_{\mu}^{b}j_{\nu}^{c}$, $O_{6}=f_{abc}G_{\mu\nu}^{a}j_{\lambda}^{b}D_{\lambda}G_{\mu\nu}^{c}$, $O_{7}=j_{\mu}^{a}D^{2}j_{\mu}^{a}$, $O_{8}=f^{abc}G_{\mu\nu}^{a}G_{\nu\lambda}^{b}D^{2}G_{\lambda\mu}^{c}$, and $\frac{1}{\hat{\epsilon}}=\frac{1}{\epsilon}-\frac{1}{2}\ln\frac{-q^{2}}{\mu^{2}}+\frac{1}{2}\ln4\pi-\frac{\gamma_E}{2}$. \section*{Appendix B: Results of Numerical Analysis} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$5\% & $<$1\% & $<$15\% (d$\leqq$6)& - (d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.24,0.89] & [0.26,0.81] & [0.32,0.60] & [0.36, 0.45]& [0.32,0.60]\\ \hline $s_0$/GeV$^2$ & $10.73^{+0.78}_{-0.65}$ & $9.70^{+0.63}_{-0.55}$ & $7.92^{+0.39}_{-0.37}$ & $7.01^{+0.35}_{-0.34}$ & $7.64^{+0.45}_{-0.42}$ \\ \hline $m_H$/GeV & $2.34^{+0.06}_{-0.05}$ & $2.29^{+0.05}_{-0.05}$ & $2.16^{+0.05}_{-0.04}$ & $2.08^{+0.06}_{-0.06}$ & $2.13^{+0.05}_{-0.05}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.80^{+0.05}_{-0.05}$ & $0.74^{+0.05}_{-0.05}$ & $0.63^{+0.04}_{-0.04}$ & $0.57^{+0.04}_{-0.03}$ & $0.62^{+0.04}_{-0.04}$\\ \hline \end{tabular} \caption{\label{tab:result1a} Matching results with input parameters in Set I (10\% uncertainties for input phenomenological parameters, $k_1=k_2=1$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$5\% & $<$2\% & $<$15\% (d$\leqq$6)& - (d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.23,0.69] & [0.25,0.61] & [0.28,0.51] & [0.32, 0.38]& [0.28,0.51]\\ \hline $s_0$/GeV$^2$ & $11.47^{+0.88}_{-0.74}$ & $10.40^{+0.69}_{-0.61}$ & $9.38^{+0.52}_{-0.48}$ & $8.10^{+0.43}_{-0.40}$ & $8.75^{+0.56}_{-0.50}$ \\ \hline $m_H$/GeV & $2.50^{+0.07}_{-0.06}$ & $2.43^{+0.06}_{-0.06}$ & $2.36^{+0.06}_{-0.06}$ & $2.25^{+0.06}_{-0.06}$ & $2.30^{+0.06}_{-0.06}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.78^{+0.04}_{-0.04}$ & $0.72^{+0.04}_{-0.04}$ & $0.66^{+0.04}_{-0.04}$ & $0.58^{+0.04}_{-0.03}$ & $0.62^{+0.04}_{-0.03}$\\ \hline \end{tabular} \caption{\label{tab:result1b} Matching results with input parameters in Set I (10\% uncertainties for input phenomenological parameters, $k_1=k_2=2$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$5\% & $<$2\% & $<$15\% (d$\leqq$6)
&-(d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.21,0.60] & [0.23,0.53] & [0.25,0.44] & [0.29, 0.34]& [0.25,0.44]\\ \hline $s_0$/GeV$^2$ & $12.66^{+1.07}_{-0.87}$ & $11.54^{+0.85}_{-0.72}$ & $10.45^{+0.64}_{-0.57}$ & $9.12^{+0.53}_{-0.49}$ & $9.65^{+0.65}_{-0.58}$ \\ \hline $m_H$/GeV & $2.65^{+0.08}_{-0.07}$ & $2.58^{+0.07}_{-0.07}$ & $2.51^{+0.07}_{-0.06}$ & $2.39^{+0.07}_{-0.06}$ & $2.44^{+0.07}_{-0.07}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.79^{+0.04}_{-0.04}$ & $0.73^{+0.04}_{-0.04}$ & $0.67^{+0.04}_{-0.04}$ & $0.60^{+0.03}_{-0.03}$ & $0.63^{+0.03}_{-0.03}$\\ \hline \end{tabular} \caption{\label{tab:result1c} Matching results with input parameters in Set I (10\% uncertainties for input phenomenological parameters, $k_1=k_2=3$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$5\% & $<$3\%\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.21,0.53] & [0.23,0.46] & [0.25,0.41]\\ \hline $s_0$/GeV$^2$ & $12.23^{+0.85}_{-0.73}$ & $11.17^{+0.67}_{-0.60}$ & $10.66^{+0.58}_{-0.53}$ \\ \hline $m_H$/GeV & $2.64^{+0.07}_{-0.07}$ & $2.58^{+0.06}_{-0.06}$ & $2.54^{+0.06}_{-0.06}$\\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.76^{+0.04}_{-0.04}$ & $0.71^{+0.04}_{-0.04}$ & $0.68^{+0.04}_{-0.04}$ \\ \hline \end{tabular} \caption{\label{tab:result1d} Matching results with input parameters in Set I (10\% uncertainties for input phenomenological parameters, $k_1=3$, $k_2=5$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$7\% & $<$5\% & $<$15\% (d$\leqq$6)& - (d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.36,0.81] & [0.38,0.74] & [0.39,0.68] & [0.48,0.59] & [0.39,0.68]\\ \hline $s_0$/GeV$^2$ & $6.98^{+0.37}_{-0.40}$ & $6.62^{+0.33}_{-0.36}$ & $6.32^{+0.30}_{-0.33}$ & $5.12^{+0.28}_{-0.33}$ & $5.17^{+0.29}_{-0.35}$ \\ \hline $m_H$/GeV & $1.98^{+0.04}_{-0.05}$ & $1.95^{+0.04}_{-0.05}$ & $1.93^{+0.04}_{-0.05}$ & $1.77^{+0.04}_{-0.05}$ & $1.78^{+0.04}_{-0.05}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.65^{+0.05}_{-0.04}$ & $0.62^{+0.05}_{-0.04}$ & $0.60^{+0.04}_{-0.04}$ & $0.53^{+0.04}_{-0.03}$ & $0.54^{+0.04}_{-0.04}$\\ \hline \end{tabular} \caption{\label{tab:result2a} Matching results with input parameters in Set II (10\% uncertainties for input phenomenological parameters, $k_1=k_2=1$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$7\% & $<$4\% & $<$15\% (d$\leqq$6)& - (d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.29,0.66] & [0.30,0.61] & [0.32,0.54] & [0.39,0.45] & [0.32,0.54]\\ \hline $s_0$/GeV$^2$ & $8.85^{+0.57}_{-0.53}$ & $8.41^{+0.50}_{-0.47}$ & $7.97^{+0.43}_{-0.42}$ & $6.57^{+0.41}_{-0.40}$ & $6.70^{+0.43}_{-0.42}$ \\ \hline $m_H$/GeV & $2.25^{+0.06}_{-0.06}$ & $2.22^{+0.05}_{-0.06}$ & $2.18^{+0.05}_{-0.05}$ & $2.02^{+0.06}_{-0.06}$ & $2.04^{+0.06}_{-0.06}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.67^{+0.04}_{-0.04}$ & $0.65^{+0.04}_{-0.04}$ & $0.62^{+0.04}_{-0.04}$ & $0.54^{+0.04}_{-0.03}$ & $0.55^{+0.04}_{-0.03}$\\ \hline \end{tabular} \caption{\label{tab:result2b} Matching results with input parameters in Set II (10\% uncertainties for input phenomenological parameters, $k_1=k_2=2$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$7\% & $<$4\% & $<$15\% (d$\leqq$6)& - (d$\leqq$6)\\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.25,0.59] 
& [0.26,0.54] & [0.27,0.48] & [0.33,0.38] & [0.27,0.48] \\ \hline $s_0$/GeV$^2$ & $10.57^{+0.82}_{-0.72}$ & $10.01^{+0.70}_{-0.64}$ & $9.41^{+0.59}_{-0.55}$ & $7.82^{+0.51}_{-0.50}$ & $8.09^{+0.58}_{-0.55}$\\ \hline $m_H$/GeV & $2.46^{+0.07}_{-0.07}$ & $2.42^{+0.07}_{-0.07}$ & $2.38^{+0.06}_{-0.07}$ & $2.22^{+0.07}_{-0.07}$ & $2.24^{+0.07}_{-0.07}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.71^{+0.05}_{-0.04}$ & $0.68^{+0.04}_{-0.04}$ & $0.65^{+0.04}_{-0.04}$ & $0.56^{+0.04}_{-0.03}$ & $0.58^{+0.04}_{-0.03}$\\ \hline \end{tabular} \caption{\label{tab:result2c} Matching results with input parameters in Set II (10\% uncertainties for input phenomenological parameters, $k_1=k_2=3$).} \end{table} \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline $|$HDC$|$/OPE & $<$10\% & $<$7\% & $<$5\% \\ \hline $[\tau_{\text{min}},\tau_{\text{max}}]$/GeV$^{-2}$ & [0.24,0.53] & [0.25,0.49] & [0.26,0.45]\\ \hline $s_0$/GeV$^2$ & $10.59^{+0.69}_{-0.63}$ & $10.15^{+0.62}_{-0.57}$ & $9.76^{+0.55}_{-0.52}$\\ \hline $m_H$/GeV & $2.49^{+0.07}_{-0.07}$ & $2.46^{+0.06}_{-0.06}$ & $2.43^{+0.06}_{-0.06}$ \\ \hline $f_H^2$/$10^{-3}$GeV$^2$ & $0.71^{+0.04}_{-0.04}$ & $0.68^{+0.04}_{-0.04}$ & $0.66^{+0.04}_{-0.04}$\\ \hline \end{tabular} \caption{\label{tab:result2d} Matching results with input parameters in Set II (10\% uncertainties for input phenomenological parameters, $k_1=3$, $k_2=5$).} \end{table} \end{appendix} \clearpage
{ "attr-fineweb-edu": 1.384766, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbSbxK1Thg9qFbnVY
\section{Introduction} \IEEEPARstart{T}{he} characterization of the capacity of continuous-time channels with an average power constraint by waterfilling in the frequency domain, going back to Shannon \cite{Shannon1949}, has been given by Gallager \cite{Gallager} for linear time-invariant (LTI) channels in great generality. At least since the advent of mobile communications, there has been a vivid interest in similar results for LTV channels; see \cite{Barbarossa}, \cite{Jung}, \cite{Farrell}, \cite{DSBS} to cite only a few. Although most wireless communication channels are modeled by \emph{random} LTV filters \cite{Bello}, \cite{DSBS}, a waterfilling characterization of the capacity of deterministic LTV channels might also be of interest. Furthermore, many nonstationary continuous-time sources can be described as the response of an LTV filter to white Gaussian noise. It is therefore natural to ask for a solution to the dual problem, namely the reverse waterfilling characterization of the rate distortion function for such sources with a fidelity criterion. The classical answer to this question in the case of a stationary source, already outlined by Kolmogorov in \cite{Kolm}, has been given by Berger \cite{Berger} for a broad class of stationary random processes. Since then, until quite recently \cite{KiGo}, no similar results for nonstationary sources have been reported. Within the framework of time--frequency analysis, treating the time--frequency plane ``as a whole" \cite{Daub}, we present waterfilling solutions to both problems (with constraints on the average energy in the case of the channel and on the squared-error distortion in the case of the source). We consider integral operators $\P$ from the Hilbert space $L^2(\mathbb{R})$ of square-integrable functions $f:\mathbb{R}\rightarrow\mathbb{C}\cup\{\infty\}$ into itself of the form \begin{equation} (\P f)(t)=\int_{-\infty}^{\infty}h(t,t')f(t')\,\d t' \label{Op_P} \end{equation} with the kernel $h\in L^2(\mathbb{R}^2)$, i.e., Hilbert--Schmidt (HS) operators on $L^2(\mathbb{R})$ \cite{Reed}. Every such operator has a unique Weyl symbol $p=\sigma_{\P }\in L^2(\mathbb{R}^2)$ so that Eq.~\eqref{Op_P} may be written as \cite{Pool}, \cite{KoHl} \begin{equation} (\P f)(t)=\frac{1}{2\pi}\iint_{\mathbb{R}^2} p\left(\frac{t+t'}{2},\omega\right)\e^{\i(t-t')\omega}f(t')\,\d t'\,\d\omega. \label{Eq_pP} \end{equation} The Weyl symbol, a concept originating in quantum mechanics \cite{Gosson11}, \cite{Folland}, \cite{Groch}, is now a standard tool for the description of LTV systems \cite{MaHl} (because of its physical provenance, we shall often switch between variables $t,\omega$ and $x,\xi$ standing for time, angular frequency and the corresponding phase space coordinates). The operator \eqref{Op_P}, regarded as an LTV filter for finite-energy signals $f(t)$, will play a central role in our investigations. However, for the formulation of problems it will be necessary to replace $\P$ with the operator $\P_r:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ having the \emph{spread} Weyl symbol $\sigma_{\P_r}(t,\omega)=p_r(t,\omega)\triangleq p(t/r,\omega/r)$, where $r\ge1$ is the spreading factor. Eq.~\eqref{Op_P} then turns into \begin{equation} (\P_r f)(t) = \int_{-\infty}^\infty h(r,t,t')f(t')\,\d t', \label{Op_Pr} \end{equation} where $h(r,\cdot,\cdot)\in L^2(\mathbb{R}^2)$ denotes the kernel, now depending on $r$. 
It is not difficult to express $h(r,t,t')$ in terms of $h(t,t')$ and $r$; however, we shall rarely make use of that representation since the Weyl symbol appears to be the appropriate filter description in our context. Although other choices are possible for that symbol (also called the time--frequency transfer function; see \cite{MaHl} for a systematic overview), the Weyl symbol excels due to some unique properties, one of them being most helpful later on. There is one other choice for the description of LTV filters: the spreading function \cite{Bello}, \cite{Groch}, \cite{MaHl}. This is the two-dimensional (symplectic) Fourier transform of, in our case, the Weyl symbol $\sigma_{\P}$, \[ \hat{\sigma}_{\P}(\tau,\nu)=\frac{1}{2\pi}\iint_{\mathbb{R}^2}\e^{-\i(x\nu-\tau\xi)}\sigma_{\P}(x,\xi)\,\d x\,\d\xi, \] and its popularity in mobile communications comes from the fact that the representation \[ (\P f)(t)=\frac{1}{2\pi}\iint_{\mathbb{R}^2}\hat{\sigma}_{\P}(\tau,\nu)\e^{-\i\tau\nu/2}f(t-\tau)\e^{\i t\nu}\,\d\tau\,\d\nu \] allows a simple interpretation of the operator in terms of a weighted superposition of time delays $\tau$ and Doppler shifts $\nu$ of the input signal. Because of $\hat{\sigma}_{\P_r}(\tau,\nu)=r^2\hat{\sigma}_{\P}(r\tau,r\nu)$ we observe increasing concentration of the spreading function $\hat{\sigma}_{\P_r}$ of operator $\P_r$ around the origin of the $\tau,\nu$-plane as $r\rightarrow\infty$. This behaviour, shared by many practical LTV filters and termed \emph{underspread} in \cite{Kozek97}, \cite{MaHl}, is therefore also peculiar to our setting (where, in principle, $r$ tends to infinity). However, it remains to be remarked that the spreading function would not be the proper means for formulating the subsequent waterfilling theorems, Theorem~\ref{WFT1} and Theorem~\ref{WFT2}. The present paper evolves from previous work presented in \cite{Ham2014}. We now give a brief overview of the contributions of our paper with emphasis on extensions and modifications compared to \cite{Ham2014}; for details, refer to the text. The LTV filters, initially arbitrary HS operators, are later restricted to those having Weyl symbols in the Schwartz space of rapidly decreasing functions (thus including the bivariate Gaussian function used in \cite{Ham2014}). The waterfilling theorem for the capacity of the LTV channel is now stated in terms of the reciprocal squared modulus of the spread Weyl symbol of the LTV filter. Similarly, the reverse waterfilling theorem for the rate distortion function for the nonstationary source is stated in terms of the squared modulus of the spread Weyl symbol of the LTV filter. A major difference from \cite{Ham2014} is the statement of a new Szeg\H{o} theorem, which is now general enough to cover a large class of operators. For part of the proof of the Szeg\H{o} theorem we resort to a powerful asymptotic expansion having its roots in semiclassical physics \cite{Gosson11}, \cite{Folland}, \cite{Robert}. Since our results are asymptotic in nature, there is a need to give a lower bound for the spreading factor so that the formulas in the waterfilling theorems yield useful approximations. A lower bound is suggested by means of the Robertson--Schr\"{o}dinger uncertainty inequality \cite{Gosson11}. Several concrete examples will illustrate our results. \section{Mathematical Preliminaries} \label{Sec_MathPrel} In the present section, we fix the notation and compile some mathematical concepts and results associated with the LTV filter \eqref{Op_Pr}. 
In Section~\ref{Sec_Fundamentals}, it will be sufficient to restrict ourselves to the spreading factor $r=1$, which is therefore omitted; generalizations to the case $r\ge1$, mostly obvious, will be addressed as needed. \subsection{Notation} The following notations will be adopted: The inner product in $L^2(\mathbb{R})$ is denoted by $\langle f_1,f_2\rangle=\int_{-\infty}^\infty f_1(x)\overline{f_2(x)}\,\d x$, and $\|f\|=\langle f,f\rangle^{1/2}$ is the corresponding norm. For an operator $\A:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$, its adjoint $\A^*:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ is defined by the condition $\langle\A f_1,f_2\rangle=\langle f_1,\A ^*f_2\rangle\,\forall f_1,f_2\in L^2(\mathbb{R})$; $\A$ is called self-adjoint if $\A^*=\A$. $\S(\mathbb{R}^n),\,n=1,2,$ is the Schwartz space of rapidly decreasing functions on $\mathbb{R}^n$ (cf. \cite{Groch}); if $n=2$ and the function $u$ additionally depends on the parameter $r$, $u=u(r,x,\xi)$, then $u\in\S(\cdot,\mathbb{R}^2)$ means \[ \sup_{x,\xi}|x^{\beta_1}\xi^{\beta_2}\partial_x^{\alpha_1}\partial_\xi^{\alpha_2}u(r,x,\xi)|\le C_{\bsalpha\bsbeta}<\infty \] for all $\bsalpha=(\alpha_1,\alpha_2),\bsbeta=(\beta_1,\beta_2)\in\mathbb{N}_0^2$, where the constants $C_{\bsalpha\bsbeta}$ do not depend on $r$. $L^2_{\mathbb{R}}(\mathbb{R})$ is the real Hilbert space of real-valued functions in $L^2(\mathbb{R})$. \subsection{Fundamental Concepts and Results} \label{Sec_Fundamentals} \subsubsection{Weyl correspondence} The Weyl symbol $\sigma_{\P}$ of the HS operator $\P$ in \eqref{Op_P} is given by the equation (sometimes called the Wigner transform) \cite{Pool}, \cite{KoHl} \begin{equation} \sigma_{\P }(x,\xi)=\int_{-\infty}^\infty \e^{-\i\xi x'}h\left(x+\frac{x'}{2}, x-\frac{x'}{2}\right)\,\d x'. \label{WT} \end{equation} The linear mapping $\P\mapsto p=\sigma_{\P}$ defined by \eqref{WT} establishes a one-to-one correspondence between all HS operators on $L^2(\mathbb{R})$ and all functions $p\in L^2(\mathbb{R}^2)$ \cite{Pool}, \cite{Groch}. Moreover, it holds (here and hereafter, double integrals extend over $\mathbb{R}^2$) \begin{equation} \frac{1}{2\pi}\iint|p(x,\xi)|^2\,\d x\,\d\xi=\iint|h(x,y)|^2\,\d x\,\d y. \label{JCTP} \end{equation} The above mapping (or rather its inverse) is called Weyl correspondence \cite{Groch}. \subsubsection{Singular value decomposition (SVD)} \label{Sec_SVD} Every HS operator $\P$ on $L^2(\mathbb{R})$ is compact and so is its adjoint $\P^*$ \cite{Reed}. Define the self-adjoint operator $\A \triangleq\P^*\P$ on $L^2(\mathbb{R})$. $\A$ is positive because $\langle \A f,f\rangle=\langle \P f,\P f\rangle\ge0\,\forall f\in L^2(\mathbb{R})$, and compact since one factor, say, $\P$, is compact. Therefore, $\P$ has the SVD \cite{Reed}, \cite[Th.~8.4.1]{Gallager} \begin{equation} (\P f)(x)=\sum_{k=0}^N \sqrt{\lambda_k}\,\langle f,f_k\rangle g_k(x), \label{SVD} \end{equation} where $\{f_0,\ldots,f_N\}$, $\{g_0,\ldots,g_N\}$ ($N\in\mathbb{N}_0$ or $N=\infty$) form orthonormal systems in $L^2(\mathbb{R})$, and $\lambda_0\ge\lambda_1\ge\ldots>0$ are the non-zero eigenvalues of $\A$ (counting multiplicity) with the corresponding eigenfunctions $f_k$; the functions $g_k$ are defined by $g_k=\P f_k/\sqrt{\lambda_k}$, the positive numbers $\sqrt{\lambda_k},\,k=0,\ldots,N$, being the non-zero \emph{singular values} of $\P$. If $\P$ maps $L^2_{\mathbb{R}}(\mathbb{R})$ into itself, then the functions $f_k,g_k$ will be real-valued. Without loss of generality (w.l.o.g.)
we shall assume that $N=\infty$ (otherwise, put $\lambda_k=0$ and choose $f_k,g_k$ anyway for $k>N$). Then always $\lambda_k\rightarrow0$ as $k\rightarrow\infty$. \subsubsection{Traces of operators} By Eq.~\eqref{SVD}, the kernel of operator $\P$ in \eqref{Op_P} has the form $h(x,y)=\sum_{k=0}^\infty \sqrt{\lambda_k}g_k(x)\overline{f_k(y)}$ from where we readily obtain $\iint |h(x,y)|^2\,\d x\,\d y=\sum_{k=0}^\infty \lambda_k$. In combination with \eqref{JCTP}, this results in the useful equation \begin{equation} \tr\,\A\triangleq\sum_{k=0}^\infty \lambda_k =\frac{1}{2\pi}\iint|p(x,\xi)|^2\,\d x\,\d\xi<\infty. \label{ID1} \end{equation} Since $\tr\,\A$ (the trace of $\A$) is finite, $\A$ is of \emph{trace class} (see \cite{Reed} for a general definition of trace class operators). In Section \ref{Sec_VI}, the operator $\Atilde\triangleq\P\P^*$ will be considered. Plugging $\P^*f\in L^2(\mathbb{R})$ for $f\in L^2(\mathbb{R})$ in \eqref{SVD} we get for $\Atilde$ the representation $(\Atilde f)(x)=\int K_{\Atilde}(x,y)f(y)\,\d y$ with the kernel \begin{equation} K_{\Atilde}(x,y)=\sum_{k=0}^\infty\lambda_k g_k(x)\overline{g_k(y)}. \label{K_PPstar} \end{equation} $\Atilde$ has the same eigenvalues as $\A$. Furthermore, since we are dealing with the \emph{Weyl} symbol we have the simple rule \begin{equation} \sigma_{\P ^*}(x,\xi)=\overline{\sigma_{\P}(x,\xi)}. \label{Eq_sigmabar} \end{equation} Hence, Eq.~\eqref{ID1} holds by analogy for operator $\Atilde$ (just replace ``$\A$" with ``$\Atilde$"). In quantum mechanics, an operator on $L^2(\mathbb{R})$ is called a density operator, if it is 1) self-adjoint, 2) positive and 3) of trace class with trace one \cite{Gosson11}. Apparently, the above operators $\A,\Atilde$ enjoy all these properties, with the exception of the very last. We give them a name: \begin{definition} \label{Def_QDO} A quasi density operator (QDO) is an operator on $L^2(\mathbb{R})$ of the form $\P^*\!\P$ or $\P\P^*$, where $\P:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ is an HS operator. \end{definition} \begin{remark} In \cite{Gosson08} it is noted that \emph{any} self-adjoint, positive operator on $L^2(\mathbb{R})$ of trace class allows factorizations as given in Def.~\ref{Def_QDO}; the above narrow-sense meaning of QDO will be sufficient for our purposes. \end{remark} The following result is key to our paper: If the operator $\B:L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ has a Weyl symbol $\sigma_{\B}\in\S(\mathbb{R}^2)$, then $\B$ is of trace class and its trace is given by the \emph{trace rule} \cite{Janssen} \begin{equation} \tr\,\B=\frac{1}{2\pi}\iint\sigma_{\B}(x,\xi)\,\d x\,\d\xi. \label{Eq_tracerule} \end{equation} Refer to \cite{Gosson11} concerning the smoothness assumption and for a proof. \subsubsection{Bound on eigenvalues} \label{Sec_CV} If the function $a=a(x,\xi):\mathbb{R}^2\rightarrow\mathbb{C}$ is differentiable up to the sixth order and it holds \begin{equation} \sup_{x,\xi}|\partial_x^{\alpha_1}\partial_\xi^{\alpha_2}a(x,\xi)|\le C_{\bsalpha}<\infty \label{Ineq_CV} \end{equation} for all $\bsalpha=(\alpha_1,\alpha_2)\in I=\{0,1,2,3\}^2$, then the operator $\A$ defined by the Weyl symbol $a$ is a \emph{bounded} operator from $L^2(\mathbb{R})$ into itself, and it holds \[ \|\A f\|\le c_0C\,\|f\|,\,f\in L^2(\mathbb{R}), \] where $C=\sum_{\bsalpha\in I}C_{\bsalpha}$ and $c_0$ is a certain constant not depending on the operator. This is the famous theorem of Calder\'{o}n--Vaillancourt \cite{Calderon}, \cite{Folland}. 
Consequently, the absolute value $|\lambda|$ of every eigenvalue $\lambda$ of $\A$ is bounded by $c_0C$. \section{Channel Model and Discretization} \label{Sec_III} We consider for any spreading factor $r\ge 1$ held constant the LTV channel \begin{equation} \tilde{g}(t)=(\P _rf)(t)+n(t),\,-\infty<t<\infty, \label{LTV_Ch} \end{equation} where $\P _r$ is the LTV filter \eqref{Op_Pr}, the real-valued filter input signals $f(t)$ are of finite energy, and the noise signals $n(t)$ at the filter output are realizations of white Gaussian noise with two-sided power spectral density (PSD) $N_0/2=\theta^2>0$. Moreover, we assume throughout that the kernel $h(t,t')$ of operator $\P$ in \eqref{Op_P} is real-valued; observe that, because of \begin{multline*} h(r,t,t') =rh((r^{-1}(t+t')+r(t-t'))/2,\\(r^{-1}(t+t')-r(t-t'))/2), \end{multline*} the kernel $h(r,t,t')$ of operator $\P_r$ is then also real-valued, so that $\P_r$ maps $L^2_{\mathbb{R}}(\mathbb{R})$ into itself. This channel is depicted in Fig.~\ref{Fig_1}. We now reduce the LTV channel \eqref{LTV_Ch} to a (discrete) vector Gaussian channel, following the approach in \cite{Gallager} for LTI channels; our analysis is greatly simplified by the restriction to finite-energy input signals. For the SVD of operator $\P_r$ the $r$-dependent operator $\A(r)\triangleq\PrStarPr$ has to be considered; since eigenvalues $\lambda_k$ and (eigen-)functions $f_k,\,g_k$ in the SVD now also depend on $r$, this will be indicated by a superscript~$\cdot\,^{(r)}$. Then, by Eq.~\eqref{SVD}, the LTV filter \eqref{Op_Pr} has the SVD \begin{equation} (\P_rf)(t)=\sum_{k=0}^\infty [\lambda_k^{(r)}]^{\frac{1}{2}}a_k\,g_k^{(r)}\!(t), \label{SVD_r} \end{equation} where the coefficients are $a_k=\langle f,f_k^{(r)}\rangle,\,k=0,1,\ldots,$ and $\{g_k^{(r)};k=0,1,\ldots\}$ forms an orthonormal system in $L^2(\mathbb{R})$. Recall from Section~\ref{Sec_SVD} that the functions $f_k^{(r)}\!,\,g_k^{(r)}$ are real-valued. The noisy filter output signal $\tilde{g}(t)=g(t)+n(t)$, $g=\P_rf$, is passed through a bank of matched filters with impulse responses $h_k(t)=g_k^{(r)}(-t),\,k=0,1,\ldots\,.$ The matched filter output signals are sampled at time zero to yield $\langle\tilde{g}(t),h_k(-t)\rangle=b_k+n_k$, where $b_k=\langle g(t),h_k(-t)\rangle=[\lambda_k^{(r)}]^{1/2}a_k$, and the detection errors $n_k=\langle n(t),h_k(-t)\rangle$ are realizations of independent identically distributed (i.i.d.) zero-mean Gaussian random variables $N_k$ with the variance $\theta^2$, $N_k\sim\mathcal{N}(0,\theta^2)$. From the detected values $\hat{b}_k=b_k+n_k$ we get the estimates $\hat{a}_k=[\lambda_k^{(r)}]^{-1/2}\hat{b}_k=a_k+z_k$ for the coefficients $a_k$ of the input signal $f$, where $z_k$ are realizations of independent Gaussian random variables $Z_k\sim\mathcal{N}(0,\theta^2/\lambda_k^{(r)})$. Thus, we are led to the infinite-dimensional vector Gaussian channel \begin{equation} Y_k=X_k+Z_k,\,k=0,1,\ldots, \label{LTV_discr} \end{equation} where the noise $Z_k$ is distributed as described. Note that the noise PSD $\theta^2$, measured in watts/Hz, also has the physical dimension of an energy.
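A minimal numerical sketch of this reduction (our own illustration, not part of the channel model) discretizes the kernel on a uniform grid and reads off the subchannel noise variances from the singular values; it assumes the kernel has been sampled on a sufficiently fine and wide grid:
\begin{verbatim}
import numpy as np

def vector_channel(H, dt, theta2):
    """Reduce a sampled LTV kernel to the vector channel (LTV_discr).

    H[i, j] approximates h(r, t_i, t_j) on a uniform grid with
    spacing dt, so that (H * dt) @ f approximates (P_r f)(t_i).
    theta2 is the noise PSD N_0/2 = theta^2."""
    # Columns of U ~ g_k^{(r)}, rows of Vt ~ f_k^{(r)} (up to 1/sqrt(dt)).
    U, s, Vt = np.linalg.svd(H * dt)
    lam = s**2                      # eigenvalues lambda_k of P_r^* P_r
    noise_var = np.divide(theta2, lam,
                          out=np.full_like(lam, np.inf), where=lam > 0)
    return s, noise_var             # Z_k ~ N(0, theta^2 / lambda_k)
\end{verbatim}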
\begin{figure} \setlength{\unitlength}{1cm} \begin{picture}(8.5,4) \put(3,1){\framebox(2,1){$p_r(t,\omega)$}} \put(2,1.5){\vector(1,0){1}} \put(1.0,1.4){$f(t)$} \put(6.25,1.5){\vector(1,0){1.0}} \put(5.875,1.375){\makebox(0.25,0.25){$+$}} \put(5,1.5){\vector(1,0){0.75}} \put(7.5,1.45){$\tilde{g}(t)$} \put(6,3){\vector(0,-1){1.25}} \put(5.7,3.2){$n(t)$} \put(6,1.5){\circle{0.5}} \put(6.5,3.2){\parbox{1.75cm}{\footnotesize white Gaussian noise with PSD $N_0/2=\theta^2$}} \put(0.6,0.5){\parbox{1.75cm}{\footnotesize finite-energy, real-valued}} \put(3.4,0.5){\parbox{2cm}{\footnotesize LTV filter}} \end{picture} \caption{Model of the LTV channel. The Weyl symbol $p_r(t,\omega)$ acts as a time--frequency transfer function; $r\ge1$ is the spreading factor.} \label{Fig_1} \end{figure} \begin{figure} \setlength{\unitlength}{1cm} \begin{picture}(8.5,3) \put(3.5,1){\framebox(2,1){$p_r(t,\omega)$}} \put(2.5,1.5){\vector(1,0){1}} \put(1.5,1.4){$n(t)$} \put(5.5,1.5){\vector(1,0){1.0}} \put(6.8,1.4){$x(t)$} \put(6.6,0.75){\footnotesize response} \put(0.5,0.5){\parbox{2.6cm}{\footnotesize white Gaussian noise with PSD $N_0/2=\sigma^2$}} \put(4,0.5){\parbox{2cm}{\footnotesize LTV filter}} \end{picture} \caption{Model of the nonstationary source} \label{Fig_2} \end{figure} \section{A Szeg\H{o} Theorem for Quasi Density Operators} \label{Sec_IV} From now on to the end of the paper, we assume that the Weyl symbol $p$ of the HS operator $\P$ in \eqref{Eq_pP} is in the Schwartz space of rapidly decreasing functions, $p\in\S(\mathbb{R}^2)$. Consider the QDO $\A=\P^*\!\P$ and generalize it as above to the operator $\A(r)=\PrStarPr$, $r\ge1$ (again a QDO). We now state and prove a Szeg\H{o} theorem for $\A(r)$. Szeg\H{o} theorems like the subsequent Theorem~\ref{SzegoTh} are not new \cite{Widom}, \cite{Janssen}, \cite{Fei}, \cite{Oldfield}, but all the Szeg\H{o} theorems we are aware of are inadequate for our purposes. The proof of Lemma~\ref{Lemma_1} (see below) rests on an asymptotic expansion of the $n$th power of $\A(r)$. Asymptotic expansions such as this one (there are different ways of estimating the error!) have a long tradition in semiclassical physics and the theory of pseudodifferential operators \cite{Widom}, \cite{Folland}; rigorous proofs, however, are sometimes hard to find. A complete proof of the following Lemma~\ref{power_lemma}, which is perhaps closest to results of \cite{Robert}, is deferred to the Appendix. Although we need the lemma only in the case of $m=1$, it would not be natural to omit a full statement of it: \begin{lemma}\label{power_lemma} For any $n\in\mathbb{N}$, the Weyl symbol of the operator $\A^n(r)\triangleq[\A(r)]^n,\,r\ge1,$ has the asymptotic expansion \begin{equation} \sigma_{\A^n(r)}(x,\xi)\sim\sum_{k=0}^\infty r^{-2k}a_k(x/r,\xi/r), \label{AE} \end{equation} where $a_0(x,\xi)=|p(x,\xi)|^{2n}$ and $a_k\in\S(\mathbb{R}^2)$ otherwise, and Eq.~\eqref{AE} means that for all $m\in\mathbb{N}$ it holds \begin{multline} \sigma_{\A^n(r)}(x,\xi)=\sum_{k=0}^{m-1} r^{-2k}a_k(x/r,\xi/r)\\ +r^{-2m}R_m(r,x/r,\xi/r),\nonumber \end{multline} where $R_m=R_m(r,x,\xi)\in\S(\cdot,\mathbb{R}^2)$. \end{lemma} \begin{IEEEproof} See Appendix. \end{IEEEproof} Asymptotically, i.e., as $r\rightarrow\infty$, $a_0(x/r,\xi/r)$ is the dominant part of the asymptotic expansion \eqref{AE}. As is customary in the theory of pseudodifferential operators (cf., e.g., \cite{Folland}, \cite{Janssen}), the expression $|p_r(x,\xi)|^{2n}$ will be called the \textit{principal symbol} of operator $\A^n(r)$.
Observe that the Weyl symbol of the $n$th power of the operator $\Atilde(r)=\P_r\P_r^*,r\ge1,$ has an asymptotic expansion analogous to that of $\A^n(r)$ and the principal symbols of both operators are identical. \begin{definition} \label{def_2} For any two functions $A,\,B:[1,\infty)\rightarrow\mathbb{R}$ the notation $A\doteq B$ means \[ \lim_{x\rightarrow\infty}\frac{A(x)-B(x)}{x^2}=0, \] or, equivalently, $A(x)=B(x)+o(x^2)$ as $x\rightarrow\infty$, where $o(\cdot)$ denotes the standard Landau little-o symbol. \end{definition} In our context, $x$ will always be the spreading factor $r\ge1$. Thus $A\doteq B$ implies that $A(r)/r^2=B(r)/r^2+\epsilon$ where $\epsilon\rightarrow 0$ as $r\rightarrow\infty$. \begin{lemma}\label{Lemma_1} For any polynomial $G_N(x,z)=\sum_{n=1}^N c_n(x) z^n$ with bounded variable coefficients $c_n(x)\in\mathbb{R},\,x\ge 1,$ it holds \[ \sum_{k=0}^\infty G_N(r,\lambda_k^{(r)})\doteq\frac{1}{2\pi} \iint_{\mathbb{R}^2}G_N(r,|p_r(x,\xi)|^2)\,\d x\,\d\xi. \] \end{lemma} \begin{IEEEproof} First, application of operator $\P_r^*$ to both sides of Eq.~\eqref{SVD_r} yields \[ \A(r)f=\sum_{k=0}^\infty\lambda_k^{(r)}\langle f,f_k^{(r)}\rangle f_k^{(r)}. \] So we get for any $f\in L^2(\mathbb{R})$ the expansion \[ G_N(r,\A(r))f=\sum_{k=0}^\infty G_N(r,\lambda_k^{(r)})\langle f,f_k^{(r)}\rangle f_k^{(r)}. \] Hence, operator $\boldsymbol{B}(r)\triangleq G_N(r,\A(r))$ is of trace class with the trace \begin{equation} \tr\,\boldsymbol{B}(r)=\sum_{k=0}^\infty G_N(r,\lambda_k^{(r)}), \label{trace_1} \end{equation} the series being absolutely converging since $G_N(x,0)=0\,\forall x\in[1,\infty)$. Second, we use the trace rule \eqref{Eq_tracerule} to obtain \begin{equation} \label{Eq_trBrInt} \tr\,\B(r)=\frac{1}{2\pi}\iint\sigma_{\B(r)}(x,\xi)\,\d x\,\d\xi, \end{equation} where $\sigma_{\boldsymbol{B}(r)}(x,\xi)$ is the Weyl symbol of operator $\boldsymbol{B}(r)$. By linearity of the Weyl correspondence, $\sigma_{\B(r)}(x,\xi)$ has the expansion \begin{equation} \label{Eq_sigmaBr} \sigma_{\B(r)}(x,\xi)=\sum_{n=1}^N c_n(r)\sigma_{\A^n(r)}(x,\xi). \end{equation} From Lemma~\ref{power_lemma}, taking $m=1$, we infer that \[ \iint\sigma_{\A^n(r)}(x,\xi)\,\d x\,\d\xi \doteq\iint|p_r(x,\xi)|^{2n}\,\d x\,\d\xi. \] Plugging \eqref{Eq_sigmaBr} into \eqref{Eq_trBrInt}, we obtain by means of the latter equation \begin{align} \tr\,\boldsymbol{B}(r)&=\frac{1}{2\pi}\sum_{n=1}^N c_n(r) \iint\sigma_{\A^n(r)}(x,\xi)\,\d x\,\d\xi \nonumber\\ &\doteq\frac{1}{2\pi}\sum_{n=1}^N c_n(r) \iint|p_r(x,\xi)|^{2n}\,\d x\,\d\xi\nonumber\\ &=\frac{1}{2\pi}\iint G_N(r,|p_r(x,\xi)|^2)\,\d x\,\d\xi. \label{trace_2} \end{align} Eq.~\eqref{trace_2} in combination with Eq.~\eqref{trace_1} concludes the proof. \end{IEEEproof} Lemma~\ref{power_lemma} shows that in the case of $n=1$ and, say, $m=1$, the Weyl symbol $a(r,x,\xi)=\sigma_{\A(r)}(x,\xi)$ of operator $\A(r)$ satisfies Ineq.~\eqref{Ineq_CV} of Section~\ref{Sec_CV} with upper bounds $C_{\bsalpha}$ that may be chosen independent of $r\ge1$. Consequently, the eigenvalues $\lambda_k^{(r)}$ of $\A(r)$ are uniformly bounded for $r\ge1$; define \begin{equation} \label{Lambda} \Lambda_p\triangleq\max\left\{\sup_{\,r\ge1}\lambda_0^{(r)},\max_{x,\xi}|p(x,\xi)|^2\right\}. \end{equation} This constant appears in the next theorem: \begin{theorem}[Szeg\H{o} Theorem]\label{SzegoTh} Let $g:[0,\Delta]\rightarrow\mathbb{R}$, $\Delta\in(0,\infty)$, be a continuous function such that $\lim_{x\rightarrow 0+}g(x)/x$ exists. 
For any functions $a,\,b:[1,\infty)\rightarrow\mathbb{R}$, where $a(x)$ is bounded and $\Lambda_p b(x)\in[0,\Delta]$, define the function $G(x,z)=a(x)g(b(x)z),\,(x,z)\in[1,\infty)\times[0,\Lambda_p]$. Then it holds \begin{equation} \sum_{k=0}^\infty G(r,\lambda_k^{(r)})\doteq\frac{1}{2\pi} \iint_{\mathbb{R}^2}G(r,|p_r(x,\xi)|^2)\,\d x\,\d\xi. \label{Szego} \end{equation} \end{theorem} \begin{IEEEproof} The function $f(x)=g(x)/x,\,x\in(0,\Delta],$ has a continuous extension $F(x)$ onto the compact interval $[0,\Delta]$. By virtue of the Weierstrass approximation theorem, for any $m\in\mathbb{N}$ there exists a polynomial $F_{N_m-1}(x)$ of some degree $N_m-1$ such that $|F(x)-F_{N_m-1}(x)|\le\epsilon_m=\frac{1}{m}$ for all $x\in [0,\Delta]$. Consequently, the polynomial $g_{N_m}(x)=xF_{N_m-1}(x)$ of degree $N_m$ satisfies the inequality \begin{equation} |g(x)-g_{N_m}(x)|\le \epsilon_m x,\,x\in[0,\Delta]. \label{WAS_ineq} \end{equation} Define the polynomial with variable coefficients $G_{N_m}(x,z)=a(x)\,g_{N_m}\!(b(x)z)$. We now show that \begin{equation} r^{-2}\sum_{k=0}^\infty G_{N_m}(r,\lambda_k^{(r)})\rightarrow r^{-2}\sum_{k=0}^\infty G(r,\lambda_k^{(r)}) \label{first_arrow} \end{equation} and \begin{eqnarray} \lefteqn{\frac{r^{-2}}{2\pi}\iint G_{N_m}(r,|p_r(x,\xi)|^2)\,dx\,d\xi}\nonumber\\ &\rightarrow&\frac{r^{-2}}{2\pi}\iint G(r, |p_r(x,\xi)|^2)\,dx\,d\xi \label{second_arrow} \end{eqnarray} as $m\rightarrow\infty$, uniformly for all $r\ge1$ . To this end, first observe that by Eq.~\eqref{ID1} (generalized to the operator $\P_r,r\ge1$) it holds \begin{equation} \label{ID2} \begin{split} \sum_{k=0}^\infty \lambda_k^{(r)}&=\frac{1}{2\pi}\iint|p_r(x,\xi)|^2\,\d x\,\d\xi \\ &=c_pr^2, \end{split} \end{equation} where $c_p=(2\pi)^{-1}\iint|p(x,\xi)|^2\,\d x\,\d\xi$ is a finite constant. \textit{Proof of (\ref{first_arrow}):} By Ineq. (\ref{WAS_ineq}) we get (precluding the trivial case $\Lambda_p=0$) \begin{eqnarray*} \lefteqn{|\sum_{k=0}^\infty G(r,\lambda_k^{(r)})-\sum_{k=0}^\infty G_{N_m}(r,\lambda_k^{(r)})|}\\ &\le& \sum_{k=0}^\infty|G(r,\lambda_k^{(r)})-G_{N_m}(r,\lambda_k^{(r)})|\\ &\le& M\epsilon_m(\Delta/\Lambda_p)\sum_{k=0}^\infty \lambda_k^{(r)}, \end{eqnarray*} where $M=\sup_{x\ge1}|a(x)|<\infty$. Since $\sum_{k=0}^\infty\lambda_k^{(r)}=c_pr^2$, after division of the inequality by $r^2$, convergence in (\ref{first_arrow}) follows as claimed. \textit{Proof of (\ref{second_arrow}):} Similarly, \begin{eqnarray*} \lefteqn{|\iint G(r,|p_r(x,\xi)|^2)\,dx\,d\xi}\\ &&-\iint G_{N_m}(r,|p_r(x,\xi)|^2)\,dx\,d\xi|\\ &\le& M\epsilon_m(\Delta/\Lambda_p)\iint|p_r(x,\xi)|^2\,dx\,d\xi. \end{eqnarray*} Since $(2\pi)^{-1}\iint|p_r(x,\xi)|^2\,dx\,d\xi=c_pr^2$, after division by $2\pi r^2$ we come to the same conclusion as before. Finally, choose a (large) number $m\in\mathbb{N}$, so that the left-hand sides in \eqref{first_arrow}, \eqref{second_arrow} become arbitrarily close to their respective limits. Replace function $G$ in Eq.~\eqref{Szego} with the polynomial $G_{N_m}$. Then, by Lemma~\ref{Lemma_1} and the uniform convergence in \eqref{first_arrow}, \eqref{second_arrow} the theorem follows. \end{IEEEproof} Note that Theorem~\ref{SzegoTh} applies to operator $\Atilde(r)$ without any changes. 
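Theorem~\ref{SzegoTh} can be illustrated numerically. For the Gaussian Weyl symbol $p(x,\xi)=\e^{-(x^2+\xi^2)/2}$ (the symbol of Example~\ref{Example_1} below with $\gamma=1$), a short calculation gives the kernel of $\P_r$ in closed form, $h(r,x,y)=\frac{r}{\sqrt{2\pi}}\,\e^{-(x+y)^2/(8r^2)-r^2(x-y)^2/2}$. The following self-contained Python sketch (our own; the grid sizes are ad hoc) compares both sides of Eq.~\eqref{Szego} for the admissible function $g(x)=\frac{1}{2}\max\{0,\ln(bx)\}$:
\begin{verbatim}
import numpy as np

def kernel(t, r):
    # Closed-form kernel h(r,x,y) of P_r for the Gaussian Weyl
    # symbol p(x,xi) = exp(-(x^2 + xi^2)/2)  (gamma = 1).
    x, y = np.meshgrid(t, t, indexing="ij")
    return r / np.sqrt(2 * np.pi) * np.exp(
        -(x + y)**2 / (8 * r**2) - r**2 * (x - y)**2 / 2)

def szego_check(r=4.0, b=50.0, T=40.0, n=1500):
    t = np.linspace(-T, T, n)
    dt = t[1] - t[0]
    # lambda_k = mu_k^2, with mu_k the eigenvalues of the (here
    # self-adjoint) discretized operator P_r.
    lam = np.linalg.eigvalsh(kernel(t, r) * dt)**2
    lhs = 0.5 * np.sum(np.log(np.maximum(b * lam, 1.0)))
    # Right-hand side of Eq. (Szego): phase-space integral.
    X, XI = np.meshgrid(t, t, indexing="ij")
    integrand = 0.5 * np.log(
        np.maximum(b * np.exp(-(X**2 + XI**2) / r**2), 1.0))
    rhs = integrand.sum() * dt * dt / (2 * np.pi)
    return lhs, rhs   # both close to r^2 * ln(b)^2 / 8 for large r

print(szego_check())
\end{verbatim}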
\section{Waterfilling Theorem for the Capacity of Linear Time-Varying Channels} \label{Sec_V} \subsection{Waterfilling in the Time--Frequency Plane} The function $N_r,r\ge1,$ occurring in the next theorem is defined by $N_r(t,\omega)=N_1(t/r,\omega/r)$ where \begin{equation} N_1(t,\omega)=\frac{\theta^2}{2\pi}\,|p(t,\omega)|^{-2}, \label{N1} \end{equation} $p=\sigma_{\P}$ being the Weyl symbol of operator $\P$. Recall that $p\in\S(\mathbb{R}^2)$. $O(\cdot)$ denotes the standard Landau big-O symbol and $x^+$ denotes the positive part of $x\in\mathbb{R}$, $x^+=\max\{0,x\}$. \begin{theorem}\label{WFT1} Assume that the average energy $S$ of the input signal depends on $r$ such that $S(r)=O(r^2)$ as $r\rightarrow\infty$. Then for the capacity (in nats per transmission) of the LTV channel \eqref{LTV_Ch} it holds \begin{equation} C\doteq \frac{1}{2\pi}\iint_{\mathbb{R}^2}\frac{1}{2} \ln\left(1+\frac{(\nu-N_r(t,\omega))^+}{N_r(t,\omega)}\right)\,\d t\,\d\omega, \label{C} \end{equation} where $\nu$ is chosen so that \begin{equation} S\doteq\iint_{\mathbb{R}^2}(\nu-N_r(t,\omega))^+\,\d t\,\d\omega. \label{S} \end{equation} \end{theorem} \begin{IEEEproof} The first part of the proof is accomplished by waterfilling on the noise variances \cite[Th.~7.5.1]{Gallager}. Let $\nu_k^2=\theta^2/\lambda_k^{(r)}(\mbox{put }\theta^2/0=\infty),\,k=0,1,\ldots,$ be the noise variance in the $k$th subchannel of the discretized LTV channel \eqref{LTV_discr}. We exclude the trivial case $S=0$. The ``water level" $\sigma^2$ is then uniquely determined by the condition \begin{equation} S = \sum_{k=0}^\infty (\sigma^2-\nu_k^2)^+=\sum_{k=0}^{K-1} (\sigma^2-\nu_k^2),\label{def_sigma} \end{equation} where $K=\max\{k\in\mathbb{N};\nu_{k-1}^2<\sigma^2\}$ is the number of subchannels in the resulting finite-dimensional vector Gaussian channel. The capacity $C$ of that vector channel is achieved when the components $X_k$ of the input vector $(X_0,\ldots,X_{K-1})$ are independent random variables $\sim \mathcal{N}(0,\sigma^2-\nu_k^2)$; then \begin{equation} C=\sum_{k=0}^{K-1}\frac{1}{2}\ln\left(1+\frac{\sigma^2-\nu_k^2}{\nu_k^2}\right) \quad\mathrm{nats}. \label{C1} \end{equation} In the second part of the proof we apply the above Szeg\H{o} theorem, Theorem~\ref{SzegoTh}. To start with, note that $\sigma^2$ is dependent on $r$ and that always $\sigma^2=\sigma^2(r)>0$. Additionally, suppose for the time being that the function $\sigma^2(r)$ is finitely upper bounded as $r\rightarrow\infty$. Define \begin{equation} \ln_+ x=\left\{\!\!\begin{array}{cl}\max\{0,\ln x\} & \mbox{if }x>0,\\ 0 & \mbox{if }x=0. \end{array}\right. \label{ln+} \end{equation} By Eq.~\eqref{C1} we now have \begin{align*} C&=\sum_{k=0}^\infty\frac{1}{2}\ln_+\left(\frac{\sigma^2(r)}{\theta^2}\lambda_k^{(r)}\right)\\ &=\sum_{k=0}^\infty a(r)g(b(r)\lambda_k^{(r)}), \end{align*} where $a(r)=1$, $b(r)=\sigma^2(r)/\theta^2$, $g(x)=\frac{1}{2}\ln_+x,x\in[0,\Delta]$, and $\Delta$ is chosen so that $\Lambda_p b(r)\le\Delta<\infty$ when $r$ is large enough, $\Lambda_p$ being the constant \eqref{Lambda}. This choice is possible since $\sigma^2(r)$ remains bounded as $r\rightarrow\infty$; w.l.o.g., we assume $\Lambda_p b(r)\in[0,\Delta]$ for \emph{all} $r\ge1$. 
Then, by Theorem~\ref{SzegoTh} it follows that $C=C(r)$ satisfies \begin{align} C&\doteq\frac{1}{2\pi}\iint \frac{1}{2}\ln_+\left(\frac{\sigma^2(r)}{\theta^2}\,|p_r(x,\xi)|^2 \right)\,\d x\,\d\xi \nonumber\\ &=\frac{1}{2\pi}\iint\frac{1}{2} \ln\!\left[1+\frac{\left(\frac{\sigma^2(r)}{2\pi}-N_r(t,\omega)\right)^+}{N_r(t,\omega)} \right]\!\d t\,\d\omega, \label{C2} \end{align} where $N_r(t,\omega)=\frac{\theta^2}{2\pi}\,|p_r(t,\omega)|^{-2}$. Next, rewrite Eq.~\eqref{def_sigma} as \[ S=\sum_{k=0}^\infty\sigma^2(r)\left(1 -\frac{1}{\frac{\sigma^2(r)}{\theta^2}\lambda_k^{(r)}}\right)^+. \] Put $a(r)=\sigma^2(r)$, $b(r)=\sigma^2(r)/\theta^2$ and define \[ g(x)=\left\{\!\!\begin{array}{cl}\left(1-\frac{1}{x}\right)^+ & \mbox{if }x>0,\\ 0 & \mbox{if }x=0. \end{array}\right. \] Again, w.l.o.g., we may assume that $a(r)$ is bounded and $\Lambda_p b(r)\in[0,\Delta]$ for \emph{all} $r\ge1$ where $\Delta$ is chosen as above. Then, by Theorem~\ref{SzegoTh} it follows that \begin{align} S&\doteq\frac{1}{2\pi}\iint\sigma^2(r)\left(1- \frac{1}{\frac{\sigma^2(r)}{\theta^2}\,|p_r(x,\xi)|^2}\right)^+\,\d x\,\d\xi \nonumber\\ &=\iint \left(\frac{\sigma^2(r)}{2\pi}-N_r(t,\omega)\right)^+\,\d t\,\d\omega. \label{S2} \end{align} Finally, replacement of $\frac{\sigma^2(r)}{2\pi}$ in Eqs.~\eqref{C2}, \eqref{S2} by parameter $\nu$ yields Eqs. \eqref{C}, \eqref{S}. We complete the proof by a bootstrap argument: Take Eq.~\eqref{S} as a true equation and use \emph{it} for the definition of $\sigma^2(=2\pi\nu)$; after a substitution we obtain \[ \iint (\nu-N_1(t,\omega))^+\,\d t\,\d\omega=S(r)/r^2. \] Because of the growth condition imposed on $S$, $\nu=\nu(r)$ stays below a finite upper bound as $r\rightarrow\infty$, and so does $\sigma^2(r)$. Consequently, the previous argument applies and the capacity $C$ is given by Eq.~\eqref{C}. Second, by reason of Theorem~\ref{SzegoTh}, it holds for the actual average input energy $S_{\mathrm{act}}(r)$ $=\sum_{k=0}^\infty(\sigma^2(r)-\nu_k^2)^+$ that $S_{\mathrm{act}}\doteq S$. Thus, the \emph{dotted} equation \eqref{S} applies anyway---even when $S$ is taken as $S_{\mathrm{act}}$. \end{IEEEproof} From the property $p\in\S(\mathbb{R}^2)$ it is easily deduced that, say, \[ N_1(t,\omega)\ge c_1(t^2+\omega^2),\,(t,\omega)\in\mathbb{R}^2, \] where $c_1$ is some positive constant depending on $p$; therefore, condition \eqref{S} certainly makes sense. Note that the use of Landau symbols in Theorem~\ref{WFT1} does not mean that we need to pass to the limit (here, as $r\rightarrow\infty$). Rather, the dotted equations \eqref{C}, \eqref{S} may give useful approximations even when $r$ is finite (but large enough). \begin{example} \label{Example_1} Consider the HS operator $\P$ on $L^2(\mathbb{R})$ with the bivariate Gaussian function \begin{equation} p(t,\omega)=\e^{-\frac{1}{2}(\gamma^{-2}t^2+\gamma^2\omega^2)},\label{Eq_WS_Exp1} \end{equation} $\gamma>0$ fixed, as the Weyl symbol. Then $\P_r,r\ge1,$ has the Weyl symbol $p_r(t,\omega)=\exp[-(\gamma^{-2}t^2+\gamma^2\omega^2)/(2r^2)]$. $\P_r$ is related to the operator $\P _\delta^{(\gamma)}$ of the so-called heat channel \cite{Ham2014} by the equation $\P_r=c\,\P _\delta^{(\gamma)}$, where $\delta=2\,\mathrm{arccoth}(2r^2)>0$ and $c=\cosh(\delta/2)$. 
$\P^{(\gamma)}_\delta$ has the diagonalization \cite{Daub}, \cite{Ham2004}, \cite{Ham2014} \[ (\boldsymbol{P}_\delta^{(\gamma)}f)(t)= \sum_{k=0}^\infty\rho^{k+\frac{1}{2}}\langle f,f_k\rangle f_k(t), \] where $\rho=\e^{-\delta}$ and $f_k(t)=(D_\gamma H_k)(t)\triangleq\gamma^{-\frac{1}{2}}H_k(t/\gamma)$ is the dilated $k$th Hermite function $H_k(t)$; the real-valued eigenfunctions $f_k,\,k=0,1,\ldots,$ form an orthonormal system in $L^2(\mathbb{R})$. Therefore, $\A(r)=\PrStarPr=\P_r^2$ has the eigenvalues $\lambda_k^{(r)}=c^2\rho^{2k+1},k=0,1,\ldots,$ so that the LTV channel \eqref{LTV_Ch} reduces to the discrete vector channel \eqref{LTV_discr} where the noise random variables $Z_k\sim\mathcal{N}(0,\nu_k^2)$ have the variances $\nu_k^2=(\theta/c)^2\rho^{-2k-1}$. Take the average input energy $S(r)=2\pi r^2\theta^2\,\SNR$, where $\SNR>0$ is the signal-to-noise ratio ($2\pi r^2\theta^2$ having the interpretation of the average energy of the relevant noise). In Fig.~\ref{Figure_3}, capacity values labeled ``exact" have been computed numerically by waterfilling on the noise variances, as given in the proof of Theorem~\ref{WFT1}. Note that the results do not depend on $\theta^2$. From Theorem~\ref{WFT1}, after computation of the double integrals and elimination of parameter $\nu$ we get the equation \begin{equation} C\doteq\frac{r^2}{8}\left[\LambertW_0((4\pi\,\SNR -1)/\e)+1\right]^2, \label{Eq_Examp1} \end{equation} where $\LambertW_0$ is the principal branch of the Lambert W function determined by the conditions $\LambertW(x)\exp[\LambertW(x)]=x$ for all $x\in[-\e^{-1},\infty)$ and $\LambertW(0)=0$ \cite{CGHJK}, \cite{Ham2009}. In Fig.~\ref{Figure_3}, the approximate capacity \eqref{Eq_Examp1} is plotted as a function of $r$ (labeled ``waterfilling"). Surprisingly, the approximation is good even for spreading factors close to one. \begin{figure} \centering \includegraphics[width=3.5in]{hamme3.eps} \caption{Exact values and waterfilling approximation of the capacity of the LTV channel of Example~\ref{Example_1}} \label{Figure_3} \end{figure} \end{example}
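For readers who wish to reproduce the two curves of Fig.~\ref{Figure_3}, the following minimal Python sketch (our illustration; it is not code that accompanies this paper) computes the ``exact" capacity by waterfilling on the noise variances $\nu_k^2=(\theta/c)^2\rho^{-2k-1}$ and compares it with the closed form \eqref{Eq_Examp1}; the truncation parameter \texttt{kmax} is an implementation choice and must grow roughly like $3r^2$:

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def capacity_exact(r, snr, theta2=1.0, kmax=1000):
    # eigenvalues of A(r) for the Gaussian Weyl symbol of Example 1;
    # note that they depend on r only (not on gamma)
    delta = 2.0 * np.arctanh(1.0 / (2.0 * r**2))   # arccoth(x) = arctanh(1/x)
    lam = np.cosh(delta / 2.0)**2 * np.exp(-delta * (2 * np.arange(kmax) + 1))
    nu2 = np.sort(theta2 / lam)                    # noise variances, ascending
    S = 2.0 * np.pi * r**2 * theta2 * snr          # average input energy
    # waterfilling: largest K with (S + nu2[0] + ... + nu2[K-1]) / K > nu2[K-1]
    k = np.arange(1, kmax + 1)
    level = (S + np.cumsum(nu2)) / k
    K = np.max(np.where(level > nu2)[0]) + 1
    sigma2 = (S + np.sum(nu2[:K])) / K             # water level sigma^2
    return 0.5 * np.sum(np.log(sigma2 / nu2[:K]))  # capacity in nats

def capacity_closed_form(r, snr):
    w = np.real(lambertw((4.0 * np.pi * snr - 1.0) / np.e))
    return r**2 / 8.0 * (w + 1.0)**2

print(capacity_exact(2.0, 100.0), capacity_closed_form(2.0, 100.0))
\end{verbatim}

For $r=2$ and $\SNR=100$ both routines return approximately $15.7$ nats, consistent with Fig.~\ref{Figure_3}.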
\subsection{Operational Meaning of the Capacity Result} \label{Sec_OperationalMeaning} Theorem~\ref{WFT1} gives the \emph{information} capacity (in the sense of \cite{Cover}) of the LTV channel \eqref{LTV_Ch}. To provide this result with an operational meaning, we need to construct a code in the form of a set of continuous-time signals which achieves a rate arbitrarily close to this capacity, along with constructive methods of encoding and decoding. We use the notation in the proof of Theorem~\ref{WFT1}. For any fixed average input energy $S>0$ and any spreading factor $r\ge1$ held constant, the construction will be based on the eigenfunctions $f_k^{(r)}\!,\,k=0,\ldots,K-1$, of operator $\A(r)=\PrStarPr$, where $K$ is as in Eq.~\eqref{def_sigma}; at the receiver, the corresponding functions $g_k^{(r)}=\P_r f_k^{(r)}\!/[\lambda_k^{(r)}]^{\frac{1}{2}}$ will be used. Since the functions $g_k^{(r)}\!,f_k^{(r)}$ are in the range of the operators $\P_r,\P_r^*$ with Weyl symbols $p_r,\bar{p}_r\in\S(\mathbb{R}^2)$, resp., these functions are rapidly decreasing, $g_k^{(r)}\!,f_k^{(r)}\in\S(\mathbb{R})$. In practice, any finite collection of functions $u_1,\ldots,u_N\in\S(\mathbb{R})$ may be regarded as concentrated on a common bounded interval centered at the origin and almost zero outside. Thus, for the sake of simplicity, we shall assume that $f_k^{(r)}(t)=g_k^{(r)}(t)=0,\,k=0,\ldots,K-1,$ if $|t|\ge d/2$ for some $d\in(0,\infty)$; $d$ will have the meaning of a delay later on. It will be convenient to switch from natural logarithms to logarithms to the base 2 and so from nats to bits. Then, the (information) capacity $C_k$ of the $k$th subchannel, $k=0,\ldots,K-1$, figuring in the sum on the right-hand side of Eq.~\eqref{C1}, reads \[ C_k=\frac{1}{2}\log_2\left(1+\frac{\sigma^2-\nu_k^2}{\nu_k^2}\right)\quad\mathrm{bits}. \] We treat the $K$ subchannels as independent Gaussian channels with the noise variance $\nu_k^2$ each and follow the classical approach of Shannon \cite{Shannon1949}, \cite{Cover}: For the $k$th subchannel, for any rate $R_k$ with $0<R_k<C_k$ and any $\epsilon>0$ generate a codebook $\{\boldsymbol{a}_k(m)=(a_{k0}(m),\ldots,a_{k,L_k-1}(m));\,m=1,2,\ldots,M_k\triangleq 2^{\lfloor R_k L_k\rfloor}\}\subseteq\mathbb{R}^{L_k}$ with the property that 1) $a_{kl}(m),l=0,\ldots,L_k-1$, are realizations of i.i.d. random variables $\sim\mathcal{N}(0,\sigma^2-\nu_k^2)$ and 2) the probability of a maximum likelihood decoding error is smaller than $\epsilon$ for every transmitted codeword $\boldsymbol{a}_k(m),m=1,2,\ldots,M_k$. We may assume that $L_0=\ldots=L_{K-1}=L$. For every message $\boldsymbol{m}=(m_0,\ldots,m_{K-1})\in\{1,2,\ldots,M_0\}\times\ldots\times\{1,2,\ldots,M_{K-1}\}$ form the pulses \[ u_l(\boldsymbol{m},t-ld)=\sum_{k=0}^{K-1}a_{kl}(m_k)f_k^{(r)}(t-ld),\,l=0,\ldots,L-1, \] and take the pulse train \begin{equation} u(\boldsymbol{m},t)=\sum_{l=0}^{L-1}u_l(\boldsymbol{m},t-ld) \label{Eq_umt} \end{equation} as input signal to the \emph{physical} channel. During transmission over that channel, each pulse $u_l(\boldsymbol{m},t-ld)$ undergoes a distortion modeled by the LTV filter \eqref{Op_Pr}, and results in the deformed pulse \[ v_l(\boldsymbol{m},t-ld)=\sum_{k=0}^{K-1}[\lambda_k^{(r)}]^{\frac{1}{2}}a_{kl}(m_k)g_k^{(r)}(t-ld). \] Thus, the output signal of the physical channel is \[ y(\boldsymbol{m},t)=\sum_{l=0}^{L-1}v_l(\boldsymbol{m},t-ld)+n(t), \] where $n(t)$ is a realization of white Gaussian noise as in the LTV channel model \eqref{LTV_Ch}. For any of the $K$ subchannels, pass the signal $y(\boldsymbol{m},t)$ through the matched filter with impulse response $h_k(t)$ as given in Section~\ref{Sec_III}; sample the matched filter output signal at time $ld,\,l=0,\ldots,L-1$. Since $y(\boldsymbol{m},t)=v_l(\boldsymbol{m},t-ld)+n(t)$ if $|t-ld|\le d/2$, we again obtain estimates $\hat{a}_{kl}(m_k)=a_{kl}(m_k)+z_{kl}$ for $a_{kl}(m_k)$, where $z_{kl}$ are realizations of independent Gaussian random variables $\sim\mathcal{N}(0,\nu_k^2)$. Maximum likelihood decoding of the perturbed codeword $\tilde{\boldsymbol{a}}_k(m_k)\triangleq(\hat{a}_{k0}(m_k),\ldots,\hat{a}_{k,L-1}(m_k))$ yields the correct codeword $\boldsymbol{a}_k(m_k)$ (thus, $m_k$) with a probability of error smaller than $\epsilon$. At the transmitter, choose the message $\boldsymbol{m}$ at random such that each component $m_k$ has probability $M_k^{-1}$ and is independent of the other components; convey $\boldsymbol{m}$ through a pulse train as described. Then---treating each of the $K$ subchannels separately---the total rate $R_{\mathrm{tot}}=\frac{1}{L}\sum_{k=0}^{K-1}\lfloor R_kL\rfloor$ (in bits per pulse) is attained with a total probability of a decoding error smaller than $K\epsilon$.
When $L\rightarrow\infty$, Shannon's theory \cite{Shannon1949} ensures that $\epsilon$ can be made as small as we wish. Moreover, $R_{\mathrm{tot}}\rightarrow R\triangleq R_0+\ldots+R_{K-1}$ and, by the law of large numbers, the average input energy $\frac{1}{L}\sum_{l=0}^{L-1}\sum_{k=0}^{K-1}a_{kl}^2(m_k)$ tends to $\sum_{k=0}^{K-1}(\sigma^2-\nu_k^2)=S$ with probability 1. Finally, since the rate $R$ may be chosen arbitrarily close to the capacity $C=C_0+\ldots+C_{K-1}$ (at the expense of a larger length $L$ of the pulse train), the construction of the desired coding system is complete. \begin{example} \label{Example_2} Consider the LTV channel \eqref{LTV_Ch} with the operator $\P_r=c\,\P^{(\gamma)}_\delta$ of Example~\ref{Example_1}. The eigenfunctions of operator $\A(r)=\PrStarPr$ are the functions $f_k^{(r)}(t)=f_k(t)=(D_\gamma H_k)(t),\,k=0,1,\ldots,$ (here, not depending on $r$); the functions $g_k^{(r)}$ in the SVD~\eqref{SVD_r} of $\P_r$ coincide with $f_k^{(r)}$ for all $k$. Now, choose specifically $r=2,\,\gamma=1/10$ and take the average input energy $S=2\pi r^2\theta^2\,\SNR$ (as generally assumed in Example~\ref{Example_1}) with $\SNR=100$ and noise PSD $N_0/2=\theta^2=0.01$ (unit omitted). Waterfilling on the noise variances $\nu_k^2=(\theta/c)^2\rho^{-2k-1}(\rho=\e^{-\delta}),\,k=0,1,\ldots,$ as given in the proof of Theorem~\ref{WFT1}, yields the number of $K=11$ subchannels. In Fig.~\ref{Figure_4}(a), the first $K$ eigenfunctions $f_k^{(r)}(t)=(D_\gamma H_k)(t),\,k=0,\ldots,K-1,$ are displayed. The portion of an input pulse train plotted in Fig.~\ref{Figure_4}(b) has been computed according to Eq.~\eqref{Eq_umt} with the delay parameter $d=6a$, $a=\sqrt{2}r\gamma$, by numerical simulation of the involved random variables. Observe that there is no appreciable overlap of individual pulses. Each pulse transmits 22.6~bits ($=$15.7~nats, cf.~Fig.~\ref{Figure_3}) of information arbitrarily reliably [provided that the length of the pulse train(s) becomes larger and larger]. The meaning of parameter $a$ will be explained in Section~\ref{Sec_VII}. \begin{figure} \centering \includegraphics[width=3.5in]{hamme4.eps} \caption{(a) First eleven eigenfunctions, dilated Hermite functions, for the LTV channel of Example~\ref{Example_2}. (b) Portion of an input pulse train (centered at the origin) to the physical channel and the corresp. distorted output (without noise) of the same example. Time $t$ is measured in some unit of time; on the $y$-axis, also the physical dimension is omitted.} \label{Figure_4} \end{figure} \end{example} \subsection{Comparison with Classical Work} Gallager's theorem \cite[Th.~8.5.1]{Gallager} gives the capacity of LTI channels under very general assumptions. In the case of an LTI filter with a bounded and square-integrable frequency response $H(\omega)=\int_{-\infty}^\infty \e^{-\i\omega t}h(t)\,\d t$ (a.k.a. transfer function; $h\not=0$ is the impulse response) and additive white Gaussian noise of PSD $N_0/2=\theta^2>0$ at the filter output, Gallager's theorem states that the capacity (in bits per second) is given parametrically by \begin{align} C&= \frac{1}{2\pi}\int_{-\infty}^\infty\frac{1}{2} \log_2\left(1+\frac{(\nu-N(\omega))^+}{N(\omega)}\right)\,\d\omega \label{C_LTI}\\ S&=\int_{-\infty}^\infty(\nu-N(\omega))^+\,\d\omega, \label{S_LTI} \end{align} where $\nu$ is the parameter, $S$ is average input \textit{power}, and \begin{equation} N(\omega)=\frac{\theta^2}{2\pi}\,|H(\omega)|^{-2}. 
\label{N} \end{equation} We observe a perfect formal analogy between the waterfilling formulas \eqref{C_LTI}, \eqref{S_LTI} and those in Theorem~\ref{WFT1}. Moreover, the functions \eqref{N} and \eqref{N1} are the reciprocal squared modulus of the (time--frequency) transfer function of the respective filter times the same noise figure. Eqs.~\eqref{C}, \eqref{S} may also be used, of course, for a parametric representation of the function $C(S)$ with $\nu$ as parameter. \section{Reverse Waterfilling Theorem for Related Nonstationary Sources} \label{Sec_VI} In the present section, we consider the nonstationary source formed by the nonstationary zero-mean Gaussian process given by the Karhunen--Lo\`{e}ve expansion \begin{equation} X(t)=\sum_{k=0}^\infty X_k\,g_k^{(r)}(t),\,t\in\mathbb{R},\label{KL} \end{equation} where the coefficients $X_k,\,k=0,1,\ldots,$ are independent random variables $\sim\mathcal{N}(0,\sigma_k^2)$ with the variances $\sigma_k^2=\sigma^2\lambda_k^{(r)},\,\sigma>0$. This is the response of the LTV filter~\eqref{SVD_r} to white Gaussian noise with PSD $N_0/2=\sigma^2$; cf. \cite{Gallager}. This source is depicted in Fig.~\ref{Fig_2}. \subsection{Wigner--Ville Spectrum of the Source} \label{Sec_VI_A} In the present subsection, the spreading factor $r\ge1$ is initially not essential, hence set to one and not displayed. The Wigner--Ville spectrum (WVS) $\Phi(t,\omega)$ of the nonstationary random process $\{X(t),t\in\mathbb{R}\}$ in \eqref{KL} describes its density of (mean) energy in the time--frequency plane \cite{FlandrinMartin}. The WVS may be regarded as the nonstationary counterpart to the PSD of a stationary random process. It is defined by means of the Wigner distribution $Wx$ of the realizations $x(t)$ of $\{X(t)\}$ and then taking the expectation \cite{FlandrinMartin}. Since $x(t)$ is almost surely in $L^2(\mathbb{R})$, we may write \[ (Wx)(t,\omega)=\frac{1}{2\pi}\int_{-\infty}^\infty \e^{-\i\omega t'} x\left(t+\frac{t'}{2}\right)\overline{x\left(t-\frac{t'}{2}\right)}\d t'. \] The WVS $\Phi(t,\omega)=\mathsf{E}[(WX)(t,\omega)]$ of the random process $\{X(t)\}$ is then given by \begin{equation} \Phi(t,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\e^{-\i\omega t'} \mathscr{R}\left(t+\frac{t'}{2},t-\frac{t'}{2}\right)\d t',\label{W-Kh} \end{equation} where $\mathscr{R}(t_1,t_2)=\mathsf{E}[X(t_1)\overline{X(t_2)}]$ is the autocorrelation function. Appropriately enough, identities such as \eqref{W-Kh} are called a nonstationary Wiener--Khinchine theorem in \cite{Kozek96}. A computation yields \[ \mathscr{R}(t_1,t_2)=\sigma^2\sum_{k=0}^\infty\lambda_kg_k(t_1)\overline{g_k(t_2)} =\sigma^2K_{\Atilde}(t_1,t_2), \] where $K_{\Atilde}$ is the kernel of the operator $\Atilde=\P\P^*$, see Eq.~\eqref{K_PPstar}. By means of the Wigner transform \eqref{WT}, the Weyl symbol of $\Atilde$ becomes \begin{equation} \sigma_{\Atilde}(t,\omega)=\int_{-\infty}^\infty \e^{-\i\omega t'}K_{\Atilde}\left(t+\frac{t'}{2}, t-\frac{t'}{2}\right)\,\d t'. \label{Eq_sigmaAtilde} \end{equation} Comparing Eqs.~\eqref{Eq_sigmaAtilde} and \eqref{W-Kh} we thus obtain \[ \Phi(t,\omega)=\frac{\sigma^2}{2\pi}\cdot\sigma_{\Atilde}(t,\omega). \] In the general case $r\ge1$, the WVS depends on $r$ and we shall write $\Phi(r,\cdot,\cdot)$ for it; then, the latter equation becomes \begin{equation} \Phi(r,t,\omega)=\frac{\sigma^2}{2\pi}\cdot\sigma_{\Atilde(r)}(t,\omega). 
\label{WVS} \end{equation} By use of the trace rule \eqref{Eq_tracerule} and Eq.~\eqref{ID1} (rewritten for $\Atilde$ and then generalized to $r\ge1$) we conclude that \begin{align*} \iint\Phi(r,t,\omega)\,\d t\,\d\omega &=\sigma^2\cdot\frac{1}{2\pi}\iint\sigma_{\Atilde(r)}(t,\omega)\,\d t\,\d \omega\\ &=\sigma^2\cdot\tr\,\Atilde(r)\\ &=\sum_{k=0}^\infty\sigma^2\lambda_k^{(r)}, \end{align*} where the last infinite sum is indeed the average energy $E(r)=\sum_{k=0}^\infty\sigma_k^2$ of the realizations $x(t)$ of the random process \eqref{KL}; Eq.~\eqref{ID2} yields \begin{equation} E(r)=c_pr^2\sigma^2. \label{E} \end{equation} By means of Lemma~\ref{power_lemma} we get from \eqref{WVS} for the WVS the asymptotic expansion \[ \Phi(r,t,\omega)\sim\frac{\sigma^2}{2\pi}\left(|p_r(t,\omega)|^2+ \sum_{k=1}^\infty r^{-2k}\tilde{a}_k(t/r,\omega/r)\right), \] where $\tilde{a}_k\in\S(\mathbb{R}^2)$. The expression $\frac{\sigma^2}{2\pi}\,|p_r(t,\omega)|^2$---call it principal \textit{term} of the WVS $\Phi(r,t,\omega)$---will play a prominent role in the next subsection. \begin{remark} Asymptotically, the principal term might be a good substitute for the WVS $\Phi(r,t,\omega)$ itself. It is not only similar in shape, but it also gives the same average energy [see \eqref{ID2}] and is non-negative throughout (cf. \cite{Flandrin}). \end{remark} \subsection{Reverse Waterfilling in the Time--Frequency Plane} \label{Sec_VI_B} Substitute the continuous-time Gaussian process $\{X(t),\,t\in\mathbb{R}\}$ in (\ref{KL}) by the sequence of coefficient random variables $\boldsymbol{X}=X_0,X_1,\ldots\,$. For an estimate $\boldsymbol{\hat{X}}=\hat{X}_0,\hat{X}_1,\ldots$ of $\boldsymbol{X}$ we take the squared-error distortion $D=\mathsf{E}[\sum_{k=0}^\infty(X_k-\hat{X}_k)^2]$ as distortion measure. In our context, $D$ depends on $r$ and it always holds $0<D(r)\le E(r)$, where $E(r)$ is as in \eqref{E}. \subsubsection{Computation of the rate distortion function} In the next theorem, the function $\Phi_r,r\ge1,$ is defined by $\Phi_r(t,\omega)=\Phi_1(t/r,\omega/r)$ where \[ \Phi_1(t,\omega)=\frac{\sigma^2}{2\pi}\,|p(t,\omega)|^2, \] $p\in\S(\mathbb{R}^2)$ being the Weyl symbol of operator $\P$. Recall that \[ \iint_{\mathbb{R}^2}\Phi_r(t,\omega)\,\d t\,\d\omega=E(r). \] The Landau symbol $\Omega(\cdot)$ is defined for any two functions as in Def.~\ref{def_2} as follows: $A(x)=\Omega(B(x))$ as $x\rightarrow\infty$ if $B(x)>0$ and $\liminf_{x\rightarrow\infty}A(x)/B(x)>0$. \begin{theorem}\label{WFT2} Assume that the foregoing average distortion $D$ depends on $r$ such that $D(r)=\Omega(r^2)$ as $r\rightarrow\infty$. Then the rate distortion function $R = R(D)$ for the nonstationary source (\ref{KL}) is given by \begin{equation} R\doteq\frac{1}{2\pi}\iint_{\mathbb{R}^2}\max\left\{0,\frac{1}{2} \ln\frac{\Phi_r(t,\omega)}{\lambda}\right\}\,\d t\,\d\omega, \label{R} \end{equation} where $\lambda$ is chosen so that \begin{equation} D\doteq\iint_{\mathbb{R}^2} \min\left\{\lambda,\Phi_r(t,\omega)\right\}\,\d t\,\d\omega. \label{D} \end{equation} The rate is measured in nats per realization of the source. 
\end{theorem} \begin{IEEEproof} The reverse waterfilling argument for a finite number of independent Gaussian sources \cite{Berger}, \cite{Cover} carries over to our situation without changes, resulting in a finite collection of Gaussian sources $X_0,\ldots,X_{K-1}$ where $K=\max\{k\in\mathbb{N};\sigma_{k-1}^2>\theta^2\}$ and the ``water table" $\theta^2$ is chosen as the smallest positive number satisfying the condition \begin{equation} D=\sum_{k=0}^\infty \min\{\theta^2,\sigma_k^2\}. \label{D_def} \end{equation} We exclude the trivial case $D=E(r)$. Then $K\ge1$ and the necessary rate $R=R(D)$ for the parallel Gaussian source $(X_0,\ldots,X_{K-1})$ amounts to \cite[Th.~10.3.3]{Cover} \begin{equation} R = \sum_{k=0}^{K-1}\frac{1}{2}\ln\frac{\sigma_k^2}{\theta^2}\quad\mathrm{nats}. \label{R_def} \end{equation} Now we apply the above Szeg\H{o} theorem, Theorem~\ref{SzegoTh}. Again, $\theta^2$ depends on $r$. Suppose for the time being that $\theta^2=\theta^2(r)$ is finitely upper bounded for $r\ge1$ and positively lower bounded as $r\rightarrow\infty$. By Eq.~\eqref{D_def} we have \begin{align*} D&=\sum_{k=0}^\infty\theta^2(r)\min\left\{1,\frac{\sigma^2}{\theta^2(r)}\lambda_k^{(r)}\right\}\\ &=\sum_{k=0}^\infty a(r)g(b(r)\lambda_k^{(r)}), \end{align*} where $a(r)=\theta^2(r)$, $b(r)=\sigma^2/\,\theta^2(r)$, $g(x)=\min\{1,x\}$, $x\in[0,\Delta]$, and $\Delta$ is chosen so that $\Lambda_p b(r)\le\Delta<\infty$ when $r$ is large enough, $\Lambda_p$ being the constant \eqref{Lambda}. This choice is possible since $\theta^2(r)$ is positively lower bounded as $r\rightarrow\infty$; w.l.o.g., we assume here and hereafter that $\Lambda_p b(r)\in[0,\Delta]$ for \emph{all} $r\ge1$. Already, $a(r)$ is bounded for $r\ge1$. Then, from Theorem~\ref{SzegoTh} we infer that \begin{align} D&\doteq\frac{1}{2\pi}\iint\theta^2(r) \min\left\{1,\frac{\sigma^2}{\theta^2(r)}\,|p_r(x,\xi)|^2\right\}\,\d x\,\d\xi \nonumber \\ &=\iint\min\left\{\frac{\theta^2(r)}{2\pi},\Phi_r(t,\omega)\right\}\,\d t\,\d\omega, \label{D2} \end{align} where $\Phi_r(t,\omega)=\frac{\sigma^2}{2\pi}\,|p_r(t,\omega)|^2$. Next, rewrite Eq.~\eqref{R_def} as \[ R=\sum_{k=0}^\infty\frac{1}{2}\ln_+\left(\frac{\sigma^2}{\theta^2(r)}\lambda_k^{(r)}\right), \] where $\ln_+$ is as defined in \eqref{ln+}. Taking $a(r)=1$, $b(r)=\sigma^2/\,\theta^2(r)$, $g(x)=\frac{1}{2}\ln_+x,x\in[0,\Delta]$, $\Delta$ chosen as before, by Theorem~\ref{SzegoTh} it follows that \begin{align} R&\doteq\frac{1}{2\pi}\iint \frac{1}{2}\ln_+\left(\frac{\sigma^2}{\theta^2(r)}\,|p_r(x,\xi)|^2\right)\, \d x\,\d\xi \nonumber \\ &=\frac{1}{2\pi}\iint\frac{1}{2} \ln_+\left[\frac{\Phi_r(t,\omega)}{\frac{\theta^2(r)}{2\pi}}\right]\,\d t\,\d\omega. \label{R2} \end{align} Finally, replacement of $\frac{\theta^2(r)}{2\pi}$ in Eqs.~\eqref{R2}, \eqref{D2} by the parameter $\lambda$ yields Eqs. \eqref{R}, \eqref{D}. Again, we complete the proof by a bootstrap argument: Take Eq.~\eqref{D} as a true equation and use \emph{it} for the definition of $\theta^2(=2\pi\lambda)$; after a substitution we obtain \[ \iint\min\{\lambda,\Phi_1(t,\omega)\}\,\d t\,\d\omega=D(r)/r^2. \] Because of the growth condition imposed on $D$, $\lambda=\lambda(r)$ stays above a positive lower bound as $r\rightarrow\infty$ and so does $\theta^2(r)$. Moreover, always $\theta^2(r)\le2\pi\lambda_{\mathrm{max}}$ may be chosen where $\lambda_{\mathrm{max}}\triangleq\max_{t,\omega}\Phi_1(t,\omega)$. The rest of the argument follows along the same lines as in the proof of Theorem~\ref{WFT1}. 
\end{IEEEproof} \begin{example} \label{Example_3} Consider the same ``Gaussian" LTV filter (operator) $\P$ with $\P_r=c\,\P^{(\gamma)}_\delta$ as in Example~\ref{Example_1}. The coefficients $X_0,X_1,\ldots$ of the random process $\{X(t)\}$ in \eqref{KL} then form a sequence of independent random variables $\sim\mathcal{N}(0,\sigma_k^2)$ with the variances $\sigma_k^2=(c\sigma)^2\rho^{2k+1}$ (cf. \cite{Ham2014}). For any average energy $E(r)=2^{-1}r^2\sigma^2$ of $\{X(t)\}$ define the distortion by $D(r)=E(r)/\SDR$, where the signal-to-distortion ratio $\SDR$ is at least one. In Fig.~\ref{Figure_5}, ``exact" rates $R$ have been computed numerically by reverse waterfilling on the signal variances, as given in the proof of Theorem~\ref{WFT2}. From the two equations in Theorem~\ref{WFT2} we obtain by elimination of parameter $\lambda$ the closed-form equation \begin{equation} R\doteq \frac{r^2}{8}\left[\LambertW_{-1}(-1/(\e\cdot\SDR))+1\right]^2, \label{Eq_Examp2} \end{equation} where $\LambertW_{-1}$ is the branch of the Lambert W function determined by the conditions $\LambertW(x)\exp[\LambertW(x)]=x$ for all $x\in[-\e^{-1},0)$ and $\LambertW(x)\rightarrow -\infty$ as $x\rightarrow 0-$ \cite{CGHJK}, \cite{Ham2009}. In Fig.~\ref{Figure_5}, the approximate rate \eqref{Eq_Examp2} is plotted against $r$ (labeled ``reverse waterfilling"). Again, we observe a surprisingly good approximation even for spreading factors close to one. \begin{figure} \centering \includegraphics[width=3.5in]{hamme5.eps} \caption{Exact values and reverse waterfilling approximation of the rate for the nonstationary source of Example~\ref{Example_3}} \label{Figure_5} \end{figure} \end{example} \subsubsection{Comparison with classical work} In Theorem~\ref{WFT2}, Eqs. \eqref{R}, \eqref{D} may also be used for a parametric representation of the rate distortion function $R(D)$. In parametric form, $R(D)$ has been given by Berger \cite{Berger} for a broad class of stationary random processes. In the latter parametric interpretation, Eq.~\eqref{R} is in perfect analogy to \cite[Eq.~(4.5.52)]{Berger} [with the (principal term of) WVS instead of the PSD], likewise Eq.~\eqref{D} with regard to \cite[Eq.~(4.5.51)]{Berger} (apart from a factor $\frac{1}{2\pi}$). \section{A Lower Bound for the Spreading Factor} \label{Sec_VII} Until now there has been no indication on how large the spreading factor $r$ should at least be chosen so that the dotted equations in the above waterfilling theorems yield useful approximations. The purpose of the present section is to identify a presumed lower bound for $r$. For any $r\ge1$ define the operator $\Ahat(r):L^2(\mathbb{R})\rightarrow L^2(\mathbb{R})$ by the Weyl symbol $\sigma_{\Ahat(r)}(x,\xi)=2\pi\rho(r,x,\xi)$, where \begin{equation} \rho(r,x,\xi)=\frac{\sigma_{\Atilde(r)}(x,\xi)}{\iint\sigma_{\Atilde(r)}(x',\xi')\,\d x'\,\d\xi'}\,. \label{rho} \end{equation} Then $\Ahat(r)$ is self-adjoint, positive, of trace class with the trace $\tr\,\Ahat(r)=\iint \rho(r,x,\xi)\,\d x\,\d\xi=1$. 
Thus, $\Ahat(r)$ is a density operator and the Robertson--Schr\"{o}dinger uncertainty inequality (RSUI) applies \cite{Gosson11}; it reads: For any density operator on $L^2(\mathbb{R})$ with a Weyl symbol of the form $2\pi\rho(x,\xi)$, define the moments (for convenience, put $x_1=x$, $x_2=\xi$) \begin{gather} \mu_i=\iint x_i\rho(x_1,x_2)\,\d x_1\,\d x_2, \label{mu}\\ \sigma_{ij}=\iint (x_i-\mu_i)(x_j-\mu_j)\rho(x_1,x_2)\,\d x_1\,\d x_2 \label{sig} \end{gather} and write $\sigma_i^2=\sigma_{ii},\,i,j=1,2.$ Then it holds \begin{equation} \sigma_1^2\sigma_2^2\ge \sigma_{12}^2+\frac{\hbar^2}{4}, \label{RSUI} \end{equation} where $\hbar$ is the reduced Planck constant (which in our context is always set to one). Now replace $\rho(x,\xi)$ with $\rho(r,x,\xi)$; since $\rho(r,x,\xi)$ depends on $r$, we shall write $\mu_i(r),\,\sigma_i^2(r),\,\sigma_{ij}(r)$ for its moments \eqref{mu}, \eqref{sig}. Although $\rho(r,x,\xi)$ is not a true probability density function (PDF), since it may assume negative values, its covariance matrix \[ \Sigma(r)=\left(\begin{array}{cc} \sigma_1^2(r) & \sigma_{12}(r)\\ \sigma_{12}(r) & \sigma_2^2(r) \end{array}\right) \] is always positive definite (as is the covariance matrix of any density operator \cite{Narco}). The operators $\P_r,r\ge1,$ may also be viewed as time--frequency localization operators (TFLOs), comprising in part the TFLOs introduced by Daubechies \cite{Daub}.\footnote{Actually, the operator $\P^{(\gamma)}_\delta$ appearing in Example~\ref{Example_1} originates in such a TFLO (also called a Daubechies operator) with Gaussian weight in time and frequency; see \cite{Daub}, \cite{Ham2004}.} Since $\rho(r,t,\omega)$ is the normalized WVS $\Phi(r,t,\omega)$ discussed in Section~\ref{Sec_VI_A} [cf. Eq.~\eqref{WVS}], it is natural to define the ellipse of concentration (EoC) of $\P_r$ as the boundary of the region in phase space described by the inequality \begin{equation} \big(x-\mu_1(r),\,\xi-\mu_2(r)\big)\,\Sigma(r)^{-1}\! \begin{pmatrix}x-\mu_1(r) \\ \xi-\mu_2(r)\end{pmatrix} \le4 \label{EoC} \end{equation} and having the property that the uniform distribution on it has the same first and second moments as the PDF at hand \cite{Cramer}. Since the EoC \eqref{EoC} has the area $A_{\mathrm{c}}=\pi\sqrt{\det(4\Sigma(r))}$, the RSUI can now be recast in the inequality $A_{\mathrm{c}}=4\pi\sqrt{\det\Sigma(r)}\ge4\pi\sqrt{\hbar^2/4}=2\pi$, or phrased in words: \emph{The area of the EoC of operator $\P_r,r\ge1,$ is at least $2\pi$.} However, this is not a useful criterion since it holds for any $r$; to get a useful criterion, consider the (true) PDF \begin{equation} \rho_r(x,\xi)\triangleq\frac{|p_r(x,\xi)|^2}{\iint|p_r(x',\xi')|^2\,\d x'\d\xi'}, \label{rho_pr} \end{equation} i.e., the normalized principal symbol of $\Atilde(r)$ [or $\A(r)$]. Note that the denominators in \eqref{rho} and \eqref{rho_pr} coincide, \[ \iint\sigma_{\Atilde(r)}(x,\xi)\,\d x\,\d\xi=\iint|p_r(x,\xi)|^2\,\d x\,\d\xi, \] which is a simple consequence of Eq.~\eqref{ID1} (in terms of $\Atilde$), Eq.~\eqref{Eq_tracerule} and a generalization to $r\ge1$; moreover, due to Lemma~\ref{power_lemma} it holds that \begin{equation} \sigma_{\Atilde(r)}(x,\xi)=|p_r(x,\xi)|^2+r^{-2}R_1(r,x/r,\xi/r). \label{Eq_sigmaAtilde_r} \end{equation} The rationale is now as follows: When $r$ is large, $\rho_r(x,\xi)$ will be ``close to" $\rho(r,x,\xi)$; then the RSUI \eqref{RSUI} for $\rho(r,x,\xi)$ may be transposed to $\rho_r(x,\xi)$, resulting in a constraint on $r$. 
With this in mind, replace in \eqref{mu}, \eqref{sig} function $\rho(x_1,x_2)$ with $\rho_1(x_1,x_2)$ and denote the new values for $\mu_i,\,\sigma_i^2,\,\sigma_{ij}$ by $m_i,\,s_i^2,\,s_{ij}$, respectively. By means of Eq.~\eqref{Eq_sigmaAtilde_r} and observing that the common denominator in \eqref{rho}, \eqref{rho_pr} evaluates to $2\pi c_pr^2$, we then obtain $\mu_i(r)=m_ir+o(1)$ and by this $\sigma_{ij}(r)=s_{ij}r^2+o(r)$. Plugging the latter in the RSUI \eqref{RSUI} for $\rho(r,x,\xi)$ finally results in the desired constraint \begin{equation} r^2\ge\frac{1}{2\sqrt{s_1^2s_2^2-s_{12}^2}}+o(1). \label{criterion} \end{equation} Ineq.~\eqref{criterion} suggests a lower bound for the spreading factor $r$, thus providing the wanted criterion (in practice, the error term would be neglected). Note that asymptotically, i.e., as $r\rightarrow\infty$, Ineq.~\eqref{criterion} (with vanishing error term) becomes a necessary condition. \begin{example} \label{Example_4} Consider the HS operator $\P$ on $L^2(\mathbb{R})$ with the Weyl symbol $p\in\S(\mathbb{R}^2)$ as given in Eq.~\eqref{Eq_WS_Exp1} of Example~\ref{Example_1} for any fixed parameter $\gamma>0$. Then the Weyl symbol $p_r$ of operator $\P_r,r\ge1,$ satisfies $\iint|p_r(x,\xi)|^2\,\d x\,\d\xi=\pi r^2$, so that the PDF \eqref{rho_pr} becomes \begin{equation} \rho_r(x,\xi)=\frac{1}{\pi r^2}\,\e^{-\frac{1}{r^2}(\gamma^{-2}x^2+\gamma^2\xi^2)}. \label{Eq_rho_pr_HC} \end{equation} An evaluation of the integrals in \eqref{mu}, \eqref{sig} yields $m_1=m_2=0,$ $s_1^2=\gamma^2/2,\,s_2^2=\gamma^{-2}/2 $ and $s_{12}=s_{21}=0$. Consequently, Ineq.~\eqref{criterion} turns into \[ r^2\ge1+o(1), \] which, neglecting the error term, means no restriction at all. In fact, in Fig.~\ref{Figure_3} and Fig.~\ref{Figure_5} the approximation is already acceptable for spreading factors close to one. Finally, we add the explanation of the parameter $a$ occurring in Example~\ref{Example_2} of Section~\ref{Sec_OperationalMeaning}. To this end, we determine the EoC \eqref{EoC} of the above operator $\P_r,\,r\ge1,$ by the use of the identity $\P_r=c\,\P^{(\gamma)}_\delta$ (see Example~\ref{Example_1}). The Weyl symbol of the operator $\P^{(\gamma)}_\delta\!\circ(\P^{(\gamma)}_\delta)^*=\P^{(\gamma)}_{2\delta}$ is given in closed form in \cite{Ham2014}. By this means, Eq.~\eqref{rho} readily becomes \[ \rho(r,x,\xi)=\frac{1}{\pi\alpha\beta}\,\exp\left(-\frac{x^2}{\alpha^2}-\frac{\xi^2}{\beta^2}\right), \] where $\alpha=\gamma\sqrt{\coth\delta},\,\beta=\gamma^{-1}\sqrt{\coth\delta}$. The \emph{exact} EoC of the operator $\P_r$ is therefore the ellipse in phase space with the semi-axes $a_{\mathrm{x}}=\sqrt{2}\alpha,\,b_{\mathrm{x}}=\sqrt{2}\beta$ and the equation \[ x^2/a_{\mathrm{x}}^2+\xi^2/b_{\mathrm{x}}^2=1. \] From the PDF~\eqref{Eq_rho_pr_HC}, we obtain asymptotically, i.e., as $r\rightarrow\infty$, the approximate EoC with semi-axes $a=\sqrt{2}r\gamma,\,b=\sqrt{2}r/\gamma$. For instance, in the case of $r=2,\gamma=1/10$ we find the rather good approximations $a=0.2828,\,b=28.28$ (units omitted) of the exact values $a_{\mathrm{x}}=0.2850,\,b_{\mathrm{x}}=28.50$ (which is somewhat surprising since $r=2$ is still small). In Example~\ref{Example_2}, the foregoing value of $a$ has been used as an estimate of the effective half duration of a pulse. 
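As a quick numerical sanity check (a Python sketch of our own, using numpy only), the exact and approximate semi-axes above follow directly from $\delta=2\,\mathrm{arccoth}(2r^2)$:

\begin{verbatim}
import numpy as np

r, gamma = 2.0, 0.1
delta = 2.0 * np.arctanh(1.0 / (2.0 * r**2))  # delta = 2 arccoth(2 r^2)
coth = 1.0 / np.tanh(delta)
a_x, b_x = np.sqrt(2.0 * coth) * gamma, np.sqrt(2.0 * coth) / gamma
a, b = np.sqrt(2.0) * r * gamma, np.sqrt(2.0) * r / gamma
print(a_x, b_x)   # exact:      0.2850..., 28.50...
print(a, b)       # asymptotic: 0.2828..., 28.28...
\end{verbatim}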
\end{example} \section{Conclusion} Waterfilling theorems in the time--frequency plane for the capacity of an LTV channel with an average energy constraint and the rate distortion function for a related nonstationary source with a squared-error distortion constraint have been stated and rigorous proofs have been given. The waterfilling theorem for the LTV channel has been formulated in terms of the reciprocal squared modulus of the spread Weyl symbol of the LTV filter (times a noise figure), whereas in the reverse waterfilling theorem for the nonstationary source simply the squared modulus of the spread Weyl symbol (times a signal figure) has been used. The latter expression has been related to the WVS of the nonstationary source and recognized as its principal term. The LTV filter, initially an arbitrary HS operator, was later restricted to an operator with a Weyl symbol in the Schwartz space of rapidly decreasing functions. This smoothness assumption was a prerequisite for a Szeg\H{o} theorem upon which the proofs of both waterfilling theorems rested in an essential way. A self-contained proof of the Szeg\H{o} theorem has been given. The formulas in the waterfilling theorems depend on the spreading factor and are asymptotic in nature. Two examples with a bivariate Gaussian function as the Weyl symbol showed that the waterfilling theorems may perform well even when the spreading factor is close to one. For the general case, based on an uncertainty inequality, a lower bound for the spreading factor has been suggested.
{ "attr-fineweb-edu": 1.288086, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbUc5qg5A5vxQ5Wo6
\section{Introduction} \label{sec:Intro} Learning-to-rank has been intensively studied and has shown significant value in a wide range of domains, such as \textit{web search}, \textit{recommender systems}, \textit{dialogue systems}, \textit{machine translation}, \textit{computer vision} and even \textit{computational biology}. The information retrieval (IR) community has experienced a flourishing development of learning-to-rank methods, such as \textit{pointwise} methods, \textit{pairwise} methods and \textit{listwise} methods. The pointwise methods \cite{SubsetRegression, RameshL2R, ChuGaussian, ChuSVOR} transform the ranking problem into a task of (ordinal) regression or classification on individual documents. A major problem is that the pointwise methods are agnostic to the relevance-based order information among documents that are associated with the same query. As a step forward, the pairwise methods \cite{FreundBoosting, ShenPerceptron, RankSVMStruct} were then proposed, which transform the ranking problem into a task of pairwise classification. However, the loss functions merely consider the relative order between two documents rather than the total order relationship among all documents associated with the same query. Moreover, the number of document pairs per query may differ from query to query, so the result can be biased in favor of queries with more documents in the training data \cite{RankCosine}. To overcome the shortcomings of the aforementioned two categories of ranking methods, the listwise methods \cite{OlivierMargin, AdaRank, YueSVMAP, GuiverSoftRank, SoftRank, QinApproximateNDCG, LambdaMART, ListNet, ListMLE, BoltzRank, LambdaRank, RankCosine} appeal to loss functions that are defined over all documents associated with the same query. Recently, inspired by the generative adversarial network (GAN) and its variants, significant efforts \cite{IRGAN, AdverPRR, AdverSMIR, AdverCMR, AdverPreference, AdverFGIMR} have been made to develop meaningful adversarial optimization methods for addressing learning-to-rank problems. Despite the success achieved by the aforementioned methods for learning-to-rank, there are still many open issues. On one hand, with the recent advances in machine learning, learning-to-rank models are getting increasingly complex. Taking ranking methods based on neural networks as an example, it is not trivial to find the optimal setting of hyper-parameters that achieves the best ranking performance. As a result, it becomes more and more difficult to develop a new model and conduct a fair comparison with prior methods, especially for newcomers. On the other hand, recent publications \cite{WorryingAnalysis, MetricRealityCheck, NeuralHype} pointed out that some reported improvements \textit{don't add up}. The factors that contribute to such phenomena include: (1) using weak baseline methods; (2) difficulties in comparing or reproducing results across papers; (3) using various types of datasets, performance measures and data preprocessing steps. Hence, both academia and industry have recognized the critical importance and the long-term value of developing and maintaining open source projects on popular research topics, such as learning-to-rank.
This is also why the \textit{replicability} and \textit{reproducibility} of published experiments have gained increasing attention in the IR research community, as evidenced by the recent workshop \cite{2015RIGOR}, the annual reproducibility track of the European Conference on Information Retrieval (ECIR) since 2015, and the Association for Computing Machinery (ACM) policy on \textit{Artifact Review and Badging}\footnote{https://www.acm.org/publications/policies/artifact-review-badging} in computer science. Motivated by the aforementioned open issues, the focus of this paper is on developing a benchmarking platform for learning-to-rank methods based on neural networks, which is referred to as PT-Ranking. PT-Ranking is implemented as a lightweight Python library based on PyTorch. It can be used within a JupyterLab notebook, where users can make use of the inline plots and interactive visualization features. The main contributions are summarized as follows:\\ \indent \textbf{(1)} PT-Ranking includes a large number of representative learning-to-rank methods, such as the pointwise method RankMSE (ranking based on least mean squares regression \cite{PRML}), pairwise methods \cite{RankingSVM, MSnDCGRankNet} and listwise methods \cite{TaoWSDM2019, ListMLE, LambdaRank, RankCosine, ListNet, QinApproximateNDCG, StochasticTreatmentRF}. Moreover, besides the traditional optimization strategy via empirical risk minimization, PT-Ranking also includes pointwise, pairwise and listwise methods based on adversarial optimization \cite{IRGAN}, which makes it possible to pinpoint the pros and cons of different optimization frameworks. In order to make a fair comparison with the state-of-the-art approach LambdaMART, which builds upon gradient boosted decision trees (GBDT), the implementations of LambdaMART provided in LightGBM \cite{LightGBM} and XGBoost \cite{XGBoost} are also included.\\ \indent \textbf{(2)} PT-Ranking supports comparing different learning-to-rank methods based on the widely used datasets (e.g., MSLR-WEB30K, Yahoo!LETOR and Istella LETOR) in terms of different metrics, such as precision, MAP, nDCG and nERR. By randomly masking the ground-truth labels with a specified ratio, PT-Ranking allows one to examine to what extent the ratio of unlabelled query-document pairs affects the performance of different learning-to-rank methods.\\ \indent \textbf{(3)} PT-Ranking offers deep neural networks as the basis to construct a scoring function. On one hand, PT-Ranking provides facilities to investigate the effects of different hyper-parameters, such as activation functions and the number of layers. On the other hand, the simplified modules make it very easy to examine a new loss function or a new optimization strategy. Thanks to this, PT-Ranking facilitates the understanding, comparison and design of learning-to-rank methods. The remainder of the paper is structured as follows. In the next section, we briefly survey the existing open-source projects which are related to learning-to-rank. In section 3, we give the mathematical formulation of two different learning-to-rank frameworks following the Cranfield paradigm. In section 4, we detail the key components of PT-Ranking for learning-to-rank. In section 5, we demonstrate PT-Ranking's functionalities through a series of demo experiments based on benchmark datasets. Finally, we conclude the paper in section 6.
\section{Related Work} \label{sec:ReWork} In this section, we discuss the existing open-source projects on learning-to-rank and show what primary facilities they offer. RankLib\footnote{http://www.lemurproject.org/ranklib.php} is a Java package that implements eight popular learning-to-rank methods, as well as several evaluation metrics. Unfortunately, due to platform limitations, it is not easy to customize and/or further extend some pre-implemented models, especially when using deep neural networks as the basis to construct a scoring function. QuickRank\footnote{http://quickrank.isti.cnr.it}, RankEval \cite{RankEval}, XGBoost \cite{XGBoost}, LightGBM \cite{LightGBM} and CatBoost \cite{CatBoost} are the leading packages focusing on tree-based models. A representative implementation is the LambdaMART method \cite{LambdaMART}, which builds upon gradient boosted decision trees. QuickRank introduces post-learning optimizations pipelined with the learning-to-rank methods. RankEval allows one to conduct a structural analysis reporting statistics about the shape, depth and balancing of trees in the forest. However, tree-based models commonly require extensive feature engineering in order to handle textual features. Moreover, we note that XGBoost, LightGBM and CatBoost provide only limited dedicated functionality for learning-to-rank, in terms of both algorithms and metrics. Due to the breakthrough successes of neural networks, many approaches \cite{DSSM, CDSSM, DRMM, HuCNNMatching, PangAAAMatching, MatchSRNN} building upon neural networks have been proposed, which are referred to as neural ranking models. Different from tree-based models, neural ranking models can effectively handle sparse textual features through embeddings. Recently, a number of open-source projects, such as TF-Ranking \cite{TFRanking} and MatchZoo \cite{MatchZoo}, have emerged, which build upon either TensorFlow or PyTorch. We note that MatchZoo focuses on text matching research. The typical tasks are question answering, information retrieval and textual entailment. The benchmark datasets, such as MSLR-WEB30K and Yahoo!LETOR, are not supported. Though TF-Ranking supports LETOR datasets in LibSVM format, a number of representative learning-to-rank methods are not included, especially the methods based on adversarial optimization \cite{IRGAN}. Another possible barrier is that some researchers prefer to use PyTorch rather than TensorFlow. PT-Ranking is highly complementary to the aforementioned open-source projects. Yet, to the best of our knowledge, we are the first to support an in-depth comparison of many representative learning-to-rank methods based on PyTorch across several benchmark datasets, such as MSLR-WEB30K and Yahoo!LETOR. \section{Learning-to-Rank} \label{sec:L2R} In this section, we describe the general learning-to-rank formulation following the Cranfield paradigm, where two different optimization frameworks are introduced. \subsection{Preliminaries} \label{subsec:pre} Let $\mathcal{Q}$ and $\mathcal{D}$ be the query space and the document space, respectively. We use $\Phi:\mathcal{Q}\times\mathcal{D}\rightarrow\mathcal{Z}\coloneqq\mathbb{R}^{d}$ to denote the mapping function for generating a feature vector for a document under a specific query context, where $\mathcal{Z}$ represents the $d$-dimensional feature space. We use $\mathcal{T}\coloneqq\mathbb{R}$ to denote the space of the ground-truth labels each document receives.
Thus for each query, we have a list of document feature vectors $\mathbf{x}=(x_{1},...,x_{m})\in\mathcal{X}\coloneqq\mathcal{Z}^{m}$ and a corresponding list $\mathbf{y}^{\ast}=(y_{1}^{\ast},...,y_{m}^{\ast})\in\mathcal{Y}\coloneqq\mathcal{T}^{m}$ of ground-truth labels. The subscript $i$ in $x_{i}$ or $y_{i}^{\ast}$ denotes the $i$-th position in the list. In practice, we get independently and identically distributed (i.i.d.) samples $\mathcal{S}=\{(\mathbf{x}_{j},\mathbf{y}_{j}^{\ast})\}_{j=1}^{n}$ from an unknown joint distribution $P(\cdot,\cdot)$ over $\mathcal{X}\times\mathcal{Y}$. A ranking $\pi$ on $m$ documents $\mathbf{x}=(x_{1},...,x_{m})$ is defined as a permutation of $\mathbf{x}$. $\pi(i)$ / $\pi(x_i)$ yields the \textit{rank} of the $i$-th document within $\mathbf{x}$. $\pi^{-1}(r)$ yields the index within $\mathbf{x}$ of the document at rank $r$, and we have $\pi^{-1}(\pi(i))=i$ or $\pi^{-1}(\pi(x_i))=i$. Since we are interested in sorting documents in descending order according to their relevance, we think of higher positions with smaller rank values as more favorable. A ground-truth ranking refers to the ideal ranking of documents that are sorted according to their real relevance to the query under consideration. We note that there are multiple ideal rankings for a query when we use graded relevance labels, due to label ties. We use $f:\mathbf{x}\rightarrow\mathbb{R}^{m}$ to denote the real-valued scoring function, which assigns each document a score. One can design various ranking methods by deploying different loss functions to learn the parameters $\theta$ of $f$ based on the training data. In the testing phase, the scores of the documents associated with the same query, i.e., $\mathbf{y}=f(\mathbf{x})=(f(x_{1}),f(x_{2}),..., f(x_{m}))$, are used to sort the documents. \subsection{Empirical Risk Minimization} \label{subsec:ERM} Typically, we measure the loss of ranking documents for a query using $f$ with a loss function $\mathcal{R}(f(\mathbf{x}),\mathbf{y}^{\ast})$, which is commonly rank-sensitive. Then the goal of learning-to-rank is to learn the optimal scoring function over a hypothesis space $\mathcal{F}$ of ranking functions that can \emph{minimize the expected risk} as defined below: \begin{equation} \min_{f\in\mathcal{F}}\Re(f)=\min_{f\in\mathcal{F}}\int_{\mathcal{X}\times\mathcal{Y}}\mathcal{R}(f(\mathbf{x}),\mathbf{y}^{\ast})dP(\mathbf{x},\mathbf{y}^{\ast}) \end{equation} Because $\Re(f)$ is intractable to optimize directly and the joint distribution is unknown, we appeal to \emph{empirical risk minimization} to approximate the expected risk, which is defined as follows: \begin{equation} \label{eq:erm} \min_{f\in\mathcal{F}}\tilde{\Re}(f;\mathcal{S})=\min_{f\in\mathcal{F}}\frac{1}{n}\sum_{j=1}^{n}\mathcal{R}(f(\mathbf{x}_j),\mathbf{y}^{\ast}_j) \end{equation} Most learning-to-rank methods of this kind differ primarily in how they define the surrogate loss function $\mathcal{R}$. These methods are grouped into three categories: pointwise methods \cite{SubsetRegression, RameshL2R, ChuGaussian, ChuSVOR}, pairwise methods \cite{FreundBoosting, ShenPerceptron, RankSVMStruct} and listwise methods \cite{OlivierMargin, AdaRank, YueSVMAP, GuiverSoftRank, SoftRank, QinApproximateNDCG, LambdaMART, ListNet, ListMLE, BoltzRank, LambdaRank, RankCosine}.
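As a concrete illustration of such a surrogate loss, the following is a minimal PyTorch sketch (our own, not necessarily identical to PT-Ranking's shipped implementation) of the listwise ListNet loss with the top-1 approximation, i.e., the cross entropy between the top-1 probability vectors induced by the predicted scores and by the ground-truth labels:

\begin{verbatim}
import torch
import torch.nn.functional as F

def listnet_top1_loss(batch_preds, batch_stds):
    # batch_preds: predicted scores f(x), shape [batch_size, ranking_size]
    # batch_stds:  ground-truth labels y*, shape [batch_size, ranking_size]
    true_top1 = F.softmax(batch_stds, dim=1)           # target distribution
    pred_log_top1 = F.log_softmax(batch_preds, dim=1)  # predicted log-probs
    return torch.sum(-true_top1 * pred_log_top1)       # cross entropy

# usage: 2 queries with 3 documents each
preds = torch.randn(2, 3, requires_grad=True)
stds = torch.tensor([[2.0, 0.0, 1.0], [0.0, 3.0, 0.0]])
listnet_top1_loss(preds, stds).backward()
\end{verbatim}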
\subsection{Adversarial Optimization} \label{subsec:AdOpt} Inspired by \cite{GAN, IRGAN}, we can formulate the process of learning-to-rank as a game between two opponents: a \textit{generator} and a \textit{discriminator}. The generator aims to generate (or select) rankings that look like the ground-truth ranking, so as to fool the discriminator, whereas the discriminator aims to make a clear distinction between the ground-truth ranking and the ones generated by its opponent. The framework for adversarial learning-to-rank is given as: \begin{equation} \label{Eq:AL2R} J^{G^{*},D^{*}}=\min_{\theta}\max_{\phi}\sum_{n=1}^{N}\mathbb{E}_{\pi\backsim P_{true}(\pi|q_{n})}[\log D_{\phi}(\pi|q_{n})]+\mathbb{E}_{\pi\backsim P_{\theta}(\pi|q_{n})}[\log(1-D_{\phi}(\pi|q_{n}))] \end{equation} where the generator $G$, denoted as $P_{\theta}(\pi|q_{n})$, aims to minimize the objective. On one hand, the generator fits the true distribution over all possible rankings $\pi\backsim P_{true}(\pi|q)$. On the other hand, it randomly generates rankings in order to fool the discriminator. The discriminator is denoted as $D_{\phi}(\pi|q_{n})$, which estimates the probability of a ranking being the ground-truth ranking or not. The objective of the discriminator is to maximize the log-likelihood of correctly distinguishing the ground-truth ranking from artificially generated rankings. Furthermore, we are able to perform adversarial learning in a pointwise ($k=1$), pairwise ($k=2$) or listwise ($k\gg1$) manner by adjusting the size $k$ of the ranking. For adversarial learning-to-rank, both the generator and the discriminator are designed to be scoring functions. In particular, instead of generating new document feature vectors, the generation of rankings by the generator is formulated as a sampling process. Due to space limitations, we refer readers to the paper \cite{IRGAN} for the details on how to optimize the generator and the discriminator. \section{Platform Overview} \label{sec:ListwiseMiniMax} In the following, we first show how to develop a new learning-to-rank model based on PT-Ranking. Then we detail its key components. PT-Ranking offers deep neural networks as the basis to construct a scoring function based on PyTorch and can thus fully leverage the advantages of PyTorch. NeuralRanker is a class that represents a general learning-to-rank model. A key component of NeuralRanker is the neural scoring function $f$. The configurable hyper-parameters include the activation function, the number of layers, the number of neurons per layer, etc. All specific learning-to-rank models inherit NeuralRanker and mainly differ in the way of computing the training loss $\mathcal{R}$. Figure \ref{Fig:newltrmodel} shows the main step in developing a new learning-to-rank model following Eq.~\eqref{eq:erm}, where batch\_preds and batch\_stds correspond to $f(\mathbf{x})$ and $\mathbf{y}^{\ast}$, respectively. We can observe that the main work is to define the surrogate loss function $\mathcal{R}$. Figure \ref{fig:ptr} illustrates the overall architecture of PT-Ranking. The currently supported datasets are LETOR4.0 \cite{LETORIR}, Yahoo! LETOR \cite{YahooL2RData}, MSLR-WEB10K, MSLR-WEB30K\footnote{https://www.microsoft.com/en-us/research/project/mslr/} and Istella LETOR\footnote{http://quickrank.isti.cnr.it/istella-dataset/}. For more detailed information, e.g., the feature description, we refer readers to the corresponding papers. When loading a specified dataset, the supported functionalities are: (1) Label binarization, namely binarizing the ground-truth labels if needed; (2) Random masking with a specified ratio, i.e., randomly masking the ground-truth labels per query as unlabelled ones; (3) Feature normalization.
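To make steps (2) and (3) concrete, here is a minimal PyTorch-style sketch (an illustration with hypothetical helper names, not PT-Ranking's actual API; in particular, encoding unlabelled documents as $-1$ is our assumption):

\begin{verbatim}
import torch

def mask_labels(labels, ratio):
    # labels: [num_docs] graded labels of one query; mask `ratio` of them
    masked = labels.clone()
    num_mask = int(ratio * labels.numel())
    idx = torch.randperm(labels.numel())[:num_mask]
    masked[idx] = -1                  # -1 marks "unlabelled" (our convention)
    return masked

def zscore_per_query(feats, eps=1e-8):
    # feats: [num_docs, num_features] raw features of one query
    return (feats - feats.mean(dim=0)) / (feats.std(dim=0) + eps)
\end{verbatim}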
For datasets that are provided with raw features, such as MSLR-WEB10K and MSLR-WEB30K, different methods for query-level normalization are provided. PT-Ranking supports the widely used evaluation metrics, such as Precision, Average Precision (AP), Normalized Discounted Cumulative Gain (nDCG) \cite{FstnDCG} and Expected Reciprocal Rank (ERR) \cite{ERR}. On one hand, these metrics can be used to measure the performance of learning-to-rank methods. On the other hand, PT-Ranking also includes methods that directly optimize these metrics. Given the configured neural scoring function, we can choose different models and different optimization frameworks (detailed in Section \ref{sec:L2R}) to learn its parameters. We can also examine the effects of different hyper-parameters, namely, grid-search over the hyper-parameters of a specific model. \begin{figure}[!htbp] \centering \includegraphics[width=5.3in, totalheight=3.1in]{define_a_new_loss.png} \caption{Developing a new learning-to-rank model.} \label{Fig:newltrmodel} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=5.8in, totalheight=1.5in]{ptltr.png} \caption{The overall architecture of PT-Ranking.} \label{fig:ptr} \end{figure} \section{DEMO Experiments} In this section, we show PT-Ranking's functionalities through a series of demo experiments, namely in-depth comparisons of different learning-to-rank models based on benchmark datasets. In our experiments, we used the publicly available datasets MSLR-WEB10K and MSLR-WEB30K, where each query-document pair is represented with a feature vector. The ground truth is a multiple-level relevance judgment, which takes $5$ values from $0$ (irrelevant) to $4$ (perfectly relevant). We use nDCG to measure the performance. We report the results with different cutoff values $1$, $3$, $5$, $10$, $20$ and $50$ to show the performance of each method at different positions. \subsection{Methods} For traditional learning-to-rank via empirical risk minimization, a number of typical methods are adopted. RankMSE is a simple pointwise method. RankNet \cite{MSnDCGRankNet} represents the pairwise methods. The listwise methods include ListNet \cite{ListNet}, ListMLE \cite{ListMLE}, RankCosine \cite{RankCosine}, LambdaRank \cite{LambdaRank}, ApproxNDCG \cite{QinApproximateNDCG}, WassRank \cite{TaoWSDM2019} and ST-ListNet \cite{StochasticTreatmentRF}. Specifically, for ListNet, the ranking loss is computed based on the top-1 approximation as in the original paper \cite{ListNet}, namely each element of the probability vector represents the probability of the corresponding document being ranked at the top-1 position. For WassRank, the parameter configuration suggested by \cite{TaoWSDM2019} is used. Following the recent studies \cite{RevisitingApproxNDCG, StochasticTreatmentRF}, for ApproxNDCG, the parameter $\alpha$ is set as $10$. Given the raw features per query-document pair, they are normalized using the \textit{z-score} method at a query level. We further use batch normalization between consecutive layers. For adversarial learning-to-rank, IRGAN \cite{IRGAN} is implemented to represent the main approach that adversarially optimizes scoring functions for ranking. The pointwise and pairwise versions are denoted as IRGAN-Point and IRGAN-Pair, respectively. The temperature is set as $0.5$. We note that how to perform adversarial learning-to-rank in a listwise manner is not resolved in \cite{IRGAN}.
To address this issue, we formulate both the generator and the discriminator with the Plackett-Luce model \cite{PlackettLuceModel-2}, namely \begin{equation} P_{\theta}(\pi|q_{n})=\prod_{i=1}^{m}\frac{\exp(f_{\theta}(x_{\pi^{-1}(i)}))}{\sum_{j=i}^{m}\exp(f_{\theta}(x_{\pi^{-1}(j)}))} \end{equation} \begin{equation} D_{\phi}(\pi|q_{n})=\prod_{i=1}^{m}\frac{\exp(f_{\phi}(x_{\pi^{-1}(i)}))}{\sum_{j=i}^{m}\exp(f_{\phi}(x_{\pi^{-1}(j)}))} \end{equation} Inspired by the work of Bruch et al. \cite{StochasticTreatmentRF}, we resort to the Gumbel-softmax trick \cite{GumbelSoftmax, ConcreteDistribution} in order to enhance the efficiency of sampling rankings with $f_{\theta}$. Specifically, we associate an i.i.d. sample drawn from $Gumbel(0,1)$ with each document for the query under consideration (i.e., $\mathbf{g}=g_{1},...,g_{m}$ for $\mathbf{x}=x_{1},...,x_{m}$). We then sort $\hat{\mathbf{y}}=\mathbf{g}+f_{\theta}(\mathbf{x})$ in decreasing order. The corresponding re-ranking of $\mathbf{x}$ is regarded as a sample ranking of the generator. We refer to this method as IRGAN-List. We also test two different values of the ranking size, $5$ and $10$, and the corresponding methods are denoted as IRGAN-List-5 and IRGAN-List-10, respectively. For all the methods, the numbers of inner training loops for the generator and the discriminator are set to a ratio of $1:1$. We used a simple $5$-layer feed-forward neural network to approximate the scoring function, where the size of a hidden layer is set as $100$. According to the studies \cite{IRGAN, AdverSMIR}, the activation function \textit{ReLU} is adopted for all methods. We trained all the aforementioned methods using PyTorch v1.3, where one Nvidia Titan RTX GPU with 24 GB memory is used. We used L2 regularization with a decaying rate of $1\times10^{-3}$ and the Adam optimizer with a learning rate of $1\times10^{-3}$.
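For clarity, the Gumbel-based sampling step described above can be sketched in a few lines of PyTorch (our illustration, not necessarily PT-Ranking's exact implementation):

\begin{verbatim}
import torch

def sample_ranking(scores, k):
    # scores: [num_docs] generator scores f_theta(x) for one query;
    # returns the indices of a sampled top-k ranking (k = 5 or 10 here)
    u = torch.rand_like(scores).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))   # Gumbel(0,1) via inverse CDF
    return torch.argsort(scores + gumbel, descending=True)[:k]
\end{verbatim}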
\subsection{Experimental Results} \subsubsection{Learning-to-rank via Empirical Risk Minimization} We note that the previous studies \cite{RevisitingApproxNDCG, StochasticTreatmentRF, LambdaLossFramework} just used a single fold (i.e., Fold1) for the experimental evaluation. To reduce the possible impact of overfitting on performance comparison, we use all the five folds and perform 5-fold cross validation. In particular, the dataset is randomly partitioned into five equal-sized subsets. In each fold, three subsets are used as the training data, and the remaining two subsets are used as the validation data and the testing data, respectively. We use the training data to learn the ranking model, use the validation data to select the hyper-parameters based on nDCG@5, and use the testing data for evaluation. Finally, we report the ranking performance based on the averaged evaluation scores across five folds with $100$ epochs. In order to show how the setting of activation function affects the performance of different learning-to-rank methods based on deep neural networks, we apply the same training framework for all the methods. Specifically, we used a simple $3$-layer feed-forward neural network, where the size of a hidden layer is set as $100$. Seven different activation functions are adopted, namely ReLU, LeakyReLU, RReLU, ELU, SELU, CELU and Sigmoid. Each method is evaluated with each of these activation functions, and we report its best performance and the corresponding activation function in Table \ref{Table:ltraf}. We can observe that the optimal setting of the activation function differs considerably across methods. This reveals that it is necessary to carefully examine the setting of the activation function when comparing different learning-to-rank methods or developing new methods. We note that, although it has been almost 14 years since the publication of LambdaRank, it still achieves the best performance, as shown in Table \ref{Table:ltraf}. This again reminds us that open-source projects, such as PT-Ranking and TF-Ranking, are quite necessary for examining whether the reported improvements ``add up'' or not \cite{WorryingAnalysis}. Furthermore, in Fig. \ref{Fig:layer_effects}, we plot the performance of ListNet and LambdaRank in terms of nDCG@1 with respect to the number of layers of the scoring function from $2$ to $20$. It is noticeable that the performance values of both ListNet and LambdaRank fluctuate as the number of hidden layers changes, rather than improving proportionally. One possible explanation is that, as the number of hidden layers increases, the ability to approximate more complex ranking functions (i.e., the model capacity) also increases. However, too many hidden layers may result in overfitting. To summarize, factors such as different activation functions and the number of layers greatly affect the performance of a neural learning-to-rank method. Careful examinations of these factors are highly recommended in experimental comparisons of different learning-to-rank methods. \begin{table} \caption{Performance of different learning-to-rank methods on MSLRWEB10K.} \label{Table:ltraf} \begin{centering} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Method & Activation Function & nDCG@1 & nDCG@3 & nDCG@5 & nDCG@10 & nDCG@20 & nDCG@50\tabularnewline \hline RankMSE & ReLU & 0.4469 & 0.4305 & 0.4328 & 0.4470 & 0.4693 & 0.5052\tabularnewline \hline RankNet \cite{MSnDCGRankNet} & ELU & 0.4449 & 0.4346 & 0.4396 & 0.4557 & 0.4794 & 0.5142\tabularnewline \hline LambdaRank \cite{LambdaRank} & RReLU & \textbf{0.4670} & \textbf{0.4498} & \textbf{0.4528} & \textbf{0.4685} & \textbf{0.4910} & \textbf{0.5237}\tabularnewline \hline ListNet \cite{ListNet} & ReLU & 0.4542 & 0.4324 & 0.4349 & 0.4500 & 0.4730 & 0.5075\tabularnewline \hline ListMLE \cite{ListMLE} & ELU & 0.4523 & 0.4348 & 0.4395 & 0.4553 & 0.4767 & 0.5113\tabularnewline \hline RankCosine \cite{RankCosine} & LeakyReLU & 0.4466 & 0.4300 & 0.4340 & 0.4487 & 0.4714 & 0.5073\tabularnewline \hline ApproxNDCG \cite{QinApproximateNDCG} & Sigmoid & 0.4477 & 0.4263 & 0.4287 & 0.4428 & 0.4653 & 0.5000\tabularnewline \hline WassRank \cite{TaoWSDM2019} & ELU & 0.4494 & 0.4306 & 0.4342 & 0.4494 & 0.4709 & 0.5059\tabularnewline \hline ST-ListNet \cite{StochasticTreatmentRF} & ReLU & 0.4501 & 0.4346 & 0.4382 & 0.4532 & 0.4759 & 0.5111\tabularnewline \hline \end{tabular} } \par\end{centering} \end{table} \begin{figure}[!htbp] \centering \includegraphics[width=2.8in, totalheight=1.8in]{layer_effects.jpg} \caption{The impact of number of layers on neural learning-to-rank.} \label{Fig:layer_effects} \end{figure} \subsubsection{Learning-to-rank via Adversarial Optimization} Following the previous studies \cite{IRGAN, AdverSMIR}, we do not use the validation data when performing adversarial optimization. We use the training data to learn the ranking model, and the testing data for evaluation. Finally, we report the ranking performance based on the averaged evaluation scores across five folds with $100$ epochs.
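Throughout, the reported scores are nDCG at various cutoffs $k$. For reference, a minimal sketch of nDCG@$k$ in the common LETOR formulation (the exponential gain $2^{rel}-1$ and $\log_2$ position discount; that PT-Ranking uses exactly this gain is an assumption on our part):

\begin{verbatim}
import numpy as np

def dcg_at_k(rels, k):
    rels = np.asarray(rels, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rels.size + 2))   # log2(rank + 1)
    return np.sum((2.0 ** rels - 1.0) / discounts)

def ndcg_at_k(ranked_rels, k):
    idcg = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 0, 4, 1], k=3))   # ~0.728
\end{verbatim}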
\subsubsection{Learning-to-rank via Adversarial Optimization} Following the previous studies \cite{IRGAN, AdverSMIR}, we do not use the validation data when performing adversarial optimization. We use the training data to learn the ranking model and the testing data for evaluation, and report the ranking performance based on the evaluation scores averaged across the five folds, with $100$ epochs. In Table \ref{Table:perm}, we show the performance of adversarial learning-to-rank methods on MSLRWEB30K based on pointwise, pairwise and listwise generators and discriminators. As each method has two components, namely a generator and a discriminator, we differentiate their performance with the suffixes (G) and (D), respectively. Moreover, the best result is indicated in bold. From Table \ref{Table:perm}, we can observe that: (1) IRGAN-Pair shows better performance than IRGAN-Point, which echoes the experimental results in \cite{IRGAN}. (2) For pointwise, pairwise and listwise adversarial learning-to-rank methods alike, the discriminator ranking function achieves significantly better performance than the generator ranking function. (3) IRGAN-List achieves the best performance via its discriminator. A possible reason is that the listwise discriminator considers the total order among the documents associated with the same query, rather than the relative order between two documents or each document in isolation. Furthermore, by increasing the ranking size from $5$ to $10$, the listwise discriminator achieves slightly better performance. \begin{table}[!htbp] \caption{Performance comparison on MSLRWEB30K.} \label{Table:perm} \begin{centering} \begin{tabular}{|l|c|c|c|c|} \hline Method & nDCG@1 & nDCG@3 & nDCG@5 & nDCG@10\tabularnewline \hline IRGAN-Point (D) & 0.2863 & 0.3019 & 0.3160 & 0.3447\tabularnewline \hline IRGAN-Point (G) & 0.1658 & 0.1818 & 0.1963 & 0.2252\tabularnewline \hline IRGAN-Pair (D) & 0.4254 & 0.4101 & 0.4150 & 0.4312\tabularnewline \hline IRGAN-Pair (G) & 0.1929 & 0.2025 & 0.2123 & 0.2342\tabularnewline \hline IRGAN-List-5 (D) & 0.4295 & 0.4133 & 0.4183 & 0.4347\tabularnewline \hline IRGAN-List-5 (G) & 0.1549 & 0.1661 & 0.1785 & 0.2069\tabularnewline \hline IRGAN-List-10 (D) & \textbf{0.4299} & \textbf{0.4141} & \textbf{0.4188} & \textbf{0.4350}\tabularnewline \hline IRGAN-List-10 (G) & 0.1077 & 0.1137 & 0.1217 & 0.1408\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} We note that MSLRWEB30K is a supervised dataset, where all the ground-truth labels of each training query are used during the optimization process. As reported in the prior studies \cite{IRGAN, AdverSMIR}, one potential advantage of adversarial learning-to-rank methods is the ability to exploit unlabelled documents within the training data. To understand to what extent the ratio of unlabelled query-document pairs affects the performance of adversarial learning-to-rank, we randomly mask the ground-truth labels of each training query with a specific ratio. For instance, given a ratio of $0.2$, $20\%$ of the ground-truth labels of each query will be masked as unlabelled. To reduce the possible impact of random masking and overfitting on the performance comparison, we again use all five folds. A minimal sketch of this masking step is shown below.
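Concretely, the per-query masking can be implemented as follows; this is a hedged sketch, assuming the labels of one query are held in a tensor and using $-1$ as an illustrative marker for ``unlabelled''.
\begin{verbatim}
import torch

UNLABELLED = -1  # illustrative marker for a masked (unlabelled) document

def mask_labels(labels: torch.Tensor, ratio: float) -> torch.Tensor:
    """Randomly mask `ratio` of the ground-truth labels of one query."""
    labels = labels.clone()
    num_masked = int(ratio * labels.numel())
    idx = torch.randperm(labels.numel())[:num_masked]
    labels[idx] = UNLABELLED
    return labels

# Example: mask 20% of the labels of a query with ten documents.
print(mask_labels(torch.randint(0, 5, (10,)), ratio=0.2))
\end{verbatim}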
We show the performance of the adversarial learning-to-rank methods on MSLRWEB30K with randomly masked labels in Table \ref{Table:maskperm}. This time we only report the discriminators' performance, since the generators consistently perform poorly, as shown in Table \ref{Table:perm}. From Table \ref{Table:maskperm}, we can find that: (1) Both IRGAN-Point and IRGAN-Pair show decreased performance as the fraction of unlabelled documents increases. (2) On the contrary, IRGAN-List demonstrates robustness against the increase of unlabelled documents, which is attributable to the listwise sampling process used when generating adversarial rankings. \begin{table}[!htbp] \caption{Performance comparison in terms of nDCG@1 on MSLRWEB30K with randomly masked labels.} \label{Table:maskperm} \begin{centering} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Masking ratio & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5\tabularnewline \hline IRGAN-Point (D) & 0.2863 & 0.2563 & 0.2725 & 0.2392 & 0.2532 & 0.2495\tabularnewline \hline IRGAN-Pair (D) & 0.4254 & 0.4164 & 0.4154 & 0.4093 & 0.4078 & 0.3773\tabularnewline \hline IRGAN-List-5 (D) & 0.4295 & 0.4324 & 0.4320 & 0.4345 & 0.4324 & 0.4317\tabularnewline \hline IRGAN-List-10 (D) & \textbf{0.4299} & \textbf{0.4358} & \textbf{0.4371} & \textbf{0.4376} & \textbf{0.4368} & \textbf{0.4353}\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \section{Conclusion and Future Work} In this work, we introduced PT-Ranking, an open-source package based on PyTorch. PT-Ranking is highly configurable for fine-tuning hyperparameters and has easy-to-use APIs for developing new learning-to-rank models and optimization frameworks. PT-Ranking is thus highly complementary to the previous open-source projects for learning-to-rank. We envision that PT-Ranking will provide a convenient open-source platform for evaluating and developing learning-to-rank models based on deep neural networks, and thus support researchers from different backgrounds. For future work, we first plan to add more learning-to-rank methods, such as \cite{LambdaLossFramework} and \cite{AdverPRR}. Inspired by the recent studies \cite{nbdt, DeepNDF} on neural decision trees, it would also be interesting to include learning-to-rank methods based on neural-backed decision trees. Second, we note that the technique of neural architecture search (NAS) \cite{NASSurvey} can be applied to learning-to-rank; there is some hope that incorporating NAS will make PT-Ranking more versatile. Finally, we plan to add an interactive interface so that users can configure, evaluate and analyse learning-to-rank models in a visual manner. \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.46875, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} The knowledge of the different time scales associated to the various degrees of freedom involved in heavy-ion collisions at intermediate energy is of crucial importance to determine the physical properties of the nuclear sources produced in the exit channel. Thermal equilibrium has been studied both theoretically and experimentally, and times in the range 30-100~fm/c were derived in the Fermi energy region~\cite{Ber78,Toe82,Cas87,Gre87,Ros89,Bor97}. With the announced exotic beams the N/Z degree of freedom will hopefully be explored over a wide range, and thus an estimate of the chemical (isospin) equilibration time becomes essential; moreover, experimental constraints can be placed on the asymmetry term of the equation of state, which describes its sensitivity to the difference between proton and neutron densities~\cite{Bao01,Bar02,Tsa04}. Theoretical simulations of collisions were performed using isospin-dependent Boltzmann-Uehling-Uhlenbeck transport equations~\cite{Tsa04,Bao98,Shi03}. In the energy domain 20-100$A\,$MeV, estimates of the chemical equilibration times in the range 40-100~fm/c are reported, if one excludes calculations with an asymmetry term rapidly increasing around normal density. Experimentally, some investigations concerning this time scale have been done. In the Fermi energy domain, studies of isospin equilibration in fusion-like reactions between medium-mass nuclei (A$\sim50$) have shown that isospin equilibrium occurred prior to light-fragment emission, which gives an upper limit around 100~fm/c~\cite{johnston1,johnston2}; for peripheral collisions between Sn isotopes~\cite{Tsa04}, only a partial equilibrium (the isospin asymmetry of the projectile remnant is halfway between that of the projectile and the equilibrated value) is measured at the separation time ($\sim$100~fm/c) between quasi-projectile and quasi-target. At higher incident energy (400$A\,$MeV) the FOPI Collaboration measured the degree of isospin mixing between projectile and target nucleons, and found that complete mixing is not reached even in the most central collisions~\cite{rami}. In the present study we concentrate on semi-peripheral collision measurements performed with the INDRA array. The properties of the de-excitation products of the quasi-projectiles inform on the degree of N/Z diffusion, and the separation time between the two partners will be taken as a clock to derive qualitative information on isospin equilibration. Two reactions with the same projectile, $^{58}Ni$, and two different targets ($^{58}Ni$ and $^{197}Au$) are used at incident energies of 52$A\,$MeV{} and 74$A\,$MeV. The N/Z ratios of the two systems are 1.07 for Ni+Ni and 1.38 for Ni+Au. INDRA only provides isotopic identification up to beryllium and does not detect neutrons. Thus an N/Z ratio for complex particles is constructed which reflects well the evolution of the N/Z of the quasi-projectiles with the violence of the collisions (see the accompanying article~\cite{galichet2}). The Ni+Ni symmetric system is taken as a reference since, on average, its isospin should remain constant with time whatever the collision process is. The paper is divided into three sections: we first describe the experiment and the event selection, then present the properties of the quasi-projectiles, and finally discuss the evolution of the isospin, before concluding.
\section{Experiment and event selection}\label{expsel} \subsection{Experimental details}\label{exp} $^{58}$Ni projectiles accelerated to 52 and 74$A\,$MeV{} by the GANIL facility impinged on $^{58}$Ni (179 $\mu$g/cm$^2$) and $^{197}$Au (200 $\mu$g/cm$^2$) targets. The charged products emitted in the collisions were collected by the $4\pi$ detection array INDRA. A detailed description of the apparatus can be found in references~\cite{pouthas1,pouthas2,steck}. All elements were identified within one charge unit up to the projectile charge. Elements from H to Be were isotopically separated when their energy was high enough (above 3, 6, 8~MeV for p, d, t; 20-25~MeV for He isotopes; $\sim$60~MeV for Li and $\sim$80~MeV for Be). However, isotopic identification was not possible in the first ring of INDRA, composed of phoswich detectors, so in this paper the angular range is limited to 3-176$^o$ for all products. In the following we shall call fragments the products for which only the atomic number is measured (Z$\geq$5). The on-line trigger required that four modules of the array fired. The off-line analysis only considered events in which four charged products were identified. \begin{table}[hbt] \caption{\label{sys} Characteristics of the systems studied: grazing angle, reaction cross section (calculated from~\cite{Kox84}), and measured cross sections after the different selections.} \centerline{\begin{tabular}{|l|c|c||c|c|} \hline & \multicolumn{2}{c||}{Ni + Ni} & \multicolumn{2}{c|}{Ni + Au} \\ E$_{inc}/A$ (MeV)& 52 & 74 & 52 & 74 \\ E$_{c.m.}$ (MeV) & 1508 & 2146 & 2330 & 3316 \\ \hline $\theta_{gr}$ (lab)& 1.9$^o$ & 1.3$^o$ & 4.6$^o$ & 3.2$^o$ \\ \hline $\sigma_R$ (mb) & 3460 & 3410 & 5400 & 5400 \\ $\sigma_{M\geq 4}$ (mb) & 1553 & 1634 & 3780 & 3807\\ Selected events (mb) & 1032 & 953 & 3034 & 2885 \\ Selected QP (mb) & 624 & 491 & 904 & 793 \\ \hline \end{tabular}} \end{table} The characteristics of the systems studied here are displayed in table~\ref{sys}. Note that the grazing angle is below the minimum detection angle of INDRA (2$^o$) for the Ni+Ni system at both energies. This is reflected in the lower measured percentage of the reaction cross section ($\sigma_{M\geq 4}$, table~\ref{sys}) for the Ni+Ni system (around 50\%) as compared to that for the Ni+Au system (70\%). The measured cross sections are derived from target thicknesses and integrated beam fluxes. \subsection{Event selection}\label{selevt} A first and simple selection required that the total detected charge amounts to at least 90\% of the charge of the projectile. \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure1_exp.eps}} \caption{(color online) The detected momentum $P_{tot}$ versus the total charge $Z_{tot}$ normalized to the incident momentum and the total charge of the system, for the Ni+Au and Ni+Ni systems at 52 and 74$A\,$MeV{} for the first selection. Event scale is logarithmic.} \label{figure1} \end{figure} Figure~\ref{figure1} shows, for the two systems and the two energies, the location of the selected events in the plane of total detected charge versus total detected momentum. In table~\ref{sys}, one can observe that after this event selection, about 30\% of the reaction cross section is kept for Ni+Ni, against 55\% for the Ni+Au system: because of the detection geometry, peripheral collisions (Z$_{tot}$ $\sim$ Z$_{proj}$ and P$_{tot} \sim$1) are drastically suppressed in the Ni+Ni reactions, where neither the projectile nor the target remnants are detected.
This effect exists, but to a lesser extent, for the very asymmetric Ni+Au system, thanks to the larger value of the grazing angle: here events with Z$_{tot}$ $\sim$ Z$_{proj}$ and P$_{tot} \sim$1 are clearly visible. \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure2_exp.eps}} \caption{(color online) The emitted products, for the selected events, in the $Z - V_{Z}$ plane for the Ni+Ni and Ni+Au systems at 52 and 74$A\,$MeV. V$_Z$ is in the laboratory system.} \label{figure2} \end{figure} Conversely, for this system the probability to detect all the products of an event (Z$_{tot}$ $>$ 0.6) is very small: the target-like fragment generally remains undetected because of the thresholds, unless it undergoes fission. \subsection{Selection of the quasi-projectile}\label{selqp} A further selection must be done to select the ``quasi-projectile''. We do not intend to isolate a ``source'', but rather to select a forward region in phase space where the detected products have a small probability to result from emission by the quasi-target. In principle, this could be done by a cut at the center-of-mass velocity; for the asymmetric system, however, the target being more than three times heavier than the projectile, some particles from the target would be kept, as seen in fig.~\ref{figure2}, which shows the charge of the products as a function of their laboratory velocity along the beam axis for the selected sets of events. Thus the cut was made at the nucleon-nucleon velocity -- note that both cuts are identical for the Ni+Ni system. The quasi-projectile selection only keeps particles and fragments with a parallel velocity higher than the nucleon-nucleon velocity. It was verified that in this region of velocity space, all isotopes of H up to Be were fully identified (Z and A). In fig.~\ref{figure2} a small contribution of fragments with a velocity $\sim$10\% smaller than the projectile velocity appears for the Ni+Ni reaction at 74$A\,$MeV{} incident energy. It was attributed to a beam halo interacting with the brass target holder and represents about 14\% of the events~\cite{guy}. In order to sharpen the comparison between quasi-projectiles produced in the two systems, the total charge beyond the nucleon-nucleon velocity was required to be in the range 24-32. In all cases 1.3 to 2$\times 10^6$ events are kept, amounting to 14-18\% of the reaction cross sections. In short, in the following we call ``quasi-projectile'' the ensemble of charged products which have a velocity higher than the nucleon-nucleon velocity, without prejudice on the shape, degree of equilibration, {\ldots} of the ensemble so defined; a schematic sketch of this selection is given at the end of this section. In figure~\ref{figure3} is represented the fragment ($Z \geq 5$) multiplicity distribution after all selections. \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure3_exp.eps}} \caption{(color online) Multiplicity distribution of fragments, $Z\geq5$, for the selected quasi-projectiles.} \label{figure3} \end{figure} In all cases a majority of the quasi-projectiles have only one fragment, which can be considered as the quasi-projectile remnant. For the Ni+Ni system, about 25-30\% of the events have two or more fragments, while for the Ni+Au system this percentage is smaller ($\sim$15\%).
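To make the quasi-projectile selection described above concrete, the following is a minimal sketch in Python; the event representation (a list of products with charge and parallel velocity) and the numerical values are illustrative, not the actual INDRA analysis code.
\begin{verbatim}
def select_quasi_projectile(products, v_nn):
    """Keep products faster than the nucleon-nucleon velocity and
    require their total charge to lie in the range 24-32.

    products: iterable of (Z, v_parallel) pairs for one event.
    v_nn: nucleon-nucleon velocity of the system.
    Returns the quasi-projectile product list, or None if rejected.
    """
    qp = [(z, v) for (z, v) in products if v > v_nn]
    z_tot = sum(z for z, _ in qp)
    return qp if 24 <= z_tot <= 32 else None

# Example with hypothetical products (charge, velocity in cm/ns):
event = [(26, 9.2), (2, 9.8), (1, 10.5), (2, 3.9)]
print(select_quasi_projectile(event, v_nn=4.8))
\end{verbatim}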
\section{Properties of the quasi-projectiles}\label{propqp} \subsection{Event sorting}\label{excit} \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure4_exp.eps}} \caption{(color online) Reconstructed velocity of the quasi-projectile in the center of mass frame for the two systems and the two energies. The arrows indicate the projectile velocity.} \label{figure4} \end{figure} A calorimetry based on the measured products cannot be applied here, because it would require, firstly, isolating sources and, secondly, an assumption on the number of neutrons, and thus on the N/Z of the quasi-projectiles, which is precisely the quantity we want to determine. To avoid these difficulties we choose to sort the events as a function of the dissipated energy, calculated in a binary hypothesis, with the assumptions detailed below. \\ i) The quasi-projectile velocity is taken equal to the measured velocity of the fragment when there is only one, or is reconstructed from the velocities of all the fragments it contains. The distributions of the quasi-projectile velocities ($V_{QP}^{rec}$) so determined are represented, in the center-of-mass reference frame, in figure~\ref{figure4}. In all cases the reconstructed velocity of the quasi-projectile peaks at a value smaller than the projectile velocity, but remains closer to it for the Ni+Au system. \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure5_exp.eps}} \caption{(color online) Distributions of the dissipated energy for Ni+Ni and Ni+Au at 52 and 74 $A\,$MeV.} \label{figure5} \end{figure} ii) The relative velocity between the quasi-projectile and the quasi-target is determined as if the collision were purely binary, without mass exchange: \begin{equation} \label{eq:vrel} V_{rel}=V_{QP}^{rec} \times \frac{A_{tot}}{A_{target}} \end{equation} and thus the total dissipated energy reads: \begin{equation} \label{eq:Eexc} E_{diss}=E_{c.m.}-\frac{1}{2}\mu V_{rel}^2 , \end{equation} with $\mu$ the initial reduced mass. It is demonstrated in~\cite{Yan03,Pian} that the velocity of the QP is a good parameter for following the dissipated energy, except in very peripheral collisions, due to trigger conditions. Moreover, it is shown in figure 5 of the accompanying paper that $E_{diss}$ gives a good measure of the impact parameter. In figure~\ref{figure5} are represented the dissipated energy distributions. For the Ni+Ni system and at the two incident energies, the distributions present a maximum at E$_{diss}\approx$ 350 MeV, while they peak at lower dissipated energies for Ni+Au. This was expected from the remarks made in the previous sections: the most peripheral collisions -- low excitation energies -- are much more poorly sampled for Ni+Ni reactions than for Ni+Au. The comparison of the properties of the quasi-projectiles between the two systems will be made by sorting the data into bins of 100~MeV dissipated energy; a minimal numerical sketch of this sorting variable is given below.
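Equations~(\ref{eq:vrel}) and~(\ref{eq:Eexc}) can be evaluated directly. The short non-relativistic sketch below illustrates the computation, with masses in nucleon units, velocities as fractions of $c$ and energies in MeV; the input value of the quasi-projectile velocity is purely illustrative.
\begin{verbatim}
U_MEV = 931.494  # atomic mass unit in MeV/c^2

def dissipated_energy(beta_qp_cm, a_proj, a_target, e_cm):
    """E_diss of eq. (2) from the reconstructed QP velocity
    (in the c.m. frame, as a fraction of c)."""
    a_tot = a_proj + a_target
    beta_rel = beta_qp_cm * a_tot / a_target   # eq. (1)
    mu = a_proj * a_target / a_tot * U_MEV     # reduced mass, MeV/c^2
    return e_cm - 0.5 * mu * beta_rel**2       # eq. (2), in MeV

# Illustrative only: Ni+Ni at 52 A MeV (E_cm = 1508 MeV, table 1)
# with a hypothetical reconstructed QP velocity of 0.15 c.
print(dissipated_energy(0.15, 58, 58, 1508.0))  # ~292 MeV
\end{verbatim}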
\begin{figure}[htbp] \resizebox{0.7\textwidth}{!}{% \includegraphics{figure6_exp.eps} \includegraphics{figure7_exp.eps}} \caption{(color online) Invariant cross sections for protons (left) and complex particles (right) emitted in the Ni+Ni system at 52 and 74$A\,$MeV. Velocities are expressed in the quasi-projectile frame. Contour levels are equidistant.} \label{figure6} \end{figure} \begin{figure}[htbp] \resizebox{0.7\textwidth}{!}{% \includegraphics{figure8_exp.eps}} \caption{(color online) Invariant cross sections of complex particles emitted in Ni on Au collisions at 52 and 74$A\,$MeV.} \label{figure8} \end{figure} \subsection{Invariant cross-section plots} As a verification of the selections and sorting made, we examined the distribution of the different particles in the velocity plane. A sign has been attributed to the perpendicular velocity depending on the value of the azimuthal angle ($V_{per}<0$ corresponds to azimuthal angles larger than 180$^o$). Such plots in the laboratory system allow a rough verification of the good operation of the INDRA array. Figs.~\ref{figure6} and~\ref{figure8} are presented in the QP frame. The observed asymmetry between positive and negative values of $V_{per}$ comes from a deviation of the beam position from the symmetry axis of INDRA, reflected in the azimuthal distribution of projectile residues, and thus showing up when transforming all particles into the frame of this fragment. This is particularly visible for protons at 52$A\,$MeV{} in fig.~\ref{figure6}, and does not affect the following results. In the left panel of figure~\ref{figure6} the invariant cross sections for protons emitted in the Ni+Ni reactions are presented. For all bins of dissipated energy and for the two incident energies, well-defined Coulomb circles are visible, showing that the protons essentially come from one source. The mid-rapidity/neck emission does not seem to be prominent at 52$A\,$MeV, except at low dissipation (due to the on-line trigger, when the QP and QT are too weakly excited to evaporate charged particles, configurations with several mid-rapidity particles are enhanced), while it becomes more important for all dissipations at 74$A\,$MeV. Due to the smaller quasi-projectile velocity at 52$A\,$MeV, and to the large proton velocities, the Coulomb circles are slightly cut at the higher dissipated energies (upper panel). In the right panel of figure~\ref{figure6} are displayed the same plots for complex particles, including deuterons, tritons, helium, lithium and beryllium isotopes, labelled $Z_{complex}$ in figures~\ref{figure6} and~\ref{figure8}. The Coulomb circles are also clearly visible, but an accumulation of particles appears backward of the quasi-projectile due to the importance of mid-rapidity emission for such products; for the highest dissipated energies the distributions become more forward/backward symmetric for particles emitted at the Coulomb velocity. No sizeable emission from the target is present. For protons emitted in Ni+Au collisions, the pictures (not shown) closely resemble those for Ni+Ni at the same incident energy. In figure~\ref{figure8} are represented the same pictures for complex particles for the Ni+Au system. As in the Ni+Ni system, neck emission is apparent at low dissipation and the Coulomb rings become more symmetric for higher dissipated energies in all cases. \subsection{The heaviest fragment} \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure9_exp.eps}} \caption{(color online) Distribution of the heaviest quasi-projectile fragment for the four systems. The line codes are the same as in fig.~\ref{figure4}.} \label{figure9} \end{figure} Figure~\ref{figure9} represents the charge distribution of the heaviest fragment ($Z\geq 5$) of the quasi-projectile, for the four reactions in each dissipated energy bin.
For the Ni+Au system, the heaviest quasi-projectile fragment has a charge around Z$_{max}$=24 for peripheral collisions (E$_{diss} <$ 450~MeV). These events have a quasi-projectile fragment multiplicity of one. In all other cases for this system there is no privileged value of the maximum charge; such events often correspond to quasi-projectiles with more than one fragment. Note that the Z$_{max}$ distributions barely depend on the target or on the energy when the dissipated energy exceeds 600~MeV. For the Ni+Ni system, the distributions of Z$_{max}$ do not exhibit any peak whatever the dissipation, which again agrees with the lack of very peripheral collisions in the event samples. \subsection{Multiplicity of particles} \begin{figure}[htbp] \resizebox{0.7\textwidth}{!}{% \includegraphics{figure10_exp.eps}} \caption{(color online) Multiplicity of emitted products ($Z<5$) versus the dissipated energy. In all cases, statistical error bars are smaller than the size of the symbols.} \label{figure10} \end{figure} The average multiplicities of isotopically resolved charged products associated with the quasi-projectiles are displayed for each energy bin in figure~\ref{figure10}. Let us first examine the Ni+Ni system. For hydrogen isotopes the multiplicity is constant at low energies and then rises. For the other products the multiplicity slightly decreases before rising with the dissipated energy, above 400-500 MeV. It is indeed at this value that the dissipated energy distribution of the selected quasi-projectiles shows a maximum (fig.~\ref{figure5}). The effect is more marked for the heavier products; indeed, for the most peripheral collisions, due to the on-line trigger (4 detected charged products), some particular configurations were retained, namely those with the highest multiplicities. A situation sampling all configurations for a given dissipation is recovered when the multiplicities start to increase. At 52$A\,$MeV{} the ratio between the maximum and the minimum values of the multiplicities is around 1.5 for protons and $\alpha$ particles, and more than 2.5 for the other products. At 74$A\,$MeV, the observed multiplicities are generally close to those at 52$A\,$MeV. Systematically higher values are however found at lower dissipated energies ($<$400 MeV) and, in all cases, for neutron-rich hydrogen and lithium. This obviously comes from the sorting parameter, which is not the excitation energy of an equilibrated piece of matter. At the higher energy the ratios between the maximum and the minimum values of the multiplicity of any species are smaller than at 52$A\,$MeV. For the Ni+Au system the multiplicity variations closely resemble those of the Ni+Ni case, showing first a slight decrease before a clear increase above dissipations of 250~MeV, which also corresponds to the peak in the dissipated energy distribution (see fig.~\ref{figure5}). At both incident energies the ratios between the maximum and the minimum values of the multiplicity of any species are much higher than in the corresponding Ni+Ni case, particularly for lithium and beryllium isotopes (ratios as high as 10 are observed). The multiplicities of neutron-rich species are smaller at 74 than at 52$A\,$MeV, showing the reverse evolution with respect to the Ni+Ni data.
If one now compares the different multiplicities for Ni+Ni and Ni+Au, several differences immediately appear from fig.~\ref{figure10}: for Ni+Au, there are twice as many protons as $\alpha$'s at low dissipated energy, while both multiplicities tend towards equal values with increasing dissipation. Conversely, the difference between proton and $\alpha$ multiplicities is almost constant, around 30 (40)\%, for Ni+Ni at 52 (74)$A\,$MeV. For the Ni+Au system, all neutron-rich isotopes are more abundantly produced, as can be seen by comparing tritons and $^3$He, $^6$Li and $^7$Li, $^7$Li and $^7$Be, $^7$Be and $^9$Be. A simple way of observing the isospin effect is to calculate the average mass per element for the two systems, starting from figure~\ref{figure10}. As expected, and due to the huge dominance of $\alpha$'s, the average mass of helium is insensitive to the isospin of the target, at variance with those of hydrogen, lithium and beryllium, which increase with the target neutron excess. These observations indicate that there is a transfer of neutrons, or an isospin diffusion, from the backward to the forward part of phase space. Similar observations were made for vaporised silicon quasi-projectiles after interaction with targets of different isospins~\cite{Ves00}; as in this paper, we also notice, from the evolution of the average element masses, that hydrogen and beryllium are more sensitive to the isospin of the target than lithium. The average masses are however insensitive to the dissipation, except for a slight increase observed for hydrogen in the Ni+Au data. A combination of the multiplicities of the different light isotopes will therefore bring more information than the individual evolution per element. The authors of~\cite{VesR00} also noted a decrease of the t/$^3$He ratio at low temperature, as predicted by Lattice Gas Model calculations~\cite{Cho99}. In the present data the evolution of this ratio with excitation is weak but follows the same trend. To summarize this part, a close examination of the multiplicities of light products in the forward part of phase space clearly shows an influence of the isospin of the target on the neutron richness of these products. In other words, there is an isospin diffusion from the target side to the projectile side in the course of the reaction. This effect will be quantified by a single variable in the next section. \section{Isospin diffusion and equilibration} \subsection{Isospin ratio of complex particles} The isospin ratio of quasi-projectiles in intermediate-energy heavy-ion collisions was abundantly studied in the 1980s, when the first beams at these energies appeared. The underlying idea was already the determination of the equilibration time of the isospin degree of freedom. The reactions involved $^{40}$Ar and $^{84,86}$Kr projectiles at 27-30 and 44$A\,$MeV{} (see~\cite{Bor90} and references therein for a review). The average N/Z were determined from Z=5 up to the projectile charge, at very forward angles, thus for very small dissipation; to our knowledge, no attempt to study the evolution of N/Z as a function of the dissipated energy, as proposed in the present paper, was ever made. For a given projectile and bombarding energy, it was found that the average N/Z of the residues increases with the target N/Z. The difference between a $^{58}$Ni and a Au target becomes smaller when the incident energy increases. Indeed, the average N/Z tends towards that of the valley of stability, because of the increasing dominance of the de-excitation process.
This indicates that to characterize the primary process, not only the projectile residue (Z$_{max}$ in this paper) but also all the emitted products should be detected, including neutrons. No data ever reached this ultimate goal. More information should however be extracted from the emitted products than from Z$_{max}$ alone. In the experiments discussed here, INDRA only provides isotopic identification for isotopes from hydrogen up to beryllium and, moreover, does not detect neutrons. We wanted to avoid any hypothesis on heavy-fragment masses and on the number of emitted neutrons, which would bias our conclusions; we thus construct an isospin ratio for complex particles, most probably different from the N/Z of the quasi-projectile, but evolving in the same way with increasing dissipation, as shown in the joint paper~\cite{galichet2}. This variable, ($<N>$/$<Z>$)$_{CP}$, is calculated for each dissipated energy bin (containing $N_{evts}$ events) and is defined as \begin{equation} (<N>/<Z>)_{CP} = \sum_{N_{evts}}{\sum_{\nu} {N_{\nu}}} / \sum_{N_{evts}}{\sum_{\nu} {P_{\nu}}} \end{equation} where $N_{\nu}$ and $P_{\nu}$ are respectively the numbers of neutrons and protons bound in particle $\nu$, $\nu$ being d, t, $^3$He, $^4$He, $^6$He, $^6$Li, $^7$Li, $^8$Li, $^9$Li, $^7$Be, $^9$Be, $^{10}$Be; free protons are excluded, as well as $^8$Be, the latter because it is only partly identified, namely when the two $\alpha$'s that it emits hit the same scintillator. The relative abundances of these nuclei among all those emitted by the quasi-projectiles are assumed to reflect the isospin of the initial emitter. We recall that the light nuclei included in eq.~3 are fully identified, without any energy threshold. Relative systematic errors on ($<N>$/$<Z>$)$_{CP}$ as a function of dissipation mainly come from the misidentification of a $^8$Be as two $\alpha$'s; they are lower than 0.4\%. A minimal sketch of the computation of this variable is given below.
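For illustration, eq.~3 amounts to the following computation over the selected events; the isotope list follows the text, while the event contents in the example are hypothetical.
\begin{verbatim}
CP_ISOTOPES = {  # (N, Z) of each complex particle entering eq. (3)
    "d": (1, 1), "t": (2, 1), "3He": (1, 2), "4He": (2, 2),
    "6He": (4, 2), "6Li": (3, 3), "7Li": (4, 3), "8Li": (5, 3),
    "9Li": (6, 3), "7Be": (3, 4), "9Be": (5, 4), "10Be": (6, 4),
}

def nz_cp(events):
    """events: list of events (one dissipated-energy bin), each a
    list of particle names. Free protons and 8Be are simply absent
    from CP_ISOTOPES and therefore do not contribute."""
    n_sum = sum(CP_ISOTOPES[p][0]
                for ev in events for p in ev if p in CP_ISOTOPES)
    z_sum = sum(CP_ISOTOPES[p][1]
                for ev in events for p in ev if p in CP_ISOTOPES)
    return n_sum / z_sum

# Hypothetical two-event bin:
print(nz_cp([["4He", "d", "t"], ["4He", "3He", "6Li"]]))  # -> 1.0
\end{verbatim}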
\subsection{Evolution of isospin with centrality} \begin{figure}[htbp] \resizebox{0.6\textwidth}{!}{% \includegraphics{figure12_exp.eps}} \caption{(color online) Isospin ratio for complex particles as a function of the normalised dissipated energy. Statistical error bars are smaller than the size of the symbols.} \label{figure11} \end{figure} In figure~\ref{figure11} the N/Z ratio for complex particles is plotted as a function of the normalised dissipated energy for the four reactions. The stars correspond to the Ni+Au system and the squares to the Ni+Ni system. We stress again that the errors are very small, and thus the observed variations are significant. It immediately appears from the figure that the behaviour of ($<N>$/$<Z>$)$_{CP}$ is completely different for the two systems. Let us first examine the Ni+Ni data. Within about 1.5\%, the ($<N>$/$<Z>$)$_{CP}$ values are independent of the dissipated energy, at 52 and 74$A\,$MeV; this can be interpreted as a sign that the variable, used as reference, reflects well the evolution of the average isospin of the initial quasi-projectiles, which is expected to be constant for this system, provided that the de-excitation process does not influence the isospin ratio. The observed value of ($<N>$/$<Z>$)$_{CP}$, closer to 1 than the N/Z of the system (1.07), comes from the dominance of $\alpha$'s among the particles used to calculate it. Moreover, ($<N>$/$<Z>$)$_{CP}$ is little dependent on the incident energy. The slight difference observed must be attributed to direct particles emitted at mid-rapidity or to the neck effect, which are included in the calculation of ($<N>$/$<Z>$)$_{CP}$. Another explanation may be that the system is proton-rich, which favors proton preequilibrium emission; preequilibrium emission is expected to increase with the incident energy, leading to an increase of the N/Z of the primary quasi-projectile (and quasi-target). This effect is observed in the isospin ratio, independently of the dissipation, as the system is expected to remain, on average, symmetric. Observing such an effect is an indication that the chosen variable is indeed sensitive to the initial N/Z of the quasi-projectile. Let us turn now to Ni+Au. A first observation is that the isospin ratio of the Ni+Au system is higher than that of the Ni+Ni system whatever the dissipated energy. One may argue that the difference in ($<N>$/$<Z>$)$_{CP}$ between the two systems is small (0.02-0.05, compared to 0.31 for the true N/Z values of the composite systems). This again can be attributed to the definition of the variable, built on particles among which deuterons and $\alpha$'s are dominant. Therefore ($<N>$/$<Z>$)$_{CP}$ remains closer to 1 than the true isospin of the quasi-projectiles, the larger excess of neutrons being evacuated by free neutrons. The heavy fragments do not carry away a lot of neutrons: it was shown in~\cite{Day86} that, for a given projectile, the $<N>$/$Z$ of the heavy fragments is larger with a Au target than with a Ni target; the difference in average N/Z for $^{40}$Ar residues is about 0.02-0.03, namely the same as what is observed here. Owing to these effects, we think that the gap between the values of ($<N>$/$<Z>$)$_{CP}$ for the two systems is significant and that mixing with the Au target did occur. Hence our variable reflects well the isospin diffusion between the target and the projectile, as confirmed by figure 8 of the joint paper~\cite{galichet2}, where one can see that the ($<N>$/$<Z>$)$_{CP}$ values evolve like the N/Z of the quasi-projectiles, but within a reduced and lower range. For the first three bins of dissipated energy, ($<N>$/$<Z>$)$_{CP}$ has the same value for the two incident energies and slightly decreases with dissipation. These points should however be regarded with caution: they correspond to the region where multiplicities decrease with increasing dissipation, due to the selection of particular configurations by the on-line trigger. The isospin ratio of the Ni+Au system is nevertheless higher than that of the Ni+Ni system, which could arise from the neutron skin of the Au target and/or from the mid-rapidity particles included in our quasi-projectile selection, which are more neutron-rich~\cite{I23-Lef00,I17-Pla99}. This result is a first indication of isospin diffusion. At higher dissipated energies, ($<N>$/$<Z>$)$_{CP}$ behaves differently depending on the incident energy. While it presents a significant increase with dissipation at 52$A\,$MeV, the trend is flatter at 74$A\,$MeV; ($<N>$/$<Z>$)$_{CP}$ thus reaches higher values at 52$A\,$MeV. This may be interpreted as a progressive isospin diffusion as collisions become more central, in connection with the interaction time. For a given centrality, the separation time is longer at 52$A\,$MeV{} than at 74$A\,$MeV, leaving more time for the two main partners to evolve towards isospin equilibration. \section{Conclusion} To summarize, the value of the isospin variable ($<N>$/$<Z>$)$_{CP}$ for Ni+Au is different from, and larger than, that for Ni+Ni.
For the Ni+Ni system it does not evolve significantly, neither with the excitation energy nor with the incident energy when increased from 52 to 74$A\,$MeV. Therefore ($<N>$/$<Z>$)$_{CP}$ for Ni+Ni provides a good reference to which the same variable for Ni+Au can be compared. The continuous increase of ($<N>$/$<Z>$)$_{CP}$ up to the highest observed dissipation for Ni+Au at 52$A\,$MeV{} indicates that at least a partial isospin equilibration is reached at the corresponding separation time, $\sim$80~fm/c. The separation time $t_{sep}$ was estimated as $t_{sep}$$\sim$$(D_{Ni}+D_{Au}+d)/v_{beam}$$\sim$80~fm/c at 52$A\,$MeV{} and 66~fm/c at 74$A\,$MeV; $D$ is the nuclear diameter, $v_{beam}$ the incident velocity and $d$=3~fm the distance between the two nuclear surfaces at separation. \\ We will see in the accompanying article~\cite{galichet2} that, in the framework of the model employed, ($<N>$/$<Z>$)$_{CP}$ gives a reliable picture of isospin diffusion in the reactions studied and is relevant to determine whether isospin equilibration takes place or not at high excitation energies.
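As an aside, the separation-time estimate above is easy to reproduce. The sketch below assumes the common radius parametrization $R = 1.2\,A^{1/3}$~fm -- an assumption, since the text does not specify its radius convention -- and a relativistic beam velocity; it recovers the quoted values to within a few fm/c.
\begin{verbatim}
import math

def beta(e_per_nucleon_mev, u=931.494):
    """Relativistic beam velocity (v/c) for a kinetic energy
    per nucleon, with the nucleon mass approximated by 1 u."""
    gamma = 1.0 + e_per_nucleon_mev / u
    return math.sqrt(1.0 - 1.0 / gamma**2)

def t_sep(a_proj, a_target, beta_beam, d=3.0):
    """t_sep ~ (D_proj + D_target + d)/v_beam, in fm/c, with
    diameters D = 2 x 1.2 A^(1/3) fm (assumed convention)."""
    diam = lambda a: 2.0 * 1.2 * a ** (1.0 / 3.0)
    return (diam(a_proj) + diam(a_target) + d) / beta_beam

print(t_sep(58, 197, beta(52.0)))  # -> ~82 fm/c, cf. ~80 fm/c quoted
print(t_sep(58, 197, beta(74.0)))  # -> ~70 fm/c, cf. 66 fm/c quoted
\end{verbatim}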
{ "attr-fineweb-edu": 1.958008, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} Video self-supervised learning has progressed at a tremendous pace in recent years, \textit{e}.\textit{g}. ~\cite{ctp-wang2021unsupervised,mmvssl3-Afouras20b,qian2021spatiotemporal,piergiovanni2020evolving, rspnet-chen2020RSPNet,gdt-patrick2020multimodal}, as it offers a crucial starting point from which to learn. This is especially important for video understanding applications, where annotating large amounts of data is extremely expensive, error-prone and sensitive to annotator bias. Hence, learning video representations through self-supervision is crucial, especially for use cases where the downstream video data is limited, because of the domain, task or actions the video contains. However, the majority of current works in video self-supervised learning, \textit{e}.\textit{g}. ~\cite{clip-order-xu2019self,frame-order-misra2016shuffle,avid-cma-morgado2021audio, selavi-asano2020labelling,videomoco-pan2021videomoco} do not test beyond standard benchmarks. The standard protocol is to use unlabeled Kinetics-400~\cite{Kinetics-400-arxiv} for pre-training and then measure performance by finetuning on two action recognition datasets: UCF-101~\cite{UCF-101-arxiv} and HMDB-51~\cite{HMDB-51-ICCV}. While these benchmarks have facilitated the impressive progress of video self-supervised learning in recent years, they cannot indicate the generalizability of such methods as these pre-training and downstream datasets are all similar in appearance and the type of actions they contain. Some methods have started to report finetuning performance on additional datasets like Something-Something-v2~\cite{SS-v2-arxiv} in~\cite{ctp-wang2021unsupervised,rspnet-chen2020RSPNet,large-scale-feichtenhofer2021large}, Diving-48~\cite{diving} in~\cite{dave2021tclr,wang2021removing}, AVA~\cite{AVA-Gu_2018_CVPR} in~\cite{xiao2021modist,yang2020video,large-scale-feichtenhofer2021large}, EPIC-Kitchens-100~\cite{EPIC-100-arxiv} in~\cite{yang2020video}. However, such evaluations are insufficient to understand the generalization of video self-supervised methods by themselves since they only add a single additional dataset, often without comparison to prior methods. In this work, we address the essential need to gauge the sensitivity of existing video self-supervised methods to the current benchmark by thoroughly evaluating their performance for generalization across diverse downstream settings. Similar benchmarking studies have been performed for self-supervised pre-training in images~\cite{when_contrast_work,trasnferability,contrasting_contrastive,edinburgh-ericsson2021well, goyal2019scaling, yang2020transfer, kolesnikov2019revisiting, zhai2019large, asano2019critical, newell2020useful, sariyildiz2021concept, van2021benchmarking, ericsson2021self}, which investigate the importance of pre-training datasets~\cite{when_contrast_work,contrasting_contrastive,goyal2019scaling} and backbone architecture~\cite{kolesnikov2019revisiting}, transferability~\cite{trasnferability,edinburgh-ericsson2021well,newell2020useful,image-eval5-wallace2020extending}, amongst other aspects. Unfortunately, lessons from these works do not directly transfer to video self-supervised learning. First, video self-supervised tasks are distinct from those of images as they are designed to understand the temporal dimension of video~\cite{rspnet-chen2020RSPNet,dave2021tclr,ctp-wang2021unsupervised,yang2020video} in addition to the spatial understanding needed in images~\cite{simclr-pmlr-v119-chen20j}. 
Second, video is multi-modal and several methods~\cite{gdt-patrick2020multimodal,selavi-asano2020labelling,avid-cma-morgado2021audio} are designed to exploit cross-modal or multi-modal understanding, which is again absent in image-based methods. For videos,~\cite{large-scale-feichtenhofer2021large} extend four image-based self-supervised methods to videos and investigate their performance, focusing on different pre-training setups. We take inspiration from this and from benchmarking works in image self-supervised learning, and perform a much-needed study of the generalizability of self-supervised methods for video in relation to different downstream factors. As our first contribution, we identify the problem of benchmark-sensitivity in video self-supervised learning and examine this sensitivity along the factors of domain, samples, actions and task. As our second contribution, we perform an extensive evaluation which spans a total of over 500 experiments with 9 video self-supervised learning methods across 7 video datasets and 6 video understanding tasks. We find that standard benchmarks in video self-supervised learning do not indicate generalization along the said sensitivity factors, and that vanilla supervised pre-training outperforms self-supervised pre-training, particularly when the domain change is large and only a few downstream finetuning samples are available. Third, we propose a subset of our experiments as the SEVERE-benchmark for future self-supervised learning methods to benchmark generalization capability. We also discuss the implications of this benchmark for evaluating the generalizability of representations obtained by existing methods, as well as the nature of the video self-supervised objectives that currently generalize well. \begin{figure}[t!] \centering \captionsetup{font=small,skip=1mm} \includegraphics[width=\linewidth]{media/concept_figure.pdf} \caption{\textbf{Benchmark-sensitivity.} We evaluate the sensitivity of 9 video self-supervised learning methods along four downstream factors which vary from the pre-training source: the domain, the samples, the actions and the task.} \label{fig:concept-figure} \end{figure} \section{Identifying Benchmark Sensitivity} The vast majority of current works in video self-supervised learning evaluate their approach by pre-training on Kinetics-400~\cite{Kinetics-400-arxiv} and finetuning the learned representation for action recognition on UCF-101\cite{UCF-101-arxiv} and HMDB-51\cite{HMDB-51-ICCV}. Some works~\cite{gdt-patrick2020multimodal, dave2021tclr, pretext-contrast-DBLP:journals/corr/abs-2010-15464, ctp-wang2021unsupervised, rspnet-chen2020RSPNet, selavi-asano2020labelling,gavrilyuk2021motion, lin2021self,huang2021ascnet} also report performance on video retrieval for UCF-101 and HMDB-51, and several recent works~\cite{qian2021spatiotemporal,yang2020video,recasens2021broaden} compare linear evaluation performance on Kinetics-400. However, these downstream datasets are very similar to each other and also share many similarities with the pre-training dataset of Kinetics-400. Videos in all three datasets are clips collected from YouTube, mostly recorded with a single camera and containing a well-positioned single human actor. In terms of class labels, all datasets focus on similar, coarse-grained and mutually exclusive actions, with many actions common between pre-training and downstream datasets.
Besides all these data similarities, the existing evaluations also ignore a major benefit of self-supervised representation learning for videos, \textit{i}.\textit{e}. finetuning the representation with only a small number of data samples and transferring to other video understanding tasks beyond action recognition. Hence, we believe the current benchmark standard is insufficiently equipped to give a true understanding of where video self-supervised models are successful, as it cannot show the generalizability or the sensitivity of methods to factors such as domain shift, the amount of finetuning samples, action similarity or task shift. In this study, we identify the sensitivity of existing evaluations and thoroughly benchmark self-supervised video learning methods along four sensitivity factors, as depicted in \cref{fig:concept-figure}. \begin{enumerate}[label=\Roman*.] \item \textbf{Downstream domain.} First, we analyse whether features learned by self-supervised models transfer to datasets that vary in domain with respect to the pre-training dataset. \item \textbf{Downstream samples.} Second, we evaluate the sensitivity of self-supervised methods to the number of downstream samples available for finetuning. \item \textbf{Downstream actions.} Third, we investigate whether self-supervised methods can learn fine-grained features required for recognizing semantically similar actions. \item \textbf{Downstream task.} Finally, we study the sensitivity of video self-supervised methods to the downstream task and question whether self-supervised features can be used beyond action recognition. \end{enumerate} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{media/radar_main_paper.pdf} \caption{\small \textbf{Video dataset characteristics.} Characterizing domain shift in datasets via difference in label overlap, point-of-view (PoV), environment, action length and temporal awareness with Kinetics-400 (shown by dotted line). Kinetics-400 and UCF-101 are highly similar to each other, while datasets like Something-Something-v2, EPIC-Kitchens-100 and Charades have different attributes compared to Kinetics-400.} \label{fig:radar} \end{figure} \subsection{Downstream Video Datasets} \label{subsec:domain-shift} We evaluate various self-supervised models along our four sensitivity factors on 7 video datasets: \textbf{UCF-101}~\cite{UCF-101-arxiv}, \textbf{NTU-60}~\cite{NTU-60-arxiv}, \textbf{FineGym} (Gym-99) \cite{Gym-99-arxiv}, \textbf{SomethingSomething-v2} (SS-v2)~\cite{SS-v2-arxiv}, \textbf{EPIC-Kitchens-100} (EK-100)~\cite{EPIC-100-arxiv}, \textbf{Charades}~\cite{charades-sigurdsson:hal-01418216} and \textbf{AVA}~\cite{AVA-Gu_2018_CVPR}. They include considerable variety in video domain and in the actions they contain, and cover a range of video understanding tasks. To get a sense of the differences between these downstream datasets and the Kinetics-400 source dataset, we summarize their similarity to Kinetics-400 by radar plots in \cref{fig:radar}, based on several attributes. \textit{Environment} refers to the variety of settings contained in the dataset. This can be very specific, \textit{e}.\textit{g}. a kitchen, or varied when many different settings are included. \textit{Point-of-view} is whether a video is recorded from a first-person or third-person viewpoint. \textit{Temporal awareness} defines the extent to which temporal context is required to recognize or detect actions. We quantify this as the point at which performance saturates with increasing temporal context in the input.
\textit{Label overlap} is the fraction of actions in a target dataset that are also present in Kinetics-400. \textit{Action length} is the temporal length of the actions in seconds. Details are provided in the appendix. \subsection{Evaluated Self-Supervised Video Learning Methods} Self-supervised learning methods in video can be grouped into two categories based on the objective they use: pretext task methods and contrastive learning methods. Pretext task methods are based on predictive tasks such as solving spatio-temporal jigsaw puzzles~\cite{jigaw1-ahsan2019video,jigsaw2-huo2021selfsupervised, jigsaw3-kim2019self}, rotation prediction~\cite{rotate1-jing2019selfsupervised}, frame and clip order~\cite{frame-order-misra2016shuffle, shuffle1-fernando2017self, shuffle2-suzuki2018learning, clip-order-xu2019self,yao2021seco}, video speed~\cite{relative-speed1-benaim2020speednet, relative-speed2-jenni2020video, playback1-cho2020self, playback2-yao2020video, playback3-wang2020self}, video completion~\cite{vcp}, predicting motion statistics~\cite{wang2019self}, tracking random patches in video frames~\cite{ctp-wang2021unsupervised} or audio-visual clustering~\cite{multimodal-clustering1-chen2021multimodal, multimodal-clustering2-hu2019deep, selavi-asano2020labelling,alwassel2020self}. Contrastive learning methods discriminate between ``positive'' and ``negative'' pairs to learn invariances to certain data augmentations and instances, either from visual-only input~\cite{videomoco-pan2021videomoco,dave2021tclr,han2019video, yang2020video,qian2021spatiotemporal,lin2021self,diba2021vi2clr, sun2021composable} or from multi-modal data~\cite{gdt-patrick2020multimodal,avid-cma-morgado2021audio,coclr, tao2020self,ma2021active,korbar2018cooperative}. Some methods also combine the pretext and contrastive approaches~\cite{pretext-contrast-DBLP:journals/corr/abs-2010-15464, rspnet-chen2020RSPNet,pretext-contrast-2-zhang2021contrastive,taco-bai2020can, diba2021vi2clr, huang2021ascnet}. We consider a total of 9 video-based self-supervised methods which achieve good performance on current benchmarks and cover a range of self-supervised paradigms in the video domain, including contrastive learning, pretext tasks, their combination and cross-modal audio-video learning. Due to the high computational cost of training self-supervised methods, we focus on works with publicly available weights for a common R(2+1)D-18 network~\cite{tran2018closer} pre-trained on Kinetics-400~\cite{Kinetics-400-arxiv}: \textbf{MoCo}~\cite{moco_v2}, \textbf{SeLaVi}~\cite{selavi-asano2020labelling}, \textbf{VideoMoCo}~\cite{videomoco-pan2021videomoco}, \textbf{Pretext-Contrast}~\cite{pretext-contrast-DBLP:journals/corr/abs-2010-15464}, \textbf{RSPNet}~\cite{rspnet-chen2020RSPNet}, \textbf{AVID-CMA}~\cite{avid-cma-morgado2021audio}, \textbf{CtP}~\cite{ctp-wang2021unsupervised}, \textbf{TCLR}~\cite{dave2021tclr} and \textbf{GDT}~\cite{gdt-patrick2020multimodal}. We compare these to no pre-training, \textit{i}.\textit{e}. training from scratch, and to fully supervised pre-training for the task of action recognition. It is worth noting that since we use publicly available models we cannot control the exact pre-training setup. There are subtle differences in the training regime of each method, such as how long the models were trained, the data augmentations used and the batch size. Details of these differences are provided in the appendix. However, all models use the same backbone and pre-training dataset, thus we can evaluate their downstream abilities in exactly the same way. To finetune for downstream tasks we simply attach a task-dependent head at the last layer of the pre-trained R(2+1)D-18 backbone to produce label predictions for the corresponding task. For a fair comparison, we use the same set of hyper-parameters, optimization and pre-processing during the downstream training of each pre-trained model. A minimal sketch of this evaluation setup is given below.
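The following sketch illustrates this shared evaluation setup with the torchvision implementation of R(2+1)D-18; loading the method-specific pre-trained checkpoint is elided, and the head here is a single linear classifier as used for action recognition.
\begin{verbatim}
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

def build_model(num_classes: int, linear_eval: bool = False) -> nn.Module:
    """R(2+1)D-18 backbone with a task-dependent classification head."""
    model = r2plus1d_18(pretrained=False)
    # ... load the self-supervised checkpoint into `model` here ...
    if linear_eval:
        # Linear evaluation: freeze the backbone, train the head only.
        for p in model.parameters():
            p.requires_grad = False
    # Replace the final layer with a new head for the downstream task;
    # its parameters are freshly created and thus remain trainable.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Full finetuning vs. linear evaluation on UCF-101 (101 classes):
finetune_model = build_model(101)
linear_model = build_model(101, linear_eval=True)
\end{verbatim}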
\section{Sensitivity Factor I: Downstream Domain} \label{sec:factor_1} We first investigate to what extent self-supervised methods learn features that are applicable to action recognition in any domain. We evaluate the suite of pre-trained models on UCF-101, NTU-60, Gym-99, SS-v2 and EK-100 for the task of action recognition. It is worth noting that, as well as variety in domain, these datasets include variety in the amount of training data (9.5k - 168k examples) and in the cardinality of the classification (60 - 300 classes). We attach a single classification layer to the pre-trained backbone and evaluate the models' performance on the downstream task in two settings. First, \textbf{full finetuning}, where we train the whole network starting from the pre-trained weights. Second, \textbf{linear evaluation}, where we train the classification layer only, using the frozen features of the pre-trained backbones. We follow the standard splits proposed in the original datasets and report video-level top-1 accuracy on the test sets. The details about splits, pre-processing and training for each dataset are provided in the appendix. \medskip \noindent\textbf{Full finetuning.} The left part of \cref{domain_shift} shows the results of full finetuning. From the results, it is clear that all self-supervised methods are very effective on UCF-101, as there is a significant gap between training from scratch and using self-supervised pre-training. This gap is reduced as the difference between Kinetics-400 and the downstream domain increases. SeLaVi, MoCo and AVID-CMA in particular are evidence of this, as these methods suffer when datasets have higher temporal awareness and less label overlap with Kinetics-400. When moving from UCF-101 to NTU-60 and Gym-99 there is a change in the ordering of the self-supervised methods. This demonstrates that a high performance on UCF-101 does not guarantee a self-supervised model is generalizable to other domains. The change in ranking is even more prominent for SS-v2 and EK-100, which require the most temporal awareness and also shift to a first-person viewpoint. This is particularly noticeable for AVID-CMA. On these datasets, MoCo has similar results to no pre-training, which is evidence that video-specific self-supervised learning methods are needed and that image-based methods are insufficient. Overall, supervised pre-training achieves good performance across the board, outperforming self-supervised methods on the most similar domain (UCF-101) as well as the most dissimilar domains (SS-v2 and EK-100). Amidst the tested self-supervised models, CtP, RSPNet, VideoMoCo and TCLR stand out as the self-supervised pre-training methods most generalizable to different domains. \begin{table}[t] \captionsetup{font=small,skip=2mm} \caption[]{\textbf{Sensitivity Factor I: Downstream Domain.} Video self-supervised methods evaluated across datasets with increasing domain shift with respect to the source dataset (see \cref{fig:radar}).
Colors denote relative rankings across methods for each dataset, ranging from \textcolor{lowcolor}{low} \begin{tikzpicture}% \pgfplotscolorbardrawstandalone[% colormap name=PiYG,% colorbar horizontal,% colorbar style={% height=0.18cm,% width=2cm,% hide axis,% }% ]% \end{tikzpicture} \textcolor{highcolor}{high}. The ranking of methods is domain-sensitive for both finetuning and linear classification and becomes less and less correlated with the current UCF-101 benchmark as the domain shift increases.} \centering \midsepremove \resizebox{\linewidth}{!}{\begin{tabular}{ l\C{75.4}{94.1}\C{92.8}{94.3}\C{88.9}{92.2}\C{53.8}{61.0}\C{25.7}{47.7}c\C{7.61}{65.87}\C{37.9}{91.7}\C{15.7}{53.9}\C{20.2}{45.1}\C{4.5}{16.6}\C{20.0}{26.6}} \toprule \addlinespace[0.1cm] \multirow{2}{*}{\textbf{Pre-training}} & \multicolumn{5}{Sc}{\textbf{Finetuning}} & & \multicolumn{6}{Sc}{\textbf{Linear Evaluation}} \\ \addlinespace[0.04cm] \cmidrule{2-6} \cmidrule{8-13} \addlinespace[0.1cm] & \multicolumn{1}{c}{UCF101} & \multicolumn{1}{c}{NTU60} & \multicolumn{1}{c}{Gym99} & \multicolumn{1}{c}{SSv2} & \multicolumn{1}{c}{EK 100} & & \multicolumn{1}{c}{K 400} & \multicolumn{1}{c}{UCF101} & \multicolumn{1}{c}{NTU60} & \multicolumn{1}{c}{Gym99} & \multicolumn{1}{c}{SSv2} & \multicolumn{1}{c}{EK 100}\\ \midrule \addlinespace[0.01cm] None & 75.4 & 92.9 & 89.4 & 56.8 & 25.7 & & \multicolumn{1}{c}{-}& \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-}\\ \addlinespace[0.01cm] \midrule \addlinespace[0.01cm] MoCo & 83.5 & 93.4 & 90.6 & 57.0 & 26.4 & & 34.5 & 65.4 & 16.0 & 21.2 & 7.4 & 21.4 \\ SeLaVi & 84.9 & 92.8 & 88.9 & 56.4 & 33.8 && 24.1 & 51.2 & 15.7 & 20.2 & 4.5 & 22.4 \\ VideoMoCo & 85.8 & 94.1 & 90.5 & 58.8 & 43.6 && 31.0 & 66.3 & 51.6 & 41.6 & 19.5 & 25.7 \\ Pretext-Contrast & 86.6 & 93.9 & 90.3 & 57.0 & 34.3 && 22.4 & 57.2 & 17.6 & 30.0 & 10.9 & 20.0 \\ RSPNet & 88.5 & 93.9 & 91.3 & 59.4 & 42.7 && 46.0 & 76.6 & 33.5 & 32.2 & 12.5 & 24.9 \\ AVID-CMA & 89.3 & 94.0 & 90.6 & 53.8 & 29.9 && 43.5 & 78.1 & 53.9 & 45.1 & 16.1 & 22.5 \\ CtP & 89.8 & 94.3 & 92.2 & 60.2 & 42.8 && 7.6 & 37.9 & 22.6 & 30.6 & 12.2 & 20.0 \\ TCLR & 90.8 & 94.1 & 91.5 & 60.0 & 36.2 && 19.9 & 63.3 & 33.5 & 33.0 & 10.8 & 21.8 \\ GDT & 91.1 & 93.9 & 90.4 & 57.8 & 37.3 && 38.6 & 75.7 & 38.2 & 34.2 & 11.9 & 25.3 \\ \addlinespace[0.01cm] \midrule \addlinespace[0.01cm] Supervised & 94.1 & 93.9 & 91.8 & 61.0 & 47.7 && 65.9 & 91.7 & 45.5 & 42.7 & 16.6 & 26.6 \\ \addlinespace[0.01cm] \bottomrule \end{tabular} } \label{domain_shift} \end{table} \medskip \noindent\textbf{Linear classification.} The right part of \cref{domain_shift} shows the results for linear classification. As with finetuning, the ranking among the self-supervised methods changes as the domain difference between the pre-training and the downstream dataset increases. For example, VideoMoCo ranks lower than GDT and RSPNet for UCF-101 and Kinetics-400 but ranks higher than both for all other datasets. This again demonstrates that performance on UCF-101 does not give a complete picture of a self-supervised model's success. We also observe that linear evaluation on Kinetics-400, as some papers report~\cite{qian2021spatiotemporal, recasens2021broaden, yang2020video}, has the same issue since it is highly correlated to UCF-101 performance. For UCF-101 and Kinetics-400, self-supervised models with contrastive objectives learn highly discriminative features compared to the non-contrastive models. 
This can be seen by comparing the contrastive models AVID-CMA, GDT and RSPNet to the non-contrastive SeLaVi and CtP. From the NTU-60 and Gym-99 results we observe that as the label overlap between the pre-training and the downstream dataset decreases, the performance gap between finetuning and linear evaluation increases considerably. This is true for both supervised and self-supervised pre-training. The most generalizable methods in the linear classification setting are the contrastive methods VideoMoCo and AVID-CMA as well as supervised pre-training. Interestingly, there are cases where VideoMoCo and AVID-CMA even outperform supervised pre-training, namely for NTU-60, Gym-99 and SS-v2.
\begin{myboxi}[]{LimeGreen!30}
\paragraph{Conclusion.} We observe from \cref{domain_shift} that performance on both UCF-101 finetuning and Kinetics-400 linear evaluation is not indicative of how well a self-supervised video model generalizes to different downstream domains, with the ranking of methods changing substantially across datasets and depending on whether full finetuning or linear classification is used.
\end{myboxi}
\section{Sensitivity Factor II: Downstream Samples}
\label{sec:factor_2}
The previous section analyzed sensitivity to changes in the downstream domain by evaluating performance on several different datasets. However, each of these datasets contains a large number of labeled examples for finetuning, which means training from scratch already obtains good performance. Not all domains and use cases have ample labeled video examples available for finetuning; thus we investigate the impact of the number of finetuning samples and whether self-supervised methods remain beneficial when little data is available for finetuning. We vary the amount of finetuning data, beginning from 1000 samples, sampled uniformly from the classes, and double the amount until we reach the full training set size. We report on four of the downstream datasets from the previous section: UCF-101, NTU-60, Gym-99 and SS-v2. The results are summarized in \cref{fig:training-data-size}.
\begin{figure}[t!]
\captionsetup{font=small,skip=2mm}
\centering
\includegraphics[width=\linewidth]{media/dataset_size_1x4_v11.pdf}
\caption{\textbf{Sensitivity Factor II: Downstream Samples.} Comparison of video self-supervised learning methods using a varying number of finetuning samples for four downstream datasets. Both the gap and rank among pre-training methods are sensitive to the number of samples available for finetuning. }
\label{fig:training-data-size}
\end{figure}
We first observe that the trends in the low data regime differ from those in the full data regime. The gap between supervised and self-supervised pre-training is much larger in low data settings, particularly for UCF-101 and Gym-99. NTU-60 is an exception: with 1000-4000 samples, CtP, GDT, AVID-CMA and TCLR outperform supervised pre-training. As also observed for the downstream domain, the ranking of self-supervised models changes with the amount of downstream examples available for finetuning. For example, on UCF-101, RSPNet is much more successful than CtP and TCLR when using only 1000 samples. This is because some self-supervised models benefit more than others from an increased amount of downstream samples. For example, CtP is one of the most generalizable pre-training strategies when finetuning with the full amount of data on UCF-101, Gym-99 and SS-v2, but this is not the case with fewer training samples.
Interestingly, GDT is consistently high in the ranking with low amounts of finetuning samples. This is likely due to the large number of temporal augmentations it uses, which help generalization when training data is limited.
\begin{myboxi}[]{red!10}
\paragraph{Conclusion.} We observe from \cref{fig:training-data-size} that video self-supervised models are highly sensitive to the amount of samples available for finetuning, with both the gap and rank between methods changing considerably across sample sizes on each dataset.
\end{myboxi}
\section{Sensitivity Factor III: Downstream Actions}
\label{sec:factor_3}
As indicated earlier, existing evaluations of self-supervised video learning methods have been limited to coarse-grained action recognition. In this section, we investigate whether current self-supervised tasks are only effective for these types of benchmarks or whether they learn features that are useful for differentiating more challenging and semantically similar actions. FineGym~\cite{Gym-99-arxiv} provides us with an experimental setup to study sensitivity to this factor. The dataset contains different evaluations with varying levels of semantic similarity, namely action recognition \textit{across all events}, \textit{within an event} or \textit{within a set}. Recognition \textit{across all events} uses the whole of Gym-99, containing all actions from the four gymnastic events. For recognition \textit{within an event} there are two subsets: Vault (VT) and Floor Exercise (FX), containing only actions from these two events. Recognition \textit{within a set} has two subsets, namely FX-S1, containing different \textit{leaps-jumps-hops} in Floor Exercise, and UB-S1, which consists of types of \textit{circles} in Uneven Bars. We also experiment with the long-tailed version of FineGym, Gym-288, which adds 189 more tail classes to Gym-99. Details of these subsets are in the appendix. As before, we attach a classification head to the pre-trained models and finetune the whole network on the training set of each subset. We report top-1 accuracy (mean per-class) on the testing sets following \cite{Gym-99-arxiv}. The results are shown in \cref{granularity}.
\begin{table}[t!]
\centering
\midsepremove
\captionsetup{font=small,skip=2mm}
\caption[]{\textbf{Sensitivity Factor III: Downstream Actions.} Video self-supervised models evaluated on different semantic similarities of action in FineGym: across events, within an event and within a set. Colors denote relative rankings across methods for each dataset, ranging from \textcolor{lowcolor}{low}
\begin{tikzpicture}%
\pgfplotscolorbardrawstandalone[%
colormap name=PiYG,%
colorbar horizontal,%
colorbar style={%
height=0.18cm,%
width=2cm,%
hide axis,%
}%
]%
\end{tikzpicture}
\textcolor{highcolor}{high}.
Many methods struggle on the within-set benchmark, where actions are most semantically similar.}
\setlength{\tabcolsep}{3mm}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l\C{84.4}{88.3}@{\hskip 2mm}c\C{24.7}{37.7}\C{75.9}{86.2}@{\hskip 2mm}c\C{45.0}{81.0}\C{81.5}{88.4}c\C{50.0}{58.4}}
\toprule
\addlinespace[0.04cm]
& \multicolumn{7}{c}{\textbf{Gym99}} & & \multicolumn{1}{c}{\textbf{Gym288}} \\
\addlinespace[0.04cm]
\cmidrule{2-8}\cmidrule{10-10}
\addlinespace[0.04cm]
\multicolumn{1}{l}{\textbf{Pre-training}} & \multicolumn{1}{c}{Across Events} & & \multicolumn{2}{c}{Within Event} && \multicolumn{2}{c}{Within Set} & & \multicolumn{1}{c}{Across Events}\\
\addlinespace[0.04cm]
\cmidrule{2-2}\cmidrule{4-5}\cmidrule{7-8}\cmidrule{10-10}
\addlinespace[0.04cm]
\multicolumn{1}{c}{} & \multicolumn{1}{c}{All} && \multicolumn{1}{c}{Vault} & \multicolumn{1}{c}{Floor} && \multicolumn{1}{c}{FX-S1} & \multicolumn{1}{c}{UB-S1} & & \multicolumn{1}{c}{All} \\
\addlinespace[0.02cm]
\arrayrulecolor{black}\midrule
\addlinespace[0.01cm]
None & 84.4 && 24.7 & 75.9 && 45.0 & 84.0 && 50.0 \\
\addlinespace[0.01cm]\midrule
\addlinespace[0.01cm]
SeLaVi & 84.8 && 25.4 & 76.0 && 50.2 & 81.5 && 52.8 \\
Pretext-Contrast & 85.7 && 28.5 & 81.4 && 65.8 & 86.2 && 52.7 \\
AVID-CMA & 85.8 && 30.4 & 82.7 && 67.2 & 88.4 && 52.5 \\
MoCo & 86.2 && 33.2 & 83.3 && 65.1 & 85.0 && 55.1 \\
VideoMoCo & 86.4 && 28.4 & 79.5 && 60.4 & 82.1 && 54.1 \\
GDT & 86.5 && 36.9 & 83.6 && 65.7 & 81.6 && 55.4 \\
RSPNet & 87.6 && 33.4 & 82.7 && 63.5 & 85.1 && 55.2 \\
TCLR & 88.0 && 29.8 & 84.3 && 61.0 & 85.3 && 55.4 \\
CtP & 88.3 && 26.8 & 86.2 && 79.7 & 88.4 && 56.5 \\
\addlinespace[0.01cm]\midrule
\addlinespace[0.01cm]
Supervised & 88.0 && 37.7 & 86.1 && 81.0 & 86.9 && 58.4 \\
\addlinespace[0.01cm]
\bottomrule
\end{tabular}%
}
\label{granularity}
\end{table}
The performance of self-supervised methods also varies considerably across downstream actions. The methods that perform best on Gym-99 do not necessarily generalize well to the subsets with higher semantic similarity among actions. This is particularly noticeable for RSPNet and TCLR, which drop in the ranking on the within-set subsets. All self-supervised methods, except GDT, struggle on Vault, likely due to the very intense motions of this action. Surprisingly, MoCo performs reasonably well when actions are more semantically similar, and is even comparable to GDT and RSPNet. The best self-supervised method for the subsets with high semantic similarity is CtP. This is especially evident on FX-S1, where it outperforms the second-best self-supervised method, AVID-CMA, by a 12\% absolute margin. As with downstream domain and downstream samples, supervised pre-training generalizes better than self-supervised methods across downstream actions, with only CtP achieving comparable performance. \cref{granularity} also compares balanced Gym-99 with long-tailed Gym-288. We observe that self-supervised methods are not robust to this change in distribution, with the gap in performance with respect to supervised pre-training increasing. However, the ranking remains consistent, meaning that performance on the balanced set is generally indicative of performance on the long-tailed set.
\begin{myboxi}[]{NavyBlue!15}
\paragraph{Conclusion.} Most self-supervised methods in \cref{granularity} are sensitive to the actions present in the downstream dataset and do not generalize well to more semantically similar actions.
This further emphasizes the need for proper evaluation of self-supervised methods beyond the current coarse-grained action classification.
\end{myboxi}
\section{Sensitivity Factor IV: Downstream Tasks}
\label{sec:factor_4}
The fourth factor we investigate is whether self-supervised video models are sensitive to the downstream task, \textit{i}.\textit{e}. whether the features they learn are useful for video understanding tasks beyond action recognition. We evaluate this in two ways. First, we keep the domain fixed and evaluate different tasks in a domain similar to the pre-training dataset. Second, we explore further tasks where the domain also changes, and examine how these two factors interplay.
\subsection{Task-shift within domain.}
We consider three different tasks, all defined on UCF-101: spatio-temporal action detection~\cite{yowo}, repetition counting~\cite{rep_counting} and arrow-of-time prediction~\cite{arrow_of_time}. Using UCF-101 allows us to keep the domain fixed across tasks and eliminates the impact of domain shift. Note that each task uses a different subset of the full UCF-101 dataset; however, the domain remains consistent. For each task, we use the R(2+1)D-18 networks as the pre-trained backbones, as before, and attach task-dependent heads. We report mean Average Precision for spatio-temporal localization~\cite{mettes2016spot}, mean absolute counting error for repetition counting~\cite{rep_counting} and classification accuracy for arrow-of-time prediction~\cite{arrow_of_time, aot2-wei2018learning}. Further details are in the appendix.
\begin{table}[t]
\centering
\midsepremove
\captionsetup{font=small,skip=2mm}
\caption[]{\textbf{Sensitivity Factor IV: Downstream Tasks.} Transferability of self-supervised video learning methods across video understanding tasks. Colors denote relative rankings across methods for each dataset, ranging from \textcolor{lowcolor}{low}
\begin{tikzpicture}%
\pgfplotscolorbardrawstandalone[%
colormap name=PiYG,%
colorbar horizontal,%
colorbar style={%
height=0.18cm,%
width=2cm,%
hide axis,%
}%
]%
\end{tikzpicture}
\textcolor{highcolor}{high}. Note that for repetition counting lower (error) is better. Self-supervised features are transferable to different downstream tasks when the domain shift is low, but struggle when there is also a domain shift. Action recognition on UCF-101 is not a good proxy for self-supervised video learning use cases where a downstream domain- and task-shift can be expected.
} \setlength{\tabcolsep}{3mm} \resizebox{\textwidth}{!}{% \begin{tabular}{l\C{75.4}{94.1}\C{0.327}{0.482}\CR{0.137}{0.232}\C{56.1}{87.0}c\C{7.9}{23.6}\C{7.4}{17.9}} \toprule \addlinespace[0.07cm] & \multicolumn{4}{c}{\textbf{Task-shift within domain}} & & \multicolumn{2}{c}{\textbf{Task-shift out of domain}} \\ \addlinespace[0.04cm] \cmidrule{2-5}\cmidrule{7-8} \addlinespace[0.04cm] \addlinespace[0.04cm] \multicolumn{1}{l}{\textbf{Pre-training}} & \multicolumn{1}{c}{Action} & \multicolumn{1}{c}{Action} & \multicolumn{1}{c}{Repetition} &\multicolumn{1}{c}{Arrow of} & & \multicolumn{1}{c}{Multi-label} & \multicolumn{1}{c}{Action} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{Recognition} & \multicolumn{1}{c}{Detection} & \multicolumn{1}{c}{Counting} &\multicolumn{1}{c}{Time} & & \multicolumn{1}{c}{Recognition} & \multicolumn{1}{c}{Detection} \\ \addlinespace[0.02cm] \midrule \addlinespace[0.01cm] None & 75.4 & 0.327 & 0.232 & 56.1 && 7.9 & 7.4 \\ \addlinespace[0.01cm]\midrule \addlinespace[0.01cm] MoCo & 83.5 & 0.416 & 0.220 & 80.3 && 8.1 & 11.7 \\ SeLaVi & 84.9 & 0.419 & 0.171 & 77.4 && 8.2 & 10.2 \\ VideoMoCo & 85.8 & 0.440 & 0.171 & 72.9 && 10.5 & 13.1 \\ Pretext-contrast & 86.6 & 0.462 & 0.168 & 77.2 && 8.9 & 12.7 \\ RSPNet & 88.5 & 0.467 & 0.151 & 87.0 && 9.1 & 14.1 \\ AVID-CMA & 89.3 & 0.435 & 0.162 & 83.3 && 8.4 & 10.0 \\ CtP & 89.8 & 0.465 & 0.178 & 77.1 && 9.6 & 10.0 \\ TCLR & 90.1 & 0.476 & 0.149 & 85.6 && 11.1 & 10.8 \\ GDT & 91.1 & 0.463 & 0.137 & 76.4 && 8.5 & 12.6 \\ \addlinespace[0.01cm]\midrule \addlinespace[0.01cm] Supervised & 94.1 & 0.482 & 0.137 & 77.0 && 23.6 & 17.9 \\ \addlinespace[0.01cm] \bottomrule \end{tabular}% } \label{task_shift} \end{table} From the results in \cref{task_shift}, we observe that self-supervised learning is beneficial to tasks beyond action recognition, with almost all methods outperforming training from scratch on spatio-temporal action detection, repetition counting and arrow-of-time prediction. Action detection results are well correlated with action recognition. Repetition counting and arrow-of-time have less correlation with action recognition, suggesting that the current benchmark on UCF-101 action recognition by itself is not a good indication of how well self-supervised methods generalize to other tasks. For repetition counting and arrow-of-time prediction, some methods perform comparably to or outperform supervised pre-training. Notably, RSPNet and TCLR generalize the best across these tasks, with GDT also performing well on repetition counting. CtP ranks high on action recognition and detection but performs modestly for repetition counting. This shows that different methods have different task sensitivity, so a thorough evaluation along downstream tasks is needed. \subsection{Task-shift out of domain.} We also evaluate how well the self-supervised models generalize when both the domain and the task change. We do so with two popular video understanding benchmarks: long-term multi-label classification on Charades \cite{charades-sigurdsson:hal-01418216} and short-term spatio-temporal action detection on AVA \cite{AVA-Gu_2018_CVPR}. For both, we follow the setup and training procedure from \cite{Feichtenhofer2019SlowFastNF} with R(2+1)D-18 models as the pre-trained backbone and we measure performance in mean Average Precision. Details are in the appendix. 
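To illustrate how the same backbone serves these different tasks, the sketch below attaches alternative heads to the clip features; the head sizes (101 UCF-101 classes, 157 Charades classes) are illustrative, and the exact heads and losses follow the cited implementations rather than this simplification.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

backbone = r2plus1d_18(pretrained=False)
backbone.fc = nn.Identity()  # expose 512-d clip features

heads = {
    "recognition": nn.Linear(512, 101),  # softmax + cross-entropy
    "multi_label": nn.Linear(512, 157),  # per-class sigmoid + BCE
    "counting":    nn.Linear(512, 1),    # scalar regression
}

clips = torch.randn(2, 3, 16, 112, 112)  # (batch, C, T, H, W)
feats = backbone(clips)                  # (2, 512)
logits = heads["multi_label"](feats)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))
\end{verbatim}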
From the results in \cref{task_shift}, we observe that supervised pre-training is far more generalizable than all self-supervised methods, which all struggle considerably when both the domain and task change. For long-term action classification on Charades, TCLR is slightly better than the other methods. On AVA, RSPNet is the best performing self-supervised method, with VideoMoCo second. Earlier, in \cref{sec:factor_1}, we observed that these were two of the methods most robust to domain shift, suggesting that this factor is key to success on AVA.
\begin{myboxi}[]{Goldenrod!50}
\paragraph{Conclusion.} The results in \cref{task_shift} reveal that action classification performance on UCF-101 is mildly indicative of the transferability of self-supervised features to other tasks on UCF-101. However, when methods pre-trained on Kinetics-400 are confronted with a domain change in addition to the task change, UCF-101 results are no longer a good proxy and the gap between supervised and self-supervised pre-training is large.
\end{myboxi}
\section{SEVERE-benchmark}
As evident from the results in the previous sections, current video self-supervised methods are benchmark-sensitive to the four factors we have studied. Based on our findings, we propose the SEVERE-benchmark (\underline{SE}nsitivity of \underline{V}id\underline{E}o \underline{RE}presentations) for use in future works to more thoroughly evaluate new video self-supervised methods for generalization along the four sensitivity factors we have examined. Since we do not expect future works to run all the experiments from our study, we create a subset of experiments that are indicative benchmarks for each sensitivity factor and realistic to run. We summarize the benchmark composition in \cref{proposed-benchmarks} and detail its motivation per factor.
\noindent\textbf{Downstream domain.} To measure a self-supervised model's domain sensitivity we recommend using Something-Something-v2 and FineGym-99. These two datasets come from domains distinct from Kinetics-400 and UCF-101, and also from each other. FineGym-99 evaluates a model's ability to generalize to datasets with less distinctive backgrounds, where there are few actions in common with Kinetics-400. SS-v2 evaluates the generalizability to actions that require high temporal awareness as well as the shift to a first-person viewpoint. It is evident from \cref{proposed-benchmarks} that there are significant rank changes between UCF-101, Gym-99 and SS-v2; thus these three datasets provide a challenging subset for future methods.
\noindent\textbf{Downstream samples.} For sample sensitivity, we recommend using 1000 samples on UCF-101 and Gym-99. Using 1000 samples showed the most dramatic difference from the full dataset size, particularly for these datasets, where there is a considerable gap between self-supervised and supervised pre-training as well as considerable rank change among the methods.
\noindent\textbf{Downstream actions.} To test generalizability to recognizing semantically similar actions, we recommend evaluating on the two within-set granularities of Gym-99, \textit{i}.\textit{e}. FX-S1 and UB-S1. Both of these subsets have high semantic similarity between actions, and methods currently struggle to generalize to both, as can be seen in \cref{proposed-benchmarks}. There is also a significant gap between supervised and self-supervised pre-training on FX-S1, highlighting the potential for future works in this area.
\noindent\textbf{Downstream task.} To evaluate task sensitivity, we recommend future works use repetition counting on UCF-101 and multi-label action classification on Charades. Repetition counting on UCF-101 highlights strengths different from action recognition, as it probes a model's ability to generalize to a task that requires more temporal understanding, without measuring the impact of the domain. We recommend multi-label classification on Charades as it is currently a very challenging task for self-supervised models and allows the combination of domain change and task shift to be investigated. Code to compare on the SEVERE-benchmark will be made available.
\begin{table}[t]
\captionsetup{font=small,skip=2mm}
\caption[]{\textbf{Proposed SEVERE-benchmark} for evaluating video self-supervised methods for generalization along downstream domains, samples, actions and tasks. }
\centering
\midsepremove
\resizebox{\linewidth}{!}{\begin{tabular}
{ l\C{75.4}{94.1}c \C{53.8}{61.0}\C{88.9}{92.2}@{\hskip 2mm}c \C{43.1}{86.0}\C{19.2}{51.2}@{\hskip 2mm}c \C{45.0}{81.0}\C{81.5}{88.4}@{\hskip 2mm}c \CR{0.137}{0.232}\C{7.9}{23.6} }
\toprule
\addlinespace[0.1cm]
& \multicolumn{1}{Sc}{\textbf{Existing}} & & \multicolumn{11}{Sc}{\textbf{SEVERE-benchmark}} \\
\addlinespace[0.04cm]
\cmidrule{2-2} \cmidrule{4-14}
\addlinespace[0.1cm]
\multicolumn{1}{l}{\textbf{Pre-training}} & \multicolumn{1}{c}{} & & \multicolumn{2}{Sc}{Domains} & & \multicolumn{2}{Sc}{Samples} & & \multicolumn{2}{Sc}{Actions}& & \multicolumn{2}{Sc}{Tasks}\\
\cmidrule(lr){4-5} \cmidrule(lr){7-8} \cmidrule(lr){10-11} \cmidrule(lr){13-14}
\addlinespace[0.1cm]
& \multicolumn{1}{c}{UCF101} & & \multicolumn{1}{c}{SS-v2} & \multicolumn{1}{c}{Gym-99} & & \multicolumn{1}{c}{UCF ($10^{3}$)} & \multicolumn{1}{c}{Gym-99 ($10^{3}$)} & & \multicolumn{1}{c}{FX-S1} & \multicolumn{1}{c}{UB-S1}& & \multicolumn{1}{c}{UCF-RC} & \multicolumn{1}{c}{Charades-MLC}\\
\midrule
\addlinespace[0.01cm]
None & 75.4 & & 56.8 & 89.4 && 43.1 & 23.1 && 45.0 & 84.0 && 0.232 & 7.9 \\
\addlinespace[0.01cm]
\midrule
\addlinespace[0.01cm]
MoCo & 83.5 & & 57.0 & 90.6 && 60.7 & 29.0 && 65.1 & 85.0 && 0.220 & 8.1 \\
SeLaVi & 84.9 && 56.4 & 88.9 && 69.2 & 28.3 && 50.2 & 81.5 && 0.171 & 8.2 \\
VideoMoCo & 85.8 && 58.8 & 90.5 && 65.8 & 19.2 && 60.4 & 82.1 && 0.171 & 10.5 \\
Pretext-Contrast & 86.6 && 57.0 & 90.3 && 62.7 & 25.9 && 65.8 & 86.2 && 0.168 & 8.9 \\
RSPNet & 88.5 && 59.4 & 91.3 && 75.7 & 32.2 && 63.5 & 85.1 && 0.151 & 9.1 \\
AVID-CMA & 89.3 && 53.8 & 90.6 && 68.8 & 32.1 && 67.2 & 88.4 && 0.162 & 8.4 \\
CtP & 89.8 && 60.2 & 92.2 && 63.7 & 31.2 && 79.7 & 88.4 && 0.178 & 9.6 \\
TCLR & 90.8 && 60.0 & 91.5 && 70.6 & 24.5 && 61.0 & 85.3 && 0.149 & 11.1 \\
GDT & 91.1 && 57.8 & 90.4 && 77.8 & 44.1 && 65.7 & 81.6 && 0.137 & 8.5 \\
\addlinespace[0.01cm]
\midrule
\addlinespace[0.01cm]
Supervised & 94.1 && 61.0 & 91.8 && 86.0 & 51.2 && 81.0 & 86.9 && 0.137 & 23.6 \\
\addlinespace[0.01cm]
\bottomrule
\end{tabular}
}
\label{proposed-benchmarks}
\end{table}
\section{Observations, Limitations and Recommendations}
\textbf{Observations.} We hope that our study and the resulting benchmark provide helpful insights for future research designing novel self-supervised methods for generalizable video representation learning. From the benchmark results in~\cref{proposed-benchmarks}, we observe that:
\begin{enumerate}[label=(\roman*)]
\item There is no clear winner, as different methods stand out in different downstream settings.
\item Supervised pre-training is dominant across all sensitivity factors, especially when the number of available downstream samples is limited and when there is a change in both the downstream domain and the downstream task.
\item Self-supervised contrastive methods that explicitly encourage features to be distinct across the temporal dimension transfer well. This is visible from the consistent performance of GDT, TCLR and RSPNet across the different sensitivity factors.
\item Learning certain temporal invariances may prevent generalizability to temporal or fine-grained benchmarks. This is evident from GDT's performance on SS-v2 and UB-S1. These benchmarks require distinguishing between actions such as \textit{moving something left} vs. \textit{moving something right} in SS-v2 and \textit{giant circle forwards} vs. \textit{giant circle backwards} in UB-S1. The invariance to temporal reversal learned by GDT impacts its ability to recognize such actions. Similarly, MoCo outperforming VideoMoCo on the FX-S1 and UB-S1 subsets of Gym-99 suggests that the invariance to frame dropout in VideoMoCo can harm performance on highly similar actions.
\item Pretext tasks specific to videos can be effective for learning more fine-grained features. CtP generalizes well both to different domains, where the background is less indicative of the action, and to more semantically similar actions. Its pretext task is to track and estimate the position and size of image patches moving in a sequence of video frames. Such a formulation requires the network to learn to follow moving targets and ignore the static background information. CtP's generalization success demonstrates that contrastive learning is not the only way forward for self-supervised video representation learning.
\item \cref{features} shows the feature similarity on Kinetics-400, computed with centered kernel alignment~\cite{cka}, between supervised pre-training and the best self-supervised methods, \textit{i}.\textit{e}. GDT, RSPNet, TCLR and CtP. This figure illustrates that contrastive methods seem to imitate supervised pre-training, as the correlation between supervised pre-training and the three contrastive methods (RSPNet, GDT and TCLR) is high. This explains the good performance of these methods on UCF-101 with 1000 examples. By contrast, CtP's features are far from those of supervised pre-training. This is interesting because CtP generalizes well to new domains and actions; it shows that good generalization capability can be obtained without imitating supervised pre-training.
\end{enumerate}
\begin{figure}[t!]
\captionsetup{font=small,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/cka_heatmaps_v3.0.pdf}
\caption{\textbf{Representation similarity} between features of the top self-supervised methods and supervised pre-training on the Kinetics-400 validation set (using centered kernel alignment~\cite{cka}). Contrastive methods have a high correlation with supervised pre-training, while CtP's features are far away, showing potential both for imitating supervised learning and for learning features distinct from it.}
\label{features}
\end{figure}
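For reference, a minimal sketch of linear CKA, one common instantiation of the similarity measure of \cite{cka} behind \cref{features}; the feature matrices are assumed to be row-aligned over the same validation clips, and this simplified form is illustrative rather than our exact implementation.
\begin{verbatim}
import torch

def linear_cka(X, Y):
    # X: (n, d1), Y: (n, d2) features of the same n clips.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    hsic = (X.t() @ Y).norm() ** 2  # ||X^T Y||_F^2
    return hsic / ((X.t() @ X).norm() * (Y.t() @ Y).norm())

# e.g. 512-d features of two pre-trained models on the same clips
similarity = linear_cka(torch.randn(1000, 512), torch.randn(1000, 512))
\end{verbatim}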
\medskip
\noindent\textbf{Limitations.} While our study has highlighted the benchmark sensitivity of video self-supervised learning across four factors, there are many more factors that we do not consider in this work. Due to computational limits, we keep the source dataset fixed as Kinetics-400 and use publicly available pre-trained models. This means there is variability in the exact pre-training setup used by each model. We hope that future works will explore this factor as well as the impact of other large-scale pre-training datasets, such as Ego4D~\cite{Ego4D2021}, on the generalization of video self-supervised models. Another limitation of our study is that we only consider a fixed R(2+1)D-18 backbone, which is currently one of the most commonly used in video self-supervised learning. This keeps our comparison between methods fair; however, it does limit the ability of methods to perform well on datasets such as EPIC-Kitchens-100. Another factor that could be explored further is the task. We have considered a selection of video understanding tasks centered around actions. However, there are many more tasks that could be explored, both action-centric and beyond, for example action anticipation, action segmentation, tracking and motion estimation.
\noindent\textbf{Recommendations.} Based on the results and our observations, we have several recommendations for future works in video self-supervised learning. (i) Our study has highlighted the need for more focus on the generalizability of self-supervised learning methods, particularly along the domain and dataset size factors. (ii) Learning to distinguish across the temporal dimension is effective and a useful direction to pursue further for generalizability. (iii) Pretext tasks like the one used in CtP are good for generalizability to domains and actions; thus designing new video-specific pretext tasks is a promising direction. This could also be combined with contrastive learning to gain the benefits of both types of learning.
\section{Details of the Evaluated Self-Supervised Models}
\label{sec:evaluated-models}
We use a variety of self-supervised methods in our paper; here we describe each method:
\noindent\textbf{MoCo~\cite{moco_v2}} is a contrastive learning method originally proposed for representation learning in images. Positives are created by applying different spatial augmentations to a video; negatives are other videos. To obtain negatives beyond the current batch, MoCo uses a momentum-updated encoder and maintains a queue of encoded samples from previous batches.
\noindent\textbf{SeLaVi~\cite{selavi-asano2020labelling}} views the audio and visual modalities as different augmentations of a video and learns with a cross-modal clustering pretext task.
\noindent\textbf{VideoMoCo~\cite{videomoco-pan2021videomoco}} extends MoCo to the temporal domain. It does this with an adversarial dropout augmentation which removes the frames the model considers most important. With the contrastive learning loss, the model learns invariance to this adversarial frame dropout alongside the spatial augmentations used in MoCo.
\noindent\textbf{Pretext-Contrast~\cite{pretext-contrast-DBLP:journals/corr/abs-2010-15464}} combines the pretext task approach with contrastive learning. As its pretext task it uses the video cloze procedure~\cite{vcp}, where the goal is to predict which augmentations have been applied to a video clip. For the contrastive learning objective, different temporal shifts, \textit{i}.\textit{e}. distinct clips from the same video, are considered.
\noindent\textbf{RSPNet~\cite{rspnet-chen2020RSPNet}} also combines pretext and contrastive tasks, with a focus on video speed. The pretext task is to predict the relative difference in speed between two versions of the same video, while the contrastive task creates extra positives and negatives by augmenting videos with different speeds, along with the spatial augmentations.
\noindent\textbf{AVID-CMA~\cite{avid-cma-morgado2021audio}} is a multi-modal contrastive learning method which uses audio in addition to the visual modality. It first uses cross-modal contrastive learning, where one modality serves as the positives and the other as the negatives. Then it uses within-modality contrastive learning, where additional positives with high audio and visual similarity are sampled.
\noindent\textbf{CtP~\cite{ctp-wang2021unsupervised}} performs self-supervised learning through a ``catch the patch'' pretext task. The goal in this task is to predict the trajectory of an image patch which is resized and moved through a sequence of video frames.
\noindent\textbf{TCLR~\cite{dave2021tclr}} is a contrastive method which encourages features to be distinct across the temporal dimension. It does this by using clips from the same video as negatives. Therefore, instead of encouraging invariance to temporal shift as other methods do, it encourages the model to distinguish between different shifts. It also uses an extensive set of spatial augmentations.
\noindent\textbf{GDT~\cite{gdt-patrick2020multimodal}} is a multi-modal contrastive method which composes a series of different augmentations and encourages the model to learn invariance to some while distinguishing between others. We use the best performing version of GDT, which encourages invariance to spatial augmentations, the audio and visual modalities and temporal reversal, while encouraging the model to distinguish between different temporal shifts.
While all models are pre-trained on Kinetics-400 and use an R(2+1)D-18 backbone with a 112x112 spatial input size, there are some smaller differences in how the models are trained. Due to the computational cost of training these models, we download publicly available models or obtain them from the authors; therefore we cannot control for these smaller differences in the pre-training setup. These differences include the number of pre-training epochs, the batch size, the number of video frames used, and the spatial and temporal augmentations. We list these differences in Table~\ref{tab:method_diffs}.
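Most of the contrastive methods above (MoCo, VideoMoCo, RSPNet, AVID-CMA, TCLR, GDT) optimize an InfoNCE-style objective. The following is a minimal, purely illustrative sketch of such a loss with one positive per query and a bank of negatives (e.g. MoCo's queue); it is not the exact loss of any single method.
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(query, positive, negatives, temperature=0.07):
    # query/positive: (B, D), negatives: (K, D)
    q = F.normalize(query, dim=1)
    k = F.normalize(positive, dim=1)
    n = F.normalize(negatives, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)  # (B, 1)
    l_neg = q @ n.t()                         # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive = index 0
    return F.cross_entropy(logits, labels)

# embeddings of two augmented clips of the same video + a negative bank
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128),
                torch.randn(4096, 128))
\end{verbatim}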
\begin{table}[]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\caption{\textbf{Pre-training differences of our evaluated self-supervised methods.} While all models are pre-trained with the same backbone and dataset, there are differences in how many epochs they were trained for, the batch size and number of frames they use, and the spatial and temporal augmentations they are encouraged to be invariant to.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lllllccccccccccc}
\toprule
\multicolumn{1}{l}{\multirow{3}{*}{\textbf{Method}}} & & & & && \multicolumn{6}{c}{\textbf{Spatial Augmentations}} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textbf{Temporal Augmentations}} \\
\cmidrule{7-12} \cmidrule{14-16}
\multicolumn{1}{c}{} & Extra & Epochs & Batch & Num & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Random} & \multicolumn{1}{c}{Horiz.} & \multicolumn{1}{c}{Grayscale} & \multicolumn{1}{c}{Color} & \multicolumn{1}{c}{Gaussian} & \multicolumn{1}{c}{Scaling} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Shift} & \multicolumn{1}{c}{Reversal} & \multicolumn{1}{c}{Speed} \\
& Modality & & Size & Frames & & Crop & Flip & & Jitter & Blur & \\
\midrule
MoCo & & 200 & 128 & 16 & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & &\ding{51}& & \\
SeLaVi & Audio & 200 & 1024 & 30 & & \ding{51} & \ding{51} & & & & & & & & \\
VideoMoCo & & 200 & 128 & 32 & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & & & & \\
Pretext-Contrast & & 200 & 16 & 16 & & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & & \ding{51} & & \\
RSPNet & & 200 & 64 & 16 & & \ding{51} & & & \ding{51} & \ding{51} & & & \ding{51} & & \ding{51} \\
AVID-CMA & Audio & 400 & 256 & 16 & & \ding{51} & \ding{51} & & \ding{51} & & \ding{51} & & & \\
CtP & & 90 & 32 & 16 \\
TCLR & & 100 & 40 & 16& & \ding{51} & \ding{51} & \ding{51} & \ding{51} & & \ding{51} & & & & \\
GDT & Audio & 100 & 512 & 30 & & \ding{51} & \ding{51} & &\ding{51} & & & & & \ding{51} & \\
\midrule
Supervised & & 45 & 32 & 16 & & \ding{51} & \ding{51} & & & & & & \ding{51}\\
\bottomrule
\end{tabular}%
}
\label{tab:method_diffs}
\end{table}
\section{Downstream Experimental Details}
\label{sec:expt-details}
\subsection{Downstream Domain}
\label{app:domain-shift-expt}
In \cref{sec:factor_1} we investigate to what extent self-supervised methods learn features applicable to action recognition in any domain. Here we explain the datasets, splits and training details used to do this.
\noindent\textbf{Datasets.} We report our experiments on the following datasets:\\
\textit{UCF-101} \cite{UCF-101-arxiv} is currently one of the most widely used datasets for evaluating video self-supervised learning models. It consists of YouTube videos from a set of 101 coarse-grained classes with a high overlap with the actions in Kinetics-400. We use the first standard split proposed in the original paper \cite{UCF-101-arxiv}, containing 9,537 training and 3,783 testing samples for the 101 action classes.\\
\textit{NTU-60} \cite{NTU-60-arxiv} consists of daily human actions captured in a controlled lab setting with a fixed number of actors. Although it has some overlap with Kinetics-400 actions, it is visually quite different due to the setting. We use the cross-subject protocol proposed in \cite{NTU-60-arxiv} to split the data into 40,320 training and 16,560 testing samples for 60 action classes.\\
\textit{Gym-99}. We use FineGym version $v1.0$ \cite{Gym-99-arxiv}, a dataset of fine-grained actions constructed from recorded gymnastic competitions.
We use the Gym-99 subset, which contains 99 action classes with 20,484 and 8,521 samples in the train and test sets respectively.\\
\textit{SS-v2} \cite{SS-v2-arxiv} is a crowdsourced collection of first-person videos aimed at instilling common-sense understanding. It differs significantly from Kinetics-400 in terms of visual appearance and point-of-view. We use the original dataset splits from \cite{SS-v2-arxiv}, containing 168,913 training and 24,777 testing samples for 174 action classes.\\
\textit{EPIC-Kitchens-100} \cite{EPIC-100-arxiv} is a large-scale egocentric dataset consisting of daily actions performed in a kitchen. It has annotations for verbs (97) and nouns (300), and an action is defined as a tuple of these. Like SS-v2, EK-100 also differs significantly from Kinetics-400 in terms of visual appearance and point-of-view. We use the standard splits from \cite{EPIC-100-arxiv}, containing 67,217 samples in the training set and 9,668 in the validation set. In the main paper we only aim to recognize the 97 verb classes; we provide results for the noun and action recognition tasks in \cref{sec:epic_nouns}.
\noindent\textbf{Training Details.} During training, we sample a random clip of 32 frames from each video with standard augmentations, \textit{i}.\textit{e}. a random multi-scale crop of size 112x112 and color jittering. We train with the Adam optimizer. The learning rates, scheduling and total number of epochs vary across datasets and are shown in \cref{tab:downstream_domain_training}; each model is trained with the same hyper-parameters for the corresponding dataset. For inference, we use 10 linearly spaced clips of 32 frames each. For each frame we take a center crop which is resized to 112x112 pixels. To calculate the action class prediction of a video, we take the mean of the predictions from each clip and report top-1 accuracy.
\begin{table}[]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\caption{\textbf{Training details} of finetuning and linear evaluation on the various downstream datasets. The learning rate is scheduled using a multi-step scheduler with $\gamma = 0.1$ at the corresponding steps for each dataset. We train all models with the same hyperparameters for the corresponding dataset.}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lccccccccc}
\toprule
\multicolumn{1}{l}{\multirow{3}{*}{\textbf{Dataset}}} & \multicolumn{4}{c}{\textbf{Finetuning}} & \multicolumn{1}{c}{} & \multicolumn{4}{c}{\textbf{Linear Evaluation}} \\
\cmidrule{2-5} \cmidrule{7-10}
& Batch Size & Learning rate & Epochs & Steps & & Batch Size & Learning rate & Epochs & Steps \\
\midrule
UCF-101 & 32& 0.0001 & 160 & [60,100,140] & & 64& 0.01 & 100 & [40,80] \\
NTU-60 & 32& 0.0001 & 180 & [90, 140, 160] & & 64& 0.01 & 120 & [40,80,100] \\
Gym-99 & 32& 0.0001 & 160 & [60,100,140] & & 64 & 0.01 & 120 & [40,80,100] \\
SS-v2 & 32& 0.0001 & 45 & [25, 35, 40] & & 64& 0.01 & 40 & [20,30] \\
EK-100 & 32& 0.0025 & 30 & [20, 25] & & 32 & 0.0025 & 30 & [20, 25] \\
K-400 & -& -& -&- & & 64& 0.01 & 40 & [10,20,30] \\
\bottomrule
\end{tabular}%
}
\label{tab:downstream_domain_training}
\end{table}
\subsection{Downstream Samples}
In \cref{sec:factor_2} we measure how sensitive current video self-supervised models are to the amount of downstream samples. We do this by varying the size of the training data, starting from 1000 examples and doubling until we reach the full train set. We use the same data splits as in the downstream domain experiments, explained in \cref{app:domain-shift-expt}, and sample a subset of video clips from the respective train sets, with the same random subset used across the different models to make the comparison fair. For each dataset, we use the same training and testing procedure as in the downstream domain experiments, explained in \cref{app:domain-shift-expt} and \cref{tab:downstream_domain_training}.
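A minimal sketch of this class-uniform subsampling with a fixed seed (so that every pre-trained model sees the identical subset) is given below; the (video, label) list format is an illustrative assumption.
\begin{verbatim}
import random
from collections import defaultdict

def subsample_uniform(samples, n_total, seed=0):
    # samples: list of (video_path, label); draws ~n_total examples
    # spread uniformly over the classes, deterministically via `seed`.
    by_class = defaultdict(list)
    for video, label in samples:
        by_class[label].append((video, label))
    rng = random.Random(seed)
    per_class = max(1, n_total // len(by_class))
    subset = []
    for label in sorted(by_class):
        items = by_class[label]
        rng.shuffle(items)
        subset.extend(items[:per_class])
    return subset

# sizes used in our study: 1000, 2000, 4000, ... up to the full set
# subset = subsample_uniform(train_samples, n_total=1000)
\end{verbatim}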
\subsection{Downstream Actions}
In \cref{sec:factor_3} we measure how benchmark-sensitive current video self-supervised models are to downstream actions. We do so by measuring performance on different subsets, defined in the FineGym dataset~\cite{Gym-99-arxiv}, which have increasing semantic similarity. We provide the details of Gym-99, Gym-288 and the four different subsets of Gym-99 below:
\noindent\textbf{Gym-99} consists of 29k video clips of 99 different actions across the four gymnastic events in FineGym: Vault, Floor Exercise, Balance Beam and Uneven Bars. This is a relatively balanced subset of the full FineGym dataset, with all actions having more than 80 occurrences. There are a total of 20.5k training videos and 8.5k testing videos.
\noindent\textbf{Vault} is a subset of Gym-99 containing 1.5k videos of the 6 actions from the Vault event. The training split contains 1.0k examples and the testing split contains 0.5k examples.
\noindent\textbf{Floor} contains the actions of the Floor Exercise event from Gym-99. It consists of 7.5k instances across 35 actions, with a split of 5.3k for training and 2.2k for testing.
\noindent\textbf{FX-S1} is a subset of leaps, jumps and hops from the Floor event in Gym-99. This subset of 11 actions contains a total of 2.6k video clips, with 1.9k for training and 0.7k for testing.
\noindent\textbf{UB-S1} contains 5k videos of 15 actions from the Uneven Bars event, with a split of 3.5k for training and 1.5k for testing. The actions consist of different types of circles around the bars.
\noindent\textbf{Gym-288} is a long-tailed version of Gym-99 containing 32k videos, with 22.6k training and 9.6k testing samples. It adds 189 infrequent classes to the 99 classes in Gym-99, where actions can have as few as 1 or 2 instances in training. This results in a total of 288 action classes from the four gymnastic events.
We follow the same training and evaluation procedure as for finetuning on Gym-99 in the downstream domain experiments. In particular, for training we sample a random clip of 32 frames from each video with standard augmentations, \textit{i}.\textit{e}. a random multi-scale crop of size 112x112 and color jitter. Each model is trained with the Adam optimizer using a learning rate of 0.0001 and a multi-step scheduler with $\gamma {=} 0.1$ at epochs [60, 100, 140] for 160 epochs. For inference, we use 10 linearly spaced clips of 32 frames each. For each frame we take a center crop which is resized to 112x112 pixels. To calculate the action class prediction of a video, we take the mean of the predictions from each clip. For each subset, we compute accuracy per action class and report the mean over all action classes, as in the original dataset \cite{Gym-99-arxiv}.
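A minimal sketch of this video-level inference protocol, assuming a model that returns class logits and a video tensor of at least clip length already resized to 112x112:
\begin{verbatim}
import torch

@torch.no_grad()
def video_level_prediction(model, video, num_clips=10, clip_len=32):
    # video: (C, T, H, W) with T >= clip_len; average softmax scores
    # over linearly spaced clips, then take the arg max.
    C, T, H, W = video.shape
    starts = torch.linspace(0, T - clip_len, num_clips).long()
    probs = []
    for s in starts:
        clip = video[:, int(s):int(s) + clip_len].unsqueeze(0)
        probs.append(model(clip).softmax(dim=1))
    return torch.stack(probs).mean(dim=0).argmax(dim=1)
\end{verbatim}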
\subsection{Downstream Tasks}
In \cref{sec:factor_4} we investigate how sensitive self-supervised methods are to the downstream task and whether they generalize beyond action recognition. We provide the details of the experimental setup used for each task below.
\noindent\textbf{Spatio-temporal action detection}. The goal of this task is to predict the bounding box of an actor in a given video clip, both spatially and temporally, along with the action class. We use the UCF101-24 benchmark, a subset of UCF-101 with bounding box annotations for 3,207 videos from 24 action classes. We follow the implementation of K{\"{o}}p{\"{u}}kl{\"{u}} \textit{et al}. \cite{yowo}, using only a 3D-CNN branch for spatio-temporal action detection. We initialize the 3D backbone with the pre-trained, self-supervised R(2+1)D-18 models. A clip of 16 frames is sampled from the video as input, with standard data augmentations, \textit{i}.\textit{e}. horizontal flipping, random scaling and random spatial cropping. Each model is trained using the Adam optimizer with an initial learning rate of 1e-4, weight decay of 5e-4 and batch size 64, for a total of 12 epochs. The learning rate is decayed using a multi-step scheduler with $\gamma {=} 0.5$ at epochs [4,6,8,10]. For testing we also follow \cite{yowo} and report video-mAP over all the action classes.
\noindent\textbf{Repetition counting}. The goal of this task is to estimate the number of times an action repeats in a video clip. We use the UCFRep benchmark proposed by Zhang \textit{et al}. \cite{rep_counting}, which is a subset of UCF-101. The dataset consists of 526 videos with 3,506 repetition number annotations. From the annotated videos, 2M sequences of 32 frames and spatial size 112x112 are constructed, which are used as the input. We use the implementation from the original benchmark \cite{rep_counting} with the pre-trained R(2+1)D-18 models as the backbone networks. Each model is trained for 100 epochs with a batch size of 32 using the Adam optimizer with a fixed learning rate of 0.00005. For testing, we follow the protocol from \cite{rep_counting} and report the mean counting error.
\noindent\textbf{Arrow-of-time}. The goal of this task is to predict the direction (forward or backward) of a video. We closely follow the setup used by Ghodrati \textit{et al}. \cite{arrow_of_time}. The full UCF-101 dataset is used with two versions of each video, one normal and one reversed. During training, for each video we sample 8 frames linearly with a random offset and take 112x112 center crops, training for 10 epochs with a batch size of 12 and a learning rate of $10^{-5}$. We do not use any augmentations or learning rate schedulers. During testing, we sample 8 frames linearly. We report top-1 binary classification accuracy.
\noindent\textbf{Multi-label classification on Charades}. Charades \cite{charades-sigurdsson:hal-01418216} is made up of videos of people recording casual everyday activities at their homes. Videos in Charades are longer than those in the other datasets we use, and the goal is to recognize multiple different actions in each video. A per-class sigmoid output is used for multi-label prediction. We use the implementation of Feichtenhofer \textit{et al}. \cite{large-scale-feichtenhofer2021large}\footnote{\href{https://github.com/facebookresearch/SlowFast}{https://github.com/facebookresearch/SlowFast}} with the R(2+1)D-18 backbone. During training, we use 32 frames with a sampling rate of 8. Since this task requires longer temporal context, we observe that using more frames with a higher sampling rate is beneficial. We use a spatial crop of 112x112 and augmentations such as random short-side scaling, random spatial cropping and horizontal flipping.
We train for 57 epochs in total with a batch size of 16 and a learning rate of 0.0375, using a multi-step scheduler with $\gamma = 0.1$ at epochs [41, 49]. During testing, following \cite{large-scale-feichtenhofer2021large}, we spatio-temporally max-pool predictions over 10 clips for a single video. We report mean average precision (mAP) across classes.
\noindent\textbf{Action detection on AVA.} AVA \cite{AVA-Gu_2018_CVPR} consists of clips extracted from films. We use version v2.2, with bounding box annotations for spatio-temporal detection of temporally fine-grained action classes. The goal of this task is to detect and predict action classes from proposals generated by off-the-shelf person detectors. We again use the implementation of \cite{large-scale-feichtenhofer2021large} with the R(2+1)D-18 backbone. During training, we use 32 frames with a sampling rate of 2, a spatial crop of 112x112 and augmentations such as random short-side scaling, random spatial cropping and horizontal flipping. We train for 20 epochs with a batch size of 32 and a learning rate of 0.1, using a multi-step scheduler with $\gamma = 0.1$ at epochs [10, 15]. During testing, following \cite{large-scale-feichtenhofer2021large}, we use a single clip at the center of the video with 8 frames and a sampling rate of 8. We report mean average precision (mAP) across the classes.
\section{Correlations of Downstream Performance}
\label{correlattion_plots}
As observed from the results in \cref{sec:factor_1}, performance on both UCF-101 finetuning and Kinetics-400 linear evaluation is not indicative of how well a self-supervised video model generalizes to different downstream domains, samples, actions and tasks. Here, we plot the performance of each pre-trained model for each downstream setting and show the correlation with UCF-101 finetuning and Kinetics-400 linear evaluation performance. The results are shown in \cref{fig:corr_on_ucf,fig:samples-corr_on_ucf,fig:actions-corr_on_ucf,fig:tasks-corr_on_ucf,fig:domains-corr_on_k400,fig:samples-corr_on_k400,fig:actions-corr_on_k400,fig:tasks-corr_on_k400}. These plots further demonstrate that the correlations are overall low for each downstream factor, \textit{i}.\textit{e}. domain, samples, actions and tasks, indicating that more thorough testing of video self-supervised methods is needed.
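As one example of how such a correlation can be quantified, the sketch below computes a Spearman rank correlation over per-method scores, here using the UCF-101 and SS-v2 finetuning columns of \cref{domain_shift}; this simple form ignores ties and is illustrative rather than the exact measure behind the plots.
\begin{verbatim}
def spearman_rho(xs, ys):
    # rank correlation between two score lists (ties ignored)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# per-method finetuning accuracies (MoCo ... GDT) from the table
ucf  = [83.5, 84.9, 85.8, 86.6, 88.5, 89.3, 89.8, 90.8, 91.1]
ssv2 = [57.0, 56.4, 58.8, 57.0, 59.4, 53.8, 60.2, 60.0, 57.8]
rho = spearman_rho(ucf, ssv2)
\end{verbatim}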
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_domain_shift_on_UCF101_v2.pdf}
\caption{\textbf{Downstream domain against UCF-101 finetuning.} We plot the correlations between the finetuning performance of video pre-training methods on UCF-101 and their finetuning and linear evaluation performances on all downstream datasets.}
\label{fig:corr_on_ucf}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_samples_shift_on_UCF101_v1.pdf}
\caption{\textbf{Downstream samples against UCF-101 finetuning.} For the low data setting (1000-2000 samples), we plot the correlations of the performance of video pre-training methods against that for UCF-101 finetuning.}
\label{fig:samples-corr_on_ucf}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_action_granularity_on_UCF101-finetune_v2.pdf}
\caption{\textbf{Downstream actions against UCF-101 finetuning.} We plot the correlations of the performances of video pre-training methods between UCF-101 finetuning and the FineGym subsets.}
\label{fig:actions-corr_on_ucf}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_task_shift_on_UCF101-finetune_v2.pdf}
\caption{\textbf{Downstream tasks against UCF-101 finetuning.} We plot the correlations between performance on UCF-101 finetuning and the other downstream tasks for the video pre-training methods.}
\label{fig:tasks-corr_on_ucf}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_domain_shift_on_K400_v2.pdf}
\caption{\textbf{Downstream domain against Kinetics-400 linear evaluation.} We plot the correlations between the Kinetics-400 linear evaluation performance of video pre-training methods and their finetuning and linear evaluation performances on all downstream datasets.}
\label{fig:domains-corr_on_k400}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_samples_shift_on_k400_v1.pdf}
\caption{\textbf{Downstream samples against Kinetics-400 linear evaluation.} For the low data setting (1000-2000 samples), we plot the correlations of the performance of video pre-training methods against that for Kinetics-400 linear evaluation.}
\label{fig:samples-corr_on_k400}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_action_granularity_on_K400-linear_v2.pdf}
\caption{\textbf{Downstream actions against Kinetics-400 linear evaluation.} We plot the correlations of the performances of video pre-training methods between Kinetics-400 linear evaluation and the FineGym subsets.}
\label{fig:actions-corr_on_k400}
\end{figure}
\begin{figure}[htb!]
\captionsetup{font=footnotesize,skip=1mm}
\centering
\includegraphics[width=\linewidth]{media/correlations_task_shift_on_K400-linear_v2.pdf}
\caption{\textbf{Downstream tasks against Kinetics-400 linear evaluation.} We plot the correlations between performance on Kinetics-400 linear evaluation and the other downstream tasks for the video pre-training methods.}
\label{fig:tasks-corr_on_k400}
\end{figure}
\clearpage
\section{Representation Similarity Matrices}
\label{similarity_features}
We plot the feature similarity on the Kinetics-400 validation set, computed using centered kernel alignment~\cite{cka}, between supervised pre-training and our evaluated self-supervised pre-training methods in \cref{fig:cka_all}. We showed a subset of these plots in \cref{features}; here we show the feature similarity for all the self-supervised models used in our experiments.
\begin{figure}
\captionsetup{font=footnotesize,skip=1mm}
\includegraphics[width=\linewidth]{media/cka_all_v2.pdf}
\caption{\textbf{Representation similarity} between features of self-supervised methods and supervised pre-training on the Kinetics-400 validation set using centered kernel alignment. Features of contrastive methods are closer to the features of supervised pre-training.
}
\label{fig:cka_all}
\end{figure}
\section{Downstream Dataset Attributes}
\label{sec:video-datasets}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c}
\includegraphics[width=0.75\linewidth]{media/radar_together_detailed_v1.pdf}
\end{tabular}
\caption{\small \textbf{Radar plots with details}. The radar plots contain the values along the axes for every attribute of the datasets we use in this study.}
\label{fig:radar-detailed}
\end{figure}
We define several attributes in \cref{subsec:domain-shift} in order to characterize the differences in domain between the downstream datasets and the Kinetics-400 pre-training dataset in \cref{fig:radar}. We provide detailed radar plots in \cref{fig:radar-detailed}, with axes labeled with the relevant values for each attribute. The attributes \textit{Point-of-view} and \textit{Environment} are defined qualitatively based on the contents of the target dataset. Examples of videos from each of the datasets are shown in \cref{domain_frames_appendix}. We can see that FineGym \cite{Gym-99-arxiv} consists of videos of Olympic gymnastic events; thus, we label it as \textit{stadium} for environment and \textit{third-person} for point-of-view. On the radar plots, we order environment in descending order of the variability contained in a given dataset. Kinetics-400 is placed near the origin as it has much higher variability than, for example, NTU-60, which is captured in a controlled lab setting. \textit{Action length} is the average duration of the actions in each of the datasets. We quantify \textit{temporal awareness} as the minimum number of frames (temporal context) required to best recognize the action. We compute this by finetuning R(2+1)D with weights initialized from supervised pre-training on Kinetics-400 and denote temporal awareness ($\tau$) as:
\begin{equation}
\tau = \arg\min_{t \in\{1, 2, ..., N\}}\left[ \left(100 \times \frac{f_{t+1} - f_{t}}{f_{t}}\right) < \alpha \right]
\end{equation}
where $f_t$ is the recognition performance with a temporal context of $t$ frames and $\alpha$ is chosen to be $1$. This means $\tau$ indicates the number of frames after which the relative improvement in performance is less than $\alpha$, \textit{i}.\textit{e}. when the performance has plateaued. \cref{fig:action_temporality-intradataset} shows the top-1 action recognition performance against an increasing number of frames for each of our downstream datasets. We use bilinear interpolation to estimate performance at a given number of frames beyond those we experimented with. For example, using this method to compute temporal awareness, the performance for UCF-101 plateaus at 7 frames while that for EK-100 plateaus at 32 frames, indicating that EK-100 needs a much larger temporal context for recognition while UCF-101 may suffice with a shorter one.
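A minimal sketch of this computation, assuming top-1 accuracies are available (or interpolated) at consecutive temporal context lengths $t = 1, \dots, N$:
\begin{verbatim}
def temporal_awareness(acc, alpha=1.0):
    # acc[t-1]: top-1 accuracy with a temporal context of t frames;
    # return the first t whose relative improvement over the
    # previous context is below alpha percent.
    for t in range(len(acc) - 1):
        if 100.0 * (acc[t + 1] - acc[t]) / acc[t] < alpha:
            return t + 1
    return len(acc)

# illustrative accuracies at contexts 1..5
tau = temporal_awareness([60.2, 70.5, 74.8, 75.1, 75.2])  # -> 3
\end{verbatim}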
\textit{Label overlap} is the number of actions present in both the downstream dataset and the pre-training dataset (Kinetics-400). We quantify this by matching identical actions as well as manually checking for reworded versions of the same action class. For example, ``head massage'' in UCF-101 has the corresponding action ``massaging person's head'' in Kinetics-400, and the NTU-60 action class ``brushing teeth'' has the matching action ``brushing teeth'' in Kinetics-400.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{media/domains.pdf}
\caption{Example video frames from the Kinetics-400 pre-training dataset and the 7 different downstream datasets we consider. Note the differences in the capture setting and point-of-view across these datasets.}
\label{domain_frames_appendix}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c}
\includegraphics[width=0.8\linewidth]{media/action_temporality_v7.pdf}
\end{tabular}
\caption{\small \textbf{Temporal awareness}. Illustrating the effect of temporal awareness (increasing temporal context) on action recognition performance using a standard 3D-CNN for different action datasets. }
\label{fig:action_temporality-intradataset}
\end{figure}
\begin{figure}[t!]
\captionsetup{font=small,skip=2mm}
\centering
\includegraphics[width=\linewidth]{media/dataset_size_linear_1x4_v1.pdf}
\caption{\textbf{Linear evaluation for Downstream Samples.} Comparison of video self-supervised learning methods using a varying number of samples for linear evaluation on four downstream datasets. Rank changes are less significant with increasing sample size. }
\label{fig:training-data-size-linear}
\end{figure}
\clearpage
\section{Linear Evaluation for Downstream Samples}
\label{sec:lin_eval_samples}
In \cref{sec:factor_2} we evaluated our pre-trained models with varying amounts of downstream samples for finetuning. In this section we provide the results of the same experiment using linear evaluation instead of finetuning. The results are shown in \cref{fig:training-data-size-linear}. We observe that rank changes across sample sizes are less significant than with full finetuning. However, similar to finetuning, supervised pre-training is dominant in the low data setting, as shown by the performance on NTU-60 and Gym-99 with 1000-4000 examples.
\section{Verb vs. Noun in Downstream Action Recognition}
\label{sec:epic_nouns}
EPIC-Kitchens-100 \cite{EPIC-100-arxiv} contains noun and verb annotations for each video; an action is defined as a tuple of these. In the main paper, we report verb recognition performance across all experiments. In \cref{additional-expts-epic} we compare the performance on verb recognition to the performance on noun and action recognition. In general, performance is lower for noun and action recognition than for verb recognition. This is likely due to the R(2+1)D-18 backbone being insufficient to model the complex actions found in EPIC-Kitchens-100. Interestingly, good performance on verb recognition is not a good indication that a model will perform well at noun or action recognition. Notably, some methods such as VideoMoCo and CtP perform well at verb recognition but struggle on noun recognition. RSPNet performs reasonably well for both verb and noun recognition.
\begin{table}[t]
\centering
\midsepremove
\captionsetup{font=small,skip=2mm}
\caption[]{\textbf{Ablation on Verb and Noun Recognition.} On EPIC-Kitchens-100, we show results for verb, noun and action recognition. Colors denote relative rankings across methods for each task, ranging from \textcolor{lowcolor}{low}
\begin{tikzpicture}%
\pgfplotscolorbardrawstandalone[%
colormap name=PiYG,%
colorbar horizontal,%
colorbar style={%
height=0.18cm,%
width=2cm,%
hide axis,%
}%
]%
\end{tikzpicture}
\textcolor{highcolor}{high}. Most pre-training methods struggle on noun and action recognition, with little correlation to verb recognition.
} \setlength{\tabcolsep}{3mm} \resizebox{0.4\textwidth}{!}{% \begin{tabular}{l\C{25.7}{47.7}\C{6.9}{24.5}\C{1.8}{16.0} \toprule \addlinespace[0.07cm] & \multicolumn{3}{c}{\textbf{EPIC-Kitchens-100}} \\ \addlinespace[0.04cm] \cmidrule{2-4} \addlinespace[0.04cm] \addlinespace[0.04cm] \addlinespace[0.02cm] \multicolumn{1}{l}{\textbf{Pre-training}} & \multicolumn{1}{c}{Verb} & \multicolumn{1}{c}{Noun} & \multicolumn{1}{c}{Action} \\%&& \multicolumn{1}{c}{Verb} & \multicolumn{1}{c}{Noun} & \multicolumn{1}{c}{Action} \\ \addlinespace[0.02cm] \midrule \addlinespace[0.01cm] None & 25.7 & 6.9 & 1.8 \\%&& 0.5 & 0.08 & 0.02 \\ \addlinespace[0.01cm]\midrule \addlinespace[0.01cm] MoCo & 26.4 & 13.9 & 6.9 \\%&& 1.1 & 0.55 & 0.18 \\ SeLaVi & 33.8 & 12.1 & 5.9 \\%&& 1.3 & 0.46 & 0.15 \\ VideoMoCo & 43.6 & 15.1 & 9.4 \\%&& 1.5 & 0.52 & 0.15 \\ Pretext-contrast & 34.3 & 11.4 & 5.6 \\%&& 1.1 & 0.44 & 0.14 \\ RSPNet & 42.7 & 18.7 & 11.7 \\%&& 1.7 & 0.89 & 0.38 \\ AVID-CMA & 29.9 & 8.7 & 3.6 \\%&& 0.7 & 0.20 & 0.05 \\ CtP & 42.8 & 12.0 & 7.8 \\%&& 1.3 & 0.29 & 0.09 \\ TCLR & 36.2 & 11.7 & 5.8 \\%&& 1.1 & 0.36 & 0.10 \\ GDT & 37.3 & 15.5 & 8.4 \\%&& 0.4 & 0.03 & 0.00 \\ \addlinespace[0.01cm]\midrule \addlinespace[0.01cm] Supervised & 47.7 & 24.5 & 16.0 \\%&& 2.3 & 1.82 & 0.69 \\ \addlinespace[0.01cm] \bottomrule \end{tabular}% } \label{additional-expts-epic} \end{table} \section{Random Section} \subsection{Double blind review} \label{sec:blind} ECCV reviewing is double blind, in that authors do not know the names of the area chair/reviewers of their papers, and the area chairs/reviewers cannot, beyond reasonable doubt, infer the names of the authors from the submission and the additional material. Avoid providing links to websites that identify the authors. Violation of any of these guidelines may lead to rejection without review. If you need to cite a different paper of yours that is being submitted concurrently to ECCV, the authors should (1) cite these papers, (2) argue in the body of your paper why your ECCV paper is non trivially different from these concurrent submissions, and (3) include anonymized versions of those papers in the supplemental material. Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work. In fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for technical reports). Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith, it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an excellent paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith [1], and show it to be inferior to all previously known methods. 
Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L. and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors14} as additional material and cite it as \begin{quote} 1. Authors. ``The frobnicatable foo filter'', BMVC 2014 Submission ID 324, Supplied as additional material {\tt bmvc14.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors14b}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the ECCV audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. \\ For sake of anonymity, it's recommended to omit acknowledgements in your review copy. They can be added later when you prepare the final copy. \section{Manuscript Preparation} This is an edited version of Springer LNCS instructions adapted for ECCV 2020 first paper submission. You are strongly encouraged to use \LaTeX2$_\varepsilon$ for the preparation of your camera-ready manuscript together with the corresponding Springer class file \verb+llncs.cls+. We would like to stress that the class/style files and the template should not be manipulated and that the guidelines regarding font sizes and format should be adhered to. 
This is to ensure that the end product is as homogeneous as possible. \subsection{Printing Area} The printing area is $122 \; \mbox{mm} \times 193 \; \mbox{mm}$. The text should be justified to occupy the full line width, so that the right margin is not ragged, with words hyphenated as appropriate. Please fill pages so that the length of the text is no less than 180~mm. \subsection{Layout, Typeface, Font Sizes, and Numbering} Use 10-point type for the name(s) of the author(s) and 9-point type for the address(es) and the abstract. For the main text, please use 10-point type and single-line spacing. We recommend using Computer Modern Roman (CM) fonts, which is the default font in this template. Italic type may be used to emphasize words in running text. Bold type and underlining should be avoided. With these sizes, the interline distance should be set so that some 45 lines occur on a full-text page. \subsubsection{Headings.} Headings should be capitalized (i.e., nouns, verbs, and all other words except articles, prepositions, and conjunctions should be set with an initial capital) and should, with the exception of the title, be aligned to the left. Words joined by a hyphen are subject to a special rule. If the first word can stand alone, the second word should be capitalized. The font sizes are given in Table~\ref{table:headings}. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Font sizes of headings. Table captions should always be positioned {\it above} the tables. The final sentence of a table caption should end without a full stop} \label{table:headings} \begin{tabular}{lll} \hline\noalign{\smallskip} Heading level & Example & Font size and style\\ \noalign{\smallskip} \hline \noalign{\smallskip} Title (centered) & {\Large \bf Lecture Notes \dots} & 14 point, bold\\ 1st-level heading & {\large \bf 1 Introduction} & 12 point, bold\\ 2nd-level heading & {\bf 2.1 Printing Area} & 10 point, bold\\ 3rd-level heading & {\bf Headings.} Text follows \dots & 10 point, bold \\ 4th-level heading & {\it Remark.} Text follows \dots & 10 point, italic\\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} Here are some examples of headings: ``Criteria to Disprove Context-Freeness of Collage Languages'', ``On Correcting the Intrusion of Tracing Non-deterministic Programs by Software'', ``A User-Friendly and Extendable Data Distribution System'', ``Multi-flip Networks: Parallelizing GenSAT'', ``Self-determinations of Man''. \subsubsection{Lemmas, Propositions, and Theorems.} The numbers accorded to lemmas, propositions, and theorems etc. should appear in consecutive order, starting with the number 1, and not, for example, with the number 11. \subsection{Figures and Photographs} \label{sect:figures} Please produce your figures electronically and integrate them into your text file. For \LaTeX\ users we recommend using package \verb+graphicx+ or the style files \verb+psfig+ or \verb+epsf+. Check that in line drawings, lines are not interrupted and have constant width. Grids and details within the figures must be clearly readable and may not be written one on top of the other. Line drawings should have a resolution of at least 800 dpi (preferably 1200 dpi). For digital halftones 300 dpi is usually sufficient. The lettering in figures should have a height of 2~mm (10-point type). Figures should be scaled up or down accordingly. Please do not use any absolute coordinates in figures. 
Figures should be numbered and should have a caption which should always be positioned {\it under} the figures, in contrast to the caption belonging to a table, which should always appear {\it above} the table. Please center the captions between the margins and set them in 9-point type (Fig.~\ref{fig:example} shows an example). The distance between text and figure should be about 8~mm, the distance between figure and caption about 5~mm. \begin{figure} \centering \includegraphics[height=6.5cm]{eijkel2} \caption{One kernel at $x_s$ ({\it dotted kernel}) or two kernels at $x_i$ and $x_j$ ({\it left and right}) lead to the same summed estimate at $x_s$. This shows a figure consisting of different types of lines. Elements of the figure described in the caption should be set in italics, in parentheses, as shown in this sample caption. The last sentence of a figure caption should generally end without a full stop} \label{fig:example} \end{figure} If possible (e.g. if you use \LaTeX) please define figures as floating objects. \LaTeX\ users, please avoid using the location parameter ``h'' for ``here''. If you have to insert a pagebreak before a figure, please ensure that the previous page is completely filled. \subsection{Formulas} Displayed equations or formulas are centered and set on a separate line (with an extra line or halfline space above and below). Displayed expressions should be numbered for reference. The numbers should be consecutive within the contribution, with numbers enclosed in parentheses and set on the right margin. For example, \begin{equation} \psi (u) = \int_{0}^{T} \left[\frac{1}{2} \left(\Lambda_{0}^{-1} u,u\right) + N^{\ast} (-u)\right] dt \; . \end{equation} Please punctuate a displayed equation in the same way as ordinary text but with a small space before the end punctuation. \subsection{Footnotes} The superscript numeral used to refer to a footnote appears in the text either directly after the word to be discussed or, in relation to a phrase or a sentence, following the punctuation sign (comma, semicolon, or full stop). Footnotes should appear at the bottom of the normal text area, with a line of about 2~cm in \TeX\ and about 5~cm in Word set immediately above them.\footnote{The footnote numeral is set flush left and the text follows with the usual word spacing. Second and subsequent lines are indented. Footnotes should end with a full stop.} \subsection{Program Code} Program listings or program commands in the text are normally set in typewriter font, e.g., CMTT10 or Courier. \noindent {\it Example of a Computer Program}
\begin{verbatim}
program Inflation (Output);
  {Assuming annual inflation rates of 7%, 8%, and 10% over 10 years}
  const MaxYears = 10;
  var Year: 0..MaxYears;
      Factor1, Factor2, Factor3: Real;
  begin
    Year := 0;
    Factor1 := 1.0; Factor2 := 1.0; Factor3 := 1.0;
    WriteLn('Year  Factor1  Factor2  Factor3');
    repeat
      Year := Year + 1;
      Factor1 := Factor1 * 1.07;
      Factor2 := Factor2 * 1.08;
      Factor3 := Factor3 * 1.10;
      WriteLn(Year:5, Factor1:7:3, Factor2:7:3, Factor3:7:3)
    until Year = MaxYears
  end.
\end{verbatim}
\noindent {\small (Example from Jensen K., Wirth N. (1991) Pascal user manual and report. Springer, New York)} \subsection{Citations} The list of references is headed ``References'' and is not assigned a number in the decimal system of headings. The list should be set in small print and placed at the end of your contribution, in front of the appendix, if one exists. Please do not insert a pagebreak before the list of references if the page is not completely filled.
An example is given at the end of this information sheet. For citations in the text please use square brackets and consecutive numbers: \cite{Alpher02}, \cite{Alpher03}, \cite{Alpher04} \dots \section{Submitting a Camera-Ready for an Accepted Paper} \subsection{Converting Initial Submission to Camera-Ready} To convert a submission file into a camera-ready for an accepted paper: \begin{enumerate} \item First comment out \begin{verbatim} \usepackage{ruler} \end{verbatim} and the line that follows it. \item The anonymous title part should be removed or commented out, and a proper author block should be inserted, for which a skeleton is provided in a commented-out version. These are marked in the source file as \begin{verbatim} \end{verbatim} and \begin{verbatim} \end{verbatim} \item Please write out author names in full in the paper, i.e. full given and family names. If any authors have names that can be parsed into FirstName LastName in multiple ways, please include the correct parsing in a comment to the editors, below the \begin{verbatim}\author{}\end{verbatim} field. \item Make sure you have inserted the proper Acknowledgments. \end{enumerate} \subsection{Preparing the Submission Package} We need all the source files (LaTeX files, style files, special fonts, figures, bib-files) that are required to compile papers, as well as the camera ready PDF. For each paper, one ZIP-file called XXXX.ZIP (where XXXX is the zero-padded, four-digit paper ID) has to be prepared and submitted via the ECCV 2020 Submission Website, using the password you received with your initial registration on that site. The size of the ZIP-file may not exceed the limit of 60 MByte. The ZIP-file has to contain the following: \begin{enumerate} \item All source files, e.g. LaTeX2e files for the text, PS/EPS or PDF/JPG files for all figures. \item PDF file named ``XXXX.pdf'' that has been produced by the submitted source, where XXXX is the four-digit paper ID (zero-padded if necessary). For example, if your paper ID is 24, the filename must be 0024.pdf. This PDF will be used as a reference and has to exactly match the output of the compilation. \item PDF file named ``XXXX-copyright.PDF'': a scanned version of the signed copyright form (see ECCV 2020 Website, Camera Ready Guidelines for the correct form to use). \item If you wish to provide supplementary material, the file name must be in the form XXXX-supp.pdf or XXXX-supp.zip, where XXXX is the zero-padded, four-digit paper ID as used in the previous step. Upload your supplemental file on the ``File Upload'' page as a single PDF or ZIP file of 100 MB in size or less. Only PDF and ZIP files are allowed for supplementary material. You can put anything in this file -- movies, code, additional results, accompanying technical reports -- anything that may make your paper more useful to readers. If your supplementary material includes video or image data, you are advised to use common codecs and file formats. This will make the material viewable by the largest number of readers (a desirable outcome). ECCV encourages authors to submit videos using an MP4 codec such as DivX contained in an AVI. Also, please submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded. Authors should refer to the contents of the supplementary material appropriately in the paper.
\end{enumerate} Check that the upload of your file (or files) was successful either by matching the file length to that on your computer, or by using the download options that will appear after you have uploaded. Please ensure that you upload the correct camera-ready PDF -- renamed to XXXX.pdf as described in the previous step -- as your camera-ready submission. Every year there is at least one author who accidentally submits the wrong PDF as their camera-ready submission. Further considerations for preparing the camera-ready package: \begin{enumerate} \item Make sure to include any further style files and fonts you may have used. \item References are to be supplied as BBL files to avoid omission of data while converting from BIB to BBL. \item Please do not send any older versions of papers. There should be one set of source files and one XXXX.pdf file per paper. Our typesetters require the author-created pdfs in order to check the proper representation of symbols, figures, etc. \item Please remove unnecessary files (such as eijkel2.pdf and eijkel2.eps) from the source folder. \item You may use sub-directories. \item Make sure to use relative paths for referencing files. \item Make sure the source you submit compiles. \end{enumerate} Springer is the first publisher to implement the ORCID identifier for proceedings, ultimately providing authors with a digital identifier that distinguishes them from every other researcher. ORCID (Open Researcher and Contributor ID) hosts a registry of unique researcher identifiers and a transparent method of linking research activities to these identifiers. This is achieved through embedding ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications and patent applications. \subsection{Most Frequently Encountered Issues} Please kindly use the checklist below to deal with some of the most frequently encountered issues in ECCV submissions. {\bf FILES:} \begin{itemize} \item My submission package contains ONE compiled pdf file for the camera-ready version to go on Springerlink. \item I have ensured that the submission package has all the additional files necessary for compiling the pdf on a standard LaTeX distribution. \item I have used the correct copyright form (with editor names pre-printed), and a signed pdf is included in the zip file with the correct file name. \end{itemize} {\bf CONTENT:} \begin{itemize} \item I have not used \verb|\thanks| or \verb|\footnote| commands and symbols for corresponding authors in the title (which is processed with scripts), and have (optionally) used an Acknowledgements section for all the acknowledgments, at the end of the paper. \item I have read the Springer author guidelines, and complied with them, including the point on providing full information on editors and publishers for each reference in the paper (Author Guidelines -- Section 2.8). \item I have used the same name spelling in all my papers accepted to ECCV and ECCV Workshops. \item I have inserted the ORCID identifiers of the authors in the paper header (see http://bit.ly/2H5xBpN for more information). \item I have not decreased the font size of any part of the paper (except tables) to fit into 14 pages, and I understand Springer editors will remove such commands. \end{itemize} {\bf SUBMISSION:} \begin{itemize} \item All author names, titles, and contact author information are correctly entered in the submission site. \item The corresponding author e-mail is given.
\item At least one author has registered by the camera ready deadline. \end{itemize} \section{Conclusions} The paper ends with a conclusion. \clearpage\mbox{}Page \thepage\ of the manuscript. \clearpage\mbox{}Page \thepage\ of the manuscript. This is the last page of the manuscript. \par\vfill\par Now we have reached the maximum size of the ECCV 2020 submission (excluding references). References should start immediately after the main text, but can continue on p.15 if needed.
{ "attr-fineweb-edu": 1.583008, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbXzxK7Dgs_XS-NYm
\section{Noncommutative spectral geometry and the standard model} \label{NCSG} One may assume that near the Planck energy scale, the geometry of space-time ceases to have the simple continuous form we are familiar with. At high enough energy scales, quantum gravity effects turn on and they alter space-time. One can thus assume that at high energy scales, space-time becomes discrete and the coordinates no longer commute. Such an approach could {\sl a priori} be tested by its phenomenological and cosmological consequences. Combining noncommutative geometry~\cite{ncg-book1,ncg-book2} with the spectral action principle led to Noncommutative Spectral Geometry (NCSG), used by Connes and collaborators~\cite{ccm} in an attempt to provide a purely geometric explanation for the Standard Model (SM) of electroweak and strong interactions. In their approach, the SM is considered as a phenomenological model, which dictates the geometry of space-time so that the Maxwell-Dirac action functional leads to the SM action. The model is constructed to hold at high energy scales, namely at the unification scale; to get its low-energy consequences, which can then be tested against current data, one uses standard renormalization techniques. Since the model lives at high energy scales, it can be used to investigate early universe cosmology~\cite{Nelson:2008uy}-\cite{Sakellariadou:2011dk}~\footnote{See the contribution of M.\ Sakellariadou in the same conference.}. The purpose of this contribution is twofold: firstly, to investigate the physical meaning of the choice of the {\sl almost} commutative geometry and its relation to quantization~\cite{PRD}, and secondly to explore the relation of NCSG with the gauge structure of the theory and with dissipation~\cite{PRD}. We will show that Connes' construction is intimately related to the deformed Hopf algebra characterizing quantum field theory (QFT)~\cite{BlasoneJizbaVitiello:2011}, and that therefore the seeds of quantization are built into the NCSG construction. We start by summarizing the main ingredients of NCSG, which is built on a two-sheeted space, made from the product of a four-dimensional smooth compact Riemannian manifold ${\cal M}$ with a fixed spin structure and a discrete noncommutative space ${\cal F}$ composed of only two points. Thus, geometry is specified by the product of a continuous manifold for space-time and an internal geometry for the SM. The noncommutative nature of the discrete space ${\cal F}$ is encoded in a spectral triple $({\cal A, H}, D)$. The algebra ${\cal A}=C^\infty({\cal M})$ of smooth functions on ${\cal M}$ is an involutive algebra of operators on the Hilbert space ${\cal H}$ of Euclidean fermions; it acts on ${\cal H}$ by multiplication operators. The operator $D$ is the Dirac operator ${\partial\hspace{-5pt}\slash}_{\cal M}=\sqrt{-1}\gamma^\mu\nabla_\mu^s$ on the spin Riemannian manifold ${\cal M}$; $D$ is a self-adjoint unbounded operator in ${\cal H}$. The space ${\cal H}$ is the Hilbert space $L^2({\cal M},S)$ of square integrable spinors $S$ on ${\cal M}$. Thus one obtains a model of pure gravity with an action that depends on the spectrum of the Dirac operator, which provides the (familiar) notion of a metric. The most important ingredient in this NCSG model is the choice of the algebra constructed within the geometry of the two-sheeted space ${\cal M}\times {\cal F}$; it captures all information about space.
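Although no such computation appears in the construction above, the way a spectral triple encodes metric information can be illustrated with the standard two-point toy example: take ${\cal A}=\mathds{C}\oplus\mathds{C}$ acting diagonally on ${\cal H}=\mathds{C}^2$ and an off-diagonal Dirac operator with entry $m$, for which Connes' distance formula, $d(1,2)=\sup\{|a_1-a_2| : \|[D,a]\|\leq 1\}$, gives $d=1/|m|$. The brute-force numerical sketch below is our own illustration (the value of $m$ is arbitrary), not part of the construction discussed in the text.
\begin{verbatim}
# Illustrative sketch: Connes' spectral distance on a two-point space.
# A = C (+) C acts diagonally on H = C^2; D = [[0, m], [m, 0]].
import numpy as np

m = 0.5
D = np.array([[0.0, m], [m, 0.0]])

best = 0.0
for a1 in np.linspace(-5, 5, 201):
    for a2 in np.linspace(-5, 5, 201):
        a = np.diag([a1, a2])
        if np.linalg.norm(D @ a - a @ D, 2) <= 1.0:  # ||[D, a]|| <= 1
            best = max(best, abs(a1 - a2))

print(best, 1.0 / m)  # both ~ 2.0: the two sheets sit at distance 1/|m|
\end{verbatim}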
The product geometry is specified by: \beq \lab{1} {\cal A}={\cal A}_1\otimes {\cal A}_2~,~~~~{\cal H}={\cal H}_1\otimes {\cal H}_2~, \eeq as a consequence of the noncommutative nature of the discrete space ${\cal F}$ composed of only two points. In other words, Eq.~(\ref{1}) expresses, at the algebraic level and at the level of the space of states, the two-sheeted nature of the geometry of the space ${\cal M}\times {\cal F}$. Assuming ${\cal A}$ is symplectic-unitary, it can be written as~\cite{Chamseddine:2007ia} \begin{equation} \mathcal{A}=M_{a}(\mathds{H})\oplus M_{k}(\mathds{C})~, \end{equation} with $k=2a$ and $\mathds{H}$ being the algebra of quaternions, which encodes the noncommutativity of the manifold. The first possible value for the even number $k$ is 2, corresponding to a Hilbert space of four fermions, but this choice is ruled out by the existence of quarks. The next possible value is $k=4$, leading to the correct number of $k^2=16$ fermions in each of the three generations. Thus, we will consider the minimal choice that can accommodate the physics of the SM; certainly, other choices leading to larger algebras, which could accommodate particles beyond the SM sector, would be possible. The second basic ingredient of the NCSG is the spectral action principle stating that, within the context of the product ${\cal M}\times {\cal F}$, the bare bosonic Euclidean action is given by the trace of the heat kernel associated with the square of the noncommutative Dirac operator and is simply \be {\rm Tr}(f(D/\Lambda))~, \ee where $f$ is a cut-off function and $\Lambda$ fixes the energy scale. This action can be seen {\sl \`a la} Wilson as the bare action at the mass scale $\Lambda$. The fermionic term can be included in the action functional by adding $(1/2)\langle J\psi,D\psi\rangle$, where $J$ is the real structure on the spectral triple and $\psi$ is a spinor in the Hilbert space ${\cal H}$ of the quarks and leptons. Since we are considering a four-dimensional Riemannian geometry, the trace ${\rm Tr}(f(D/\Lambda))$ can be expressed perturbatively as~\cite{sdw-coeff}-\cite{nonpert} \be\label{asymp-exp} {\rm Tr}(f(D/\Lambda))\sim 2\Lambda^4f_4a_0+2\Lambda^2f_2a_2+f_0a_4+\cdots +\Lambda^{-2k}f_{-2k}a_{4+2k}+\cdots~, \ee in terms of the geometrical Seeley-DeWitt coefficients $a_n$, known for any second order elliptic differential operator. It is important to note that the smooth even test function $f$, which decays fast at infinity, appears through its momenta $f_k$: \beq \nonumber f_0 &\equiv& f(0)~,\\ \nonumber f_k &\equiv&\int_0^\infty f(u) u^{k-1}{\rm d}u\ \ ,\ \ \mbox{for}\ \ k>0 ~,\nonumber\\ f_{-2k}&=&(-1)^k\frac{k!}{(2k)!} f^{(2k)}(0)~. \nonumber \eeq Since the Taylor expansion of the cut-off function vanishes at zero, the asymptotic expansion of Eq.~(\ref{asymp-exp}) reduces to \be \label{asympt} {\rm Tr}(f(D/\Lambda))\sim 2\Lambda^4f_4a_0+2\Lambda^2f_2a_2+f_0a_4~. \ee Hence, the cut-off function $f$ plays a r\^ole only through its three momenta $f_0, f_2, f_4$, which are three real parameters, related to the coupling constants at unification, the gravitational constant, and the cosmological constant, respectively. More precisely, the first term in Eq.~(\ref{asympt}), which is in $\Lambda^4$, gives a cosmological term, the second one, which is in $\Lambda^2$, gives the Einstein-Hilbert action functional, and the third one, which is $\Lambda$-independent, yields the Yang-Mills action for the gauge fields corresponding to the internal degrees of freedom of the metric.
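As a concrete numerical illustration of these definitions (our own; the Gaussian cut-off below is merely an example choice, not one advocated in the text), the three momenta entering the truncated expansion can be evaluated as follows.
\begin{verbatim}
# Illustrative sketch: the three momenta of an example cut-off function
# entering Tr f(D/Lambda) ~ 2 Lambda^4 f_4 a_0 + 2 Lambda^2 f_2 a_2 + f_0 a_4.
import numpy as np
from scipy.integrate import quad

f = lambda u: np.exp(-u**2)   # example Gaussian cut-off

f0 = f(0.0)                                     # f_0 = f(0)
f2 = quad(lambda u: f(u) * u, 0, np.inf)[0]     # f_2 = int_0^inf f(u) u du
f4 = quad(lambda u: f(u) * u**3, 0, np.inf)[0]  # f_4 = int_0^inf f(u) u^3 du

print(f0, f2, f4)  # 1.0, 0.5, 0.5 for this choice
\end{verbatim}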
The NCSG offers a purely geometric approach to the SM of particle physics, where the fermions provide the Hilbert space of a spectral triple for the algebra and the bosons are obtained through inner fluctuations of the Dirac operator of the product ${\cal M}\times {\cal F}$ geometry. The computation of the asymptotic expression for the spectral action functional results in the full Lagrangian for the Standard Model minimally coupled to gravity, with neutrino mixing and Majorana mass terms. Supersymmetric extensions have also been considered. In this report we closely follow the presentation of Ref.~\cite{PRD} (see also~\cite{Vienna2012}). The relation between the algebra doubling and the deformed Hopf algebra structure of QFT is discussed in Section 2; dissipation, the gauge structure and quantization are discussed in Section 3. Section 4 is devoted to conclusions, where we also comment on the dissipative interference phase arising in the noncommutative plane in the presence of algebra doubling. \section{Noncommutative spectral geometry and quantum field theory} \label{Sec2} Our first observation is that the doubling of the algebra, ${\cal A} \, \rightarrow \,{\cal A}_1\otimes {\cal A}_2$, acting on the ``doubled'' space ${\cal H}={\cal H}_1\otimes {\cal H}_2$, which expresses the two-sheeted nature of the NCSG (cf. Eq.~(\ref{1})), is a key feature of quantum theories. As observed also by Alain Connes in Ref.~\cite{ncg-book1}, already in the early years of quantum mechanics (QM), in establishing ``matrix mechanics'' Heisenberg showed that noncommutative algebras governing physical quantities are at the origin of spectroscopic experiments and are linked to the discretization of the energy of the atomic levels and of angular momentum. One can convince oneself that this is the case by observing that in the density matrix formalism of QM the coordinate $x(t)$ of a quantum particle is split into two coordinates $x_+(t)$ (going forward in time) and $x_-(t)$ (going backward in time). The forward-in-time motion and the backward-in-time motion of the density matrix $W(x_{+},x_{-},t)\equiv \langle x_{+}|\rho (t)|x_{-}\rangle = \psi^* (x_{-},t)\psi (x_{+},t)$, where $x_{\pm}=x\pm y/2$, is indeed described by ``two copies'' of the Schr\"odinger equation, respectively: \be i\hbar {\partial \psi (x_{+},t) \over \partial t}=H_{+}\psi (x_{+},t), \qquad \qquad -i\hbar {\partial \psi^* (x_-,t) \over \partial t}=H_-\psi^* (x_-,t), \lab{(4a)} \ee which can be written as \be i\hbar {\partial \langle x_+|\rho (t)|x_-\rangle \over \partial t}= {\hat H}\ \langle x_+|\rho (t)|x_- \rangle, \lab{(5a)} \ee where ${\hat H}$ is given in terms of the two Hamiltonian operators $H_{\pm}$ as \be {\hat H}=H_+ -H_- ~. \lab{(5b)} \ee The introduction of a doubled set of coordinates, $(x_{\pm}, p_{\pm})$ (or $(x,p_{x})$ and $(y,p_{y})$), and the use of the two copies of the Hamiltonian $H_{\pm }$ operating on the outer product of two Hilbert spaces ${\cal H}_{+} \otimes {\cal H}_{-}$ thus show that the eigenvalues of ${\hat H}$ are directly the Bohr transition frequencies $h \nu_{nm}=E_n-E_m$, which are at the basis of the explanation of spectroscopic structure. We have observed elsewhere that the doubling of the algebra is implicit also in the theory of the Brownian motion of a quantum particle~\cite{Blasone:1998xt} (see also~\cite{PRD} and the references quoted therein), and the doubled degrees of freedom are known~\cite{BlasoneJizbaVitiello:2011} to account for quantum noise effects. The connection with the two-sheeted nature of the NCSG is thus evident.
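The statement that the eigenvalues of ${\hat H}$ are the Bohr transition frequencies is easy to check explicitly. The following minimal sketch (our own illustration; the three energy levels are made-up numbers) builds the doubled generator $H\otimes\mathds{1}-\mathds{1}\otimes H^{T}$, which implements $\rho \to H\rho - \rho H$ on vectorized density matrices, and compares its spectrum with the set of differences $E_n-E_m$.
\begin{verbatim}
# Illustrative check: the doubled Hamiltonian acting on density matrices
# has eigenvalues E_n - E_m (the Bohr transition frequencies).
import numpy as np

E = np.array([0.0, 1.3, 2.1])            # made-up energy levels
H = np.diag(E)
I = np.eye(len(E))

H_hat = np.kron(H, I) - np.kron(I, H.T)  # vectorized rho -> H rho - rho H

spectrum = np.sort(np.linalg.eigvals(H_hat).real)
bohr = np.sort([En - Em for En in E for Em in E])
print(np.allclose(spectrum, bohr))       # True
\end{verbatim}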
Moreover, it has been shown~\cite{PRD} that, as a consequence of the algebra doubling of Eq.~(\ref{1}), the NCSG construction has an intrinsic gauge structure and is a thermal dissipative field theory. As we will discuss below, this suggests that Connes' construction carries in itself the seeds of quantization, namely that it is more than just a classical construction. Let us start by discussing the simple case of the massless fermion and the $U(1)$ local gauge transformation group. We will see how in this case the doubling of the algebra is related to the gauge structure of the theory. Extension to the massive fermion case, the boson case and non-Abelian gauge transformation groups is possible~\cite{Celeghini:1992a,Celeghini:1993a}. The system Lagrangian is \be \hat L = L - {\tilde L} = - \overline{\psi} \gamma^{\mu}\partial_{\mu}\psi + \overline{\tilde {\psi}} \gamma^{\mu}\partial_{\mu} \tilde{\psi}. \label{(7)} \ee The fermion tilde-field $\tilde{\psi}(x)$, which satisfies the usual fermionic anticommutation relations and anticommutes with the field $\psi(x)$, is a ``copy'' (with the same spectrum and couplings) of the $\psi$-system. We thus ``double'' the field algebra by introducing such a tilde-field $\tilde{\psi}(x)$. For simplicity, no coupling term of the field ${\psi}(x)$ with ${\tilde {\psi}(x)}$ is assumed in $\hat L$. In the quantized theory, let $a_{\bf k}^{\dag}$ and $\tilde a_{\bf k}^{\dag}$ denote the creation operators associated to the quantum fields $\psi $ and $\tilde {\psi}$, respectively (all quantum number indices are suppressed except momentum). The vacuum $|0({\theta}) \rangle$ of the theory is: \be|0(\theta) \rangle =\prod_k \lf[\cos\theta_k + \sin\theta_k a_{\bf k}^{\dag} \tilde a_{\bf k}^{\dag}\ri] |0 \rangle , \label{(g9)} \ee namely a condensate of pairs of $a_{\bf k}^{\dag}$ and $\tilde a_{\bf k}^{\dag}$ modes. Here $|0 \rangle$ denotes the vacuum $|0, 0 \rangle \equiv |0 \rangle \, \otimes \, |{\tilde 0} \rangle$, with $|0 \rangle$ and $|{\tilde 0} \rangle$ the vacua annihilated by the annihilation operators $a_{\bf k}$ and $\tilde a_{\bf k}$, respectively. On the other hand, $|0(\theta) \rangle $ is the vacuum with respect to the fields ${\psi(\theta;x)}$ and $\tilde {\psi}(\theta;x)$, which are obtained by means of the Bogoliubov transformation: \bsa \psi(\theta ; x) &=& B^{-1}(\theta) \psi(x) B(\theta), \\ [2mm] \tilde \psi (\theta;x) &=& B^{-1}(\theta) \tilde \psi(x) B(\theta)~, \label{ex(10)} \esa where $B(\theta) \equiv e^{-i{\cal G}}$, with the generator ${\cal G} = -i \sum_{\bf k} \theta_{k} (a_{\bf k}^{\dag} \tilde a_{\bf k}^{\dag} - a_{\bf k} \tilde a_{\bf k})$. For simplicity, ${\theta}$ is assumed to be independent of space-time. Extension to space-time dependent Bogoliubov transformations is possible~\cite{Celeghini:1993a}. $|0({\theta}) \rangle $ is an $SU(2)$ generalized coherent state~\cite{Perelomov:1986tf}. The Hamiltonian for the $\{\psi(x), \, \tilde{\psi}(x)\}$ system is ${\hat H} = H - {\tilde H}$ (to be compared with Eq.~(\ref{(5b)})), and is given by ${\hat H} = \sum_{\bf k} \hbar\, \om_{\bf k}(a_{\bf k}^{\dag} a_{\bf k} - \tilde a_{\bf k}^{\dag} \tilde a_{\bf k})$. The $\theta$-vacuum $|0(\theta) \rangle$ is the zero-eigenvalue eigenstate of ${\hat H}$. The relation $ [ a_{\bf k}^{\dag} a_{\bf k} - \tilde a_{\bf k}^{\dag} \tilde a_{\bf k} ]|0(\theta) \rangle = 0$, for any ${\bf k}$ (and any $\theta$), characterizes the $\theta$-vacuum structure and is called the $\theta$-vacuum condition.
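For a single mode, the structure of $|0(\theta)\rangle$ can be verified with elementary $4\times 4$ matrices. The sketch below is our own illustration (the overall sign convention in the exponent may differ from the one adopted in the text): it builds the fermionic pair $(a,\tilde a)$ via a Jordan-Wigner construction, applies the Bogoliubov rotation to $|0\rangle$, and checks both the $\theta$-vacuum condition and the condensate density $\langle a^{\dag}a\rangle = \sin^2\theta$.
\begin{verbatim}
# Illustrative sketch: single-mode theta-vacuum on the 4-dim Fock space
# of the pair (a, a-tilde), built with Jordan-Wigner matrices.
import numpy as np
from scipy.linalg import expm

c = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation
Z = np.diag([1.0, -1.0])                # parity (-1)^n string factor
I2 = np.eye(2)

a = np.kron(c, I2)                      # a
at = np.kron(Z, c)                      # a-tilde; string keeps {a, at} = 0

theta = 0.3
K = a.conj().T @ at.conj().T            # a^dag a-tilde^dag
vac = np.zeros(4); vac[0] = 1.0         # |0> = |0> (x) |0~>

theta_vac = expm(theta * (K - K.conj().T)) @ vac  # cos|00> + sin|11>

n_a = a.conj().T @ a
n_at = at.conj().T @ at
print(np.allclose((n_a - n_at) @ theta_vac, 0.0))                 # vacuum condition
print(np.isclose(theta_vac @ n_a @ theta_vac, np.sin(theta)**2))  # condensate
\end{verbatim}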
The space of states ${\hat {\cal H}} = {\cal H} \otimes {\tilde {\cal H}} $ is constructed by repeated applications of the creation operators of ${\psi(\theta;x)}$ and ${\tilde \psi(\theta;x)}$ on $|0({\theta}) \rangle $ and is called the $\theta$-representation $\lbrace |0({\theta}) \rangle \rbrace $. $\theta$-representations corresponding to different values of the $\theta$ parameter are unitarily inequivalent representations of the canonical anti-commutation relations in QFT~\cite{BlasoneJizbaVitiello:2011}. The state $|0({\theta}) \rangle $ is known to be a finite temperature state~\cite{BlasoneJizbaVitiello:2011,Celeghini:1992a,Celeghini:1993a,Celeghini:1998a,Umezawa:1982nv}, namely one finds that $\theta$ is a temperature-dependent parameter. This tells us that the algebra doubling leads to a thermal field theory. The Bogoliubov transformations then induce transitions through system phases at different temperatures. We now consider the subspace ${\cal H}_{\theta c} \, {\subset} \, \lbrace |0({\theta}) \rangle \rbrace$ made of all the states $|a \rangle_{\theta c} $, including $|0({\theta}) \rangle $, such that the {\it $\theta$-state condition} \be [ a_{\bf k}^{\dag} a_{\bf k} - \tilde a_{\bf k}^{\dag} \tilde a_{\bf k} ] |a \rangle_{\theta c} = 0 ~, \quad \quad {\rm for ~ any} \quad {\bf k}, \label{ex(11)}\ee holds in ${\cal H}_{\theta c}$ (note that Eq.~(\ref{ex(11)}) is similar to the Gupta-Bleuler condition defining the physical states in quantum electrodynamics (QED)). Let $\langle ... \rangle_{\theta c}$ denote matrix elements in ${\cal H}_{\theta c}$; we have \be \langle j_{\mu}(x) \rangle_{\theta c} = \langle \tilde j_{\mu}(x) \rangle_{\theta c} , \label{(12)} \ee where $j_{\mu}(x) = {\overline \psi}\gamma^{\mu}\psi$ and ${\tilde j}_{\mu}(x) = {\overline {\tilde \psi}}\gamma^{\mu}{\tilde \psi}$. Equalities between matrix elements in ${\cal H}_{\theta c}$, say $\langle A \rangle_{\theta c} = \langle B \rangle_{\theta c} $, are denoted by $A\cong B $ and we call them $\theta$-w-equalities ($\theta$-weak equalities). They are classical equalities since they are equalities among c-numbers. ${\cal H}_{\theta c}$ is invariant under the dynamics described by $\hat H$ (even in the general case in which interaction terms are present in $\hat H$, provided that the charge is conserved). The key point is that, due to Eq.~(\ref{(12)}), the matrix elements in ${\cal H}_{\theta c}$ of the Lagrangian Eq.~(\ref{(7)}) are invariant under the simultaneous local gauge transformations of the $\psi$ and $\tilde \psi$ fields given by \be \psi(x)\to \exp{[ig\alpha(x)]}\psi(x), \qquad \qquad \tilde \psi(x) \to \exp{[ig\alpha(x)]}\, \tilde \psi(x) ~, \label{(13)} \ee i.e., \be \langle \hat L \rangle_{\theta c} \to \langle \hat L^\prime \rangle_{\theta c} = \langle \hat L \rangle_{\theta c}~,~~{\rm in}~~{\cal H}_{\theta c}, \label{ex(14)} \ee under the gauge transformations (\ref{(13)}). The tilde term $\overline{\tilde{\psi}}\gamma^\mu\partial_{\mu}\tilde{\psi}$ thus plays a crucial r\^ole in the $\theta$-w-gauge invariance of $\hat L$ under Eq.~(\ref{(13)}). Indeed, it transforms in such a way as to compensate the local gauge transformation of the $\psi$ kinematical term, i.e., \be \overline {\tilde \psi} (x) \gamma^{\mu}\partial_{\mu} \tilde \psi (x) \to \overline {\tilde \psi} (x) \gamma^{\mu} \partial_{\mu} \tilde \psi (x) + g \partial^{\mu}\alpha(x) \tilde{j}_{\mu}(x).
\label{(15)} \ee This suggests introducing the vector field $A_{\mu}^\prime$ by \be gj^{\bar \mu} (x) A_{\bar \mu}^\prime(x)\cong \overline {\tilde \psi} (x) \gamma^{\bar \mu} \partial_{\bar \mu} \tilde \psi (x)~,~~~ \bar \mu=0,1,2,3. \label{(16)} \ee Here and in the following, the bar over ${\mu}$ means no summation on repeated indices. Thus, the vector field $A_{\mu}^\prime$ transforms as \be A_{\mu}^\prime(x) \to A_{\mu}^\prime(x) + \partial_{\mu}\alpha(x), \label{ex(17)} \ee when the transformations of Eq.~(\ref{(13)}) are implemented. In ${\cal H}_{\theta c}$, $A_{\mu}^\prime$ can then be identified with the conventional U(1) gauge vector field and can be introduced in the original Lagrangian through the usual coupling term $ig{\overline \psi}\gamma^{\mu}\psi A_{\mu}^\prime$. The identification (\ref{(16)}) does not change the $\theta$-vacuum structure. Therefore, provided that one restricts attention to matrix elements in ${\cal H}_{\theta c}$, matrix elements of physical observables, which are solely functions of the ${\psi}(x)$ field, are not changed by the identification (\ref{(16)}). Our identification of $A_{\mu}^\prime$ with the U(1) gauge vector field is also justified by the fact that observables turn out to be invariant under gauge transformations, and the conservation laws derivable from $\hat L$, namely in the simple case of Eq.~(\ref{(7)}) the current conservation laws, $\partial^{\mu}j_{\mu}(x) = 0$ and $\partial^{\mu}\tilde j_{\mu}(x) = 0$, are also preserved as $\theta$-w-equalities when Eq.~(\ref{(16)}) is adopted. Indeed, one obtains~\cite{Celeghini:1992a,Celeghini:1993a} $\partial^\mu j_\mu (x) \cong 0$ and $\partial^{\mu} \tilde j_{\mu} (x) \cong 0$. One may also show that \be \label{(29)} \partial^{\nu}F_{\mu\nu}^\prime(x) \cong -gj_{\mu}(x), \quad \quad \partial^{\nu}F_{\mu\nu}^\prime(x) \cong -g \tilde j_{\mu}(x), \ee in ${\cal H}_{\theta c}$. In the Lorentz gauge, from Eq.~(\ref{(29)}) we also obtain the $\theta$-w-relations $\partial^{\mu}A_{\mu}^\prime(x) \cong 0$ and $\pa^{2} A_{\mu}^\prime(x) \cong gj_{\mu}(x)$. In conclusion, the ``doubled algebra'' Lagrangian (\ref{(7)}) for the field $\psi$ and its ``double'' $\tilde \psi$ can be substituted in ${\cal H}_{\theta c}$ by: \be \hat L_{\rm g} \cong - \frac{1}{4}{F^{\prime \mu\nu}} F_{\mu\nu}^\prime - \overline \psi \gamma^{\mu}\partial_{\mu}\psi + ig{\overline \psi}\gamma^{\mu}\psi A_{\mu}^\prime ~, ~~{\rm in}~{\cal H}_{\theta c}~, \label{(31)} \ee where, remarkably, the tilde-kinematical term $\overline{\tilde {\psi}} \gamma^{\mu}\partial_{\mu} \tilde{\psi}$ is replaced, in a $\theta$-w-sense, by the coupling term $ig{\overline \psi}\gamma^{\mu}\psi A_{\mu}^\prime$ between the gauge field $A_{\mu}^\prime$ and the matter field current ${\overline \psi}\gamma^{\mu}\psi$. Finally, in the case where an interaction term is present in the Lagrangian (\ref{(7)}), $\hat L_{\rm tot}={\hat L}+{\hat L}_{I},~~~{\hat L}_{I}=L_{\rm I}-{\tilde L}_{\rm I}$, the above conclusions still hold provided ${\cal H}_{\theta c}$ is an invariant subspace under the dynamics described by $\hat L_{\rm tot}$. Our discussion has thus shown that the ``doubling'' of the field algebra introduces a gauge structure in the theory: Connes' two-sheeted geometric construction has intrinsic gauge properties. The algebraic structure underlying the above discussion is recognized to be that of the noncommutative $q$-deformed Hopf algebra~\cite{Celeghini:1998a}.
We remark that the Hopf coproduct map ${\cal A} \, \rightarrow \, {\cal A} \otimes \mathds{1} + \mathds{1} \otimes {\cal A} \equiv \, {\cal A}_1 \otimes {\cal A}_2$ is nothing but the map presented in Eq.~(\ref{1}) which duplicates the algebra. On the other hand, it can be shown~\cite{Celeghini:1998a} that the Bogoliubov transformations of ``angle'' $\theta$ relating the fields $\psi (\theta; x)$ and ${\tilde \psi} (\theta; x)$ to $\psi (x)$ and ${\tilde \psi} (x)$, Eqs.~(\ref{ex(10)}), are obtained by convenient combinations of the $q$-{\it deformed} Hopf coproduct $\Delta a^{\dag}_q=a^{\dag}_q\otimes q^{1/2} + q^{-1/2}\otimes a^{\dag}_q$, with $q \equiv q (\theta)$ the deformation parameter and $a^{\dag}_q$ the creation operators in the $q$-deformed Hopf algebra~\cite{Celeghini:1998a}. These deformed coproduct maps are noncommutative. All of this signals a deep physical meaning of noncommutativity in the Connes construction, since the deformation parameter is related to the condensate content of $|0 (\theta) \rangle$ under the constraint imposed by the $\theta$-state condition Eq.~(\ref{ex(11)}). Actually, such a state condition is a characterizing condition for the physical states of the system. The crucial point is that, as a characteristic feature of quantum field theory~\cite{BlasoneJizbaVitiello:2011,Celeghini:1998a}, the deformation parameter {\it labels} the $\theta$-representations $\{|0 (\theta) \rangle\}$ and, as already mentioned, for $\theta \neq \theta'$, $\{|0 (\theta) \rangle\}$ and $\{|0 (\theta') \rangle\}$ are unitarily inequivalent representations of the canonical (anti-)commutation rules~\cite{BlasoneJizbaVitiello:2011,Umezawa:1982nv}. In turn, the physical meaning of this is that an order parameter exists, which assumes different $\theta$-dependent values in each of the representations. Thus, the $q$-deformed Hopf algebra structure of QFT induces the {\it foliation} of the whole Hilbert space into physically inequivalent subspaces. From our discussion we conclude that this is also the scenario which NCSG presents to us. One more remark in this connection is that in the NCSG construction the derivative in the discrete direction is a finite difference quotient~\cite{ncg-book1,ncg-book2,PRD}, and it is then suggestive that the $q$-derivative is also a finite difference derivative. This point deserves further formal analysis, which we plan to carry out. We thus conclude that Connes' NCSG construction is built on the same noncommutative deformed Hopf algebra structure of QFT. In the next Section we show that it is also related to dissipation and carries in itself the seeds of quantization. \section{Dissipation, gauge field and quantization} In the second equation in (\ref{(29)}) the current $\tilde j_{\mu}$ acts as the source of the variations of the gauge field tensor $F_{\mu \nu}^\prime$. We express this by saying that the tilde field plays the r\^ole of a ``reservoir''. Such a reservoir interpretation may be extended also to the gauge field $A_{\mu}^\prime$, which is known to act, indeed, in such a way as to ``compensate'' the changes in the matter field configurations due to the local gauge freedom. When we consider variations in the $\theta$ parameter (namely in the $q$-deformation parameter), induced by the Bogoliubov transformation generator, we have (time-)evolution over the manifold of the $\theta$-labeled (i.e.
$q$-labeled) spaces and we have dissipative fluxes between the doubled sets of fields, or, in other words, according to the above picture, between the system and the reservoir. We talk of dissipation and open systems when considering the Connes construction and the Standard Model in the same sense in which, in a system of electromagnetically interacting matter fields, neither the energy-momentum tensor of the matter field nor that of the gauge field is conserved. However, one verifies in a standard fashion~\cite{Landau} that $\partial_{\mu} T^{\mu \nu}_{\rm matter} = e F^{\mu \nu} j_{\mu} = - \partial_{\mu} T^{\mu \nu}_{\rm gauge\ field}$, so that what is conserved is the {\sl total} $T^{\mu \nu}_{\rm total} = T^{\mu \nu}_{\rm matter} + T^{\mu \nu}_{\rm gauge \ field}$, namely the energy-momentum tensor of the {\it closed} system \{matter field, electromagnetic field\}. As remarked in Ref.~\cite{PRD}, each element of the pair is {\it open} (dissipating) on the other one, although the {\it closedness} of the total system is ensured. Thus the closedness of the SM is not spoiled in our discussion. In order to further clarify how the gauge structure is related to the algebra doubling and to dissipation, we consider the prototype of a dissipative system, namely the classical one-dimensional damped harmonic oscillator \be m \ddot x + \gamma \dot x + k x = 0~, \label{2.1a} \ee and its time-reversed $(\gamma \rightarrow - \gamma)$ (doubled) image \be m \ddot y - \gamma \dot y + k y = 0 ~,\label{2.1b} \ee with time-independent $m$, $\gamma$ and $k$, needed~\cite{Celeghini:1992yv} in order to set up the canonical formalism for open systems. The system of Eq.~(\ref{2.1a}) and Eq.~(\ref{2.1b}) is then a closed system described by the Lagrangian \be L (\dot{x},\dot{y},x,y)= m\dot{x}\dot{y}+ {\ga \over 2}(x\dot{y}-y\dot{x})-k x\,y~. \lab{(26)} \ee It is convenient to use the coordinates ${{x_1}(t)}$ and ${{x_2}(t)}$ defined by \be x_{1}(t) = \frac{x(t) + y(t)}{\sqrt{2}}~, \qquad x_{2}(t) = \frac{x(t) - y(t)}{\sqrt{2}}~. \ee The equations of motion are then rewritten as \bsa m \ddot x_1 + \gamma \dot x_2 + k x_1 &=& 0~, \label{2.16} \\ [1mm] m \ddot x_2 + \gamma \dot x_1 + k x_2 &=& 0~. \esa The canonical momenta are: $\, p_{1} = m {\dot x}_{1} + (1/2) \ga {x_2}$ ; $p_{2} = - m {\dot x}_{2} - (1/2) \ga {x_1} \,$; the Hamiltonian is \bea {\hat H} &=& H_1 - H_2 = {1 \over 2m} (p_1 - {\gamma\over 2}x_2)^2 + {k\over 2} x_1^2 -{1 \over 2m} (p_2 + {\gamma\over 2}x_1)^2 - {k\over 2} x_2^2~. \label{2.17} \eea We then recognize~\cite{Tsue:1993nz,Blasone:1996yh,Celeghini:1992a,Celeghini:1993a} that \be A_i = {B\over 2} \epsilon_{ij} x_j~, ~~~(i,j = 1,2)~, \qquad \qquad {\epsilon}_{ii} = 0~,~~ {\epsilon}_{12} = - {\epsilon}_{21} = 1~, \label{2.21} \ee with $B \equiv {c\, \gamma/e}$, acts as a vector potential, and we obtain that the system of oscillators Eq.~(\ref{2.1a}) and Eq.~(\ref{2.1b}) is equivalent to the system of two particles with opposite charges $e_1 = - e_2 = e$ in the (oscillator) potential $\Phi \equiv \frac{k}{2e}({x_1}^2 - {x_2}^2) \equiv {\Phi}_1 - {\Phi}_2$, with $ {\Phi}_i \equiv \frac{k}{2e}{x_i}^{2}$, and in the constant magnetic field $\boldvec{B}$ defined as $\boldvec{B}= \boldvec{\nabla} \times \boldvec{A} = - B \boldvec{{\hat 3}}$.
The Hamiltonian is indeed \bea {\hat H} = {H_1} - {H_2} = {1 \over 2m} (p_1 - {e_1 \over{c}}{A_1})^2 + {e_1}{\Phi}_1 - {1 \over 2m} (p_2 + {e_2 \over{c}} A_2)^2 + {e_2}{\Phi}_2~,\label{2.22} \eea and the Lagrangian of the system can be written in the familiar form \bea L &=& {1 \over 2m} (m{\dot x_1} + {e_1 \over{c}} A_1)^2 - {1 \over 2m} (m{\dot x_2} + {e_2 \over{c}} A_2)^2 - {e^2\over 2mc^2}({A_1}^2 + {A_2}^2) - e\Phi \label{2.24i} \nonumber\\ &=& {m \over 2} ({\dot x_1}^2 - {\dot x_2}^2) +{e\over{c}}( {\dot x}_1 A_1 + {\dot x}_2 A_2) - e{\Phi}~. \label{2.24} \eea Note the ``minus'' sign of the Lorentzian-like (pseudoeuclidean) metric in Eq.~(\ref{2.24}) (cf. also Eqs.~(\ref{(5b)}), (\ref{(7)}) and (\ref{2.22})), not imposed by hand, but derived through the doubling of the degrees of freedom and crucial in our description (and in the NCSG construction). The doubled coordinate $x_2$ thus acts as the gauge field component $A_1$ to which the $x_1$ coordinate is coupled, and {\sl vice versa}. The energy dissipated by one of the two systems is gained by the other one, and vice versa, in analogy to what happens in standard electrodynamics as observed above. One thus recovers the picture of the gauge field as the bath or reservoir in which the system is embedded~\cite{Celeghini:1992a,Celeghini:1993a}. Our toy system of harmonic oscillators Eq.~(\ref{2.1a}) and Eq.~(\ref{2.1b}) also offers a useful playground to show how dissipation is implicitly related to quantization. 't~Hooft has indeed conjectured that classical, deterministic systems with loss of information might lead to a quantum evolution~\cite{'tHooft:1999gk,erice,'tHooft:2006sy}, provided some specific energy conditions are met and some constraints are imposed. Let us verify how such a conjecture is confirmed for our system described by Eq.~(\ref{2.1a}) and Eq.~(\ref{2.1b}). We will thus show that quantization is achieved as a consequence of dissipation. We rewrite the Hamiltonian Eq.~(\ref{2.17}) as ${\hat H} = H_{\rm \I} - H_{\rm \II}$, with \bea &&H_{\rm \I} = \frac{1}{2 \Om {\cal C}} (2 \Om {\cal C} - \Ga J_2)^2 ~~,~~ H_{\rm \II} = \frac{\Ga^2}{2 \Om {\cal C}} J_2^2\, \label{split} \eea where ${\cal C}$ is the Casimir operator and $J_2$ is the (second) $SU(1,1)$ generator~\cite{Blasone:1996yh}: \bea\lab{ca} {\cal C} = \frac{1}{4 \Om m}\lf[ \lf(p_1^2 - p_2^2\ri)+ m^2\Om^2 \lf(x_1^2 - x_2^2\ri)\ri]~, \qquad J_2 = \frac{m}{2}\lf[\lf( {\dot x}_1 x_2 - {\dot x}_2 x_1 \ri) - {\Ga} r^2 \ri] ~. \eea ${\cal C}$ is taken to be positive and $$\Ga = {\ga\over 2 m}~,~ \Om = \sqrt{\frac{1}{m} (k-\frac{\ga^2}{4m})}~,~ \mbox{with}~~ k >\frac{\ga^2}{4m}~.$$ This ${\hat H}$ belongs to the class of Hamiltonians considered by 't~Hooft and is of the form~\cite{Blasone:2000ew,Blasone:2002hq} \bea \lab{pqham} {\hat H} &=& \sum_{i=1}^2p_{i}\, f_{i}(q)\,, \eea with $p_1 = {\cal C}$, $p_2 = J_2$, $f_1(q)=2\Om$, $f_2(q)=-2\Ga$. Note that $\{q_{i},p_i\} =1$ and the other Poisson brackets vanish. The $f_{i}(q)$ are nonsingular functions of $q_{i}$, and the equations for the $q$'s, $\dot{q_{i}} = \{q_{i}, H\} = f_{i}(q)$, are decoupled from the conjugate momenta $p_i$. A complete set of observables, called {\em beables}, then exists, which Poisson commute at all times. Thus the system admits a deterministic description even when expressed in terms of operators acting on some functional space of states $|\psi\ran$, such as the Hilbert space~\cite{erice}.
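Before proceeding, the closedness of the doubled system can be checked numerically. The short sketch below is our own illustration (the parameter values are arbitrary): it integrates Eqs.~(\ref{2.1a}) and (\ref{2.1b}) and monitors ${\hat H}$ of Eq.~(\ref{2.17}), which in the $(x,y)$ coordinates reads ${\hat H}=m\dot{x}\dot{y}+kxy$; the energy lost by the damped $x$-oscillator is gained by its amplified double $y$.
\begin{verbatim}
# Illustrative sketch: the damped oscillator plus its time-reversed
# double form a closed system; H_hat = m*xdot*ydot + k*x*y is conserved.
import numpy as np
from scipy.integrate import solve_ivp

m, gamma, k = 1.0, 0.2, 4.0   # arbitrary illustrative values

def rhs(t, s):
    x, xdot, y, ydot = s
    return [xdot, -(gamma * xdot + k * x) / m,   # m x'' + gamma x' + k x = 0
            ydot, (gamma * ydot - k * y) / m]    # m y'' - gamma y' + k y = 0

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for t in np.linspace(0.0, 20.0, 5):
    x, xdot, y, ydot = sol.sol(t)
    print(t, m * xdot * ydot + k * x * y)  # stays at 4.0 up to solver error
\end{verbatim}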
Note that such an operatorial description in terms of a Hilbert space does not imply {\em per se} quantization of the system. Physical states are defined to be those satisfying the condition $J_2 |\psi\ran = 0$. This guarantees that ${\hat H}$ is bounded from below and ${\hat H} |\psi\ran= H_{\rm \I} |\psi\ran = 2\Om {\cal C}|\psi\ran$. $H_{\rm \I}$ thus reduces to the Hamiltonian for the two-dimensional ``isotropic'' (or ``radial'') harmonic oscillator $\ddot{r} + \Om^2 r =0 $. Indeed, putting $K\equiv m \Om^2$, we obtain \bea \lab{17} {\hat H} |\psi\ran= H_{\rm \I} |\psi\ran = \left( \frac{1}{2m}p_{r}^{2} + \frac{K}{2}r^{2}\right) |\psi \ran \, . \eea The physical states are invariant under time-reversal ($|\psi(t)\ran = |\psi(-t)\ran$) and periodic with period $\tau = 2\pi/\Omega$. $ H_{\rm \I} = 2 \Om{\cal C} $ has the spectrum ${\cal H}^n_{\rm \I}= \hbar \Om n$, $n = 0, \pm 1, \pm 2, ...$; since ${\cal C}$ has been chosen to be positive, only positive values of $n$ are considered. Then one obtains $$\frac{ \langle \psi(\tau)|{\hat H} |\psi(\tau) \rangle }{\hbar} \tau - \phi = 2\pi n~~,~~ n = 0, 1, 2, \ldots~.$$ Using $\tau = 2 \pi/\Om$ and $\phi = \alpha \pi$, where $\al$ is a real constant, leads to \bea\lab{spectrum} {\cal H}_{\rm \I,e\!f\!f}^n \equiv \langle \psi_{n}(\tau)| {\hat H} |\psi_{n}(\tau) \rangle= \hbar \Om \lf( n + \frac{\alpha}{2} \ri) ~. \eea ${\cal H}_{\rm \I,e\!f\!f}^n$ gives the effective $n$th energy level of the system, namely the energy given by ${\cal H}_{\rm \I}^n$ corrected by its interaction with the environment. We conclude that the dissipation term $J_2$ of the Hamiltonian is responsible for the zero point ($n = 0$) energy: $E_{0} =(\hbar/2) \Om \alpha$. In QM the zero point energy {\it is} the ``signature'' of quantization since it is formally due to the nonzero commutator of the canonically conjugate $q$ and $p$ operators. Our conclusion is thus that the (zero point) ``quantum contribution'' $E_0$ to the spectrum of physical states is due to, and signals, the underlying dissipative dynamics: dissipation manifests itself as ``quantization''. We remark that the ``full Hamiltonian'' Eq.~(\ref{pqham}) plays the r{\^o}le of the free energy ${\cal F}$, and $2 \Ga J_{2}$ represents the heat contribution in ${\hat H}$ (or $\cal F$). Indeed, using $S \equiv (2 J_{2}/\hbar)$ and $U \equiv 2 \Om {\cal C}$ and the defining relation for temperature in thermodynamics, $\pa S/\pa U = 1/T$, with the Boltzmann constant $k_{\rm B} = 1$, from Eq.~(\ref{pqham}) we obtain $T = \hbar \Ga$, i.e., provided that $S$ is identified with the entropy, $\hbar \Ga$ can be regarded as the temperature. Note that it is not surprising that $2 J_{2}/\hbar$ behaves as the entropy, since it controls the dissipative part of the dynamics (thus the irreversible loss of information). It is interesting to observe that this thermodynamical picture is also confirmed by the results on the canonical quantization of open systems in quantum field theory~\cite{Celeghini:1992yv}. On the basis of these results (confirming 't~Hooft's conjecture) we have proved that the NCSG classical construction, due to its essential ingredient of the doubling of the algebra, is intrinsically related to dissipation and thus carries in itself the seeds of quantization~\cite{PRD}. \section{Conclusions} In Ref.~\cite{Sivasubramanian:2003xy} it has been shown that a ``dissipative interference phase'' also appears in the evolution of dissipative systems.
Such a feature exhibits one more consequence of the algebra doubling, which plays such an important r{\^o}le in the Connes NCSG construction. In this report we have discussed how such a doubling process implies the gauge structure of the theory, its thermal characterization, its built-in dissipation features and the consequent possibility to exhibit quantum evolution behaviors. Here we only mention that the doubling of the coordinates also provides a relation between dissipation and noncommutative geometry in the plane. We refer to Ref.~\cite{PRD} for details with reference to Connes' construction (see also~\cite{Vienna2012}). We only recall that in the $ (x_+,x_-)$ plane, $x_{\pm}=x\pm y/2$ (cf. Section 2), the components of the forward-in-time and backward-in-time velocities $ v_\pm =\dot{x}_\pm$, given by \be v_{\pm }={{\partial H} \over \partial p_{\pm }} =\pm \, \frac{1}{m}\lf( p_\pm \mp \frac{\ga}{2}\, x_\mp \ri) ~, \label{(9)} \ee do not commute: $[v_+,v_-]=i\hbar \,\ga/ m^2$. Thus it is impossible to fix these velocities $v_+$ and $v_-$ as being identical~\cite{Sivasubramanian:2003xy}. Moreover, one may always introduce a set of canonically conjugate position coordinates $ (\xi_+,\xi_-)$ defined by $\xi_\pm = \mp L^2K_\mp$, with $\hbar K_\pm = m v_\pm$, so that $\left[\xi_+,\xi_- \right] = iL^2$, which characterizes the noncommutative geometry in the plane $(x_+,x_-)$. $L$ denotes the geometric length scale in the plane. One can show~\cite{Sivasubramanian:2003xy} that an Aharonov--Bohm-type phase interference can always be associated with the noncommutative $(X,Y)$ plane where \begin{equation} \label{1noncomm} [X,Y]=iL^2~. \end{equation} Eq.~(\ref{1noncomm}) implies the uncertainty relation $ (\Delta X) (\Delta Y) \ge L^2/2$, due to the zero point fluctuations in the coordinates. Consider now a particle moving in the plane along two paths starting and finishing at the same point, in a forward and in a backward direction, respectively, forming a closed loop. The phase interference $\vartheta$ may be shown~\cite{Sivasubramanian:2003xy} to be given by $\vartheta = {\cal A}/L^{2}$, where $ {\cal A} $ is the area enclosed in the closed loop formed by the paths, and Eq.~(\ref{1noncomm}) in the noncommutative plane can be written as \begin{equation} [X,P_X]=i\hbar \ \ \ {\rm where} \ \ \ P_X=\left(\frac{\hbar Y}{L^2}\right)~. \label{phase4} \end{equation} The quantum phase interference between two alternative paths in the plane is thus determined by the length scale $ L $ and the area $ {\cal A} $. In the dissipative case, it is $L^2=\hbar / \ga$, and the quantum dissipative phase interference $\vartheta = {\cal A}/L^2 = {\cal A} \ga/\hbar$ is associated with the two paths in the noncommutative plane, provided $x_+ \neq x_-$. When doubling is absent, i.e., $x_+ = x_-$, the quantum effect disappears. It is indeed possible to show~\cite{Blasone:1998xt} that in such a classical limit case the doubled degree of freedom is associated with ``unlikely processes'' and it may thus be dropped in higher order terms in the perturbative expansion, as indeed happens in Connes' construction. At the Grand Unified Theories scale, when inflation took place, the effect of gauge fields is fairly shielded. However, since these higher order terms are the ones responsible for quantum corrections, the second sheet cannot be neglected if one does not want to preclude quantization effects, as happens once the universe entered the radiation dominated era.
{ "attr-fineweb-edu": 1.597656, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbdPxK6EuNBRhGl6-
\section{Introduction} {$\xi^1$\,CMa} (HR\,2387 = HD\,46328 = HIP\,31125 = MCW 441 = ADS\,5176A) is a bright ($V=4.3$\,mag), apparently single, B0.5 subgiant located near the middle of its main sequence evolution \citep[e.g.][]{2005A&A...433..659N,2017MNRAS.471.2286S}. It has long been known to exhibit large-amplitude $\beta$~Cep radial pulsations with a period of roughly $P=0.20958$~d \citep[4.77 d$^{-1}$; e.g.][]{1953PASP...65..193M,1954PASP...66..200W,1992AAS...96..207H, 2006CoAst.147..109S}. \citet{2009RMxAC..36..319H} reported the star to be magnetic. \citet{2017MNRAS.471.2286S} performed a detailed determination of the star's physical parameters, finding $T_{\rm eff}=27\pm 1$~kK, $\log g=3.78\pm 0.07$, and an age of $11.1\pm 0.7$~Myr. At an inferred mass of $14.2\pm 0.4$~M$_\odot$, this implies that {$\xi^1$\,CMa} has completed three-quarters of its main sequence evolution. Analysis of high resolution spectropolarimetry obtained between 2000\,--\,2017 led \citet{2017MNRAS.471.2286S} to conclude that the star's rotation period is remarkably long, over 30 years. In particular, \citet{2017MNRAS.471.2286S} \citep[see as well][]{2018MNRAS.478L..39S} demonstrated that previous claims of much shorter rotation periods were unable to explain the magnetic observations. \cite{2018MNRAS.478L..39S} discovered the presence of unexpected crossover signatures in Stokes $V$ profiles of $\xi^1$\,CMa\ obtained near the phase of null longitudinal field. They demonstrated that the combination of radial pulsation and departures from a dipole magnetic field geometry could explain the presence of this novel and unexpected ``radial crossover'' effect. \citet{2017MNRAS.471.2286S} also examined the behaviour of the radial-velocity (RV) pulsations of the star over a span of 17~yr. They demonstrated that a constant pulsation period was unable to phase those data coherently, and consequently inferred that the period was increasing at a rate of 0.96 s/cen. This result is qualitatively consistent with earlier reports of period instability of $\xi^1$\,CMa. \citet{1999NewAR..43..455J}, in his summary of period evolution of $\beta$~Cep stars, cites a rate of period change of $0.37\pm 0.05$~s/cen reported {by \cite{1992PhDT-Pigulski}}. \citet{2015A&A...584A..58N} used those results to test the influence of rotation and convective core overshoot on models of massive star evolution, finding that the measured rate of period change of $\xi^1$\,CMa\ was in good agreement with that predicted by models under the constraints applied by the physical parameters of \citet{2016PhDT.......390S,2017MNRAS.471.2286S}. Real or apparent pulsation period evolution can be the consequence of a number of phenomena, including binarity and stellar evolution. For example, \cite{1984PASP...96..657O, 2012A&A...544A..28O} reported that the $\beta$~Cep star BW~Vul exhibits complex period variability that \citet{2012A&A...544A..28O} concluded is best understood as a piecewise linear ephemeris, corresponding to a constant period interrupted every few decades by an abrupt period change. A number of studies have considered the role of stellar evolution and binarity in understanding pulsation period changes \citep{2016JAVSO..44..179N}. \cite{1919Obs....42..338E} conducted the first test by considering the rate of period change for the prototype Cepheid, $\delta$~Cephei, and showed that the rate of period change was inconsistent with energy generation from gravitational contraction. 
More recently, numerous works have used period change measurements to test evolution of Cepheids such as Polaris, $\delta$~Cep, and l~Car \citep{2012ApJ...745L..32N,2016JAVSO..44..179N,2014A&A...563A..48N,2015MNRAS.449.1011F,2018A&A...611L...7A}. \cite{2015ApJ...804..144A} considered the period change of $\delta$~Cep as potential evidence for an undetected close companion. Period change due to evolutionary effects has not been examined in detail for RR~Lyrae stars; however \cite{1994ApJ...423..380K} and \cite{2011AJ....141...15K} made predictions of period change from stellar evolution models and found some consistency with observations. This has been confirmed by \cite{2007A&A...476..307L} and \cite{2013JAVSO..41...75P}. The light-time effect due to binary companions appears to be one of the origins of apparent period changes for the $\beta$~Cephei stars \citep{1992A&A...253..178P,1992A&A...261..203P,1993A&A...274..269P}. In this paper we revisit (a) the pulsation frequency spectrum and (b) the evolution of the fundamental pulsation period of $\xi^1$\,CMa. We report new two-colour BRITE photometry of the star which we analyse in tandem with SMEI photometry to search for evidence of additional pulsation frequencies. We then revisit the period evolution reported by \citet{1999NewAR..43..455J} and \citet{2017MNRAS.471.2286S} using all published photometric and radial velocity (RV) measurements of $\xi^1$\,CMa, spanning over 100 years. \begin{table*} \begin{flushleft} \caption[]{\label{tab:RV_table} New (2018-2019) radial velocity measurements of $\xi^1$CMa. These data are described in Sect.~\ref{specpol}.} \end{flushleft} \begin{center} \begin{tabular}{c c|c c|c c} \hline HJD-2458000 & RV (km s$^{-1}$) & HJD-2458000 & RV (km s$^{-1}$) & HJD-2458000 & RV (km s$^{-1}$) \\ \hline 148.75321 & $11.8 \pm 0.7$ & 557.77240 & $39.9 \pm 1.3$ & 559.79702 & $14.5 \pm 0.6$ \\ 148.75429 & $11.6 \pm 0.7$ & 557.77349 & $40.1 \pm 1.3$ & 559.81117 & $20.4 \pm 0.8$ \\ 148.75537 & $10.0 \pm 0.7$ & 557.77459 & $40.0 \pm 1.3$ & 559.81230 & $20.9 \pm 0.8$ \\ 148.75645 & $10.2 \pm 0.7$ & 557.77569 & $40.0 \pm 1.3$ & 559.81342 & $21.5 \pm 0.8$ \\ 148.97144 & $9.5 \pm 0.7$ & 557.83743 & $16.4 \pm 0.7$ & 559.81455 & $21.9 \pm 0.8$ \\ 148.97252 & $9.3 \pm 0.7$ & 557.83856 & $15.9 \pm 0.7$ & 559.82356 & $26.1 \pm 0.9$ \\ 148.97360 & $9.8 \pm 0.7$ & 557.83969 & $15.4 \pm 0.7$ & 559.82467 & $26.6 \pm 0.9$ \\ 148.97468 & $10.0 \pm 0.7$ & 557.84081 & $15.0 \pm 0.7$ & 559.82577 & $27.1 \pm 0.9$ \\ 150.79889 & $33.0 \pm 0.9$ & 557.84309 & $14.0 \pm 0.6$ & 559.82688 & $27.7 \pm 0.9$ \\ 150.79997 & $32.3 \pm 0.9$ & 557.84419 & $13.5 \pm 0.6$ & 560.71247 & $40.0 \pm 1.3$ \\ 150.80105 & $32.3 \pm 0.9$ & 557.84529 & $13.1 \pm 0.6$ & 560.71357 & $40.5 \pm 1.3$ \\ 150.80213 & $31.7 \pm 0.9$ & 557.84639 & $12.6 \pm 0.6$ & 560.71467 & $39.7 \pm 1.3$ \\ 153.85512 & $19.0 \pm 0.8$ & 557.84869 & $11.8 \pm 0.6$ & 560.71577 & $39.6 \pm 1.3$ \\ 153.85620 & $18.9 \pm 0.8$ & 557.84984 & $11.5 \pm 0.6$ & 560.71862 & $39.1 \pm 1.3$ \\ 153.85728 & $19.2 \pm 0.8$ & 557.85100 & $11.1 \pm 0.6$ & 560.71973 & $38.9 \pm 1.3$ \\ 153.85835 & $20.0 \pm 0.8$ & 557.85214 & $11.4 \pm 0.6$ & 560.72083 & $39.0 \pm 1.2$ \\ 154.70789 & $25.1 \pm 0.9$ & 559.76216 & $7.2 \pm 0.5$ & 560.72193 & $38.7 \pm 1.3$ \\ 154.70898 & $25.4 \pm 0.9$ & 559.76327 & $7.3 \pm 0.5$ & 560.72359 & $37.9 \pm 1.2$ \\ 154.71008 & $26.1 \pm 0.8$ & 559.76438 & $7.3 \pm 0.5$ & 560.72470 & $37.5 \pm 1.2$ \\ 154.71119 & $26.4 \pm 0.8$ & 559.76548 & $7.3 \pm 0.5$ & 560.72580 & $37.2 \pm 
1.2$ \\ 154.91334 & $23.4 \pm 0.8$ & 559.76721 & $7.5 \pm 0.5$ & 560.72690 & $36.8 \pm 1.2$ \\ 154.91444 & $23.6 \pm 0.8$ & 559.76837 & $7.6 \pm 0.5$ & 560.76130 & $21.8 \pm 0.8$ \\ 154.91554 & $24.2 \pm 0.8$ & 559.76953 & $7.6 \pm 0.5$ & 560.76242 & $20.8 \pm 0.8$ \\ 154.91664 & $24.7 \pm 0.8$ & 559.77068 & $7.9 \pm 0.5$ & 560.76352 & $20.3 \pm 0.8$ \\ 156.76397 & $9.5 \pm 0.6$ & 559.77267 & $8.1 \pm 0.5$ & 560.76463 & $19.8 \pm 0.8$ \\ 156.76505 & $9.5 \pm 0.6$ & 559.78286 & $10.0 \pm 0.6$ & 563.74274 & $6.3 \pm 0.5$ \\ 156.76613 & $10.2 \pm 0.7$ & 559.78396 & $10.3 \pm 0.6$ & 563.74385 & $6.2 \pm 0.5$ \\ 156.76721 & $12.1 \pm 0.7$ & 559.78506 & $10.5 \pm 0.6$ & 563.74496 & $7.2 \pm 0.5$ \\ 156.91427 & $16.5 \pm 0.8$ & 559.78616 & $10.9 \pm 0.6$ & 563.74607 & $7.3 \pm 0.5$ \\ 156.91535 & $16.1 \pm 0.8$ & 559.78814 & $11.5 \pm 0.6$ & 564.84618 & $22.6 \pm 0.8$ \\ 156.91643 & $15.6 \pm 0.8$ & 559.78928 & $11.9 \pm 0.6$ & 564.84728 & $23.2 \pm 0.8$ \\ 156.91751 & $15.1 \pm 0.7$ & 559.79043 & $12.2 \pm 0.6$ & 564.84839 & $23.7 \pm 0.9$ \\ 557.76676 & $39.5 \pm 1.2$ & 559.79158 & $12.7 \pm 0.6$ & 564.84949 & $24.2 \pm 0.9$ \\ 557.76787 & $39.7 \pm 1.2$ & 559.79356 & $13.3 \pm 0.6$ & 564.85109 & $24.8 \pm 0.9$ \\ 557.76897 & $39.8 \pm 1.3$ & 559.79471 & $13.7 \pm 0.6$ & 564.85220 & $25.4 \pm 0.9$ \\ 557.77007 & $39.9 \pm 1.3$ & 559.79586 & $14.1 \pm 0.6$ & 564.85331 & $26.0 \pm 0.9$ \\ & & & & 564.85443 & $26.5 \pm 0.9$ \\ \hline \end{tabular} \end{center} \end{table*} \section{Space photometric observations}\label{observations} \subsection{BRITE-Constellation photometry}\label{brite} {$\xi^1$\,CMa} was observed by BRITE-Constellation \citep{2014PASP..126..573W,2016PASP..128l5001P} during its run in the Canis Major/Puppis~I field between October 26, 2015, and April 18, 2016. The observations were taken by three BRITE satellites, red-filter BRITE-Heweliusz (BHr) and BRITE-Toronto (BTr), and blue-filter BRITE-Lem (BLb), in `chopping mode' \citep{2016PASP..128l5001P}. A short summary of the characteristics of the BRITE data is given in Table \ref{tab:brite}. The photometry was obtained by means of the photometric pipeline described by \cite{2017A&A...605A..26P} and then corrected for instrumental effects according to the procedure described by \citet{2018pas8.conf..175P}. The complete reduced BRITE dataset spans 173.5 days and consists of 152\,112 photometric measurements. \begin{table} \centering \caption{{Space} photometry of {$\xi^1$\,CMa}. RSD and DT stand for residual standard deviation and detection threshold defined as signal-to-noise (S/N) equal to 4 in the frequency spectrum.} \label{tab:brite} \begin{tabular}{crrrr} \hline Satellite & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{$N_{\rm obs}$} & \multicolumn{1}{c}{RSD} & \multicolumn{1}{c}{DT}\\ ID & \multicolumn{1}{c}{span [d]} & & [mmag] & [mmag] \\ \hline BLb & 96.7 & 31\,255 & 19.2 & 0.93 \\ BTr & 156.5 & 64\,883 & 5.4 & 0.18 \\ BHr & 166.9 & 55\,974 & 14.4 & 0.53 \\ BTr\,$+$\,BHr & 173.5 & 120\,857 & 10.6 & 0.17 \\ \hline SMEI & 2884.7 & 25\,581 & 10.6 & 0.32 \\ \hline {TESS} & {21.8} & {14\,814} & {0.2} & {0.03} \\ \hline\hline \end{tabular} \end{table} \subsection{SMEI photometry}\label{smei} The Solar Mass Ejection Imager (SMEI) experiment \citep{2003SoPh..217..319E,2004SoPh..225..177J} was placed on board the Coriolis spacecraft and was aimed at measuring sunlight scattered by free electrons of the solar wind.
We used photometry of {$\xi^1$\,CMa} obtained between 2003 and 2010 and available through the University of California San Diego (UCSD) web page\footnote{http://smei.ucsd.edu/new\_smei/index.html}. The SMEI time series are affected by long-term calibration effects, especially a repeatable variability with a period of one year. {The raw SMEI UCSD photometry of {$\xi^1$\,CMa} was corrected for the one-year variability by subtracting an interpolated mean light curve, which was obtained by folding the raw data with the period of one year, calculating median values in 200 intervals in phase, and then interpolating between them. In addition, the worst parts of the light curve and outliers were removed. The data points were also assigned individual uncertainties calculated using the scatter of the neighbouring data. Then, a model consisting of the dominant pulsation frequency (4.77~d$^{-1}$) and its detectable harmonics was fitted to the data. Finally, the low-frequency instrumental variability was filtered out by subtracting a trend using residuals from the fit. The last two steps were iterated several times.} The SMEI dataset that we analysed spans 2885 days and consists of 25\,581 photometric measurements. \subsection{TESS photometry}\label{tess} {The primary goal of NASA's TESS mission \citep{2014SPIE.9143E..20R,2015JATIS...1a4003R} is the detection of planets by means of the transit method. TESS observations cover almost the entire sky, excluding only the regions with low Galactic latitudes ($|b|<$~6$\degr$). Observations are carried out with a 30-min cadence, but selected stars, including {$\xi^1$\,CMa}, are observed with a shorter, 2-min cadence. The star was observed with TESS camera \#2 in Sector 6. The observations spanned 21.8~d between December 11, 2018, and January 7, 2019, and consisted of 15\,678 data points. In the subsequent analysis (Sect.~\ref{freqanalysis}) we used SAP fluxes and removed all data points with a quality flag different from 0.} \subsection{Frequency analysis}\label{freqanalysis} Fourier analysis with prewhitening was performed on the BRITE photometry using the {\sc Period04} package \citep{2005CoAst.146...53L}. We combined the BTr and BHr data and analyzed them as a single dataset. One significant frequency was detected at 4.771491(7)~d$^{\rm -1}$ (amplitude of 14.8~mmag, corresponding to a period of $0.2095781(3)$~d), along with its first two harmonics (with amplitudes of 2.0 and 0.6~mmag, respectively). The original and prewhitened (up to the second harmonic) Fourier amplitude spectra of the red BRITE data are illustrated in Fig.~\ref{fig1}. As is evident in Fig.~\ref{fig1}, the BRITE data reveal no evidence for new independent pulsation frequencies with amplitudes larger than about 0.17~mmag. Analysis of the blue BRITE data yields compatible results, but with higher uncertainties and upper limits (0.93~mmag). A similar analysis was performed on the SMEI data (Fig.~\ref{fig1}). The fundamental pulsation frequency 4.771517(2)~d$^{\rm -1}$ (amplitude of 11.7~mmag, corresponding to a period of $0.2095770(1)$~d) was clearly detected. No significant non-instrumental signal was detected after prewhitening with the fundamental frequency and its first two harmonics (amplitudes of 1.5~mmag and 0.9~mmag, respectively). However, the fundamental pulsation frequency determined using the SMEI data is somewhat higher than that derived using the BRITE data.
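Taken at face value, this small frequency offset already encodes a period drift. As a back-of-envelope sketch (in Python; the adopted mid-epochs of the SMEI and BRITE datasets, roughly 2006.5 and 2016.0, are our rough assumptions, not fitted values):

\begin{verbatim}
# Back-of-envelope rate of period change implied by the SMEI vs.
# BRITE frequencies quoted above; mid-epochs are assumed, not fitted.
f_smei, f_brite = 4.771517, 4.771491            # d^-1
dP_sec = (1.0/f_brite - 1.0/f_smei) * 86400.0   # period difference [s]
dt_cen = (2016.0 - 2006.5) / 100.0              # epoch difference [cen]
print(f"dP = {dP_sec:.3f} s -> Pdot ~ {dP_sec/dt_cen:.2f} s/cen")
# prints: dP = 0.099 s -> Pdot ~ 1.04 s/cen
\end{verbatim}

The resulting rate, of order 1~s/cen, is comparable to the recent rate of period change discussed below.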
This difference is consistent with the reported evolution of the pulsation period of $\xi^1$\,CMa, and will be discussed further below. The detection threshold of the SMEI data, taking into account uncertainties associated with the removal of long-term trends, is about {0.3 mmag}. {Finally, we analyzed TESS photometry of {$\xi^1$\,CMa}. With the exceptionally low detection threshold of about 0.03~mmag (Table \ref{tab:brite}) we were able to detect not only the dominant frequency (at {4.771483(6)}~d$^{\rm -1}$, with an amplitude of {12.8}~mmag), but also its four lowest harmonics. In addition (Fig.\,\ref{fig1b}), the frequency spectrum shows extra power below $\sim$5~d$^{-1}$. Several significant peaks with amplitudes below 0.12~mmag can be identified. They may correspond to $p$ and/or $g$ modes. The detailed analysis and possible seismic modeling with the use of these frequencies are, however, beyond the scope of this paper.} \begin{figure*} \centering \includegraphics[width=9cm]{PREWHITE/Br_orig_mmag2} \hspace{-0.5cm}\includegraphics[width=9cm]{PREWHITE/Br_prewhite3_weights} \includegraphics[width=9cm]{PREWHITE/Smei_orig_weights}\hspace{-0.5cm}\includegraphics[width=9cm]{PREWHITE/Smei_prewhite3_weights} \includegraphics[width=9cm]{PREWHITE/TESS_orig.pdf}\hspace{-0.5cm}\includegraphics[width=9cm]{PREWHITE/TESS_3.pdf} \caption{Fourier amplitude spectra of the BTr+BHr data (top row), SMEI data (middle row), and TESS data (bottom row), in mmag. {\em Top left:} BTr+BHr data. {\em Top right:} Spectrum of residuals following prewhitening with the fundamental frequency of 4.771491~d$^{-1}$ and its first two harmonics. {\em Middle left:} Orbit-averaged SMEI data. {\em Middle right:} Spectrum of residuals following prewhitening with the fundamental frequency of 4.771517~d$^{-1}$ and its first two harmonics. {\em Bottom left:} TESS data. {\em Bottom right:} Spectrum of residuals following prewhitening with the fundamental frequency of 4.771483(6)~d$^{-1}$ and its first four harmonics. In the BRITE and SMEI residuals, the peaks at 1 and 2~d$^{-1}$ are instrumental, as are peaks between 3 and 5~d$^{-1}$. } \label{fig1} \end{figure*} \begin{figure} \centering \includegraphics[width=9cm]{TESS_3_cut.pdf} \caption{Fourier amplitude spectrum of TESS Sector 6 photometry showing weak peaks at low frequencies.} \label{fig1b} \end{figure} \section{Spectropolarimetric observations and radial velocities}\label{specpol} In addition to the ESPaDOnS RVs published by \cite{2017MNRAS.471.2286S}, we have included new RV measurements obtained from follow-up ESPaDOnS observations in 2018 and 2019\footnote{Program codes 18AC19 and 19AC20.}. Eight spectropolarimetric sequences were obtained in 2018; the magnetic analysis of these data was described by \cite{2018MNRAS.478L..39S}. A further 19 sequences were obtained in 2019; the magnetic analysis will be presented by Erba et al.\ (in prep.). ESPaDOnS is an echelle spectropolarimeter with a high resolving power ($\lambda/\Delta\lambda \sim 65,000$), with a wavelength range of about 370 nm to 1000 nm, mounted at the Cassegrain focus of the 3.6 m Canada-France-Hawaii Telescope (CFHT). The instrument properties and data reduction were described in detail by \cite{2016MNRAS.456....2W}. Each spectropolarimetric sequence consists of four sub-exposures obtained at different polarimeter settings. The 2019 observations are essentially identical to the 2018 observations described by \cite{2018MNRAS.478L..39S}, with a mean peak signal-to-noise (S/N) per spectral pixel of about 400 in the individual intensity spectra.
In the case of {$\xi^1$\,CMa}, the exposure time per individual spectrum (72~s) is a much lower fraction of the pulsation period than the combined 8-minute exposure-plus-readout time of a full sequence (0.4 per cent vs.\ 2.7 per cent); therefore, individual spectra were used for RV measurements, yielding 32 measurements in 2018 and 76 measurements in 2019. RVs were measured from the weighted means of the centres-of-gravity across multiple unblended spectral lines, using the same method and line list described by \cite{2017MNRAS.471.2286S}. \section{Evolution of the pulsation period from 1906 to 2019} To investigate the behaviour of the pulsation period of $\xi^1$\,CMa\ we have constructed an {O\,$-$\,C} diagram using all available spectroscopic and photometric observations. Since both light and radial velocity can be described by a single periodicity, the times of maximum light (and radial velocity) were derived by fitting a function of the form \begin{equation} \sum\limits_{m=1}^N A_m\sin(2\pi mft + \phi_m), \label{eq:tsin} \end{equation} to the light or radial velocity time-series. In this equation, $f$ stands for the pulsation frequency, $t$ is the time elapsed from the initial epoch, while $A_m$ and $\phi_m$ are respectively semi-amplitudes and phases of the consecutive harmonic terms. Depending on the data, the fitted model included all detectable harmonics, up to $N=5$. The harmonics account for deviations from the sinusoidal shape of the light or radial velocity curve. The times of maximum summarized in {Tables \ref{tab:tmax-rvel} and \ref{tab:tmax-phot}} correspond to the maximum of the fit given by Eq.~(\ref{eq:tsin}), that is, including all detectable harmonics. In our analysis, all dates are given as HJD at the mid-time of exposures. \begin{table*} \centering \caption{Times of maximum radial velocity for {$\xi^1$\,CMa}.
{Columns give HJD of maximum {RV}, the number of cycles before/since the reference ephemeris, the inferred O-C, the source of the data, the number of observations, and any notes or comments.}} \label{tab:tmax-rvel} \begin{tabular}{lrccrl} \hline \multicolumn{1}{c}{$T_{\rm max}-$}& $E$ & (O$-$C) & \multicolumn{1}{c}{Source of} & $N_{\rm obs}$ & Notes, comments\\ \multicolumn{1}{c}{HJD\,2\,400\,000}&&[d] &\multicolumn{1}{c}{data}& \\ \hline 17221.336(10)& $-$114874 & $+$0.06029 & \cite{1926ApJ....64....1F} & 5 & 1905 -- 1906\\ 19523.709(10)& $-$103888 & $+$0.03734 & \cite{1928PLicO..16....1C} & 5 & 1909 -- 1913\\ 22697.054(10)& $-$88746 & $-$0.01448 & \cite{1921PDO.....5...45H} & 7 & best of 1921 only\\ 34439.0916(11)& $-$32718 & $-$0.06859 & \cite{1955ApJ...122...95M} & 45 & 1952 -- 1953\\ 34816.1163(22)& $-$30919 & $-$0.07022 & \cite{1956PASP...68..263M} & 18 & 1954\\ 51624.74742(39)& 49284 & $-$0.02292 & \cite{2017MNRAS.471.2286S} & 51 & Feb/Apr 2000\\ 51888.60434(22)& 50543 & $-$0.02156 & \cite{2017MNRAS.471.2286S} & 52 & Dec 2000\\ 51949.17159(23)& 50832 & $-$0.02163 & \cite{2017MNRAS.471.2286S} & 51 & Feb 2001\\ 52233.98574(24)& 52191 & $-$0.02058 & \cite{2017MNRAS.471.2286S} & 58 & Nov 2001\\ 52580.41476(73)& 53844 & $-$0.01986 & \cite{2017MNRAS.471.2286S} & 10 & Oct/Nov 2002\\ 52989.29838(24)& 55795 & $-$0.01804 & \cite{2017MNRAS.471.2286S} & 71 & Dec 2003\\ 53078.99670(25)& 56223 & $-$0.01804 & \cite{2017MNRAS.471.2286S} & 60 & Mar 2004\\ 53278.30372(30)& 57174 & $-$0.01732 & \cite{2017MNRAS.471.2286S} & 39 & Sep/Oct 2004\\ 55180.42624(24)& 66250 & $-$0.00204 & \cite{2017MNRAS.471.2286S} & 79 & Jan 2008 -- Dec 2010\\ 56308.58266(25)& 71633 & $-$0.00947 & \cite{2017MNRAS.471.2286S} & 56 & Feb 2012 -- Jan 2014\\ 57805.59816(24)& 78776 & $+$0.02717 & \cite{2017MNRAS.471.2286S} & 85 & Feb/Mar 2017\\ {58560.28537(11)}& {82377} & {$+$0.03301} & {This paper} & {77} & {Mar 2019}\\ \hline\hline \end{tabular} \end{table*} \subsection{Radial velocity data} The radial velocity data consist of a rich, high-quality data set of spectroscopic measurements obtained in the years {2000\,--\,2019} and five archival data sets, which extend the study of period changes to over a century. The 2000\,--\,2016 spectroscopy was used by \citet{2017MNRAS.471.2286S} and \citet{2018pas8.conf..154B} to conclude that the pulsation period of {$\xi^1$\,CMa} changes at a constant rate of $+$0.9 $\pm$ 0.1\,s/cen. The archival data include radial velocities published by \cite{1926ApJ....64....1F} (these are the corrected discovery data of \cite{1907ApJ....25R..59F} plus one additional spectrum), \cite{1921PDO.....5...45H}, \cite{1928PLicO..16....1C}, and \cite{1955ApJ...122...95M,1956PASP...68..263M}. All data are available and were used to derive the times of maximum presented in Table \ref{tab:tmax-rvel}. Heliocentric corrections were applied to all data for which the reported time was given in Julian Days. The 2000\,--\,2016 spectroscopy was split into 11 subsets, usually corresponding to a single observing season. When the number of observations in a season was small, data from adjacent seasons were combined. \subsection{Photometric data} The archival photometry of {$\xi^1$\,CMa} includes ground-based observations of \cite{1954PASP...66..200W}, \cite{1962ZA.....56..141V}, \cite{1971ApJ...170..345W}, \cite{1973MNRAS.162...25S}, and \cite{1992AAS...96..207H}. In addition, we used Str\"omgren $uy$ photometry obtained at Fairborn Observatory in 2018 by one of us (G.H.).
Surprisingly, the star was frequently observed from space. The data sets we used include ultraviolet (UV) photometry from the TD-1A \citep{1977AA....61..815B,1980AAS...39..301B} and ANS \citep{1979AA....79..115L} satellites and the optical-domain data from Hipparcos, BRITE (Sect.~\ref{brite}), SMEI (Sect.~\ref{smei}), {and TESS (Sect.~\ref{tess}).} \subsubsection{Effective wavelengths} In the presence of phase lags between the times of maximum in different photometric bands (Sect.\,\ref{plags}), it became necessary to derive effective wavelengths for the passbands used in the observations of {$\xi^1$\,CMa}. They were defined with the following expression: \begin{equation} \lambda_{\rm eff} = \frac{\int\limits_{\lambda_1}^{\lambda_2} \lambda S(\lambda) T_1(\lambda) T_2(\lambda)T_3(\lambda)d\lambda}{\int\limits_{\lambda_1}^{\lambda_2} S(\lambda) T_1(\lambda) T_2(\lambda)T_3(\lambda)d\lambda}, \end{equation} where $S(\lambda)$ represents a model spectrum with $T_{\rm eff}= 27500$\,K, $\log g=3.75$ taken from the OSTAR2002 grid of models \citep{2003ApJS..146..417L}. The model parameters are close to the values derived for {$\xi^1$\,CMa} by \cite{2017MNRAS.471.2286S}. The variables $T_1(\lambda)$, $T_2(\lambda)$, and $T_3(\lambda)$ (all included optionally) are filter transmission curves, detector sensitivity curves, and (for ground-based observations) the atmosphere transmission curves. Values of $\lambda_1$ and $\lambda_2$ were chosen to encompass the non-zero values of the sensitivity and transmission curves. Details concerning $T_1$, $T_2$, and $T_3$ are as follows: \begin{itemize} \item \cite{1973MNRAS.162...25S}: $T_1$ was Johnson $V$ (the author used a Corning 3384 filter and an EMI\,8094S photomultiplier with an S-11 photocathode, similar to that of the 1P21 photomultiplier used to define the $V$ band), that is, a combination of filter transmission and detector sensitivity, taken from ADPS\footnote{http://ulisse.pd.astro.it/Astro/ADPS/} \citep{2000A&AS..147..361M}. For $T_2$ we took the extinction curve (twice as large as in La Silla; see Geneva photometry below). The same combination of $T_1$ and $T_2$ was adopted for the data published by \cite{1954PASP...66..200W} and \cite{1962ZA.....56..141V}. \item \cite{1971ApJ...170..345W}: $T_1$ was the Newell $v$ band transmission taken from ADPS; $T_2$ was the same as for \cite{1973MNRAS.162...25S}. \item TD-1A: $T_1$: Passband centres and effective widths were taken from the HEASARC web page\footnote{https://heasarc.gsfc.nasa.gov/W3Browse/all/td1.html}; shapes were approximated by an $\exp(-(\Delta\lambda/\sigma)^4)$ function. $T_2$, corresponding mainly to the detector response, was estimated for each TD-1A passband from fig.~8 of \cite{1973MNRAS.163..291B}. \item ANS: $T_1$: Instrument response was taken from \cite{1975A&A....39..159V}. \item Geneva: Filter transmission curves $T_1$ were taken from ADPS; $T_2$ was the S-11 photocathode QE curve\footnote{http://www.r-type.org/pdfs/9531.pdf}. As $T_3$, the La Silla extinction coefficient dependence was taken\footnote{https://www.eso.org/sci/observing/tools/Extinction.html}. \item Hipparcos: $T_1$: The passband as defined by \cite{2000PASP..112..961B} was used. \item SMEI: For $T_1$ we adopted the typical E2V Technologies standard front-illuminated CCD sensitivity curve\footnote{https://www.e2v.com/content/uploads/2017/08/ccdtn101.pdf} because it seems to be similar to the description of the E2V CCD05-30-231A chip, given by \cite{2003SoPh..217..319E}.
\item BRITE: $T_1$: The BRITE filter transmission curves from \cite{2014PASP..126..573W}. $T_2$: Kodak KAI-11002 sensitivity curve from the product sheet\footnote{http://www.onsemi.com/pub/Collateral/KAI-11002-D.PDF}. \item Fairborn Observatory ground-based observations in Str\"omgren $u$ and $y$ filters. For $T_1$ the transmission curves for the Str{\"o}mgren $u$ and $y$ filters from ADPS were taken. For $T_2$ and $T_3$ the QE curve from \cite{1997PASP..109..697S} for the Thorn-EMI 9124QB photomultiplier and the La Silla extinction were used, respectively. \item {TESS: The TESS curve, including both the sensitivity of the detector \citep{2015JATIS...1a4003R} and the filter transmission curve, has been taken from NASA's High Energy Astrophysics Science Archive Research Center web page\footnote{https://heasarc.gsfc.nasa.gov/docs/tess/data/tess-response-function-v1.0.csv}.} \end{itemize} The calculated effective wavelengths are given in Table \ref{tab:tmax-phot}. \begin{table*} \centering \caption{\label{tab:tmax-phot}Times of maximum light and related information for {$\xi^1$\,CMa}. Columns give HJD of maximum light, correction to the HJD of maximum light in the visual domain {(Eq.~\ref{eq-corr})}, the number of cycles before/since the reference ephemeris, the inferred O-C, the passband, the effective wavelength, the source of the data, the number of observations, and any notes or comments.} \setlength\tabcolsep{3pt} \begin{tabular}{lcrccclrl} \hline \multicolumn{1}{c}{$T_{\rm max}^{\rm obs}-$}& $C_{\rm Vis}$ &$E$ & (O$-$C) & Passband(s) & $\lambda_{\rm eff}$ & \multicolumn{1}{c}{Source of} & $N_{\rm obs}$ & Notes, comments\\ \multicolumn{1}{c}{HJD\,2\,400\,000}&[d]&&[d] &&[nm]&\multicolumn{1}{c}{data}& \\ \hline 34719.3697(38)&$-$0.0019& $-$31381 & +0.0052 & yellow & 547 & \cite{1954PASP...66..200W} & 99 & data read off the figures\\ 37658.0342(15) &$-$0.0019& $-$17359 & $+$0.0020 & $Y$ & 547 & \cite{1962ZA.....56..141V} & unkn.
& combined 13 $T_{\rm max}$\\ 40562.9555(13) &$-$0.0013& $-$3498 & $-$0.0021 & Newell $v$& 533& \cite{1971ApJ...170..345W} &unkn.& from \cite{1973MNRAS.162...25S}\\ 41296.0514(6)&$-$0.0019& 0 & $-$0.0019 & yellow & 547 & \cite{1973MNRAS.162...25S} & 562 & original $T_{\rm max}$\\ {41406.911(2)} &{---}& {529} & {$-$0.0058} & {155\,--\,275} & {214} & \cite{1977AA....61..815B} & {8} & {TD-1A}\\ 42867.8747(40) &---& 7500 & $+$0.0071 & 155, 180, 220 & 184 & \cite{1979AA....79..115L} & 25 & ANS\\ 42867.8727(40) &---& 7500 & $+$0.0051 & 250, 330 & 289 & \cite{1979AA....79..115L} & 25 & ANS\\ 47504.3265(10)&$+$0.0061& 29623 & $+$0.0262 & Geneva $U$ & 348 & \cite{1992AAS...96..207H} & 203 & \\ 47504.3292(11)&$+$0.0040& 29623 & $+$0.0268 & Geneva $B_1$ & 401 & \cite{1992AAS...96..207H} & 203 & \\ 47504.3300(10)&$+$0.0032& 29623 & $+$0.0268 & Geneva $B$ & 419 & \cite{1992AAS...96..207H} & 203 & \\ 47504.3305(12)&$+$0.0022& 29623 & $+$0.0263 & Geneva $B_2$ & 446 & \cite{1992AAS...96..207H} & 202 & \\ 47504.3342(17)&$-$0.0013& 29623 & $+$0.0265 & Geneva $V_1$ & 534 & \cite{1992AAS...96..207H} & 204 & \\ 47504.3326(19)&$-$0.0014& 29623 & $+$0.0248 & Geneva $V$ & 535 & \cite{1992AAS...96..207H} & 202 & \\ 47504.3359(18)&$-$0.0028& 29623 & $+$0.0267 & Geneva $G$ & 570 & \cite{1992AAS...96..207H} & 200 & \\ 48380.3596(13)&$+$0.0004& 33803 & $+$0.0280 & $H_{\rm p}$ & 490 & Hipparcos & 216 & \\ 52992.3066(7)&$-$0.0046& 55809 & $+$0.0515 & SMEI & 615 & SMEI (UCSD) & 5116 & SMEI data, part 1\\ 53592.1155(6)&$-$0.0046& 58671 & $+$0.0553 & SMEI & 615 & SMEI (UCSD) & 5116 & SMEI data, part 2\\ 54123.6040(8)&$-$0.0046& 61207 & $+$0.0604 & SMEI & 615 & SMEI (UCSD) & 5116 & SMEI data, part 3\\ 54656.5586(7)&$-$0.0046& 63750 & $+$0.0645 & SMEI & 615 & SMEI (UCSD) & 5116 & SMEI data, part 4\\ 55252.1769(7)&$-$0.0046& 66592 & $+$0.0692 & SMEI & 615 & SMEI (UCSD) & 5117 & SMEI data, part 5\\ 57367.6476(3)&$+$0.0030& 76686 & $+$0.0924 & BRITE blue & 424 & this paper & 31255 & BLb \\ 57384.8364(11)&$-$0.0019& 76768 & $+$0.0911 &Str{\"o}mgren $y$& 546 &this paper&494& APT (Fairborn)\\ 57385.0417(5)&$+$0.0060& 76769 & $+$0.0947 &Str{\"o}mgren $u$& 345 &this paper&501& APT (Fairborn)\\ 57401.60040(10)& $-$0.00415& 76848 & $+$0.08683 & BRITE red & 604 & this paper & 120857 & BHr + BTr \\ {58479.250031(7)}& {---} & {81990} & {$+$0.103386} & {TESS} & {733} & {TESS} & {14812} & {Sector 6, camera \#2}\\ \hline\hline \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig-phase-lag-eps-converted-to.pdf} \caption{The times of maximum light derived from the Geneva photometry \citep{1992AAS...96..207H} in the visual domain (red points). The linear coefficient of the fitted line corresponds to $A_{\rm Vis}= (+$3.99\,$\pm$\,0.33)\,$\times$\,10$^{-5}$~d\,(nm)$^{-1}$. {The dashed lines represent the theoretical dependences for the fundamental radial mode in two BG models with $M=14$\,$M_\odot$, $T_{\rm eff}=27$\,kK, $\log g = 3.74$, and $\xi=2$\,{km\,s$^{-1}$} (blue) and 10\,{km\,s$^{-1}$} (green); see Sect.\,\ref{seisminf} for explanation.}} \label{fig:pl} \end{figure} \subsubsection{Correction for the phase lag between photometric bands}\label{plags} The radial velocities of {$\xi^1$\,CMa} were derived from the optical spectra and the analysis rarely included hydrogen lines. 
Therefore, systematic effects related to the velocity gradient in the atmosphere and non-adiabaticity, resulting in a phase lag between hydrogen and other lines known as the van Hoof effect \citep{1953PASP...65..158V,1991A&A...252..245M}, are probably negligible in the case of {$\xi^1$\,CMa}. On the other hand, the lack of phase lag cannot be assumed for photometric data, especially for a radially pulsating star with a large amplitude such as {$\xi^1$\,CMa}. There are two multicolour data sets which potentially enable us to check whether phase lags are present in {$\xi^1$\,CMa}. The first is the UV TD-1A photometry of {$\xi^1$\,CMa} in four bands published by \cite{1980AAS...39..301B}; the other is the seven-band Geneva photometry of \cite{1992AAS...96..207H}\footnote{Kindly provided by Gerald Handler.}. In both data sets, the observations in all passbands are simultaneous. The effective wavelengths of these 11 passbands range between 151 and 570~nm. The Geneva photometry covers a wider wavelength range and is of better quality than the very scarce (8 data points only) TD-1A time-series. Due to this scarcity, the latter data set cannot be used to conclusively discuss the phase lags in the UV. The situation is much better for the Geneva data. The derived times of maximum light obtained from the Geneva data are plotted against the effective wavelengths of the passbands in Fig.\,\ref{fig:pl}. There is a clear dependence of the time of maximum light on $\lambda_{\rm eff}$ --- the longer the wavelength, the later the time of maximum occurs. A least-squares fit gives the rate of the time-of-maximum lag $A_{\rm Vis}= (+$3.99\,$\pm$\,0.33)\,$\times$\,10$^{-5}$~d\,(nm)$^{-1}$. The derived value of $A_{\rm Vis}$ translates into a difference in the time of maximum light equal to 0.0090~d $=$ 13~min, or about 4 per cent of the pulsation period, over the full range of the effective wavelengths covered by the Geneva filters (348\,--\,570~nm). Because simultaneous UV and visual data for {$\xi^1$\,CMa} do not exist, the phase-lag corrections cannot be applied to the UV data. Consequently, we do not use the UV data in the fits shown in Fig.\,\ref{fig:o-c} (although they are shown for reference). {Similarly, the phase lag is not extrapolated and the time of maximum not corrected for the TESS passband.} The effect of phase (or times-of-maximum) lag has to be taken into account to properly use photometric data obtained in different passbands in the {O\,$-$\,C} diagram. These corrections to the times of maximum light in the visual domain, $C_{\rm Vis} \text{[d]} = A_{\rm Vis}(500 - \lambda_{\rm eff})$, where $\lambda_{\rm eff}$ is in nm, were added to the observed times of maximum light, $T_{\rm max}^{\rm obs}$: \begin{equation}\label{eq-corr} T_{\rm max}^{\rm corr} = T_{\rm max}^{\rm obs} + C_{\rm Vis} \end{equation} before the {O\,$-$\,C} values were calculated. Both $T_{\rm max}^{\rm obs}$ and $C_{\rm Vis}$ are reported in Table\,\ref{tab:tmax-phot}. The values of $T_{\rm max}^{\rm corr}$ were subsequently used to calculate the values of {O\,$-$\,C} according to the ephemeris, taken from \cite{1973MNRAS.162...25S}: \begin{equation} T_{\rm max} = \mbox{HJD} 2441296.0514 + 0.2095755 \times E, \end{equation} where $E$ is the number of periods elapsed from the initial epoch. The same ephemeris was used for radial velocity times of maximum (Table \ref{tab:tmax-rvel}). Whenever applicable and possible, the uncertainties of $T_{\rm max}$ were derived by means of the bootstrap method.
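To make the times-of-maximum machinery concrete, the following minimal sketch (Python with NumPy; synthetic data, and not the actual code used in this work) fits Eq.~(\ref{eq:tsin}) by linear least squares, locates the maximum of the fitted curve, and estimates its uncertainty by bootstrapping the residuals:

\begin{verbatim}
import numpy as np

# Sketch of the times-of-maximum determination (Eq. 1) with a bootstrap
# uncertainty estimate; synthetic data, illustrative only.
rng = np.random.default_rng(1)
f = 4.771491                         # pulsation frequency [1/d]
t = np.sort(rng.uniform(0.0, 5.0, 300))
y = 15.0*np.sin(2*np.pi*f*t + 1.0) + 2.0*np.sin(4*np.pi*f*t + 0.5) \
    + rng.normal(0.0, 1.0, t.size)   # mmag, with noise

def fit_tmax(t, y, f, nharm=2):
    """Least-squares fit of sum_m A_m sin(2 pi m f t + phi_m) (via sine
    and cosine terms); returns the epoch of maximum of the fitted curve
    within the first pulsation cycle, and the fit at the data points."""
    cols = [np.ones_like(t)]
    for m in range(1, nharm + 1):
        cols += [np.sin(2*np.pi*m*f*t), np.cos(2*np.pi*m*f*t)]
    A = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    tt = np.linspace(0.0, 1.0/f, 10000)          # one cycle, fine grid
    model = c[0] + sum(c[2*m-1]*np.sin(2*np.pi*m*f*tt)
                       + c[2*m]*np.cos(2*np.pi*m*f*tt)
                       for m in range(1, nharm + 1))
    return tt[np.argmax(model)], A @ c

tmax, yfit = fit_tmax(t, y, f)

# Bootstrap: resample residuals, refit, take the scatter of tmax
res = y - yfit
boot = [fit_tmax(t, yfit + rng.choice(res, res.size, replace=True), f)[0]
        for _ in range(500)]
print(f"T_max = {tmax:.5f} +/- {np.std(boot):.5f} d (first cycle)")
\end{verbatim}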
For samples with small numbers of data points (ANS photometry and the oldest radial velocities), the uncertainties were inferred from the least-squares variance and multiplied by 4. This factor was taken from the comparison of uncertainties derived from least-squares and bootstrapping errors for slightly more numerous but still small samples of data. For the data published by \cite{1962ZA.....56..141V}, the published values of $T_{\rm max}$ were averaged after transferring to the same (mean) epoch, and the uncertainty was estimated as a standard deviation. Finally, the published uncertainties of $T_{\rm max}$ (when no data were available) were multiplied by two as a conservative choice, because usually no details of their derivation were given. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Fig-O-C-eps-converted-to.pdf} \caption{Top: {O\,$-$\,C} diagram for the light (green dots) and radial velocity (blue dots) times of maximum given in Tables \ref{tab:tmax-rvel} and \ref{tab:tmax-phot}. A constant shift of $+$0.070~d was applied to the times of maximum radial velocity. Two violet dots correspond to $T_{\rm max}$ derived from TD-1A and ANS UV observations, {while the red dot corresponds to $T_{\rm max}$ derived from the TESS data. These three} values of O\,--\,C are not considered in the fits. Four of the parabolas are fits with different weighting schemes: weights proportional to $\sigma^{-2}$ (red continuous line), $\sigma^{-1}$ (red long-dashed line), $\sigma^{-1/2}$ (red short-dashed line) and with equal weights (black line). The blue line shows the fit to the recent {(2000\,--\,2019)} RV data only ({$\dot{P}=$ 0.97\,$\pm$\,0.13~s/cen} with equal weights). Bottom: residuals from the fit with equal weights. } \label{fig:o-c} \end{figure*} As a by-product of the determination of the times of maximum light, we obtained amplitudes of the radial mode. The amplitudes are shown in Fig.\,\ref{fig:ampl} as a function of $\lambda_{\rm eff}$. A strong increase of amplitude towards short wavelengths, typical for radial modes in $\beta$~Cep stars, can be seen. In addition, a small amplitude change could have taken place in {$\xi^1$\,CMa}. For example, the amplitudes in the two BRITE bands are about 25\% larger than those derived from the Geneva data. The two data sets are separated by almost three decades, however. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig-amplitudes-eps-converted-to.pdf} \caption{{Full amplitudes} of the radial mode of {$\xi^1$\,CMa}. The labels indicate either space mission (TD-1, ANS, Hipparcos, BRITE, SMEI, {TESS}) or photometric system. The observations can be identified in Table \ref{tab:tmax-phot}.} \label{fig:ampl} \end{figure} \subsection{Correction for the phase lag between light and radial velocity changes}\label{RVcorr} The {lag} between light and radial velocity data was derived by a trial-and-error procedure using photometry from SMEI and BRITE and the recent radial velocity measurements. The phase lag is equal to $+$0.070\,{$\pm$\,0.001~d}, corresponding to 0.334\,{$\pm$\,0.005} in phase. This value is very different from the typical value of 1/4, corresponding to maximum light at the epoch of minimum radius, that is observed in other $\beta$~Cep stars. Even for the two high-amplitude $\beta$~Cep stars, BW~Vul and $\sigma$~Sco, the phase shifts amount to 0.249 \citep{1993A&A...274..269P} and 0.265 \citep{1992A&A...261..203P}, respectively; that is, much less than in {$\xi^1$\,CMa}.
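For reference, a single {O\,$-$\,C} point follows from a corrected time of maximum via Eq.~(\ref{eq-corr}) and the ephemeris above; a schematic sketch (Python; not the actual code used here, and the $+0.070$~d shift for RV maxima is the light--RV lag just described, applied as in Fig.~\ref{fig:o-c}):

\begin{verbatim}
# Schematic (O-C) computation for one time of maximum, using the
# Shobbrook (1973) ephemeris and the C_Vis phase-lag correction.
T0, P = 2441296.0514, 0.2095755      # ephemeris [HJD, d]
A_vis = 3.99e-5                      # d/nm, slope from the Geneva data

def o_minus_c(tmax_obs, lam_eff=None, rv=False):
    t = tmax_obs + (0.070 if rv else 0.0)    # RV maxima -> light scale
    if lam_eff is not None:
        t += A_vis * (500.0 - lam_eff)       # C_Vis correction (Eq. 3)
    E = round((t - T0) / P)                  # valid while |O-C| < P/2
    return E, t - (T0 + E * P)

print(o_minus_c(2457401.60040, lam_eff=604.0))
# -> (76848, ~+0.0868), matching the BRITE red row of
#    the times-of-maximum table
\end{verbatim}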
{Nonlinear pulsational calculations indicate that BW Vul is almost certainly a fundamental radial pulsator \citep{1994IAUS..162...19M}. In this case, the observed standstill in the light curve is caused by an emerging shock wave which originates at the bottom of the He\,{\sc ii} ionization zone. The first overtone mode is stable.} \subsection{Seismic inference from phase lags}\label{seisminf} The phase lag between light and radial-velocity curves, as well as the dependence of the phase of maximum light on wavelength (Fig.\,\ref{fig:pl}), can possibly be used to constrain stellar parameters or verify mode identification for the dominant mode. The 4.77~d$^{\rm -1}$ frequency has already been identified as a radial mode by \cite{1994A&AS..105..447H}, \cite{1994A&A...291..143C}, and \cite{2006CoAst.147..109S} using both photometry and spectroscopy. The strong dependence of amplitude on wavelength (Fig.\,\ref{fig:ampl}) provides a clear indication that this is the case. However, it was not clear if the mode was fundamental or an overtone, {although \cite{2017MNRAS.471.2286S} showed that the stellar parameters they derived are consistent with the fundamental mode.} We therefore checked if seismic models are able to reproduce the two key observed characteristics: the large phase lag between light and radial-velocity curves, and the phase dependence on wavelength. For this purpose, we calculated a grid of models for stars with stellar parameters close to those provided by \cite{2017MNRAS.471.2286S}. The models were calculated in the same way as described by \cite{2014A&A...565A..76C}. We used {OPAL} opacities, hydrogen mass abundances {between $X = 0.6$ and 0.8, and two metallicities, $Z = 0.0134$ and 0.0168. } Pseudo-rotating, spherically symmetric models (following \citealt{1998A&A...334..911S}) were built assuming rigid rotation with constant total angular momentum during the main sequence evolution. The models were also used to calculate amplitude ratios and phase differences for the Str\"{o}mgren, Geneva and BRITE photometric systems following the procedure presented by \cite{1994A&A...291..143C}. {We used LTE models calculated by \cite{2003IAUS..210P.A20C} (hereafter referred to as CK models) and non-LTE models calculated by \cite{2007ApJS..169...83L} using the TLUSTY code \citep{1988CoPhC..52..103H,2011ascl.soft09021H} for a microturbulent velocity of $\xi = 2$\,{km\,s$^{-1}$} (BG models). The latter grid of non-LTE models has been extended for $\log g > 3.0$ assuming $\xi = 10$\,{km\,s$^{-1}$} following the procedure described by \cite{2012A&A...547A..42C}.} {The first conclusion that can be drawn from these calculations is that} the frequency of the mode for models with masses between 14.0 and 14.5\,M$_\odot$ and $T_{\rm eff}=27$\,kK, consistent with parameters of {$\xi^1$\,CMa} provided by \cite{2017MNRAS.471.2286S}, can be reproduced only if the radial mode is fundamental. A first overtone {can be excluded because: (i) It} would require a much smaller $T_{\rm eff}\approx 24.5$\,kK, which is compatible with neither the spectrum nor the colours of {$\xi^1$\,CMa}. {(ii) It is stable in the models. (iii) The theoretical phase lag between light and RV curve is smaller than 0.25 for all models in the considered range of masses, in contrast with the observed value. (iv) As already concluded by \cite{2017MNRAS.471.2286S}, the stellar parameters they derived are consistent with the pulsation constant corresponding to the fundamental radial mode.
For an overtone, the mass and luminosity inferred from the pulsation constant would be much too high. The second and higher overtones can be excluded for the same reasons. Therefore, we conclude that the observed variation corresponds to the fundamental radial mode.} {As can be seen in Fig.\,\ref{fig:pl}, the wavelength dependence of the photometric phase lags is well reproduced by non-rotating BG models with $M=14$\,$M_\odot$, $T_{\rm eff} = 27$\,kK, $X=0.7042$, $Z=0.0162$, and $\log g = 3.74$, provided the mode is assumed to be the radial fundamental. The same models predict phase lags between light and RV curves in the range 0.38\,--\,0.39, slightly too high in comparison with the observed value. For the non-rotating CK models with $X=0.7374$ and solar metallicity ($Z=0.0134$), the phase lag equals 0.35\,--\,0.37. Although still slightly higher than the observed value of 0.334\,$\pm$\,0.005, this value can be regarded as fairly consistent with the observations given the non-sinusoidality of the light curve, which is not reproduced by the models.} \subsection{The resulting {O\,$-$\,C} diagram} The {O\,$-$\,C} diagram, which uses both photometry and spectroscopy, is shown in Fig.\,\ref{fig:o-c}. Given the uncertainties of the times of maximum light and radial velocity, the changes of pulsation period cannot be perfectly approximated by a simple parabola corresponding to $\dot{P} =$~const, although such a model is relatively good as a first approximation. The residuals shown in the lower panel of Fig.\,\ref{fig:o-c} are much larger than the associated uncertainties (if they cannot be seen, they are smaller than the size of the symbols). Due to the inadequacy of the fitted model and the large range of the uncertainties, the typical weighting scheme (weights $\propto \sigma^{-2}$) is not the best choice in this case, as it leads to large residuals. In total, four different weighting schemes were tried and the results are shown in Fig.~\ref{fig:o-c}: \begin{enumerate} \item weights $\propto\sigma^{-2}$, $\dot{P}= 0.603\pm 0.090$~s/cen. \item weights $\propto\sigma^{-1}$, $\dot{P}=$ 0.325\,$\pm$\,0.024~s/cen. \item weights $\propto\sigma^{-1/2}$, $\dot{P}=$ 0.338\,$\pm$\,0.012~s/cen. \item equal weights, $\dot{P}=$ 0.358\,$\pm$\,0.008~s/cen. \end{enumerate} The values are consistent with 0.37 $\pm$ 0.05 s/cen reported by \citet{1999NewAR..43..455J} and derived {by \cite{1992PhDT-Pigulski}}. As mentioned above, the residuals from the best-fit parabola exhibit scatter that is larger than the uncertainties. In particular, the dense photometric and spectroscopic sampling since the year $\sim 2000$ is clearly incompatible with the long-term trend, and suggests more complex and rapid period variations. This is confirmed when we attempt to phase these recent measurements using $\dot P=0.3$~s/cen: the measurements are not coherently phased, with clear phase offsets between the datasets. As demonstrated by \citet{2017MNRAS.471.2286S} and \citet{2018pas8.conf..154B}, a much larger $\dot P\sim 0.9$~s/cen rate of period change is needed in order to reconcile them. A period search of the residuals shown in the bottom panel of Fig.~\ref{fig:o-c} yields weak evidence for a period around 40\,yr. \section{Interpretation} Examination of the O\,$-$\,C\ diagram shows that there are two phenomena to be explained: the longer-term increase of the pulsation period at a rate of $\sim$0.3~s/cen, and the more rapid variations detected by \citet{2017MNRAS.471.2286S}.
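Before turning to interpretation, we note how the fitted parabolas translate into $\dot{P}$: with $T_{\rm max}(E) = T_0 + PE + \frac{1}{2}P\dot{P}E^2$, the quadratic coefficient $c$ of the {O\,$-$\,C} parabola (in days per cycle$^2$) gives $\dot{P} = 2c/P$. A schematic weighted fit (Python with NumPy; synthetic {O\,$-$\,C} values, illustrative only, not our actual fitting code):

\begin{verbatim}
import numpy as np

# Weighted parabola fit to (O-C); the quadratic coefficient c2
# (days per cycle^2) gives Pdot = 2*c2/P, converted to s/century.
P = 0.2095755                                  # d
rng = np.random.default_rng(2)

# synthetic (O-C) data with an assumed Pdot = 0.35 s/cen
pdot_true = 0.35 / (86400.0 * 36525.0)         # dimensionless (s/s)
E = np.linspace(-115000, 82000, 40)            # cycle numbers
sig = rng.uniform(0.0005, 0.01, E.size)        # uncertainties [d]
oc = 0.5 * P * pdot_true * E**2 + rng.normal(0.0, sig)

# np.polyfit's w multiplies residuals, so chi^2 weights sigma^-p
# correspond to w = sigma^(-p/2)
for p, label in [(2, "sigma^-2"), (1, "sigma^-1"),
                 (0.5, "sigma^-1/2"), (0, "equal")]:
    c2, c1, c0 = np.polyfit(E, oc, 2, w=sig**(-p / 2.0))
    pdot = 2.0 * c2 / P * 86400.0 * 36525.0    # -> s/century
    print(f"weights ~ {label:10s}: Pdot = {pdot:.3f} s/cen")
\end{verbatim}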
Because the more rapid changes are diagnosed principally by the modern data, it is unclear if they are a recent phenomenon, or if they existed all along and were only revealed by the recent high-precision observations (although the significant scatter of much of the modern data on the O\,$-$\,C\ diagram suggests the latter). In this Section we examine the potential contributions of various phenomena (binarity, stellar evolution, additional (undetected) pulsation modes, and stellar rotation/magnetism) to the long- and short-term evolution of the (apparent) pulsation period. \subsection{Binarity} The light-time effect in a binary system is well known to result in apparent changes of period of orbiting pulsating stars \citep[e.g.][]{1992A&A...261..203P}. The change in pulsation period is given by: \begin{equation} \Delta P=P\, \Delta V_{\rm r}/c \end{equation} where $\Delta P$ is the predicted change of pulsation period $P$ due to light-time effects associated with a radial velocity variation $\Delta V_{\rm r}$, and $c$ is the speed of light. \citet{2017MNRAS.471.2286S} searched using PIONIER in particular for a Be star companion to $\xi^1$\,CMa. They were able to rule out any companion brighter than 1.7\% of $\xi^1$\,CMa's flux (in the $HK$ bands) beyond 40 AU, with a similar upper limit derived from the standard deviation of the RVs, within about 40 AU. {$\xi^1$\,CMa} is reported in the {Washington Double Star Catalogue} (WDS) to have a companion ($V=14$~mag) located at 28\arcsec\ from the primary. At the Hipparcos distance of {$\xi^1$\,CMa} (424 pc\footnote{Gaia DR2 gives $\pi=4.984\pm 0.346$~mas. This is completely incompatible with the Hipparcos parallax used by \citet{2017MNRAS.471.2286S}. It is also less precise. At the corresponding distance of about $200 \pm 14$ pc, the star would have a luminosity of $\log{L} = 3.83 \pm 0.06$, which at its effective temperature would place it on the Zero-Age Main Sequence, an age at which it would be unlikely to be a $\beta$ Cep pulsator. {As Gaia parallaxes of bright stars ($V\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 6$) should, for the time being, be considered with caution \citep{2018A&A...616A...2L}, we adopt the Hipparcos parallax.}}), one arcsec is {424 AU, so this separation would correspond to nearly 12000 AU}. The flux difference is nearly 10 mag in the $V$ band, so we estimate that the companion (assuming it is located at the same distance) would be a K dwarf with a mass of $\sim$0.7\,$M_\odot$. The resultant periods are far too long to explain the observed O\,$-$\,C\ diagram. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{xi1cma_rv_zeropoint.pdf} \caption{{\em Top:} CORALIE and ESPaDOnS RVs phased with the pulsation period determined from the first epoch of CORALIE data. Colours indicate {time bins}. Dashed lines show 3$^{\rm rd}$-order harmonic fits to the CORALIE data, shifted to minimize the standard deviation of the residuals. {\em Middle:} phase shift $\Delta\phi$ of the RV curve in each epoch. {\em Bottom:} systemic velocity $v_{\rm sys}$ determined from the mean residual RV after subtraction of the phase-shifted curves in the top panel. 
{The solid line shows the mean $v_{\rm sys}$, the dotted lines the standard deviations.} } \label{rv_zero} \end{figure} To attempt to measure the systemic radial velocity $v_{\rm sys}$ of $\xi^1$\,CMa, we fit the RVs from the first year of CORALIE data described by \citet{2017MNRAS.471.2286S} with a 3$^{\rm rd}$-harmonic fit, shifted this curve in phase in each successive {two-year bin} in order to minimize the standard deviation of the residuals, and then calculated the mean residual RV after subtraction of the curve. The results of this exercise are shown in Fig.\,\ref{rv_zero}. As can be seen in the bottom panel, the RV curve is consistent with no change in $v_{\rm sys}$ to within about {$\pm 0.4$} km\,s$^{-1}$~over the span of observations. {If the 0.9 s/cen period change is due to orbital motion, then it should have corresponded to a change in RV of about 3 km\,s$^{-1}$~over the approximately 20-year span of the RV observations. Since this would have been easily detected, binarity can be ruled out as the source of this period change.} If the 0.3 s/cen period change is due to binarity, on the other hand, we would expect a maximum RV shift of about {1} km\,s$^{-1}$~over the {20 years} covered by these data. {Since this is comparable to the standard deviation of $v_{\rm sys}$, orbital motion cannot be excluded in this case. However, there is no positive evidence for a change in $v_{\rm sys}$, and as will be shown below in Sect.\,\ref{stelev} there is good reason to believe that the 0.3 s/cen period change is due to stellar evolution.} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{HR.pdf} \includegraphics[width=0.5\textwidth]{PdotTeff.pdf} \caption{{\em Upper panel:}\ Theoretical HR diagram showing evolutionary models ignoring and including the effects of rotation, calculated by \protect{\citet{2012AA...537A.146E}}. {\em Lower panel:}\ $\dot{P}-T_{\rm eff}$ plane showing the same evolutionary models {above}, calculated according to Eq.(\ref{pdot_evol}) using the evolutionary models. The positions of {$\xi^1$\,CMa} according to the physical parameters inferred by \protect{\citet{2017MNRAS.471.2286S}} and the rate of period change determined here are shown in red.} \label{HRD} \end{figure} \subsection{Stellar evolution}\label{stelev} Since the period is growing, this is qualitatively consistent with the increasing radius of the star as expected due to stellar evolution on the main sequence \citep[e.g.][]{2015A&A...584A..58N}. As reported by those authors, the fractional rate of change of the pulsation period of a radially pulsating star due to evolving mass $M$ and radius $R$ on evolutionary timescales can be computed according to: \begin{equation} \frac{\dot{P}}{P} = -\frac{1}{2}\frac{\dot{M}}{M}+\frac{3}{2}\frac{\dot{R}}{R}.\label{pdot_evol} \end{equation} We have exploited the evolutionary model calculations of \citet{2012AA...537A.146E} to predict the variation of $\dot{P}$ according to Eq.\,(\ref{pdot_evol}). Given that {$\xi^1$\,CMa} is a (relatively) cool upper-main sequence star, its mass-loss rate is expected to be low; hence we have assumed $\dot{M}=0$ in Eq.\,(\ref{pdot_evol}). In Fig.~\ref{HRD} we show the star's position on the Hertzsprung-Russell (HR) diagram and on the $\dot{P}$ vs. 
$T_{\rm eff}$ diagram, using the physical parameters of \citet{2017MNRAS.471.2286S} and $\dot P=0.32\pm 0.05$~s/cen, and models both including the effects of rotation ($v_{\rm rot}=200$~km/s) and ignoring those effects \citep{2012AA...537A.146E} (these tracks bracket potential evolutionary histories of $\xi^1$\,CMa; although the star is known to be a very slow rotator today, its rotational history, and hence the appropriate evolutionary tracks, are unknown). For the non-rotating tracks, we derive a best-fitting mass of 14.6~M$_{\odot}$\ and age of 9.2~Myr. For the rotating tracks, we derive a best-fitting mass of 14.4~M$_{\odot}$\ and age of 11.1~Myr. The masses are formally consistent with that derived by \citet{2017MNRAS.471.2286S}. The position on the $\dot P-T_{\rm eff}$ diagram is formally consistent with both sets of models. We note that the more recent $\dot P\sim 0.9$~s/cen rate of period change does not agree with the models and the derived $T_{\rm eff}/\log L$. \subsection{Undetected pulsation modes} To interpret 25-year period variations in the O-C diagram of the $\beta$~Cep star BW~Vul, \citet{1984PASP...96..657O} submits that a second pulsation mode close in frequency to the primary mode could result in the apparent period variation: ``An alternate interpretation of the behaviour of BW~Vul is in terms of two pulsation modes which are so close to the same period that they don't 'beat' in the normal sense. In this case, the smaller-amplitude mode would have its peak first on the rise up to the peak of the large-amplitude mode, thus making maximum brightness earlier than usual." In the case of BW~Vul, the primary pulsation period is very similar to that of $\xi^1$\,CMa, but in our case no obvious O-C periodicity is observed. However, if we assume that the $\sim$40~yr timescale of O-C variability is a result of such a model, then according to \citet{1984PASP...96..657O} the second mode would differ from the known radial pulsation period by about 0.25\,s. In any case, as mentioned by \citet{1984PASP...96..657O}, such unresolved beating should also result in amplitude changes in accordance with apparent period changes. This is seen for neither BW~Vul nor $\xi^1$\,CMa. \subsection{Stellar rotation \& magnetism} As mentioned earlier, the O-C residuals show weak evidence for a periodic behaviour with $P \sim$ 40 years. Such a timescale could be compatible with rotation of $\xi^1$\,CMa. Magnetic early-type stars typically exhibit line profile variability coherent with the rotation period. This can be a consequence of Zeeman splitting, surface chemical abundance peculiarities, or magnetospheric emission. In principle these effects can affect radial velocity measurements. The period change apparent in the more recent data might be oscillatory, possibly with a decadal timescale; since the rotation period of {$\xi^1$\,CMa} is extremely long \citep[at least 30 years;][]{2017MNRAS.471.2286S,2018MNRAS.478L..39S}, it is natural to wonder if some form of rotationally modulated variation might be influencing the radial velocity or light variation of the star.
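A schematic sketch of the rotational phase-binning used to construct the dynamic spectra discussed below (Python with NumPy; the input times and line profiles are synthetic placeholders, while $T_0$ and the 30-yr period follow the ephemeris adopted in the text):

\begin{verbatim}
import numpy as np

# Schematic rotational phase-binning for dynamic spectra: RV-corrected
# profiles are phased on the rotation period and co-added in 30 bins.
T0    = 2455219.0       # HJD of maximum <Bz>
P_rot = 30.0 * 365.25   # assumed 30-yr rotation period [d]
nbins = 30

rng = np.random.default_rng(3)
hjd     = rng.uniform(2451500.0, 2458600.0, 400)   # placeholder times
spectra = rng.normal(1.0, 0.01, (400, 200))        # placeholder profiles

phase = ((hjd - T0) / P_rot) % 1.0
bins  = np.minimum((phase * nbins).astype(int), nbins - 1)

mean_ref = spectra.mean(axis=0)                    # reference spectrum
binned   = np.full((nbins, spectra.shape[1]), np.nan)
for b in range(nbins):
    sel = bins == b
    if sel.any():
        binned[b] = spectra[sel].mean(axis=0)      # co-added phase bin

residual = binned - mean_ref   # residual intensity mapped to colour
\end{verbatim}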
\begin{figure*} \centering \begin{tabular}{ccc} \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_halpha_rot_dyn.pdf} & \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_HeII4686_dyn.pdf} & \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_OII4662_dyn.pdf} \\ \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_SiIV4116_dyn.pdf} & \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_SIII4362_dyn.pdf} & \includegraphics[trim = 50 50 0 0, width=0.333\textwidth]{xi1cma_FeIII5834_dyn.pdf} \\ \end{tabular} \caption{Dynamic spectra displaying line profile variations coherent with the rotational ephemeris of \citet{2017MNRAS.471.2286S,2018MNRAS.478L..39S}. Top panels show residual intensity mapped to colour as a function of rotational phase; bottom panels show phase-binned intensity spectra (black) and the mean reference spectrum (red).}\label{profiles} \end{figure*} To investigate this question, we calculated dynamic spectra for various spectral lines using the combined CORALIE and ESPaDOnS dataset. These are shown in Fig.\,\ref{profiles}. The data were first shifted to zero velocity by subtracting the measured radial velocity, phased assuming a 30-year rotational period with $T_0 = 2455219$ set by the time of maximum \ensuremath{\langle B_z\rangle}, and then co-added in 30 phase bins. Radial velocity correction of individual spectra, combined with the presence of between 16 and about 100 spectra per bin, means that pulsational variability should be removed to first order. For the reference spectrum we used the mean spectrum created from the full dataset. For illustration, the top left panel of Fig.\,\ref{profiles} shows H$\alpha$; this line is already known to exhibit rotationally modulated magnetospheric emission \citep{2017MNRAS.471.2286S}. Maximum emission occurs at phase 0 \citep[i.e.\ at maximum \ensuremath{\langle B_z\rangle};][]{2017MNRAS.471.2286S,2018MNRAS.478L..39S} and the variability pattern is a smooth change in the strength of the central emission feature. Essentially all of the lines we examined display some form of rotationally modulated variation. The lines shown in Fig.\,\ref{profiles} were selected as exemplars of the three different patterns of variability. {He}\,{\sc ii} 4686\,\AA\ shows a very similar pattern of emission to H$\alpha$, suggesting that it is also partly formed within the magnetosphere. {O}\,{\sc ii} 4662\,\AA\ and {Si}\,{\sc iv} 4116\,\AA\ both show a pattern of alternating line strength between the cores and wings, with deeper line cores co-occurring with shallower line wings and vice-versa; the amplitude of the variation is furthermore similar between core and wings. {S}\,{\sc iii} 4362\,\AA\ and {Fe}\,{\sc iii} 5834\,\AA\ both show similar patterns of variation, but with much more pronounced changes in line depth and relatively minor variation in the line wings. All of the metallic lines we investigated showed similar variations, with the lines at their deepest at phase 0, and at their shallowest near phases 0.25 and 0.75 (with the former definitely occurring near {\ensuremath{\langle B_z\rangle}} $=0$). It is not clear what the source of the metallic line profile variability is. Magnetospheric variation seems unlikely, since: 1) H$\alpha$ emission is very weak, implying negligible emission in metallic lines; 2) the metallic lines are at peak absorption at phase 0, when (if the variability were magnetospheric in origin) in-filling by emission should be greatest.
Chemical spots also seem unlikely, as: 1) the formation of surface abundance inhomogeneities is inhibited in B0/B1 stars by their strong(er) winds; 2) all lines vary in essentially the same fashion, whereas the distribution of chemical spots tends to differ from one chemical element to another. Zeeman splitting may be plausible (the expected amplitude in a line with a Land\'e factor of 1.2, at 5000 \AA, for a star with a 1.2~kG surface magnetic field is about 2~km\,s$^{-1}$); however, this does not seem to be the source of the variation: such splitting should be at its strongest (i.e.\ line width should be at a maximum) at maximum \ensuremath{\langle B_z\rangle}, whereas precisely the opposite is the case. The difficulty in explaining $\xi^1$\,CMa's rotationally modulated variation via conventional mechanisms suggests some heretofore unrecognized phenomenon. Whatever the origin of the rotationally modulated variation, however, in all cases it appears to be symmetrical about the line profile; thus, it should have no biasing effect upon the measurement of radial velocities. \section{Conclusions} We have analyzed spectroscopic and photometric data spanning over a century with the principal goal of examining the pulsation period evolution of the magnetic, slowly rotating $\beta$~Cep star $\xi^1$\,CMa. The observations confirm the previously reported long-term increase of the pulsation period at a rate of approximately $0.3$\,s/cen, as well as recent, more rapid evolution at a rate roughly three times larger. $\xi^1$\,CMa\ exhibits a number of other characteristics that cause it to stand out from the broader population of $\beta$~Cep stars. The star exhibits a highly dominant radial pulsation mode, and a century of observations reveals no clear evidence for change in pulsation amplitude. New TESS observations furthermore permit the detection of several low-amplitude modes with frequencies below 5~d$^{-1}$. As discussed in Sect.~\ref{RVcorr}, $\xi^1$\,CMa\ exhibits a phase offset between maximum light and maximum RV of 0.334, significantly larger than the typical value of $\sim 0.25$. We demonstrate that these properties can be reconciled by a seismic model in which the star pulsates in the fundamental radial mode. It has the strongest magnetic field of any known $\beta$~Cep star, and it is one of the most slowly rotating known magnetic stars. We conclude that the long-term lengthening of the period is not likely a consequence of a binary companion. That rate is, however, consistent with that expected from evolution of the star at its current position on the main sequence inferred using standard stellar evolution models. We have no particular explanation for the recent, more rapid period evolution, although the associated timescale may be compatible with stellar rotation. Alternatively, we recall that we have observed only a very small part of a phenomenon which may take place on the nuclear timescale. Should we really expect it to proceed so smoothly? Given that the most recent observational data are more precise and provide a much denser temporal sampling, it may well be that similar short-term pulsational accelerations and decelerations have occurred in the past, and that they are a typical phenomenon. In fact, we know of no well-studied $\beta$~Cep star that shows period evolution with a constant $\dot P$ consistent with that expected from stellar evolution models.
Period changes in BW~Vul, for example, were historically interpreted as a combination of $\dot P=$\,const and light-time effect, but the recent study by \citet{2012A&A...544A..28O} shows the reality can be (much) more complicated. Continued high-cadence monitoring of the pulsation period of the star will be a key to understanding the roles and relationships of these properties in producing the observed period evolution. \section*{Acknowledgments} Based on data collected by the BRITE Constellation satellite mission, designed, built, launched, operated and supported by the Austrian Research Promotion Agency (FFG), the University of Vienna, the Technical University of Graz, the University of Innsbruck, the Canadian Space Agency (CSA), the University of Toronto Institute for Aerospace Studies (UTIAS), the Foundation for Polish Science \& Technology (FNiTP MNiSW), and National Science Centre (NCN). Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut national des sciences de l'Univers of the Centre national de la recherche scientifique of France, and the University of Hawaii. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Explorer Program. Funding for the TESS Asteroseismic Science Operations Centre is provided by the Danish National Research Foundation (Grant agreement no.: DNRF106), ESA PRODEX (PEA 4000119301) and Stellar Astrophysics Centre (SAC) at Aarhus University. We thank the {\it TESS} team and staff and TASC/TASOC for their support of the present work. APi acknowledges support from the NCN grant 2016/21/B/ST9/01126, and helpful discussions with Przemek Walczak. Adam Popowicz was responsible for image processing and automation of photometric routines for the data registered by BRITE-nanosatellite constellation, and was supported by statutory activities grant SUT 02/010/BKM19 t.20. GAW acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) of Canada in the form of a Discovery Grant. GH gratefully acknowledges funding through NCN grant 2015/18/A/ST9/00578. We thank Daniel Heynderickx for supplying his photometric data and Monika Rybicka for help with the APT data reduction. KZ acknowledges support by the Austrian Space Application Programme (ASAP) of the Austrian Research Promotion Agency (FFG). MES acknowledges support from the Annie Jump Cannon Fellowship, supported by the University of Delaware and endowed by the Mount Cuba Astronomical Observatory. The authors would like to thank Dr. V. Petit (University of Delaware) and the anonymous referee for helpful comments on the manuscript. \bibliographystyle{mnras}
{ "attr-fineweb-edu": 1.592773, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbeI5qsMAIoJibVRo
\section{Introduction} Optical telescopes and gravitational-wave detectors are two of the most important technologies in modern physics and astronomy. This paper studies a remarkable connection between them from the perspective of quantum metrology. The key insight is that the photons from incoherent sources received by a telescope and an optomechanical system under a stochastic gravitational-wave background can both be modeled as quantum systems under random displacements. In both imaging and the sensing of stochastic gravitational-wave backgrounds, measurements are performed to estimate the probabilistic properties of the displacements, and the measurements for both problems turn out to share significant similarities in a statistical sense. My group has studied both problems \cite{tnl,tsang19a,tsang_nair,ng16}, but the connection has hitherto not been elaborated. Inspired by the connection, here I use the insights gained from our study of incoherent imaging to devise an optimal measurement for an optical random displacement model with squeezed light, thus solving the open problems in Refs.~\cite{tsang_nair,ng16}. The optimal measurement is far superior to the standard homodyne detection in the same way quantum-inspired imaging methods can beat direct imaging. Beyond imaging, optomechanics, and gravitational-wave detection, the random displacement model is also relevant to magnetometers under fluctuating magnetic fields \cite{budker07} and microwave cavities driven by hypothetical dark-matter axions \cite{backes21}, so the insights and results here should have wider implications. \section{Models} Consider the following model for the one-photon density operator $\rho$ in the incoherent optical imaging problem \cite{tnl,tsang19a}: \begin{align} \rho &= \int dP\ U_{X} \ket{\psi}\bra{\psi} U_{X}^\dagger, \label{rho} \\ U_X &= \prod_{m=1}^M \exp\bk{-i k_m X_m}, \label{UX_imaging} \end{align} where $M$ is the dimension of the object and image planes, $\ket{\psi}$, an element of the Hilbert space $\mathcal H = \mathcal H_1 \otimes \dots \otimes \mathcal H_M$, models the diffraction-limited point-spread function of the imaging system, $k_m$ is a momentum operator on $\mathcal H_m$, $U_X$ is a unitary operator that models the photon displacement on the image plane due to a point source, and $X$ is a real classical $M$-dimensional random vector under the probability measure $P$, which models the object intensity function. Mathematically, Eq.~(\ref{rho}) is a Bochner integral; both $dP$ and $X$ in Eq.~(\ref{rho}) depend implicitly on $x \in S$ in terms of a probability space $(S,\Sigma,P)$ \cite{holevo11}. The imaging problem can be framed as a quantum detection or estimation problem \cite{tsang19a,helstrom}, where $P$ belongs to a family of probability measures $\{P_\theta:\theta \in \Theta\}$ parametrized by a parameter $\theta$ in some parameter space $\Theta$ and a parameter of interest $\beta(\theta)$ is to be estimated via measurements of the optical fields. Studies in the area of quantum-inspired superresolution have shown that spatial-mode demultiplexing (SPADE) can offer a far superior performance over direct imaging and achieve the quantum limits in the resolution of two point sources \cite{tnl,lu18}, object-size estimation \cite{tsang17,dutton19}, and moment estimation \cite{tsang17,tsang18a,tsang19,zhou19,tsang19b,tsang21a,tsang22}. The incoherent imaging model turns out to be mathematically similar to a noise spectroscopy model also proposed by my group in Refs.~\cite{tsang_nair,ng16}. 
The main difference is in the dimension $M$: imaging problems usually assume that $M$ is one or two, whereas Refs.~\cite{tsang_nair,ng16} assume that it is infinite. In the noise spectroscopy problem, $\rho$ is the state of a quantum dynamical system coupled to quantum fields, $\ket{\psi}$ is an element of an infinite-dimensional Hilbert space $\mathcal H$ that models the input state of the total system, $X(t)$ is a real classical random process with respect to a time variable $t$ that generalizes the $m$ in Eq.~(\ref{UX_imaging}), and the unitary is \begin{align} U_X &= \mathcal T \exp\Bk{-i \int_{0}^{T} dt k(t) X(t)}, \label{UX} \end{align} where $\mathcal T$ denotes time ordering, $T$ is the total observation time, and $k(t)$ is a Hermitian operator on $\mathcal H$ in an interaction picture \cite{tsang_open}. $\ket{\psi}$ and $k(t)$ are assumed to be independent of $X$. Any sequential measurements concurrent with the displacement can be modeled as a final measurement via the principle of deferred measurement \cite{twc,nielsen}. Examples include an optical field under a random displacement or phase modulation, an optomechanical system under a stochastic force \cite{aspelmeyer14,nimmrichter}, a gravitational-wave detector under a stochastic background \cite{christensen19}, a spin ensemble under a stochastic magnetic field \cite{budker07}, and a microwave cavity driven by dark-matter axions \cite{backes21}. References~\cite{tsang_nair,ng16} assume that $X(t)$ is a stationary zero-mean Gaussian random process, and its power spectral density $S_X(\omega|\theta)$ depends on the unknown parameter $\theta$. Reference~\cite{tsang_nair} assumes that $\Theta$ is binary with $S_X(\omega) = 0$ for one of the hypotheses, such that the problem of interest is the detection of a random displacement, while Ref.~\cite{ng16} assumes that $\Theta$ is a multidimensional Euclidean space, such that the problem is spectrum-parameter estimation. In other words, Refs.~\cite{tsang_nair,ng16} assume parametric models for the probability measure $P$, in the same way parametric models for $P$ are assumed for incoherent imaging. \section{Spectrum-parameter estimation} The power spectral density, being a second-order statistic, is analogous to the second-order object moments in the context of imaging. Since SPADE can enhance the estimation of second-order moments \cite{tsang17,tsang18a,tsang19,zhou19,tsang19b,tsang21a,tsang22}, it is natural to ask if a similar enhancement can be found for noise spectroscopy. The answer is yes---Ref.~\cite{ng16} considers an optical field under weak and random phase modulation and finds that spectral photon counting, a discrete-variable measurement analogous to SPADE, can be far superior to homodyne detection, a continuous-variable measurement analogous to direct imaging, when the input state $\ket{\psi}$ is a coherent state. Spectral photon counting is quantum-optimal and enjoys significant superiority over homodyne detection in the regime of low signal-to-noise ratios, just as SPADE is quantum-optimal and superior in the regime of subdiffraction object sizes for imaging. In the following, I adopt a level of mathematical rigor typical of the physics and engineering literature \cite{gardiner_zoller,shapiro09} to arrive at results quickly, following Refs.~\cite{tsang_nair,ng16}. To derive a quantum limit to noise spectroscopy, Ref.~\cite{ng16} makes the following assumptions: \begin{enumerate}[label=(A\arabic*)] \item $X(t)$ is a zero-mean Gaussian process. 
\item The processes $X(t)$ and \begin{align} \Delta k(t) \equiv k(t) - \bra\psi k(t)\ket\psi \end{align} are stationary in the wide sense \cite{vantrees,shumway_stoffer,gardiner_zoller}, such that \begin{align} C_X(\tau|\theta) &\equiv \expect_\theta[X(t)X(t+\tau)], \\ C_k(\tau) &\equiv \bra{\psi}\Delta k(t)\circ \Delta k(t+\tau)\ket{\psi} \end{align} are independent of $t$. ($\expect_\theta$ denotes the expectation with respect to $P_\theta$ and $A\circ B \equiv (AB+BA)/2$ denotes the Jordan product.) \item The observation time $T$ is long enough to justify certain approximations regarding stationary processes \cite{vantrees,shumway_stoffer}. \end{enumerate} Such assumptions are common in statistics \cite{vantrees,shumway_stoffer} and have the virtue of giving simple closed-form results for the infinite-dimensional model. Assuming also that $\theta$ is a scalar for simplicity, a quantum limit to the Fisher information $J$ for any measurement is \cite{ng16} \begin{align} J &\le K \le \tilde K \to T \int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{(\partial \ln S_X)^2}{2 + 1/(S_k S_X)} , \label{ec} \\ S_X(\omega|\theta) &\equiv \int_{-\infty}^{\infty} d\tau C_X(\tau|\theta) \exp(i\omega\tau), \\ S_k(\omega) &\equiv \int_{-\infty}^{\infty} d\tau C_k(\tau) \exp(i\omega \tau), \end{align} where $K$ is the Helstrom information in terms of $\rho$ as a function of $\theta$ \cite{helstrom,hayashi}, $\tilde K$ is a bound derived in Ref.~\cite{ng16} using the extended convexity of $K$ \cite{alipour}, $\partial \equiv \pdv*{}{\theta}$, and $\to$ denotes the long-time limit. Generalization for a vectoral $\theta$ is straightforward \cite{ng16,tsang20,tsang21a}. If $P_\theta$ is not Gaussian, a bound may still be obtained by using the convexity of the Helstrom information; Section~6 of Ref.~\cite{tsang17} uses the convexity to derive a quantum limit to object-size estimation in the context of imaging and shows that SPADE can approach the limit. At the time of Ref.~\cite{ng16}, we were unable to find a quantum-optimal measurement when the input state is not a coherent state, but the correspondence with incoherent imaging offers a new insight. We know that SPADE can remain superior and optimal as long as its basis is adapted to $\ket{\psi}$ \cite{tnl,rehacek17,tsang18a,tsang21a,lu18}. This fact suggests that a discrete-variable measurement is still optimal for noise spectroscopy with a nonclassical state, as long as the measurement basis is adapted to the input state $\ket{\psi}$. If $\ket{\psi}$ is a squeezed state, it still has a Gaussian wavefunction and is analogous to a Gaussian point-spread function in imaging. The imaging correspondence then suggests that an optimal basis adapted to $\ket{\psi}$ is simply a squeezed version of an optimal basis adapted to the vacuum. A measurement in that basis can be implemented by unsqueezing the output field, analogous to an image magnification, before spectral photon counting. I now show the optimality of the unsqueezing and spectral photon counting (USPC) method in detail. Let \begin{align} k(t) &= A^\dagger(t) A(t), & \qty[A(t),A^\dagger(t')] &= \delta(t-t'), \end{align} where $A(t)$ is the annihilation operator for the slowly varying envelope of an optical field with carrier frequency $\Omega$ \cite{gardiner_zoller,shapiro09} and $k(t)$ is the photon-flux operator. $X$ is then a phase modulation on the optical field. Since $k(t)$ commutes with itself at different times, the time ordering in Eq.~(\ref{UX}) is redundant. 
Assume also \begin{align} \ket{\psi} &= D(\alpha) V\ket{\textrm{vac}}, \label{sq_vac} \end{align} where $\ket{\textrm{vac}}$ is the vacuum state, $V$ is a unitary operator that models the squeezing, and $D(\alpha)$ is the displacement operator that gives a constant mean field $\bra{\psi} A(t)\ket{\psi} = \alpha$. $|\alpha|^2$ is the mean photon flux. With a high $|\alpha|$ and weak phase modulation, $D^\dagger k(t) D$ can be linearized as an intensity quadrature operator \begin{align} D^\dagger k(t) D &\approx |\alpha|^2 + \kappa(t), & \kappa(t) &\equiv \alpha A^\dagger(t) +\alpha^* A(t), \end{align} and $D^\dagger U_X D$ becomes a displacement operator. The initial squeezing $V$ should squeeze the orthogonal phase quadrature \begin{align} \eta(t) &\equiv \frac{1}{2i|\alpha|^2} \qty[\alpha A^\dagger(t) -\alpha^* A(t)] \end{align} and antisqueeze the intensity quadrature, such that \begin{align} V^\dagger \eta V &= h * \eta, & V^\dagger \kappa V &= g * \kappa, \end{align} where $h *\eta \equiv \int_{-\infty}^{\infty} dt' h(t-t') \eta(t')$ denotes the convolution and the real Green functions $h(t)$ and $g(t)$ model the squeezing and the antisqueezing, respectively \cite{gardiner_zoller}. Their Fourier transforms are related by \begin{align} |\tilde h(\omega) \tilde g(\omega)| &= 1, \end{align} where \begin{align} \tilde g(\omega) &\equiv \int_{-\infty}^{\infty} dt g(t)\exp(i\omega t), \end{align} and $\tilde h(\omega)$ is defined similarly. After $U_X$, suppose that the mean field is nulled by $D^\dagger(\alpha)$ and then the squeezing is undone by a unitary $W$, which is the same as $V$ except that a negative sign is introduced to the parametric-amplifier Hamiltonian. The effect of $W$ on the quadratures can be modeled as \begin{align} W^\dagger \eta W &= g * \eta, & W^\dagger \kappa W &= h * \kappa. \end{align} Note that $W$ is not $V^\dagger$, as the Green functions would become anticausal and thus unphysical if $W$ were $V^\dagger$. Conditioned on $X$, the output state \begin{align} \ket{\psi'} = W D^\dagger U_X D V \ket{\textrm{vac}} \end{align} is a coherent state with mean field \begin{align} \alpha'(t) &\equiv \bra{\psi'}A(t)\ket{\psi'} = -i \alpha g * X, \label{mean_field} \end{align} where the displacement in the phase quadrature is amplified by the unsqueezing. This model is also applicable to the dark port of a Michelson interferometer \cite{caves81}, where the squeezing $V$ and the unsqueezing $W$ should be applied to the input and output of the dark port, respectively, the displacements $D$ and $D^\dagger$ are naturally implemented by the beam splitter in the interferometer, and $X$ is the relative phase between the two arms. Any radiation-pressure-induced noise is assumed to be negligible or eliminated \cite{klmtv,qnc}. To facilitate the analysis of the subsequent step of spectral photon counting, I discretize frequency by assuming that $X(t)$ is given by the Fourier series \begin{align} X(t) &= \sum_{m=-\infty}^\infty \tilde X_m\exp(-i\omega_m t), \quad \omega_m \equiv \frac{2\pi m}{T}, \label{series} \\ \tilde X_m &\equiv \frac{1}{T}\int_0^T dt X(t) \exp(i\omega_m t). \end{align} Then the mean field of the output coherent state given by Eq.~(\ref{mean_field}) can be expressed as \begin{align} \alpha'(t) &= -i \alpha \sum_{m=-\infty}^\infty \tilde g(\omega_m) \tilde X_m\exp(-i\omega_m t). 
\end{align} Suppose that a spectrometer disperses the output field in terms of frequency modes defined by the annihilation operators \begin{align} a_m &\equiv \frac{1}{\sqrt{T}}\int_0^T dt A(t) \exp(i\omega_m t), & m &\in \mathbb Z, \end{align} where $\omega_m$ is a sideband frequency relative to the carrier $\Omega$ \cite{shapiro98}. Each frequency mode is then in a coherent state with a displacement given by \begin{align} \tilde \alpha_m \equiv \bra{\psi'}a_m\ket{\psi'} = \sqrt{T}\alpha\tilde g(\omega_m) \tilde X_m. \end{align} Since $X(t)$ is real, $\tilde X_{-m} = \tilde X_m^*$. Let $\{\tilde X_m: m \ge 0\}$ be independent zero-mean complex Gaussian random variables, each with variance \begin{align} s_m(\theta) \equiv \expect_\theta(|\tilde X_m|^2). \end{align} This assumption allows the Fourier series given by Eq.~(\ref{series}) to approach any real stationary zero-mean Gaussian process with $T s_m(\theta) \to S_X(\omega_m|\theta)$ in the long-time limit \cite{shumway_stoffer}. By summing the photon counts at each pair of sideband frequencies $-\omega_{m}$ and $\omega_m$, one obtains a set of photon counts that follow the Bose-Einstein distribution \begin{align} f_\theta(n) &= \prod_{m\ge 0}\frac{1}{1+\bar N_m}\qty(\frac{\bar N_m}{1+\bar N_m})^{n_m}, \label{bose} \\ \bar N_m(\theta) &= \begin{cases} 2\expect_\theta(|\tilde\alpha_m|^2) = 2 |\alpha\tilde g(\omega_m)|^2 T s_m(\theta), & m > 0,\\ \expect_\theta(|\tilde\alpha_0|^2) = |\alpha\tilde g(0)|^2 T s_0(\theta), & m = 0. \end{cases} \label{bose_mean} \end{align} The Fisher information for USPC is hence \begin{align} J_{\textrm{USPC}} &\equiv \sum_n f_\theta(n) \qty[\partial \ln f_\theta(n)]^2 \\ &= \sum_{m \ge 0} \frac{(\partial \ln \bar N_m)^2}{1+1/\bar N_m} \\ &\to T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\frac{(\partial \ln S_X)^2} {2 + 1/(|\alpha \tilde g|^2 S_X)}, \label{Juspc} \end{align} where the long-time limit gives $\sum_{m\ge 0} \to T \int_0^\infty d\omega/(2\pi)$ and the integral $\int_0^\infty d\omega$ for an even integrand is rewritten as the double-sided integral $\int_{-\infty}^{\infty} d\omega/2$ for easier comparison with Eq.~(\ref{ec}). To compare this result with the quantum bound, note that the power spectral density of $\Delta k(t)$ with respect to $\ket{\psi} = D V \ket{\textrm{vac}}$ is the same as that of $\Delta k'(t)$ with respect to $\ket{\textrm{vac}}$, where $k'(t) \equiv V^\dagger D^\dagger k(t) D V$, and the antisqueezing of the intensity quadrature by $V$ leads to \begin{align} S_k(\omega) &= |\alpha\tilde g(\omega)|^2. \label{Sp} \end{align} With this $S_k(\omega)$, the USPC information given by Eq.~(\ref{Juspc}) matches the quantum bound $\tilde K$ given by Eq.~(\ref{ec}) and is hence quantum-optimal. For comparison, the Fisher information for homodyne detection of the phase quadrature $U_X^\dagger \eta U_X = \eta + X$ is \cite{ng16,whittle} \begin{align} J_{\textrm{hom}} &\to T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \frac{(\partial \ln S_X)^2}{2 + 4S_\eta/S_X + 2(S_\eta/S_X)^2}, \label{Jhom} \end{align} where $\eta$ is assumed to be a stationary zero-mean Gaussian process with power spectral density $S_\eta$. For the squeezed $\ket{\psi}$, \begin{align} S_\eta(\omega) = \frac{1}{4 S_k(\omega)}, \end{align} and $S_\eta S_k \ge 1/4$ in general \cite{gardiner_zoller}. 
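As a numerical illustration of Eqs.~(\ref{Juspc}) and (\ref{Jhom}), the following Python sketch evaluates the two long-time Fisher informations per unit time for a hypothetical Lorentzian signal spectrum $S_X(\omega|\theta) = \theta^2\gamma/(\omega^2+\gamma^2)$, a frequency-flat $S_k$, and the quantum-limited homodyne noise $S_\eta = 1/(4S_k)$; the spectrum and all parameter values are assumptions chosen for illustration only.
\begin{verbatim}
import numpy as np

# Sketch: numerically compare the per-unit-time Fisher informations for
# unsqueezing-and-spectral-photon-counting (USPC) and homodyne detection,
# for a hypothetical Lorentzian S_X(w|theta) = theta^2*gamma/(w^2+gamma^2)
# with frequency-flat S_k and quantum-limited S_eta = 1/(4*S_k).
w = np.linspace(-50.0, 50.0, 200001)
gamma, S_k = 1.0, 1.0
S_eta = 1.0 / (4.0 * S_k)

def informations(theta):
    S_X = theta**2 * gamma / (w**2 + gamma**2)
    dlnS = 2.0 / theta            # d(ln S_X)/d(theta), constant in w here
    J_uspc = np.trapz(dlnS**2 / (2.0 + 1.0 / (S_k * S_X)), w) / (2 * np.pi)
    J_hom = np.trapz(dlnS**2 / (2.0 + 4.0 * S_eta / S_X
                                + 2.0 * (S_eta / S_X)**2), w) / (2 * np.pi)
    return J_uspc, J_hom

for theta in [1.0, 0.1, 0.01]:
    Ju, Jh = informations(theta)
    print(f"theta={theta}: J_USPC/T={Ju:.3e}, J_hom/T={Jh:.3e}")
# As theta -> 0 the ratio J_hom/J_USPC shrinks like ~ 8*S_k*S_X.
\end{verbatim}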
Compared with the optimal information given by Eqs.~(\ref{ec}) and (\ref{Juspc}), the homodyne information with a quantum-limited $S_\eta$ has an extra factor $2(S_\eta/S_X)^2$ in the denominator, which is significant when the spectral signal-to-noise ratio $S_X/S_\eta$ is low. To see their difference more clearly, assume \begin{align} \frac{S_X}{S_\eta} = 4S_k S_X \ll 1 \label{low_snr} \end{align} and perform Taylor approximations of Eqs.~(\ref{ec}), (\ref{Juspc}), and (\ref{Jhom}), which give \begin{align} J_{\textrm{USPC}} &= \tilde K \approx T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} S_k S_X (\partial \ln S_X)^2, \label{Juspc_low} \\ J_{\textrm{hom}} &\approx 8T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} (S_kS_X)^2 (\partial \ln S_X)^2. \label{Jhom_low} \end{align} $J_{\textrm{hom}}$ is much lower because of an extra factor of $8S_kS_X$ in the integrand. For a simple example, suppose that $S_X(\omega|\theta) = \theta^2 R(\omega)$, where $\theta$ is the magnitude of the displacement and $R(\omega)$ is a known spectrum. Then \begin{align} J_{\textrm{USPC}} &= \tilde K \to T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\frac{4}{2\theta^2 + 1/(S_k R)}, \\ J_{\textrm{hom}} &\to T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\frac{4} {2\theta^2 + 1/(S_kR) + 1/[8(\theta S_k R)^2]}. \end{align} As $\theta \to 0$, $J_{\textrm{hom}}$ scales quadratically with $\theta$ and vanishes, while $J_{\textrm{USPC}}$ tends to a positive constant. These behaviors are analogous to the phenomenon of Rayleigh's curse for direct imaging and the superiority of SPADE in two-point resolution and object-size estimation \cite{tnl,tsang17,dutton19}. \section{Stochastic-displacement detection} Consider now the detection problem studied in Ref.~\cite{tsang_nair}. Let $\theta \in \Theta = \{0,1\}$, $P_0$ be the measure that gives the deterministic $X = 0$ when the displacement is absent, $P_1$ be the measure for $X$ when the displacement is present, and $\rho_\theta$ be the quantum state as a function of $\theta$. Since $\rho_0 = \ket{\psi}\bra{\psi}$ is pure in this problem, the Uhlmann fidelity is given by \begin{align} F &\equiv \trace \sqrt{\sqrt{\rho_0}\rho_1\sqrt{\rho_0}} = \sqrt{\bra{\psi}\rho_1\ket{\psi}}, \end{align} while the quantum Chernoff exponent \cite{audenaert08} is given by \begin{align} \xi &\le \zeta \equiv -\ln \inf_{0\le s \le 1}\trace\qty(\rho_0^{1-s}\rho_1^s) = - 2\ln F, \end{align} where $\xi$ is the classical Chernoff exponent for any measurement. $F$ and $\zeta$ can be used to set a variety of lower and upper bounds on the error probabilities under the Bayesian or Neyman-Pearson criterion \cite{helstrom,hayashi,tsang_nair,audenaert08}. Assuming (A1)--(A3) for $P_1$ and $\ket{\psi}$ and also \begin{enumerate} \item[(A4)] $\ket{\psi}$ is a Gaussian state, \item[(A5)] $k(t)$ is a linear function of bosonic creation and annihilation operators, such that $U_X$ is a displacement operator, \end{enumerate} we found that the quantum exponent is \cite{tsang_nair} \begin{align} \zeta &\to \frac{T}{2} \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\ln \qty(1 + 2 S_k S_X). \label{qexp} \end{align} We also considered in Ref.~\cite{tsang_nair} the performances of the Kennedy receiver and the homodyne detection for the optical model, but we were unable to find the exact optimal measurement at the time. Here I solve the open problem by showing that USPC is also optimal for the detection problem, in analogy with the optimality of SPADE for the binary-source detection problem \cite{lu18}. 
Assuming again weak phase modulation, the USPC distribution given by Eqs.~(\ref{bose}) and (\ref{bose_mean}), and \begin{align} S_X(\omega|0) &= 0, & f_0(n) &= \delta_{n0}, & S_X(\omega|1) &= S_X(\omega), \end{align} the Chernoff exponent is \begin{align} \xi_{\textrm{USPC}} &\equiv -\ln \inf_{0\le s \le 1}\sum_n \qty[f_0(n)]^{1-s} \qty[f_1(n)]^s \\ &= \sum_{m\ge 0} \ln\qty[1+\bar N_m(1)] \\ &\to \frac{T}{2} \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\ln \qty(1 + 2|\alpha\tilde g|^2 S_X). \label{exp_uspc} \end{align} With the $S_k$ given by Eq.~(\ref{Sp}), $\xi_{\textrm{USPC}}$ matches the quantum limit given by Eq.~(\ref{qexp}). For comparison, consider the classical Chernoff exponent for homodyne detection given by \cite{tsang_nair,shumway_stoffer} \begin{align} \xi_{\textrm{hom}} &\to \sup_{0\le s \le 1}\frac{T}{2} \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \ln\qty[\frac{1+s S_X/S_\eta}{(1 + S_X/S_\eta)^{s}}]. \label{exp_hom} \end{align} The imaging correspondence suggests that there should be a significant gap between $\zeta$ and $\xi_{\textrm{hom}}$, although we did not realize it at the time of Ref.~\cite{tsang_nair}. To demonstrate the gap now, assume again a low spectral signal-to-noise ratio as per Eq.~(\ref{low_snr}) and perform Taylor approximations of Eqs.~(\ref{qexp}), (\ref{exp_uspc}), and (\ref{exp_hom}), which give \begin{align} \xi_{\textrm{USPC}} &= \zeta \approx T \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} S_k S_X , \label{exp_uspc_low} \\ \xi_{\textrm{hom}} &\approx \sup_s \frac{s(1-s)T}{4} \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \qty(\frac{S_X}{S_\eta})^2 \\ &= T\int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \qty(S_k S_X)^2. \label{exp_hom_low} \end{align} The optimal exponent is linear with respect to $S_X$, whereas the homodyne exponent is only quadratic. These scalings are analogous to the scalings of the optimal exponent and the direct-imaging exponent with respect to the source separation in the binary-source detection problem \cite{lu18}. It is possible to study more precisely the error probabilities of the detection problem under the Neyman-Pearson criterion \cite{tsang_nair,hayashi,huang21,zanforlin22}, although the insights offered by such calculations should not deviate much from the ones reported here. \section{Discussion} Since homodyne detection is the current standard measurement method in gravitational-wave detection \cite{danilishin19}, the superior scalings of the USPC information quantities indicated by Eqs.~(\ref{Juspc_low}), (\ref{Jhom_low}), (\ref{exp_uspc_low}), and (\ref{exp_hom_low}) are important discoveries. They suggest that USPC can substantially enhance the detection and spectroscopy of stochastic gravitational-wave backgrounds when the signal-to-noise ratio is low, in the same way SPADE can enhance incoherent imaging. Considering that squeezed light is now being used in gravitational-wave detectors \cite{tse19}, the unsqueezing step proposed here is important, as it optimizes the measurement for squeezed light beyond the coherent-state case considered in Refs.~\cite{tsang_nair,ng16} and allows the full potential of quantum-enhanced interferometry to be realized for noise spectroscopy. The correspondence between the incoherent imaging model and the random displacement model is used implicitly in Section~6 of Ref.~\cite{tsang17} and briefly mentioned in Ref.~\cite{tsang19} but not elaborated there. 
References~\cite{gefen19,mouradian21} point out the correspondence between incoherent imaging and noise spectroscopy more explicitly, although they assume a low dimension for the Hilbert space and somewhat different parametric models. A more recent outstanding work by G\'orecki, Riccardi, and Maccone \cite{gorecki22} also notices the correspondence and uses the convexity of the Helstrom information to derive a quantum bound for a random displacement model with one optical mode. They discovered independently that unsqueezing before photon counting is optimal for a squeezed input state and superior to homodyne detection. Another relevant work is Ref.~\cite{shi22} by Shi and Zhuang, who also discovered independently the optimality and superiority of unsqueezing and photon counting for a slightly different random displacement model. None of Refs.~\cite{gefen19,mouradian21,gorecki22,shi22} considers the detection problem, and none is aware of the prior Refs.~\cite{tsang_nair,ng16}. As there exist many other results in quantum-inspired superresolution that have not yet been translated to noise spectroscopy, and vice versa, the correspondence between the two models should have a lot more to give. \section*{Acknowledgment} This research is supported by the National Research Foundation (NRF) Singapore, under its Quantum Engineering Programme (Grant No.~QEP-P7).
{ "attr-fineweb-edu": 1.99707, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbec5qWTD6heGqtXg
\section{Introduction} \label{section:IntroApplication} Networks are of wide interest and able to represent many different phenomena, for example social interactions and connections between regions in the brain \citep{kolaczyk2009statistical,ginestet2017hypothesis}. The study of dynamic networks has recently increased as more data of this type is becoming available \citep{rastelli_latouche_friel_2018}, where networks evolve over time. In this paper we develop some non-parametric regression methods for modelling and predicting networks where covariates are available. An application for this work is the study of dynamic networks derived from the Enron email corpus, in which each network corresponds to communications between employees in a particular month \citep{Diesner2005}. Another motivating application is the study of evolving writing styles in the novels of Jane Austen and Charles Dickens, in which each network is a representation of a novel based on word co-occurrences, and the covariate is the time that writing of the novel began \citep{Severnetal19}. In both applications, the goal is to model smooth trends in the structure of the dynamic networks as they evolve over time. An example of previous work on dynamic network data is \citet{friel2016interlocking}, who embedded nodes of bipartite dynamic networks in a latent space, motivated by networks representing the connection of leading Irish companies and board directors. We will also use embeddings in our work, although the bipartite constraints are not present. Further approaches include object functional principal components analysis \citep{Dubeymueller19} applied to time-varying networks from New York taxi trip data; multi-scale time-series modelling \citep{Kangetal17} applied to magnetoencephalography data in neuroscience; and quotient space methodology applied to brain arterial networks \citep{Guosriv20}. The analysis of networks is a type of object oriented data analysis \citep{Marralon14}, and important considerations are to decide what the data objects are and how they are represented. We consider datasets where each observation is a weighted network, denoted $G_m=(V,E)$, comprising a set of nodes, $V=\lbrace v_1, v_2,\ldots, v_m\rbrace$, and a set of edge weights, $E=\lbrace w_{ij} : w_{ij}\geq 0, 1\leq i,j \leq m\rbrace$, indicating nodes $v_i$ and $v_j$ are either connected by an edge of weight $w_{ij}>0$, or else unconnected (if $w_{ij}=0$). An unweighted network is the special case with $w_{ij}\in\lbrace 0,1\rbrace$. We restrict attention to networks that are undirected and without loops, so that $w_{ij}=w_{ji}$ and $w_{ii}=0$; any such network can then be identified with its graph Laplacian matrix $\mathbf{L}=(l_{ij})$, defined as \begin{align*} l_{ij} = \begin{cases} -w_{ij}, & \text{if } i\neq j\\ \sum_{k\neq i}w_{ik},& \text{if } i=j \end{cases} \end{align*} for $1\leq i,j \leq m$. The graph Laplacian matrix can be written as $\mathbf{L}=\mathbf{D}-\mathbf{A}$, in terms of the adjacency matrix, $\mathbf{A}=(w_{ij})$, and degree matrix $\mathbf{D}=\text{diag}(\sum_{j=1}^mw_{1j},\ldots,\sum_{j=1}^mw_{mj})=\text{diag}(\mathbf{A}\mathbf{1}_m)$, where $\mathbf{1}_m$ is the $m$-vector of ones. The $i$th diagonal element of $\mathbf{D}$ equals the degree of node $i$.
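As a small numerical illustration of these definitions, the following Python sketch constructs $\mathbf{L}=\mathbf{D}-\mathbf{A}$ from a hypothetical $3\times 3$ weight matrix and checks the defining properties of a graph Laplacian.
\begin{verbatim}
import numpy as np

# Sketch: build the graph Laplacian L = D - A of a small undirected,
# weighted network from its symmetric, zero-diagonal adjacency matrix.
A = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # hypothetical edge weights w_ij
D = np.diag(A.sum(axis=1))        # degree matrix diag(A 1_m)
L = D - A
# L is symmetric and its rows sum to zero, as required
assert np.allclose(L, L.T) and np.allclose(L @ np.ones(3), 0.0)
\end{verbatim}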
The space of $m \times m$ graph Laplacian matrices is of dimension $m(m-1)/2$ and is \begin{align}\label{eq:lapl space} \mathcal{L}_m=\lbrace \mathbf{L}=(l_{ij}):\mathbf{L}=\mathbf{L}^T ;\, l_{ij}\leq 0 \, \forall i\neq j ;\, \mathbf{L} {\mathbf{1}_m}={\mathbf{0}_m} \rbrace, \end{align} where $\mathbf{0}_m$ is the $m$-vector of zeroes. In fact the space $\mathcal{L}_m$ is a closed convex subset of the cone of centred symmetric positive semi-definite $m\times m$ matrices and $\mathcal{L}_m$ is a manifold with corners \citep{ginestet2017hypothesis}. Since the sample space $\mathcal{L}_m$ for graph Laplacian data is non-Euclidean, standard approaches to non-parametric regression cannot be applied directly. In this paper, we use the statistical framework introduced in \citet{Severnetal19} for extrinsic analysis of graph Laplacian data, in which ``extrinsic'' refers to the strategy of mapping data into a Euclidean space, where analysis is performed, before mapping back to $\mathcal{L}_m$. The choice of mapping enables freedom in the choice of metric used for the statistical analysis, and in various applications with manifold-valued data analysis there is evidence of advantage in using non-Euclidean metrics \citep{Drydenetal09,Pigolietal14}. A summary of the key steps in the extrinsic approach is: \begin{enumerate}[i)] \item embedding to another manifold ${\mathcal M}_m$ by raising the graph Laplacian matrix to a power, \item mapping from $\mathcal{M}_m$ to a (Euclidean) tangent space $T_\nu(\mathcal{M}_m)$ in which to carry out statistical analysis, \item inverse mapping from the tangent space $T_\nu(\mathcal{M}_m)$ to the embedding manifold $\mathcal{M}_m$, \item reversing the powering in i), and projecting back to graph Laplacian space $\mathcal{L}_m$, \end{enumerate} which we explain in more detail as follows. First write $\mathbf{L}=\mathbf{U}\boldsymbol{\Xi}\mathbf{U}^{T}$ by the spectral decomposition theorem, with $\boldsymbol{\Xi}=\text{diag}(\xi_1,\ldots,\xi_m)$ and $\mathbf{U}=(\mathbf{u}_1,\ldots,\mathbf{u}_m)$, where $\lbrace\xi_i\rbrace_{i=1,\ldots,m}$ are the eigenvalues, which are non-negative as any $\mathbf{L}$ is positive semi-definite, and $\lbrace\mathbf{u}_i\rbrace_{i=1,\ldots,m}$ are the corresponding eigenvectors of $\mathbf{L}$. We consider the following map which raises the graph Laplacian to the power $\alpha>0$: \begin{align} \begin{split} \text{F}_\alpha(\mathbf{L})&= \mathbf{L}^\alpha=\mathbf{U}\boldsymbol{\Xi}^\alpha\mathbf{U}^T : \mathcal{L}_m\rightarrow \operatorname{Image}(\mathcal{L}_m) \subset \mathcal{M}_m.\\ \end{split} \label{Fmap} \end{align} In this paper we take ${\cal M}_m$ to be the Euclidean space of symmetric $m \times m$ matrices. In terms of $\text{F}_\alpha(\cdot)$ we define the power Euclidean distance between two graph Laplacians as \begin{equation} d_\alpha( \mathbf{L}_1 , \mathbf{L}_2 ) = \| \text{F}_\alpha(\mathbf{L}_1) - \text{F}_\alpha(\mathbf{L}_2) \|, \label{eqn:power:euclidean:distance} \end{equation} where $\| \mathbf{X} \| = \sqrt{ {\rm trace}(\mathbf{X}^T \mathbf{X}) }$ is the Frobenius norm, also known as the Euclidean norm. For the special case $\alpha=1$, (\ref{eqn:power:euclidean:distance}) is just the Euclidean distance. \citet{Severnetal19} further considered a Procrustes distance which includes minimization over an orthogonal matrix, approximately allowing relabelling of the nodes, and in this case the embedding manifold $\mathcal{M}_m$ is a Riemannian manifold known as the size-and-shape space \citep[p.99]{Drydmard16}. 
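A minimal Python sketch of the power map $\text{F}_\alpha$ in (\ref{Fmap}) and the power Euclidean distance $d_\alpha$ in (\ref{eqn:power:euclidean:distance}) is given below; the eigenvalue clipping is a numerical safeguard against round-off, not part of the definition.
\begin{verbatim}
import numpy as np

# Sketch: the power map F_alpha and the power Euclidean distance d_alpha
# between two graph Laplacians (symmetric positive semi-definite matrices).
def F_alpha(L, alpha):
    xi, U = np.linalg.eigh(L)            # spectral decomposition L = U Xi U^T
    xi = np.clip(xi, 0.0, None)          # guard against tiny negative values
    return U @ np.diag(xi**alpha) @ U.T

def d_alpha(L1, L2, alpha=0.5):
    # Frobenius (Euclidean) norm of the difference of the powered matrices
    return np.linalg.norm(F_alpha(L1, alpha) - F_alpha(L2, alpha))
\end{verbatim}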
However, in this paper for simplicity and because the labelling is known, we shall just consider the power Euclidean metric. After the embedding the data are mapped to a tangent space $T_\nu(\mathcal{M}_{m})$ of the embedding manifold at $\nu$ using the bijective transformation $$\pi_\nu^{-1} : \mathcal{M}_{m} \to T_\nu(\mathcal{M}_{m}) . $$ In our applications we take $\nu=0$ and the tangent co-ordinates are $\mathbf{v} = {\rm vech}(\mathbf{H} \text{F}_\alpha(\mathbf{L}) \mathbf{H}^T)$, where ${\rm vech}()$ is the vector of elements above and including the diagonal and $\mathbf{H}$ is the Helmert sub-matrix \citep[p.49]{Drydmard16}. The main point of note is that the tangent space is a Euclidean space of dimension $m(m-1)/2$. Hence, multivariate linear statistical analysis can be carried out in $T_\nu(\mathcal{M}_{m})$; for example, \citet{Severnetal19} considered estimation, two-sample hypothesis tests, and linear regression. After carrying out statistical analysis, the fitted values are then transformed back to the graph Laplacian space as follows: \begin{equation} P_{\cal L} \circ \text{G}_\alpha \circ \pi_\nu : T_\nu(\mathcal{M}_{m}) \to \mathcal{L}_m \label{uniquemap} \end{equation} where $\text{G}_\alpha: \mathcal{M}_{m} \to \mathcal{M}_{m}$ is a map that reverses the power, and $P_{\cal L} : \mathcal{M}_{m} \to \mathcal{L}_{m} $ is the projection to the closest point in graph Laplacian space using Euclidean distance. The projection $P_{\cal L}$ is obtained by solving a convex optimization problem using quadratic programming, and the solution is therefore unique. See \citet{Severnetal19} for full details of this framework. For visualising results, it is useful to map the data and fitted regression lines in $\mathcal{L}_m$ into $\mathbb{R}^2$. In this paper we do so using principal component analysis (PCA) such that the two plotted dimensions reflect the two orthogonal dimensions of greatest sample variability in the tangent space \citep{Severnetal19}. \section{Nadaraya-Watson estimator for network data} \subsection{Nadaraya-Watson estimator} We first review the classical Nadaraya-Watson estimator \citep{10.2307/25049340,doi:10.1137/1110024} before defining an analogous version for data on $\mathcal{L}_m$. Consider the regression problem where we want to predict an unknown variable $y(\mathbf{x}) \in \mathbb{R}$ with known covariate $\mathbf{x} \in \mathbb{R}^p$ for the dataset of independent and identically distributed random variables $(\lbrace Y_1, \mathbf{X}_1\rbrace,\dots, \lbrace Y_n,\mathbf{X}_n\rbrace)$ observed at $(\lbrace y_1, \mathbf{x}_1\rbrace,\dots, \lbrace y_n,\mathbf{x}_n\rbrace)$ with $E\{ |Y|\} < \infty$. The aim is to estimate the regression function $$ m(\mathbf{x}) = E [ y(\mathbf{x}) | \mathbf{x} ] . $$ The Nadaraya-Watson estimator is \begin{align}\label{eq: NW estimator} \hat m(\mathbf{x})=\frac{\sum_{i=1}^nK_h(\mathbf{x}-\mathbf{x}_i)y_i}{\sum_{i=1}^nK_h(\mathbf{x}-\mathbf{x}_i)}, \end{align} where $K_h \ge 0$ is a kernel function with bandwidth $h>0$. Consider now a version of the regression problem with $y(\mathbf{x})$ replaced with an $m \times m$ graph Laplacian matrix $\mathbf{L}(\mathbf{x}) \in \mathcal{L}_m$ with known covariates $\mathbf{x} \in \mathbb{R}^p$ and dataset $(\lbrace \mathbf{L}_1, \mathbf{x}_1\rbrace,\dots, \lbrace \mathbf{L}_n,\mathbf{x}_n\rbrace)$ with $E\{ |(\mathbf{L})_{ij}|\} < \infty, i=1,\ldots,m; j=1,\ldots,m$. We wish to estimate the regression function $$ \boldsymbol{\Lambda}(\mathbf{x}) = E [ \mathbf{L}(\mathbf{x}) | \mathbf{x} ] . $$
A natural analogue of $\hat{m}(\mathbf{x})$ in \eqref{eq: NW estimator} for graph Laplacian data given covariate, $\mathbf{x} \in \mathbb{R}^p$, is \begin{align}\label{eq:euclidean nadaraya} \hat{\mathbf{L}}_{NW}(\mathbf{x})=\frac{\sum_{i=1}^nK_h(\mathbf{x}-\mathbf{x}_{i})\mathbf{L}_i}{\sum_{i=1}^nK_h(\mathbf{x}-\mathbf{x}_{i})} = \sum_{i=1}^n W_{hi}(\mathbf{x}) \mathbf{L}_i , \end{align} where $$W_{hi}(\mathbf{x}) = \frac{ K_h(\mathbf{x}-\mathbf{x}_{i}) } {\sum_{i=1}^nK_h(\mathbf{x}-\mathbf{x}_{i})} \ge 0$$ and note that $\sum_{i=1}^n W_{hi}(\mathbf{x}) = 1$. A common choice of kernel function is the Gaussian kernel \begin{align}\label{eq:kernel function} K_h(\mathbf{u})= \frac{1}{h\sqrt{2\pi}}\exp\left(-\frac{\Vert \mathbf{u}\Vert^2 }{2h^2}\right), \end{align} which is bounded above and strictly positive for all $\mathbf{u}$. We use a truncated version of \eqref{eq:kernel function} such that $K_h(\mathbf{u}) = 0$ for $\Vert \mathbf{u} \Vert > c $ (with $c$ large) in order that this truncated kernel has compact support, as required by theoretical results presented later. Wherever $\hat{\mathbf{L}}_{NW}$ is defined (meaning that at least one of the $K_h(\mathbf{x}-\mathbf{x}_{i})$ is non-zero) it is a sum of positively weighted graph Laplacians. Since the space $\mathcal{L}_m$ is a convex cone \citep{ginestet2017hypothesis}, closed under such positively weighted sums, it follows that $\hat{\mathbf{L}}_{NW}(\mathbf{x})\in \mathcal{L}_m$ as required. The estimator in (\ref{eq:euclidean nadaraya}) can equivalently be written as the graph Laplacian that minimises a weighted sum of squared Euclidean, $d_1$, distances to the sample data \begin{align}\label{eq:manifold nadaraya euc} \hat{\mathbf{L}}_{NW}(\mathbf{x}) =\arg\inf_{{\mathbf{L}}\in \mathcal{L}_m}\sum_{i=1}^n W_{hi}(\mathbf{x}) d_1(\mathbf{L}_i, \boldsymbol{\mathbf{L}})^2 = \arg\inf_{{\mathbf{L}}\in \mathcal{L}_m}\sum_{i=1}^n W_{hi}(\mathbf{x}) \| \mathbf{L}_i - \boldsymbol{\mathbf{L}} \|^2 , \end{align} using weighted least squares. In principle, the Euclidean distance $d_1$ in \eqref{eq:manifold nadaraya euc} can be replaced with a different distance metric, $d$, though solving for the estimator entails an optimisation on the manifold $\mathcal{L}_m$, which can be theoretically and computationally challenging. Hence instead we generalise to other distances via an extrinsic approach, and define \begin{align}\label{eq:manifold nadaraya} \hat{\mathbf{L}}_{NW,d}(\mathbf{x}) =\text{P}_\mathcal{L} \left(\arg\inf_{\boldsymbol{\mathbf{L}}\in \mathcal{M}_m}\sum_{i=1}^n W_{hi}(\mathbf{x}) d(\mathbf{L}_i, \boldsymbol{\mathbf{L}})^2\right), \end{align} which is simpler provided, as here, the embedding manifold $\mathcal{M}_m$ is chosen such that the optimisation is straightforward. The projection is needed to map back to graph Laplacian space ${\cal L}_m$. For the power Euclidean metric, $d_\alpha$, consider the Nadaraya-Watson estimator in the tangent space $T_\nu({\cal M}_m)$, \begin{equation} \hat{\mathbf{L}}_{ {\cal M}_m ,\alpha} (\mathbf{x}) = \sum_{i=1}^n W_{hi}(\mathbf{x}) \pi_\nu^{-1}( \text{F}_\alpha ( \mathbf{L}_i) ) , \end{equation} in terms of which, after mapping back to ${\mathcal L}_m$ using (\ref{uniquemap}), the resulting Nadaraya-Watson estimator in the graph Laplacian space is \begin{align} \hat{\mathbf{L}}_{NW,\alpha}(\mathbf{x}) = P_{\cal L} \circ \text{G}_\alpha \circ \pi_\nu \left( \hat{\mathbf{L}}_{{\cal M}_m,\alpha}(\mathbf{x}) \right). \label{powerNW} \end{align} When $\alpha=1$ this simplifies to (\ref{eq:euclidean nadaraya}).
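The following Python sketch implements the prediction for the simplest case $\alpha=1$, i.e.\ Eq.~(\ref{eq:euclidean nadaraya}), where the convex combination remains in $\mathcal{L}_m$ and no projection is required; for general $\alpha$ one would apply $\text{F}_\alpha$ to each $\mathbf{L}_i$ before averaging and then apply $\text{G}_\alpha$ and the quadratic-programming projection $P_{\mathcal{L}}$, both of which are omitted here.
\begin{verbatim}
import numpy as np

# Sketch: Nadaraya-Watson prediction of a graph Laplacian at covariate x
# for alpha = 1.  xs is a list of covariate vectors (or scalars) and Ls
# the corresponding list of m x m graph Laplacian matrices; the output is
# a convex combination of the Ls, hence itself a graph Laplacian.
def gaussian_kernel(u, h):
    return np.exp(-0.5 * (np.linalg.norm(u) / h) ** 2)

def nw_estimate(x, xs, Ls, h):
    w = np.array([gaussian_kernel(x - xi, h) for xi in xs])
    w = w / w.sum()                          # the weights W_hi(x)
    return sum(wi * Li for wi, Li in zip(w, Ls))
\end{verbatim}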
\subsection{Uniform weak consistency} First we show that the Nadaraya-Watson estimator for graph Laplacians (\ref{eq:euclidean nadaraya}) is uniformly weakly consistent. Let \begin{equation} J_{n} = E \left\{ \int \left\| \hat {\mathbf{L}}_{NW}(\mathbf{x}) - \boldsymbol\Lambda(\mathbf{x}) \right\|^2 \mu(\mathrm{d}\mathbf{x}) \right\} \end{equation} where $\mu$ is the probability measure of $\mathbf{x}$. \begin{proposition} Suppose the kernel function $K_h$ is non-negative on $\mathbb{R}^p$, bounded above, has compact support and is strictly positive in a neighbourhood of the origin. If $E\{ \| \mathbf{L} \|^2 \} < \infty$, as $h \to 0$ and $n h^p \to \infty$ it follows that $J_{n} \to 0$. Hence the Nadaraya-Watson estimator $\hat {\mathbf{L}}_{NW}(\mathbf{x})$ is uniformly weakly consistent for the true regression function $\boldsymbol\Lambda(\mathbf{x})$. \end{proposition} \begin{proof} {\rm Consider the univariate regression problem for the $(i,j)$th element of $\hat{\mathbf{L}}_{NW}(\mathbf{x})$. From \citet{devroye1980} we know that under the conditions of the proposition we have $$J_{n,ij} = E \left\{ \int \left| \left( \hat{\mathbf{L}}_{NW}(\mathbf{x})\right)_{ij} - \left(\boldsymbol\Lambda(\mathbf{x})\right)_{ij} \right|^2 \mu(\mathrm{d}\mathbf{x}) \right\} \to 0 \; , \; i=1,\ldots,m; j=1,\ldots,m,$$ as $h \to 0$ and $n h^p \to \infty$; and since $$ J_{n} = \sum_{i=1}^m\sum_{j=1}^m J_{n,ij},$$ thus $J_{n} \rightarrow 0$. } \end{proof} The result can be extended to the power Euclidean distance based Nadaraya-Watson estimator (\ref{powerNW}). Let \begin{equation} J_{n,\alpha} = E \left\{ \int \left\| \hat{\mathbf{L}}_{NW,\alpha}(\mathbf{x}) - \boldsymbol\Lambda(\mathbf{x}) \right\| ^2 \mu(\mathrm{d}\mathbf{x}) \right\} \end{equation} where $\mu$ is the probability measure of $\mathbf{x}$. \begin{proposition} Under the conditions of Proposition 1 it follows that $J_{n,\alpha} \to 0$. Hence the power Euclidean Nadaraya-Watson estimator $\hat{\mathbf{L}}_{NW,\alpha}(\mathbf{x})$ is uniformly weakly consistent for the true regression function $\boldsymbol\Lambda(\mathbf{x})$. \end{proposition} \begin{proof} {\rm First embed the graph Laplacians in the Euclidean manifold ${\cal M}_m$ and map to a tangent space $T_\nu({\mathcal M}_m)$. Consider the univariate regression problem for the $(i,j)$th element of $\pi_\nu^{-1}(\mathbf{F}_\alpha( \mathbf{L}(\mathbf{x}) ))$. Again from \citet{devroye1980,spiegelman1980} we know that under the conditions of the proposition we have uniform weak consistency in the tangent space. $$ J_{n,\alpha,ij} = E \left\{ \int \left| \left( \hat{ \mathbf{L}} _{ {\mathcal M}_m,\alpha} ( \mathbf{x}) \right)_{ij} - \pi_\nu^{-1}\left(\mathbf{F}_\alpha(\boldsymbol\Lambda(\mathbf{x}))\right)_{ij} \right|^2 \mu(\mathrm{d}\mathbf{x}) \right\} \to 0 \; , \; i=1,\ldots,m; j=1,\ldots,m,$$ as $h \to 0$ and $n h^p \to \infty$. Also, using the continuous mapping theorem and Pythagorean arguments as in \citet{Severnetal19}, we have $$ J_{n,\alpha} \le \sum_{i=1}^m\sum_{j=1}^m J_{n,\alpha,ij} \to 0 , $$ as $h \to 0$ and $n h^p \to \infty$. } \end{proof} \subsection{Bandwidth selection} The result that the power Euclidean Nadaraya-Watson estimator is uniformly weakly consistent gives reassurance that the method is a sensible practical approach to non-parametric regression for predicting networks.
The result is asymptotic, however, which leaves open the question of how to choose the bandwidth, $h$, in practice. One way to do so is to select it via cross validation \citep{Efron93} as follows. Denote by $\hat{\mathbf{L}}_{-i}(\mathbf{x}; h)$ a Nadaraya-Watson estimator at $\mathbf{x}$, based on distance metric $d$, with bandwidth $h$, trained on all the training observations excluding the $i$th. Selection of bandwidth by cross validation then involves choosing $h$ to minimise the criterion \begin{equation} \sum_{i=1}^n d\left(\mathbf{L}_i, \hat{\mathbf{L}}_{-i}(\mathbf{x}_i; h) \right)^2. \label{eqn:CV:criterion} \end{equation} \section{Application: Enron email corpus}\label{sec:enron} The Enron dataset was made public during the legal investigation of Enron by the Federal Energy Regulatory Commission \citep{klimt2004introducing} and an overview can be found in \citet{Diesner2005}. Similar to \citet{shetty2004enron}, we use these data to form social networks between $m=151$ employees; the data are available from \citet{Enrondata}. For each month we create a network with employees as nodes. The edges between nodes have weights that are the number of emails exchanged between the two employees in the given month. The networks we consider are for each of the whole months from June 1999 (month 1) to May 2002 (month 36), and we standardise by dividing by the trace of the graph Laplacian for each month. The aim is to model smooth trends in the structure of the dynamic networks as they evolve over time, and we also wish to highlight anomalous months where the network is rather unusual compared to the fitted trend. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{enron3a_newformat} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{enron4a_newformat} \end{subfigure} \caption{ {\textit{ Distances, $d(\mathbf{L}_{i-1}, \mathbf{L}_{i}), i=2,\ldots,36$, between consecutive observations for the monthly Enron networks for (a) the Euclidean metric, $d_1$, and (b) the square root Euclidean metric, $d_{\frac{1}{2}}$. }}}\label{fig:dist} \end{figure} In Figure \ref{fig:dist} we plot the distances between consecutive monthly graph Laplacians using Euclidean distance (a) and square root Euclidean distance (b). Some of the largest successive distances are at times $1-2, 6-7, 33-34, 34-35, 35-36$, and these are possible candidate positions for anomalous networks that are rather different. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{enron_euc_NW_bw2.eps} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{enron_sqrt_NW_bw1.eps} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{NW_dist_pred_true_euc_bw2_lines} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{NW_dist_pred_true_sqrt_bw1_lines} \end{subfigure} \caption{ {\textit{ PCA plots showing the data and Nadaraya-Watson curves, and residual plots for the Enron network data. In each plot the red digits indicate the observation number (month index). In the upper plots the black lines show the Nadaraya-Watson regression curves in the space of the first two principal components, using (a) the Euclidean metric, $d_1$, and $h=2$; (b) the square root Euclidean metric, $d_{\frac{1}{2}}$, and $h=1$.
We performed the calculations for various values $h=0.5,1,2,4,8$, and the chosen value of $h$ was that which was optimal with respect to \eqref{eqn:CV:criterion}. Plots (c) and (d) are corresponding residual plots showing the distance $d(\hat{\mathbf{L}}_i,{\mathbf{L}}_i) $ between the fitted values $\hat{\mathbf{L}}_i$ and the observations ${\mathbf{L}}_i$. }}}\label{fig:pca enron} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{Dmat_edit_simplex_enron} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{Dmat_edit_power_enron} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{simplex_rho_max_pc1} \end{subfigure} \begin{subfigure}[b]{0.47\textwidth} \caption{} \includegraphics[width=8.0cm]{power_rho_max_pc1} \end{subfigure} \caption{ {\textit{ The red digits indicate the month of the data. MDS plots using the Mahalanobis metric for (a) the Enron data with $\alpha=1$ and (b) with $\alpha=\frac{1}{2}$ using an overall estimate of $\rho$, and the Mahalanobis metric when $\rho$ is estimated to maximise the variance explained by PC1 for (c) the Enron data with $\alpha=1$ and (d) with $\alpha=\frac{1}{2}$. }}}\label{fig:edit mds enron} \end{figure} We provide a PCA plot of the first two PC scores in Figure \ref{fig:pca enron}(a),(b) and include the Nadaraya-Watson estimator projected into the space of the first two PCs. Here the bandwidth has been chosen by cross-validation as $h=2$ for the Euclidean case and $h=1$ for the square root metric. The Nadaraya-Watson estimator provides a smooth path through the data, and the structure is clearer in the square root metric plot. We are interested in finding anomalies in the Enron dynamic networks and so we compute the distances from each network to the fitted value from the Nadaraya-Watson estimate. Figure \ref{fig:pca enron} shows these residual distances of each graph Laplacian to the fitted Nadaraya-Watson values for (c) the Euclidean metric and (d) the square root metric. Some of the largest residuals are months 1,7,35 for Euclidean and 7,33,34,35 for the square root metric, and these are candidates for anomalies. From Figure \ref{fig:pca enron}(b) it looks like there is an approximate horseshoe shape in the PC score plot, which could be an example of the horseshoe effect \citep{kendall1971abundance, diaconis2008horseshoes, morton2017uncovering}. We might conclude from these plots that there is a change point in the data around months 20-26, but this may be misleading \citep{doi:10.1098/rsta.1970.0091}. As explained in \citet[page 412]{Mardiaetal79}, the horseshoe effect occurs when distances between data points that are ``large'' appear the same as those that are ``moderate''. \citet{morton2017uncovering} described this as a ``saturation property'' of the metric, and so on the PCA plot the point corresponding to a ``large'' time is pulled in closer to time 1 than we intuitively would expect.
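The saturation mechanism can be illustrated with a small simulation, sketched below in Python with entirely artificial distances: points lie on a line, but all separations beyond a threshold $c$ appear equal, and classical MDS of the saturated distances bends the line into a horseshoe.
\begin{verbatim}
import numpy as np

# Sketch: the horseshoe effect from a saturating metric.  Points lie on a
# line, but all distances beyond a threshold c look the same; classical MDS
# of the saturated distances then bends the line into a horseshoe.
n, c = 36, 5.0
D = np.minimum(np.abs(np.subtract.outer(np.arange(n), np.arange(n))), c)
J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
B = -0.5 * J @ (D**2) @ J                  # double-centred squared distances
vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.clip(vals[-2:], 0.0, None))
# plotting coords[:, 1] against coords[:, 0] traces out a horseshoe
\end{verbatim}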
As an alternative to PCA that seeks to address this horseshoe effect, we consider multidimensional scaling (MDS) with a Mahalanobis metric in the tangent space \citep[p.31]{Mardiaetal79} between two graph Laplacians $\mathbf{L}_k$ and $\mathbf{L}_l$, at times $k$ and $l$ respectively, which is: \begin{align*} \sqrt{(\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l))-\boldsymbol{\mu})^T\boldsymbol{\Sigma}_{kl}^{-1}(\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l))-\boldsymbol{\mu})}, \end{align*} where $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}_{kl}$ are the mean and covariance matrix of $\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l))$ respectively. Here we take $\boldsymbol{\mu}$ as zero and consider an isotropic AR(1) model which has covariance matrix $$ \boldsymbol{\Sigma}_{kl}=\frac{\sigma^2\rho^{\vert k-l \vert}}{1-\rho}\mathbf{I}_\frac{m(m-1)}{2}, $$ which is a diagonal matrix where the diagonal elements are the variances of the elements, and we have assumed zero covariance between any other elements. Writing $\mathbf{y}_k=\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))$ and $\mathbf{v}_k=\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_{k-1}))$ we estimate $\rho$ by least squares $ \rho =\frac{\sum_{k=2}^{n}(\mathbf{y}_k^T\mathbf{v}_k)}{\sum_{k=2}^{n}(\mathbf{v}_k^T\mathbf{v}_k)}, $ and we take $\sigma = 1$ as this is just an overall scale parameter. The Mahalanobis metric between graph Laplacians, $\mathbf{L}_k$ and $\mathbf{L}_l$, can now be written as \begin{align*} &=\sqrt{\frac{1-\rho}{\rho^{\vert k-l \vert}}\left(\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l))\right)^T\left(\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l))\right)}\\ &=\sqrt{\frac{1-\rho}{\rho^{\vert k-l \vert}}} \Vert \pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_k))-\pi_0^{-1}(\text{F}_\alpha(\mathbf{L}_l)) \Vert =\sqrt{\frac{1-\rho}{\rho^{\vert k-l \vert}}} \Vert \text{F}_\alpha(\mathbf{L}_k)- \text{F}_\alpha(\mathbf{L}_l) \Vert = \sqrt{\frac{1-\rho}{\rho^{\vert k-l \vert}}} d_{\alpha}( \mathbf{L}_k , \mathbf{L}_l ) . \end{align*} The plots of MDS with the Mahalanobis distance are given in Figure \ref{fig:edit mds enron}(a)-(b). In both plots there are large distances between the first few and last few observations compared to the central observations, which is broadly in keeping with Figure \ref{fig:dist}(a),(b), although the middle observations do seem too close together in the MDS plots. We consider an alternative estimate of $\rho$, chosen to maximise the variance explained by the first PC scores for each example, shown in Figure \ref{fig:edit mds enron}(c)-(d). These final MDS plots are more in agreement with the distance plots of Figure \ref{fig:dist}. Finally, considering the main features of all the results in Figures \ref{fig:dist}--\ref{fig:edit mds enron}, we see that the 7th, 34th and 35th months stand out as strong anomalies. The 7th month corresponds to December 1999, and this is picked out as an anomaly in \citet{wang2014locality}, believed to coincide with Enron's tentative sham energy deal with Merrill Lynch created to meet profit expectations and boost the stock price.
Months 34 and 35 correspond to March and April 2002, when the former Enron auditor, Arthur Andersen, was indicted for obstruction of justice \citep{guardianEnron}.
\section{Application: 19th century novel networks} \label{ex: Nw novels}
We consider an application in which it is of interest to analyze dynamic networks from the novels of Jane Austen and Charles Dickens. The 7 novels of Austen and 16 novels of Dickens were represented as samples of network data by \citet{Severnetal19}. Each novel is represented by a network in which each node is a word, and edges are formed with weights proportional to the number of times a pair of words co-occurs closely in the text. For each novel we produce a network counting pairwise word co-occurrences, where words are said to co-occur if they appear within five words of each other in the text. A choice that needs to be made is whether to allow co-occurrences across sentence and chapter boundaries \citep[Section 3]{evert2008corpora}; for this dataset we allow them. The data are obtained from CLiC \citep{doi:10.3366/cor.2016.0102}.
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.49\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_euc_bw1} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_sqrt_bw1} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_euc_bw2} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_sqrt_bw2} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_euc_bw4} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \caption{ } \includegraphics[trim={0 1cm 0 0.5},clip]{NWregression_sqrt_bw4} \end{subfigure} \vspace{0.5cm} \caption{ Regression paths for Austen novels (blue) between the years 1794 to 1815, numbered 1-7 according to the chronology of the novels, and for Dickens novels (red) between the years 1836 to 1870, numbered 8-23; using (left to right) $d=d_1$ and $d=d_\frac{1}{2}$, with bandwidth (top to bottom) $h=1,\, 2,\, 4$, of which $h=4$ gave the smallest value of the cross-validation criterion \eqref{eqn:CV:criterion}. }\label{fig:NW regression novels} \end{figure}
We take the node set $V$ as the $m=1000$ most common words across all the novels of Austen and Dickens. As a pre-processing step, we normalise each graph Laplacian by dividing it by its own trace, resulting in a trace of 1 for each novel; this removes the gross effects of the differing lengths of the novels. Our key statistical goal is to investigate the authors' evolving writing styles by carrying out non-parametric regression with a graph Laplacian response on the year $t$ in which each novel was written. We apply Nadaraya-Watson regression to the Jane Austen and Charles Dickens networks separately to predict their writing styles at different times; the response is a graph Laplacian and the covariate is the time $t$ for each novel, with a separate regression for each novelist. We compared using the metrics $d_1$ and $d_\frac{1}{2}$. For each author a Nadaraya-Watson estimate was produced for every 6 months within the period the author was writing. We compared different bandwidths, $h$, in the Gaussian kernel.
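For concreteness, a minimal sketch of the network construction and trace normalisation described above (hypothetical names; the actual data come from CLiC), building one novel's graph Laplacian from a token sequence:

```python
import numpy as np

def novel_laplacian(tokens, vocab, window=5):
    # Edge weight counts co-occurrences of two vocabulary words within
    # `window` words of each other (crossing sentence/chapter boundaries),
    # and the Laplacian is scaled to unit trace.
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for i, u in enumerate(tokens):
        if u not in idx:
            continue
        for v in tokens[i + 1:i + 1 + window]:
            if v in idx and v != u:
                A[idx[u], idx[v]] += 1
                A[idx[v], idx[u]] += 1
    L = np.diag(A.sum(axis=1)) - A
    return L / np.trace(L)   # trace normalisation removes gross length effects
```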
The results are shown in Figure \ref{fig:NW regression novels}, plotted in the space of the first and second principal components for all the novels. For Dickens, with $h=1$ the regression lines are not at all smooth for either metric. With $h=2$, for both metrics the regression line for Dickens appears to show an initial smooth trend, then a turning point around the years 1850 and 1851 (between David Copperfield and Bleak House, which are novels 16 and 17 in Figure \ref{fig:NW regression novels}). After 1851 there is much less dependence on time. This change in structure is especially evident in the $h=4$ plot for both metrics, which has the smallest value of the cross-validation criterion \eqref{eqn:CV:criterion} of the choices $h \in \{ 1,2,4 \}$. The year 1851 was a tragic one for Dickens: his wife had a nervous breakdown, his father died, and his youngest child died \citep{charlesDickens}. We see that the possible turning point is around the same time as these significant events. As there are far fewer novels by Austen, it is less obvious whether there is any turning point in her writing; however, it is clear that Lady Susan (novel 1) is an anomaly, as it does not fit the regression curve, which follows Austen's other works more closely. Lady Susan is Austen's earliest work, a short novella published 54 years after her death.
\section{Discussion}
The two applications presented involve a scalar covariate, but the Nadaraya-Watson estimator is applicable to more general covariates, e.g. spatial covariates. A further extension would be to adapt the method of kriging, also referred to as Gaussian process prediction. Kriging is a geospatial method for prediction at points on a random field \citep[e.g. see][]{Cressie93}, and \citet{Pigolietal16} considered kriging for manifold-valued data. The kriging predictor of an unknown graph Laplacian $\mathbf{L}(\mathbf{x})$ on a random field with known coordinates $\mathbf{x}$ for the dataset $(\lbrace \mathbf{L}_1, \mathbf{x}_1\rbrace,\dots, \lbrace \mathbf{L}_n,\mathbf{x}_n\rbrace)$ is of the form $Z(\mathbf{x})=\sum_{i=1}^n b(\mathbf{x}_i)\mathbf{L}_i$, where the weights, $b(\mathbf{x}_i)$, are determined by minimizing the mean square prediction error for a given covariance function. The Nadaraya-Watson estimator can also be applied in a reverse setting where some variable $\mathbf{t}_i$ depends on the graph Laplacian $\mathbf{L}_i$, which can be written as $\mathbf{t}_i=\mathbf{t}(\mathbf{L}_i)$. This could be used if, for example, one had the times at which networks were produced and wanted to predict the time at which a new network was produced. In this case the Nadaraya-Watson estimator is a linear combination of the known $\mathbf{t}_i$ values, weighted by the graph Laplacian distances, given by \begin{align}\label{eq: NW II} \hat{\mathbf{t}}(\mathbf{L})=\frac{\sum_{i=1}^nK_h(d(\mathbf{L}, \mathbf{L}_{i}))\mathbf{t}_i}{\sum_{i=1}^nK_h(d(\mathbf{L}, \mathbf{L}_{i}))}, \end{align} where $d$ can be any metric between two graph Laplacians. \citet{Severn19} provided an application of this approach using the Gaussian kernel defined in (\ref{eq:kernel function}), predicting the time that a novel was written using the network graph Laplacian as a covariate. Other metrics could also be used, for example the Procrustes metric of \citet{Severnetal19}. To solve (\ref{eq:manifold nadaraya}) for the Procrustes metric, one can implement the algorithm for obtaining a weighted generalised Procrustes mean given in \citet[Chapter 7]{Drydmard16}.
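Returning to \eqref{eq: NW II}, a minimal sketch of the reverse estimator (hypothetical names, using the Frobenius metric and assuming the kernel takes the usual Gaussian form):

```python
import numpy as np

def predict_time(L_new, Ls, ts, h):
    # Reverse Nadaraya-Watson: weight the known times t_i by kernel
    # evaluations of the distances d(L_new, L_i); d is Frobenius (alpha = 1)
    d = np.array([np.linalg.norm(L_new - L) for L in Ls])
    w = np.exp(-0.5 * (d / h) ** 2)   # assumed Gaussian kernel form
    return (w @ ts) / w.sum()
```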
In Euclidean space there are more general results for the Nadaraya-Watson estimator, including weak convergence in the $L_p$ norm rather than just the $p=2$ results that we have used \citep{devroye1980,spiegelman1980}. Further results, including strong consistency, also exist; see, e.g., \citet{Walk2002}. It will be interesting to explore which of these results can be extended to graph Laplacians, although the additive properties of the $p=2$ case have been particularly important in our work.
\section*{Acknowledgments}
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/T003928/1]. The datasets were derived from the following resources available in the public domain: The Network Repository (http://networkrepository.com) and CLiC (https://clic.bham.ac.uk).
\bibliographystyle{apalike}
{ "attr-fineweb-edu": 1.263672, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbfvxaKgS2EKBg1Xa
\section*{Methods} \subsection*{Laser system and CEP control}
The experiment was conducted using a double Chirped-Pulse Amplification (CPA) system delivering 10\,mJ, 25\,fs pulses at a center wavelength of 800\,nm and a repetition rate of 1\,kHz. The laser spectrum is then broadened by propagation in a 2.5\,m long hollow-core fiber differentially filled with helium \cite{bohle2014,ouille_relativistic-intensity_2020} and re-compressed with chirped mirrors to 4\,fs pulses centred at $\lambda_0$ = 760\,nm. The pulse duration is measured in vacuum using the d-scan technique \cite{miranda12}. The final energy on target is 2.5\,mJ per pulse. The laser beam is tightly focused by an f/2 off-axis parabola to a $2.7\times2.8\,\mathrm{\mu m}$ FWHM spot, resulting in a vacuum peak intensity of I = $5\times10^{18}\,\mathrm{W.cm^{-2}}$. The CEP stabilization is done in two loops, the first of which is a fast feedback loop on the oscillator that modulates the power of the pump laser (managed by an XPS800 by Menlo Systems, Garching, Germany). A second feedback loop stabilizes the CEP after amplification, spectral broadening and compression. It uses a wedge reflection of our probe beam (a fraction of the main beam split off by a holed mirror and used for plasma diagnostics), which is sent to an f-2f interferometer \cite{kakehata2001single} consisting of a \textbeta-barium borate crystal for frequency doubling and a polarizer to project the fundamental and second harmonic polarizations onto the same axis. The interference spectrum is analyzed shot-to-shot by a Fringeezz \cite{lucking2014approaching} (Fastlite, Antibes, France) to measure changes in the CEP. This measurement is fed back to an acousto-optic programmable dispersive filter (Fastlite, Antibes, France) in the first amplification stage to stabilize the CEP. This system stabilizes the CEP with a shot-to-shot dispersion between 240\,mrad RMS and 550\,mrad RMS, depending on the target value (see Supplementary Material). As the system measures CEP changes, and not the absolute CEP, there is an arbitrary offset between the curves from the experiment and from the simulation. We shifted the simulated curve in Figure \ref{fig:simu}e such that the two curves are in phase.
\subsection*{Target and detectors}
The laser is focused 150\,$\mathrm{\mu m}$ from the exit of a supersonic ``de Laval'' nozzle with a 60\,$\mathrm{\mu m}$ throat and 180\,$\mathrm{\mu m}$ exit diameter. The gas jet flows continuously thanks to a pumping system that keeps the residual gas pressure inside the chamber below $10^{-2}$\,mbar. We measured the gas jet density profile using a quadri-wave lateral shearing interferometer and deduced the plasma density assuming full L-shell ionization of neutral nitrogen $\mathrm{N}_2$ into $\mathrm{N}^{5+}$. We estimate the peak plasma density obtained in the experiment with a 15\,bar backing pressure to be about $n_e=1.4\times10^{20}\,\mathrm{cm^{-3}}$. In order to keep the density profile as constant as possible throughout the experiment, a pressure controller ensures sub-percent stability of the backing pressure applied to the gas jet. The electron beam charge and distribution are measured with a calibrated CsI(Tl) phosphor screen imaged on a CCD camera. The energy of the electrons is measured by inserting a removable spectrometer consisting of a 500\,$\mathrm{\mu m}$ pinhole and two permanent circular magnets providing a 58\,mT magnetic field.
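Returning to the f-2f measurement described above, the retrieval principle can be sketched as follows: the interference fringes in the spectral overlap region shift rigidly with the CEP, so a shot-to-shot CEP change appears as a phase shift of the dominant Fourier sideband of the spectrum. A minimal simulation of this principle, with purely illustrative numbers and not the actual Fringeezz processing:

```python
import numpy as np

omega = np.linspace(2.2, 2.8, 4096)  # angular frequency grid in the overlap region (rad/fs)
tau = 60.0                           # assumed fundamental/second-harmonic group delay (fs)

def f2f_spectrum(cep, contrast=0.8):
    # fringes cos(omega*tau + CEP) riding on the spectral overlap
    return 1.0 + contrast * np.cos(omega * tau + cep)

def fringe_sideband(spec, k=None):
    # complex amplitude of the dominant fringe component
    ft = np.fft.rfft(spec - spec.mean())
    if k is None:
        k = int(np.argmax(np.abs(ft[1:])) + 1)
    return ft[k], k

c1, k = fringe_sideband(f2f_spectrum(cep=0.30))
c2, _ = fringe_sideband(f2f_spectrum(cep=0.75), k)
# relative sideband phase: constant instrumental phases cancel shot to shot
dphi = np.angle(c2 * np.conj(c1))
print(f"retrieved CEP change: {dphi:.3f} rad (true value 0.450 rad)")
```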
During the experiments, the continuously flowing gas jet allows us to operate the laser-plasma accelerator at the full laser repetition rate of 1\,kHz. \subsection*{Simulations} For the simulations we used the fully relativistic electromagnetic Particle-in-Cell code FBPIC \cite{lehe_spectral_2016}, equipped with the pseudo-spectral analytical time domain (PSATD) quasi-cylindrical solver. The PSATD electromagnetic solver is free of numerical dispersion and provides a high-accuracy description of laser propagation and laser-particle interactions, and the quasi-cylindrical geometry yields a correct three-dimensional description at moderate computational cost. The simulation mesh is $\Delta z$ = $\lambda_0/60$ and $\Delta r=5\Delta z$. Five azimuthal Fourier modes were used to properly capture all asymmetries. The simulations were initialized with pure neutral nitrogen, and ionization was calculated with the ADK model of tunnel ionization \cite{ammosov_tunnel_1986}. Atomic nitrogen was initialized using 96 macroparticles per $r$-$z$ cell, and each such macroparticle could produce up to 7 electron macroparticles via ionization. Idealized Gaussian temporal and spatial laser profiles were used, with waist and pulse duration matching the experiment, and a pulse energy of 2.3\,mJ. Dispersion in the plasma was pre-compensated by adding a $5\,\mathrm{fs}^2$ positive chirp. For the simulated plasma profile, we used a combination of two super-Gaussian functions to fit the experimentally measured profile, with a peak density of $1.8\times10^{20}\,\mathrm{cm^{-3}}$. The laser focus position was placed $\SI{25}{\micro\meter}$ upstream of the center of the profile.\par
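As a rough consistency check of the parameters above, one can estimate the peak intensity, normalized vector potential $a_0$, and the ratio of plasma to critical density under idealized Gaussian assumptions. Since the real compressed pulse and focal spot are not perfectly Gaussian, this estimate lands somewhat above the quoted $5\times10^{18}\,\mathrm{W.cm^{-2}}$, which is only to be expected; either value corresponds to a relativistic intensity ($a_0>1$) and a strongly underdense plasma.

```python
import numpy as np

# Quoted values from the text; Gaussian-profile formulas are an idealization.
E_pulse = 2.5e-3           # J, energy on target
tau = 4e-15                # s, FWHM pulse duration
a, b = 2.7e-4, 2.8e-4      # cm, focal spot FWHM along the two axes
lam = 0.76                 # micron, center wavelength

P_peak = 0.94 * E_pulse / tau                      # W; Gaussian pulse: P ~ 0.94 E/tau
I_peak = P_peak * 4 * np.log(2) / (np.pi * a * b)  # W/cm^2; elliptical Gaussian spot
a0 = 0.855 * lam * np.sqrt(I_peak / 1e18)          # normalized vector potential (linear pol.)
n_c = 1.1e21 / lam**2                              # cm^-3, critical density
print(f"I ~ {I_peak:.1e} W/cm^2, a0 ~ {a0:.1f}, n_e/n_c ~ {1.4e20 / n_c:.2f}")
```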
{ "attr-fineweb-edu": 1.87207, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbhM4uzlgqIdPi-wp
\section{Introduction} \label{se:intro} We consider measurement error that results from using predictions from a first-stage statistical model as the covariate of interest (the exposure) in a second-stage association study. Regardless of the exposure prediction model, there will be measurement error from the difference between predictions and the unmeasured true values. In contrast with standard measurement error models and the usual classification into classical and Berkson error, such predictions induce a complicated form of measurement error that is heteroscedastic and correlated across study subjects \citep{Gryparis2009,Szpiro2011biostats}. Our objectives are to characterize the effects of this error, to give guidelines for study design to minimize the impact, and to provide a correction method that reduces bias and gives valid confidence intervals. In this section, we review the literature on measurement error correction for air pollution cohort studies and describe how our approach advances the state of the art (Section~\ref{se:intro1}), comment on connections between our work and fundamental statistical issues concerning the interpretation of random effects models and the interplay between random vs. fixed covariate regression and misspecified mean models (Section~\ref{se:intro2}), and outline the main sections of the paper (Section~\ref{se:intro3}). \subsection{Measurement error} \label{se:intro1} There has been extensive research on measurement error \citep{Carroll2006}, but the statistical literature has only recently begun to deal with the problem presented here. For spatial exposure contrasts, we have generalized the standard categories by decomposing the measurement error into a Berkson-like component from smoothing the exposure surface and a classical-like component from variability in estimating exposure model parameters \citep{Gryparis2009,Szpiro2011biostats,Sheppard2011}. We and others have also shown that the parametric bootstrap (or a computationally efficient approximation to the parametric bootstrap) can be used to correct for the effects of measurement error \citep{Madsen2008, lopiano2011comparison,Szpiro2011biostats}. However, validity of these results depends crucially on having a correctly specified exposure model. In practice such models are developed for predictive performance and often use predictors based on convenience, so we believe misspecification is ubiquitous. A distinguishing feature of our methodology in this paper is that it is robust to misspecification of the mean and/or variance in the exposure model and still provides valid second-stage inference. We focus on two-stage analysis, as this is a common and practical approach when exposure is not directly measured. An alternative is a unified analysis in which the exposure model is a component of a joint model for the exposure and health data (e.g., \cite{Sinha2010} in nutritional epidemiology and \cite{Gryparis2009} in air pollution epidemiology), but this type of joint model has several difficulties. First, it presupposes that one has a correct (or at least nearly correct) exposure model; we argue that an exposure model can generally only capture a portion of the full exposure and should be treated in this light. Second, outlying second-stage data may influence estimation of the exposure model in unexpected ways, especially when the second-stage model is misspecified (noting at the same time that this feedback is an essential aspect of a coherent joint model and leads to increased efficiency). 
Third, the same exposure data are often used with multiple second-stage outcome data, and it is scientifically desirable to use the same predicted exposures across studies. Finally, exposure modeling can be computationally demanding, involving spatial and spatio-temporal prediction, so pursuing a two-stage strategy has practical appeal. For further elaboration of these points, see \citet{bennett2001errors}, \citet{wakefield2006health}, \citet{Gryparis2009}, \citet{lunn2009combining}, and \citet{Szpiro2011biostats}. We and others have also evaluated standard correction methods such as regression calibration, including using personal measurements as validation data \citep{Gryparis2009, Spiegelman2010, Szpiro2011biostats}. Performance in the spatial setting is mixed, most likely because the error structure differs substantially from classical measurement error. The methodology we describe here directly accounts for the spatial characteristics of measurement error and relies on statistical estimates of uncertainty from the exposure model rather than validation data. A key application, and the one that motivates this work, is studying the health effects of chronic exposure to ambient air pollution. Long-term air pollution exposure has been linked with increased cardiovascular morbidity and mortality in prominent studies that form part of the basis for regulations with broad economic impact \citep{Dockery1993,Pope2002,Peters2002}. Early air pollution cohort studies focused on mortality \citep{Dockery1993,Pope2002}, while more recent work has shown associations with non-fatal cardiovascular events \citep{Miller2007} and sub-clinical indicators of disease \citep{Kunzli2005,VanHee2009,Adar2010,van2011association}. In general, the health risk of air pollution to any single individual is thought to be small, but there are important public health implications because of the large number of people exposed and the ability of governments to mitigate exposure through regulatory action \citep{Pope2006}. In air pollution studies, exposure modeling is motivated by the desire to estimate intra-urban (i.e., within a metropolitan area) variation in exposure, which is more difficult to quantify than inter-urban pollution contrasts. There are significant advantages to exploiting intra-urban contrasts, as this can increase statistical power to detect health effects, help rule out unmeasured confounding by city or region, and improve our ability to differentiate between the effects of different pollutants or pollutant components. Pollution data are typically available from regulatory and research monitoring networks but not from long-term residential or personal monitoring of individuals participating in observational health studies, leading to a spatial misalignment problem. Typical exposure prediction models rely on monitoring data in a regression with geographically-varying covariates and smoothing by splines or kriging \citep{Fanshawe2008,Jerrett2005a, Hoek2008, Su2009, Yanosky2009, Szpiro2009, Brauer2010}. Standard practice is to select an exposure model with good prediction accuracy, treat the predicted exposures as known, and plug them into a health model to estimate the association of interest without accounting for measurement error \citep{Jerrett2005, Kunzli2005,kim2009,Puett2009,Adar2010,van2011association}. While motivated by air pollution epidemiology, the core measurement error ideas in this paper have much broader relevance. 
Indeed, \cite{Prentice2010} (in the 2008 RA Fisher lecture at the Joint Statistical Meetings) states that ``measurement error in exposure assessment may be a potentially dominating source of bias in such important prevention research areas as nutrition and physical activity epidemiology.'' It is essential to better understand the implications of measurement error in a wide variety of applications in which one must first estimate exposure. These applications include (1) nutritional epidemiology, (2) physical activity epidemiology, (3) environmental and occupational epidemiology, (4) exposure to disease vectors or infectious agents, and (5) two-stage analyses in functional data contexts. Statistical exposure models are commonly used in environmental and occupational epidemiology \citep{Dement1983,Preller1995,Stram1999, Ryan2007, Slama2007}, with kriging and land use regression particularly popular in air pollution research. More generally, proxy data are becoming increasingly available and a natural idea in many contexts is to model an exposure of interest given publicly available data. Such data could include remote sensing from satellites or large networks of inexpensive sensors deployed to measure physical phenomena. \subsection{Connections to other fundamental statistical issues} \label{se:intro2} In addition to advancing measurement error research, our development emphasizes the relationships between certain foundational issues in applied statistics that are of current interest in the field, specifically the interpretation of random effects models and the interplay between random vs. fixed covariate regression and misspecified mean models. As discussed in Section~\ref{se:dgm}, we have chosen to condition on a fixed but unknown spatial air pollution surface, rather than taking the more conventional geostatistical approach of modeling an unknown spatial surface as a random effect or spatial random field \citep{Cressie1993, Banerjee2004}. The repercussions of this decision are related to the more general question of how to interpret random effects models in light of reasonable assumptions about the true data-generating mechanism, and whether this terminology is even adequate for describing the range of problems to which random effects-based algorithms are currently applied \citep{Gelman2005, Hodges2010}. Indeed, in a new book on richly parameterized models, \citet{Hodges2013} points out that our particular modeling framework illustrates an important practical difference in inferential methodology between what he calls `old' and `new' style random effects. As discussed in Sections~\ref{se:dgm}--\ref{se:exposEst}, we regard the entire unknown exposure surface as part of the mean in a finite rank regression, rather than allocating the spatial component to the variance by means of a random effect, so we must address the consequences of a misspecified mean model. In addition, we regard the exposure monitor locations as random rather than fixed (since they could presumably vary between hypothetical repeated experiments), so we are in the setting of random covariate regression with a misspecified mean model. \citet{Buja2013} and \citet{Szpiro2010a} have recently discussed some implications of the distinction between fixed and random covariates when the mean model is misspecified.
In fact, the seminal paper by \citet{White1980} on sandwich covariance estimators includes the case of a misspecified mean model, but perhaps in part because the title focuses on heteroscedasticity, applications of the sandwich estimator tend to focus only on the importance of non-constant variance. One often neglected consequence of the ``conspiracy of model violation and random X'' \citep{Buja2013} that is important in our development is that regression parameter estimates are not quite unbiased. We provide an approach to characterizing and estimating the asymptotic bias (see equation (\ref{eq:biasgammahat.body}) and the surrounding discussion) that is, as far as we know, novel. \subsection{Outline of paper} \label{se:intro3} Section 2 presents our basic framework, a key feature of which is that it avoids the assumption that the exposure model is correct and instead projects exposure data into a lower dimensional space. We present conditions on the compatibility of the first and second stage designs that have important real-world design and analysis implications. Section 3 decomposes the resulting measurement error into Berkson-like and classical-like components. Under the compatibility conditions, we show that there is essentially no bias from the Berkson-like error, although this component of the error still increases variability of second-stage effect estimates. We then derive asymptotic estimates of the bias and variance caused by the classical-like error. Section 4 describes our measurement error correction approach, wherein we correct for bias from the classical-like error using our asymptotic results and estimate the uncertainty, including that from both sources of measurement error, using a form of the nonparametric bootstrap. Sections 5 and 6 present simulations and an example application to the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air). \section{Analytical framework} \subsection{Data-generating mechanism} \label{se:dgm} We develop an analytic framework for air pollution cohort study data that also more generally illustrates how one can use a measurement error paradigm to formalize two-stage analysis with a misspecified first-stage model. While previous work has modeled the spatial variation in air pollution as a random field \citep{Gryparis2009,Szpiro2009,Szpiro2011biostats}, we regard the spatial surface itself as fixed and treat the data locations as stochastic. This avoids the philosophical difficulties inherent in attributing spatial structure of long-term average air pollution to a stochastic spatial process that would be different in a hypothetical repeated experiment. Long-term average air pollution concentrations over one or more years are predominately determined by fixed but complex climatological, economic, and geographic systems, so it is scientifically preferable to regard the unknown surface as deterministic. Thus, we condition on the fixed physical world in the time period of the study and consider a repeated sampling framework in which observations might have been collected at different locations according to a (not necessarily known) study design. In Section~\ref{se:disc}, we discuss the implications of this approach when considering shorter-term air pollution exposures. 
More formally, consider an association study with health outcomes $y_i$ and corresponding exposures $x_i$ for subjects $i=1,\ldots,n$ at geographic locations ${\mathbf{s}}_i \in \mathbb R^2$, with additional health model covariates ${\mathbf{z}}_i =(z_{i1},\ldots,z_{ip}) \in \mathbb R^p$, including an intercept. Consider a linear model, \begin{equation} \label{eq:healthlin} y_i=x_i\beta+{\mathbf{z}}_i {\boldsymbol\beta}_z +\epsilon_i, \end{equation} where conditional on covariates the $\epsilon_i$ are independent but not necessarily identically distributed, satisfying $E(\epsilon_i)=0$. Our target of inference is the health effect parameter, $\beta$. If the $x_i$ and ${\mathbf{z}}_i$ were observed without error, inference for $\beta$ would be routine by ordinary least squares (OLS) and sandwich-based standard error estimates \citep{White1980}. We are interested in the situation where the $y_i$ and ${\mathbf{z}}_i$ are observed for all subjects, but instead of the actual subject exposures we observe monitoring data, $x^*_j$, for $j=1,\ldots,n^*$, at different locations ${\mathbf{s}}^*_j$. Nonlinear health models are of course important and are the subject of ongoing research, but the linear setting is helpful for developing the general framework and our specific asymptotic results. We emphasize that we regard the spatial locations ${\mathbf{s}}_i$ and ${\mathbf{s}}^*_j$ of study subjects and monitors as realizations of spatial random variables. The locations are chosen at the time of the study design, and it is natural to regard them as stochastic in order to address the statistical question of how the estimates of $\beta$ would vary if different locations were selected according to similar criteria. Thus, in our development we assume the ${\mathbf{s}}_i$ and ${\mathbf{s}}^*_j$ are distributed in $\mathbb R^2$ with unknown densities $g({\mathbf{s}})$ and $h({\mathbf{s}})$, respectively, and corresponding distribution functions $G({\mathbf{s}})$ and $H({\mathbf{s}})$. Throughout, we assume the subject locations are chosen independently of the monitoring locations. To simplify the exposition, we further assume in Sections~\ref{se:decomp} and~\ref{se:correction} that both sets of locations are i.i.d. It is straightforward to account for clustering of subject or monitor locations; see, for example, the simulation study in Section~\ref{se:spatial.ex} and the data analysis in Section~\ref{se:mesa}. Conditional on the ${\mathbf{s}}_i$, we assume the $x_i$ satisfy \[ x_i = \Phi({\mathbf{s}}_i) + \eta_i, \] with i.i.d. mean zero $\eta_i$. The function $\Phi({\mathbf{s}})$ is a deterministic spatial surface that is potentially predictable by covariates and spatial smoothing, and the $\eta_i$ represent variability between exposures for subjects at the same physical location. We assume an analogous model for the monitoring data at locations ${\mathbf{s}}^*_j$, with the same deterministic spatial field $\Phi({\mathbf{s}}_j^*)$ and with instrument error represented by $\eta_j^*$ having variance $\sigma^2_{\eta^*}$. 
Finally, we assume the additional health model covariates ${\mathbf{z}}_i$ satisfy \[ {\mathbf{z}}_i = {\mathbf{\Theta}}({\mathbf{s}}_i) + {\boldsymbol\zeta}_i, \] where ${\mathbf{\Theta}}({\mathbf{s}})=(\theta_1({\mathbf{s}}),\ldots,\theta_p({\mathbf{s}}))$ is a $p$-dimensional vector-valued function representing the spatial component of the additional covariates, which includes the intercept, and the ${\boldsymbol\zeta}_i=(\zeta_{i1},\ldots,\zeta_{ip})$ are random $p$-vectors independent between subjects and independent of the $\eta_i$. Each component of ${\boldsymbol\zeta}_i$ has mean zero, but the components of ${\boldsymbol\zeta}_i$ are not necessarily independent of each other. To illustrate, one additional health model covariate might be household income, decomposed into spatial variation representing the socioeconomic status of the neighborhood and the residual variation between residences. \subsection{Exposure estimation} \label{se:exposEst} Standard practice is to derive a spatial estimator of exposure $\hat{w}({\mathbf{s}})$ based on the monitoring data and then to use the $\hat{w}({\mathbf{s}}_i)$ in place of the $x_i$ in~(\ref{eq:healthlin}) to estimate $\beta$. We consider a hybrid regression (on geographically-defined covariates) and regression spline exposure model. Thus, we let ${\mathbf{R}}({\mathbf{s}})$ be a known function from $\mathbb R^2$ to $\mathbb R^r$ that incorporates $q$ covariates and $r-q$ spline basis functions. If we knew the least-squares fit of the exposure surface with respect to the density of subject locations $g({\mathbf{s}})$, \begin{equation} \label{eq:gamma} {\boldsymbol\gamma} = \argmin_{\boldsymbol\xi} \int \big(\Phi({\mathbf{s}})-{\mathbf{R}}({\mathbf{s}}){\boldsymbol\xi}\big)^2 dG({\mathbf{s}}), \end{equation} it would be natural to approximate $x_i$ by $w({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i){\boldsymbol\gamma}$. Notice that we do not assume the spatial basis is sufficiently rich to represent all of the structure in $\Phi({\mathbf{s}})$, so we allow for misspecification in the sense that $\Phi({\mathbf{s}}) \neq w({\mathbf{s}})$ for some ${\mathbf{s}} \in \mathbb R^2$, for any choice of ${\boldsymbol\gamma}$. We do not know ${\boldsymbol\gamma}$, so we will estimate it from the monitoring data by $\hat{{\boldsymbol\gamma}}$ and then use the estimated exposure, $\hat{w}({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i)\hat{{\boldsymbol\gamma}}$, in place of $x_i$. In particular, we derive $\hat{{\boldsymbol\gamma}}$ by OLS \begin{equation} \label{eq:gammahat2} \hat{{\boldsymbol\gamma}} = \argmin_{\boldsymbol\xi} \sum_{j=1}^{n^*} \left(x_j^*-{\mathbf{R}}({\mathbf{s}}^*_j){\boldsymbol\xi}\right)^2. \end{equation} Under standard regularity conditions \citep{White1980}, $\hat{{\boldsymbol\gamma}}$ is asymptotically normal and converges a.s. to ${\boldsymbol\gamma}^*$ as $n^*\rightarrow\infty$, where ${\boldsymbol\gamma}^*$ is the solution to (\ref{eq:gamma}) with $H({\mathbf{s}})$ in place of $G({\mathbf{s}})$. In Section~\ref{se:compatibility}, we discuss the implications of distinct reference distributions in (\ref{eq:gamma}) and (\ref{eq:gammahat2}). \subsection{Exposure model choice} \label{se:compatibility} So far we have taken ${\mathbf{R}}({\mathbf{s}})$ to be a known function from $\mathbb R^2$ to $\mathbb R^r$, encoding a set of decisions about which covariates and spline basis functions to include in the exposure model.
Indeed, model selection is a complex task that involves trading off flexibility (advantageous for modeling as much of the true exposure surface as possible) and parsimony (advantageous for reducing estimation error). We begin by specifying compatibility conditions for the first-stage exposure model that are needed to guarantee consistent estimation of $\beta$ in the second-stage health model. The following two conditions are sufficient, and we will discuss their motivation further in Section~\ref{se:decomp}. \begin{cond} \label{cond1} The probability distribution of ${\mathbf{R}}({\mathbf{s}})$ is the same if ${\mathbf{s}}$ is sampled from $G({\mathbf{s}})$ or $H({\mathbf{s}})$. \end{cond} \begin{cond} \label{cond2} The span of ${\mathbf{R}}({\mathbf{s}})$ includes the elements of ${\mathbf{\Theta}}({\mathbf{s}})$, $\theta_k({\mathbf{s}}), k=1,\ldots,p$, the spatially structured components of the additional health model covariates. \end{cond} Note that Condition~\ref{cond1} is satisfied if the probability distributions of subject and monitor locations are identical, i.e., $g({\mathbf{s}})=h({\mathbf{s}})$ for all ${\mathbf{s}}$. Visual inspection on a map can be useful for verifying that $g({\mathbf{s}})$ and $h({\mathbf{s}})$ represent similar spatial patterns that are relevant for spline functions, but individual geographic covariates may have very fine spatial structure, so it is also useful to examine the values of these geographic covariates at subject and monitor locations. If a particular covariate has noticeably different distributions in the two populations, then it should not be included in ${\mathbf{R}}({\mathbf{s}})$ (see, for example, the discussion of the MESA data analysis in Section~\ref{se:mesa}). Selecting ${\mathbf{R}}({\mathbf{s}})$ to satisfy Condition~\ref{cond2} implicitly requires that ${\mathbf{\Theta}}({\mathbf{s}})$ be defined at all locations in the supports of $g({\mathbf{s}})$ and $h({\mathbf{s}})$. If $g({\mathbf{s}})=h({\mathbf{s}})$ for all ${\mathbf{s}}$, then this is automatically true since ${\mathbf{\Theta}}({\mathbf{s}})$ is defined at all locations where it is possible for study subjects to be located. Beyond the compatibility conditions above, there is a sizable and relevant statistical literature on methods for maximizing out-of-sample prediction accuracy, which for spline models amounts to selecting the number of basis functions and locations of knots or selecting a penalty parameter \citep{Hastie2001, ruppert2003semiparametric}. In our setting, improved accuracy of exposure model predictions will often correspond to improved efficiency in estimating $\beta$, although this is not always the case \citep{Szpiro2011epi}. We comment further on the tradeoff between exposure model complexity and parsimony in Section~\ref{se:disc}, but a specific algorithm for selecting geographic covariates or spline basis functions is beyond the scope of this paper. \section{Measurement error} \label{se:decomp} Let $\hat{\beta}_{n,n^*}$ be the health effect estimate obtained from the OLS solution to (\ref{eq:healthlin}) using $\hat{w}({\mathbf{s}}_i)$ estimated from $n^*$ monitoring locations in place of $x_i$, for study subjects $i=1,\ldots,n$. This estimator is affected by two fundamentally different types of measurement error: Berkson-like and classical-like components \citep{Szpiro2011biostats}. 
Defining $w^*({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i){\boldsymbol\gamma}^*$, we can express the measurement error, $u_{i}=x_i-\hat{w}({\mathbf{s}}_i)$, as \begin{eqnarray} u_{i}&=&\big(x_i- w^*({\mathbf{s}}_i)\big) + \big(w^*({\mathbf{s}}_i)-\hat{w}({\mathbf{s}}_i) \big) \nonumber\\ &=&u_{i,BL}+u_{i,CL} \label{eq:decomp}. \end{eqnarray} The Berkson-like component, $u_{i,BL}$, is the information lost from smoothing even with unlimited monitoring data (a form of exposure model misspecification), and the classical-like component, $u_{i,CL}$, is variability that arises from estimating the parameters of the exposure model based on monitoring data at $n^*$ locations. The designation of $u_{i,BL}$ as Berkson-like error refers to the fact that this is the part of the true exposure surface that our model is unable to predict, even in an idealized situation with unlimited monitoring data. As such, it results in predictions that are less variable than the truth. In Section~\ref{se:uBL} we consider the impact of the Berkson-like error alone and demonstrate asymptotic unbiasedness for large $n$ in Lemma~\ref{le:ubl}, assuming the compatibility conditions of Section~\ref{se:compatibility} are satisfied. This result motivates the need for the compatibility conditions, but it is not used directly in our measurement error methodology in Section~\ref{se:correction}. Our consistency result in Lemma~\ref{le:ubl} is analogous to Lemma~1 in \citet{White1980}, indicating that finite sample bias occurs in generic random covariate regression even in the absence of measurement error. Here we regard this bias as negligible, because in public health contexts $n$ is often relatively large, particularly compared to $n^*$. Although $u_{i,BL}$ alone does not induce important bias, it does inflate the variability of health effect estimates, and we account for this with the nonparametric bootstrap in our proposed measurement error methodology in Section~\ref{se:correction}. Classical-like measurement error, $u_{i,CL}$, results from the finite $n^*$ variability of $\hat{{\boldsymbol\gamma}}$ as an estimator of ${\boldsymbol\gamma}^*$. As discussed by \citet{Szpiro2011biostats}, it is similar to classical measurement error in the sense that it contributes additional variability to exposure estimates that is not related to the outcome. Like classical measurement error, $u_{i,CL}$ introduces bias in estimating $\beta$ and affects the standard error, but it is not the same as classical measurement error because it is heteroscedastic and shared between subjects. In Section~\ref{se:uCL} we estimate the bias from classical-like measurement error (still under the compatibility conditions of Section~\ref{se:compatibility}). This estimate will provide a means to correct for bias as part of our measurement error methodology in Section~\ref{se:correction}. \subsection{Berkson-like error ($u_{i,BL}$)} \label{se:uBL} Considering our estimator, $\hat{\beta}_{n,n^*}$, we isolate the impact of $u_{i,BL}$ by operating in the $n^*= \infty$ limit with $w^*({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i){\boldsymbol\gamma}^*$ and analyzing the behavior of $\hat{\beta}_{n,\infty}$. The following lemma holds under sufficient regularity of $g({\mathbf{s}})$, $h({\mathbf{s}})$, $\Phi({\mathbf{s}})$, and ${\mathbf{R}}({\mathbf{s}})$. We include the proof here because it is helpful for understanding the importance of the compatibility conditions in Section~\ref{se:compatibility}.
\begin{lem} \label{le:ubl} Assuming Conditions~\ref{cond1} and~\ref{cond2}, $\hat{\beta}_{n,\infty}$ converges a.s. to $\beta$ as $n\rightarrow \infty$. \end{lem} \begin{proof} It is easy to see that $\hat{\beta}_{n,\infty}$ is the OLS solution to (\ref{eq:healthlin}) using $w^*({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i){\boldsymbol\gamma}^*$ in place of $x_i$. Condition~\ref{cond1} implies ${\boldsymbol\gamma}^*={\boldsymbol\gamma}$, so we consider the impact of using $w({\mathbf{s}}_i)={\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}$ as the exposure. We write \begin{equation} \label{eq:ubl} y_i={\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma} \beta + {\mathbf{z}}_i {\boldsymbol\beta}_z + \left(\left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}\right)\beta + \eta_i \beta+ \varepsilon_i\right), \end{equation} where the three terms grouped in parentheses are regarded as unobserved error terms. To apply Lemma~1 from \citet{White1980}, it is sufficient that \begin{equation} E\left\{{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma} \times (\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma})\right\}=0 \label{eq:exporthog} \end{equation} and for each $k=1,\ldots,p$ \begin{equation} E\left\{\theta_k({\mathbf{s}}_i) \times (\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma})\right\}=0, \label{eq:zorthog} \end{equation} where the random sampling of ${\mathbf{s}}_i$ is according to the density of subject locations, $g({\mathbf{s}})$. Orthogonality of residuals in the least squares optimization for ${\boldsymbol\gamma}$ in (\ref{eq:gamma}) implies (\ref{eq:exporthog}), and Condition~\ref{cond2} implies (\ref{eq:zorthog}) since each $\theta_k({\mathbf{s}})$ can be represented as a linear combination of elements of ${\mathbf{R}}({\mathbf{s}})$. We actually need (\ref{eq:zorthog}) with $z_{ik}=\theta_k({\mathbf{s}}_i) + \zeta_{ik}$ in place of $\theta_k({\mathbf{s}}_i)$ for Lemma~1 of \citet{White1980}, but this follows from (\ref{eq:zorthog}) since $\zeta_{ik}$ has mean zero and is independent of ${\mathbf{s}}_i$. \end{proof} We comment on the necessity of Conditions~\ref{cond1} and~\ref{cond2}. The proof of Lemma~\ref{le:ubl} depends on ${\boldsymbol\gamma}^* = {\boldsymbol\gamma}$. This will always hold if $\Phi({\mathbf{s}})$ is spanned by the ${\mathbf{R}}({\mathbf{s}})$, but otherwise we rely on Condition~\ref{cond1}. If ${\boldsymbol\gamma}^* \neq {\boldsymbol\gamma}$, then (\ref{eq:ubl}) becomes \begin{equation} \label{eq:ubl.mismatch} y_i={\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}^* \beta + {\mathbf{z}}_i {\boldsymbol\beta}_z + \left( \left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}^*\right)\beta + \eta_i \beta+ \varepsilon_i\right). \end{equation} We cannot expect that ${\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}^*$ is orthogonal to $\left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}^*\right)$ when ${\mathbf{s}}_i$ is drawn according to the probability density $g({\mathbf{s}})$, since ${\boldsymbol\gamma}^*$ is the least squares fit from (\ref{eq:gamma}) with $H({\mathbf{s}})$ in place of $G({\mathbf{s}})$. Therefore, treating $\left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}^*\right)$ as part of the random variation in (\ref{eq:ubl.mismatch}) results in the equivalent of omitted variable bias when estimating $\beta$. Condition~\ref{cond2} is needed to guarantee (\ref{eq:zorthog}) in the proof of Lemma~\ref{le:ubl}.
If (\ref{eq:zorthog}) does not hold, the difficulty is that $\left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}\right)$ in (\ref{eq:ubl}) may be correlated with one or more elements of the ${\mathbf{\Theta}}({\mathbf{s}}_i)$ component of ${\mathbf{z}}_i$. Intuitively, this can introduce bias because estimation of $\beta$ relies on the variation in ${\mathbf{R}}({\mathbf{s}}_i){\boldsymbol\gamma}$ that is unrelated to the covariates ${\mathbf{z}}_i$, i.e., the residual variation after projecting onto the span of the elements of ${\mathbf{z}}_i$, which is equivalent to the span of ${\mathbf{\Theta}}({\mathbf{s}}_i)$. Without (\ref{eq:zorthog}), the residual term $\left(\Phi({\mathbf{s}}_i)-{\mathbf{R}}({\mathbf{s}}_i) {\boldsymbol\gamma}\right)$ in (\ref{eq:ubl}) need not be orthogonal to this variation. Note that the need to include the covariates from the health model in the exposure model is analogous to the inclusion of covariates in standard regression calibration \citep{Carroll2006}. \subsection{Classical-like error ($u_{i,CL}$)} \label{se:uCL} We will isolate the impact of $u_{i,CL}$ on $\hat{\beta}_{n,n^*}$ by operating in the $n= \infty$ limit, corresponding to the entire superpopulation of study subjects, and analyzing the asymptotic properties of $\hat{\beta}_{\infty,n^*}$ as $n^* \rightarrow \infty$. The exposure model parameter vector, $\hat{{\boldsymbol\gamma}}$, is asymptotically normal (as discussed in Section~\ref{se:exposEst}) with dimension fixed at $r$, and $\hat{\beta}_{\infty,n^*}$ is a deterministic function of $\hat{{\boldsymbol\gamma}}$, so under the conditions of Lemma~\ref{le:ubl} a standard delta-method argument can be used to establish that $\hat{\beta}_{\infty,n^*}$ is asymptotically normal with mean $\beta$. In particular, this implies that bias from classical-like error is asymptotically negligible in the sense that it is of comparable magnitude to the variance. This situation contrasts with classical measurement error where there are as many random error terms as observations and there is large-sample bias \citep{Carroll2006}. Even though the bias term is asymptotically negligible, our simulation studies suggest that it can still be important for moderate-size $n^*$, so we will derive a bias correction. Since only the variability in the exposure estimate that is orthogonal to covariates from the health model plays a role in deriving $\hat{\beta}_{\infty,n^*}$, it is helpful in the following analysis to define ${\mathbf{R}}^c({\mathbf{s}})$ with elements $R_k^c({\mathbf{s}})=R_k({\mathbf{s}})-{\mathbf{\Theta}}({\mathbf{s}}){\boldsymbol\psi}_k$, where ${\boldsymbol\psi}_k = \argmin_{\boldsymbol\omega} \int \big(R_k({\mathbf{s}})-{\mathbf{\Theta}}({\mathbf{s}}){\boldsymbol\omega}\big)^2 dG({\mathbf{s}})$. Analogous to $\hat{w}({\mathbf{s}})$ and $w({\mathbf{s}})$, we define $\hat{w}^c({\mathbf{s}})={\mathbf{R}}^c({\mathbf{s}})\hat{{\boldsymbol\gamma}}$ and $w^{c}({\mathbf{s}})={\mathbf{R}}^c({\mathbf{s}}){\boldsymbol\gamma}$. Note that the expectation of $\hat{\beta}_{\infty,n^*}$ need not be defined for finite $n^*$ because it is a function of $\hat{{\boldsymbol\gamma}}$, and the denominator in the OLS solution for $\hat{{\boldsymbol\gamma}}$ is not bounded away from zero. Therefore, we adapt the definition of asymptotic expectation for a sequence of random variables from \citet[page 135]{Shao2010}. The basic idea is to identify the highest order term in a power series expansion that has non-zero expectation as the asymptotic expectation.
See a related discussion of concepts of asymptotic bias in \citet[Appendix A.1.2]{Lumley2010}. \begin{defin} \label{def:asympt} Let $\upsilon_1,\upsilon_2,\ldots$ be a sequence of vector-valued random variables and let $a_1,a_2,\ldots$ be a sequence of positive numbers such that $\lim_{n\rightarrow\infty}a_n = \infty$. (i) Suppose $\upsilon$ is such that $E|\upsilon|<\infty$ and we can write $\upsilon_n =\tilde{\upsilon}_n+ \upsilon^\prime_n$ with $E(\tilde{\upsilon}_n)=0$ and $a_n \upsilon^\prime_n \rightarrow_d \upsilon$ as $n\rightarrow \infty$. Then we denote $E_{[a_n]}(\upsilon_n)= E(\upsilon)$ and call $E_{[a_n]}(\upsilon_n)/a_n$ an order $a_n^{-1}$ asymptotic expectation of $\upsilon_n$. (ii) Suppose $\upsilon$ is such that ${\rm Cov}(\upsilon)<\infty$ and $\sqrt{a_n}\, \upsilon_n \rightarrow_d \upsilon$ as $n\rightarrow \infty$. Then we denote ${\rm Cov}_{[a_n]}(\upsilon_n)= {\rm Cov}(\upsilon)$ and call ${\rm Cov}_{[a_n]}(\upsilon_n)/a_n$ an order $a_n^{-1}$ asymptotic covariance of $\upsilon_n$. \end{defin} \begin{lem} \label{le:ucl} Assume sufficient regularity of $g({\mathbf{s}})$, $h({\mathbf{s}})$, $\Phi({\mathbf{s}})$, and ${\mathbf{R}}({\mathbf{s}})$ and Conditions~\ref{cond1} and~\ref{cond2}. If we set
\begin{eqnarray}
E_{[n^*]}(\hat{\beta}_{\infty,n^*} - \beta) &=&\beta \Big\{ -\frac{\int w^{c}({\mathbf{s}}) E_{[n^*]} \left(\hat{w}^c({\mathbf{s}})-w^{c}({\mathbf{s}})\right) dG({\mathbf{s}})}{\int w^{c}({\mathbf{s}})^2 dG({\mathbf{s}})} -\frac{\int {\rm Var}_{[n^*]}\left(\hat{w}^c({\mathbf{s}})\right) dG({\mathbf{s}})}{\int w^{c}({\mathbf{s}})^2 dG({\mathbf{s}})} \nonumber\\
&& {} + 2 \frac{\int w^{c}({\mathbf{s}}_1) w^{c}({\mathbf{s}}_2) {\rm Cov}_{[n^*]} \left(\hat{w}^c({\mathbf{s}}_1), \hat{w}^c({\mathbf{s}}_2)\right) dG({\mathbf{s}}_1) dG({\mathbf{s}}_2)}{\left(\int w^{c}({\mathbf{s}})^2 dG({\mathbf{s}})\right)^2} \Big\} \label{eq:cl.bias}
\end{eqnarray}
and
\begin{eqnarray}
{\rm Var}_{[n^*]}(\hat{\beta}_{\infty,n^*})&=&\beta^2 \Big\{\frac{\int w^{c}({\mathbf{s}}_1) w^{c}({\mathbf{s}}_2) {\rm Cov}_{[n^*]} \left(\hat{w}^c({\mathbf{s}}_1), \hat{w}^c({\mathbf{s}}_2)\right) dG({\mathbf{s}}_1) dG({\mathbf{s}}_2)}{\left(\int w^{c}({\mathbf{s}})^2 dG({\mathbf{s}})\right)^2} \Big\}, \label{eq:cl.var}
\end{eqnarray}
then $E_{[n^*]}(\hat{\beta}_{\infty,n^*} - \beta)/n^*$ is an asymptotic expectation of $\hat{\beta}_{\infty,n^*} - \beta$ and ${\rm Var}_{[n^*]}(\hat{\beta}_{\infty,n^*})/n^*$ is an asymptotic variance of $\hat{\beta}_{\infty,n^*}$ (both of order ${n^*}^{-1}$). \end{lem} The proof is outlined in Appendix~\ref{se:appendixa}, where we express $\hat{\beta}_{\infty,n^*}$ as a function of $\hat{{\boldsymbol\gamma}}$ and do a second order Taylor expansion around ${\boldsymbol\gamma}$. Definition~\ref{def:asympt} is required to define the order ${n^*}^{-1}$ asymptotic expectation in the first term of (\ref{eq:cl.bias}), which is a linear function of $\hat{{\boldsymbol\gamma}}-{\boldsymbol\gamma}$. The first order terms in a Taylor expansion of $\hat{{\boldsymbol\gamma}}-{\boldsymbol\gamma}$ are of order $n^{*-1/2}$ and do not converge when multiplied by $n^*$. However, they have expectation zero, so they play the role of $\tilde{\upsilon}_n$ and do not contribute to the asymptotic expectation. See (\ref{eq:biasgammahat.body}) and the surrounding discussion. The practical import of Lemma~\ref{le:ucl} is that we can use (\ref{eq:cl.bias}) to correct for the bias from classical-like error.
The variance estimate in (\ref{eq:cl.var}) is not directly useful as a standard error because it does not include variability from Berkson-like error or from having $n<\infty$ study subjects, but it provides insight into the relative magnitudes of bias and variance from classical-like error. To estimate (\ref{eq:cl.bias}), we can estimate $w^{c}({\mathbf{s}})$ by $\hat{w}^c({\mathbf{s}})={\mathbf{R}}^c({\mathbf{s}})\hat{{\boldsymbol\gamma}}$, noting that ${\mathbf{R}}^c({\mathbf{s}})$ is approximated from the observed exposure covariates for the health observations, orthogonalizing with respect to the health model covariates, which is the finite sample approximation to the construction of ${\mathbf{R}}^c({\mathbf{s}})$ stated earlier in this section. To estimate the variances and covariances of $\hat{w}^c({\mathbf{s}})$, we use a robust estimator for $\mbox{Cov}_{[n^*]}(\hat{{\boldsymbol\gamma}})$ \citep{White1980, Carroll2006}. We use the sandwich estimator to avoid the assumption of having a correctly-specified model, as required for the standard model-based estimator. Given these estimators, the integrals in the variance and covariance terms of (\ref{eq:cl.bias}) can be estimated as averages with respect to the discrete measure with equal weight on each health observation, the standard plug-in estimator for $G({\mathbf{s}})$. Finally, for the remaining term of (\ref{eq:cl.bias}) we need to estimate $E_{[n^*]}(\hat{w}^c({\mathbf{s}}) - w^{c}({\mathbf{s}})) = {\mathbf{R}}^c({\mathbf{s}})E_{[n^*]}(\hat{{\boldsymbol\gamma}} - {\boldsymbol\gamma})$, and therefore the asymptotic expectation of $\hat{{\boldsymbol\gamma}}$. Since we have assumed Condition~\ref{cond1}, which implies ${\boldsymbol\gamma}={\boldsymbol\gamma}^*$, the expectation of $\hat{{\boldsymbol\gamma}}$ is approximately equal to ${\boldsymbol\gamma}$. However, $\hat{{\boldsymbol\gamma}}$ is derived by means of a random covariate regression with a misspecified mean model, so its standard expectation is not defined. An estimate of its asymptotic expectation is developed as follows. Let ${\mathbf{\Phi}}^*$ be the vector comprising the $\Phi({\mathbf{s}}^*_j)$ and ${\mathbf{R}}^*$ the $n^* \times r$ matrix obtained by stacking the ${\mathbf{R}}({\mathbf{s}}^*_j)$ for $j=1,\ldots,n^*$. For arbitrary $m_j$, denote by ${\mathbf{M}}$ the $n^*\times n^*$ diagonal matrix with entries $m_1,\ldots,m_{n^*}$. If we set $m_j=1/n^*$ for $j=1,\ldots,n^*$ and define \begin{equation} \kappa(m_1,\ldots,m_{n^*}) = \big({\mathbf{R}}^{*^ \top} {\mathbf{M}} {\mathbf{R}}^*\big)^{-1}{\mathbf{R}}^{*^\top} {\mathbf{M}}{\mathbf{\Phi}}^* \label{eq:kappa}, \end{equation} then we notice $E\left(\hat{{\boldsymbol\gamma}}|{\mathbf{s}}^*_1,\ldots,{\mathbf{s}}^*_{n^*}\right)=\kappa(m_1,\ldots,m_{n^*})$. We are interested in the unconditional expectation of $\hat{{\boldsymbol\gamma}}$. Heuristically, we assume that the true $h({\mathbf{s}})$ is supported on the observed monitor locations and gives equal weight to each observation (i.e., we use the plug-in estimator for $h({\mathbf{s}})$). In that case, a realization of ${\mathbf{s}}^*_1,\ldots,{\mathbf{s}}^*_{n^*}$ can be expressed as a multinomial draw, $m_1,\ldots,m_{n^*}$, where the $m_j$ are the fractions of times each location in the support of $h({\mathbf{s}})$ is drawn. We can estimate the expectation of $\kappa(m_1,\ldots,m_{n^*})$ by means of a Taylor series expansion of $\kappa(\cdot)$ around $m_j=\frac{1}{n^*}$ for $j=1,\ldots,n^*$.
Using the first and second moments of a multinomial distribution, we have \begin{equation} \label{eq:biasgammahat.body} E_{[n^*]} \left(\hat{{\boldsymbol\gamma}} - {\boldsymbol\gamma}\right) \approx \frac{1}{2}\left(\frac{1}{n^*}-\frac{1}{(n^*)^2}\right)\sum_{j=1}^{n^*} \frac{\partial^2 \kappa}{\partial m_j^2} - \frac{1}{2} \frac{1}{(n^*)^2} \sum_{j,k=1;j\neq k}^{n^*} \frac{\partial^2 \kappa}{\partial m_j \partial m_k}. \end{equation} It is easy to see that the first order terms in the Taylor expansion of $\kappa(\cdot)$ (not shown) have expectation zero, so they play the role of $\tilde{\upsilon}_n$ in Definition~\ref{def:asympt} and do not contribute to the asymptotic expectation. We give further details on numerical calculation of the above expression in Appendix~\ref{se:appendixb}. A more formal derivation that does not begin by assuming a discrete distribution could be developed by a von Mises expansion with the empirical process of monitor locations \citep[Section 20.1]{Vaart1998}. Note that although we do not observe the $\Phi({\mathbf{s}}^*_j)$, replacing them with $x^*_j$ in (\ref{eq:kappa}) does not introduce bias since $x^*_j=\Phi({\mathbf{s}}^*_j)+\eta^*_j$, and the $\eta^*_j$ are independent of everything else and have mean zero. Finally, we can gain additional insight into the bias and variance contributions from classical-like error by considering the simplified situation in which the exposure model is correctly specified so that $w({\mathbf{s}})={\mathbf{R}}({\mathbf{s}}) {\boldsymbol\gamma}$ for all ${\mathbf{s}}$, the subject and monitor location densities, $g({\mathbf{s}})$ and $h({\mathbf{s}})$, are the same, and there are no additional covariates or intercept in the health model. In that case it is easy to show that the asymptotic expectation simplifies to \begin{eqnarray} \label{eq:simp.bias} \frac{1}{n^*}E_{[n^*]}(\hat{\beta}_{\infty,n^*} - \beta) &=&- \beta \frac{1}{n^*} \frac{(r-2)\sigma^2_{\eta^*}}{\int w({\mathbf{s}})^2 dG({\mathbf{s}})}, \end{eqnarray} and the asymptotic variance simplifies to \begin{eqnarray} \label{eq:simp.var} \frac{1}{n^*}{\rm Var}_{[n^*]}(\hat{\beta}_{\infty,n^*})&=&\beta^2 \frac{1}{n^*} \frac{\sigma^2_{\eta^*}}{\int w({\mathbf{s}})^2 dG({\mathbf{s}})}. \end{eqnarray} The $r-2$ term in (\ref{eq:simp.bias}) illustrates the fact that the bias is away from the null in the case of a one-dimensional exposure model and that more typically it is toward the null and becomes larger with higher-dimensional exposure models, for a given true exposure surface. This is what occurs empirically in our simulations and examples. In addition, the ratio of the squared bias to the variance is \begin{equation} \frac{(r-2)^2}{n^*}\frac{\sigma^2_{\eta^*}}{\int w({\mathbf{s}})^2 dG({\mathbf{s}})}, \end{equation} which demonstrates that the importance of the bias depends on the dimensionality of the exposure model relative to the sample size and the ratio of the noise to the signal in the exposure data. \section{Measurement error correction} \label{se:correction} We correct for measurement error by means of an optional asymptotic bias correction based on (\ref{eq:cl.bias}) followed by a design-based nonparametric bootstrap standard error calculation (incorporating the asymptotic bias correction in the bootstrap, if appropriate). Given an estimate $\hat{b}$ of the relative bias of $\hat{\beta}$ obtained from (\ref{eq:cl.bias}), the bias-corrected estimate is $\hat{\beta}_{bc}=\hat{\beta}/(1+\hat{b})$.
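To fix ideas, the following minimal sketch (hypothetical code, not from the paper) implements the two-stage estimate, a plug-in version of the simplified relative bias implied by (\ref{eq:simp.bias}), the multiplicative correction $\hat{\beta}/(1+\hat{b})$, and the design-based bootstrap described below, in a toy one-dimensional setting satisfying the simplifying assumptions ($g=h$, no additional health covariates):

```python
import numpy as np

rng = np.random.default_rng(2013)

def basis(s, r=6):
    # hypothetical polynomial basis standing in for R(s)
    return np.vander(s / 10.0, r, increasing=True)

# toy data: common location density for subjects and monitors
n, n_star, beta = 500, 100, 1.0
phi = lambda s: np.sin(s / 2.0)                    # assumed exposure surface
s_sub, s_mon = rng.uniform(0, 10, n), rng.uniform(0, 10, n_star)
x_star = phi(s_mon) + rng.normal(0, 0.7, n_star)   # monitor data with noise
y = beta * phi(s_sub) + rng.normal(0, 1.0, n)

def corrected_beta(s_mon, x_star, s_sub, y):
    R_mon = basis(s_mon)
    gam, *_ = np.linalg.lstsq(R_mon, x_star, rcond=None)  # first-stage OLS
    w_hat = basis(s_sub) @ gam                            # predicted exposures
    beta_naive = (w_hat @ y) / (w_hat @ w_hat)            # second-stage OLS
    resid = x_star - R_mon @ gam
    r, m = R_mon.shape[1], len(x_star)
    sig2 = resid @ resid / (m - r)                        # sigma^2_eta* plug-in
    b_hat = -(r - 2) * sig2 / (m * np.mean(w_hat ** 2))   # relative bias, (simp.bias)
    return beta_naive / (1 + b_hat)

beta_bc = corrected_beta(s_mon, x_star, s_sub, y)

# design-based bootstrap: resample monitors and subjects separately, refit both stages
boot = [corrected_beta(s_mon[jm], x_star[jm], s_sub[js], y[js])
        for jm, js in ((rng.integers(0, n_star, n_star), rng.integers(0, n, n))
                       for _ in range(200))]
print(f"beta_bc = {beta_bc:.3f}, bootstrap SE = {np.std(boot):.3f}")
```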
Bias correction is optional since the asymptotic results of Section~\ref{se:decomp} show that the naive health effect estimator is consistent, with variance dominating the bias in the limit as the number of exposure observations increases. We explore the magnitude of bias and utility of including the asymptotic correction via simulation in the next section and comment further on this topic in Section~\ref{se:disc}. We need to estimate the uncertainty in either $\hat{\beta}$ or $\hat{\beta}_{bc}$ in a way that accounts for all the components of the measurement error and the sampling variability in the health model. Note that the asymptotic variance (\ref{eq:cl.var}) accounts only for the variance from the classical-like measurement error. Since we have assumed that the locations of health and exposure data are randomly drawn according to the densities $g({\mathbf{s}})$ and $h({\mathbf{s}})$, respectively, a simple design-based nonparametric bootstrap is a suitable approximation to the data-generating mechanism. To obtain each bootstrap dataset, we separately resample with replacement $n^*$ exposure measurements and $n$ health observations. We fit the exposure model to the bootstrapped exposure measurements and use the results to predict exposures at the locations of the bootstrapped health observations. We then obtain bootstrap health effect estimates (with or without bias correction) and estimate the standard error of $\hat{\beta}$ or $\hat{\beta}_{bc}$ by means of the empirical standard deviation of these values. In principle, we could avoid the asymptotic calculations in (\ref{eq:cl.bias}) by employing a bootstrap procedure to estimate bias followed by a second round of bootstrapping for standard error estimation. Such a nested bootstrap is computationally demanding. Furthermore, our strategy of using the bootstrap after addressing the bias is consistent with the comments of \cite[Chapter 10]{Efron1993} who caution that bias correction with the bootstrap is more difficult than variance estimation. Along similar lines, \cite[p. 216]{Buon2010} notes the need for additional assumptions when developing a two-stage bootstrap that includes bias correction. \section{Simulations\label{se:sims}} \subsection{One dimensional exposure surface} \label{se:1d.ex} Our first set of simulations is in the simplified setting of a one dimensional exposure surface. In this setting, we illustrate the bias from Berkson-like error for very large $n^*$ when either Condition~\ref{cond1} or~\ref{cond2} is violated, and we illustrate the finite $n^*$ measurement error correction methods from Section~\ref{se:correction} when both compatibility conditions are satisfied. We simulate 1,000 Monte Carlo datasets and use 100 bootstrap samples, where applicable. The true health model is linear regression with $\beta=1$, with i.i.d. $\epsilon \sim N(0,1)$ and an intercept but no additional health model covariates. We use $n=500$ subjects. The true exposure surface on $(0,10)$ is a combination of low frequency and high frequency sinusoidal components \[ \Phi({\mathbf{s}}) = \sin({\mathbf{s}}+3.5) + \frac{{\mathbf{s}}+4}{20}\sin(4 {\mathbf{s}} - 10.5), \] and we set $\sigma^2_\eta=\sigma^2_{\eta^*}=0.5$. The density of monitor locations is \begin{equation} \label{eq:h.dens} h({\mathbf{s}})=\left\{ \begin{array}{ll} 0.142&0<{\mathbf{s}}\leq \frac{10}{3}, \frac{20}{3}<{\mathbf{s}}<10\\ 0.0142& \frac{10}{3}\leq {\mathbf{s}} \leq \frac{20}{3}, \end{array} \right. 
\end{equation} and we use an exposure model ${\mathbf{R}}({\mathbf{s}})$ comprised of a B-spline basis with $5$ to $25$ degrees of freedom \citep{Hastie2001}. To illustrate the bias from the Berkson-like error when either of the compatibility conditions is violated, we set $n^*=1000$ so that the classical-like error is negligible. The results of these simulations are shown in Figure~\ref{fi:ubl}. In panels~(a) and~(b), the health model is fit with an intercept but no additional health model covariates, so Condition~\ref{cond2} is automatically satisfied. In panel~(a) the density of subject locations $g({\mathbf{s}})$ is the same as $h({\mathbf{s}})$, and there is no evidence of bias in $\hat{\beta}$. In panel~(b), $g({\mathbf{s}})$ is uniform on the interval $(0,10)$ so that Condition~\ref{cond1} is violated. There is clear evidence of bias away from the null for $5$ and $9$ df exposure models. There is no evidence of bias with $13$ df, which can be attributed to the fact that the exposure model with $13$ df is sufficiently rich to account for almost all of the spatial structure in $\Phi({\mathbf{s}})$, meaning that the Berkson-like error behaves like pure Berkson error. In panels~(c) and~(d) of Figure~\ref{fi:ubl}, $g({\mathbf{s}})$ is the same as $h({\mathbf{s}})$, but we fit the health model including an additional covariate $z_i=\sin({\mathbf{s}}_i)$. In panel~(c), this covariate is also included in the exposure model, and as expected we see no evidence of bias in $\hat{\beta}$. In panel~(d), the additional covariate is not included in the exposure model, so Condition~\ref{cond2} is violated. There is noticeable bias of $\hat{\beta}$ toward the null, especially for the $5$ and $9$ df spline models. In Figure~\ref{fi:corr}, we show results from a separate set of simulations with $n^*=200$ in order to illustrate the measurement error correction methods from Section~\ref{se:correction}. In these simulations, $g({\mathbf{s}})$ is the same as $h({\mathbf{s}})$ and the health model is fit without additional covariates, so Conditions~\ref{cond1} and~\ref{cond2} are satisfied. The mean out-of-sample $R^2$ ranges from $0.25$ for 5 df to $0.35$ for 13 df, corresponding to the challenging situation of an exposure model with marginal performance that can lead to substantial bias in estimating $\beta$. In panel~(a), we see that the uncorrected health effect estimates have notable bias, especially for larger df exposure models, and our correction successfully removes most of the bias. Panel~(b) shows the coverage of nominal 95\% confidence intervals. In the uncorrected analyses, coverage ranges from 45\% to 80\%, depending on the df in the exposure model. Confidence intervals that incorporate either the bias correction or bootstrap standard errors improve the coverage. We obtain nearly perfect 95\% coverage when we incorporate the bias correction and bootstrap standard errors. \subsection{Spatial exposure surface} \label{se:spatial.ex} Our second set of simulations is based on the MESA Air study design in the Baltimore region (Section~\ref{se:mesa}), using 1,000 simulated datasets and 100 bootstrap samples. We enforce Conditions~\ref{cond1} and~\ref{cond2} and focus on illustrating the value of the correction methods described in Section~\ref{se:correction} in a realistic spatial setting. The spatial domain is a $257 \times 257$ discrete grid scaled to be a square 30 units on a side. There are $n^*=125$ monitor locations, sampled in clusters by first choosing 25 locations i.i.d. 
uniformly on $S$ and then also including the four nearest neighbors for each such location. Our bootstrap for these simulations resamples clusters of five monitors. A total of $n=600$ subject locations are selected uniformly and independently from $S$. The predictable part of the exposure surface is \[ \Phi({\mathbf{s}})=\gamma_0+\gamma_1 R_1({\mathbf{s}}) + \gamma_2 R_2({\mathbf{s}}) + \gamma_3 R_3({\mathbf{s}}) + \Phi_1({\mathbf{s}}). \] Each $\gamma_i=4.9$, and each $R_k({\mathbf{s}})$ is constructed by drawing i.i.d. realizations from $N(0,1/3)$ at each ${\mathbf{s}}\in S$. $\Phi_1({\mathbf{s}})$ is a fixed realization from a spectral approximation to a Gaussian field with Mat\'{e}rn covariance \citep{Paciorek2007} with range 20 and unit differentiability parameter, normalized such that the variance of $\Phi_1({\mathbf{s}})$ on $S$ is 30. Thus the total variance of $\Phi({\mathbf{s}})$ on $S$ is approximately 54. In the true exposure surface and monitoring data, there is also a nugget with variance $\sigma^2_\eta=\sigma^2_{\eta^*}=6$. We consider two spatial scenarios, corresponding to different fixed realizations of $\Phi_1({\mathbf{s}})$. These surfaces are shown in Figure~\ref{fi:spatial}. The spatial exposure model has ${\mathbf{R}}({\mathbf{s}})$ comprised of $R_k({\mathbf{s}})$ for $k=1,2,3$ and a thin-plate spline basis derived by fitting a GAM from the MGCV package in R \citep{Wood2006} to the observed monitoring data with fixed degrees of freedom (df). Thus the spatial basis is actually different for each simulated dataset since it depends on the monitor locations, but we keep the same basis functions for the bootstrap analysis within each simulation run. We estimate the standard error of $\hat{{\boldsymbol\gamma}}$ using a sandwich estimator for clustered data implemented in the R package geepack \citep{Geepack2006}. The true and fitted health models have an intercept but no additional covariates. We set $\beta=0.1$ and consider i.i.d. normally distributed $\epsilon \sim N(0,\sigma^2_\epsilon)$ with $\sigma^2_\epsilon$ equal to 200 or 10. The larger value of $\sigma^2_\epsilon=200$ is consistent with what we see in the MESA Air data with left ventricular mass index (LVMI) as the outcome, where the air pollution exposure explains approximately $0.3\%$ of the variance after adjustment for known risk factors. We also consider $\sigma^2_\epsilon=10$ such that air pollution exposure explains approximately $5\%$ of the health outcome variance in order to see more clearly the potential impact of exposure measurement error. The two spatial surfaces, while generated at random, represent different deterministic scenarios in which we could find ourselves (e.g., different metropolitan areas). In scenario~1 the spatially structured part of the air pollution surface $\Phi_1({\mathbf{s}})$ can be represented fairly well using thin-plate splines with either 5 or 10 df, while the spatial surface in scenario 2 cannot be represented well with 5 df but can be reasonably well modeled with 10 df. This is reflected in the $R^2$ values in Figure~\ref{fi:spatial}, which represent the best thin-plate spline fits to the surfaces, assuming essentially unlimited monitoring data is available. The cross-validated and out-of-sample $R^2$ values for predicting the full air pollution surface $\Phi({\mathbf{s}})$ (including non-spatially structured covariates) based on monitoring data reported in Table~\ref{ta:linear.sim} exhibit a similar pattern. 
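For reference, a minimal sketch of the clustered monitor design described at the start of this subsection (the grid construction and variable names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
G = 257                                      # grid cells per side
grid = np.stack(np.meshgrid(np.arange(G), np.arange(G)),
                axis=-1).reshape(-1, 2) * 30.0 / (G - 1)

seeds = grid[rng.choice(len(grid), size=25, replace=False)]
clusters = []
for s in seeds:                              # each seed plus its four
    d = np.linalg.norm(grid - s, axis=1)     # nearest grid neighbours
    clusters.append(grid[np.argsort(d)[:5]])
monitors = np.concatenate(clusters)          # n* = 125 monitor locations
subjects = grid[rng.choice(len(grid), size=600)]  # n = 600, uniform
\end{verbatim}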
For leave-one-out cross-validation, the clusters of five adjacent monitors are treated as a single unit. We focus our discussion on the scenarios with $\sigma^2_\epsilon=10$ because this is where the impact of exposure measurement error is most prominent. The measurement error impact is qualitatively similar for $\sigma^2_\epsilon=200$, but it is less important because the unmodeled variability in the health outcome dominates. Our theory dictates that the relative biases for $\sigma^2_\epsilon=10$ and $\sigma^2_\epsilon=200$ are identical, which we verified in simulations out to four significant digits, so we report only one value. When we fit the exposure model with a 5 df thin-plate spline, there is modest bias toward the null of $3\%$ in scenario~1 and more substantial bias of $12\%$ in scenario~2. Our asymptotic correction reduces the magnitude of bias in both instances. The bias correction followed by bootstrap standard errors consistently gives valid inference, including accurate standard error estimates and nominal coverage of $95\%$ confidence intervals. In scenario~1, we also get valid inference with bootstrap standard errors and no bias correction. When we increase the complexity of the spatial model to 10 df, prediction accuracy improves in both scenarios, but inference about the health effect parameter is degraded. The magnitude of bias is approximately the same as with 5 df, but our asymptotic correction is less effective. Furthermore, the bootstrap standard error estimates tend to be too large, resulting in over-coverage of 95\% confidence intervals. These findings are not surprising: while each simulated dataset has $125$ monitor locations, they are clustered in groups of $5$, so there are effectively only $25$ unique locations for estimating the smooth component of the spatial surface. A thin-plate spline model with 10~df therefore overfits these data, in the sense that we do not expect to be able to rely on large $n^*$ asymptotic approximations such as (\ref{eq:cl.bias}) or the nonparametric bootstrap. \section{Data analysis} \label{se:mesa} The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) is an ongoing cohort study designed to investigate the relationship between air pollution exposure and progression of subclinical atherosclerosis \citep{Bild2002,Kaufman2012}. The MESA Air cohort includes over 6,000 subjects in six U.S. metropolitan areas (Baltimore City and Baltimore County, MD; Chicago, IL; Forsyth County (Winston-Salem), NC; Los Angeles and Riverside Counties, CA; New York and Rockland County, NY; and St Paul, MN). Four ethnic/racial groups were targeted: white, African American, Hispanic, and Chinese American. All study participants (46 to 87 years of age) were without clinical cardiovascular disease at the baseline examination (2000--2002). An early cross-sectional finding from MESA Air is that an elevated left-ventricular mass index (LVMI) is associated with exposure to traffic-related air pollution, specifically outdoor residential concentrations of gaseous oxides of nitrogen (NOx) \citep{VanHee2009,VanHee2012}. \citet{VanHee2012} found that an increase in NOx concentration of 10 parts per billion (ppb) is associated with a 0.36 $g/m^2$ increase in LVMI (95\% CI: 0.02--0.7 $g/m^2$). \citet{VanHee2012} utilized predictions from a spatio-temporal exposure model that incorporates regulatory and study-specific monitoring data in all six regions \citep{Szpiro2009}.
To illustrate our methodology for a purely spatial exposure model, we re-analyze the data restricted to subjects in the Baltimore region, and we construct an exposure model based on data from MESA Air's community snapshot monitoring campaign. In brief, the campaign consisted of three separate rounds of spatially rich sampling, each during a single two-week period in a different season. In the Baltimore area, approximately 100 measurements were made in each of three two-week periods in May 2006, November 2006, and February 2007. In each round of snapshot monitoring, the majority of monitors were arranged in clusters of six, with three on either side of a major road at distances of approximately 50, 100, and 300 meters \citep{Cohen2009}. In addition, the locations were chosen to characterize different land use categories and to cover the geographic region as broadly as possible. To help satisfy Condition~\ref{cond1}, we exclude one cluster from our analysis because it is far from any of the study subjects, and we approximate long-term average concentrations by averaging the three available measurements at locations that were monitored in all three seasons. The 93 monitor locations and 625 subject locations in our analysis are shown in Figure~\ref{fi:mesa}. Our exposure model incorporates five geographic covariates: (i) distance to a major road, (ii) local-source traffic pollution from a dispersion model \citep{Wilton2010}, (iii) population density in a 1 km buffer, (iv) distance to downtown, and (v) transportation land use in a 1 km buffer. The first three of these geographic covariates are log-transformed. An additional covariate describing the density of high-intensity land use (commercial, industrial, residential, etc.) was also incorporated in the original spatio-temporal model predictions used by \citet{VanHee2012}, but we exclude this covariate from our model because it has very different distributions across subject and monitor locations, a clear violation of Condition~\ref{cond1}. To account for unmodeled spatial structure, we use a thin-plate spline basis with 0, 5, or 10 df, constructed as in the simulations. We estimate the standard error of $\hat{{\boldsymbol\gamma}}$ using a sandwich estimator for clustered data implemented in the R package geepack \citep{Geepack2006}. We estimate the association between NOx and LVMI by fitting a multivariate linear regression, including an exhaustive set of additional health model covariates that could be potential confounders \citep{VanHee2009}. The results of our analysis are shown in Table~\ref{ta:mesa}, with 10,000 bootstrap replicates (resampling clusters of monitors, where applicable). Our findings are very similar for an exposure model that is purely land-use regression and one that includes splines with 5 df. We estimate that an increase in NOx concentration of 10 parts per billion (ppb) is associated with approximately a 0.7 $g/m^2$ increase in LVMI. Our standard error estimates for these models in Table~\ref{ta:mesa} range from $0.55$ to $0.68$ $g/m^2$, so the difference in effect size from that found by \citet{VanHee2012} is very likely due to our more limited dataset. The exposure model that includes 5 df splines has a larger cross-validated $R^2$, suggesting that it captures more variability in the exposure.
This translates into a smaller model-based standard error, but this apparent advantage is attenuated when we correct for exposure measurement error with bootstrap standard error estimates, and it goes away entirely when we also incorporate the bias correction. The exposure model with 10 df gives slightly larger effect estimates and standard errors. There is also more evidence of bias from classical-like error than for the lower-dimensional exposure models. However, our simulation results in Table~\ref{ta:linear.sim} suggest that a 10 df spline is too rich a model for the available monitoring data and that these results should be considered less reliable than those based on 5 df splines. \section{Discussion} \label{se:disc} We have developed a statistical framework for characterizing and correcting measurement error in two-stage analyses, focusing particularly on problems where a first-stage spatial model is used to predict exposure that is measured at locations different from those needed in a second-stage health analysis. Our methodology is robust to misspecification of the exposure model, treating it as a device to explain some portion of the variability in exposure. We adopt a design-based perspective in which the process of selecting exposure measurement and subject locations is the primary source of spatial randomness, leading naturally to nonparametric bootstrap resampling for standard errors. A major contribution of our work is that we delineate the potential sources of bias from Berkson-like and classical-like measurement error and provide strategies for reducing bias and variance at the design and analysis stages. Bias from classical-like error can be corrected using an asymptotic approximation, whereas bias from Berkson-like error should be addressed at the design stage or when selecting an exposure model. While our research is primarily motivated by epidemiologic analysis of long-term air pollution health effects, we note that the spatial prediction problem can be interpreted as a linear model. Thus, our measurement error decomposition, asymptotic results, and bias correction hold equally well in non-spatial settings. Our theory and simulations demonstrate that bias from classical-like error is small when the exposure model is not overfit, in the sense that there are sufficient observations relative to the dimension of the exposure model for the large $n^*$ asymptotics to be relevant. The limited magnitude of the bias suggests that measurement error correction efforts should focus on avoiding overfitting the exposure model and satisfying the conditions needed to ensure that Berkson-like error does not induce important bias (at least in a linear health model). Nonetheless, in several simulation scenarios our asymptotic correction for bias from classical-like error results in improved estimation and inference, albeit at the expense of some increased variance. Indeed, in our analyses and simulations the additional variance introduced by estimating the bias is modest. Our theoretical development motivates the use of a nonparametric bootstrap to account for variability induced by measurement error. When the bias correction is not used, simulations suggest that the underestimation of uncertainty from ignoring the measurement error (using a sandwich variance estimator) is modest, but even so there are cases in which accounting for the effect of measurement error is necessary.
When we include the asymptotic bias correction, the bootstrap is more generally necessary for valid confidence intervals. As we remarked in Section~\ref{se:compatibility}, exposure model selection is a broad topic and a specific algorithm for selecting geographic covariates or spline basis functions is beyond the scope of this paper. However, we discuss below several practical approaches that can be considered in designing a study to approximately satisfy the compatibility conditions from Section~\ref{se:compatibility}, so as to minimize the bias from Berkson-like error. We will explore these options and related tradeoffs further in future work. First, to satisfy Condition~\ref{cond1}, as much care as possible should be taken at the design stage to ensure the sampling densities of locations and exposure covariates are as similar as possible in the first-stage exposure observations and the second-stage outcome observations. While this criterion is overly abstract in the context of a specific study, the practical implication is that first-stage and second-stage locations should be chosen to be similar in terms of location and pertinent covariates. If exposure data have already been collected, it may be necessary to consider excluding exposure or outcome data or deleting one or more covariates from ${\mathbf{R}}({\mathbf{s}})$ in order to minimize the mismatch. If we are particularly concerned about Condition~\ref{cond2}, we can add terms to ${\mathbf{R}}({\mathbf{s}})$ to span ${\mathbf{\Theta}}({\mathbf{s}})$. We generally will not know ${\mathbf{\Theta}}({\mathbf{s}})$ directly, but if we do (e.g., if household income were known and monitors were located at homes) then supplementing ${\mathbf{R}}({\mathbf{s}})$ with ${\mathbf{\Theta}}({\mathbf{s}})$ or projecting ${\mathbf{R}}({\mathbf{s}})$ to make it orthogonal to ${\mathbf{\Theta}}({\mathbf{s}})$ are equivalent. In most realistic settings, we will assume that ${\mathbf{\Theta}}({\mathbf{s}})$ is a set of smooth functions of space that can be modeled by spline terms, but we will not know the minimal spanning spline basis. In this case it is preferable to supplement ${\mathbf{R}}({\mathbf{s}})$ with as rich of a basis as possible without introducing substantial classical-like error. Projecting ${\mathbf{R}}({\mathbf{s}})$ to make it orthogonal to a similarly rich spline basis would likely result in a significant diminution of exposure variability beyond what is needed to eliminate bias from not satisfying Condition \ref{cond2}. The possibility of adding dimensions to ${\mathbf{R}}({\mathbf{s}})$ highlights the critical tradeoff between Berkson-like and classical-like error. Augmenting ${\mathbf{R}}({\mathbf{s}})$ reduces Berkson-like error by accounting for more of the variability in $w({\mathbf{s}})$. Since eliminating Berkson-like error also eliminates the need to satisfy the compatibility conditions, we generally expect that adding such terms will limit bias from the Berkson-like error. A side effect of augmenting ${\mathbf{R}}({\mathbf{s}})$ is to change the sampling variability of $\hat{{\boldsymbol\gamma}}$, which impacts the classical-like error. This could be beneficial if the additional terms in ${\mathbf{R}}({\mathbf{s}})$ account for a substantial amount of variability in $w({\mathbf{s}})$, since the result will be to reduce the variance of the original components of $\hat{{\boldsymbol\gamma}}$. 
On the other hand, if the coefficients for the new terms are difficult to estimate, the result will be a substantial new contribution to the classical-like error, leading to additional bias and variance in the second-stage estimation. In fact, in order to reduce classical-like error, one might choose to remove selected dimensions from ${\mathbf{R}}({\mathbf{s}})$ if their coefficients are particularly difficult to estimate. There are some key assumptions in our model that may not be strictly satisfied in air pollution epidemiology studies. First, we regard the sets of locations of exposure and health observations to be independent, or at least independent clusters. This assumption can be questioned, particularly in the case of air pollution monitors, as one would not expect a government agency to select two sites that are very close together. Second, a major source of exposure heterogeneity that we do not consider is the difference between exposure at a residence and the exposure experienced by individuals when they are not home. Mobility may be less important in studies of small children and the elderly, but this remains an open issue in the epidemiologic literature. Finally, as described in Section~\ref{se:dgm}, we condition on the unobserved but deterministic spatial variation in exposure during the time period of the study. This avoids having to postulate that one could meaningfully repeat the experiment in other time periods. This is particularly important when the averaging period of interest is one or more years since secular trends in the nature and sources of air pollution limit the number of years during which air pollution studies can be regarded as answering analogous scientific questions. For shorter-term studies, there is additional variability associated with the choice of time period, and it would be reasonable to regard the different air pollution surfaces at different times as arising from a random spatial process. However, with data from only a single time period and a misspecified mean model, it is impossible to identify both the fixed and random components of the spatial residuals, so we do not incorporate a random effect in our formulation. Our measurement error correction is based on asymptotic approximations derived for linear regression for the exposure and health models. Real world applications often involve additional complications, suggesting further research directions. On the exposure model side, our methods can be extended to penalized models and full-rank models such as universal kriging and related spatio-temporal models that are often used in environmental studies. Nonlinear models such as logistic regression and Cox regression are commonly used for the second stage in health studies, and it is also important to consider the implications of misspecification in the second-stage model, in addition to the exposure model. Two-stage analyses to date have taken the approach of optimizing the exposure model for exposure prediction accuracy, based on the implicit assumption that this will also lead to optimal second-stage health effect inference. In previous work we have shown that optimizing the exposure model for prediction accuracy can be sub-optimal for health effects estimation \citep{Szpiro2011epi}. An interesting avenue for future research involves developing methods to optimize the exposure model for estimation of the health effect of interest in the second-stage model. 
A final direction for additional research that is of great interest in air pollution epidemiology is to extend these methods for measurement error correction when assessing health effects of multiple exposures or mixtures of exposures. When predictions for more than one exposure are used in a health model, there is the possibility of a form of omitted variable bias from components of variability that are missing from the predictions of the exposures. \section*{Acknowledgments} AAS was supported by the United States Environmental Protection Agency through R831697 and RD-83479601 and by the National Institute of Environmental Health Sciences through R01-ES009411 and 5P50ES015915. CJP was supported by the National Institute of Environmental Health Sciences through ES017017-01A1. We thank Brent Coull and Lianne Sheppard for comments on the manuscript and Joel Kaufman, Victor Van Hee, and the MESA Air data team for assistance with MESA Air data. Although the research described in this presentation has been funded wholly or in part by the United States Environmental Protection Agency through R831697 and RD-83479601 to the University of Washington, it has not been subjected to the Agency's required peer and policy review and therefore does not necessarily reflect the views of the Agency and no official endorsement should be inferred. \clearpage \newpage
{ "attr-fineweb-edu": 1.53125, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbi05qsFAfmG_YRef
\section{Introduction} \label{intro} Graph neural networks (GNNs) are the state-of-the-art approach to molecular property prediction \citep{duvenaud2015convolutional, gilmer2017neural, wu2018moleculenet, yang2019analyzing}. A GNN operates on the graph structure of a molecule in two phases. In the message passing phase, a molecular representation is learned by passing messages between atom or bond states. In the readout phase, a feed forward network (FFN) converts this representation into a prediction. \textbf{Motivation}. The particular challenges of molecular property prediction marry well with the potential advantages of Bayesian learning. Generalisation is made difficult in cheminformatics by the concept of a molecular scaffold: the structural core of a compound to which functional groups are attached. Highly parameterised GNNs are prone to over-fit to training scaffolds, learning a poor molecular representation and failing to generalise at test time \citep{yang2019analyzing}. Models are at risk of returning over-confident predictions when operating on new scaffolds, conveying little of the uncertainty associated with a new chemical space. Poorly quantified uncertainty makes it especially challenging to evaluate model robustness and out-of-domain applicability \citep{hirschfeld2020uncertainty}. We believe the best answer to these deficiencies is Bayesian modelling. Whereas a `classical' neural network bets everything on one hypothesis, a Bayesian approach builds a predictive distribution by considering every possible setting of parameters. Bayesian marginalisation can improve the calibration \citep{maddox2019simple} and accuracy \citep{izmailov2019subspace} of deep neural networks underspecified by data. \textbf{Related work}. Two recent studies are particularly pertinent. Firstly, \cite{hirschfeld2020uncertainty} benchmark a set of methods for uncertainty quantification in molecular property prediction using the same GNN architecture that we employ in this paper. They find no consistently leading method, though replacing readout with a Gaussian process (GP) or random forest leads to reasonable performance across evaluation metrics. We extend the work of \citeauthor{hirschfeld2020uncertainty} by considering four additional Bayesian methods (SWAG, SGLD, BBP and DUN). Secondly, \cite{hwang2020benchmark} benchmark a set of Bayesian GNNs for molecular property prediction, assessing calibration and predictive accuracy across four classification datasets. They find that Stochastic Weight Averaging (SWA) and SWA-Gaussian (SWAG) demonstrate superior performance across metrics and datasets. We extend the work of \citeauthor{hwang2020benchmark} by (i) working in the regression setting where aleatoric and epistemic uncertainty are more explicitly separable, (ii) directly comparing a Bayesian readout phase with a full Bayesian GNN, and (iii) featuring a downstream molecular search experiment. We release PyTorch code at \url{https://github.com/georgelamb19/chempropBayes}. \section{Model and Data} \label{model} \textbf{The D-MPNN}. Our GNN is a directed message passing neural network (D-MPNN) \citep{yang2019analyzing}, a variant of the MPNN family \citep{gilmer2017neural}. The D-MPNN consistently matches or outperforms previous GNN architectures across datasets and splits types \citep{yang2019analyzing}. It has also demonstrated promise in a proof-of-concept antibiotic discovery pipeline \citep{stokes2020deep}. The D-MPNN is the core of the Chemprop project (\url{https://chemprop.readthedocs.io}). 
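To fix ideas, the two phases can be sketched in a few lines of PyTorch. The toy below uses undirected, atom-based messages and is deliberately simplified; it is not the directed, bond-based message passing of the D-MPNN itself:
\begin{verbatim}
import torch
import torch.nn as nn

class ToyMPNN(nn.Module):
    # Schematic message passing + FFN readout (not Chemprop's D-MPNN).
    def __init__(self, atom_dim, hidden, depth, num_tasks):
        super().__init__()
        self.embed = nn.Linear(atom_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)   # shared across iterations
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, num_tasks))
        self.depth = depth

    def forward(self, atom_feats, adj):
        # atom_feats: (n_atoms, atom_dim); adj: (n_atoms, n_atoms) 0/1
        h = torch.relu(self.embed(atom_feats))
        for _ in range(self.depth):            # message passing phase
            m = adj @ self.msg(h)              # aggregate neighbour messages
            h = torch.relu(h + m)              # update atom hidden states
        mol_vec = h.mean(dim=0)                # molecular representation
        return self.readout(mol_vec)           # readout phase
\end{verbatim}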
\textbf{QM9}. We perform all experiments on QM9. QM9 contains 12 geometric, energetic, electronic and thermodynamic properties for 133,885 small molecules \citep{ramakrishnan2014quantum}. Assessments of uncertainty calibration in Bayesian deep learning tend to focus on classification tasks \citep{lakshminarayanan2017simple, maddox2019simple}. We complement previous studies by exploring calibration and uncertainty quantification in a real-valued regression setting. \section{Methods} \label{methods} We implement eight separate methods. \textbf{MAP}: Our baseline is classical \textit{maximum a posteriori} training, in which we find the regularised maximum likelihood solution. \textbf{GP}: We replace the final layer of the readout FFN with a variational GP and train the resulting model end-to-end (deep kernel learning). The GP is a non-parametric Bayesian method which captures uncertainty in functional form. \textbf{DropR}: MC dropout uses a spike and slab variational distribution to view test time dropout as approximate variational inference \citep{gal2016dropout}. `DropR' is the implementation of MC dropout across readout FFN layers. \textbf{DropA}: We separately implement MC dropout over the full GNN. \textbf{SWAG}: Stochastic Weight Averaging (SWA) \citep{izmailov2018averaging} computes an average of SGD iterates with a high constant learning rate schedule, providing improved generalisation. We implement SWA-Gaussian \citep{maddox2019simple}, which builds on SWA by computing a `low rank plus diagonal' covariance. \textbf{SGLD}: Stochastic Gradient Langevin Dynamics \citep{welling2011bayesian} uses first order Langevin dynamics in the stochastic gradient setting. SGLD is a Markov Chain Monte Carlo (MCMC) method. Within this class of methods Hamiltonian Monte Carlo (HMC) \citep{Neal94} is the gold standard, but requires full gradients which are intractable for modern neural networks. \cite{chen2014stochastic} propose Stochastic Gradient HMC (SGHMC), but in practice tuning this method can be challenging. \textbf{BBP}: Variational Bayesian (VB) methods fit a variational approximation to the true posterior by minimising a Kullback–Leibler (KL) divergence or equivalently maximising an evidence lower bound (ELBO). Bayes by Backprop (BBP) \citep{blundell2015weight} assumes a fully factorised Gaussian posterior and utilises a reparameterisation trick to sample gradients; we also use `local reparameterisation' \citep{kingma2015variational} as a variance reduction technique. \textbf{DUN}: As an addition to the set of established methods above we implement a novel depth uncertainty network (DUN), which permits inference over both weights and the number of message passing iterations. Our DUN combines Bayes by Backprop with the `vanilla' DUN proposed by \cite{antoran2020depth}, and is introduced in Appendix \ref{DUN}. For context, Bayesian modelling is reviewed in Appendix \ref{bayesian_modelling}. \section{Experiments} \label{experiments} \subsection{Predictive accuracy and calibration} \textbf{Framework}. We perform 5 runs for each of the 8 methods, corresponding to different random seeds for weight initialisation. The runs enable an analysis of both `single' models and model ensembles, the latter incorporating multiple posterior basins of attraction. We analyse single models by averaging \textit{scores} across runs, computing a mean and standard deviation. We form a model ensemble by averaging \textit{predictive distributions} across runs, constituting a second layer of Bayesian model averaging. 
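Since, as described below, each posterior sample yields a Gaussian predictive component, this averaging amounts to forming a uniform mixture of Gaussians, whose first two moments follow from the law of total variance. A sketch (the array layout is illustrative):
\begin{verbatim}
import numpy as np

def mixture_moments(means, variances):
    # means, variances: (n_runs, n_samples, n_molecules) parameters of
    # the Gaussian components, one per posterior sample per run.
    mu = means.mean(axis=(0, 1))
    # total variance = mean aleatoric variance (within components)
    #                + epistemic variance of the component means
    var = variances.mean(axis=(0, 1)) + means.var(axis=(0, 1))
    return mu, var
\end{verbatim}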
For calibration analysis we model aleatoric noise; a scalar noise per QM9 property is learned within the D-MPNN. Each posterior sample yields an individual Gaussian predictive distribution, representing aleatoric uncertainty. The full Bayesian predictive distribution is a mixture of Gaussians, representing aleatoric and epistemic uncertainty. We create our own data split with [train/val/test] proportions $[0.64,0.16,0.20]$ using Chemprop's `scaffold split' function, which partitions molecules into bins based on their Murcko scaffold. Method implementation is detailed in Appendix \ref{implementation}. \textbf{Evaluation}. We measure the mean absolute error (MAE) of Bayesian predictive means and rank methods for each of the 12 QM9 tasks. The mean rank across 12 tasks is our chief accuracy evaluation metric. To assess calibration we generalise reliability diagrams \citep{guo2017calibration} to the regression setting and aggregate QM9 tasks. We consider confidence intervals (CIs) around the Bayesian predictive mean. CI size is plotted on the $x$-axis. On the $y$-axis we plot the proportion of molecules in our test set falling within each CI, minus CI size. A perfectly calibrated model is represented by the line $y=0$. We summarise performance on the reliability diagram by computing miscalibration area (MA); the average absolute difference between confidence and accuracy. \textbf{Results (accuracy)}. Results are presented in Table \ref{table:main} and Appendix \ref{granular}. The leading methods in both single and ensemble settings are BBP, SGLD, SWAG and GP. SGLD and SWAG may suffer slightly versus BBP because they employ vanilla SGD optimisation rather than Adam. SGLD has a higher rank in the single model setting where it is distinguished by its ability to explore multiple posterior modes. Note that the GP captures uncertainty only in readout. Dropout performance is poor, which perhaps could be attributed to an insufficiently large network. DUN accuracy results should be considered in light of the fact that the variational posterior over depths collapses to $d=5$ (we consider depths of $1$ to $5$), indicating that it has likely failed to capture the true posterior correctly. \textbf{Results (calibration)}. Reliability diagrams are shown in Figure \ref{fig:reliability_show} and Appendix \ref{reliability_appendix}. With original Gaussian likelihoods we observe pathological underconfidence across methods. We find that this universal underconfidence is driven by overestimated aleatoric uncertainty, a consequence of heavy-tailed residual distributions containing extreme outliers. We improve calibration by fitting post-hoc $t$-distribution likelihoods to training residuals. MA results for post-hoc $t$-distribution likelihoods are shown in Table \ref{table:main}. The post-hoc results motivate re-training with a gamma prior over the precision of our Gaussian likelihood function; placing a prior Gam$(\tau|a,b)$ over $\tau$ and integrating out the precision we obtain a marginal distribution which, after conventional reparameterisation, equates to the $t$-distribution (see \citet[section 2.3.7]{bishop2006pattern}). We leave this to future work. \subsection{Molecular search} \textbf{Framework.} We follow the approximate Bayesian optimisation setup in \cite{hernandez2017parallel}, running both Thompson sampling and greedy trials. Given an unlabelled dataset, the goal is to discover molecules with the largest values of some target property in as few evaluations as possible. 
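One batch iteration, spelled out in prose immediately below, reduces to a few lines; the \verb+sample_regressor+ helper, which draws a single posterior sample and returns a deterministic predictor, is hypothetical:
\begin{verbatim}
import numpy as np

def thompson_batch(pool, sample_regressor, S=50):
    # One batch addition: S posterior draws, one argmax per draw.
    # Duplicate selections across draws are not handled in this sketch.
    batch = []
    for _ in range(S):
        f = sample_regressor()                # deterministic regressor
        preds = np.array([f(mol) for mol in pool])
        batch.append(int(np.argmax(preds)))   # best unlabelled molecule
    return batch    # indices to query and add to the labelled set
\end{verbatim}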
At each Thompson iteration we: (i) draw $S$ posterior samples to obtain $S$ deterministic regressors; (ii) for each sample find the molecule with the largest predicted target value, yielding a total batch of $S$ molecules; (iii) query said batch and add it to the labelled training set. Our dataset is a 100k subset of QM9 and our target is the first QM9 property, `mu'. We begin with 5k labelled molecules (selected uniformly at random) and make 30 batch additions with $S=50$. We perform 5 runs per method, corresponding to different base labelled sets and random weight initialisations. \begin{wrapfigure}{r}{0.47\textwidth} \begin{center} \includegraphics[width=0.47\textwidth]{figs/thompson.pdf} \end{center} \caption{Search trajectories for Thompson sampling. Fractions are averaged over 5 runs.} \label{fig:thompson} \end{wrapfigure} \textbf{Evaluation}. After each batch addition we measure the fraction of the top 1\% of molecules discovered. The final metric used to compare methods is the fraction discovered following 30 batch additions, at the close of the experiment. At this point there are 6.5k labelled molecules. \textbf{Results}. Search scores are presented in Table \ref{table:main}. Thompson sampling trajectories are shown in Figure \ref{fig:thompson} alongside a Monte Carlo baseline. We omit DUN given the collapse of the posterior over depths. As explained in \citet{hernandez2017parallel}, Thompson sampling uses epistemic variance in the Bayesian predictive distribution to perform exploration. In contrast, greedy search selects molecules using the Bayesian predictive mean and exercises pure exploitation. Across methods we find no notable difference between Thompson and greedy search scores. This likely reflects reduced epistemic uncertainty; having randomly selected a subset of 5k molecules to initially label we are operating `in-distribution'. We considered smaller initially labelled sets but found BBP performed particularly poorly. Tuning BBP, SGLD and SWAG without a large validation set is challenging. In contrast, dropout methods and the GP demonstrate robustness to dataset size and hyperparameter settings. The particular success of dropout might also be attributed to its regularising effect. \section{Discussion and Future Work} \label{discussion} The most performant methods involve Bayesian message passing as well as a Bayesian FFN. We conclude that there is meaningful and useful epistemic uncertainty to be captured in learned molecular representations as well as in readout. When applied to the full QM9 dataset, BBP, SGLD and SWAG enhance accuracy versus a MAP baseline. However, in the context of molecular search the sensitivity of these methods is limiting. Our recommendations follow the observed trends. For precise property prediction with $>10,000$ labelled molecules we suggest experimenting with BBP, SGLD, and SWAG. For molecular search, the robustness of dropout and deep kernel learning to different dataset sizes and hyperparameter settings is advantageous. Our results suggest single model SGLD is the best method for obtaining calibrated uncertainty estimates, though this is likely to be a task-specific phenomenon; extreme outlying residuals are still affecting calibration results despite post-hoc $t$-distribution likelihoods. 
We identify three avenues for future work: (i) benchmarking Bayesian GNNs on the complete MoleculeNet dataset collection (see \href{http://moleculenet.ai/datasets-1}{\underline{here}}); the majority of these datasets contain $<10,000$ molecules; (ii) adapting our D-MPNN by placing a gamma prior over the Gaussian likelihood precision, increasing network size for dropout, and experimenting with larger depths (following the DUN posterior collapse we trial depths up to $d=8$ and find accuracy increases monotonically); and (iii) incorporating meta-learning into Bayesian GNNs to improve initialisation in search tasks; meta-initialisations enable rapid learning in low resource settings \citep{nguyen2020meta}. \section{Reliability Diagrams} \label{reliability_appendix} Figure \ref{fig:reliability_set} contains a complete set of reliability diagrams. Each row (pair of diagrams) corresponds to a different likelihood function. The first pair of diagrams are generated with Gaussian likelihoods; Gaussian noise parameters are learned when we train the D-MPNN. We observe pathological underconfidence across methods. The second pair of diagrams are generated with post-hoc $t$-distribution likelihoods and demonstrate improved calibration. The third pair of diagrams are generated without modelling aleatoric noise. In this case, the Bayesian predictive distribution is approximated as a single Gaussian rather than a mixture. We fit the single Gaussian to $S \times N$ predictive means, after drawing $S$ posterior samples from an ensemble of $N$ models. The third row demonstrates that overestimated aleatoric uncertainty drives the underconfidence in the first row. The elbow shape towards the end of the reliability lines in the third row points to the presence of outlying residuals. \vspace{5mm} \begin{figure}[h] \centering \includegraphics[width=0.52\textwidth]{figs/calib_g_s.pdf} \hfill \includegraphics[width=0.4428\textwidth]{figs/calib_g_e.pdf}\\ \vspace{3mm} \includegraphics[width=0.52\textwidth]{figs/calib_t_s.pdf} \hfill \includegraphics[width=0.4428\textwidth]{figs/calib_t_e.pdf}\\ \vspace{3mm} \includegraphics[width=0.52\textwidth]{figs/calib_n_s.pdf} \hfill \includegraphics[width=0.4428\textwidth]{figs/calib_n_e.pdf} \caption{Reliability diagrams for \textbf{single models} (left column) and \textbf{model ensembles} (right column). Each row (pair of diagrams) corresponds to a different likelihood function. Each line is the average of 5 runs.} \label{fig:reliability_set} \end{figure} \section{Granular Predictive Accuracy Results} \label{granular} Tables \ref{table:acc_ens} and \ref{table:acc_single} present granular predictive accuracy results. MAEs are scaled by task such that predicting with the mean of test targets would yield an MAE score of 100. The primary metric in these tables is `mean rank', calculated (per run) by averaging the rank of a method across the 12 tasks. Using mean rank ensures we evenly weight the 12 tasks. To use MAE averaged across tasks (the `All' row in the tables) would be to give a higher weighting to more difficult tasks. \begin{table}[h] \vspace{1cm} \caption{(split table). MAE for MAP and Bayesian \textbf{single models}. Means and standard deviations are computed across 5 runs. 
Results are scaled by task such that predicting with the mean of test targets would yield an MAE score of 100.\vspace{3mm}} \centering \label{table:acc_single} \begin{tabular}{l r r r r r r r r} \toprule \multirow{2}{*}{Property} & \multicolumn{2}{c}{\hspace{0.6cm}MAP} & \multicolumn{2}{c}{\hspace{0.6cm}GP} & \multicolumn{2}{c}{\hspace{0.6cm}DropR} & \multicolumn{2}{c}{\hspace{0.6cm}DropA}\\ \cmidrule{2-9} \hspace{1.0cm} & \hspace{0.6cm}mean & std & \hspace{0.6cm}mean & std & \hspace{0.6cm}mean & std & \hspace{0.6cm}mean & std\\ \midrule mu & 48.41 & 0.43 & 48.95 & 0.29 & 50.42 & 0.67 & 52.77 & 0.37 \\ alpha & 7.89 & 0.15 & 8.01 & 0.31 & 9.32 & 0.22 & 10.12 & 0.12 \\ HOMO & 30.06 & 0.27 & 31.13 & 0.40 & 30.76 & 0.29 & 33.38 & 0.40 \\ LUMO & 11.08 & 0.15 & 11.44 & 0.04 & 11.86 & 0.36 & 12.34 & 0.21 \\ gap & 15.35 & 0.11 & 15.87 & 0.12 & 16.07 & 0.25 & 17.24 & 0.38 \\ R2 & 15.44 & 0.17 & 15.74 & 0.28 & 16.86 & 0.27 & 17.53 & 0.20 \\ ZPVE & 1.93 & 0.06 & \textbf{1.74} & 0.14 & 4.72 & 0.27 & 4.60 & 0.20 \\ Cv & 6.94 & 0.04 & 7.05 & 0.21 & 9.26 & 0.29 & 9.50 & 0.17 \\ U0 & 1.88 & 0.09 & \textbf{1.55} & 0.19 & 3.58 & 0.15 & 3.76 & 0.10 \\ U & 1.88 & 0.08 & \textbf{1.55} & 0.19 & 3.58 & 0.15 & 3.76 & 0.10 \\ H & 1.88 & 0.08 & \textbf{1.55} & 0.19 & 3.58 & 0.15 & 3.76 & 0.10 \\ G & 1.88 & 0.09 & \textbf{1.55} & 0.18 & 3.58 & 0.15 & 3.76 & 0.10 \\ \textit{All} & 12.05 & 0.05 & 12.18 & 0.14 & 13.63 & 0.15 & 14.37 & 0.16 \\ \midrule Mean rank & 4.08 & 0.16 & 3.87 & 0.42 & 7.05 & 0.15 & 7.87 & 0.19 \\ \midrule \\ \\ \midrule \multirow{2}{*}{Property} & \multicolumn{2}{c}{\hspace{0.6cm}SWAG} & \multicolumn{2}{c}{\hspace{0.6cm}SGLD} & \multicolumn{2}{c}{\hspace{0.6cm}BBP} & \multicolumn{2}{c}{\hspace{0.6cm}DUN}\\ \cmidrule{2-9} & mean & std & mean & std & mean & std & mean & std \\ \midrule mu & 48.03 & 0.33 & 47.80 & 0.33 & \textbf{47.72} & 0.47 & 48.24 & 0.29 \\ alpha & 7.58 & 0.03 & 7.49 & 0.10 & \textbf{7.43} & 0.15 & 7.84 & 0.15 \\ HOMO & 29.49 & 0.25 & 28.93 & 0.32 & 29.12 & 0.25 & \textbf{28.64} & 0.17 \\ LUMO & 10.79 & 0.07 & \textbf{10.49} & 0.08 & 10.53 & 0.13 & 10.54 & 0.02 \\ gap & 14.98 & 0.08 & 14.63 & 0.10 & 14.68 & 0.08 & \textbf{14.59} & 0.22 \\ R2 & 15.28 & 0.11 & 15.06 & 0.18 & \textbf{15.02} & 0.08 & 15.41 & 0.18 \\ ZPVE & 1.90 & 0.04 & 1.95 & 0.09 & 1.88 & 0.11 & 2.13 & 0.11 \\ Cv & 6.80 & 0.05 & 6.79 & 0.10 & \textbf{6.64} & 0.08 & 6.94 & 0.14 \\ U0 & 2.04 & 0.20 & 2.30 & 0.22 & 1.76 & 0.08 & 2.42 & 0.21 \\ U & 2.01 & 0.18 & 2.30 & 0.22 & 1.76 & 0.08 & 2.42 & 0.22 \\ H & 2.02 & 0.19 & 2.30 & 0.22 & 1.76 & 0.09 & 2.42 & 0.22 \\ G & 2.04 & 0.19 & 2.30 & 0.22 & 1.76 & 0.09 & 2.42 & 0.22 \\ \textit{All} & 11.91 & 0.07 & 11.86 & 0.13 & \textbf{11.67} & 0.09 & 12.00 & 0.06 \\ \midrule Mean rank & 3.55 & 0.12 & 3.23 & 0.51 & \textbf{1.95} & 0.40 & 4.40 & 0.25 \\ \bottomrule \end{tabular} \end{table} \section{Implementation of Methods} \label{implementation} In this section we describe the implementation of methods for the predictive accuracy and calibration experiment. Unless otherwise specified, hyperparameters are set to optimise validation MAE (averaged across QM9 tasks) following grid-search. An exhaustive list of the hyperparameter settings for all our experiments can be found in the file \verb+chempropBayes/scripts/bayesHyp.py+. \subsection{MAP} In order to learn aleatoric noise we instantiate a log standard deviation parameter within the D-MPNN model (the log ensures non-negativity). This parameter is a set of 12 scalars, one for each of the QM9 tasks. 
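A minimal sketch of how such a parameter enters the Gaussian negative log likelihood (schematic, not our exact loss code):
\begin{verbatim}
import math
import torch

def gaussian_nll(preds, targets, log_noise):
    # preds, targets: (batch, 12); log_noise: (12,) learned parameter,
    # so sigma = exp(log_noise) is positive for every task.
    inv_var = torch.exp(-2.0 * log_noise)
    nll = 0.5 * ((preds - targets) ** 2 * inv_var
                 + 2.0 * log_noise + math.log(2.0 * math.pi))
    return nll.sum(dim=1).mean()   # mean over the batch, sum over tasks
\end{verbatim}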
We henceforth refer to this parameter as `log noise'. Our full loss object is the negative log likelihood plus the negative log prior. A function to compute the former takes as input predictions, targets and log noise. We place a zero-mean Gaussian prior over each D-MPNN weight and control $\sigma_{\text{prior}}$ via a weight decay hyperparameter $\lambda$ inside our optimiser. In practice we scale the negative log likelihood to a manageable order of magnitude by dividing by the batch size. This rescales the relationship between the weight decay and our prior sigma; precisely, $\lambda = 1 / (\sigma_{\text{prior}}^{2} N)$, where $N$ is the training set size. The default batch size in Chemprop is 50 and we find this works well; we use this batch size across all methods. Our optimiser is Adam. Following grid search we set the weight decay to $\lambda = 0.01$. Chemprop utilises a `noam' learning rate scheduler with piecewise linear increase and exponential decay (based on the scheduler in \citet{vaswani2017attention}). We train for 200 epochs, using the `noam' scheduler for the first 100. We linearly increase the learning rate from $lr_{\text{min}}$ to $lr_{\text{max}}$ over 2 epochs and decay back to $lr_{\text{min}}$ over the following 98, from which point we remain at $lr_{\text{min}}$. Following grid search we set $lr_{\text{min}}$ = 1e-4 and $lr_{\text{max}}$ = 1e-3. The saved MAP model following each training run is that which achieves the best validation accuracy. We also apply this selection procedure to GP, DropR, DropA, BBP and DUN. \textbf{Architecture}. We grid search over 24 architectures, exploring combinations of hidden size $h \in \{300,500\}$, message passing depth $d \in \{ 2, 3, 4, 5 \}$ and number of FFN readout layers $L \in \{ 2, 3, 4 \}$. We find that $(h,d,L) = (500,5,3)$ achieves optimal accuracy. We choose not to investigate larger hidden sizes or depths to manage compute requirements. For context, the Chemprop defaults are $(h,d,L) = (300,3,2)$. We maintain a fixed architecture across all methods. \textbf{Standardising features and targets}. Before training the D-MPNN we standardise atom and bond features in the training set, and apply the same transformation to validation and test molecules. We also standardise training targets, later applying the reverse transform when making predictions on validation or test molecules. Both these standardisations occur across all methods. \subsection{GP} Each GP run is initialised with the output of the corresponding MAP run (e.g. GP run 1 is initialised with the output of MAP run 1). We take the pre-trained MAP D-MPNN and replace the final layer of readout with 12 batched stochastic variational GPs (SVGPs), one per task. We train the resulting architecture end-to-end. This end-to-end training is known as deep kernel learning (DKL). We implement the GPs in GPyTorch, following the example SVGP and DKL implementations as a guide (\url{https://docs.gpytorch.ai}). Our variational distribution is a multivariate normal (`CholeskyVariationalDistribution') with batch shape 12. We use a multitask variational strategy with 1200 inducing points based on methodology in \citet{hensman2013gaussian}. The variational strategy tells us how to transform a distribution $q(u)$ over the inducing point values to a distribution $q(f)$ over the latent function values for some input $x$. This is the maximum feasible number of inducing points given compute constraints (corresponding to 10--15 minutes per epoch on a single GPU node).
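A sketch of such a batched SVGP head, following the example implementations in the GPyTorch documentation (the exact configuration in our released code may differ):
\begin{verbatim}
import torch
import gpytorch

class BatchedSVGP(gpytorch.models.ApproximateGP):
    # Batch of 12 independent SVGPs over the penultimate FFN features.
    def __init__(self, inducing_points, num_tasks=12):
        # inducing_points: (num_tasks, 1200, feature_dim)
        batch = torch.Size([num_tasks])
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=batch)
        strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, var_dist,
                learn_inducing_locations=True),
            num_tasks=num_tasks)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=batch)
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=batch), batch_shape=batch)

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))
\end{verbatim}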
We note that the closeness of the variational GP approximation to the true posterior increases only with the log of the number of inducing points \citep{matthews2016sparse}. Each GP is defined by a constant mean and RBF kernel. We train GP hyperparameters, a scalar aleatoric noise per task and D-MPNN weights by minimising a negative variational ELBO. We train with a batch size of 50 for 200 epochs, following the same learning rate profile as for MAP. We use the Adam optimiser. For fair comparison with other methods we regularise D-MPNN weights with a weight decay of $0.01$. \subsection{DropR, DropA} The dropout models follow a similar training procedure to MAP; here we highlight differences. For DropR we activate dropout layers following the D-MPNN atom representation step, and following every FFN layer except for the output layer. For DropA we additionally activate dropout layers following D-MPNN hidden state updates. For both DropR and DropA we grid search over $p \in \{0.1, 0.2, 0.3, 0.4, 0.5 \}$ (these are \textit{dropout} probabilities). For both DropR and DropA the optimal probability is $0.1$. We run the final models with this dropout probability during training and testing. Both dropout methods take significantly longer to converge than MAP. This is expected given the noise inherent in dropout. We train for 300 epochs, extending the MAP learning rate profile by 100 additional epochs at a fixed learning rate of $lr_{\text{min}}$ = 1e-4. At test time we draw 30 samples. \subsection{SWAG} The SWAG implementation is based on code attached to the original SWAG paper \citep{maddox2019simple}. We first build a wrapper around the D-MPNN, referring to the latter as our `base' model. The wrapper contains a list of the parameter objects in the base model (where a parameter `object' is, for example, a weight matrix). Within the wrapper list (which includes log noise) we register buffers for each parameter object to store first and second uncentred moments. During training we `collect' models by looping through parameter objects in the base model and updating buffers in the wrapper list. At test time we generate new sample parameters \textit{directly within the wrapper list}. Because the parameter objects in the list point to the same place in memory as the parameters in the base model, base model parameters also change when we sample. The starting point for SWAG training is the pre-trained MAP model. We run SWAG training for 100 epochs, collecting one model per epoch after 20 warm-up epochs (thus collecting 80 models in total). We limit the rank of our estimated covariance matrix by using only the last $K=20$ models to compose a deviation matrix (the same setting as in the original paper). For fair comparison with other methods we set weight decay to $\lambda=0.01$. The performance of the SWAG method is sensitive to learning rates. To prevent spikes in loss during training we make three changes versus MAP. Firstly, we lower the main learning rate. In practice 2e-5 is the highest rate with which we can achieve reasonable validation accuracy during training (SWAG should be run with a constant `high' learning rate). Secondly, we reduce the learning rate even further for log noise; at all times it is one fifth of the learning rate applied to other model parameters. 
Thirdly, at the start of SWAG training we gradually increase learning rates from 1e-10 up to their maxima over 5 epochs, using a cosine scheduler (the scheduler is not particularly important; we use a cosine scheduler for alignment with SGLD). SWAG's sensitivity is a result of using the SGD optimiser as opposed to Adam (which enjoys momentum and adaptive learning rates). We try SWAG with momentum $\rho \in \{ 0.5, 0.9, 0.99 \}$ but see more volatile loss profiles, so momentum is kept at zero. At test time we draw 30 samples. \subsection{SGLD} SGLD parameter updates are equivalent to MAP updates with the addition of Gaussian noise. As with MAP, we scale the SGLD loss to a manageable order of magnitude by dividing by the training set size, $N$. This division effectively rescales the SGLD learning rate to be $\epsilon / N$. It follows that we should also divide the variance of our added Langevin noise by $N$. Denoting the batch size as $B$, our parameter update equation is: \begin{equation*} \begin{split} \Delta \omega_{t} &= \frac{\epsilon}{2N} \bigg( \nabla \log p(\omega_{t}) + \frac{N}{B} \sum_{i=1}^{B} \nabla \log p(\mathbf{y}_{i}|\mathbf{x}_{i}, \omega_{t}) \bigg) + \eta_{t},\\ \eta_{t} &\sim \mathcal{N}(0,\epsilon / N). \end{split} \end{equation*} We implement SGLD as an optimiser which inherits from PyTorch's SGD base class. The SGLD optimiser loops through parameter groups and adds two gradient terms to the already-computed log likelihood gradients. Firstly, it adds the gradient of the log prior; we parameterise this via a weight decay and set the weight decay to $\lambda = 0.01$ for fair comparison with other methods. Secondly, the optimiser adds appropriately scaled Langevin noise. The starting point for SGLD is the pre-trained MAP model. We run SGLD with a cyclical cosine learning rate schedule, following the proposal of \cite{zhang2019cyclical}. The idea is that larger steps discover new posterior modes during a period of exploration (effectively a burn-in phase), and that smaller steps characterise each mode. We use PyTorch's `OneCycleLR' scheduler and configure a single cycle as follows: we cosine anneal the learning rate from 1e-10 to a maximum learning rate of 1e-4 over 5 epochs, and then cosine anneal from this maximum to a minimum learning rate of 1e-5 over the following 45 epochs. At all times the learning rate for log noise is one fifth of the main learning rate. We save a posterior sample at the end of each 50 epoch cycle. Given this relatively expensive serial sampling procedure, we collect only 20 samples for SGLD. \subsection{BBP} Again, we scale the loss to be a per-example measure. Given batch size $B$ the loss function is: \begin{equation*} \mathcal{L}(\omega,\theta) = \frac{1}{N} \bigg( \log q(\omega|\theta) - \log p(\omega) - \frac{N}{B} \sum_{i=1}^{B} \log p(\mathbf{y}_{i}|\mathbf{x}_{i},\omega) \bigg). \end{equation*} In practice, we average this loss across 5 forward passes before every backward pass to reduce variance. To implement BBP, we define a Bayesian linear layer class to replace the existing linear layers in the D-MPNN. Within the Bayesian linear layer we implement the `local reparameterisation trick' \citep{kingma2015variational}. This involves calculating the mean and variance of activations in closed form and sampling activations instead of weights. With the local reparameterisation trick the variance of our MC ELBO estimator scales as $1/B$; sampling weights directly, it scales as $(B-1)/B$ (with $B$ the batch size).
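A sketch of such a layer (the $\rho$ initialisation range anticipates the settings given below; everything else is schematic, not our exact layer):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinearLR(nn.Module):
    # Mean-field Bayesian linear layer with local reparameterisation:
    # activations, rather than weights, are sampled.
    def __init__(self, n_in, n_out, rho_init=(-5.5, -5.0)):
        super().__init__()
        self.w_mu = nn.Parameter(0.05 * torch.randn(n_in, n_out))
        self.w_rho = nn.Parameter(torch.empty(n_in, n_out).uniform_(*rho_init))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.empty(n_out).uniform_(*rho_init))

    def forward(self, x):
        w_var = F.softplus(self.w_rho) ** 2    # sigma = log(1 + exp(rho))
        b_var = F.softplus(self.b_rho) ** 2
        act_mu = x @ self.w_mu + self.b_mu     # closed-form activation mean
        act_std = torch.sqrt(x ** 2 @ w_var + b_var)   # and std deviation
        return act_mu + act_std * torch.randn_like(act_mu)
\end{verbatim}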
Within each Bayesian linear layer we also compute the KL divergence in closed form using a result from \citet[Appendix B]{kingma2013auto}. Each layer returns standard output as well as a KL. We sum the latter across layers to compute a total KL. We initialise BBP from the MAP solution and train for 100 epochs at a constant learning rate of 1e-4. We set $\sigma_{\text{prior}}=0.05$ which is approximately equivalent to a weight decay of $0.01$ given our scaled loss. Initialising $\rho$ parameters in the correct range is important for reasonable convergence; we initialise uniformly at random between $-5.5$ and $-5$. Each training run we save the BBP model with the best validation accuracy, where validation accuracy is calculated for the mean of the variational posterior. At test time we draw 30 samples. \subsection{DUN} The DUN method is implemented on top of BBP. Our loss is the negative of equation (\ref{eq:dun_loss}) (though we compute an exact BBP KL). As with previous methods, we rescale the loss by dividing by the training set size. The variational categorical distribution $q(d|\alpha)$ is learned as a set of parameters within the D-MPNN, where we use logs to ensure non-negativity. In a single forward pass our DUN D-MPNN returns a BBP KL (the sum of KLs computed in closed form within Bayesian linear layers), a KL of categorical distributions (also computed exactly) and predictions corresponding to five different depths. Our categorical prior is the uniform distribution. Categorical distributions are over $d \in \{1,2,3,4,5 \}$. Note that in Chemprop the depth $d$ counts the number of message passing steps plus the hidden state initialisation step. We do not exceed $d=5$ for fair comparison; improved DUN performance versus other methods may otherwise be caused by the inclusion of a deeper sub-model alone. Recall that when selecting a model architecture we only grid search up to $d=5$. We train DUN models for 350 epochs. For the first 100, BBP $\rho$ parameters are frozen at zero (thus our variational posterior over weights is a point mass) and we freeze $q(d|\alpha)$ to be a uniform distribution. This first phase of training is designed to minimise the chance of the variational categorical distribution later collapsing to a single depth. For the first 100 epochs we use the `noam' scheduler with MAP learning rates. After 100 epochs we initialise $\rho$ as in the BBP method and unfreeze our variational categorical parameters. From 100 epochs onward we use a constant learning rate of 1e-4. We save the DUN model achieving the best validation accuracy. At test time we generate two sets of samples. We draw 30 samples from the marginal posterior over weights, each of which predicts by taking an expectation w.r.t. depth; we use these samples to evaluate DUN accuracy. We also draw 100 samples from the joint posterior over weights and depth, using these to assess uncertainty calibration; the larger number of samples is necessary to minimise discretisation error. \begin{table}[h] \vspace{1.6cm} \caption{MAE for MAP and Bayesian \textbf{model ensembles}. The 5 runs in Table \ref{table:acc_single} constitute an ensemble of 5 models. 
Results are scaled by task such that predicting with the mean of test targets would yield an MAE score of 100.\vspace{3mm}} \centering \label{table:acc_ens} \begin{tabular}{l r r r r r r r r} \toprule Property & \hspace{0.25cm}MAP & \hspace{0.5cm}GP & \hspace{0.1cm}DropR & \hspace{0.1cm}DropA & \hspace{0.15cm}SWAG & \hspace{0.2cm}SGLD & \hspace{0.2cm}BBP & \hspace{0.2cm}DUN\\ \midrule mu & 45.64 & 45.43 & 49.10 & 52.17 & 45.47 & 45.41 & \textbf{45.13} & 46.35 \\ alpha & 6.70 & 6.68 & 8.20 & 9.22 & 6.49 & 6.56 & \textbf{6.39} & 6.90 \\ HOMO & 26.69 & 27.24 & 28.59 & 32.42 & 26.37 & \textbf{26.08} & 26.25 & 26.27 \\ LUMO & 9.52 & 9.85 & 10.42 & 11.68 & 9.40 & 9.28 & \textbf{9.25} & 9.51 \\ gap & 13.37 & 13.75 & 14.47 & 16.56 & 13.12 & \textbf{12.97} & 13.04 & 13.17 \\ R2 & 13.52 & 13.51 & 15.50 & 16.55 & 13.45 & 13.41 & \textbf{13.34} & 13.99 \\ ZPVE & 1.57 & \textbf{1.38} & 3.27 & 3.49 & 1.56 & 1.61 & 1.59 & 1.70 \\ Cv & 5.76 & 5.78 & 7.61 & 8.36 & 5.74 & 5.83 & \textbf{5.70} & 5.99 \\ U0 & 1.57 & \textbf{1.30} & 2.71 & 2.76 & 1.86 & 1.95 & 1.51 & 1.92 \\ U & 1.57 & \textbf{1.30} & 2.71 & 2.76 & 1.81 & 1.95 & 1.51 & 1.93 \\ H & 1.57 & \textbf{1.30} & 2.71 & 2.76 & 1.83 & 1.95 & 1.51 & 1.93 \\ G & 1.57 & \textbf{1.30} & 2.71 & 2.76 & 1.83 & 1.94 & 1.51 & 1.92 \\ \textit{All} & 10.76 & 10.73 & 12.33 & 13.46 & 10.74 & 10.75 & \textbf{10.56} & 10.96 \\ \midrule Mean rank & 4.00 & 3.17 & 7.00 & 8.00 & 3.25 & 3.75 & \textbf{1.75} & 5.08 \\ \bottomrule \end{tabular} \end{table} \section{The Depth Uncertainty Network} \label{DUN} Here we consider capturing uncertainty in model weights \textit{and} the number of message passing iterations. For simplicity we refer to the latter parameter as `depth'. There is motivation to acknowledge and capture uncertainty in the MPNN depth parameter. Different depths allow hidden states to represent different sized molecular substructures. Incorporating different sized spheres of representation at test time may enhance predictive accuracy. \subsection{Depth uncertainty in an FFN} \cite{antoran2020depth} perform inference over the depth of an FFN. Different depths correspond to subnetworks which share weights. Exploiting the sequential structure of FFNs, \citeauthor{antoran2020depth} evaluate a training objective and make predictions with a single forward pass. \citeauthor{antoran2020depth} define a categorical prior over network depth, $p_{\beta}(d)=\text{Cat}(d|\{ \beta_{i} \}_{i=0}^{D})$. They parameterise the likelihood for each depth using the corresponding subnetwork's output: $p(\mathbf{y}|\mathbf{x},d=i,\omega)=p(\mathbf{y}|f_{D+1}(\mathbf{a}_{i},\omega))$. Here, $f_{D+1}(\cdot)$ is an output layer and $\mathbf{a}_{i}$ the activation at depth $i \in [0,D]$ given input $\mathbf{x}$ and weight configuration $\omega$. For a given weight configuration, the likelihood for each depth and consequently the model's marginal log likelihood (MLL) can be computed from a single forward pass. The MLL is computed as \begin{equation*} \log p(\mathcal{D},\omega) = \log \sum_{i=0}^{D} \Bigg( p_{\beta}(d=i) \cdot \prod_{n=1}^{N} p(\mathbf{y}^{(n)}|\mathbf{x}^{(n)},d=i,\omega) \Bigg). \end{equation*} The posterior over depth is a tractable categorical distribution which tells us how well each subnetwork explains the data given some set of weights $\omega$: \begin{equation*} p(d|\mathcal{D},\omega) = \frac{p(\mathcal{D}|d,\omega) p_{\beta}(d)}{p(\mathcal{D},\omega)}. 
\end{equation*} \citeauthor{antoran2020depth} try learning weights by maximising the MLL directly using backpropagation and the \textit{log-sum-exp} trick, but find the posterior collapses to a delta function over an arbitrary depth. This is explained by the gradients of each subnetwork being weighted by that subnetwork's posterior mass, leading to local optima where all but one subnetwork's gradients vanish. The solution is to separate the optimisation of network weights from the posterior distribution as in the expectation maximisation (EM) algorithm for latent variable models. \citeauthor{antoran2020depth} achieve this by performing stochastic gradient variational inference. They introduce a variational posterior $q_{\alpha}(d)=\text{Cat}(d|\{ \alpha_{i} \}_{i=0}^{D})$. They learn variational parameters $\alpha$ and weights $\omega$ by maximising the following ELBO: \begin{equation*}\label{deep_elbo1} \log p(\mathcal{D},\omega) \geq \mathcal{L}(\alpha, \omega) = \sum_{n=1}^{N} \mathbb{E}_{q_{\alpha}(d)} \big[ \log p(\mathbf{y}^{(n)}|\mathbf{x}^{(n)},d,\omega) \big] - \text{KL}(q_{\alpha}(d)\hspace{1mm}||\hspace{1mm}p_{\beta}(d)). \end{equation*} Maximising this ELBO by taking gradients actually constitutes exact inference. The ELBO is convex w.r.t. $\alpha$ because the variational and true posteriors are categorical. The variational family contains the exact posterior, thus at the maxima $q_{\alpha}(d)=p(d|\mathcal{D},\omega)$. \subsection{Incorporating uncertainty in model weights} Our depth uncertainty network (DUN) combines the model above with Bayes by Backprop. We assume a fully factorised variational posterior over depth and neural network weights. We derive an ELBO by expanding the KL divergence: \begin{equation*} \begin{split} \text{KL} \big[ &q(d|\alpha) q(\omega|\theta) \hspace{1mm}||\hspace{1mm} p(d|\mathcal{D},\omega) p(\omega|\mathcal{D}) \big]\\ &= \text{KL} \big[ q(d|\alpha) \hspace{1mm}||\hspace{1mm} p(d) \big] + \text{KL} \big[ q(\omega|\theta) \hspace{1mm}||\hspace{1mm} p(\omega) \big] - \mathbb{E}_{q(d|\alpha)q(\omega|\theta)}[\log p(\mathcal{D}|d,\omega)] + \log p(\mathcal{D})\\ &= -\mathcal{L}(\alpha, \theta) + \log p(\mathcal{D}). \end{split} \end{equation*} $\mathcal{L}(\alpha, \theta)$ here is the ELBO. Due to the non-negativity of the KL divergence, $\mathcal{L}(\alpha, \theta) \leq \log p(\mathcal{D})$. We can learn variational parameters $\alpha$ and $\theta$ by maximising the ELBO using backpropagation. The KL term involving categorical distributions and the expectation of the log likelihood w.r.t. the posterior over depth can be computed analytically with a single forward pass. Expectations w.r.t. the variational posterior $q(\omega | \theta)$ are estimated as in Bayes by Backprop, by sampling unbiased gradients. In practice we can use mini-batches, estimating the ELBO as follows: \begin{equation}\label{eq:dun_loss} \begin{split} \mathcal{L}(\omega, \alpha, \theta) \approx \frac{N}{B} \sum_{n=1}^{B} \sum_{i=0}^{D} \big( \log p(\mathbf{y}^{(n)}|&\mathbf{x}^{(n)}, d = i, \omega) \cdot \alpha_{i} \big)\\ &- \log \frac{q(\omega | \theta)}{p(\omega)} - \sum_{i=0}^{D} \big( \alpha_{i} \log \frac{\alpha_{i}}{\beta_{i}} \big). 
\end{split} \end{equation} At test time, predictions for new data are made by the following Bayesian model average: \begin{equation*}\label{eq:dun_bma} \begin{split} p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathcal{D}) &\approx \frac{1}{J} \sum_{j=1}^{J} \sum_{i=0}^{D} p(\mathbf{y}^{*}|\mathbf{x}^{*},d=i,\omega_{j}) q_{\alpha}(d=i),\\ \omega_{j} &\sim q(\omega|\theta). \end{split} \end{equation*} \section{Bayesian Modelling} \label{bayesian_modelling} \cite{wilson2020case} emphasises that the distinguishing property of a Bayesian approach is marginalisation rather than optimisation. A Bayesian approach forms a predictive distribution by marginalising over different parameter settings, each weighted by their posterior probability. In contrast, classical learning involves maximising a posterior. \subsection{Bayesian marginalisation} We consider the case of Bayesian parametric regression. Given inputs $X$ and outputs $Y$, we desire the parameters $\omega$ of a function $f^{\omega}(\cdot)$ that is likely to have generated our outputs. We place a prior $p(\omega)$ over the space of possible parameter settings, representing our \textit{a priori} belief about which parameters are likely to have generated the data. To transform the prior distribution in light of the observed data we define a likelihood distribution $p(y|x,\omega)$, the probabilistic model by which inputs generate outputs for parameter settings $\omega$. We look for the posterior distribution over the space of parameters by invoking Bayes' theorem: \begin{equation*} p(\omega|\mathcal{D}) = \frac{p(Y|X,\omega)p(\omega)}{p(Y|X)}. \end{equation*} A predictive distribution is obtained by marginalising over $\omega$: \begin{equation}\label{eq:predictive} p(y|x,\mathcal{D}) = \int p(y|x,\omega)p(\omega|\mathcal{D}) d \omega. \end{equation} Equation (\ref{eq:predictive}) is a Bayesian model average (BMA), representing model uncertainty. \subsection{Bayesian deep learning} Modern neural networks often contain millions of parameters. The posterior over these parameters is generally intractable. In Bayesian deep learning we deal with the problem of inference by making two layered approximations. Firstly, we approximate the Bayesian posterior. Methods differ with respect to posterior approximation. Secondly, we approximate the Bayesian integral (\ref{eq:predictive}) by Monte Carlo (MC) sampling. MC integration is common across methods. With $q(\omega|\mathcal{D})$ our approximate posterior, the MC BMA is: \begin{equation*} p(y|x,\mathcal{D}) \approx \frac{1}{J} \sum_{j=1}^{J} p(y|x,\omega_{j}), \hspace{3mm} \omega_{j} \sim q(\omega|\mathcal{D}). \end{equation*} Following MC integration, we have approximated the true posterior with a set of point masses, where their locations are given by samples from $q(\omega|\mathcal{D})$: \begin{equation*} p(\omega|\mathcal{D}) \approx \frac{1}{J} \sum_{j=1}^{J} \delta (\omega = \omega_{j}), \hspace{3mm} \omega_{j} \sim q(\omega|\mathcal{D}). \end{equation*} \section*{Appendices} The appendices are structured as follows: \begin{itemize} \item Appendix \ref{bayesian_modelling} reviews the main concepts underlying Bayesian modelling, for readers less familiar with the Bayesian framework. \item Appendix \ref{DUN} introduces our depth uncertainty network (DUN) which permits inference over both model weights and the number of message passing iterations. \item Appendix \ref{implementation} describes the implementation of methods, and explains key hyperparameter choices.
\item Appendix \ref{granular} contains granular predictive accuracy results (scaled MAE by task). \item Appendix \ref{reliability_appendix} contains a full set of reliability diagrams. Three diagram pairs correspond to (i) Gaussian likelihoods, (ii) post-hoc $t$-distribution likelihoods, and (iii) omission of modelled aleatoric noise. \end{itemize} \input{appendix/appendixA} \input{appendix/appendixB} \input{appendix/appendixC} \input{appendix/appendixD} \input{appendix/appendixE} \end{document}
{ "attr-fineweb-edu": 1.682617, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbjY4dbjiU5gJugnY
\section*{Supplementary Material} \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{figure}{0} \title{Substitutional Si impurities in monolayer hexagonal boron nitride\\---\\Supplementary Material} \author{Mohammad Reza Ahmadpour Monazam} \email{[email protected]} \affiliation{University of Vienna, Faculty of Physics, Boltzmanngasse 5, A-1090, Vienna, Austria} \author{Ursula Ludacka} \affiliation{University of Vienna, Faculty of Physics, Boltzmanngasse 5, A-1090, Vienna, Austria} \author{Hannu-Pekka Komsa} \affiliation{Department of Applied Physics, Aalto University, P.O. Box 11100, 00076 Aalto, Finland} \author{Jani Kotakoski} \affiliation{University of Vienna, Faculty of Physics, Boltzmanngasse 5, A-1090, Vienna, Austria} \date{\today} \maketitle \section*{Methods} The samples were commercially available single-layer h-BN grown via chemical vapor deposition on copper by Graphene Laboratories, Inc. They were directly transferred onto gold transmission electron microscopy grids with a perforated amorphous carbon membrane (QUANTIFOIL\textregistered) without the use of a polymer, which decreases the amount of contamination on the samples. The copper was etched in a bath of FeCl$_3$ overnight and the samples were cleaned with deionized water and isopropyl alcohol. Samples were baked in vacuum at 150$^\circ$C for at least eight hours before being inserted into the microscope. We acquired the experimental data using a Nion UltraSTEM 100 microscope~\cite{krivanek_electron_2008} with a cold field-emission electron gun operated at 60~keV. The near-ultrahigh vacuum conditions in the objective area (pressure below $10^{-9}$~mbar) around the sample ensure a minimal influence of chemical reactions~\cite{leuthner_scanning_2018} on the sample during observation. The beam convergence semiangle was 30~mrad and the medium-angle annular dark-field detector covered an angular range of 60-200~mrad. The typical beam current of the instrument is on the order of 30~pA. We used density functional theory as implemented in the Vienna ab initio simulation package (VASP)~\cite{Kresse1993}. Electron exchange and correlation were treated with the Perdew-Burke-Ernzerhof (PBE) functional \cite{Perdew1996}. The total energy of the system was calculated via the pseudopotential-momentum-space formalism using the projector-augmented-wave (PAW) method \cite{Kresse1999}. The Kohn-Sham wavefunctions are expanded over plane-wave basis sets with the kinetic energy cutoff set to 525~eV. A supercell of $8\times8\times1$ was employed to study the different defect states in the membrane, in order to minimize the lateral interaction of the defect with its periodic images. The interlayer vacuum space of 43.46 \text{\AA} was selected according to the ``special vacuum'' prescription proposed in Refs. \cite{komsa2014, Komsa2018}. The results were compared to those calculated with a supercell of $6\times6\times1$ and a vacuum size of 30.27 \text{\AA}. The locally optimized configurations and formation energies were in good agreement for the two different system sizes. The Brillouin-zone integration was done over a $\Gamma$-centered $5\times5\times1$ k-point mesh. The damped molecular dynamics method was used to optimize the ionic degrees of freedom until the residual forces were below 0.01 eV/$\text{\AA}$.
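For orientation, the following is a minimal sketch of how such a calculation could be set up through the Atomic Simulation Environment (ASE) interface to VASP. It is an illustrative assumption rather than the scripts actually used for this work (and it presumes a configured VASP installation): the in-plane lattice constant and the choice of substituted site are placeholders, while the functional, plane-wave cutoff, supercell, k-mesh, relaxation scheme and force criterion mirror the values quoted above.

\begin{verbatim}
# Illustrative ASE sketch of the reported setup: PBE, 525 eV cutoff,
# an 8x8x1 h-BN supercell with one substitutional Si (the Si_N defect),
# and a Gamma-centred 5x5x1 k-point mesh.
import numpy as np
from ase import Atoms
from ase.calculators.vasp import Vasp

a = 2.50        # approximate h-BN in-plane lattice constant (Angstrom)
vacuum = 43.46  # interlayer vacuum quoted in the text (Angstrom)

# Primitive h-BN cell: B and N on the two sublattices of a hexagonal lattice.
cell = [[a, 0, 0], [-a / 2, a * np.sqrt(3) / 2, 0], [0, 0, vacuum]]
hbn = Atoms('BN', scaled_positions=[(0, 0, 0.5), (1 / 3, 2 / 3, 0.5)],
            cell=cell, pbc=True)

supercell = hbn.repeat((8, 8, 1))
# Replace one nitrogen by silicon; which site is irrelevant by symmetry.
n_sites = [i for i, s in enumerate(supercell.get_chemical_symbols()) if s == 'N']
supercell[n_sites[0]].symbol = 'Si'

supercell.calc = Vasp(xc='PBE',        # PBE exchange-correlation
                      encut=525,       # kinetic energy cutoff (eV)
                      kpts=(5, 5, 1),  # k-point mesh ...
                      gamma=True,      # ... Gamma-centred
                      ispin=2,         # spin-polarised defect states
                      ibrion=3,        # damped molecular dynamics relaxation
                      nsw=300,         # maximum number of ionic steps
                      ediffg=-0.01)    # stop when forces < 0.01 eV/Angstrom
# Charged defect states would additionally require, e.g., setting NELECT.
energy = supercell.get_potential_energy()
\end{verbatim}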
Although it is known that the band gaps calculated using PBE underestimate the true band gap of semiconductors, we restricted our calculation to the level of PBE due to the agreement between PBE formation energies and those calculated with the HSE formalism \cite{Heyd2003,Berseneva2011}. Therefore one only needs to re-scale the electron chemical potential using the difference in the band gap obtained from the two methods. Due to the computational cost, we carried out only one HSE calculation, for bulk h-BN, to estimate the band gap. The size of the band gap in this case is 5.72 eV as compared to 4.48 eV calculated with PBE. The HSE calculation was performed using the HSE06 functional with a 0.25 fraction of exact exchange \cite{Aliaksandr2006}. For STEM image simulations, we used the QSTEM package~\cite{Koch2002}, where all parameters were set to correspond to our experimental setup. The energy barrier estimation is based on the nudged elastic band (NEB) method implemented in VASP~\cite{Henkelman2000}. A set of calculations with five images between the initial and final configurations was performed. The dynamical calculation was performed using DFT-based molecular dynamics with a Nos\'e thermostat; an increasing initial vertical velocity toward the h-BN plane was applied to the silicon atom until it passed through the membrane. The time step was set to 0.5 fs. \begin{figure}[h!] \includegraphics[width=.37\textwidth]{Si_N_Only_bands.pdf} \caption{\label{Bands-SiN} Electronic band structure of Si$_N^0$, Si$_N^{-1}$ and Si$_N^{+1}$. The Fermi level is set to zero. The first defect level in panel (a), around -0.18 eV in the spin up channel, actually consists of three levels very close in energy, and the level close to the Fermi level in the spin down channel is actually two levels. This enables charge states from -3 to +5. By adding an electron to the lowest empty band in the spin down channel, the system becomes spin-unpolarized (panel b). The defect level at 3.93 eV and the level at 3.59 eV in the spin up channel are close to the CBM and are expected to move higher upon adding an electron. In this case, the NFE bands are pushed further down so that no further defect levels remain. Likewise, in the +1 charge state, only one defect level very close to the Fermi level remains.} \end{figure} \begin{figure}[h!] \includegraphics[width=.37\textwidth]{Si_DV_Only_bands.pdf} \caption{\label{SiDV-BS} Electronic band structures of Si$_{BN}^0$, Si$_{BN}^{-1}$ and Si$_{BN}^{+1}$. The band structure for the neutral charge case (panel a) reveals four defect levels in the spin up and the spin down channels. Both occupied (empty) levels are in the spin up (spin down) channel. Therefore, the possible charge states extend from -2 to +2. However, from the band structure of the -1 charge state (panel b), it is clear that adding a single electron to the lowest defect level in the spin down channel pushes the second defect level above the CBM. So, only the -1 state should be possible.} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.37\textwidth]{FE_SiB_Brich_8x8x1.pdf} \caption{The formation energy of Si in boron substitution in a B-rich environment.} \label{fig:FE-SiBB} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.37\textwidth]{FE_SiN_Nrich_8x8x1.pdf} \caption{The formation energy of Si in nitrogen substitution in an N-rich environment.} \label{fig:FE-SiNN} \end{figure} \end{document}
{ "attr-fineweb-edu": 1.241211, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbkvxaJiQnsTQxkp-
\section{Introduction} \label{sec:intro} Nonperturbative QCD poses significant challenges. Primary amongst them is a need to chart the behaviour of QCD's running coupling and masses into the domain of infrared momenta. Contemporary theory is incapable of solving this problem alone but a collaboration with experiment holds promise for progress. This effort can benefit substantially by exposing the structure of nucleon excited states and measuring the associated transition form factors at large momentum transfer~\cite{Aznauryan:2012ba}. Large momenta are needed in order to pierce the meson-cloud that, often to a significant extent, screens the dressed-quark core of all baryons~\cite{Roberts:2011rr, Kamano:2013iva}; and it is via the $Q^2$ evolution of form factors that one gains access to the running of QCD's coupling and masses from the infrared into the ultraviolet~\cite{Cloet:2013gva, Chang:2013nia}. It is within the context just described that we have performed a simultaneous treatment of elastic and transition form factors involving the Nucleon, Delta and Roper baryons in Refs.~\cite{Segovia:2013rca, Segovia:2013uga, Segovia:2014aza, Segovia:2015ufa, Segovia:2015hra}. In order to address the issue of charting the behaviour of the running coupling and masses in the strong interaction sector of the Standard Model, we use a widely-accepted leading-order (rainbow-ladder) truncation of QCD's Dyson-Schwinger equations~\cite{Chang:2011vu, Bashir:2012fs, Cloet:2013jya} and compare results between a QCD-based framework and a confining, symmetry-preserving treatment of a vector$\,\otimes\,$vector contact interaction. A unified QCD-based description of elastic and transition form factors involving the nucleon and its resonances has acquired additional significance owing to substantial progress in the extraction of transition electrocouplings, $g_{{\rm v}NN^\ast}$, from meson electroproduction data, obtained primarily with the CLAS detector at the Thomas Jefferson National Accelerator Facility (JLab). The electrocouplings of all low-lying $N^\ast$ states with mass less than $1.6\,{\rm GeV}$ have been determined via independent analyses of $\pi^+ n$, $\pi^0p$ and $\pi^+ \pi^- p$ exclusive channels~\cite{Agashe:2014kda,Mokeev:2012vsa}; and preliminary results for the $g_{{\rm v}NN^\ast}$ electrocouplings of most high-lying $N^\ast$ states with masses below $1.8\,{\rm GeV}$ have also been obtained from CLAS meson electroproduction data~\cite{Aznauryan:2012ba,Mokeev:2013kka}. \begin{figure}[!t] \begin{center} \hspace*{0.50cm} \includegraphics[clip,width=0.40\textwidth,height=0.18\textheight] {figNN_Faddeev.eps} \hspace*{1.00cm} \includegraphics[clip,width=0.40\textwidth,height=0.20\textheight] {figNN_Nucleon.eps} \caption{\label{fig:Faddeev} {\it Left panel:} Poincar\'e covariant Faddeev equation. $\Psi$ is the Faddeev amplitude for a baryon of total momentum $P= p_q + p_d$, where $p_{q,d}$ are, respectively, the momenta of the quark and diquark within the bound-state. The shaded area demarcates the Faddeev equation kernel: {\it single line}, dressed-quark propagator; $\Gamma$, diquark correlation amplitude; and {\it double line}, diquark propagator. {\it Right panel:} Dominant piece in the nucleon's eight-component Poincar\'e-covariant Faddeev amplitude: $S_1(|p|,\cos\theta)$. In the nucleon rest frame, this term describes that piece of the quark--scalar-diquark relative momentum correlation which possesses zero intrinsic quark-diquark orbital angular momentum, i.e.
$L=0$ before the propagator lines are reattached to form the Faddeev wave function. Referring to Fig.~\ref{fig:Faddeev}, $p= P/3-p_q$ and $\cos\theta = p\cdot P/\sqrt{p^2 P^2}$. The amplitude is normalised such that its $U_0$ Chebyshev moment is unity at $|p|=0$. } \vspace*{-0.50cm} \end{center} \end{figure} \vspace*{-0.50cm} \section{Baryon structure} \label{sec:Baryons} Dynamical chiral symmetry breaking (DCSB) is a theoretically-established feature of QCD and the most important mass generating mechanism for visible matter in the Universe, being responsible for approximately $98\%$ of the proton's mass. A fundamental expression of DCSB is the behaviour of the quark mass-function, $M(p)$, which is a basic element in the dressed-quark propagator: \begin{equation} S(p) = 1/[i\gamma \cdot p A(p^2) + B(p^2)] = Z(p^2 )/[i\gamma \cdot p + M(p^2)], \end{equation} and may be obtained as a solution to QCD's most basic fermion gap equation, i.e. the Dyson-Schwinger equation (DSE) for the dressed-quark propagator~\cite{Cloet:2013jya}. The nontrivial character of the mass function arises primarily because a dense cloud of gluons comes to clothe a low-momentum quark. It explains how an almost-massless parton-like quark at high energies transforms, at low energies, into a constituent-like quark with an effective mass of around $350\,{\rm MeV}$. DCSB ensures the existence of nearly-massless pseudo-Goldstone modes (pions). Another equally important consequence of DCSB is less well known. Namely, any interaction capable of creating pseudo-Goldstone modes as bound-states of a light dressed-quark and -antiquark, and reproducing the measured value of their leptonic decay constants, will necessarily also generate strong colour-antitriplet correlations between any two dressed quarks contained within a baryon. Although a rigorous proof within QCD cannot be claimed, this assertion is based upon an accumulated body of evidence, gathered in two decades of studying two- and three-body bound-state problems in hadron physics. No realistic counter examples are known; and the existence of such diquark correlations is also supported by simulations of lattice QCD. The existence of diquark correlations considerably simplifies analyses of the three valence-quark scattering problem and hence baryon bound states because it reduces that task to solving a Poincar\'e covariant Faddeev equation depicted in the left panel of Fig.~\ref{fig:Faddeev}. Two main contributions appear in the binding energy: i) the formation of tight diquark correlations and ii) the quark exchange depicted in the shaded area of the left panel of Fig.~\ref{fig:Faddeev}\footnote{Whilst an explicit three-body term might affect fine details of baryon structure, the dominant effect of non-Abelian multi-gluon vertices is expressed in the formation of diquark correlations~\cite{Eichmann:2009qa}.}. This exchange ensures that diquark correlations within the baryon are fully dynamical: no quark holds a special place because each one participates in all diquarks to the fullest extent allowed by its quantum numbers. Attending to the quantum numbers of the nucleon and Roper, scalar-isoscalar and pseudovector-isotriplet diquark correlations are dominant. For the $\Delta$-baryon, only the pseudovector-isotriplet ones are present. 
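The Faddeev amplitudes referred to above and in Fig.~\ref{fig:Faddeev} are functions of the magnitude $|p|$ of the quark-diquark relative momentum and of the angle variable $z=\cos\theta$, and are conventionally analysed through their Chebyshev moments (cf.\ the $U_0$ normalisation in the caption of Fig.~\ref{fig:Faddeev}). Purely as an illustration of that projection, the following sketch computes such moments by Gauss-Chebyshev quadrature; the toy amplitude is an assumption of ours, standing in for the numerical Faddeev solution, which we do not reproduce here.

\begin{verbatim}
# Project an amplitude f(|p|, z = cos(theta)) onto Chebyshev moments of the
# second kind, f_n(|p|) = (2/pi) * int_{-1}^{1} dz sqrt(1-z^2) U_n(z) f(|p|,z),
# using Gauss-Chebyshev quadrature. The amplitude below is a toy stand-in.
import numpy as np
from scipy.special import eval_chebyu

def chebyshev_moment(f, p, n, m=64):
    k = np.arange(1, m + 1)
    z = np.cos(k * np.pi / (m + 1))                          # nodes
    w = (np.pi / (m + 1)) * np.sin(k * np.pi / (m + 1))**2   # weights
    return (2.0 / np.pi) * np.sum(w * eval_chebyu(n, z) * f(p, z))

# Toy amplitude, peaked in the forward direction like the right panel above.
S1 = lambda p, z: np.exp(-p) * (1.0 + 0.3 * z)

p_grid = np.linspace(0.0, 2.0, 50)
U0 = np.array([chebyshev_moment(S1, p, 0) for p in p_grid])
U0 /= U0[0]   # normalise so the zeroth moment is unity at |p| = 0
\end{verbatim}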
The quark$+$diquark structure of the nucleon is elucidated in the right panel of Fig.~\ref{fig:Faddeev}, which depicts the leading component of its Faddeev amplitude: with the notation of Ref.~\cite{Segovia:2014aza}, $S_1(|p|,\cos\theta)$, computed using the Faddeev kernel described therein. This function describes a piece of the quark$+$scalar-diquark relative momentum correlation. Notably, in this solution of a realistic Faddeev equation there is strong variation with respect to both arguments. Support is concentrated in the forward direction, $\cos\theta >0$, so that alignment of $p$ and $P$ is favoured; and the amplitude peaks at $(|p|\simeq M_N/6,\cos\theta=1)$, whereat $p_q \sim p_d \sim P/2$ and hence the natural relative momentum is zero. In the antiparallel direction, $\cos\theta<0$, support is concentrated at $|p|=0$, i.e. $p_q \sim P/3$, $p_d \sim 2P/3$. \vspace*{-0.50cm} \section{The \mbox{\boldmath $\gamma^\ast N \to Nucleon$} Transition} \label{sec:Nucleon} The strong diquark correlations must be evident in many physical observables. We focus our attention on the flavour separated versions of the Dirac and Pauli form factors of the nucleon. The upper panels of Figure~\ref{fig:F1F2fla1} display the proton's flavour separated Dirac and Pauli form factors. The salient features of the data are: the $d$-quark contribution to $F_1^p$ is far smaller than the $u$-quark contribution; $F_2^d/\kappa_d>F_2^u/\kappa_u$ on $x<2$ but this ordering is reversed on $x>2$; and in both cases the $d$-quark contribution falls dramatically on $x>3$ whereas the $u$-quark contribution remains roughly constant. Our calculations are in semi-quantitative agreement with the empirical data. It is natural to seek an explanation for the pattern of behaviour in the upper panels of Fig.~\ref{fig:F1F2fla1}. We have mentioned that the proton contains scalar and pseudovector diquark correlations. The dominant piece of its Faddeev wave function is $u[ud]$; namely, a $u$-quark in tandem with a $[ud]$ scalar correlation, which produces $62\%$ of the proton's normalisation. If this were the sole component, then photon--$d$-quark interactions within the proton would receive a $1/x$ suppression on $x>1$, because the $d$-quark is sequestered in a soft correlation, whereas a spectator $u$-quark is always available to participate in a hard interaction. At large $x=Q^2/M_N^2$, therefore, scalar diquark dominance leads one to expect $F^d \sim F^u/x$. Available data are consistent with this prediction but measurements at $x>4$ are necessary for confirmation. Consider now the ratio of proton electric and magnetic form factors, $R_{EM}(Q^2) = \mu_p G_E(Q^2)/G_M(Q^2)$, $\mu_p=G_M(0)$. A clear conclusion from the lower-left panel of Fig.~\ref{fig:F1F2fla1} is that pseudovector diquark correlations have little influence on the momentum dependence of $R_{EM}(Q^2)$. Their contribution is indicated by the dotted (blue) curve, which was obtained by setting the scalar diquark component of the proton's Faddeev amplitude to zero and renormalising the result to unity at $Q^2=0$. As apparent from the dot-dashed (red) curve, the evolution of $R_{EM}(Q^2)$ with $Q^2$ is primarily determined by the proton's scalar diquark component. As we have explained above, in this component, the valence $d$-quark is sequestered inside the soft scalar diquark correlation so that the only objects within the nucleon which can participate in a hard scattering event are the valence $u$-quarks.
The scattering from the proton's valence $u$-quarks is responsible for the momentum dependence of $R_{EM}(Q^2)$. However, the dashed (green) curve in the lower-left panel of Fig.~\ref{fig:F1F2fla1} reveals something more, i.e. components of the nucleon associated with quark-diquark orbital angular momentum $L\geq1$ in the nucleon rest frame are critical in explaining the data. Notably, the presence of such components is an inescapable consequence of the self-consistent solution of a realistic Poincar\'e-covariant Faddeev equation for the nucleon. It is natural now to consider the proton ratio: $R_{21}(x) = x F_2(x)/F_1(x)$, $x=Q^2/M_N^2$, drawn in the lower-right panel of Fig.~\ref{fig:F1F2fla1}. As with $R_{EM}$, the momentum dependence of $R_{21}(x)$ is principally determined by the scalar diquark component of the proton. Moreover, the rest-frame $L\geq1$ terms are again seen to be critical in explaining the data: the behaviour of the dashed (green) curve highlights the impact of omitting these components. \begin{figure}[!t] \begin{center} \begin{tabular}{ll} \includegraphics[clip,width=0.45\textwidth]{figNN_F1.eps} & \includegraphics[clip,width=0.45\textwidth]{figNN_F2.eps} \\ \includegraphics[clip,width=0.46\textwidth]{figNN_GEpGMp.eps} & \includegraphics[clip,width=0.45\textwidth]{figNN_F2pF1p.eps} \end{tabular} \caption{\label{fig:F1F2fla1} {\it Upper-Left panel:} Flavour separation of the proton's Dirac form factor as a function of $x=Q^2/M_N^2$. The results have been obtained using a framework built upon a Faddeev equation kernel and interaction vertices that possess QCD-like momentum dependence. The solid curve is the $u$-quark contribution, and the dashed curve is the $d$-quark contribution. Experimental data taken from Ref.~\protect\cite{Cates:2011pz} and references therein: circles -- $u$-quark; and squares -- $d$-quark. {\it Upper-Right panel:} Same for the Pauli form factor. {\it Lower-Left panel:} Computed ratio of proton electric and magnetic form factors. Curves: solid (black) -- full result, determined from the complete proton Faddeev wave function and current; dot-dashed (red) -- momentum-dependence of the scalar-diquark contribution; dashed (green) -- momentum-dependence produced by that piece of the scalar diquark contribution to the proton's Faddeev wave function which is purely $S$-wave in the rest-frame; dotted (blue) -- momentum-dependence of the pseudovector diquark contribution. All partial contributions have been renormalised to produce unity at $Q^2=0$. Data: circles (blue)~\protect\cite{Gayou:2001qt}; squares (green)~\protect\cite{Punjabi:2005wq}; asterisks (brown)~\protect\cite{Puckett:2010ac}; and diamonds (purple)~\protect\cite{Puckett:2011xg}. {\it Lower-Right panel:} Proton ratio $R_{21}(x) = x F_2(x)/F_1(x)$, $x=Q^2/M_N^2$. The legend for the curves is the same as in the Lower-Left panel. Experimental data taken from Ref.~\protect\cite{Cates:2011pz}.
} \vspace*{-0.50cm} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \begin{tabular}{ll} \includegraphics[clip,height=0.20\textheight,width=0.43\textwidth] {figND_GM-Jones.eps} & \includegraphics[clip,height=0.2025\textheight,width=0.45\textwidth] {figND_GM-Ash.eps} \\ \hspace*{-0.50cm} \includegraphics[clip,height=0.20\textheight,width=0.455\textwidth] {figND_RSM.eps} & \includegraphics[clip,height=0.20\textheight,width=0.45\textwidth] {figND_REM.eps} \end{tabular} \caption{\label{fig:NucDel} \emph{Upper-left panel} -- $G_{M,J-S}^{\ast}$ result obtained with QCD-based interaction (solid, black) and with contact-interaction (CI) (dotted, blue); The green dot-dashed curve is the dressed-quark core contribution inferred using SL-model~\protect\cite{JuliaDiaz:2006xt}. \emph{Upper-right panel} -- $G_{M,Ash}^{\ast}$ result obtained with QCD-based interaction (solid, black) and with CI (dotted, blue). \emph{Lower-left panel} -- $R_{SM}$ prediction of QCD-based kernel including dressed-quark anomalous magnetic moment (DqAMM) (black, solid), nonincluding DqAMM (black, dashed), and CI result (dotted, blue). \emph{Lower-right panel} -- $R_{EM}$ prediction obtained with QCD-kindred framework (solid, black); same input but without DqAMM (dashed, black); these results renormalised (by a factor of $1.34$) to agree with experiment at $x=0$ (dot-dashed, red - zero at $x\approx 14$; and dot-dash-dashed, red, zero at $x\approx 6$); and CI result (dotted, blue). The data in the panels are from references that can be found in~\protect\cite{Segovia:2014aza}. } \vspace*{-0.5cm} \end{center} \end{figure} \vspace*{-0.50cm} \section{The \mbox{\boldmath $\gamma^\ast N \to Delta$} Transition} \label{sec:FFnucdel} The electromagnetic $\gamma^{\ast}N\to \Delta$ transition is described by three Poincar\'e-invariant form factors~\cite{Jones:1972ky}: magnetic-dipole, $G_{M}^{\ast}$, electric quadrupole, $G_{E}^{\ast}$, and Coulomb (longitudinal) quadrupole, $G_{C}^{\ast}$; that can be extracted in the Dyson-Schwinger approach by a sensible set of projection operators~\cite{Eichmann:2011aa}. The following ratios \begin{equation} R_{\rm EM} = -\frac{G_E^{\ast}}{G_M^{\ast}}, \qquad R_{\rm SM} = - \frac{|\vec{Q}|}{2 m_\Delta} \frac{G_C^{\ast}}{G_M^{\ast}}\,, \label{eq:REMSM} \end{equation} are often considered because they can be read as measures of the deformation of the hadrons involved in the reaction and how such deformation influences the structure of the transition current. In considering the behaviour of the $\gamma^\ast N \to \Delta$ transition form factors, it is useful to begin by recapitulating upon a few facts. Note then that in analyses of baryon electromagnetic properties, using a quark model framework which implements a current that transforms according to the adjoint representation of spin-flavour $SU(6)$, one finds simple relations between magnetic-transition matrix elements~\cite{Beg:1964nm,Buchmann:2004ia}: \begin{equation} \label{eqBeg} \langle p | \mu | \Delta^+\rangle = -\langle n | \mu | \Delta^0\rangle\,,\quad \langle p | \mu | \Delta^+\rangle = - \surd 2 \langle n | \mu | n \rangle\,; \end{equation} i.e., the magnetic components of the $\gamma^\ast p \to \Delta^+$ and $\gamma^\ast n \to \Delta^0$ are equal in magnitude and, moreover, simply proportional to the neutron's magnetic form factor. Furthermore, both the nucleon and $\Delta$ are $S$-wave states (neither is deformed) and hence $G_{E}^{\ast} \equiv 0 \equiv G_{C}^{\ast}$. 
The second entry in Eq.~\eqref{eqBeg} is consistent with perturbative QCD (pQCD)~\cite{Carlson:1985mm} in the following sense: both suggest that $G_{M}^{\ast p}(Q^2)$ should decay with $Q^2$ at the same rate as the neutron's magnetic form factor, which is dipole-like in QCD. It is often suggested that this is not the case empirically~\cite{Aznauryan:2011ub, Aznauryan:2011qj}. However, as argued elsewhere~\cite{Segovia:2013rca, Segovia:2013uga}, such claims arise from a confusion between the form factors defined in the Ash~\cite{Ash1967165} and Jones-Scadron~\cite{Jones:1972ky} conventions. In addition, helicity conservation arguments within the context of pQCD enable one to make~\cite{Carlson:1985mm} the following predictions for the ratios in Eq.\,\eqref{eq:REMSM}: \begin{equation} \label{eqREMSM} R_{EM} \stackrel{Q^2\to\infty}{=} 1 \,,\quad R_{SM} \stackrel{Q^2\to\infty}{=} \,\mbox{\rm constant}\,. \end{equation} These predictions are in marked disagreement with the outcomes produced by $SU(6)$-based quark models: $R_{EM} \equiv 0 \equiv R_{SM}$. More importantly, they are inconsistent with available data~\cite{Aznauryan:2011ub, Aznauryan:2011qj}. The upper-left panel of Fig.~\ref{fig:NucDel} displays the magnetic transition form factor in the Jones-Scadron convention. Our prediction obtained with a QCD-based kernel agrees with the data on $x\gtrsim 0.4$, and a similar conclusion can be inferred from the contact interaction result. On the other hand, both curves disagree markedly with the data at infrared momenta. This is explained by the similarity between these predictions and the bare result determined using the Sato-Lee (SL) dynamical meson-exchange model~\cite{JuliaDiaz:2006xt}. The SL result supports a view that the discrepancy owes to omission of meson-cloud effects in the DSEs' computations. An exploratory study of the effect of pion-cloud contributions to the mass of the nucleon and the $\Delta$-baryon has been performed within a DSEs' framework in Ref.~\cite{Sanchis-Alepuz:2014wea}. Presentations of the experimental data associated with the magnetic transition form factor typically use the Ash convention. This comparison is depicted in the upper-right panel of Fig.~\ref{fig:NucDel}. One can see that the difference between form factors obtained with the QCD-kindred and CI frameworks increases with the momentum transfer. Moreover, the normalized QCD-kindred curve is in fair agreement with the data, indicating that the Ash form factor falls unexpectedly rapidly mainly for two reasons. First: meson-cloud effects provide up to $35\%$ of the form factor for $x \lesssim 2$; these contributions are very soft; and hence they disappear quickly. Second: an additional kinematic factor $\sim 1/\sqrt{Q^2}$ appears between the Ash and Jones-Scadron conventions and provides material damping for $x\gtrsim 2$. Our predictions for the ratios in Eq.~(\ref{eq:REMSM}) are depicted in the lower panels of Fig.~\ref{fig:NucDel}. The lower-left panel displays the Coulomb quadrupole ratio. Both the prediction obtained with QCD-like propagators and vertices and the contact-interaction result are broadly consistent with available data. This shows that even a contact-interaction can produce correlations between dressed-quarks within Faddeev wave-functions and related features in the current that are comparable in size with those observed empirically. Moreover, suppressing the dressed-quark anomalous magnetic moment (DqAMM) in the transition current has little impact.
These remarks highlight that $R_{SM}$ is not particularly sensitive to details of the Faddeev kernel and transition current. This is certainly not the case with $R_{\rm EM}$. The differences between the curves displayed in the lower-right panel in Fig.~\ref{fig:NucDel} show that this ratio is a particularly sensitive measure of diquark and orbital angular momentum correlations. The contact-interaction result is inconsistent with data, possessing a zero that appears at a rather small value of $x$. On the other hand, predictions obtained with QCD-like propagators and vertices can be viable. We have presented four variants, which differ primarily in the location of the zero that is a feature of this ratio in all cases we have considered. The inclusion of a DqAMM shifts the zero to a larger value of $x$. Given the uniformly small value of this ratio and its sensitivity to the DqAMM, we judge that meson-cloud effects must play a large role on the entire domain that is currently accessible to experiment. \begin{figure}[!t] \begin{center} \includegraphics[clip,height=0.18\textheight,width=0.40\textwidth] {figNR_NucWFU.eps} \hspace*{1.00cm} \includegraphics[clip,height=0.18\textheight,width=0.40\textwidth] {figNR_RopWFU.eps} \caption{\label{fig:NucRop_v1} \emph{Left panel}. Zeroth Chebyshev moment of all $S$-wave components in the nucleon's Faddeev wave function. \emph{Right panel}. Kindred functions for the first excited state. Legend: $S_1$ is associated with the baryon's scalar diquark; the other two curves are associated with the axial-vector diquark; and the normalisation is chosen such that $S_1(0)=1$.} \vspace*{-0.50cm} \end{center} \end{figure} \vspace*{-0.50cm} \section{The \mbox{\boldmath $\gamma^\ast N \to Roper$} Transition} \label{sec:Roper} Jefferson Lab experiments~\cite{Aznauryan:2011qj, Aznauryan:2009mx, Dugger:2009pn, Mokeev:2012vsa} have yielded precise nucleon-Roper ($N\to R$) transition form factors and thereby exposed the first zero seen in any hadron form factor or transition amplitude. This transition has also attracted much theoretical attention, but Ref.~\cite{Segovia:2015hra} provides the first continuum treatment of this problem using the power of relativistic quantum field theory. That study begins with a computation of the mass and wave function of the proton and its first radial excitation. The masses are (in GeV): $M_{\rm nucleon\,(N)} = 1.18$ and $M_{\rm nucleon-excited\,(R)}=1.73$. These values correspond to the locations of the two lowest-magnitude $J^P=1/2^+$ poles in the three-quark scattering problem. The associated residues are the Faddeev wave functions, which depend upon $(p^2,p\cdot P)$, where $p$ is the quark-diquark relative momentum. Fig.~\ref{fig:NucRop_v1} depicts the zeroth Chebyshev moment of all $S$-wave components in that wave function. The appearance of a single zero in $S$-wave components of the Faddeev wave function associated with the first excited state in the three dressed-quark scattering problem indicates that this state is a radial excitation. The empirical values of the pole locations for the first two states in the nucleon channel are~\cite{Suzuki:2009nj}: $0.939\,{\rm GeV}$ and $1.36 - i \, 0.091\,{\rm GeV}$, respectively. At first glance, these values appear unrelated to those obtained within the DSEs framework.
However, deeper consideration reveals~\cite{Eichmann:2008ae, Eichmann:2008ef} that the kernel in the Faddeev equation omits all those resonant contributions which may be associated with the meson-baryon final-state interactions that are resummed in dynamical coupled channels models in order to transform a bare-baryon into the observed state~\cite{Suzuki:2009nj, Kamano:2013iva}. This Faddeev equation should therefore be understood as producing the dressed-quark core of the bound-state, not the completely-dressed and hence observable object. Crucial, therefore, is a comparison between the quark-core mass and the value determined for the mass of the meson-undressed bare-Roper in Ref.~\cite{Suzuki:2009nj}, which is $1.76\,{\rm GeV}$. \begin{figure}[!t] \begin{minipage}[t]{\textwidth} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[clip,width=0.85\linewidth]{figNR_F1s.eps}} \end{minipage} \begin{minipage}{0.49\textwidth} \centerline{\includegraphics[clip,width=0.85\linewidth]{figNR_F2s.eps}} \end{minipage} \end{minipage} \caption{\label{fig:NucRop_v2} \emph{Left} -- Dirac transition form factor, $F_{1}^{\ast}(x)$, $x=Q^2/m_N^2$. Solid (black) curve, QCD-kindred prediction; dot-dashed (red) curve, contact-interaction result; dotted (green) curve, inferred meson-cloud contribution; and dashed (blue) curve, anticipated complete result. \emph{Right} -- Pauli transition form factor, $F_{2}^{\ast}(x)$, with same legend. Data in both panels: circles (blue)~\cite{Aznauryan:2009mx}; triangle (gold)~\cite{Dugger:2009pn}; squares (purple)~\cite{Mokeev:2012vsa}; and star (green)~\cite{Agashe:2014kda}.} \vspace*{-0.50cm} \end{figure} The transition form factors are displayed in Fig.~\ref{fig:NucRop_v2}. The results obtained using QCD-derived propagators and vertices agree with the data on $x\gtrsim 2$. The contact-interaction result simply disagrees both quantitatively and qualitatively with the data. Therefore, experiment is evidently a sensitive tool with which to chart the nature of the quark-quark interaction and hence discriminate between competing theoretical hypotheses. The mismatch between the DSE predictions and data on $x\lesssim 2$ is due to meson-cloud contributions, which are expected to be important on this domain. An inferred form of that contribution is provided by the dotted (green) curves in Fig.~\ref{fig:NucRop_v2}. These curves have fallen to just 20\% of their maximum value by $x=2$ and vanish rapidly thereafter so that the DSE predictions alone remain as the explanation of the data. Importantly, the existence of a zero in $F_{2}^{\ast}$ is not influenced by meson-cloud effects, although its precise location is. \vspace*{-0.50cm} \section{Conclusions} \label{sec:conclusions} \vspace*{-0.20cm} We have presented a unified study of nucleon, Delta and Roper elastic and transition form factors, and compared predictions made using a framework built upon a Faddeev equation kernel and interaction vertices that possess QCD-like momentum dependence with results obtained using a symmetry-preserving treatment of a vector$\,\otimes\,$vector contact-interaction. The comparison emphasises that experiment is sensitive to the momentum dependence of the running coupling and masses in QCD and highlights that the key to describing hadron properties is a veracious expression of dynamical chiral symmetry breaking in the bound-state problem.
Amongst our results, the following are of particular interest: The scaling behaviour of the electromagnetic ratios $G_{E}^{p}/G_{M}^{p}$ and $F_{2}^{p}/F_{1}^{p}$ is due not only to higher quark orbital angular momentum components in the nucleon wave function but also to strong diquark correlations. In fact, the presence of strong diquark correlations within the nucleon is sufficient to understand empirical extractions of the flavour-separated versions of the Dirac and Pauli form factors. In connection with the $\gamma^{\ast}N\to \Delta$ transition, the momentum-dependence of the magnetic transition form factor, $G_M^\ast$, matches that of $G_M^n$ once the momentum transfer is high enough to pierce the meson-cloud; and the electric quadrupole ratio is a keen measure of diquark and orbital angular momentum correlations, the zero in which is obscured by meson-cloud effects on the domain currently accessible to experiment. Finally, the Roper resonance is, at heart, the nucleon's first radial excitation, consisting of a dressed-quark core augmented by a meson cloud that reduces its mass by approximately $20\%$. Our analysis shows that a meson-cloud obscures the dressed-quark core from long-wavelength probes, but that it is revealed to probes with $Q^2 \gtrsim 3 m_N^2$. \vspace*{-0.30cm} \begin{acknowledgements} The material described in this contribution is drawn from work completed in collaboration with numerous excellent people, to all of whom I am greatly indebted. I would also like to thank V. Mokeev, R.~Gothe, T.-S.\,H.~Lee and G. Eichmann for insightful comments; and to express my gratitude to the organisers of the ECT$^{\ast}$ Workshop in Trento {\it Nucleon Resonances: From Photoproduction to High Photon Virtualities}, whose support helped my participation. I acknowledge financial support from the Alexander von Humboldt Foundation. \end{acknowledgements} \vspace*{-0.80cm}
{ "attr-fineweb-edu": 1.584961, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUblg5qdmB62S3XJSz
\title{Nonparametric estimation of galaxy cluster's emissivity and point source detection in astrophysics with two lasso penalties} \date{\today} \author{Jairo Diaz Rodriguez, Dominique Eckert, Hatef Monajemi,\\ St\'ephane Paltani and Sylvain Sardy} \maketitle {\bf Abstract}: Astrophysicists are interested in recovering the 3D gas emissivity of a galaxy cluster from a 2D image taken by a telescope. A blurring phenomenon and the presence of point sources make this inverse problem even harder to solve. The current state-of-the-art technique proceeds in two steps: first identify the location of potential point sources, then mask these locations and deproject the data. We instead model the data as a Poisson generalized linear model (involving blurring, Abel and wavelet operators) regularized by two lasso penalties to induce a sparse wavelet representation and sparse point sources. The amount of sparsity is controlled by two quantile universal thresholds. As a result, our method outperforms the existing one. \newpage \section{Introduction} \subsection{Emissivity of astrophysical sources} Several types of astrophysical sources originate from the radiative processes occurring in an ``optically thin'' environment, that is, a situation in which a photon has a low probability of interacting with the surrounding material and can escape the source freely. Such a situation occurs when the mean density of material in the source is very low. Examples of such astronomical sources include galaxies (where the observed light is the sum of the light emitted by all stars), the coronae of the Sun and other convective stars, cocoons of expanding material after supernova explosions (\emph{supernova remnants}) and galaxy groups and clusters (which are filled with a hot ($10^7-10^8$ Kelvin) low-density plasma that constitutes the majority of the ordinary matter of large-scale structures in the Universe). If the source is optically thin, the electromagnetic radiation $I$ in a given direction is the integral of the intrinsic emissivity of the source over the source volume, \begin{equation} I = \frac{1}{4\pi D^2} \int_{V} \varepsilon\,dV, \label{eq:abelint} \end{equation} where the emissivity $\varepsilon$ is the energy emitted by the source in electromagnetic radiation and $D$ is the source distance. The three-dimensional distribution of the emissivity is of interest as it provides valuable information on the physical properties of the emitting material (e.g., density, temperature, metallicity). In the case of galaxy clusters, the emitting plasma is so hot that these structures radiate predominantly in X-rays \citep{sarazin88}. Current X-ray telescopes like \emph{XMM-Newton} and \emph{Chandra} are able to detect the emission from the plasma and make detailed maps of the distribution of hot gas in galaxy clusters, which are extremely useful to understand the formation and evolution of structures in the Universe \citep{kravtsov12}, study the overall matter content and the missing mass (``dark matter'') problem \citep{clowe06}, and constrain the cosmological parameters governing the evolution of the Universe as a whole \citep{allen11}. In most cases, X-ray images of galaxy clusters show round, azimuthally symmetric morphologies indicating that the geometry of these structures is nearly spherical. The observed emissivity decreases radially from the center of the source to its outermost border \citep{e12}.
Assuming spherical symmetry, \eqref{eq:abelint} can be written explicitly as a function of the projected distance $s$ to the cluster center, \begin{equation} I(s)\propto\int \varepsilon(r)\, dz \quad \mbox{ with } \quad r^2=s^2+z^2, \label{eq:proj} \end{equation} where $r$ is the three-dimensional distance to the cluster center, $I(s)$ is the observed azimuthally-averaged brightness profile, and the integral is performed along the line of sight $z$. While $\varepsilon(r)$ can in principle be evaluated directly from the observed emission by solving the integral \eqref{eq:proj}, in practice the problem is rendered complicated by the presence of noise in the original data, as for instance with the XMM-Newton telescope described below. Indeed, as for all inverse problems, the projection kernel smooths small-scale fluctuations, thus the inverse transformation has the opposite effect and the noise can be greatly amplified \citep[see][]{lucy74,lucy94}. This effect is particularly important in the low signal-to-noise regime. \subsection{The \emph{XMM-Newton} mission} The \emph{XMM-Newton} space telescope \citep{jansen01} is a cornerstone mission of the European Space Agency. It was put in orbit on December 10, 1999 by an Ariane 5 launcher and it remains to this day the largest X-ray telescope ever operated. The spacecraft is made of three co-aligned X-ray telescopes that observe the sky simultaneously. At the focal point of the three telescopes are located two instruments, the European Photon Imaging Camera (EPIC) and the Reflection Grating Spectrometer (RGS). The right panel of Figure~\ref{fig:cosmodata} shows an image of the galaxy cluster Abell 2142 recorded by the \emph{XMM-Newton} observatory \citep{tchernin16}. The data were acquired in 2012 (PI: Eckert) as part of the \emph{XMM-Newton} guest observer program, in which astronomers are invited to propose suitable targets to be observed by the spacecraft and provide a detailed scientific justification for their program. \begin{figure*}[ht] \centering $ \begin{array}{cc} \includegraphics[width=2.10in, angle=90]{sketch} & \includegraphics[width=2.28in]{data512PS} \end{array} $ \caption{Left: schematic view of a telescope, the image taken by it, a galaxy cluster and two point sources. Right: real image taken by the XMM-Newton telescope. \label{fig:cosmodata}} \end{figure*} EPIC \citep{turner01} consists of three high-sensitivity cameras which cover a field of view of 30 arcmin diameter, roughly equivalent to the size of the full moon. The cameras are made of $600\times600$ pixels organized in 8 individual chips which record the time, energy and position of incoming X-ray photons, resulting in an image like the one on the right side of Figure~\ref{fig:cosmodata}. The sensitivity of the instrument is maximal for sources precisely aligned with the axis of the telescopes (the aim point) and gradually declines for sources located slightly offset from the optical axis. The angular resolution of the telescope is 6 arcsec at the aim point and it degrades to 15 arcsec at the edge of the field of view. Astrophysical sources with an apparent size smaller than the angular resolution of the instrument thus appear blurred with a typical size and shape that is known from the characteristics of the telescopes. Similarly, the degradation of the sensitivity of the instrument with off-axis angle has been extensively calibrated and follows a known pattern that needs to be taken into account to recover the true flux radiated by a source.
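These instrumental effects can be mimicked numerically. The following toy sketch (entirely illustrative: the emissivity profile, vignetting law, Gaussian stand-in for the point spread function and exposure level are placeholder choices of ours) projects a spherical emissivity profile along the line of sight as in \eqref{eq:proj}, degrades the result by vignetting and blurring, and finally draws photon counts:

\begin{verbatim}
# Toy forward simulation: project a King-type emissivity profile along the
# line of sight (cf. the projection integral above), apply vignetting and a
# Gaussian blur mimicking the PSF, then add photon-counting noise.
import numpy as np
from scipy.integrate import quad
from scipy.ndimage import gaussian_filter

def emissivity(r, rho=1.0, beta=1.2):
    return (1.0 + (r / rho) ** 2) ** (-beta)

def brightness(s):
    # I(s) proportional to the integral of eps(sqrt(s^2 + z^2)) over z.
    val, _ = quad(lambda z: emissivity(np.hypot(s, z)), 0.0, np.inf)
    return 2.0 * val

N = 128
yy, xx = np.indices((N, N))
s = np.hypot(xx - N / 2, yy - N / 2) / 8.0     # projected radius per pixel
s_grid = np.linspace(0.0, s.max(), 200)
image = np.interp(s, s_grid, [brightness(v) for v in s_grid])

vignetting = 1.0 - 0.5 * s / s.max()           # crude off-axis sensitivity loss
blurred = gaussian_filter(image * vignetting, sigma=2.0)   # PSF stand-in
counts = np.random.poisson(50.0 * blurred)     # photon-counting noise
\end{verbatim}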
Apparent on the image of Figure~\ref{fig:cosmodata} are bright spots called point sources. The vast majority of these sources are active galactic nuclei, which originate from material falling onto a supermassive black hole located at the center of a galaxy. Since they do not originate from the galaxy cluster under study, the estimation of emissivity should be robust to potential point sources. \subsection{State of the art ``onion peeling'' deprojection} \label{subsct:SA} Traditionally, the main approach used to solve \eqref{eq:proj} has been to invert the projection kernel directly \citep[e.g.][]{fabian81,kriss}. Within the region encompassed between projected radii $r_{i}$ and $r_{i+1}$ from the center of the image of Figure~\ref{fig:cosmodata}, the counts are averaged to give an estimate~$\hat I_i$ of the quantity of radiation received. This amounts to discretizing \eqref{eq:proj} such that the projection kernel reduces to an upper-triangular projection matrix~$V$, where the matrix element $V_{i,j}$ corresponds to the volume of the spherical shell $j$ projected along the line of sight of annulus $i$ \citep{kriss}. The averaged counts $\hat I_i$ are related to the intrinsic 3D emissivity in the spherical shell between $r_{i}$ and $r_{i+1}$ as \begin{equation} \hat I_i=\sum_{j=1}^n V_{i,j} \varepsilon_j + {\rm error} .\label{eq:iter} \end{equation} Since the projection matrix $V$ is upper triangular, the deprojected profile can be evaluated starting from the outermost shell (where projection effects are assumed to be negligible) and then solving \eqref{eq:iter} iteratively when proceeding inwards (hence the nickname of ``onion peeling''). This method has the advantage of being nonparametric in that it makes no assumption on the shape of the intrinsic profile. It suffers from severe drawbacks, however. As already discussed in the introduction, this method is very sensitive to measurement uncertainties, since small variations in the projected profile can be greatly magnified; therefore, the resulting profile is generally not smooth. Moreover, the propagation of statistical fluctuations can result in unphysical negative emissivities. This method also requires that the position of contaminating point sources be estimated in a first step, so as to mask the corresponding areas prior to applying the algorithm. To alleviate these issues, many variants of the direct deprojection technique exist, including a correction for edge effects \citep{mclaughlin99}, spectral information \citep{nulsen95,pizzolato03}, or emission-weighted volumes \citep{morandi07}. However, from the point of view of the mathematical treatment these procedures are similar. In summary, the current method is a two-step method (identify and mask the point sources, then estimate the emissivity) that does not model well the stochastic nature of the data and that propagates errors from the outskirts of the galaxy cluster (large radius) to the center of the cluster. \section{A nonparametric Poisson linear inverse model} The important stylized features of the astrophysical data described above can be summarized as follows: \begin{enumerate} \item Many bright spots are observed on the image. They are the so-called point sources, that is, sources with an angular size that is much smaller than the angular resolution of the telescope. Their location is unknown. \item Although point sources are expected to be much smaller than the size of a pixel, their apparent size is much larger.
This is due to the finite precision of the alignment of the telescope, which induces a blurring effect that has been well studied and can be considered as known. \item There are artifacts in the form of lines that are due to the poor sensitivity of the telescope at the connection between the various chips. \item Near its center, the image has a region of high intensity: it is the center of a galaxy cluster where the gas density is high. The emissivity decreases sharply towards the outskirts, implying that the gas density drops radially. The overall shape is nearly spherically symmetric, except for the point sources. \item Each pixel is a random count of X-rays during the exposure time. \end{enumerate} To account for these specificities, we propose the following model. Considering the telescope first, each image pixel indexed by $(x,y)$ is modeled as \begin{equation} \label{eq:Poissondata} Y_{x,y}\sim{\rm Poisson}(\mu_{x,y}) \quad {\rm for} \quad x=1,\ldots,N \quad {\rm and} \quad \ y=1,\ldots,N, \end{equation} where $\mu_{x,y}$ reflects the integral of the intrinsic emissivity of the cosmos. Without the presence of any cosmological background, the XMM telescope has its own electronic noise with small and known mean counts $e_{x,y}\geq 0$. In other words, without any cosmological object facing the telescope, we have $\mu_{x,y}=e_{x,y}$, which can be seen as a known offset. Considering now the cosmos, each pixel faces a region of the cosmos along a line going from zero (the detector) to infinity. Some lines go through the galaxy cluster, some go through a point source, others go through both. Calling $\epsilon(x,y,z)\geq 0$ the emissivity of the galaxy cluster along that line and $S_{x,y}\geq 0$ a potential point source, the integral of the cosmos emissivity along that line is \begin{equation} \label{eq:alongline} I_{x,y} = \int_0^\infty \epsilon(x,y,z) dz + S_{x,y} \quad {\rm for} \quad x=1,\ldots,N \quad {\rm and} \quad \ y=1,\ldots,N. \end{equation} Moreover, owing to the rarity of point sources (see the first stylized feature), $S$ is a sparse $N\times N$ matrix. The connection between $\mu_{x,y}$ and $I_{x,y}$ depends on the characteristics of the telescope. The blurring effect (second stylized feature) is known through the so-called point spread function of the telescope. Likewise, the sensitivity of the telescope (third stylized feature) is known. As a result, the Poisson intensity in~\eqref{eq:Poissondata} is modeled as \begin{equation} \label{eq:firstlinearmodel} \mu_{x,y}= e_{x,y} + (B(E \circ {\boldsymbol I}))_{x,y}, \end{equation} where $B$ is the known blurring operator, $E$ is the known $N \times N$ sensitivity matrix, and $\circ$ is the notation for the Hadamard product between two matrices. We pause here to make an important remark. The Poisson counts~\eqref{eq:Poissondata} are linked to the unknown parameters~\eqref{eq:alongline} through a linear model. This model belongs to the class of nonparametric generalized linear models \citep{NW72}, but as opposed to the classical approach, the link here must be the identity link. In other words, the canonical link is not appropriate to properly model the physics. The unknown objects are the gas emissivity $\epsilon(x,y,z)$ as well as the location and intensities of the point sources $S$. An assumption is needed to estimate the three-dimensional gas density function because the problem is unidentifiable in its current form.
Indeed, an infinite number of 3D-functions have the same 2D projection, that is, one cannot recover $\epsilon(x,y,z)$ from $\int \epsilon(x,y,z) dz$. The fourth stylized feature states that, to a good approximation, the galaxy cluster is spherical, that is, the emissivity is radial: $\epsilon(x,y,z)=\epsilon_R(r)$ with $r=\sqrt{x^2+y^2+z^2}$. Invariance by rotation simplifies the problem, since only a univariate function $\epsilon_R(r)$ of the distance $r$ to the center must be estimated. The map is moreover linear since the integral in~\eqref{eq:alongline} becomes $$ \int_0^\infty \epsilon(x,y,z) dz=(A \epsilon_R) (x,y), $$ where $A$ is the Abel transform. The final assumption we make is that $\epsilon_R$ has a sparse representation on basis functions $\phi_p$: \begin{equation}\label{eq:linearexpansion} \epsilon_R(r)=\alpha_0+\sum_{p=1}^P \alpha_p \phi_p(r). \end{equation} The choice of basis functions $\phi_p$ is based on prior knowledge. Cosmologists expect a decreasing function from the center of the galaxy cluster to its outskirts. So we use a generalization of the so-called \emph{King} functions \begin{equation} \phi_p(r)=(1+ (r/\rho)^2)^{-\beta}, \quad \rho \in \{\rho_1, \ldots, \rho_I\},\, \beta \in \{\beta_1, \ldots, \beta_J\} \end{equation} parametrized by $p=(\rho,\beta)$ \citep{2016A&A...592A..12E}. A grid of values $(\rho,\beta)$ leads to $P/2$ such functions. To allow more flexibility and to discover galaxy clusters with singularities, we also use $P/2$ orthonormal wavelets defined on equispaced radii. Here we choose $P$ of the order of $N$, more precisely $P=2^{\lfloor\log_2(N) \rfloor}$. We provide more details of our implementation in Appendix~\ref{app:waveletimplement}. Putting all components together leads to the following linear model for the Poisson parameters: \begin{equation}\label{eq:poismodel} \mu_{x,y}=e_{x,y}+ (B(E \circ (A (\alpha_0 {\bf 1} + \Phi {\boldsymbol \alpha})+{\bf s})))_{x,y}, \end{equation} where the unknown parameters are the intercept $\alpha_0$, the sparse $P$-vector ${\boldsymbol \alpha}$ of the linear expansion~\eqref{eq:linearexpansion}, and the sparse $N \times N$-matrix $S$ of potential point sources put in vector form ${\bf s}$. This is a linear inverse problem in the sense that the unknown quantities are indirectly observed through the linear operators. \section{Estimation with two sparsity constraints} Based on stylized feature five, the Poisson negative log-likelihood \begin{eqnarray} \label{eq:MLE} -l(\alpha_0,{\boldsymbol \alpha}, {\bf s}; {\bf y})&=&\sum_{(x,y)\in \{1,\ldots,N\}^2} \left( \mu_{x,y} - Y_{x,y} \log \mu_{x,y} \right) \end{eqnarray} is a natural measure of goodness-of-fit of the counts data to the linear model for $\mu_{x,y}$~\eqref{eq:poismodel}. This model is a generalized linear model (GLM) for Poisson noise with identity link. Note that the log-term in~\eqref{eq:MLE} prevents the estimated Poisson intensities from being negative. The number $1+P+N^2$ of parameters $(\alpha_0,{\boldsymbol \alpha}, {\bf s})$ exceeds the number of observations $N^2$, so that regularization is needed.
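To make the model concrete, the following sketch evaluates the intensity \eqref{eq:poismodel} and the negative log-likelihood \eqref{eq:MLE}. Here {\tt B}, {\tt E}, {\tt A}, and {\tt Phi} are stand-ins for the blurring, sensitivity, Abel, and dictionary operators, taken for illustration only as plain arrays acting on flattened images; all names are ours and not part of the released code.
\begin{verbatim}
import numpy as np

def intensity(alpha0, alpha, s, e, B, E, A, Phi):
    # mu = e + B (E o (A(alpha0 1 + Phi alpha) + s));
    # E acts entrywise (Hadamard product), B and A as matrices
    radial = alpha0 + Phi @ alpha    # emissivity on the radial grid
    image = A @ radial + s           # Abel projection plus point sources
    return e + B @ (E * image)

def neg_log_lik(y, mu):
    # Poisson negative log-likelihood with identity link;
    # finite only if mu > 0, which the offset e > 0 guarantees
    return np.sum(mu - y * np.log(mu))
\end{verbatim}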
Owing to the sparse representation of the univariate gas density on its basis functions and to the rarity of point sources, we regularize the likelihood by enforcing sparsity on the estimation of ${\boldsymbol \alpha}$ and ${\bf s}$ with two $\ell_1$ penalties \begin{equation} \label{eq:l1penalty} (\hat \alpha_0, \hat {\boldsymbol \alpha}, \hat {\bf s})_{\lambda_1, \lambda_2} = \arg \min_{\alpha_0,{\boldsymbol \alpha}, {\bf s}} -l(\alpha_0,{\boldsymbol \alpha}, {\bf s}; {\bf y}) + \lambda_1 \|{\boldsymbol \alpha}\|_1 + \lambda_2 \|{\bf s}\|_1 \end{equation} in the spirit of lasso \citep{Tibs:regr:1996,SAT01} and glmnet \citep{ParkHastie07}. We rely on FISTA \citep{beck2009} to solve the high-dimensional and non-differentiable optimization problem for given hyperparameters $(\lambda_1,\lambda_2)$. It has the advantage over glmnet of handling the identity link function and positivity constraints on the King coefficients, and it does not require building and storing a very large matrix. The selection of the regularization parameters $(\lambda_1,\lambda_2)$ is a key issue. Performing cross validation on a 2D-grid would be computationally intensive and would require segmenting the image into sub-images. Another approach is the universal threshold of \citet{Dono94b}. Derived for Gaussian regression, the universal threshold has the property of reproducing the true signal with high probability when the true signal is the constant function. This choice of $\lambda$ has remarkable near minimax properties when the function to estimate lives in Besov spaces \citep{Dono95asym}. The quantile universal threshold is the extension of the universal threshold to other noise distributions, models, and estimators \citep{Giacoetal16}. We now derive it for~\eqref{eq:l1penalty}. First we derive the zero-thresholding function for~\eqref{eq:l1penalty}. The proof is in Appendix~\ref{app:proof}. \bigskip \noindent \begin{property} \label{prop:ztf} Given an image ${\bf y}$, the smallest $\lambda_1$ and $\lambda_2$ that jointly set $(\hat {\boldsymbol \alpha}, \hat {\bf s})_{\lambda_1,\lambda_2}$ in~\eqref{eq:l1penalty} to zero are given by the zero-thresholding function \begin{equation} \lambda({\bf y})=(\lambda_1({\bf y}), \lambda_2({\bf y})):= \left \{ \begin{array}{ll} \left ( \|X_1^{\rm T} \left ( \frac{ {\bf y} - \hat {\boldsymbol \mu}_\lambda(\hat \alpha_0) } {\hat {\boldsymbol \mu}_\lambda(\hat \alpha_0) } \right ) \|_\infty , \|X_2^{\rm T} \left ( \frac{ {\bf y} - \hat {\boldsymbol \mu}_\lambda(\hat \alpha_0) } {\hat {\boldsymbol \mu}_\lambda(\hat \alpha_0) } \right ) \|_\infty \right )& {\rm if} \ {\bf y} \in {\cal D} \\ (+ \infty, + \infty) & {\rm otherwise} \end{array} \right . , \end{equation} where $\hat {\boldsymbol \mu}_\lambda(\hat \alpha_0)={\bf e}+ {\bf x}_0 \hat \alpha_0$, ${\bf x}_0=BE \circ A {\bf 1}$, $X_1=BE \circ A \Phi$, $X_2=BE \circ A$ and ${\cal D} = \{ {\bf y} : \exists \hat \alpha_0 \in {\mathbb R} \ {\rm satisfying} \ {\bf x}_0^{\rm T} {\bf 1}= {\bf x}_0^{\rm T} ({\bf y}/({\bf e}+ {\bf x}_0 \hat \alpha_0)) \ {\rm and} \ {\bf e}+ {\bf x}_0 \hat \alpha_0 > {\bf 0}\}$. \end{property} \bigskip \noindent Second we define the corresponding null-thresholding statistic. \begin{definition} \label{def:nts} The null-thresholding statistic $\Lambda$ for $(\hat {\boldsymbol \alpha}, \hat {\bf s})_{\lambda_1,\lambda_2}$ in~\eqref{eq:l1penalty} is $$ \Lambda=(\Lambda_1, \Lambda_2):=(\lambda_1({\bf Y}_0), \lambda_2({\bf Y}_0)) \quad {\rm with} \quad {\bf Y}_0 \sim {\rm Poisson}({\bf e}+{\bf x}_0 \alpha_0).
$$ \end{definition} Note that ${\bf Y}_0$ has mean ${\bf e}+{\bf x}_0 \alpha_0$, that is, the zero-scene assumes zero emissivity (i.e., ${\boldsymbol \alpha}={\bf 0}$) and no point source (i.e., ${\bf s}={\bf 0}$). The goal of our selected hyperparameters $(\lambda_1^{\rm QUT}, \lambda_2^{\rm QUT})$ is to reproduce this zero-scene with high probability. This is achieved in a third step by taking marginal quantiles of the null-thresholding statistic. \noindent \begin{definition} \label{def:qut} The quantile universal thresholds $(\lambda_1^{\rm QUT}, \lambda_2^{\rm QUT})$ are the upper $\alpha_1$-quantile of $\Lambda_1$ for $\lambda_1$ and the upper $\alpha_2$-quantile of $\Lambda_2$ for $\lambda_2$. \end{definition} The quantile universal thresholds have the following desired property. \bigskip \noindent {\bf Property}: With $(\lambda_1^{\rm QUT}, \lambda_2^{\rm QUT})$, the estimator~\eqref{eq:l1penalty} reproduces the zero-scene with probability at least $1-\alpha_1-\alpha_2$ since ${\mathbb P}((\hat {\boldsymbol \alpha}, \hat {\bf s})_{\lambda_1^{\rm QUT},\lambda_2^{\rm QUT}}= ({\bf 0}, {\bf 0}) ; {\boldsymbol \alpha}={\bf 0}, {\bf s}={\bf 0}) \geq 1-\alpha_1-\alpha_2$. \bigskip In practice, the choice of $\alpha_1$ and $\alpha_2$ can be guided by the following considerations. Since the former is linked to the estimation of the emissivity function $\epsilon_R$, we choose $\alpha_1=1/\sqrt{\pi \log P}$ as for the universal threshold of \citet{Dono94b} in the Gaussian case. The latter is linked to the identification of the point sources, so we recommend for instance $\alpha_2=1/N^2$ to control the false discovery rate at level $\alpha_2$ in the weak sense: with $\alpha_2=1/N^2$, the average number of falsely detected point sources is one per image when no point sources are present. \section{Numerical experiments} \label{sec:tests} \subsection{Simulated data} \begin{figure}[!t] \centering \includegraphics[width=1.0\textwidth]{simtests.pdf} \caption{Three different simulation profiles (top row) with corresponding simulated galaxy cluster images (bottom row).} \label{fig:simprofiles} \end{figure} We simulate galaxy clusters according to model~\eqref{eq:poismodel} with known constant background $e_{x,y}=10^{-4}$, known sensitivity matrix~$E$, and blurring operator~$B$ corresponding to the point spread function $${\rm psf}(r; r_0, \alpha)=\left(1+\left(\frac{r}{r_0}\right)^2\right)^{-\alpha}$$ of the XMM telescope ($\alpha=1.449$ and $r_0=2.2364$ pixels). The simulations are based on three profile functions: {\tt cosmoBlocks} is a cropped version of the well-known test function {\tt blocks} used in signal processing \citep{Dono94b}; although not expected to describe a galaxy cluster, it allows us to demonstrate the flexibility of our procedure; {\tt cosmo1} and {\tt cosmo2} are typical profiles according to cosmologists. For each test profile, we simulate $N\times N$ images of galaxy clusters for $N \in \{ 128, 256, 512 \}$ and perform $M \in \{96, 48, 24\}$ Monte Carlo samples, respectively, to estimate the mean squared error. We consider two scenarios, first without and then with point sources, to quantify the robustness of the methods to the presence of point sources. A total of $N/4$ point sources are uniformly distributed over the whole image. The amplitude of each point source is uniformly distributed on $[0, 0.002]$. We compare our estimator (QUT-lasso) to the state-of-the-art method used by cosmologists (SA) described in Section~\ref{subsct:SA}.
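As an indication of how such test images can be generated, a simplified sketch follows; it bypasses the Abel step and draws a cluster surface brightness directly, and all helper names and amplitudes are our own ad hoc choices rather than those of the actual simulation code.
\begin{verbatim}
import numpy as np

N = 128
yy, xx = np.mgrid[0:N, 0:N]
r = np.hypot(xx - N / 2, yy - N / 2)       # distance to the aim point

def psf(r, r0=2.2364, alpha=1.449):        # XMM point spread function
    return (1.0 + (r / r0)**2)**(-alpha)

# smooth cluster brightness plus N/4 random point sources
signal = 2e-3 * (1.0 + (r / 10.0)**2)**(-1.5)
sources = np.zeros((N, N))
idx = np.random.randint(0, N, size=(N // 4, 2))
sources[idx[:, 0], idx[:, 1]] = np.random.uniform(0.0, 0.002, N // 4)

# blur with the PSF via FFT, add the background, draw Poisson counts
kernel = psf(np.roll(r, (N // 2, N // 2), axis=(0, 1)))
kernel /= kernel.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(signal + sources)
                               * np.fft.fft2(kernel)))
counts = np.random.poisson(1e-4 + 3e4 * blurred)  # e = 1e-4, ad hoc exposure
\end{verbatim}
The convolution is computed in Fourier space, a natural choice for a stationary blur on a periodic grid.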
Recall that the SA method is a two-step method: first estimate the location of potential point sources, then perform the deprojection. We help the SA method by making its first step oracle: since we are running a simulation, we know where the point sources are and provide this information through the sensitivity matrix $E$, setting $E_{x,y}=0$ when pixel $(x,y)$ contains a point source. Table~\ref{meanMSE} shows the estimated mean squared error between $\log \hat \epsilon$ and $\log \epsilon$ for each simulation. The first striking result is that QUT-lasso performs better than the state-of-the-art method, both without and with point sources. Second, as we expected, QUT-lasso is robust to point sources by means of the $\ell_1$ penalty on the point source matrix $S$. The state-of-the-art method is not at all robust for {\tt cosmo1}. \input{tablebigsims.tex} Cosmologists are also interested in quantifying the uncertainty of the emissivity estimate. To that aim, the image can be segmented into blocks of size $2 \times 2$ pixels and, assuming that the four pixels are approximately i.i.d., bootstrapping within each block can be employed to provide bootstrapped images and corresponding emissivity curves. Pointwise quantiles of these estimated curves provide a measure of uncertainty, as shown in Figure \ref{confidenceintervals} for the three test functions and two sample sizes. We observe that the proposed estimator (red curve) is closer to the true emissivity (black) and less wiggly than the state-of-the-art (blue), and that coverage improves as the sample size increases, especially in areas of discontinuities. \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{confidenceintervals256_512.pdf} \caption{Confidence intervals obtained by bootstrap for images of size $N\times N$ with $N=256$ (top) and $N=512$ (bottom). In black the true profile, in blue the state-of-the-art estimate, and in red the estimated profile with its confidence intervals in gray obtained with the proposed method. \label{confidenceintervals}} \end{figure*} \subsection{Real data} We applied our method to real data; the results are shown in Figure \ref{realresult}. We obtained confidence intervals by bootstrapping the image in blocks of $2 \times 2$ pixels, and we compared with the results obtained by the state-of-the-art methodology. We show the results in the intervals $(-0.06,0.06)$ and $(-0.16,0.16)$ for Chandra and XMM-Newton, respectively. We find an excellent agreement between the results obtained with the two independent telescopes. Given the better spatial resolution of Chandra compared to XMM-Newton, it samples the shape of the emissivity profile in the innermost regions better, whereas in XMM-Newton the peak is smeared out by the point spread function of the telescope. Conversely, the higher sensitivity of XMM-Newton allows it to detect the emission from the source out to larger radii than Chandra. \begin{figure*}[!t] \centering \includegraphics[width=1.0\textwidth, height=6.3in]{xmmANDchandra.pdf} \caption{Real data results. Top: pictures taken by two telescopes of the same galaxy cluster: Chandra (high resolution) and XMM (high sensitivity). Middle: emissivities estimated by our method (continuous line) and the state-of-the-art (dotted line).
Bottom: all four estimates on the same plot.} \label{realresult} \end{figure*} \subsection{Summary of empirical findings} As shown in Table~\ref{meanMSE}, our method outperforms the current state-of-the-art method by providing results that are typically closer to the true value by a factor of three to five on average. Thanks to the use of wavelets in the linear expansion~\eqref{eq:linearexpansion}, QUT-lasso adapts to local features of the emissivity. Moreover, our method does not require \emph{a priori} knowledge of the position of contaminating point sources, but provides, in a single step, an estimate of the emissivity that is robust to the presence of point sources. For the selection of its two regularization parameters, the quantile universal threshold for the Poisson GLM with identity link is employed, which makes the method fully automatic and by far superior to the methods that are commonly used in astrophysics. Figure~\ref{confidenceintervals} also shows good coverage by the bootstrap-based confidence intervals, especially for the larger sample size. Application to data from the Chandra and XMM-Newton telescopes, shown in Figure~\ref{realresult}, yields good agreement between the profiles reconstructed with QUT-lasso and the standard method, yet with a smoother profile recovered by QUT-lasso. Given that the true emissivity profile of the source is unknown, we cannot make a quantitative assessment based on this plot. However, our results obtained with simulated data clearly highlight the superiority of our method over the current state-of-the-art. \section{Conclusions} In this paper, we have presented a novel technique to reconstruct the three-dimensional properties of an ``optically thin'' astrophysical source from two-dimensional observations including the presence of background, unrelated point sources, and Poisson noise. This method is based on a Poisson GLM with identity link and a lasso-type regularization with two regularization parameters that are selected with the quantile universal threshold (QUT). The linear model for the emissivity curve is based on an expansion on basis functions which include wavelets. This makes the QUT-lasso method particularly flexible and able to discover galaxy clusters with unusual shapes. Future applications to real data will allow us to accurately reconstruct the three-dimensional gas density profiles in galaxy clusters, which can be used to study the astrophysical properties of the plasma in clusters of galaxies, estimate cosmological parameters, and measure the gravitational field in massive structures to set constraints on dark matter and modified gravity. \section{Acknowledgements} The authors thank the Swiss National Science Foundation. \section{Reproducible research} The code and data that generated the figures in this article may be found online at {\tt http://www.unige.ch/math/folks/sardy/astroRepository}.
{ "attr-fineweb-edu": 1.479492, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} The objects of this paper are two transport problems, namely the \emph{branched transport problem} and the \emph{urban planning problem}. In general terms, a transport problem asks how to move mass from a given initial spatial distribution to a specific desired final mass distribution at the lowest possible cost. Different cost functionals now lead to quite different optimisation problems. \paragraph{Monge's problem.} The prototype of all these problems is Monge's problem. In a rather general setting, let $\mu_+, \mu_-$ be finite positive Borel measures on $\R^n$ with the same total mass. The transportation of $\mu_+$ onto $\mu_-$ is modelled by a measurable map $t : \R^n \to \R^n$ such that $\mu_-(B) = \mu_+(t^{-1}(B))$ for all Borel sets $B$. The cost to move an infinitesimal mass $\de\mu_+(x)$ at the point $x$ to the point $t(x)$ is given by $d(x,t(x))\,\de\mu_+(x)$, and the total cost is then given by the formula \begin{equation}\label{eq:monge} \int_{\R^n} d(x,t(x))\,\de\mu_+(x)\,. \end{equation} Usually, $d(x,y) = |x-y|^p$ for some $p\geq1$. \paragraph{Branched transport problem.} Monge's cost functional is linear in the transported mass $\de\mu_+(x)$ and thus does not penalise spread-out particle movement. Each particle is allowed to travel independently of the others. This feature makes Monge's problem unable to model systems which naturally show ramifications (e.\,g., root systems, leaves, the cardiovascular or bronchial system, etc.). For this reason, the \emph{branched transport problem} has been introduced by Maddalena, Morel, and Solimini in \cite{Maddalena-Morel-Solimini-Irrigation-Patterns} and by Xia in \cite{Xia-Optimal-Paths}. It involves a functional which forces the mass to be gathered as much as possible during the transportation. This is achieved using a cost which is strictly subadditive in the moved mass so that the marginal transport cost per mass decreases the more mass is transported together (cf.\ Figure\,\ref{fig:transportCost}). We will formally introduce branched transport later in Section \ref{sec:branchedTransportBrief}. \paragraph{Urban planning problem.} The second problem we are interested in is the \emph{urban planning problem}, introduced in \cite{Brancolini-Buttazzo}. Here, the measures $\mu_+,\mu_-$ have the interpretation of the population and workplace densities, respectively. In this case the cost depends on the public transportation network $\Sigma$, which is the object of optimisation. In fact, one part of the cost associated with a network $\Sigma$ is the optimal value of \eqref{eq:monge}, where the cost $d$ depends on $\Sigma$ and is chosen in such a way that transportation on the network is cheaper than outside the network. The other part models the cost for building and maintaining the network. A detailed, rigorous description is given in Section \ref{sec:introUrbPlan}. \paragraph{Patterns and graphs.} Branched transport has been studied extensively and has several formulations. Maddalena, Morel, and Solimini in \cite{Maddalena-Morel-Solimini-Irrigation-Patterns} and Bernot, Caselles, and Morel in \cite{Bernot-Caselles-Morel-Traffic-Plans} proposed a Lagrangian formulation based on \emph{irrigation patterns} $\chi$ that describe the position of each mass particle $p$ at time $t$ by $\chi(p,t)$. The difference between both articles is that in the latter the particle trajectories cannot be reparameterised without changing the transport cost.
The viewpoint introduced by Xia in \cite{Xia-Optimal-Paths} is instead Eulerian, where only the flux of particles is described, discarding its dependence on the time variable $t$. A very interesting aspect of branched transport is its regularity theory as studied in several articles, among them \cite{Bernot-Caselles-Morel-Structure-Branched} and \cite{Xia-Interior-Regularity} for the geometric regularity, \cite{Morel-Santambrogio-Regularity} for the regularity of the tangents to the branched structure, \cite{Santambrogio-Landscape} and \cite{Brancolini-Solimini-Hoelder} for the regularity of the landscape function, \cite{Brancolini-Solimini-Fractal} for the fractal regularity of the minimisers. The equivalence of the different models and formulations is instead the topic of \cite{Maddalena-Solimini-Transport-Distances}, \cite{Maddalena-Solimini-Synchronic}. Branched transport can also be modelled with curves in the Wasserstein space as in \cite{Brancolini-Buttazzo-Santambrogio}, \cite{Bianchini-Brancolini}, \cite{Brasco-Santambrogio}. \paragraph{Main result of the paper.} The main result of this paper is a unified viewpoint of the branched transport problem and the urban planning problem. Indeed, we show that the urban planning problem can also be cast in the Eulerian or flux-based and in the Lagrangian or pattern-based framework. This involves the consideration of new functionals which are still subadditive in the moved mass, but not strictly so (cf.\ Figure\,\ref{fig:transportCost}), introducing several technical difficulties. The main theorem, Theorem\,\ref{thm:urban_plannning_energy_equivalences}, proves the equivalence between the original, the Lagrangian, and the Eulerian formulation of urban planning. One advantage of these equivalences is that now one can consider regularity questions in the most convenient formulation; as an example we show a single path property of optimal urban planning networks in Proposition\,\ref{prop:single_path_property_for_the_urban_planning_problem}. For the sake of symmetry we also introduce an additional formulation of branched transport which is based on the transport network $\Sigma$ as in the original urban planning formulation. Its equivalence to the pattern- and flux-based formulations is stated in Theorem\,\ref{thm:equivalenceBrTpt}. The following paragraphs introduce branched transport and urban planning more formally via cost functionals of the transport network $\Sigma$. Section\,\ref{sec:branched_transport} then recalls the Eulerian and Lagrangian formulation of branched transport and proves their equivalence to the formulation based on $\Sigma$. Section\,\ref{sec:urban_planning} puts forward the novel Eulerian and Lagrangian formulation of urban planning and states their equivalence, the proof of which is deferred to the separate Section\,\ref{sec:proof_of_main_theorem}. Section\,\ref{sec:urban_planning} also states a regularity result for (a subset of all) minimisers, the single path property, based on the model equivalences.
\begin{figure} \centering \setlength{\unitlength}{.1\linewidth} \begin{picture}(3,2.3) \put(0,0){\includegraphics[width=3\unitlength]{transportCost}} \put(2.80,-.15){$m$} \put(2.9,1.5){$c^\alpha(m)$} \put(2,2.3){$c^{\varepsilon,a}(m)$} \end{picture} \caption{The cost per transport distance is a subadditive function of the transported mass $m$ for the branched transport model ($c^\alpha(m)=m^\alpha$ for some $\alpha\in(0,1)$) as well as the urban planning model ($c^{\varepsilon,a}(m)=\min(am,m+\varepsilon)$ for some $\varepsilon>0$, $a>1$).} \label{fig:transportCost} \end{figure} \subsection{Branched transport}\label{sec:branchedTransportBrief} Branched transport is the first network optimisation problem we consider. As already mentioned, it was introduced in \cite{Maddalena-Morel-Solimini-Irrigation-Patterns,Xia-Optimal-Paths} to model natural and artificial structures which show branching. Since it is simpler to state, we here take the viewpoint of network optimisation and therefore introduce a new formulation based on the network $\Sigma$. The original flux- and pattern-based formulations will be introduced in Section\,\ref{sec:branched_transport}, and their equivalence to the new formulation will be shown in Theorem\,\ref{thm:equivalenceBrTpt}. The network is modelled as a one-dimensional set $\Sigma\subset\R^n$, thought of as a collection of pipes through which a quantity with initial distribution $\mu_+$ is transported to achieve the distribution $\mu_-$. The transport cost scales linearly with transport distance, but is strictly subadditive in the transported mass, which models scale effects (i.\,e., transporting an amount of mass in bulk is cheaper than splitting it up and transporting the portions separately). Precisely, the cost of moving the mass from $\mu_+$ to $\mu_-$ via a rectifiable pipe network $\Sigma$ is given by \begin{equation*} \brTptEn^{\alpha}[\Sigma] =\inf_{\substack{\vel:\Sigma\to\R^n\setminus\{0\}\\\flux=\vel\hdone\restr\Sigma\\[1ex]\dv\flux=\mu_+-\mu_-}}\int_\Sigma c^\alpha(|\vel|)\,\de\hdone \qquad\text{with }c^{\alpha}(m)=m^{\alpha}\,. \end{equation*} As will be explained in more detail in Section \ref{sec:branched_transport}, the vector measure $\flux$ describes the mass flux, and $\vel$ denotes its Radon--Nikodym derivative with respect to $\hdone\restr\Sigma$, the one-dimensional Hausdorff measure on $\Sigma$. The divergence is taken in the distributional sense, and $\dv\flux=\mu_+-\mu_-$ identifies $\mu_+$ as a flux source and $\mu_-$ as a sink. The parameter $\alpha\in(0,1)$ governs how strong the scale effect is, i.\,e., how much cost can be saved by transporting the mass in bulk. An optimal transport network is a minimiser of the functional $\brTptEn^{\alpha}$. Existence of minimisers is shown in the Lagrangian or Eulerian framework in \cite{Maddalena-Morel-Solimini-Irrigation-Patterns,Xia-Optimal-Paths,Bernot-Caselles-Morel-Traffic-Plans} and many other works; regularity properties of minimisers are instead considered in \cite{Xia-Interior-Regularity,Bernot-Caselles-Morel-Structure-Branched,Morel-Santambrogio-Regularity,Santambrogio-Landscape,Brancolini-Solimini-Hoelder}; typically minimisers exhibit a type of fractal regularity (see \cite{Brancolini-Solimini-Fractal}). \subsection{Urban planning}\label{sec:introUrbPlan} The second energy functional we consider has been introduced as an urban planning model in \cite{Brancolini-Buttazzo}.
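Both energies build on the subadditive per-length costs of Figure\,\ref{fig:transportCost}, which reward gathering mass. As a quick illustration (with parameter values of our own choosing), the following Python sketch compares, for two unit masses travelling from $(\pm 1,0)$ to a common sink at $(0,2)$, the direct ``V'' configuration with ``Y'' configurations that merge at a branch point $(0,b)$ and share a trunk; for both cost functions the best ``Y'' turns out strictly cheaper than the ``V''.
\begin{verbatim}
import numpy as np

def c_alpha(m, alpha=0.5):          # branched transport: m^alpha
    return m**alpha

def c_urban(m, eps=0.5, a=2.0):     # urban planning: min(a m, m + eps)
    return min(a * m, m + eps)

def v_cost(c):                      # each mass goes straight to the sink
    return 2 * c(1.0) * np.hypot(1.0, 2.0)

def y_cost(b, c):                   # merge at (0, b), share a trunk
    return 2 * c(1.0) * np.hypot(1.0, b) + c(2.0) * (2.0 - b)

for c in (c_alpha, c_urban):
    best = min(y_cost(b, c) for b in np.linspace(0.0, 2.0, 401))
    print(c.__name__, round(v_cost(c), 3), round(best, 3))
\end{verbatim}
For $c^\alpha$ the gain from branching is substantial, while for $c^{\varepsilon,a}$ it is smaller; this reflects the fact that $c^{\varepsilon,a}$ is subadditive but, in contrast to $c^\alpha$, not strictly so.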
In this model, $\mu_+$ has the interpretation of a population density in an urban region, and $\mu_-$ represents the density of workplaces. The aim is to develop a public transportation network such that the population's daily commute to work is most efficient. The transportation network is described as a collection of one-dimensional curves; more precisely, it is a set $\Sigma\subset\R^n$ with finite one-dimensional Hausdorff measure $\hdone(\Sigma)$. An employee can travel part of the commute by their own means, which is associated with a cost $a>0$ per distance, or via the transportation network $\Sigma$ at the distance-specific cost $b>0$ with $b<a$. Hence, if a travelling path is represented by a curve $\theta : [0,1] \to \R^n$, its cost is given by (for ease of notation we here identify the path $\theta$ with its image $\theta([0,1])$) \begin{equation*} a\hdone(\theta\setminus\Sigma)+b\hdone(\theta\cap\Sigma)\,. \end{equation*} The minimum cost to get from a point $x\in\R^n$ to a point $y\in\R^n$ is given by the metric \begin{equation}\label{eq:d_Sigma} d_\Sigma(x,y)=\inf \{a\hdone(\theta\setminus\Sigma)+b\hdone(\theta\cap\Sigma) \ : \ \theta\in C_{x,y}\} \end{equation} where \begin{equation}\label{eq:C_x_y} C_{x,y} = \{\theta : [0,1] \to \R^n \ : \ \theta \ \text{is Lipschitz}, \ \theta(0) = x, \ \theta(1) = y\} \end{equation} denotes the set of Lipschitz curves connecting $x$ and $y$. The Wasserstein distance induced by this metric describes the minimum cost to connect the population density $\mu_+$ and the workplace density $\mu_-$ and is given by \begin{equation}\label{eq:WdSigma} \Wd{d_\Sigma}(\mu_+,\mu_-)=\inf_{\mu\in\Pi(\mu_+,\mu_-)}\int_{\R^n\times\R^n}d_\Sigma(x,y)\,\de\mu(x,y)\,. \end{equation} It is the infimum of formula \eqref{eq:monge} where we choose $d(x,y) = d_\Sigma(x,y)$. Here, $\Pi(\mu_+,\mu_-)$ denotes the set of transport plans, i.\,e., the set of non-negative finite Borel measures on the product space $\R^n\times\R^n$ whose marginals are $\mu_+$ and $\mu_-$, respectively, \begin{equation*} \Pi(\mu_+,\mu_-)=\left\{\mu\in\fbm(\R^n\times\R^n)\,:\,\pushforward{\pi_1}\mu=\mu_+,\pushforward{\pi_2}\mu=\mu_-\right\}\,, \end{equation*} where $\pushforward{\pi_i}\mu$ denotes the pushforward of $\mu$ under the projection $\pi_i : \R^n \times \R^n \to \R^n$, $(x_1,x_2) \mapsto x_i$. The urban planning problem is the task of finding an optimal network $\Sigma$ with respect to the transport cost $\Wd{d_\Sigma}(\mu_+,\mu_-)$ and an additional penalty $\hdone(\Sigma)$, the building and maintenance cost for the network. This leads to the energy functional \begin{equation*} \urbPlEn^{\varepsilon,a,b}[\Sigma]=\Wd{d_\Sigma}(\mu_+,\mu_-)+\varepsilon\hdone(\Sigma)\,, \end{equation*} to be minimised among all sets $\Sigma\subset\R^n$. Existence of minimisers has been shown among all closed connected $\Sigma$ (see \cite{Brancolini-Buttazzo} or \cite[Chap.\,3]{BuPrSoSt09}). Without requiring connectedness, existence is proved in \cite[Chap.\,4]{BuPrSoSt09}. We will actually set $b = 1$ and study $\urbPlEn^{\varepsilon,a}\equiv\urbPlEn^{\varepsilon,a,1}$ without loss of generality, since $\urbPlEn^{\varepsilon,a,b}(\Sigma) = b\urbPlEn^{\frac{\varepsilon}{b},\frac{a}{b},1}(\Sigma)$. \subsection{Notation and useful notions} Let us briefly fix some frequently used basic notation. \begin{itemize} \item \textbf{Lebesgue measure.} $\lebesgue^n$ denotes the $n$-dimensional \emph{Lebesgue measure}. \item \textbf{Hausdorff measure.} $\hd^r$ denotes the $r$-dimensional \emph{Hausdorff measure}.
\item \textbf{Non-negative finite Borel measures.} $\fbm(\R^n)$ denotes the set of \emph{non-negative finite Borel measures} on $\R^n$. Notice that these measures are countably additive and also regular by \cite[Thm.\,2.18]{Ru87}. The corresponding total variation norm is denoted by $\|\cdot\|_\fbm$. \item \textbf{(Signed or vector-valued) regular countably additive measures.} $\rca(\R^n)$ denotes the set of \emph{(signed or vector-valued) regular countably additive measures} on $\R^n$. The corresponding total variation norm is denoted by $\|\cdot\|_\rca$. \item \textbf{Weak-$*$ convergence.} The weak-$*$ convergence on $\fbm(\R^n)$ or $\rca(\R^n)$ is indicated by $\weakstarto$. \item \textbf{Restriction of a measure to a set.} Let $(X,\mathcal A,\mu)$ be a measure space and $Y \subset X$ with $Y\in\mathcal A$. The measure $\mu\restr Y$ is the measure defined by \begin{displaymath} \mu\restr Y(A) = \mu(A \cap Y)\,. \end{displaymath} \item \textbf{Pushforward of a measure.} For a measure space $(X,\Mcal,\mu)$, a measurable space $(Y,\Ncal)$, and a measurable map $T : X \to Y$, the \emph{pushforward} of $\mu$ under $T$ is the measure $T_\#\mu$ on $(Y,\Ncal)$ defined by \begin{displaymath} T_\#\mu(B) = \mu(T^{-1}(B)) \quad \text{for all $B \in \Ncal$}. \end{displaymath} \item \textbf{Continuous and smooth functions.} $\cont_c(\R^n)$ and $\smooth(\R^n)$ denote the set of continuous and smooth functions, respectively, with compact support on $\R^n$. \item \textbf{Absolutely continuous functions.} $\AC(I)$ denotes the set of \emph{absolutely continuous functions} on the interval $I\subset\R$. \item \textbf{Lipschitz functions.} $\Lip(I)$ denotes the set of \emph{Lipschitz functions} on the compact domain $I$. \item \textbf{Characteristic function of a set.} Let $X$ be a set and $A \subseteq X$. The \emph{characteristic function} of the set $A$ is defined as \begin{displaymath} \setchar{A}:X\to\{0,1\}\,,\quad \setchar{A}(x) = \begin{cases} 1 & x \in A,\\ 0 & x \notin A. \end{cases} \end{displaymath} \item \textbf{Dirac mass.} Let $x \in \R^n$. The \emph{Dirac mass} in $x$ is the distribution $\delta_x$ defined by \begin{displaymath} \langle \delta_x,\varphi \rangle = \varphi(x) \text{ for all } \varphi \in \smooth(\R^n). \end{displaymath} Equivalently, viewed as a measure, $\delta_x(A)=1$ if $x\in A$ and $\delta_x(A)=0$ otherwise. \end{itemize} Finally, for the reader's convenience we compile here a list of the most important symbols with references to the corresponding definitions. \begin{itemize} \item $I = [0,1]$: The unit interval. \item $d_\Sigma$, $\Wd{d_\Sigma}$: Urban planning transport metric and transport cost (see \eqref{eq:d_Sigma}-\eqref{eq:WdSigma}). \item $C_{x,y}$: Lipschitz paths connecting $x$ and $y$ (see \eqref{eq:C_x_y}). \item $\flux_G$: Flux associated with a discrete graph $G$ (see \eqref{eqn:graphFlux}). \item $(\reSpace,\Bcal(\reSpace),\reMeasure)$: Reference space of all particles (Definition\,\ref{def:reference_space}). \item $\chi$: Irrigation pattern of all particles (Definition\,\ref{def:irrigation_pattern}). \item $[x]_\chi, m_\chi(x)$: Solidarity class of $x$ and its mass (Definition\,\ref{def:solidarity_classes}). \item $\mu_+^\chi, \mu_-^\chi$: Irrigating and irrigated measure (Definition \ref{def:irrigation}). \item $s_\alpha^\chi$, $\repsilonachi$: Cost densities of branched transport (Definition \ref{eqn:costDensityBrTpt}) and urban planning (Definition \ref{def:urbPlPatternForm}). \item $\Theta$: Lipschitz curves on $I$, $\Theta = \Lip(I)$.
This notation is introduced in the framework of transport path measures (Definition\,\ref{def:transport_path_measures}). \item $\TPM(\mu_+,\mu_-)$: Transport path measures moving $\mu_+$ onto $\mu_-$ (Definition\,\ref{def:transport_path_measures}). \end{itemize} \section{Branched transport formulations}\label{sec:branched_transport} In this section we will present the Eulerian or flux-based and the Lagrangian or pattern-based formulations of the \emph{branched transport problem} and state their equivalence to the formulation from Section\,\ref{sec:branchedTransportBrief}. We begin with the Eulerian formulation. \subsection{Flux-based formulation} We start by considering the formulation given by Xia in \cite{Xia-Optimal-Paths}. Let $\mu_+ = \sum_{i = 1}^k a_i\delta_{x_i}$, $\mu_- = \sum_{j = 1}^l b_j\delta_{y_j}$ be discrete finite non-negative measures with $a_i, b_j > 0$, $x_i, y_j \in \R^n$. Suppose also that they have the same mass, \begin{displaymath} \sum_{i = 1}^k a_i = \sum_{j = 1}^l b_j\,. \end{displaymath} \begin{remark} The object of the next definition is called a \emph{transport path} in \cite{Xia-Optimal-Paths}, and this is the commonly used term in the branched transport literature. We deliberately employ the term \emph{mass flux} instead, since it encodes not only a path but also the amount of mass transported. This way we avoid confusion when referring to actual paths as one-dimensional curves. \end{remark} \begin{definition}[Discrete mass flux and cost function]\label{def:mass_flux_cost_function_branched_transport_discrete_case} A \emph{discrete mass flux between $\mu_+$ and $\mu_-$} is a weighted directed graph $G$ with vertices $V(G)$, straight edges $E(G)$, and edge weight function $w : E(G) \to [0,\infty)$ such that the following conditions are satisfied. Denoting by $e^-$ and $e^+$ the initial and final points of edge $e$, we require the following \emph{mass preserving conditions}: \begin{itemize} \item $a_i = \sum_{e \in E(G), e^- = x_i} w(e) - \sum_{e \in E(G), e^+ = x_i} w(e)$ for $i=1,\ldots,k$, \item $b_j = \sum_{e \in E(G), e^+ = y_j} w(e) - \sum_{e \in E(G), e^- = y_j} w(e)$ for $j=1,\ldots,l$, \item $0 = \sum_{e \in E(G), e^+ = v} w(e) - \sum_{e \in E(G), e^- = v} w(e)$ for $v \in V(G)\setminus\{x_1,\ldots,x_k,y_1,\ldots,y_l\}$. \end{itemize} Given a parameter $\alpha\in(0,1)$, we define the transport cost per transport length as $c^\alpha(w) = w^\alpha$ (cf.\ Figure\,\ref{fig:transportCost}). The \emph{cost function} $\XiaEn^\alpha$ associated with a mass flux $G$ is defined as \begin{displaymath} \XiaEn^\alpha(G) = \sum_{e \in E(G)} c^\alpha(w(e))\,l(e) = \sum_{e \in E(G)} w(e)^\alpha\, l(e)\,, \end{displaymath} where $l(e)$ is the length of edge $e$. \end{definition} In order to state the branched transport problem in the case of non-discrete finite Borel measures, we need to replace graphs with measures. \begin{definition}[Graphs as vectorial measures]\label{def:graphs_as_vectorial_measures} Consider a weighted oriented graph $G$. Every edge $e\in E(G)$ with direction $\hat e=\frac{e^+-e^-}{|e^+-e^-|}$ can be identified with the vector measure $\mu_e = (\hdone\restr e)\, \hat e$, and the graph can be identified with the vector measure \begin{equation}\label{eqn:graphFlux} \flux_G = \sum_{e \in E(G)} w(e)\mu_e\,. \end{equation} All mass preserving conditions satisfied by a discrete mass flux $G$ between $\mu_+$ and $\mu_-$ summarise as $\dv\flux_G = \mu_+ - \mu_-$ (in the distributional sense).
\end{definition} The identification of graphs with vector measures motivates the definition of a sum operation between graphs that we state here for later usage. \begin{definition}[Sums of graphs]\label{def:sums_of_graphs} If $G_1$ and $G_2$ are weighted oriented graphs, then $G_1+G_2$ is the unique graph such that \begin{displaymath} \flux_{G_1+G_2} = \flux_{G_1} + \flux_{G_2}. \end{displaymath} \end{definition} \begin{definition}[Continuous mass flux and cost function] Let $\mu_+,\mu_- \in \fbm(\R^n)$ be of equal mass. A vector measure $\flux\in\rca(\R^n;\R^n)$ is a \emph{mass flux} between $\mu_+$ and $\mu_-$ if there exist sequences of discrete measures $\mu_+^k$, $\mu_-^k$ with $\mu_+^k \weakstarto \mu_+$, $\mu_-^k \weakstarto \mu_-$, and a sequence of vector measures $\flux_{G_k}$ with $\flux_{G_k} \weakstarto \flux$, $\dv \flux_{G_k} = \mu_+^k - \mu_-^k$. Note that $\dv\flux=\mu_+-\mu_-$ follows by continuity with respect to the weak-$*$ topology. A sequence $(\mu_+^k,\mu_-^k,\flux_{G_k})$ satisfying the previous properties is called an \emph{approximating graph sequence}, and we write $(\mu_+^k,\mu_-^k,\flux_{G_k}) \weakstarto (\mu_+,\mu_-,\flux)$. If $\flux$ is a mass flux between $\mu_+$ and $\mu_-$, the transport cost $\XiaEn^\alpha$ is defined as \begin{equation}\label{eq:functional_XiaEn} \XiaEn^\alpha(\flux) = \inf\left\{\liminf_{k \to \infty} \XiaEn^\alpha(G_k) \ : \ (\mu_+^k,\mu_-^k,\flux_{G_k}) \weakstarto (\mu_+,\mu_-,\flux)\right\}. \end{equation} \end{definition} \begin{problem}[Branched transport problem, flux formulation] Given $\mu_+,\mu_- \in \fbm(\R^n)$, the \emph{branched transport problem} is \begin{equation*} \min\{\XiaEn^\alpha(\flux) \ : \ \flux \text{ mass flux between } \mu_+\text{ and }\mu_-\}\,. \end{equation*} \end{problem} \begin{remark}[Existence of minimisers] A minimiser exists for $\mu_+,\mu_-\in\fbm(\R^n)$ with compact support \cite{Xia-Optimal-Paths}. The minimum value $d_\alpha(\mu_+,\mu_-)$ is a distance on $\fbm(\R^n)$, which induces the weak-$*$ convergence (see \cite{Xia-Optimal-Paths}). \end{remark} \begin{remark}\label{rem:flux_equivalent} It can be shown (see \cite{Xia-Optimal-Paths}, \cite{Bernot-Caselles-Morel-Structure-Branched}) that a mass flux $\flux$ with finite cost can be seen as a rectifiable set $\Sigma$ together with a real multiplicity $\tilde\vel : \Sigma \to (0,\infty)$ and an orientation $\hat e : \Sigma \to \R^n$, $|\hat e| = 1$, such that \begin{equation*} \flux = \tilde\vel \hat e \,(\hdone\restr\Sigma). \end{equation*} The quantity $\vel = \tilde\vel \hat e$ describes the mass flux at each point in $\Sigma$. In that case we have \begin{displaymath} \XiaEn^\alpha(\flux) = \int_\Sigma \tilde\vel^\alpha \de\hdone = \int_\Sigma |\vel|^\alpha \de\hdone\,. \end{displaymath} \end{remark} \subsection{Pattern-based formulation} In this section we recall the Lagrangian or pattern-based formulation (see \cite{Maddalena-Morel-Solimini-Irrigation-Patterns}, \cite{Bernot-Caselles-Morel-Traffic-Plans}, \cite{Maddalena-Solimini-Synchronic}). \begin{definition}[Reference space]\label{def:reference_space} Here we consider a separable uncountable metric space $\reSpace$ endowed with the $\sigma$-algebra $\Bcal(\reSpace)$ of its Borel sets and a positive finite Borel measure $\reMeasure$ with no atoms. We refer to $(\reSpace,\Bcal(\reSpace),\reMeasure)$ as the \emph{reference space}. The reference space can be interpreted as the space of all particles that will be transported from a distribution $\mu_+$ to a distribution $\mu_-$.
\end{definition} \begin{remark} Let $(X,\Mcal,\mu)$ and $(Y,\Ncal,\nu)$ be measure spaces. A map $T : X \to Y$ is said to be an isomorphism of measure spaces if \begin{itemize} \item $T$ is one-to-one, \item for every $N \in \Ncal$, $T^{-1}(N) \in \Mcal$ and $\mu(T^{-1}(N)) = \nu(N)$, \item for every $M \in \Mcal$, $T(M) \in \Ncal$ and $\mu(M) = \nu(T(M))$. \end{itemize} Recall that if $\reSpace$ is a complete separable metric space and $\reMeasure$ is a positive Borel measure with no atoms (hence $\reSpace$ is uncountable), then $(\reSpace,\Bcal(\reSpace),\reMeasure)$ is isomorphic to the standard space $([0,1],\Bcal([0,1]),m\lebesgue^1 \restr [0,1])$ with $m=\reMeasure(\reSpace)$ (for a proof see \cite[Prop.\,12 or Thm.\,16 in Sec.\,5 of Chap.\,15]{Royden-Real-Analysis} or \cite[Chap.\,1]{Villani-Transport-Old-New}). As a consequence, the following definitions and results are independent of the particular choice of the reference space, and we may assume it to be the standard space without loss of generality. \end{remark} \begin{definition}[Irrigation pattern]\label{def:irrigation_pattern} Let $I = [0,1]$ and $(\reSpace,\Bcal(\reSpace),\reMeasure)$ be our reference space. An \emph{irrigation pattern} is a measurable function $\chi : \reSpace \times I \to \R^n$ such that for almost all $p\in\reSpace$ we have $\chi_p \in \AC(I)$. A pattern $\tilde\chi$ is \emph{equivalent} to $\chi$ if the images of $\reMeasure$ through the maps $p \mapsto \chi_p, p \mapsto \tilde\chi_p$ are the same. For this reason, a pattern $\chi$ can be regarded as a map $\chi : \reSpace \to \AC(I)$. For intuition, $\chi_p$ can be viewed as the path followed by the particle $p$. The image of $\chi_p$, that is $\chi_p(I)$, is called a \emph{fibre} and will frequently be identified with the particle $p$. Here we follow the setting recently introduced in \cite{Maddalena-Solimini-Synchronic}. \end{definition} \begin{definition}[Solidarity class]\label{def:solidarity_classes} For every $x\in\R^n$ we consider the set \begin{equation} [x]_\chi = \{q \in \reSpace \ : \ x \in \chi_q(I)\}\label{eq:solidarity_classes} \end{equation} of all particles flowing through $x$. The total \emph{mass} of those particles is given by \begin{equation*} m_\chi(x) = \reMeasure([x]_\chi)\,. \end{equation*} \end{definition} \begin{definition}[Cost density, cost functional]\label{eqn:costDensityBrTpt} For $0 \leq \alpha \leq 1$ we consider the following \emph{cost density}, \begin{equation*} s_\alpha^\chi(x) = c^\alpha(m_\chi(x))/m_\chi(x) = [m_\chi(x)]^{\alpha - 1}, \end{equation*} where $c^\alpha$ is the transport cost per transport length from Definition\,\ref{def:mass_flux_cost_function_branched_transport_discrete_case} and we set $s_\alpha^\chi(x) = \infty$ for $m_\chi(x)=0$. The \emph{cost functional} associated with an irrigation pattern $\chi$ is \begin{equation}\label{eq:functional_MMSEn} \MMSEn^\alpha(\chi) = \int_{\reSpace \times I} s_\alpha^\chi(\chi_p(t)) |\dot\chi_p(t)|\,\de\reMeasure(p)\, \de t\,. \end{equation} The functional $\MMSEn^\alpha$ in the above form has been introduced by Bernot, Caselles, and Morel in \cite{Bernot-Caselles-Morel-Traffic-Plans}. \end{definition} \begin{definition}[Irrigating and irrigated measure]\label{def:irrigation} Let $\chi$ be an irrigation pattern. Let $i_0^\chi,i_1^\chi:\reSpace \to \R^n$ be defined as $i_0^\chi(p) = \chi(p,0)$ and $i_1^\chi(p) = \chi(p,1)$.
The \emph{irrigating measure} and the \emph{irrigated measure} are defined as the pushforward of $\reMeasure$ via $i_0^\chi$ and $i_1^\chi$, respectively, \begin{displaymath} \mu_+^\chi = \pushforward{(i_0^\chi)}{\reMeasure}\,, \quad \mu_-^\chi = \pushforward{(i_1^\chi)}{\reMeasure}\,. \end{displaymath} \end{definition} \begin{problem}[Branched transport problem, pattern formulation]\label{prob:branched_transport_problem} Given $\mu_+,\mu_- \in \fbm(\R^n)$, the \emph{branched transport problem} is \begin{equation*} \min\{\MMSEn^\alpha(\chi) \ : \ \mu_+^\chi = \mu_+\text{ and }\mu_-^\chi = \mu_-\}\,. \end{equation*} \end{problem} \begin{remark}[Existence of minimisers] Given $\mu_+,\mu_- \in \fbm(\R^n)$ with compact support, Problem \ref{prob:branched_transport_problem} has a solution \cite{Maddalena-Solimini-Transport-Distances}. \end{remark} \subsection{Reparameterisation} In Definition\,\ref{def:irrigation_pattern} one may equivalently require $\chi_p \in \Lip(I)$ for almost all $p \in \reSpace$, instead of $\chi_p \in \AC(I)$, which is the content of Propositions\,\ref{prop:fixed_interval_reparameterisation_for_arc-length_parameterised_patterns} and \ref{prop:reparameterised_patterns_have_the_same_cost} below. This becomes necessary as we will later refer to results from works using either one or the other formulation. In addition, it allows us to assume Lipschitz continuous fibres throughout the remainder of the article. Let us first recall the following result, whose proof can be found in \cite[Lem.\,1.1.4]{Ambrosio-Gigli-Sarave-Gradient-Flows}. \begin{lemma}[Arc-length reparameterisation for $\AC$]\label{lem:arc-length_reparameterisation_for_AC} Let $v \in \AC([a,b])$ and let $L = \int_a^b |\dot v(t)|\,\de t$ be its length. Let \begin{align*} &\tilde s(t) = \textstyle\int_a^t |\dot v(\tau)|\,\de\tau\,,\\ &\tilde t(s) = \inf\{t \in [a,b] \ : \ \tilde s(t) = s\}\,, \end{align*} then the following holds true, \begin{itemize} \item $\tilde s \in \AC([a,b])$ with $\tilde s(a) = 0$, $\tilde s(b) = L$, \item $\tilde v = v \circ\tilde t$ satisfies $v = \tilde v \circ \tilde s$, $\tilde v \in \Lip([0,L])$, and $|\dot{\tilde v}| = 1$ a.\,e.\ in $[0,L]$. \end{itemize} \end{lemma} \begin{proposition}[Arc-length reparameterisation of patterns]\label{prop:arc-length_reparameterisation_for_patterns} Let $\chi : \reSpace \times I \to \R^n$ be an irrigation pattern. Suppose $\chi$ has finite cost $\MMSEn^\alpha(\chi) < \infty$, and define \begin{align*} \tilde s &: \reSpace \times I \to [0,\infty), && \tilde s(p,t) = \textstyle\int_0^t |\dot\chi(p,\tau)|\,\de\tau\,,\\ \tilde t &: \reSpace \times [0,\infty) \to I\cup\{\infty\}, && \tilde t(p,s) = \inf\{t \in I \ : \ \tilde s(p,t) = s\}\,,\\ \tilde\chi&: \reSpace \times [0,\infty) \to \R^n, && \tilde\chi(p,s) = \chi(p,\tilde t(p,s))\,, \end{align*} where for notational simplicity we define the infimum of the empty set as $\infty$. Then, for almost all $p \in \reSpace$ and all $s \in [0,\infty)$, $\tilde\chi(p,\cdot)$ is arc-length parameterised, and $\tilde\chi(\cdot,s)$ is measurable. \end{proposition} The proof is similar to the one of \cite[Lem.\,6.2]{Bernot-Caselles-Morel-Traffic-Plans} or \cite[Lem.\,4.1, Lem.\,4.2]{BeCaMo09}. We provide it here for completeness. \begin{proof} The fact that $\tilde\chi(p,\cdot)$ is arc-length parameterised for all $p \in \reSpace$ follows from Lemma\,\ref{lem:arc-length_reparameterisation_for_AC}. 
Since $\tilde\chi = \chi \circ (\Id_\reSpace \times \tilde t)$, its measurability properties are a consequence of the measurability of $\chi$ and $(\Id_\reSpace \times \tilde t)$ and of the fact that for every null set $N \subset \reSpace \times I$ the set $(\Id_\reSpace \times \tilde t)^{-1}(N)$ is a null set in $\reSpace \times [0,\infty)$. The measurability of $\Id_\reSpace \times \tilde t$ is proved as in \cite{Bernot-Caselles-Morel-Traffic-Plans} and follows from the measurability of the map $\tilde t$. We now show that the set $\tilde t^{-1}([0,\lambda])$ is measurable for any $\lambda \in \R$. Let $\{t_k\}_k$ be a dense sequence in $[0,\infty)$. Since $\tilde t$ is nondecreasing and lower semicontinuous in the variable $s$, we have \begin{displaymath} \tilde t^{-1}([0,\lambda]) = \bigcap_{h = 1}^\infty \bigcup_{k = 1}^\infty \{p \in \reSpace \ : \ \tilde t(p,t_k) \leq \lambda\} \times \left[0,t_k+\tfrac{1}{h}\right]. \end{displaymath} Since $\{p \in \reSpace \ : \ \tilde t(p,t_k) \leq \lambda\} = \{p \in \reSpace \ : \ \tilde s(p,\lambda) \geq t_k\}$ is measurable, we obtain that $\tilde t^{-1}([0,\lambda])$ is measurable, too. Finally, let $N \subset \reSpace \times I$ be a null set, and let $B$ be a Borel set such that $N \subset B$ and $(\reMeasure \otimes \lebesgue^1)(B) \leq \delta$ for a $\delta > 0$ to be specified below. For almost all $p \in \reSpace$ we have \begin{displaymath} \int_0^\infty \setchar{B}(p,\tilde t(p,s))\,\de s = \int_0^1 \setchar{B}(p,t)\tfrac{\partial\tilde s}{\partial t}(p,t)\,\de t = \int_0^1 \setchar{B}(p,t)|\dot\chi(p,t)|\,\de t\,. \end{displaymath} Integrating over $\reSpace$, we obtain \begin{displaymath} (\reMeasure\otimes\lebesgue^1)((\Id_\reSpace \times \tilde t)^{-1}(B)) = \int_\reSpace \int_0^1 \setchar{B}(p,t)|\dot\chi(p,t)|\,\de t\,\de\reMeasure(p)\,. \end{displaymath} Due to $\MMSEn^\alpha(\chi) < \infty$, for every $\varepsilon > 0$ there exists $\delta > 0$ such that for every set $B$ with $(\reMeasure\otimes\lebesgue^1)(B) < \delta$ we have \begin{displaymath} \int_B s_\alpha^\chi(\chi(p,t))|\dot\chi(p,t)|\,\de t\,\de\reMeasure(p) < \varepsilon\,. \end{displaymath} Since $\setchar{B}(\chi(p,t)) \leq 1 \leq \reMeasure(\reSpace)^{1-\alpha}s_\alpha^\chi(\chi(p,t))$, it follows that \begin{multline*} (\reMeasure\otimes\lebesgue^1)((\Id_\reSpace \times \tilde t)^{-1}(B)) = \int_\reSpace \int_0^1 \setchar{B}(p,t)|\dot\chi(p,t)|\,\de t\,\de\reMeasure(p) \\ \leq \reMeasure(\reSpace)^{1-\alpha}\int_B s_\alpha^\chi(\chi(p,t))|\dot\chi(p,t)|\,\de t\,\de\reMeasure(p) < \reMeasure(\reSpace)^{1-\alpha}\varepsilon. \end{multline*} Choosing $\varepsilon$ arbitrarily small gives $(\reMeasure\otimes\lebesgue^1)((\Id_\reSpace \times \tilde t)^{-1}(N)) = 0$ as desired. \end{proof} We may further reparameterise the irrigation pattern. \begin{proposition}[Constant speed reparameterisation of patterns]\label{prop:fixed_interval_reparameterisation_for_arc-length_parameterised_patterns} Let $\chi : \reSpace \times I \to \R^n$ be an irrigation pattern with finite cost $\MMSEn^\alpha(\chi)$, let $l(p)=\int_0^1 |\dot\chi(p,t)|\,\de t$ be its fibre length, and let $\tilde\chi$ be as in Proposition\,\ref{prop:arc-length_reparameterisation_for_patterns}. Then $\hat\chi:\reSpace\times I\to\R^n$, $(p,s) \mapsto \tilde\chi(p,s\,l(p))$, is an irrigation pattern which reparameterises the fibres of $\chi$ with $\hat\chi_p\in\Lip(I)$ and constant velocity $|\dot{\hat\chi}_p|$ for almost all $p\in\reSpace$. \end{proposition} \begin{proof} This follows from the properties of $\tilde\chi$.
\end{proof} \begin{proposition}[Reparameterised patterns have the same cost]\label{prop:reparameterised_patterns_have_the_same_cost} Let $\chi : \reSpace \times I \to \R^n$ be an irrigation pattern with finite cost $\MMSEn^\alpha(\chi)$ and let $\hat\chi$ be its Lipschitz reparameterisation. Then $\MMSEn^\alpha(\hat\chi)=\MMSEn^\alpha(\chi)$. \end{proposition} \begin{proof} The proof is straightforward, once one notices that the solidarity classes \eqref{eq:solidarity_classes} do not depend on the parameterisation. \end{proof} \subsection{Equivalence between the formulations} It has been proved by Bernot, Caselles, and Morel in \cite[Sec.\,6]{Bernot-Caselles-Morel-Structure-Branched} that the pattern-based formulation is equivalent to the formulation by Xia, even though Xia's formulation does not include the particle motion, while in the pattern-based formulation by Maddalena, Morel, and Solimini the speed of particles occurs in the functional. In particular, minimisers exist for both models, and they can be identified with each other. \begin{definition}[Branched transport energies]\label{def:branched_transport_energy} Given two measures $\mu_+,\mu_-\in\fbm(\R^n)$ of equal mass, for an irrigation pattern $\chi$, a mass flux $\flux$, and a rectifiable set $\Sigma\subset\R^n$ we define \begin{equation*} \brTptEn^{\alpha}[\chi]=\MMSEn^{\alpha}(\chi)\,,\quad \brTptEn^{\alpha}[\flux]=\XiaEn^{\alpha}(\flux)\,, \end{equation*} where $\XiaEn^{\alpha}(\flux)$ and $\MMSEn^{\alpha}(\chi)$ are given by \eqref{eq:functional_XiaEn} and \eqref{eq:functional_MMSEn}, respectively, as well as \begin{align*} \brTptEn^{\alpha,\mu_+,\mu_-}[\chi] &=\begin{cases} \brTptEn^{\alpha}[\chi]&\text{if $\mu_+^\chi = \mu_+$ and $\mu_-^\chi = \mu_-$},\\ \infty&\text{else,} \end{cases}\\ \brTptEn^{\alpha,\mu_+,\mu_-}[\flux] &=\begin{cases} \brTptEn^{\alpha}[\flux]&\text{if }\dv\flux=\mu_+-\mu_-,\\ \infty&\text{else,} \end{cases}\\ \brTptEn^{\alpha,\mu_+,\mu_-}[\Sigma] &=\inf \{\brTptEn^{\alpha,\mu_+,\mu_-}[\flux] \ : \ \flux=\vel\hdone\restr\Sigma, \ \vel:\Sigma\to\R^n\setminus\{0\}\}\,. \end{align*} The last functional corresponds to the new formulation of Section\,\ref{sec:branchedTransportBrief}. Note that, if $\Sigma$ is not rectifiable, then $\brTptEn^{\alpha,\mu_+,\mu_-}[\Sigma] = \infty$ (see \cite[Proposition\,4.4]{Xia-Interior-Regularity}). \end{definition} \begin{theorem}[Equivalence of branched transport energies]\label{thm:equivalenceBrTpt} The minimisation problems associated with Definition \ref{def:branched_transport_energy} are equivalent in the sense that \begin{displaymath} \min_\chi\brTptEn^{\alpha,\mu_+,\mu_-}[\chi]=\min_\flux\brTptEn^{\alpha,\mu_+,\mu_-}[\flux]=\min_\Sigma\brTptEn^{\alpha,\mu_+,\mu_-}[\Sigma]\,. \end{displaymath} The optima can be identified with each other via \begin{equation*} \Sigma=\{x\in\R^n\,:\,m_\chi(x)>0\}\,,\quad \flux=\vel\hdone\restr\Sigma\text{ for the density }\vel=m_\chi\hat e\,, \end{equation*} where $\hat e$ is the tangent unit vector to $\Sigma$. Moreover, \begin{displaymath} \int_{\R^n}\varphi\cdot\de\flux=\int_\reSpace\int_I\varphi(\chi_p(t))\cdot\dot\chi_p(t)\,\de t\,\de \reMeasure(p)\text{ for all }\varphi\in\cont_c(\R^n;\R^n)\,.
\end{displaymath} \end{theorem} \begin{proof} The equivalence of the pattern-based formulation $\min_\chi\brTptEn^{\alpha,\mu_+,\mu_-}[\chi]$ to Xia's formulation $\min_\flux\brTptEn^{\alpha,\mu_+,\mu_-}[\flux]$ has been proved by Bernot, Caselles, and Morel (\cite[Sec.\,6]{Bernot-Caselles-Morel-Structure-Branched} or \cite[Chap.\,9]{BeCaMo09}). Furthermore, for an optimal $\chi$, the set \begin{displaymath} \Sigma = \{x\in\R^n\,:\,m_\chi(x)>0\}\subset\bigcup_{p\in\reSpace}\chi_p(I) \end{displaymath} is rectifiable \cite[Lem.\,6.3]{Bernot-Caselles-Morel-Traffic-Plans}, and thus has a tangent unit vector $\hat e(x)$ at $\hdone$-a.\,e.\ point $x$. Defining a multiplicity via $\tilde\vel=m_\chi$ (see Definition\,\ref{def:solidarity_classes}) we obtain a flux $\flux=\tilde\vel\hat e\hdone\restr\Sigma$ as in Remark\,\ref{rem:flux_equivalent}, and the proof of \cite[Prop.\,9.8]{BeCaMo09} implies that this flux is optimal. The equality $\min_\flux\brTptEn^{\alpha,\mu_+,\mu_-}[\flux]=\min_\Sigma\brTptEn^{\alpha,\mu_+,\mu_-}[\Sigma]$ follows by choosing $\Sigma$ as the rectifiable set from Remark\,\ref{rem:flux_equivalent} corresponding to the optimal $\flux$. Finally, using the relation between the optimal $\flux$ and $\chi$, for $\varphi\in\cont_c(\R^n;\R^n)$ we have \begin{multline*} \int_{\R^n}\varphi\cdot\de\flux =\int_{\bigcup_{p\in\reSpace}\chi_p(I)}\varphi(x)\cdot\vel(x)\,\de\hdone(x)\\ = \int_{\R^n} m_\chi(x)\varphi(x)\cdot\hat e(x)\,\de\hdone(x) = \int_\reSpace\int_I\varphi(\chi_p(t))\cdot\dot\chi_p(t)\,\de t\,\de \reMeasure(p). \end{multline*} This formula follows by noting that if two fibres $\chi_p$ and $\chi_q$ coincide in an interval, then their tangents coincide $\hdone$-a.\,e., too. \end{proof} \subsection{Regularity properties} Thanks to the proof of equivalence, one can examine regularity properties of minimisers in whichever formulation is most convenient. The following notions are based on patterns. \begin{definition}[Loop-free paths and patterns] Let $\theta\in\Lip(I)$ and let $\chi$ be an irrigation pattern. Following \cite[Def.\,4.5]{Maddalena-Solimini-Synchronic}, we say that \emph{$\theta$ has a loop} if there exist $t_1<t_2<t_3$ such that \begin{equation*} \theta(t_1) = \theta(t_3) = x\,,\quad\theta(t_2) \neq x; \end{equation*} else we say that $\theta$ is \emph{loop-free}. $\chi$ is said to be \emph{loop-free} if $\chi_p$ is loop-free for almost all $p\in\reSpace$. \end{definition} \begin{definition}[Single path property]\label{def:sigle_path_property} Let $\chi$ be a loop-free irrigation pattern and let \begin{displaymath} \reSpace_{\overrightarrow{xy}}^\chi = \{p \in \reSpace \ : \ \chi_p^{-1}(x) < \chi_p^{-1}(y)\}\,. \end{displaymath} Following \cite[Def.\,3.3]{Bernot-Caselles-Morel-Structure-Branched} and \cite[Def.\,7.3]{BeCaMo09}, $\chi$ has the \emph{single path property} if for every $x,y$ with $\reMeasure(\reSpace_{\overrightarrow{xy}}^\chi) > 0$, the sets $\chi(p,[\chi_p^{-1}(x),\chi_p^{-1}(y)])$ coincide for almost all $p \in \reSpace_{\overrightarrow{xy}}^\chi$. Note that under the single path property, almost all trajectories from $x$ to $y$ coincide, but they need not coincide as functions of time (since the time variable can be reparameterised). \end{definition} \begin{remark} Optimal patterns are loop-free and enjoy the single path property (see \cite[Sec.\,3]{Bernot-Caselles-Morel-Structure-Branched}, \cite[Chap.\,4]{BeCaMo09} or \cite[Thm.\,4.1]{Maddalena-Solimini-Synchronic}).
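Heuristically (this is only an informal sketch of the mechanism), the single path property reflects the strict subadditivity of the branched transport cost: if two fibre bundles with masses $w_1,w_2>0$ travelled along distinct routes between the same two points, then merging them onto the shorter route of length $\ell$ would replace a cost of at least $\ell(w_1^\alpha+w_2^\alpha)$ by $\ell(w_1+w_2)^\alpha$, which is strictly smaller since $(w_1+w_2)^\alpha<w_1^\alpha+w_2^\alpha$ for $\alpha\in(0,1)$.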
\end{remark} \section{Urban planning formulations}\label{sec:urban_planning} Here we will employ the same notions as in the previous section to provide the Eulerian or flux-based and the Lagrangian or pattern-based formulations of urban transport. These will then be proved equivalent to the original definition, as given e.\,g.\ in \cite{BuPrSoSt09}. \subsection{Flux-based formulation} Let $G=(V(G),E(G),w)$ be a discrete mass flux between discrete measures $\mu_+,\mu_-\in\fbm(\R^n)$. Let $\Sigma$ be a subgraph of $G$; $\Sigma$ is not required to be connected. Given parameters $\varepsilon > 0$, $a>1$, the \emph{cost function} $\urbPlXia^{\varepsilon,a}$ is defined as \begin{displaymath} \urbPlXia^{\varepsilon,a}(G,\Sigma) = \sum_{e \in E(G)\setminus E(\Sigma)} a w(e)l(e) + \sum_{e \in E(\Sigma)} (w(e)+\varepsilon)l(e)\,, \end{displaymath} where $l(e)$ is the length of edge $e$. $\urbPlXia^{\varepsilon,a}(G,\Sigma)$ is the cost for employees to travel from an initial distribution $\mu_+$ of homes to a distribution $\mu_-$ of workplaces via the network $G$ using public transport on $\Sigma$. We wish to minimise $\urbPlXia^{\varepsilon,a}(G,\Sigma)$ among admissible pairs $(G,\Sigma)$. For a pair to be optimal one must have \begin{itemize} \item $a w(e) \leq w(e)+\varepsilon$ if $e \in E(G) \setminus E(\Sigma)$, since otherwise the pair $(G,\Sigma\cup\{e\})$ has a lower cost, and \item $a w(e) \geq w(e)+\varepsilon$ if $e \in E(\Sigma)$, since else $(G,\Sigma\setminus\{e\})$ has a lower cost. \end{itemize} As a result, for an optimal pair $(G,\Sigma)$ the cost of an edge $e\in E(G)$ is given by $\min(a w(e), w(e)+\varepsilon)$, so that the problem can be restated in terms of the mass flux variable $G$ alone. Note that the two cost regimes balance, $aw = w+\varepsilon$, exactly at the threshold mass $w = \tfrac{\varepsilon}{a-1}$, which will reappear in the identification \eqref{eqn:OptSigmaIdentif} of the optimal network. \begin{definition}[Cost function, flux formulation] Let $G=(V(G),E(G),w)$ be a discrete mass flux between discrete measures $\mu_+,\mu_-\in\fbm(\R^n)$. Given parameters $\varepsilon > 0$, $a>1$, we define the transport cost per transport length as $c^{\varepsilon,a}(w) = \min(aw,w+\varepsilon)$ (cf.\ Figure\,\ref{fig:transportCost}). The \emph{cost function} $\urbPlXia^{\varepsilon,a}$ associated with a mass flux $G$ is defined as \begin{displaymath} \urbPlXia^{\varepsilon,a}(G) = \sum_{e \in E(G)}c^{\varepsilon,a}(w(e))\,l(e) = \sum_{e \in E(G)}\min(aw(e),w(e)+\varepsilon) l(e)\,, \end{displaymath} where $l(e)$ is the length of edge $e$. If $\flux\in\rca(\R^n)$ is a general mass flux between general measures $\mu_+,\mu_-\in\fbm(\R^n)$, the \emph{cost function} is defined as \begin{displaymath} \urbPlXia^{\varepsilon,a}(\flux) = \inf\{\liminf_{k\to\infty}\urbPlXia^{\varepsilon,a}(G_k) \ : \ (\mu_+^k,\mu_-^k,\flux_{G_k}) \weakstarto (\mu_+,\mu_-,\flux)\}. \end{displaymath} \end{definition} \begin{problem}[Urban planning problem, flux formulation] Given $\mu_+,\mu_- \in \fbm(\R^n)$, the \emph{urban planning problem} is \begin{equation*} \min\{\urbPlXia^{\varepsilon,a}(\flux) \ : \ \flux \text{ mass flux between } \mu_+\text{ and }\mu_-\}\,. \end{equation*} \end{problem} \begin{remark}[Existence of minimisers] The existence of mass fluxes with finite cost follows from the existence of irrigation patterns with finite cost (Remark\,\ref{rem:existenceUrbPlFiniteCostPattern}) and Proposition\,\ref{prop:constructFluxFromPattern} below. Furthermore, $\urbPlXia^{\varepsilon,a}$ is \mbox{weakly-$*$} lower semicontinuous by definition, and it is bounded below by $\|\cdot\|_\rca$ (since it is the relaxation of a functional, defined only on discrete mass fluxes, which satisfies the same property).
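To spell out the latter bound at the discrete level (a short verification using only the definitions above): since $a>1$ and $\varepsilon>0$, we have $c^{\varepsilon,a}(w)\geq w$ for all $w\geq0$, so that for every discrete mass flux $G$ \begin{displaymath} \urbPlXia^{\varepsilon,a}(G) = \sum_{e \in E(G)}c^{\varepsilon,a}(w(e))\,l(e) \geq \sum_{e \in E(G)}w(e)\,l(e) \geq \|\flux_G\|_\rca\,; \end{displaymath} as $\|\cdot\|_\rca$ is \mbox{weakly-$*$} lower semicontinuous, this bound indeed carries over to the relaxed functional.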
Thus, graphs with uniformly bounded energy are \mbox{weakly-$*$} precompact, and existence of minimisers follows via the direct method of the calculus of variations. \end{remark} \begin{remark} Note that, just like $c^\alpha$ for branched transport, the function $c^{\varepsilon,a}$ is subadditive (cf.\ Figure\,\ref{fig:transportCost}), since a concave function whose graph passes through the origin is subadditive. This leads to an economy of scales and thus to branched structures. However, unlike $c^\alpha$ it is not strictly subadditive, so there is a slightly weaker preference for branching structures. In particular, the minimisers need not be finite graphs away from the support of the initial and final measure, and mass fluxes can locally be absolutely continuous with respect to Lebesgue measure $\lebesgue^n$ (see Figure\,\ref{fig:urban_planning}). \end{remark} \begin{figure} \begin{center} \setlength{\unitlength}{.3\linewidth} \begin{picture}(1.0,0.85) \put(0.09,0.05){\includegraphics[width=\unitlength]{urban_planning}} \put(0.5,-0.02){$\mu_+$} \put(0.5,0.88){$\mu_-$} \end{picture} \caption{Sketch of an optimal urban planning mass flux which is absolutely continuous with respect to Lebesgue measure in some regions. The grey shade indicates the local flux density.\label{fig:urban_planning}} \end{center} \end{figure} \begin{remark} For finite graphs, the corresponding optimal network subgraph $\Sigma$ is the graph whose edges are \begin{displaymath} E(\Sigma) = \{e \in E(G) \ : \ a w(e) > w(e) + \varepsilon\}\,. \end{displaymath} \end{remark} \subsection{Pattern-based formulation} \begin{definition}[Cost function, pattern formulation]\label{def:urbPlPatternForm} Let $(\reSpace,\Bcal(\reSpace),\reMeasure)$ be the reference space and let $\chi : \reSpace \times [0,1] \to \R^n$ be an irrigation pattern. For $\varepsilon > 0$ and $a>1$, consider the density \begin{equation*} \repsilonachi(x)= c^{\varepsilon,a}(m_\chi(x))/m_\chi(x) = \begin{cases} \min\left(1+\tfrac\varepsilon{m_\chi(x)},a\right) & \text{ if } m_\chi(x) > 0,\\ a & \text{ if } m_\chi(x) = 0. \end{cases} \end{equation*} The \emph{cost functional} $\urbPlMMS^{\varepsilon,a}$ is \begin{equation*} \urbPlMMS^{\varepsilon,a}(\chi)=\int_{\reSpace\times I}\repsilonachi(\chi_p(t))|\dot\chi_p(t)|\,\de \reMeasure(p)\,\de t\,. \end{equation*} \end{definition} \begin{problem}[Urban planning problem, pattern formulation] Given $\mu_+,\mu_- \in \fbm(\R^n)$, the \emph{urban planning problem} is \begin{equation*} \min\{\urbPlMMS^{\varepsilon,a}(\chi) \ : \ \mu_+^\chi = \mu_+\text{ and }\mu_-^\chi = \mu_-\}\,. \end{equation*} \end{problem} \begin{remark}[Existence of a finite cost pattern]\label{rem:existenceUrbPlFiniteCostPattern} An irrigation pattern with finite cost $\urbPlMMS^{\varepsilon,a}$ for a given pair $\mu_+,\mu_-$ of finite Borel measures with the same mass and bounded support can readily be constructed based on the Monge--Kantorovich problem. Indeed, it is well-known that the 1-Wasserstein distance between $\mu_+$ and $\mu_-$ is finite (see e.\,g.\ \cite[Chap.\,1]{Villani-Topics-Optimal-Transport}), \begin{equation*} \Wd{1}(\mu_+,\mu_-) = \inf_{\mu\in\Pi(\mu_+,\mu_-)} \int_{\R^n\times\R^n} |x - y|\, \de\mu(x,y) < \infty\,, \end{equation*} and that the infimum is achieved by a minimising measure $\mu\in\Pi(\mu_+,\mu_-)$, where $\Pi(\mu_+,\mu_-)$ denotes the set of transport plans as in Section\,\ref{sec:introUrbPlan}.
By \cite[Prop.\,12 in Sec.\,5 of Chap.\,15]{Royden-Real-Analysis} there exist a measure $\nu$ on $[0,1]$ and an isomorphism $\varphi:([0,1],\Bcal([0,1]),\nu)\to(\R^n\times\R^n,\Bcal(\R^n\times\R^n),\mu)$ of measure spaces. Define $\psi:[0,m]\to[0,1]$ as the pseudo-inverse of the cumulative distribution function of $\nu$, where $m = \mu(\R^n\times\R^n)$ denotes the common total mass of $\mu_\pm$. It is clear that the pushforward of the Lebesgue measure under $\varphi\circ\psi$ is $\mu$. Now take the reference space $(\reSpace,\Bcal(\reSpace),\reMeasure)=([0,m],\Bcal([0,m]),\lebesgue^1\restr [0,m])$ and define the irrigation pattern \begin{equation*} \chi(p,t) = C_t(\varphi(\psi(p)))\qquad\text{with }C_t:\R^n\times\R^n\to\R^n\,,\;C_t(x,y)=ty+(1-t)x\,. \end{equation*} Since $C_0$ and $C_1$ are the projections onto the first and second argument, respectively, it is a straightforward exercise to verify that $\mu_+^\chi = \mu_+$, $\mu_-^\chi = \mu_-$. Moreover, we have \begin{multline*} \urbPlMMS^{\varepsilon,a}(\chi) = \int_{\reSpace \times I} \repsilonachi(\chi(p,t))|\dot\chi(p,t)|\,\de\reMeasure(p)\,\de t \leq \int_{\reSpace \times I} a |\dot\chi(p,t)|\,\de\reMeasure(p)\,\de t\\ = \int_\reSpace a|C_1(\varphi(\psi(p)))-C_0(\varphi(\psi(p)))|\,\de\reMeasure(p) = \int_{\R^n \times \R^n} a|x-y|\,\de\mu(x,y) = a\Wdone(\mu_+,\mu_-)\,. \end{multline*} \end{remark} \begin{remark}[Existence of minimisers] The existence of patterns with minimal urban planning cost will follow from Remark\,\ref{rem:existenceOptPatternUrbPl} via the equivalence of different energy functionals, one of which admits a minimiser. \end{remark} Before considering the equivalence between the different formulations, let us state a few properties of the cost functional for later use. \begin{proposition}[Constant speed reparameterisation of patterns]\label{thm:constSpeedPatternsUrbPl} Irrigation patterns of finite cost can be reparameterised such that $\chi_p\in\Lip(I)$ and $|\dot\chi_p|$ is constant for almost all $p\in\reSpace$ without changing the cost $\urbPlMMS^{\varepsilon,a}$. \end{proposition} \begin{proof} The proof is analogous to the proofs of Propositions\,\ref{prop:arc-length_reparameterisation_for_patterns} to \ref{prop:reparameterised_patterns_have_the_same_cost}, merely replacing the estimate $1 \leq \reMeasure(\reSpace)^{1-\alpha}s_\alpha^\chi(\chi(p,t))$ by $1 \leq \repsilonachi(\chi(p,t))$. \end{proof} The following closely follows \cite[Lemmas\,4.4 and 4.5]{Maddalena-Solimini-Transport-Distances}. \begin{definition}[Pointwise convergence] Let $\chi_n$ be a sequence of irrigation patterns. We say that $\chi_n$ \emph{converges pointwise} to $\chi$ if for almost all $p \in \reSpace$ the curves $\chi_n(p,\cdot)$ converge uniformly to $\chi(p,\cdot)$. \end{definition} \begin{proposition}[$m_\chi$ is u.s.c.\ and $r_{\varepsilon,a}^{\chi}$ is l.s.c.]\label{prop:mass_is_upper_semicontinuous} Let $\chi_n$ be a sequence of irrigation patterns such that $\chi_n \to \chi$ pointwise. Let $t_n \in I$ be such that $t_n \to t$. Then, for almost all $p \in \reSpace$, \begin{align} &m_\chi(\chi(p,t)) \geq \limsup_{n \to \infty} m_{\chi_n}(\chi_n(p,t_n))\,,\label{eq:mass_is_upper_semicontinuous}\\ &\repsilonachi(\chi(p,t)) \leq \liminf_{n \to \infty} r_{\varepsilon,a}^{\chi_n}(\chi_n(p,t_n))\,.\label{eq:r_is_lower_semicontinuous} \end{align} \end{proposition} \begin{proof} Fix $p \in \reSpace$ such that $\chi_p \in \AC(I)$ (we only discard a null set of fibres) and define the sets $A$ and $A_n$ as \begin{displaymath} A = \bigcap_n A_n\,, \quad A_n = \bigcup_{k \geq n} [\chi_k(p,t_k)]_{\chi_k}\,.
\end{displaymath} Recall that $A = \limsup_n A_n$, $\reMeasure([\chi_k(p,t_k)]_{\chi_k}) = m_{\chi_k}(\chi_k(p,t_k))$ and \begin{displaymath} \reMeasure(A) = \lim_n\reMeasure(A_n) \geq \limsup_n m_{\chi_n}(\chi_n(p,t_n))\,. \end{displaymath} We want to show that $\reMeasure(A \setminus [\chi(p,t)]_{\chi}) = 0$, that is, $A \subseteq [\chi(p,t)]_{\chi}$ up to a negligible set of fibres so that $m_\chi(\chi(p,t)) \geq \reMeasure(A) \geq \limsup_n m_{\chi_n}(\chi_n(p,t_n))$, proving inequality \eqref{eq:mass_is_upper_semicontinuous}. Let then $q \in A$ be such that $\chi_n(q,\cdot) \to \chi(q,\cdot)$ uniformly (we only discard a null set of fibres of $A$). Recall that $q \in A$ if and only if there exist an increasing sequence of integers $n_k$ and a sequence $s_{n_k} \in I$ such that $\chi_{n_k}(q,s_{n_k}) = \chi_{n_k}(p,t_{n_k})$. Suppose now, seeking a contradiction, that $q \notin [\chi(p,t)]_{\chi}$. Since $\chi(q,I)$ is compact and does not contain $\chi(p,t)$, we have $d=\dist(\chi(p,t),\chi(q,I)) > 0$. Since $\chi_n(q,\cdot) \to \chi(q,\cdot)$ uniformly, for large $k$ we also have $\dist(\chi(p,t),\chi_{n_k}(q,I)) > \frac d2$, contradicting $\chi_{n_k}(q,s_{n_k}) = \chi_{n_k}(p,t_{n_k})\to\chi(p,t)$. Inequality \eqref{eq:r_is_lower_semicontinuous} follows immediately from inequality \eqref{eq:mass_is_upper_semicontinuous} and the definition of $\repsilonachi$. \end{proof} \begin{proposition}[Lower semicontinuity of $\urbPlMMS^{\varepsilon,a}$]\label{prop:urban_planning_energy_is_lower_semicontinuous} The functional $\urbPlMMS^{\varepsilon,a}$ is lower semicontinuous with respect to pointwise convergence of patterns. \end{proposition} \begin{proof} Let $\chi_n$ be a sequence of irrigation patterns converging pointwise to the pattern $\chi$. For a given integer $n$ and a given fibre $p$, define $\mu_n = |\dot\chi_n(p,\cdot)|\,\de t$ and $\mu = |\dot\chi(p,\cdot)|\de t$. As a consequence of the uniform convergence of $\chi_n(p,\cdot)$ we have $\dot\chi_n(p,\cdot)\to\dot\chi(p,\cdot)$ in the distributional sense and thus \begin{displaymath} \mu(A)\leq\liminf_{n\to\infty}\mu_n(A) \end{displaymath} for any open $A\subset I$. Thanks to \cite[Def.\,C.1 and Thm.\,C.1]{Maddalena-Solimini-Synchronic} and Proposition\,\ref{prop:mass_is_upper_semicontinuous} we thus have \begin{displaymath} \int_I \repsilonachi(\chi(p,t)) \,\de\mu(t) \leq \liminf_{n\to\infty} \int_I r_{\varepsilon,a}^{\chi_n}(\chi_n(p,t)) \,\de\mu_n(t). \end{displaymath} Integrating with respect to $\reMeasure$ and applying Fatou's Lemma completes the proof.
\end{proof} \subsection{Equivalence between the formulations} \begin{definition}[Urban planning energies]\label{def:urban_plannning_energy} Given $\mu_+,\mu_-\in\fbm(\R^n)$ of equal mass, for an irrigation pattern $\chi$, a mass flux $\flux$, and a set $\Sigma\subset\R^n$ we define \begin{equation*} \urbPlEn^{\varepsilon,a}[\chi]=\urbPlMMS^{\varepsilon,a}(\chi)\,,\quad \urbPlEn^{\varepsilon,a}[\flux]=\urbPlXia^{\varepsilon,a}(\flux)\,, \end{equation*} as well as \begin{align*} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] &=\begin{cases} \urbPlEn^{\varepsilon,a}[\chi]&\text{if $\mu_+^\chi = \mu_+$ and $\mu_-^\chi = \mu_-$},\\ \infty&\text{else,} \end{cases}\\ \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux] &=\begin{cases} \urbPlEn^{\varepsilon,a}[\flux]&\text{if }\dv\flux=\mu_+-\mu_-,\\ \infty&\text{else,} \end{cases}\\ \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma] &=\begin{cases} \Wd{d_\Sigma}(\mu_+,\mu_-)+\varepsilon\hdone(\Sigma)&\text{if }\Sigma\text{ is rectifiable,}\\ \infty&\text{else,} \end{cases} \end{align*} with $\Wd{d_\Sigma}$ defined in Section\,\ref{sec:introUrbPlan}. \end{definition} \begin{remark} Recall that $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]$ possesses a minimiser, thanks to \cite[Lem.\,4.10, Lem.\,4.11, Prop.\,4.15, and Thm.\,4.26]{BuPrSoSt09}. \end{remark} The next theorem is the main result of this paper. Its proof will be the object of the next section (Section \ref{sec:proof_of_main_theorem}). \begin{theorem}[Equivalence of urban planning energies]\label{thm:urban_plannning_energy_equivalences} The minimisation problems in Definition \ref{def:urban_plannning_energy} are equivalent in the sense that, for $\mu_+,\mu_-$ of equal mass and with bounded support, they possess minimisers and satisfy \begin{equation*} \min_\chi\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]=\min_\flux\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]=\min_\Sigma\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]\,. \end{equation*} Similarly to branched transport, there are optima $\chi$, $\flux$, and $\Sigma$ that can be identified with each other via \begin{gather} \int_{\R^n}\varphi\cdot\de\flux=\int_\reSpace\int_I\varphi(\chi_p(t))\cdot\dot\chi_p(t)\,\de t\,\de \reMeasure(p)\text{ for all }\varphi\in\cont_c(\R^n;\R^n)\,,\label{eqn:OptFluxIdentif}\\ \Sigma=\{x\in\R^n\,:\,m_\chi(x)>\tfrac\varepsilon{a-1}\}\,.\label{eqn:OptSigmaIdentif} \end{gather} \end{theorem} \subsection{Regularity properties} A consequence of the above equivalence between the different models is that regularity issues can now be considered in whichever formulation is most convenient. As an example, we state the following single path property of minimisers to the urban planning problem. Its proof is given at the end of Section\,\ref{sec:EquPatternSet}. \begin{proposition}[Single path property for the urban planning problem]\label{prop:single_path_property_for_the_urban_planning_problem} There exists an optimal irrigation pattern $\chi$ for $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}$ which has the single path property (see Definition \ref{def:sigle_path_property}).
\end{proposition} \begin{figure} \centering \setlength{\unitlength}{10ex} \begin{picture}(3,1.1) \put(-.1,0){\vector(1,0){.4}} \put(0,-.1){\vector(0,1){.4}} \put(0,1){\circle*{.15}} \put(1,0){\circle{.15}} \put(2,0){\circle*{.15}} \put(3,1){\circle{.15}} \put(.6,-.1){$m_1$} \put(2.1,-.1){$m_1$} \put(3,1.1){$m_1+m_2$} \put(0,1.1){$m_1+m_2$} \put(0,1){\vector(1,-1){.95}} \put(2,0){\vector(1,1){.95}} \put(0,1){\vector(1,0){2.925}} \put(1.075,0){\vector(1,0){.85}} \put(.55,.45){$m_1\!+\!m$} \put(2.55,.45){$m_1\!+\!m$} \put(1.4,.05){$m$} \put(1.2,.85){$m_2\!-\!m$} \end{picture} \caption{Optimal particle motions for the irrigation problem from Remark\,\ref{rem:noSinglePath} (filled circles denote sources, open ones sinks, each arrow is labelled with its mass flux). Any $0\leq m\leq m_2$ yields an optimal mass flux or irrigation pattern.} \label{fig:noSinglePath} \end{figure} \begin{remark}\label{rem:noSinglePath} Unlike for branched transport, there exist optimal irrigation patterns not satisfying the single path property, for instance for \begin{equation*} \mu_+=(m_1+m_2)\delta_{(0,s)}+m_1\delta_{(2,0)}\,,\quad \mu_-=m_1\delta_{(1,0)}+(m_1+m_2)\delta_{(3,s)} \end{equation*} with $m_1\geq\frac\varepsilon{a-1}\geq m_2$ and $s=\sqrt{a^2-1}$ (see Figure\,\ref{fig:noSinglePath}). \end{remark} \section{Proof of Theorem \ref{thm:urban_plannning_energy_equivalences} and of Proposition \ref{prop:single_path_property_for_the_urban_planning_problem}}\label{sec:proof_of_main_theorem} In this section Theorem \ref{thm:urban_plannning_energy_equivalences} is proved in four steps, corresponding to Propositions\,\ref{prop:constructPatternFromSet}, \ref{prop:constructSetFromPattern}, \ref{prop:constructFluxFromPattern}, and \ref{prop:constructPatternFromFlux}. The proof of Proposition \ref{prop:single_path_property_for_the_urban_planning_problem} follows as a corollary of those of Propositions \ref{prop:constructPatternFromSet} and \ref{prop:constructSetFromPattern}. We first introduce some necessary notions concerning measures on the set of paths and cycles in discrete graphs. \subsection{Transport path measures and other preliminary definitions and results} \begin{definition}[Transport path measures]\label{def:transport_path_measures} Let $\Theta=\Lip(I)$ be the set of Lipschitz curves $I \to \R^n$ with the metric \begin{equation*} d_\Theta(\theta_1,\theta_2)=\inf\left\{\max_{t\in I}|\theta_1(t)-\theta_2(\varphi(t))|\ :\ \varphi:I\to I\text{ increasing and bijective}\right\}. \end{equation*} Following \cite[Def.\,2.5]{BuPrSoSt09}, a \emph{transport path measure} is a measure $\eta$ on $\Theta$ (endowed with the Borel $\sigma$-algebra). If $\mu_+,\mu_- \in \fbm(\R^n)$, we say that the transport path measure $\eta$ moves $\mu_+$ onto $\mu_-$ if \begin{displaymath} \pushforward{p_0}{\eta} = \mu_+,\quad \pushforward{p_1}{\eta} = \mu_-, \end{displaymath} where, given $t \in I$, $p_t : \Theta \to \R^n$ is defined by $p_t(\theta) = \theta(t)$. We denote by $\TPM(\mu_+,\mu_-)$ the set of transport path measures moving $\mu_+$ onto $\mu_-$. \end{definition} \begin{remark}[Compact sets in $\Theta$] The following compactness result can be obtained via the Ascoli--Arzel\`a Theorem (see \cite[p.7]{BuPrSoSt09}). Let $\theta_1,\theta_2,\ldots$ be a sequence in $\Theta$. Suppose that the $\theta_n$ have uniformly bounded lengths and $\theta_n(0) \in \Omega$ for a compact subset $\Omega$ of $\R^n$. Then, the sequence $\theta_n$ is relatively compact in $\Theta$.
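Indeed, since the metric $d_\Theta$ is invariant under reparameterisation, each $\theta_n$ may be reparameterised to constant speed; the resulting curves are then uniformly bounded and uniformly Lipschitz, so that the Ascoli--Arzel\`a Theorem applies. An argument of exactly this type is carried out in the proof of Lemma\,\ref{lem:compactness_lemma_for_transport_path_measures} below.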
\end{remark} \begin{definition}[Parameterisation of transport path measures] Given a reference space $(\reSpace,\Bcal(\reSpace),\reMeasure)$, a \emph{parameterisation} of a transport path measure $\eta$ is a function $\chi : \reSpace \to \Theta$ such that $\pushforward{\chi}{\reMeasure} = \eta$. With a slight abuse of notation, we write $\chi(p,t)$ instead of $\chi(p)(t)$ (the position of particle $p$ at time $t$). Notice that the map $\chi : \reSpace \times I \to \R^n$ satisfies \begin{itemize} \item $\chi(p,\cdot) \in \Theta$ for a.\,e.\ $p \in \reSpace$, \item $\chi(\cdot,t)$ is measurable for all $t \in I$. \end{itemize} \end{definition} \begin{remark} Note that a parameterisation of a transport path measure always exists thanks to Skorokhod's Theorem (see \cite[App.\,A, Thm.\,A.3]{BeCaMo09} or \cite[Thm.\,11.7.2]{Dudley}). \end{remark} \begin{definition}[Cost of a transport path measure] Let $a > 1$ and let $\Sigma$ be a Borel set with $\hdone(\Sigma) < \infty$. Following \cite[Chap.\,2, eq.\,\mbox{(2.2)}]{BuPrSoSt09}, we define the \emph{cost functional} $\urbPlTPM^{a}$ as \begin{displaymath} \urbPlTPM^{a}(\eta) = \int_\Theta a\hdone(\theta(I)\setminus\Sigma) + \hdone(\theta(I)\cap\Sigma) \,\de \eta(\theta)\,, \end{displaymath} where the dependence on $\Sigma$ is suppressed in the notation. \end{definition} The following is a slight refinement of \cite[Cor.\,2.12 and Prop.\,2.14]{BuPrSoSt09}; we thus only repeat the relevant parts of the proof. \begin{proposition}[Optimal transport path measures]\label{prop:OptTPM} Given a bounded Borel set $\Sigma\subset\R^n$ with $\hdone(\Sigma)<\infty$ and $\mu_+,\mu_-\in\fbm(\R^n)$ with equal mass and compact support, there exists a minimiser $\eta\in \TPM(\mu_+,\mu_-)$ of $\urbPlTPM^{a}$ such that $\eta$-a.\,e.\ path $\theta\in\Theta$ is loop-free. Furthermore, \begin{displaymath} \urbPlTPM^{a}(\eta) = \Wd{d_\Sigma}(\mu_+,\mu_-)\,. \end{displaymath} \end{proposition} \begin{proof} Using the notation of \cite[Prop.\,2.14]{BuPrSoSt09}, let us set $\delta_\Sigma(\theta)=a\hdone(\theta(I)\setminus\Sigma) + \hdone(\theta(I)\cap\Sigma)$. By \cite[Prop.\,2.18]{BuPrSoSt09} it is easy to see that $\delta_\Sigma$ equals its relaxation $\bar\delta_\Sigma$, so that \begin{equation*} \urbPlTPM^{a}(\tilde\eta) =\int_\Theta\delta_\Sigma(\theta)\,\de\tilde\eta(\theta) =\int_\Theta\bar\delta_\Sigma(\theta)\,\de\tilde\eta(\theta) =:\overline{C_\Sigma^a}(\tilde\eta) \end{equation*} for all transport path measures $\tilde\eta$. By \cite[Prop.\,2.14]{BuPrSoSt09}, $\overline{C_\Sigma^a}$ and thus also $\urbPlTPM^{a}$ possess a minimiser $\eta\in\TPM(\mu_+,\mu_-)$. Next, employing the notation from Section\,\ref{sec:introUrbPlan}, for a transport plan $\gamma\in\Pi(\mu_+,\mu_-)$ define $I_\Sigma(\gamma)=\int_{\R^n\times\R^n}d_\Sigma(x,y)\,\de\gamma(x,y)$. By \cite[Prop.\,2.14]{BuPrSoSt09}, there exist a (minimising) transport plan $\gamma$ and a (minimising) transport path measure $\eta$ such that \begin{equation*} \Wd{d_\Sigma}(\mu_+,\mu_-) =I_\Sigma(\gamma) =\overline{C_\Sigma^a}(\eta) =\urbPlTPM^{a}(\eta)\,. \end{equation*} Finally, let us show that $\eta$ can be chosen such that $\eta$-a.\,e.\ path is loop-free. To this end, let $\Omega\subset\R^n$ be a closed ball whose interior contains the support of $\mu_-$ and $\mu_+$ as well as $\Sigma$, and define \begin{equation*} \tilde\delta_\Sigma(\theta)=\int_0^1\left(1+(a-1)\setchar{\R^n\setminus\Sigma}(\theta(t))\right)|\dot\theta(t)|\,\de t\,. \end{equation*} Note that $\tilde\delta_\Sigma:\Theta\to[0,\infty)$ is lower semicontinuous.
Thus, by the same proof as for \cite[Cor.\,2.11]{BuPrSoSt09}, for any given $x,y\in\Omega$ a minimiser $\theta_{x,y}$ of $\tilde\delta_\Sigma$ exists in $C_{x,y} = \{\theta\in\Theta\ :\ \theta(0)=x,\theta(1)=y\}$ with $\theta_{x,y}(I)\subset\Omega$. Therefore, by exactly the same proof as for \cite[Cor.\,2.12]{BuPrSoSt09} there is a Borel function $q:\Omega\times\Omega\to\Theta$ such that \begin{equation*} \tilde\delta_\Sigma(q(x,y))=\min_{\theta\in C_{x,y}}\tilde\delta_\Sigma(\theta)\,. \end{equation*} Now assume there are $x,y\in\Omega$ such that $q(x,y)$ has a loop, that is, $q(x,y)(t_1)=q(x,y)(t_3)=z$ and $q(x,y)(t_2)\neq z$ for some $0\leq t_1<t_2<t_3\leq1$. Then \begin{equation*} \tilde\delta_\Sigma(q(x,y))>\tilde\delta_\Sigma(\tilde\theta) \quad\text{for}\quad \tilde\theta(t)=\begin{cases}q(x,y)(t)&\text{if }t\in[0,t_1]\cup[t_3,1],\\z&\text{else,}\end{cases} \end{equation*} which contradicts the optimality of $q(x,y)$. Thus, $q(x,y)$ is loop-free and therefore injective up to reparameterisation, and we have \begin{multline*} \tilde\delta_\Sigma(q(x,y)) =\int_0^1\left(1+(a-1)\setchar{\R^n\setminus\Sigma}(q(x,y)(t))\right)|\dot q(x,y)(t)|\,\de t\\ =\hdone(q(x,y)(I))+(a-1)\hdone(q(x,y)(I)\setminus\Sigma) =\delta_\Sigma(q(x,y))\,. \end{multline*} If we can show $\tilde\delta_\Sigma(q(x,y))=\min_{\theta\in C_{x,y}}\delta_\Sigma(\theta)$, then, letting $\gamma\in\Pi(\mu_+,\mu_-)$ be an optimal transport plan, as in the proof of \cite[Prop.\,2.14]{BuPrSoSt09} it follows that $\eta=\pushforward{q}{\gamma}$ is an optimal transport path measure. We close by proving $\tilde\delta_\Sigma(q(x,y))\leq\min_{\theta\in C_{x,y}}\delta_\Sigma(\theta)$ (the opposite inequality holds trivially). By \cite[Cor.\,2.11]{BuPrSoSt09}, $\delta_\Sigma=\bar\delta_\Sigma$ possesses a minimiser $\hat\theta\in C_{x,y}$. If $\hat\theta$ has a loop, that is, $\hat\theta(t_1)=\hat\theta(t_3)=z$ and $\hat\theta(t_2)\neq z$ for some $0\leq t_1<t_2<t_3\leq1$, and if $\hat\theta(t_2)\notin\hat\theta(I\setminus[t_1,t_3])$, then \begin{equation*} \delta_\Sigma(\hat\theta)>\delta_\Sigma(\tilde\theta) \quad\text{for}\quad \tilde\theta(t)=\begin{cases}\hat\theta(t)&\text{if }t\in[0,t_1]\cup[t_3,1]\\z&\text{else,}\end{cases} \end{equation*} which contradicts the optimality of $\hat\theta$. Thus, $\hat\theta(I)$ must be homeomorphic to $I$ and can be parameterised by an injective $\theta_{x,y}\in C_{x,y}$. Therefore we have \begin{equation*} \min_{\theta\in C_{x,y}}\delta_\Sigma(\theta)=\delta_\Sigma(\hat\theta)=\delta_\Sigma(\theta_{x,y})=\tilde\delta_\Sigma(\theta_{x,y})\geq\tilde\delta_\Sigma(q(x,y)) \end{equation*} as desired. \end{proof} Finally, let us consider the relation between discrete graphs and transport path measures. \begin{definition}[Mass flux cycle] Let $G$ be a discrete mass flux. A \emph{cycle} $C$ is a collection of directed edges $\{e_1,\ldots,e_k\} \subset E(G)$ such that $e_1\cup\ldots\cup e_k$ is homeomorphic to a circle with constant orientation. The \emph{weight} of the cycle is defined as $w_G(C) = \min_{i=1,\ldots,k}w(e_i)$. Let $J_C^G=\{e\in C\ :\ w_G(e)=w_G(C)\}$ be the set of edges with minimal weight in $G$. The \emph{$C$-reduced mass flux} $G_C$ is the discrete mass flux such that the set of its edges is $E(G_C)=E(G)\setminus J_C^G$ with weights $w_{G_C}(e)=w_G(e)-w_G(C)$ for $e\in C$ and $w_{G_C}(e)=w_G(e)$ else. Notice that $\dv G_C = \dv G$ so that initial and final measures are unchanged. The \emph{cycle-reduced mass flux} is the mass flux $G$ reduced by all cycles. 
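As a simple illustration of the reduction: if a cycle $C$ consists of three edges with weights $3$, $2$, and $5$, then $w_G(C)=2$ and $J_C^G$ contains exactly the edge of weight $2$; the reduction removes this edge and decreases the weights of the other two edges of $C$ to $1$ and $3$, while all remaining edge weights of $G$ stay unchanged.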
\end{definition} \begin{remark} The cycle-reduced mass flux is well-defined since a discrete mass flux has at most finitely many cycles which can be reduced one-by-one. In doing so, it is easy to see that the reduction order does not matter. \end{remark} \begin{lemma}[Cost of mass flux cycle]\label{lem:cycleCost} Let $G$ be a discrete mass flux between $\mu_+,\mu_-\in\fbm(\R^n)$ and $\tilde G$ the cycle-reduced mass flux. Then \begin{equation*} \urbPlXia^{\varepsilon,a}(\tilde G)\leq\urbPlXia^{\varepsilon,a}(G)-\|\flux_{\tilde G}-\flux_G\|_\rca\,. \end{equation*} \end{lemma} \begin{proof} Let $C$ be a cycle of $G$. We have \begin{align*} \|\flux_{G_C}-\flux_G\|_\rca &= \sum_{e\in J_C^G} w_G(e)l(e) + \sum_{e\in E(G_C)} (w_G(e)-w_{G_C}(e))l(e)\\ &= \sum_{e\in J_C^G} w_G(C)l(e) + \sum_{e\in E(G_C)\cap C} w_G(C)l(e) = w_G(C)\sum_{e\in C}l(e)\,, \end{align*} where $l(e)$ denotes the length of edge $e$. Likewise, since $c^{\varepsilon,a} (w)-c^{\varepsilon,a}(w-w_0) \geq w_0$ for all $w\geq w_0\geq0$, \begin{align*} \urbPlXia^{\varepsilon,a}(G)&-\urbPlXia^{\varepsilon,a}(G_C)\\ &= \sum_{e\in J_C^G} c^{\varepsilon,a}(w_G(e))l(e) + \sum_{e\in E(G_C)}(c^{\varepsilon,a} (w_G(e))-c^{\varepsilon,a}(w_{G_C}(e)))l(e)\\ &= \sum_{e\in J_C^G} c^{\varepsilon,a}(w_G(C))l(e) + \sum_{e\in E(G_C)\cap C}(c^{\varepsilon,a} (w_G(e))-c^{\varepsilon,a}(w_G(e)-w_G(C)))l(e)\\ &\geq w_G(C)\sum_{e\in C}l(e) = \|\flux_{G_C}-\flux_G\|_\rca\,. \end{align*} The result now follows by repeating this procedure over all cycles and using the additivity of $\|\cdot\|_\rca$ with respect to cycle removal. \end{proof} \begin{remark}\label{rem:fluxDecomp} By \cite[Thm.\,3.5 and Prop.\,3.6]{Ahuja-Magnanti-Orlin} or \cite[Thm.\,1]{Gauthier-Desrosiers-Luebbecke}, a discrete mass flux between $\mu_+,\mu_-\in\fbm(\R^n)$ without cycles can be identified with a transport path measure $\eta$ moving $\mu_+$ to $\mu_-$. \end{remark} \newpage \subsection{Equivalence of the pattern- and set-based formulation}\label{sec:EquPatternSet} The combination of the following two propositions proves the existence of an optimal pattern $\chi$ and \begin{displaymath} \min_\chi\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]=\min_\Sigma\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma] \end{displaymath} as well as relation\,\eqref{eqn:OptSigmaIdentif} for at least one pair of minimisers, as detailed in Remark\,\ref{rem:existenceOptPatternUrbPl}. \begin{proposition}\label{prop:constructPatternFromSet} For any $\Sigma\subset\R^n$ there exists an irrigation pattern $\chi$ such that \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] \leq \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]. \end{displaymath} \end{proposition} \begin{proof} We may assume $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]<\infty$, so that in particular $\Sigma$ is rectifiable with $\hdone(\Sigma)<\infty$; otherwise there is nothing to show. By Proposition\,\ref{prop:OptTPM} there exists an optimal transport path measure $\eta \in \TPM(\mu_+,\mu_-)$ such that $\eta$-a.\,e. path $\theta\in\Theta$ is loop-free and \begin{displaymath} \Wd{d_\Sigma}(\mu_+,\mu_-) = \int_\Theta a \hdone(\theta(I)\setminus \Sigma) + \hdone(\theta(I)\cap \Sigma) \,\de \eta(\theta)\,. \end{displaymath} Let $\chi$ be a parameterisation of the optimal transport path measure $\eta$ so that $\eta = \pushforward{\chi}{\reMeasure}$. First we derive the formula \begin{displaymath} \hdone(\tilde\Sigma) = \int_\reSpace \int_{\{t \in I \ : \ \chi(p,t) \in \tilde\Sigma\}} \frac{1}{m_\chi(\chi(p,t))}|\dot\chi(p,t)| \,\de t \,\de \reMeasure(p)\,, \end{displaymath} where we introduced $\tilde\Sigma \subseteq \Sigma$ as \begin{displaymath} \tilde\Sigma = \{x \in \Sigma \ : \ m_\chi(x) > 0\}\,.
\end{displaymath} Since $\chi$ is loop-free (thanks to Proposition \ref{prop:OptTPM}) and thus $\chi_p$ is injective (up to reparameterisation) for a.\,e.\ $p\in\reSpace$, we have \begin{multline*} \int_{\{t\in I\,:\,\chi(p,t) \in \tilde\Sigma\}} \frac{1}{m_\chi(\chi(p,t))}|\dot\chi(p,t)|\,\de t\\ = \int_{\chi_p(I) \cap \tilde\Sigma} \frac{1}{m_\chi(x)} \,\de\hdone(x) = \int_{\R^n} \frac{1}{m_\chi(x)} \setchar{\chi_p(I) \cap \tilde\Sigma}(x) \,\de \hdone(x)\,. \end{multline*} Using this as well as the identity $\frac{1}{m_\chi(x)} \int_\reSpace \setchar{\chi_p(I) \cap \tilde\Sigma}(x) \,\de \reMeasure(p) = \setchar{\tilde\Sigma}(x)$\footnote{$\frac{1}{m_\chi(x)} \int_\reSpace \setchar{\chi_p(I) \cap \tilde\Sigma}(x) \,\de \reMeasure(p) = \frac{\reMeasure(\{p \ : \ x \in \chi_p(I)\cap\tilde\Sigma\})}{\reMeasure(\{p \ : \ x \in \chi_p(I)\})} = \setchar{\tilde\Sigma}(x)$}, we obtain \begin{multline*} \int_\reSpace \int_{\{t \in I \ : \ \chi(p,t) \in \tilde\Sigma\}} \frac{1}{m_\chi(\chi(p,t))}|\dot\chi(p,t)| \,\de t \,\de \reMeasure(p)\\ = \int_\reSpace \int_{\R^n} \frac{1}{m_\chi(x)} \setchar{\chi_p(I) \cap \tilde\Sigma}(x) \,\de \hdone(x) \,\de \reMeasure(p) \\ = \int_{\R^n} \int_\reSpace \frac{1}{m_\chi(x)} \setchar{\chi_p(I) \cap \tilde\Sigma}(x) \,\de \reMeasure(p) \,\de \hdone(x) = \hdone(\tilde\Sigma) \end{multline*} after application of the Fubini--Tonelli Theorem. Next we notice that \begin{displaymath} \hdone(\chi_p(I)\setminus\Sigma) = \int_{\{\chi_p(t) \notin \Sigma\}} |\dot\chi_p(t)|\,\de t\,, \qquad \hdone(\chi_p(I)\cap\Sigma) = \int_{\{\chi_p(t) \in \Sigma\}} |\dot\chi_p(t)|\,\de t \end{displaymath} for a.\,e.\ $p\in\reSpace$ due to the injectivity of $\chi_p$ so that in summary, the urban planning cost can be estimated as \begin{multline}\label{eqn:urbPlCostEstimate} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma] = \int_\Theta a \hdone(\theta(I)\setminus \Sigma) + \hdone(\theta(I) \cap \Sigma) \,\de \eta(\theta) + \varepsilon\hdone(\Sigma) \\ = \int_\reSpace a \hdone(\chi_p(I)\setminus \Sigma) + \hdone(\chi_p(I) \cap \Sigma) \,\de \reMeasure(p) + \varepsilon\hdone(\Sigma) \\ \geq \int_\reSpace a \hdone(\chi_p(I)\setminus \Sigma) + \hdone(\chi_p(I) \cap \Sigma) \,\de \reMeasure(p) + \varepsilon\hdone(\tilde\Sigma) \\ = \int_\reSpace a\int_{\{\chi_p(t) \notin \Sigma\}} |\dot\chi_p(t)|\,\de t + \int_{\{\chi_p(t)\in \Sigma \setminus \tilde\Sigma\}} |\dot\chi_p(t)|\,\de t + \int_{\{\chi_p(t)\in \tilde\Sigma\}} |\dot\chi_p(t)|\,\de t \\ + \int_{\{\chi_p(t)\in \tilde\Sigma\}} \frac{\varepsilon}{m_\chi(\chi_p(t))}|\dot\chi_p(t)|\,\de t \,\de \reMeasure(p) \\ = \int_\reSpace a\int_{\{\chi_p(t) \notin \tilde\Sigma\}} |\dot\chi_p(t)|\,\de t + \int_{\{\chi_p(t)\in \tilde\Sigma\}} \left(1+\frac{\varepsilon}{m_\chi(\chi_p(t))}\right)|\dot\chi_p(t)|\,\de t \\ + \int_{\{\chi_p(t)\in \Sigma\setminus\tilde\Sigma\}} |\dot\chi_p(t)|\,\de t - a\int_{\{\chi_p(t) \in \Sigma\setminus\tilde\Sigma\}} |\dot\chi_p(t)|\,\de t\,\de \reMeasure(p) \\ \geq \int_{\reSpace \times I} \repsilonachi(\chi_p(t))|\dot\chi_p(t)|\,\de \reMeasure(p) \,\de t + \int_{\reSpace} \int_{\{\chi_p(t)\in \Sigma\setminus\tilde\Sigma\}} (1-a)|\dot\chi_p(t)|\,\de t\,\de \reMeasure(p) \\ \geq \int_{\reSpace \times I} \repsilonachi(\chi_p(t))|\dot\chi_p(t)|\,\de \reMeasure(p) \,\de t = \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]\,.
\end{multline} Here the final inequality is in fact an equality, since $\int_\reSpace \int_{\{\chi_p(t)\in \Sigma\setminus\tilde\Sigma\}} |\dot\chi_p(t)|\,\de t\,\de \reMeasure(p) = \int_{\Sigma\setminus\tilde\Sigma} m_\chi(x)\,\de\hdone(x) = 0$ due to $m_\chi = 0$ on $\Sigma\setminus\tilde\Sigma$. Thus the pattern $\chi$ satisfies \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] \leq \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]\,, \end{displaymath} concluding the proof. \end{proof} \begin{proposition}\label{prop:constructSetFromPattern} For any irrigation pattern $\chi$, the rectifiable set $\Sigma = \{x \in \R^n \ : \ m_\chi(x) > \tfrac{\varepsilon}{a-1}\}$ satisfies \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]\geq\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]\,. \end{displaymath} \end{proposition} \begin{proof} Let $\chi$ be an irrigation pattern, which by Proposition\,\ref{thm:constSpeedPatternsUrbPl} we may assume to be reparameterised without changing the cost such that it can be viewed as a map $\reSpace \to \Theta$, and define $\eta = \pushforward{\chi}{\reMeasure}$. By definition of $\Sigma$, we have \begin{displaymath} \repsilonachi(x) = \begin{cases} 1 + \frac{\varepsilon}{m_\chi(x)} & \text{ if } x \in \Sigma,\\ a & \text{ otherwise} \end{cases} \end{displaymath} and thus \begin{align*} \int_I \repsilonachi(\chi_p(t))|\dot\chi_p(t)|\,\de t = \int_{\{\chi_p(t) \in \Sigma\}} \left(1+\tfrac{\varepsilon}{m_\chi(\chi(p,t))}\right)|\dot\chi(p,t)|\,\de t + \int_{\{\chi_p(t) \notin \Sigma\}} a|\dot\chi(p,t)|\,\de t\,. \end{align*} First, notice that \begin{multline*} \int_\reSpace \left(\int_{\{\chi_p(t) \in \Sigma\}} |\dot\chi(p,t)|\,\de t + \int_{\{\chi_p(t) \notin \Sigma\}} a|\dot\chi(p,t)|\,\de t\right)\,\de \reMeasure(p) \\ \geq \int_\Theta a \hdone(\theta(I)\setminus \Sigma) + \hdone(\theta(I) \cap \Sigma) \,\de \eta(\theta) \geq \Wd{d_\Sigma}(\mu_+,\mu_-)\, \end{multline*} by Proposition\,\ref{prop:OptTPM}. Furthermore, \begin{multline*} \int_\reSpace \int_{\{\chi_p(t) \in \Sigma\}} \frac{|\dot\chi_p(t)|}{m_\chi(\chi_p(t))}\,\de t\,\de \reMeasure(p) \geq \int_\reSpace \int_{\chi_p(I) \cap \Sigma} \frac{1}{m_\chi(x)} \,\de \hdone(x) \,\de \reMeasure(p) \\ = \int_\Sigma \int_{\{p \in \reSpace \ : \ x \in \chi_p(I)\}} \frac{1}{m_\chi(x)} \,\de \reMeasure(p) \,\de \hdone(x) = \int_\Sigma \frac{m_\chi(x)}{m_\chi(x)} \,\de \hdone(x) = \hdone(\Sigma) \end{multline*} so that \begin{multline*} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] =\int_\reSpace\int_I \repsilonachi(\chi_p(t))|\dot\chi_p(t)|\,\de t \,\de\reMeasure(p)\\ \geq \Wd{d_\Sigma}(\mu_+,\mu_-)+\varepsilon\hdone(\Sigma) = \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma]\,, \end{multline*} concluding the proof. \end{proof} \begin{remark}\label{rem:existenceOptPatternUrbPl} Due to the constructive nature of the above proofs, the existence of an optimal $\Sigma$ (guaranteed by \cite[Sec.\,4.4 and Thm.\,4.26]{BuPrSoSt09}) implies the existence of an optimal irrigation pattern $\chi$. Indeed, let $\chi_\Sigma$ be the pattern constructed in Proposition\,\ref{prop:constructPatternFromSet} from an optimal $\Sigma$; then \begin{multline*} \inf_\chi \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] \leq \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi_\Sigma] \leq \min_{\Sigma} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\Sigma] \leq \inf_\chi \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]\,, \end{multline*} where the last step follows from Proposition\,\ref{prop:constructSetFromPattern}. Thus, all inequalities are equalities, and $\chi_\Sigma$ is an optimal pattern. Furthermore, by Proposition\,\ref{prop:constructSetFromPattern}, for any optimal $\chi$ there is an optimal $\Sigma$ satisfying \eqref{eqn:OptSigmaIdentif}.
Note that there are also optimal $\Sigma$ that do not satisfy \eqref{eqn:OptSigmaIdentif} for any optimal irrigation pattern $\chi$; for instance, it is easy to see that $\Sigma=\{x \in \R^n \ : \ m_\chi(x) \geq \tfrac{\varepsilon}{a-1}\}$ is such an example. However, any optimal $\Sigma$ satisfies $$\{x \in \R^n \ : \ m_\chi(x) > \tfrac{\varepsilon}{a-1}\}\subset\Sigma\subset\{x \in \R^n \ : \ m_\chi(x) \geq \tfrac{\varepsilon}{a-1}\}$$ for some optimal $\chi$, since for an optimal $\Sigma$ the left-hand side and right-hand side in \eqref{eqn:urbPlCostEstimate} must coincide and thus all inequalities must be equalities. \end{remark} We end this section by proving Proposition \ref{prop:single_path_property_for_the_urban_planning_problem}. \begin{proof}[Proof of Proposition \ref{prop:single_path_property_for_the_urban_planning_problem}] The optimal pattern $\chi$ constructed from an optimal $\Sigma$ in the proof of Proposition\,\ref{prop:constructPatternFromSet} is loop-free. Furthermore, it may be chosen to have the single path property, since for fixed $\Sigma$, if two paths $\theta_1$ and $\theta_2$ both pass through $x$ and $y$, then $\theta_2$ may be redirected to follow the same path as $\theta_1$ between $x$ and $y$ without changing the energy. \end{proof} \subsection{Equivalence of flux- and pattern-based formulation} Propositions\,\ref{prop:constructFluxFromPattern} and \ref{prop:constructPatternFromFlux} together prove \begin{displaymath} \min_\chi\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]=\min_\flux\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux] \end{displaymath} as well as the relation\,\eqref{eqn:OptFluxIdentif} for a pair of minimisers. \begin{proposition}\label{prop:constructFluxFromPattern} There exists a mass flux $\flux$ with \begin{displaymath} \min_{\chi}\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] \geq \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]\,. \end{displaymath} \end{proposition} \begin{proof} For any $h>0$ we consider a discrete grid $Z^h=h\Z^n$ and define discrete approximations $\mu_+^h,\mu_-^h$ of $\mu_+,\mu_-$ via \begin{displaymath} \mu_\pm^h=\sum_{i\in\Z^n}\mu_\pm(hi+[0,h]^n)\,\delta_{hi}\,, \end{displaymath} where $\delta_{hi}$ is a Dirac mass centred at $hi$. Due to the bounded support of $\mu_\pm$, $\mu_\pm^h$ is a finite weighted sum of Dirac masses. Furthermore, $\mu_\pm^h\stackrel*\rightharpoonup\mu_\pm$ as $h\to0$. Let $\chi$ be an optimal irrigation pattern for $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}$ and $\chi^h$ an optimal irrigation pattern for $\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}$. Further below we will show \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]\geq\limsup_{h\to0}\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]\,. \end{displaymath} Furthermore, we will later also show that the $\chi^h$ can be identified with finite graphs (or the corresponding fluxes) $G^h$ such that \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]=\urbPlXia^{\varepsilon,a}(G^h)\,. \end{displaymath} Now denoting by $\Wdone$ the 1-Wasserstein distance, by Remark\,\ref{rem:existenceUrbPlFiniteCostPattern} we have $\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]\leq a\Wdone(\mu_+^h,\mu_-^h)\leq a\mu_+^h(\R^n)(2h\sqrt{n}+\mathrm{diam}(\spt\mu_+\cup\spt\mu_-))$ so that the finite graphs $G^h$ have uniformly bounded energy. The corresponding fluxes are also uniformly bounded with respect to the total variation norm due to $\|\flux_{G^h}\|_\rca\leq\urbPlXia^{\varepsilon,a}(G^h)$ and thus are precompact with respect to weak-$*$ convergence.
Hence, there is a mass flux $\flux$ such that $\flux_{G^h}\stackrel*\rightharpoonup\flux$ up to a subsequence. The lower semicontinuity of the cost then implies \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]\leq\liminf_{h\to0}\urbPlXia^{\varepsilon,a}(G^h)\,, \end{displaymath} which concludes the proof, once the two postponed claims are established. In order to show $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]\geq\limsup_{h\to0}\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]$, we associate with every point $x\in\R^n$ the Lipschitz path $I\to\R^n$ given by \begin{displaymath} \theta_x^h(t)=x+t(h\lfloor x/h\rfloor-x)\,, \end{displaymath} which connects $x$ with its corresponding point in $Z^h$ (here, $\lfloor c\rfloor=(\lfloor c_1\rfloor,\ldots,\lfloor c_n\rfloor)^T$, where $\lfloor c_i\rfloor=\max\{z\in\Z\ :\ z\leq c_i\}$ is the integer part). Now we can define new irrigation patterns according to \begin{equation*} \chi_+^h(p,\cdot)=\theta_{\chi_p(0)}^h\circ\iota\,,\quad \chi_-^h(p,\cdot)=\theta_{\chi_p(1)}^h\,,\quad \tilde\chi^h(p,t)=\begin{cases}\chi_+^h(p,3t)&\text{if }t\in[0,\frac13],\\\chi(p,3t-1)&\text{if }t\in(\frac13,\frac23],\\\chi_-^h(p,3t-2)&\text{if }t\in(\frac23,1],\end{cases} \end{equation*} where $\iota(t) = 1-t$. It can easily be checked that $\tilde\chi^h$ transports $\mu_+^h$ to $\mu_-^h$ with cost \begin{align*} \urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\tilde\chi^h] &\leq\urbPlMMS^{\varepsilon,a}(\chi_+^h)+\urbPlMMS^{\varepsilon,a}(\chi)+\urbPlMMS^{\varepsilon,a}(\chi_-^h)\\ &=\urbPlMMS^{\varepsilon,a}(\chi_+^h)+\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi]+\urbPlMMS^{\varepsilon,a}(\chi_-^h)\,. \end{align*} The estimate $\urbPlMMS^{\varepsilon,a}(\chi_\pm^h)\leq\mu_\pm(\R^n)ah\sqrt n$ (since all paths in $\chi_\pm^h$ have length no longer than $h\sqrt n$) as well as $\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]\leq\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\tilde\chi^h]$ then yield the desired result. Finally, we need to show that the $\chi^h$ can be identified with finite graphs. For $i,j\in\Z^n$ let $\reSpace_{ij} = \{p \in \reSpace \ : \ \chi_p^h(0)=hi, \chi_p^h(1)=hj\}$. We have (potentially changing $\chi^h$ on a $\reMeasure$-null set, which does not alter its cost) \begin{displaymath} \reSpace=\bigcup_{i,j\in\Z^n}\reSpace_{ij}\,, \end{displaymath} where only finitely many, say $N$, terms of this union are nonempty, since $\mu_+^h$ and $\mu_-^h$ consist of only finitely many weighted Dirac measures. Since $\chi^h$ may be assumed to have the single path property (see Proposition\,\ref{prop:single_path_property_for_the_urban_planning_problem}), $\chi^h:\reSpace\to \Lip(I)$ may be taken constant on each nonempty $\reSpace_{ij}$, i.\,e.\ $\chi^h(\reSpace_{ij})=\chi_{ij}$ for some $\chi_{ij}\in \Lip(I)$. Furthermore, due to the single path property, the intersection of any two fibres $\chi_{ij}(I)$ and $\chi_{kl}(I)$ must be connected and can be assigned an orientation according to the irrigation direction. Now define for any subset $S\subset\Z^n\times\Z^n$ the fibre intersection \begin{displaymath} f_S=\bigcap_{(i,j)\in S}\chi_{ij}(I)\setminus\bigcup_{(i,j)\notin S}\chi_{ij}(I)\,, \end{displaymath} where for simplicity we set $\chi_{ij}(I)=\emptyset$ for $\reSpace_{ij}=\emptyset$. There are at most $2^N$ nonempty such intersections $f_S$, and each of them can have at most $N$ connected components $f_S^1,\ldots,f_S^N$ (again setting some of the $f_S^l$ to the empty set if necessary).
We have \begin{displaymath} \chi(\reSpace,I)=\bigcup_{S\subset\Z^n\times\Z^n}\bigcup_{0\leq l\leq N}f_S^l \end{displaymath} with at most $N2^N$ terms being nonempty. Each of the $f_S^l$ can be assigned an orientation and a weight $w_S^l=\reMeasure\left(\bigcup_{(i,j)\in S}\reSpace_{ij}\right)$, the total mass of particles travelling along $f_S^l$ (which is constant all along $f_S^l$). Furthermore, each $f_S^l$ must be a straight line segment, since otherwise straightening the fibres would reduce the cost of the irrigation pattern. Hence, we can define a finite graph $G^h$ whose oriented edges are the $f_S^l$, whose vertices are the edge end points, and whose edge weights are the $w_S^l$. It is now straightforward to check $\urbPlEn^{\varepsilon,a,\mu_+^h,\mu_-^h}[\chi^h]=\urbPlXia^{\varepsilon,a}(G^h)$ as required. \end{proof} The proof of the opposite inequality requires a few preparatory lemmas. \begin{lemma}[Almost a $\Gamma$-convergence lemma]\label{lem:gamma_convergence_for_discrete_measures} Suppose that \begin{itemize} \item $\mu_+^N$, $\mu_-^N$ are discrete measures such that $\mu_+^N \weakstarto \mu_+$, $\mu_-^N \weakstarto \mu_-$ as $N\to\infty$; \item $\flux_N$ is a minimiser of $\urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}$; \item $\flux_N \weakstarto \flux$. \end{itemize} Then, $\flux$ is a minimiser of $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}$. \end{lemma} \begin{proof} To achieve a contradiction, suppose that $\flux$ is not a minimiser of $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}$, that is, there exists $\flux'$ such that $\dv\flux' = \mu_+-\mu_-$ and $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux'] < \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]$. On the right-hand side, we have (up to a subsequence) \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux] \leq \liminf_{N \to \infty} \urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}[\flux_N] \end{displaymath} due to the weak-$*$ lower semicontinuity of the energy. Thus, given $\delta > 0$, for large $N$ we have \begin{displaymath} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux] -\delta < \urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}[\flux_N]. \end{displaymath} On the left-hand side, by definition of $\urbPlXia^{\varepsilon,a}$, there exists a sequence $\flux_N'$ such that \begin{displaymath} \lim_{N \to \infty} \urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}[\flux_N'] \leq \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux'] + \delta. \end{displaymath} Choosing $\delta$ sufficiently small now yields $\urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}[\flux_N']<\urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}[\flux_N]$ for $N$ large enough, a contradiction to the optimality of $\flux_N$. \end{proof} \begin{definition}[Paths in a graph and their weight] Let $G$ be an oriented graph with edge set $E(G)$. A \emph{path} in $G$ is a sequence $\xi = (e_1,\ldots,e_k)$ of edges $e_1,\ldots,e_k\in E(G)$ such that $e_i^- = e_{i-1}^+$ for $i = 2,\ldots,k$, where $e^-$ and $e^+$ denote the initial and final point of edge $e$. Suppose that $G$ is also weighted and has no cycles. Let us denote by $\Xi(G)$ the set of \emph{maximal paths} on $G$, that is, paths that are not a subsequence of any other path. The \emph{weights} $w(\xi)$ of all paths $\xi\in\Xi(G)$ are defined by the system of equations \begin{displaymath} w(e) = \sum_{\xi \in \Xi(G) \,:\, e \in \xi} w(\xi), \quad e \in E(G), \end{displaymath} whose solvability follows from \cite[Lemma 7.1]{Xia-Optimal-Paths}. Finally, for $\Xi_0 \subseteq \Xi(G)$ we define \begin{displaymath} |\Xi_0| = \sum_{\xi \in \Xi_0} w(\xi).
\end{displaymath} \end{definition} \begin{lemma}[Bound on fibre length]\label{lem:bound_on_fibre_length} Let $\mu_+,\mu_-\in\fbm(\R^n)$ be discrete measures of equal mass with support in a convex set $\Omega\subset\R^n$ and let $G$ be any discrete mass flux between $\mu_+$ and $\mu_-$. Let $\Xi_0$ denote the set of maximal paths in $G$ of length greater than $2a\diam\Omega$, and let $G(\Xi_0)$ denote the graph whose associated vectorial measure is given by \begin{displaymath} \flux_{G(\Xi_0)} = \sum_{\xi \in \Xi_0} \sum_{e \in \xi} w(\xi)\mu_e \end{displaymath} (cf.\ Definition\,\ref{def:graphs_as_vectorial_measures}). Then there exists a discrete mass flux $G'$ between $\mu_+$ and $\mu_-$ such that all its paths have length bounded by $2a\diam\Omega$ and \begin{align} \urbPlXia^{\varepsilon,a}(G)\!\!-\!\!\urbPlXia^{\varepsilon,a}(G') &\geq \|\flux_{G(\Xi_0)}\|_\rca - a\diam\Omega|\Xi_0|\geq a\diam\Omega|\Xi_0|\,,\label{eq:bound_on_fibre_length}\\ \|\flux_G - \flux_{G'}\|_\rca &\leq \|\flux_{G(\Xi_0)}\|_\rca + a\diam\Omega|\Xi_0|\leq3(\urbPlXia^{\varepsilon,a}(G)\!\!-\!\!\urbPlXia^{\varepsilon,a}(G'))\,.\label{eq:fluxDiff} \end{align} \end{lemma} \begin{proof} Definitions\,\ref{def:graphs_as_vectorial_measures} and \ref{def:sums_of_graphs} can be extended to paths and thus allow us to define graphs as sums of edges and of paths, which we make use of in the following to simplify the exposition. Let \begin{displaymath} G' = \sum_{\xi \in \Xi(G)\setminus\Xi_0} w(\xi)\xi + G'' = \sum_{e \in E(G)} w'(e)e + G'', \end{displaymath} where \begin{displaymath} w'(e) = w(e) - \sum_{\xi \in \Xi_0 \,:\, e \in \xi} w(\xi) \end{displaymath} and $G''$ is a graph composed of straight edges, which restores the flux conservation condition $\dv G' = \dv G = \mu_+-\mu_-$. Denoting the length of an edge $e$ by $l(e)$ and using $c^{\varepsilon,a}(w)-c^{\varepsilon,a}(w-w_0) \geq w_0$ for $w\geq w_0\geq0$, we can now compute \begin{align*} \urbPlXia^{\varepsilon,a}(G) - \urbPlXia^{\varepsilon,a}(G') &= \sum_{e \in E(G)} [c^{\varepsilon,a}(w(e))-c^{\varepsilon,a}(w'(e))]l(e) - \urbPlXia^{\varepsilon,a}(G'')\\ &\geq \sum_{e \in E(G)} \left(\sum_{\xi \in \Xi_0 \,:\, e \in \xi} w(\xi)\right)l(e) - \urbPlXia^{\varepsilon,a}(G'')\\ &= \sum_{\xi \in \Xi_0} \sum_{e \in \xi} w(\xi)l(e) - \urbPlXia^{\varepsilon,a}(G'')\\ &\geq \|\flux_{G(\Xi_0)}\|_\rca - a\diam\Omega|\Xi_0|\,. \end{align*} The relation $\|\flux_{G(\Xi_0)}\|_\rca=\sum_{\xi \in \Xi_0} w(\xi) \sum_{e \in \xi}l(e)\geq 2a\diam\Omega|\Xi_0|$ now concludes the proof of \eqref{eq:bound_on_fibre_length}. Equation\,\eqref{eq:fluxDiff} directly follows from $G-G'=G(\Xi_0)-G''$. \end{proof} Finally, we will need the following compactness lemma for transport path measures. \begin{lemma}[Compactness for transport path measures]\label{lem:compactness_lemma_for_transport_path_measures} Let $C>0$, let $\Omega\subset\R^n$ be compact, and consider the set \begin{displaymath} \Theta_C =\left\{\theta\in\Theta\,:\,\theta(I)\subset\Omega\text{ and }\textstyle\int_I |\dot\theta(t)|\de t \leq C\right\}\subset \Theta\,. \end{displaymath} Let $\eta_N\in\TPM(\mu_+,\mu_-)$ be a sequence of transport path measures such that \begin{equation*} \eta_N(\Theta\setminus\Theta_C) = 0\,. \end{equation*} Then, up to a subsequence, $\eta_N \weakto \eta$ in the sense \begin{displaymath} \int_{\Theta} \varphi(\theta)\,\de\eta_N(\theta) \to \int_{\Theta} \varphi(\theta)\,\de\eta(\theta) \quad\text{for all}\ \varphi\in\contbdd(\Theta)\,, \end{displaymath} where $\contbdd(\Theta)$ denotes the set of bounded continuous functions on $\Theta$. Moreover, $\eta\in\TPM(\mu_+,\mu_-)$.
\end{lemma} \begin{proof} Note that $\Theta$ is separable (which follows from the separability of $\Lip(I)$) and that $\Theta_C$ is a (sequentially) compact subset of $\Theta$. Indeed, let $\theta_n$, $n=1,2,\ldots$, be a sequence in $\Theta_C$. Upon reparameterisation of each element (which does not change the sequence with respect to $d_\Theta$), the $\theta_n$ are uniformly Lipschitz. Thus, by the Ascoli--Arzel\`a Theorem, up to a subsequence we have $\theta_n\to\theta\in\cont(I;\Omega)$. Furthermore, \begin{displaymath} \int_I |\dot\theta(t)|\,\de t \leq \liminf_n \int_I |\dot\theta_n(t)|\,\de t \leq C. \end{displaymath} As a consequence, the $\eta_N$ are all supported on the same compact set and are thus tight (i.\,e.\ for every $\varepsilon > 0$ there exists a compact $K_\varepsilon$ such that $\eta_N(K_\varepsilon^c) < \varepsilon$). Furthermore, due to $\eta_N\in\TPM(\mu_+,\mu_-)$ they all have the same mass. Hence, by Prokhorov's Theorem (which ensures weak sequential compactness of tight sets of measures with uniformly bounded mass; see \cite{Bil99}) we get $\eta_N \weakto \eta$ up to a subsequence, as desired. It remains to prove $\pushforward{p_0}{\eta} = \mu_+$ (the proof of $\pushforward{p_1}{\eta} = \mu_-$ works analogously). Since $\pushforward{p_0}{\eta_N} = \mu_+$ for all $N$, we have \begin{equation*} \int_\Theta \varphi(p_0(\theta))\,\de\eta_N(\theta) = \int_\Omega \varphi(x) \,\de\mu_+(x) \quad\text{for all}\ \varphi \in \contbdd(\Omega)\,. \end{equation*} Due to $\eta_N \weakto \eta$ as well as $\varphi\circ p_0\in\contbdd(\Theta)$, letting $N\to\infty$ we finally arrive at \begin{equation*} \int_\Theta \varphi(p_0(\theta))\,\de\eta(\theta) = \int_\Omega \varphi(x) \,\de\mu_+(x) \quad\text{for all}\ \varphi \in \contbdd(\Omega)\,, \end{equation*} that is, $\pushforward{p_0}{\eta} = \mu_+$. \end{proof} \begin{proposition}\label{prop:constructPatternFromFlux} We have \begin{equation}\label{eq:urban_chi_leq_urban_flux} \min_\chi\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] \leq \min_\flux\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]\,. \end{equation} Furthermore, for any optimal mass flux $\flux$ there is an optimal irrigation pattern $\chi$ so that both are related via \begin{equation}\label{eq:constructPatternFromFlux} \int_{\R^n}\varphi\cdot\de\flux=\int_\reSpace\int_I\varphi(\chi_p(t))\cdot\dot\chi_p(t)\,\de t\,\de \reMeasure(p) \;\text{for all}\; \varphi\in \cont_c(\R^n;\R^n). \end{equation} \end{proposition} \begin{proof} In the first part of the proof, we construct a pattern $\chi$ from an optimal flux. So let the flux $\flux$ be optimal. We may assume $\urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux] < \infty$ since otherwise, by Remark \ref{rem:existenceUrbPlFiniteCostPattern}, there is nothing to show. Let $G_N$ be a sequence of finite weighted graphs such that \begin{itemize} \item $G_N$ is a discrete mass flux between some $\mu_+^N$ and $\mu_-^N$, \item $(\mu_+^N,\mu_-^N,\flux_{G_N}) \weakstarto (\mu_+,\mu_-,\flux)$, \item $\urbPlXia^{\varepsilon,a}(G_N) \to \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]$. \end{itemize} Note that if $\hat G_N$ or rather $\flux_{\hat G_N}$ is a minimiser of $\urbPlEn^{\varepsilon,a,\mu_+^N,\mu_-^N}$, by Lemma \ref{lem:gamma_convergence_for_discrete_measures} we must have \begin{displaymath} \lim_{N\to\infty} \left(\urbPlXia^{\varepsilon,a}(G_N)-\urbPlXia^{\varepsilon,a}(\hat G_N)\right) = 0. \end{displaymath} Let $\tilde G_N$ denote the cycle-reduced mass flux of $G_N$.
By Lemma\,\ref{lem:cycleCost}, $\urbPlXia^{\varepsilon,a}(\tilde G_N)\leq\urbPlXia^{\varepsilon,a}(G_N)$ and \begin{equation*} \|\flux_{G_N}-\flux_{\tilde G_N}\|_\rca \leq \urbPlXia^{\varepsilon,a}(G_N)-\urbPlXia^{\varepsilon,a}(\tilde G_N) \leq\urbPlXia^{\varepsilon,a}(G_N)-\urbPlXia^{\varepsilon,a}(\hat G_N) \mathop\to_{N\to\infty}0\,. \end{equation*} Thus, without loss of generality, we may replace the $G_N$ by discrete mass fluxes without cycles, and from now on $G_N$ is assumed to have no cycles. We may even assume the $G_N$ to only contain paths with length bounded by $2a\diam(\Omega)$, where $\Omega$ is a ball containing $\spt\mu_+$ and $\spt\mu_-$. Indeed, let $G_N'$ and $\Xi_0^N$ be the graph and the set of paths from Lemma \ref{lem:bound_on_fibre_length} associated with $G_N$. By Lemma \ref{lem:bound_on_fibre_length} we obtain \begin{equation*} \|\flux_{G_N}-\flux_{G_N'}\|_\rca \leq 3(\urbPlXia^{\varepsilon,a}(G_N)-\urbPlXia^{\varepsilon,a}(G_N')) \leq3(\urbPlXia^{\varepsilon,a}(G_N)-\urbPlXia^{\varepsilon,a}(\hat G_N)) \mathop\to_{N\to\infty}0\,. \end{equation*} Thus we may replace the $G_N$ by the $G_N'$, which have uniformly bounded path lengths. Summarising, from now on we may assume the $G_N$ to have no cycles and to have path lengths bounded by $2a\diam\Omega$. Hence, by Remark\,\ref{rem:fluxDecomp} there exist corresponding transport path measures $\eta_N\in\TPM(\mu_+,\mu_-)$. Since they parameterise the graphs $G_N$, the $\eta_N$ have support on paths with lengths bounded by $2a\diam\Omega$ and images in $B_{2a\diam\Omega}(\Omega)$. Thanks to Lemma \ref{lem:compactness_lemma_for_transport_path_measures}, we have (up to a subsequence) $\eta_N \weakto \eta$ for some $\eta\in\TPM(\mu_+,\mu_-)$. By Skorokhod's Convergence Theorem \cite[App.\,A, Thm.\,A.8]{BeCaMo09}, there exist a sequence of irrigation patterns $\chi_N$ parameterising $\eta_N$ and an irrigation pattern $\chi$ parameterising $\eta$ such that \begin{displaymath} \chi_N(p,\cdot) \stackrel{C^0(I)}{\longrightarrow} \chi(p,\cdot) \quad \text{for all }p \in \reSpace\,. \end{displaymath} By Proposition \ref{prop:urban_planning_energy_is_lower_semicontinuous} we have \begin{equation*} \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\chi] = \urbPlMMS^{\varepsilon,a}(\chi) \leq \liminf_{N \to \infty} \urbPlMMS^{\varepsilon,a}(\chi_N) = \liminf_{N \to \infty} \urbPlXia^{\varepsilon,a}(G_N) = \urbPlEn^{\varepsilon,a,\mu_+,\mu_-}[\flux]\,, \end{equation*} and \eqref{eq:urban_chi_leq_urban_flux} is established. In the second part of the proof we now explain the relation given by formula \eqref{eq:constructPatternFromFlux} between the constructed $\chi$ and $\flux$. After reparameterisation according to Proposition\,\ref{thm:constSpeedPatternsUrbPl} we may assume the $\chi_N(p,\cdot)$ to be uniformly Lipschitz with constant $2a\diam(\Omega)$. Thus, for each $p$ we may extract a subsequence converging weakly-$*$ in $W^{1,\infty}(I)$ to $\chi(p,\cdot)$. Since any subsequence contains such a converging subsequence, actually the whole sequence $\chi_N(p,\cdot)$ converges weakly-$*$ to $\chi(p,\cdot)$. Now for any $\varphi\in \cont_c(\R^n;\R^n)$ we have (the second equality can be easily verified edge by edge) \begin{displaymath} \int_\Omega\varphi\cdot\de\flux =\lim_{N\to\infty}\int_\Omega\varphi\cdot\de\flux_{G_N} =\lim_{N\to\infty}\int_\reSpace\int_I\varphi(\chi_N(p,t))\cdot\dot\chi_N(p,t)\,\de t\,\de \reMeasure(p)\,.
\end{displaymath} Note that $\varphi(\chi_N(p,\cdot))$ converges in $L^\infty(I)$, while $\dot\chi_N(p,\cdot)$ converges weakly-$*$ in $L^\infty(I)$ so that \begin{displaymath} \int_I\varphi(\chi_N(p,t))\cdot\dot\chi_N(p,t)\,\de t=:J_N(p)\to J(p):=\int_I\varphi(\chi(p,t))\cdot\dot\chi(p,t)\,\de t\,. \end{displaymath} Together with the uniform bound $|J_N(p)|\leq\|\varphi\|_{L^\infty}\hdone(\chi_N(p,I))\leq2\|\varphi\|_{L^\infty}a\diam(\Omega)$, this allows us to apply Lebesgue's dominated convergence theorem, from which we finally obtain \begin{displaymath} \int_\Omega\varphi\cdot\de\flux =\lim_{N\to\infty}\int_\reSpace\int_I\varphi(\chi_N(p,t))\cdot\dot\chi_N(p,t)\,\de t\,\de \reMeasure(p) =\int_\reSpace\int_I\varphi(\chi(p,t))\cdot\dot\chi(p,t)\,\de t\,\de \reMeasure(p)\,, \end{displaymath} the desired formula. \end{proof} \section{Acknowledgements} This work was supported by the Deutsche Forschungsgemeinschaft (DFG), Cells-in-Motion Cluster of Excellence (EXC 1003-CiM), University of M\"unster, Germany. B.W.'s research was supported by the Alfried Krupp Prize for Young University Teachers awarded by the Alfried Krupp von Bohlen und Halbach-Stiftung. \bibliographystyle{alpha}
{ "attr-fineweb-edu": 1.963867, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbpA4eIXguvMsvIQq
\section{Sequence-Form Representation for Perfect-Recall EFGs}\label{app:sequence} The number of pure strategies $|\Pi_i|$ of each player $i \in N$ may be exponentially large in the size of an EFG, preventing the development of scalable computational tools using them. Moreover, the same holds for \emph{reduced} pure strategies, which only specify actions at infosets that are reachable given the player's past moves. This problem is circumvented by the \emph{sequence form} introduced by~\citet{von1996efficient}, where each player selects a \emph{sequence} of actions rather than a pure strategy. For any node $h \in H$, we let $\sigma_i(h)$ be the ordered sequence of actions of player $i \in N$ on the path from the root of the game tree to $h$. We recall that, given the perfect recall assumption, all nodes in an infoset $I \in \mathcal{I}_i$ of player $i \in N$ define the same sequence $\sigma_i(I)$ of player $i$'s actions, \emph{i.e.}, it holds $\sigma_i(h) = \sigma_i(I) $ for all $h \in I$. Moreover, $\sigma_i(I)$ can be extended by any action $a \in A(I)$, defining a new sequence $\sigma_i(I) a$ of player $i$. Thus, by introducing the empty sequence to represent the paths in the game tree in which a player does not play, the set of sequences available to player $i \in N$ is $\Sigma_i \coloneqq \{\varnothing\} \cup \{ \sigma_i(I)a \mid I \in \mathcal{I}_i , a \in A(I) \}$. Within the sequence form, mixed strategies are expressed as \emph{realization plans}. A realization plan for player $i \in N$ is a function $x_i: \Sigma_i \to [0,1]$, with $x_i(\sigma_i)$ expressing the realization probability of sequence $\sigma_i \in \Sigma_i$. In order to be well defined, $x_i$ must satisfy the linear constraints $x_i(\varnothing) = 1$ and $x_i(\sigma_i(I)) = \sum_{a \in A(I)} x_i(\sigma_i(I)a)$ for every infoset $I \in \mathcal{I}_i$. Since the number of sequences $|\Sigma_i|$ of each player $i \in N$ is polynomial in the size of an EFG and realization plans can be easily expressed by linear constraints, the sequence form is an appealing formalism for handling EFGs. Moreover, as shown by~\citet{von1996efficient}, the crucial property of the sequence form is that realization plans and behavior strategies are equally expressive in EFGs with perfect recall. In particular, $x_i$ is equivalent to a behavior strategy that selects $a \in A(I)$ with probability $\frac{x_i(\sigma_i(I) a)}{x_i(\sigma_i(I))}$ if $x_i(\sigma_i(I)) > 0$ and arbitrarily if $x_i(\sigma_i(I)) = 0$. Conversely, a behavior strategy $\beta_i$ is equivalent to a realization plan that selects each sequence $\sigma_i \in \Sigma_i$ with probability $\prod_{a \in \sigma_i} \beta_i(a)$ (see the illustrative sketch below). \section{Characterization of EFCEs Using Trigger Agents}\label{app:trigger} We provide a formal statement of the characterization of EFCEs based on trigger agents (see Definition~\ref{def:trigger}), originally introduced by~\citet{DBLP:conf/icml/GordonGM08}~and~\citet{DBLP:conf/nips/FarinaLFS19a} (see also~\citep{DBLP:conf/aaai/FarinaBS20} for a more general treatment). We recall that such a characterization is based on the fact that $\mu \in \Delta_\Pi$ is an EFCE if, for every $i \in N$, player $i$'s expected utility when following recommendations is at least as large as the expected utility that any $(I, a, \hat \mu_i)$-trigger agent for player $i$ can achieve (assuming the opponents do not deviate from recommendations).
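As a concrete illustration of the sequence-form conversion recalled in Appendix~\ref{app:sequence} (ours, not part of the cited works; the action names of the example game in Figure~\ref{fig:example_game} are used as placeholders), the following Python sketch turns a behavior strategy of player $1$ into a realization plan and verifies the sequence-form constraints:
\begin{verbatim}
import math

# Player 1's sequences, each encoded as the tuple of her actions on
# the path from the root; the empty tuple is the empty sequence.
sequences = [(), ("a",), ("b",), ("a", "c"), ("a", "d")]

# A behavior strategy: the probability of each action at its infoset.
beta = {"a": 1.0, "b": 0.0, "c": 0.5, "d": 0.5}

def realization_plan(beta, sequences):
    # x_i(sigma) = product of beta(a) over the actions a in sigma
    return {s: math.prod(beta[a] for a in s) for s in sequences}

x = realization_plan(beta, sequences)
# Sequence-form constraints: x(empty) = 1, and the probability of a
# parent sequence splits among its extensions at each infoset.
assert x[()] == 1.0
assert x[()] == x[("a",)] + x[("b",)]              # infoset I
assert x[("a",)] == x[("a", "c")] + x[("a", "d")]  # infoset J
\end{verbatim}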
For any $\mu\in\Delta_\Pi$ and $(I, a, \hat \mu_i)$-trigger agent, we define the probability of reaching a terminal node $z \in Z(I)$ as: \begin{equation}\label{eq:p} p_{\mu,\hat\mu_i}^{I,a}(z) \coloneqq \left( \sum_{\substack{\pi_i \in \Pi_i(I, a)\\\pi_{-i} \in \Pi_{-i}(z)}} \mu (\pi_i, \pi_{-i}) \right) \left( \sum_{\hat \pi_i \in \Pi_i(z)} \hat \mu_i(\hat \pi_i) \right) p_c(z), \end{equation} which accounts for the fact that the agent follows recommendations until she receives the recommendation of playing $a$ at $I$, and, thus, she `gets triggered' and plays according to $\hat \pi_i$ sampled from $\hat \mu_i$ from $I$ onwards. Moreover, the probability of reaching a terminal node $z \in Z$ when following the recommendations is defined as follows: \begin{equation}\label{eq:q} q_\mu(z) \coloneqq \left( \sum_{\pi \in \Pi(z)} \mu(\pi) \right) p_c(z). \end{equation} Then, the following lemma provides the trigger-agent-based characterization of EFCEs: \begin{lemma}[\citet{DBLP:conf/aaai/FarinaBS20}]\label{lem:efce_trigger} Given an EFG $\Gamma$, $\mu \in \Delta_\Pi$ is an EFCE of $\Gamma$ if for every $i \in N$ and $(I, a, \hat \mu_i)$-trigger agent for player $i$, it holds that: % \begin{equation*}\label{eq:efce_simp} \sum_{z \in Z(I,a)} q_\mu(z) u_i(z) \geq \sum_{z \in Z(I)} p_{\mu,\hat\mu_i}^{I,a}(z) u_i(z). \end{equation*} \end{lemma} \section{LP Formulation for the Set of EFCEs in $n$-Player EFGs}\label{app:lp} We show how to derive the LP formulation (Problem~\ref{prob:primal_ellip}) for the set of EFCEs in $n$-player EFGs originally introduced by~\citet{huang2008computing}, using the characterization of EFCEs based on trigger agents (see Definition~\ref{def:trigger} and Lemma~\ref{lem:efce_trigger}). In the following, we assume that a probability distribution $\mu \in \Delta_{\Pi}$ is encoded by means of variables $\mu[\pi]$, defined for $\pi \in \Pi$. For every player $i \in N$, infoset $I \in \mathcal{I}_i$, and action $a \in A(I)$, we introduce a variable $u[i, I, a]$ representing player $i$'s expected utility when following the recommendation to play $a$ at infoset $I$. These variables are defined by the following constraints: \begin{align}\label{eq:aux_u} & u[i, I, a] = \sum_{z \in Z(I,a)} \left( \sum_{\pi \in \Pi(z)} \mu[\pi] \right) p_c(z) u_i(z) & \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I). \end{align} Then, we need to introduce constraints which ensure that following recommendations guarantees a utility at least as large as that achieved by any $(I, a, \hat \mu_i)$-trigger agent. For every infoset $J \in \mathcal{I}_i$ such that $I \preceq J$, we introduce a variable $v[i,I,a,J]$ that encodes the maximum expected utility obtained at infoset $J$ by trigger agents associated with $I$ and $a$. We can recursively define variables $v[i,I,a,J]$ as follows: \begin{align}\label{eq:dev_mu} & v[i,I,a,J] \geq \sum_{z \in Z^\bot (J, a')} \left( \sum_{\substack{ \pi_i \in \Pi_i(I, a) \\ \pi_{-i} \in \Pi_{-i}(z) } } \mu[\pi_i, \pi_{-i}] \right) p_c(z) u_i(z) + \sum_{K \in \mathcal{C}(J, a^\prime)} v[i, I, a, K] \\ & \hspace{7.5cm}\forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I), \forall J \in \mathcal{I}_i: I \preceq J, \forall a^\prime \in A(J), \nonumber \end{align} where we notice that the first summation is over the set of terminal nodes which are reachable from $J$ by playing $a^\prime$ without traversing any other player $i$'s infoset. 
The following incentive constraints complete the formulation: \begin{align}\label{eq:inc_mu} & u[i,I,a] = v[i,I,a,I] & \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I). \end{align} A direct application of Lemma~\ref{lem:efce_trigger} and LP duality is enough to prove that Constraints~\eqref{eq:aux_u},~\eqref{eq:dev_mu},~and~\eqref{eq:inc_mu} correctly characterize the set of EFCEs (formally, it is enough to follow steps similar to those in the proof of Theorem~\ref{thm:trembling_lp_theorem}, with the only difference that Constraints~\eqref{eq:primal_inner_cons3} in the inner maximization problems and the corresponding dual variables $w[i,I,a,J,a']$ are missing). By substituting the equalities in Constraints~\eqref{eq:aux_u}~and~\eqref{eq:inc_mu} into Constraints~\eqref{eq:dev_mu}, we obtain the following set of linear constraints, which are equivalent to those introduced by~\citet{huang2008computing}: \begin{equation*} A \boldsymbol{\mu} + B \boldsymbol{v}\geq \boldsymbol{0}, \end{equation*} where $\boldsymbol{\mu}$ is the vector whose components are the variables $\mu[\pi]$ for $\pi \in \Pi$, while $\boldsymbol{v}$ is the vector of variables $v[i,I,a,J]$ indexed by $ i \in N, I \in \mathcal{I}_i, a \in A(I)$, and $J \in \mathcal{I}_i: I \preceq J$. Moreover, the matrices $A$ and $B$ encode the coefficients appearing in Constraints~\eqref{eq:aux_u}~and~\eqref{eq:dev_mu}. Specifically, non-zero entries of $A$ are products $p_c(z) u_i(z)$, while those of $B$ are either $1$ or $-1$. \section{Discussion on EFPCEs and Un-Reduced Strategies}\label{app:reduced} Next, we discuss the reasons why EFPCEs need un-reduced strategy profiles in order to be defined consistently. First, we remark that, as discussed by~\citet{von2008extensive}, restricting the definition of probability distributions $\mu$ to \emph{reduced} strategy profiles (\emph{i.e.}, those in which each player's pure strategy only specifies actions at infosets reachable given that player's moves, see~\citep{vermeulen1998reduced} for a formal definition) is sufficient for the characterization of the classical notions of correlated equilibria. Intuitively, the reason is that, at the equilibrium, each player follows recommendations issued by the correlation device, and, thus, the latter does \emph{not} need to specify action recommendations for the player at those infosets that are never reached when following recommendations at the preceding infosets of the same player. This is no longer the case if we introduce trembles in the game, which make all the infosets reachable with positive probability even when committing to following recommendations. As a result, the correlation device has to be ready to issue action recommendations everywhere in the game. Then, when defining EFPCEs, we cannot restrict attention to probability distributions over reduced strategy profiles, and un-reduced ones are necessary. The EFG in Figure~\ref{fig:example_game}(\emph{Left}) provides an example where un-reduced strategies are necessary to express EFPCEs. As shown in the main text, any EFPCE of the game must recommend playing $a$ at $\textsc{i}$, while, at the same time, it is crucial to define recommendations also at infosets $\textsc{k}$ and $\textsc{l}$, in order to achieve optimality off the equilibrium path.
Clearly, this is incompatible with reduced strategies, as any reduced strategy of player $1$ prescribing $a$ at $\textsc{i}$ does not specify anything at infosets $\textsc{k}$ and $\textsc{l}$, which are unreachable when playing $a$ at $\textsc{i}$. \section{Detailed Examples of EFPCEs}\label{app:example} Consider the EFG in Figure~\ref{fig:example_game}(\emph{Left}) and lower bound functions $\eta_t: A \to (0,1)$ for $t \in \mathbb{N}$, with $\eta_t(a)$ converging to zero as $t \rightarrow \infty$ for each $a \in A$. First, let us notice that, without trembles, player $1$ is always better off playing action $a$ at the root infoset $\textsc{i}$, since she can guarantee herself a utility of $1$ by selecting $c$ at the following infoset $\textsc{j}$, while she can achieve at most a utility of $\frac{1}{2}$ by playing $b$. Thus, any EFPCE of the game (as well as any EFCE) must recommend $a$ at $\textsc{i}$ with probability $1$, since there is no way player $1$ can be incentivized to play $b$. Then, in the sub-game reached when playing $a$ at $\textsc{i}$, it is easy to check that recommending the pairs of actions $(c,m)$, $(c,n)$, and $(d,m)$ each with probability $\frac{1}{3}$ is an equilibrium, as no player has an incentive to deviate from any recommendation, even in the presence of trembles. As an example, consider the case in which player $1$ is told to play action $c$ at $\textsc{j}$. Then, by following the recommendations, she gets a utility equal to: \begin{align*} & \Big[ 2 \cdot \frac{1}{3} \left( 1 - \eta_t(n) \right) \left( 1 - \eta_t(d) \right) + 3 \cdot \frac{1}{3} \left( 1 - \eta_t(n) \right) \eta_t(d) + 1 \cdot \frac{1}{3} \eta_t(n) \left( 1 - \eta_t(d) \right) + 0 \cdot \frac{1}{3} \eta_t(n) \eta_t(d) \Big] + \\ & \Big[ 1 \cdot \frac{1}{3} \left( 1 - \eta_t(m) \right) \left( 1 - \eta_t(d) \right) + 0 \cdot \frac{1}{3} \left( 1 - \eta_t(m) \right) \eta_t(d) + 2 \cdot \frac{1}{3} \eta_t(m) \left( 1 - \eta_t(d) \right) + 3 \cdot \frac{1}{3} \eta_t(m) \eta_t(d) \Big], \end{align*} where the first sum is for the case in which $(c,m)$ is recommended, while the second one is for $(c,n)$. Each term appearing in a sum is for one of the four possible outcomes that may result when following recommendations subject to trembles. Instead, player $1$'s utility if deviating to $d$ at $\textsc{j}$ is: \begin{align*} & \Big[ 3 \cdot \frac{1}{3} \left( 1 - \eta_t(n) \right) \left( 1 - \eta_t(c) \right) + 2 \cdot \frac{1}{3} \left( 1 - \eta_t(n) \right) \eta_t(c) + 0 \cdot \frac{1}{3} \eta_t(n) \left( 1 - \eta_t(c) \right) + 1 \cdot \frac{1}{3} \eta_t(n) \eta_t(c) \Big] + \\ & \Big[ 0 \cdot \frac{1}{3} \left( 1 - \eta_t(m) \right) \left( 1 - \eta_t(c) \right) + 1 \cdot \frac{1}{3} \left( 1 - \eta_t(m) \right) \eta_t(c) + 3 \cdot \frac{1}{3} \eta_t(m) \left( 1 - \eta_t(c) \right) + 2 \cdot \frac{1}{3} \eta_t(m) \eta_t(c) \Big]. \end{align*} A simple calculation shows that the first quantity is greater than or equal to the second one as the lower bounds approach zero (a numerical check is sketched below). Analogous conditions hold for other recommendations at infosets $\textsc{x}$ and $\textsc{j}$. Notice that, when lower bounds are zero, the conditions above collapse to the classical incentive constraints for EFCE. The correlation device described up to this point is sufficient to define an EFCE, as recommendations at infosets $\textsc{y}$, $\textsc{k}$, and $\textsc{l}$ are \emph{not} relevant given that they do not influence players' utilities at the equilibrium ($b$ is never recommended).
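As announced above, the incentive inequality can also be checked numerically. The following Python snippet (ours, not part of the formal argument) evaluates both expressions under uniform lower bounds $\eta_t(\cdot) \equiv \varepsilon$; both evaluate to $1$ for every admissible $\varepsilon$, so the constraint holds with equality and player $1$ is exactly indifferent between following $c$ and deviating to $d$:
\begin{verbatim}
def follow_c(eps):
    # expected utility when obeying the recommendation c at J
    en = ed = em = eps  # uniform trembles eta_t(n), eta_t(d), eta_t(m)
    return ((2*(1-en)*(1-ed) + 3*(1-en)*ed + 1*en*(1-ed) + 0*en*ed)
            + (1*(1-em)*(1-ed) + 0*(1-em)*ed + 2*em*(1-ed) + 3*em*ed)) / 3

def deviate_d(eps):
    # expected utility when deviating to d at J
    en = ec = em = eps  # uniform trembles eta_t(n), eta_t(c), eta_t(m)
    return ((3*(1-en)*(1-ec) + 2*(1-en)*ec + 0*en*(1-ec) + 1*en*ec)
            + (0*(1-em)*(1-ec) + 1*(1-em)*ec + 3*em*(1-ec) + 2*em*ec)) / 3

for eps in [0.2, 0.1, 0.01, 0.001]:
    f, d = follow_c(eps), deviate_d(eps)
    assert f >= d - 1e-12  # incentive constraint for recommendation c
\end{verbatim}
Note that non-uniform trembles may break this indifference; this is consistent with Lemma~\ref{lem:efpce_limit_point}, which only requires the NE conditions to hold along \emph{some} sequence of vanishing lower bounds, such as the uniform one used here.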
However, in perturbed extended games, these infosets could be reached due to a tremble which happens with probability $\eta_t(b)$, and, thus, recommendations at such infosets become relevant. Then, it is easy to check that player $2$ must be told to play $p$ at $\textsc{y}$, because her utility is always $1$ if she plays $p$, while it is always $0$ when playing $o$. Moreover, by an analogous argument, player $1$ must be recommended to play $e$ and $h$ at $\textsc{k}$ and $\textsc{l}$, respectively. As an example, consider the case in which player $1$ is recommended to play $e$ at $\textsc{k}$. Then, her utility would be $ \frac{1}{2} \cdot \left( 1 - \eta_t(f) \right) + 0 \cdot \eta_t(f)$, while she would get $0 \cdot \left( 1 - \eta_t(e) \right) + \frac{1}{2} \cdot \eta_t(e)$ by deviating to $f$. Similar conditions hold for infosets $\textsc{y}$ and $\textsc{l}$. In conclusion, we can state that the following distribution $\mu \in \Delta_{\Pi}$ defines an EFPCE: \begin{align*} \mu(aceh,mp) = \mu(aceh,np) = \mu(adeh,mp) = \frac{1}{3}. \end{align*} Let us remark that this is not the only EFPCE of the game, as there are other ways of correlating players' behavior at infosets $\textsc{x}$ and $\textsc{j}$ while satisfying the required incentive constraints. For example, setting \begin{align*} \mu(aceh,mp) = \mu(aceh,np) = \mu(adeh,mp) = \mu(adeh,np) = \frac{1}{4} \end{align*} defines a valid EFPCE that results from a PE of the game (where players play uniform strategies at infosets $\textsc{x}$ and $\textsc{j}$). \section{Proofs of Theorems and Lemmas}\label{app:proofs} In this section, we provide the complete proofs of Theorems~\ref{thm:relations},~\ref{thm:relations_ne},~\ref{thm:charac_recc_ne},~\ref{thm:trembling_lp_theorem},~and~Lemma~\ref{lem:lemmasep}. \thmrelations* \begin{proof} Clearly, $\textnormal{\textsf{EFPCE}} \subseteq \textnormal{\textsf{EFCE}}$ holds since any PE of $\Gamma^{\textnormal{ext}}(\mu)$ is also an NE. % As for the other relation, let $\{ \beta_i \}_{i \in N}$ be a PE of $\Gamma$ obtained for a sequence of perturbed games $\{ (\Gamma, \eta_t) \}_{t \in \mathbb{N}}$ and a corresponding sequence of NEs in these games, namely $\{ \beta_{i,t} \}_{i \in N}$ for $t \in \mathbb{N}$, where each $\beta_{i,t}$ is a well-defined behavior strategy for player $i$ in $(\Gamma, \eta_t) $, \emph{i.e.}, it holds $\beta_{i,t}(a) \geq \eta_t(a)$ for all $t \in \mathbb{N}$, $i \in N$, and $a \in A_i$. % Let $\mu \in \Delta_{\Pi}$ be such that, for every $\pi \in \Pi$, it holds $\mu(\pi) = \prod_{i \in N} \prod_{I \in \mathcal{I}_i} \beta_i(\pi_i(I))$. % Consider the extended game $\Gamma^{\textnormal{ext}}(\mu)$, where we denote with $\mathcal{I}^{\textnormal{ext}}_i$ the set of all infosets of player $i \in N$, one for each infoset $I \in \mathcal{I}_i$ of $\Gamma$ and possible combination of recommendations received by $i$ at the infosets $J \in \mathcal{I}_i: J \preceq I$. % Overloading the notation, for each infoset $I \in \mathcal{I}^{\textnormal{ext}}_i$ of the extended game, we also use $I$ to denote the corresponding infoset in the original game. % We also write $A(I)$ for the set of actions available at $I \in \mathcal{I}^{\textnormal{ext}}_i$. % Let $\{ (\Gamma^{\textnormal{ext}}(\mu), \eta_t) \}_{t \in \mathbb{N}}$ be the sequence of perturbed extended games resulting from $\{ (\Gamma, \eta_t) \}_{t \in \mathbb{N}}$.
% Furthermore, for each $t \in \mathbb{N}$ and player $i \in N$, we define a behavior strategy for player $i$ in $(\Gamma^{\textnormal{ext}}(\mu), \eta_t) $ such that, at each infoset $I \in \mathcal{I}^{\textnormal{ext}}_i$: % \begin{itemize} \item all the residual probability given the lower bounds $1 - \sum_{a \in A(I): a \neq \pi_i(I)} \eta_t(a)$ is placed on the action $\pi_i(I)$ which is recommended at $I$; and % \item all the other, non-recommended actions $a \in A(I): a \neq \pi_i(I)$ are played with probabilities equal to their corresponding lower bounds $\eta_t(a)$. \end{itemize} % Intuitively, these strategies encode the fact that players follow recommendations in the perturbed extended games $(\Gamma^{\textnormal{ext}}(\mu), \eta_t) $, where trembles prevent them from perfectly obeying recommendations. % Given the definition of $\mu$ and the fact that each $\{ \beta_{i,t} \}_{i \in N} $ constitutes an NE for the perturbed game $(\Gamma, \eta_t) $, we can conclude that the behavior strategies defined above constitute NEs for the perturbed extended games $ (\Gamma^{\textnormal{ext}}(\mu), \eta_t) $. % Thus, any limit point of the sequence defined by such behavior strategies for $t \in \mathbb{N}$ is a PE of $\Gamma^{\textnormal{ext}}(\mu)$. % Moreover, by definition, any limit point prescribes playing recommended actions, which shows that $\mu$ defines an EFPCE of $\Gamma$, proving that $\textnormal{\textsf{PE}} \subseteq \textnormal{\textsf{EFPCE}}$. \end{proof} \thmrelationsne* \begin{proof} % Let us start with the first bullet point. % We consider the EFG in Figure~\ref{fig:example_game}(\emph{Left}) in order to provide examples that prove the two relations. % Notice that, in this game, player~$1$ is always better off playing action $a$ at the first infoset $\textsc{i}$, since she can guarantee herself at least $1$ by playing $c$ at $\textsc{j}$, while she can achieve at most $\frac{1}{2}$ by playing action $b$. % Then, it is easy to check that, in any NE of the game, the players play behavior strategies $\beta_1$ and $\beta_2$ such that: % \begin{itemize} \item $\beta_1(a) = 1$ and $\beta_1(b) = 0$, while $\beta_1(c) = \beta_1(d) = \frac{1}{2}$; and % \item $\beta_2(m) = \beta_2(n) = \frac{1}{2}$. \end{itemize} % The players' behavior at other infosets can be arbitrary, as it does not affect players' utilities at the equilibrium (given that infosets $\textsc{y}$, $\textsc{k}$, and $\textsc{l}$ are never reached due to $\beta_1(b)=0$). % As we have shown in the main text, one EFPCE of the game is the distribution $\mu \in \Delta_\Pi$ such that % \begin{align*} \mu(aceh, mp) = \mu(adeh,mp) = \mu(aceh, np) = \frac{1}{3}, \end{align*} % which induces each player to follow recommendations, even in the presence of trembles. % Clearly, this distribution $\mu$ cannot arise from players' behavior strategies, and, thus, it cannot result from an NE. % This shows that $\textnormal{\textsf{EFPCE}} \not\subset \textnormal{\textsf{NE}}$. % Moreover, notice that any NE such that $\beta_1(f) > 0$ cannot determine a distribution $\mu \in \Delta_{\Pi}$ which is an EFPCE, since it would be the case that action $f$ is recommended with positive probability when reaching infoset $\textsc{k}$ (due to trembles). % However, player $1$ cannot have any incentive to follow such a recommendation, as she can gain a utility of $\frac{1}{2}$ instead of $0$ by deviating to $e$. % This proves that $\textnormal{\textsf{NE}} \not\subset \textnormal{\textsf{EFPCE}}$.
As for the second bullet point, notice that all the EFPCEs $\mu \in \Delta_{\Pi}$ which are also NEs must be such that $\mu$ is obtained from some players' behavior strategies defining an NE. % As a result, by definition of EFPCE, we can conclude that such behavior strategies are indeed PEs. % \end{proof} \thmcharacreccne* \begin{proof} Given the definitions of $q^\eta_\mu(z)$, $p_{\mu,\hat\mu_i}^{\eta, I,a}(z)$, and $y_{\mu,\hat\mu_i}^{\eta, I,a}(z)$, following recommendations is an NE of $(\Gamma^{\textnormal{ext}}(\mu), \eta) $ if for every $i \in N$ and $(I,a,\hat\mu_i)$-trigger agent for player $i$, it holds that: % \begin{align*} \sum_{z \in Z} q^\eta_\mu(z) u_i(z) \geq \sum_{z \in Z \setminus Z(I)} q^\eta_\mu(z) u_i(z) + \sum_{z \in Z(I)} y_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z). \end{align*} % Equivalently, we can write: % \begin{align*} & \sum_{z \in Z(I)} q^\eta_\mu(z) u_i(z) \geq \sum_{z \in Z(I)} y_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z) \\ & \sum_{z \in Z(I)} \left[ \sum_{\substack{\pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) + \sum_{\substack{\pi_i \in \Pi_i \setminus \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) \right] u_i(z) \geq \\ & \hspace{3cm} \geq \sum_{z \in Z(I)} p_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z) + \sum_{z \in Z(I)} \left[ \sum_{\substack{\pi_i \in \Pi_i \setminus \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) \right] u_i(z) \\ & \sum_{z \in Z(I)} \left[ \left( \sum_{\substack{\pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) \right) u_i(z) \right] \geq \sum_{z \in Z(I)} p_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z), \end{align*} % which proves the result. \end{proof} \thmtremblinglptheorem* \begin{proof} By Theorem~\ref{thm:charac_recc_ne}, following recommendations is an NE of $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$ if the vector $\boldsymbol{\mu}$ of variables $\mu[\pi]$ encoding the distribution $\mu$ satisfies the following constraints (from here on, we omit the subscript $t$ for ease of notation): % \begin{align*} & \sum_{z \in Z(I)} \left[ \left( \sum_{\substack{\pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) \right) u_i(z) \right] = \sum_{z \in Z(I)} p_{\mu,\hat\mu_i^{I,a}}^{\eta, I,a}(z) u_i(z) & \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I) \\ & \hat\mu_i^{I,a} \in \argmax_{\hat \mu_i \in \Delta_{\Pi_i(I)}} \left\{ \sum_{z \in Z(I)} p_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z) \right\} & \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I), \end{align*} % where we replaced the quantification over all distributions $\hat \mu_i \in \Delta_{\Pi_i(I)}$ with inner maximizations, by introducing auxiliary variables $\hat \mu_i^{I,a}$ for each player $i \in N$, infoset $I \in \mathcal{I}_i$, and action $a \in A(I)$. % Next, let us notice that, as long as the objective to be maximized in each inner problem is the sum $\sum_{z \in Z(I)} p_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z) $ (which only contains terms referring to terminal nodes reachable from $I$), strategies $\hat \mu_i \in \Delta_{\Pi_i(I)}$ can be replaced with realization plans $x_i: \Sigma_i \to [0,1]$ such that $x_i(\sigma_i(I))=1$ (\emph{i.e.}, where the probability of reaching infoset $I$ given player $i$'s moves is $1$). % This holds thanks to the equivalence between mixed strategies and realization plans~\citep{von1996efficient}.
% As a result, for every player $i \in N$, infoset $I \in \mathcal{I}_i$, and action $a \in A(I)$, we can write each inner maximization problem as follows: % \begin{subequations}\label{prob:primal_inner} \begin{align} \max & \quad \sum_{z \in Z(I)} \left( \sum_{\substack{\pi_i \in \Pi_i(a)\\\pi_{-i} \in \Pi_{-i}}} \xi^\eta(z,I,\pi) \mu (\pi) \right) u_i(z) x_i[\sigma_i(z)] \quad \textnormal{s.t.} \label{eq:primal_inner_obj}\\ & x_i[\sigma_i(I)] = 1 \label{eq:primal_inner_cons1}\\ & x_i[\sigma_i(J)] = \sum_{a' \in A(J)} x_i[\sigma_i(J)a'] & \forall J \in \mathcal{I}_i: I \preceq J\label{eq:primal_inner_cons2} \\ & x_i[\sigma_i(J) a'] \geq \eta(a') x_i[\sigma_i(J)] & \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J) \label{eq:primal_inner_cons3} \\ & x_i[\sigma_i(I)] \geq 0 \nonumber\\ & x_i[\sigma_i(J) a'] \geq 0& \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J) , \nonumber \end{align} \end{subequations} % where $x_i[\sigma_i(I)]$ and $x_i[\sigma_i(J) a']$ are variables encoding player $i$'s realization plan restricted to sequences extending $\sigma_i(I)$ (these are the only variables needed, since Objective~\eqref{eq:primal_inner_obj} does not depend on the realization plan probabilities of other sequences). % We also notice that the trembles associated with player $i$'s actions at infosets $J \in \mathcal{I}_i : I \preceq J$ (managed by the terms $\xi^{\eta}(z,\hat \pi_i)$ in the definition of $p_{\mu,\hat\mu_i}^{\eta, I,a}(z)$) are encoded by Constraints~\eqref{eq:primal_inner_cons3}, which ensure that each action $a' \in A(J)$ is played with probability $\frac{x_i[\sigma_i(J)a']}{x_i[\sigma_i(J)]} \geq \eta(a')$ (provided that the denominator is nonzero). % The dual of Problem~\ref{prob:primal_inner} reads as follows: % \begin{subequations}\label{prob:dual_inner} \begin{align} \min & \quad v[i,I,a,\varnothing] \quad \textnormal{s.t.} \label{eq:dual_inner_obj}\\ & v[i,I,a,\varnothing] \geq v[i,I,a,I] + \sum_{a' \in A(I)} \eta(a') w[i,I,a,I,a'] \label{eq:dual_inner_cons1}\\ & v[i,I,a,J] - w[i,I,a,J,a'] \geq \sum_{z \in Z^\bot(J,a')} \left( \sum_{\substack{\pi_i \in \Pi_i(a)\\\pi_{-i} \in \Pi_{-i}}} \xi^\eta(z,I,\pi) \mu (\pi) \right) u_i(z) + \nonumber \\ & \hspace{.5cm}+ \sum_{K \in \mathcal{C}(J,a')} \left( v[i,I,a,K] + \sum_{a'' \in A(K)} \eta(a'') w[i,I,a,K,a''] \right) \hspace{1cm} \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J)\label{eq:dual_inner_cons2} \\ & w[i,I,a,J,a'] \geq 0 \hspace{7.6cm} \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J), \nonumber \end{align} \end{subequations} % where $v[i,I,a,\varnothing]$ is the dual variable associated with Constraint~\eqref{eq:primal_inner_cons1}, $v[i,I,a,J]$ for $J \in \mathcal{I}_i : I \preceq J$ are the dual variables associated with Constraints~\eqref{eq:primal_inner_cons2}, and $w[i,I,a,J,a'] $ for $J \in \mathcal{I}_i : I \preceq J$ and $a' \in A(J)$ are the dual variables associated with Constraints~\eqref{eq:primal_inner_cons3}.
% By using the fact that variable $v[i,I,a,\varnothing]$ appears only in Constraint~\eqref{eq:dual_inner_cons1} and by changing the sign of the variables $w[i,I,a,J,a'] $, we can re-write Problem~\eqref{prob:dual_inner} as follows: % \begin{subequations}\label{prob:dual_inner_re} \begin{align} \min & \quad v[i,I,a,I] - \sum_{a' \in A(I)} \eta(a') w[i,I,a,I,a'] \quad \textnormal{s.t.} \label{eq:dual_inner_re_obj}\\ & v[i,I,a,J] - w[i,I,a,J,a'] \geq \sum_{z \in Z^\bot(J,a')} \left( \sum_{\substack{\pi_i \in \Pi_i(a)\\\pi_{-i} \in \Pi_{-i}}} \xi^\eta(z,I,\pi) \mu (\pi) \right) u_i(z) + \nonumber \\ & \hspace{.5cm}+ \sum_{K \in \mathcal{C}(J,a')} \left( v[i,I,a,K] - \sum_{a'' \in A(K)} \eta(a'') w[i,I,a,K,a''] \right) \hspace{1cm} \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J)\label{eq:dual_inner_ne_cons} \\ & w[i,I,a,J,a'] \geq 0 \hspace{7.6cm} \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J). \nonumber \end{align} \end{subequations} % Then, we can remove the inner maximization problems by enforcing strong duality, \emph{i.e.}, we add constraints equating Objective~\eqref{eq:primal_inner_obj} and Objective~\eqref{eq:dual_inner_re_obj}. % Noticing that Objective~\eqref{eq:primal_inner_obj} is equal to $\sum_{z \in Z(I)} p_{\mu,\hat\mu_i^{I,a}}^{\eta, I,a}(z) u_i(z) $, we obtain the following set of linear constraints: % \begin{align*} & \sum_{z \in Z(I)} \left[ \left( \sum_{\substack{\pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) \right) u_i(z) \right] = v[i,I,a,I] - \sum_{a' \in A(I)} \eta(a') w[i,I,a,I,a'] \\ & \hspace{12cm}\forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I) \\ & v[i,I,a,J] - w[i,I,a,J,a'] \geq \sum_{z \in Z^\bot(J,a')} \left( \sum_{\substack{\pi_i \in \Pi_i(a)\\\pi_{-i} \in \Pi_{-i}}} \xi^\eta(z,I,\pi) \mu (\pi) \right) u_i(z) + \\ & \hspace{.5cm}+ \sum_{K \in \mathcal{C}(J,a')} \left( v[i,I,a,K] - \sum_{a'' \in A(K)} \eta(a'') w[i,I,a,K,a''] \right) \\ & \hspace{7.7cm} \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I), \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J)\\ & w[i,I,a,J,a'] \geq 0 \hspace{4.9cm} \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I),\forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J). \end{align*} % By introducing variables $u[i,I,a]$ we obtain the result. % \end{proof} \lemmasep* \begin{proof} The proof follows the same lines as the proof of Lemma~5~of~\citet{huang2008computing} (its complete version can be found in~\citep{huang2011equilibrium}). % This is an extension of the CE existence proof by~\citet{hart1989existence} to the case of EFCE. % It is based on the construction of an auxiliary $2$-player zero-sum EFG, where player $1$ plays first by selecting a strategy profile $\pi \in \Pi$, and player $2$ plays second by choosing an infoset $I \in \mathcal{I}_i$ of some player $i \in N$, an action $a \in A(I)$, and a combination of actions at following infosets $J \in \mathcal{I}_i : I \preceq J$ (intuitively, player $2$ chooses a trigger agent corresponding to $I$ and $a$, together with a possible behavior for the trigger agent). % It is easy to see that, for our Problem~\ref{prob:dual_ellip_pert}, variables in $\boldsymbol{y}$ have the same meaning as in Lemma~5~of~\citet{huang2008computing}, \emph{i.e.}, they represent valid strategies for player $2$ in the auxiliary game. % This is because they satisfy the same linear restrictions $B^\top \boldsymbol{y} = \boldsymbol{0}$.
% As a result, the only difference is in the coefficients of the exponentially-many constraints, which, in our case, are defined by the (perturbed) matrix $A_t$, rather than $A$. % These define the payoffs in the auxiliary game. % In particular, following steps analogous to those by~\citet{huang2011equilibrium} we can conclude that, in the auxiliary game, player $2$'s expected payment to player $1$ when the latter plays $\pi \in \Pi$ is given by the entry of $A_t^\top \boldsymbol{y}$ corresponding to $\pi$. % The rest of the proof then follows the same reasoning as~\citet{huang2011equilibrium}. \end{proof} \section{NEs of Perturbed Extended Games}\label{sec:charac_recc_ne} We provide a characterization of NEs of perturbed extended games $(\Gamma^\textnormal{ext}(\mu), \eta)$, useful for our main algorithmic result on EFPCEs given in the following section. Specifically, we give a set of easily interpretable conditions which ensure that following recommendations is an NE of $(\Gamma^\textnormal{ext}(\mu), \eta)$. These are crucial for the derivation of the LP exploited by our algorithm. Our characterization is inspired by that of EFCEs based on trigger agents (see Lemma~\ref{lem:efce_trigger} in Appendix~\ref{app:trigger}). However, the presence of trembles in extended games requires some key changes, which we highlight in the following. First, we introduce some additional notation. Given a perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta) $, we let $\xi^\eta(z, \pi)$ be the probability of reaching a node $z \in Z$ when a strategy profile $\pi \in \Pi$ is recommended and players obey recommendations, in the presence of trembles defined by $\eta$. Each $\xi^\eta(z, \pi)$ is obtained by multiplying the probabilities of the actions in $\sigma(z)$, \emph{i.e.}, those on the path from the root to $z$. For each $a \in \sigma(z)$, two cases are possible: either $a$ is prescribed by the recommended $\pi$ and played with its maximum probability given $\eta$, or it is \emph{not}, which means that a tremble occurred with probability $\eta(a)$. Formally, letting $\mymathbb{1} \{ a \in \pi \}$ be an indicator for the event $a \in \pi$, for every $z \in Z$ and $\pi \in \Pi$: \begin{equation*}\label{eq:def_xi} \xi^\eta(z, \pi ) \coloneqq p_c(z) \prod_{a \in A : a \in \sigma(z)} \tilde{\eta}(a)^{ \mymathbb{1} \{ a \in \pi \} } \, \eta(a)^{ 1 - \mymathbb{1} \{ a \in \pi \} } , \end{equation*} where, for $a \in A(I)$, we let $\tilde{\eta}(a) \coloneqq 1 - \sum_{a' \in A(I) : a' \neq a} \eta(a')$ be the maximum probability assignable to $a$ given $\eta$. Moreover, for every player $i \in N$, infoset $I \in \mathcal{I}_i$, terminal node $z \in Z(I)$ reachable from $I$, and strategy profile $\pi \in \Pi$, we let $\xi^\eta(z, I, \pi ) $ be defined as $\xi^\eta(z, \pi )$ excluding player $i$'s actions leading from $I$ to $z$, \emph{i.e.}, with the product restricted to actions $a \in \sigma_i(I) \cup \left( \sigma(z) \setminus A_i \right)$. Analogously, for every strategy $\pi_i \in \Pi_i(I)$ of player $i$, we let $\xi^\eta(z, \pi_i )$ be defined for player $i$'s actions $a \in A_i \cap \left( \sigma(z) \setminus \sigma_i(I) \right)$ from $I$ to $z$. Following recommendations is an NE of the perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta) $ if, for every player $ i \in N$, infoset $I \in \mathcal{I}_i$, and action $a \in A(I)$, player $i$'s utility when obeying the recommendation $a$ at $I$ is at least as large as the utility achieved by any $(I,a,\hat\mu_i)$-trigger agent.
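Before detailing these conditions, we note that the reach probabilities $\xi^\eta(z, \pi)$ just defined admit a direct implementation. The following minimal Python sketch (ours; the data structures are hypothetical placeholders) mirrors the definition:
\begin{verbatim}
def tilde_eta(a, eta, infoset_actions):
    # maximum probability assignable to action a given the lower
    # bounds eta on the other actions at the same infoset
    return 1.0 - sum(eta[b] for b in infoset_actions[a] if b != a)

def xi(path, pi, eta, infoset_actions, p_c):
    # path: the player actions in sigma(z); pi: the set of actions
    # prescribed by the recommended strategy profile; p_c: chance
    # probability of reaching z
    prob = p_c
    for a in path:
        if a in pi:  # recommended: played with its maximum probability
            prob *= tilde_eta(a, eta, infoset_actions)
        else:        # not recommended: a tremble with probability eta[a]
            prob *= eta[a]
    return prob
\end{verbatim}
With $\eta \equiv 0$, the sketch returns $p_c(z)$ whenever $\pi$ prescribes every player action on the path to $z$, and $0$ otherwise, thus recovering the unperturbed reach probabilities used for EFCEs in Appendix~\ref{app:trigger}.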
The fundamental differences with respect to EFCE are: \emph{(i)} an infoset $I$ could be reached even when actions recommended at preceding infosets do not allow it (due to trembles); and \emph{(ii)} trigger agents are subject to trembles, which means that they may make mistakes while playing the strategy sampled from $\hat \mu_i$. For any terminal node $z \in Z$, the probability of reaching it when following recommendations is: \begin{equation*}\label{eq:q_pert} q^\eta_\mu(z) \coloneqq \sum_{\pi \in \Pi} \xi^\eta(z,\pi) \mu(\pi) , \end{equation*} where the summation accounts for the probability of reaching $z$ for every possible $\pi$. The sum is over $\Pi$ rather than $\Pi(z)$ as for EFCE (see Equation~\eqref{eq:q} in Appendix~\ref{app:trigger}), since, due to trembles, $z$ could be reached even when $\pi \notin \Pi(z)$. For any $(I,a,\hat\mu_i)$-trigger agent, the probability of reaching $z \in Z(I)$ when the agent `gets triggered' is defined as: \begin{equation*}\label{p_pert} p_{\mu,\hat\mu_i}^{\eta, I,a}(z) \hspace{-1mm}\coloneqq\hspace{-1.5mm} \left( \hspace{-0.5mm}\sum_{\substack{\pi_i \in \Pi_i(a)\\\pi_{-i} \in \Pi_{-i}}} \hspace{-3.5mm} \xi^\eta(z,I,\pi) \mu (\pi) \hspace{-1.5mm} \right) \hspace{-2mm} \left( \hspace{-0.5mm}\sum_{\hat \pi_i \in \Pi_i(I)} \hspace{-3.5mm} \xi^\eta(z, \hat \pi_i) \hat \mu_i(\hat \pi_i) \hspace{-1.5mm} \right) \hspace{-1mm}, \end{equation*} where the first summation is over $\Pi_i(a)$ instead of $\Pi_i(I,a)$ (as in the EFCE, see Equation~\eqref{eq:p} in Appendix~\ref{app:trigger}) since it might be the case that the agent is activated even when the recommended strategy $\pi_i$ does \emph{not} allow infoset $I$ to be reached. Finally, the overall probability of reaching $z \in Z(I)$ is: \begin{equation*}\label{y_pert} y_{\mu,\hat\mu_i}^{\eta, I,a}(z) \coloneqq p_{\mu,\hat\mu_i}^{\eta, I,a}(z) + \sum_{\substack{ \pi_i \in \Pi_i \setminus \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }} \xi^\eta(z,\pi) \mu(\pi) , \end{equation*} where the first term covers the case in which the agent `gets triggered', while the second term accounts for the case in which the agent is \emph{not} activated (the two events are independent). \begin{restatable}{theorem}{thmcharacreccne}\label{thm:charac_recc_ne} Given a perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta) $, following recommendations is an NE of the game if for every $i \in N$ and $(I,a,\hat\mu_i)$-trigger agent for player $i$, it holds that: % \begin{equation*}\label{eq:charac_recc_ne} \sum_{z \in Z(I)} \hspace{-1mm}\left[ \hspace{-1mm} \left( \sum_{\substack{\pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} }}\hspace{-2mm} \xi^\eta(z,\pi) \mu(\pi) \hspace{-1mm}\right) \hspace{-1mm}u_i(z) \hspace{-0.5mm} \right] \hspace{-1.5mm}\geq\hspace{-1.5mm} \sum_{z \in Z(I)} \hspace{-1.5mm}p_{\mu,\hat\mu_i}^{\eta, I,a}(z) u_i(z). \end{equation*} \end{restatable} \section{Discussion and Future Works}\label{sec:discussion} We started the study of \emph{trembling-hand perfection} in sequential games with correlation, introducing the EFPCE as a refinement of the EFCE that amends its weaknesses off the equilibrium path. This paves the way for a new line of research, raising novel game-theoretic and computational challenges. As for EFPCEs, an open question is whether compact correlated strategy representations, like the EFCE-based \emph{correlation plan} by~\citet{von2008extensive}, are possible in some restricted settings, such as $2$-player games without chance. This would enable optimizing over the set of EFPCEs in polynomial time.
The main challenge raised by EFPCEs with respect to EFCEs is that the former require reasoning about general, un-reduced strategy profiles. Another possible direction for future work is to extend our analysis to other CE-based solution concepts, such as the \emph{normal-form} CE and the \emph{agent-form} CE (see~\citep{von2008extensive} for their definitions). This raises the interesting question of how different trembling-hand-based CEs are able to amend weaknesses off the equilibrium path. Finally, an interesting direction is to consider different ways of refining CE-based equilibria in sequential games, such as using \emph{quasi-perfection}~\citep{van1984relation}. \section{Computing an EFPCE in $n$-player EFGs}\label{sec:eah} We provide a polynomial-time algorithm to compute \emph{an} EFPCE in $n$-player EFGs (also with chance). The algorithm is built on three fundamental components: \emph{(i)} a trembling LP (with exponentially many variables and polynomially many constraints) whose limit solutions define EFPCEs; \emph{(ii)} an adaptation of the algorithm by~\citet{farina2018practical} that finds such limit solutions by solving a sequence of (non-trembling) LPs; and \emph{(iii)} a polynomial-time EAH procedure that solves these LPs. \input{example_lp} \paragraph{Trembling LP for EFPCEs} It resembles the EFCE LP in Problem~\ref{prob:primal_ellip}. In this case, the constraints appearing in the LP ensure that following recommendations is an NE in a given sequence of perturbed extended games, by exploiting the characterization given in Theorem~\ref{thm:charac_recc_ne}. Then, Lemma~\ref{lem:efpce_limit_point} allows us to conclude that the limit solutions of the trembling LP define EFPCEs. In the following, we assume that a sequence of perturbed extended games $\{ (\Gamma^{\textnormal{ext}}(\mu), \eta_t) \}_{t \in \mathbb{N}}$ is given. For every player $i \in N$, infoset $I \in \mathcal{I}_i$, and action $a \in A(I)$, we introduce a variable $u[i,I,a]$ to encode player $i$'s expected utility when following the recommendation to play $a$ at $I$ in the perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$. These variables are defined by the following constraints: \begin{align}\label{eq:u_rec_pert} & u[i, I, a] = \sum_{z \in Z(I)} \left( \sum_{\substack{ \pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i}} } \xi^{\eta_t}(z, \pi) \mu[\pi] \right) u_i(z) \\ & \hspace{4.1cm} \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I). \nonumber \end{align} Then, we introduce constraints that recursively define variables $v[i,I,a,J]$ for every infoset $J \in \mathcal{I}_i : I \preceq J$. These encode the maximum expected utility obtained at infoset $J$ by trigger agents associated with $I$ and $a$. To this end, we also need some auxiliary non-negative variables $w[i,I,a,J,a']$, which are defined for every player $i \in N$, infoset $I \in \mathcal{I}_i$, action $a \in A(I)$, infoset $J \in \mathcal{I}_i : I \preceq J$ following $I$ (including $I$ itself), and action $a' \in A(J)$ available at $J$.
\begin{align} & v[i,I,a,J] - w[i,I,a,J,a'] \geq \label{eq:dev_mu_pert} \\ & \hspace{0.3cm} \sum_{z \in Z^\bot(J,a^\prime) } \left( \sum_{\substack{ \pi_i \in \Pi_i(a) \\ \pi_{-i} \in \Pi_{-i} } } \xi^{\eta_t}(z,I, \pi) \mu[\pi] \right) u_i(z) + \nonumber \\ & \hspace{0.3cm}\sum_{K \in \mathcal{C}(J, a^\prime) } \hspace{-1.5mm} \left( v[i,I,a,K] -\hspace{-1.5mm} \sum_{a'' \in A(K)} \hspace{-1.5mm} \eta_t(a'') w[i,I,a,K,a''] \right) \nonumber\\ & \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I), \forall J \in \mathcal{I}_i: I \preceq J, \forall a' \in A(J). \nonumber \end{align} Intuitively, each auxiliary variable $w[i,I,a,J,a']$ represents a penalty on $v[i,I,a,J]$ due to the possibility of trembling by playing a (possibly) sub-optimal action $a' \in A(J)$ at $J$. Indeed, whenever $a'$ is an optimal action at infoset $J$, $w[i,I,a,J,a']$ can be set to $0$; otherwise, $w[i,I,a,J,a']$ represents how much utility is lost by playing $a'$ instead of an optimal action (see Figure~\ref{fig:example_lp_pert}(\emph{Right}) for an example). Finally, the incentive constraints are: \begin{align}\label{eq:inc_mu_pert} & u[i,I,a] = v[i,I,a,I] - \sum_{a' \in A(I)} \eta_t(a') w[i,I,a,I,a'] \\ & \hspace{4cm} \forall i \in N, \forall I \in \mathcal{I}_i, \forall a \in A(I). \nonumber \end{align} Figure~\ref{fig:example_lp_pert} provides an example of Constraints~\eqref{eq:dev_mu_pert}~and~\eqref{eq:inc_mu_pert} to better clarify their meaning. The following theorem shows that Constraints~\eqref{eq:u_rec_pert},~\eqref{eq:inc_mu_pert},~and~\eqref{eq:dev_mu_pert} correctly encode the conditions given in Theorem~\ref{thm:charac_recc_ne}, which ensure that following recommendations is an NE in $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$. \begin{restatable}{theorem}{thmtremblinglptheorem}\label{thm:trembling_lp_theorem} Given a perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$, if Constraints~\eqref{eq:u_rec_pert},~\eqref{eq:dev_mu_pert},~and~\eqref{eq:inc_mu_pert} can be satisfied for the vector $\boldsymbol{\mu}$ of variables $\mu[\pi]$ encoding the distribution $\mu$, then following recommendations is an NE of $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$. \end{restatable} \noindent By substituting the expression of $u[i,I,a]$ (given by Constraints~\eqref{eq:u_rec_pert}~and~\eqref{eq:inc_mu_pert}) into Constraints~\eqref{eq:dev_mu_pert}, we can formulate the following trembling LP parameterized by $t \in \mathbb{N}$: \begin{subequations}\label{prob:primal_ellip_pert} \begin{align} \max_{\boldsymbol{\mu} \geq \boldsymbol{0}, \boldsymbol{v}, \boldsymbol{w} \geq \boldsymbol{0}} & \quad \sum_{\pi \in \Pi} \mu[\pi] \quad \textnormal{s.t.} \\ & A_t \boldsymbol{\mu} + B \boldsymbol{v} + C_t \boldsymbol{w} \geq \boldsymbol{0}, \label{eq:primal_ellip_pert_cons} \end{align} \end{subequations} where $A_t$ is the analogue of matrix $A$ in Problem~\ref{prob:primal_ellip}, $\boldsymbol{w}$ is a vector whose components are the variables $w[i,I,a,J,a']$, and $C_t$ is a matrix defining the constraint coefficients for these variables. Notice that the coefficients of variables in $\boldsymbol{v}$ (as defined by $B$) are the same as in Problem~\ref{prob:primal_ellip}. \paragraph{Limit Solutions of Trembling LP} Problem~\ref{prob:primal_ellip_pert} can be cast into the framework of~\citet{farina2018practical} by defining sequences of lower bounds $\eta_t$ by means of vanishing polynomials in a parameter $\epsilon \rightarrow 0$.
As a result, the polynomial-time algorithm by~\citet{farina2018practical} can be used, with the only difference that, at each step, for a fixed value of the parameter $\epsilon$ (\emph{i.e.}, particular lower bounds $\eta_t$), it needs to solve an instance of Problem~\ref{prob:primal_ellip_pert} featuring exponentially many variables. Provided that the latter can be done in polynomial time, the polynomiality of the overall procedure is preserved, since the bounds on the running time provided by~\citet{farina2018practical} do not depend on the number of variables in the LP. \paragraph{EAH Procedure} In order to solve Problem~\ref{prob:primal_ellip_pert} for a particular lower bound function $\eta_t$ in polynomial time, we can apply a procedure similar to the EAH algorithm by~\citet{huang2008computing}. Notice that Problem~\ref{prob:primal_ellip_pert} is always unbounded, since there always exists a distribution $\mu \in \Delta_\Pi$ such that following recommendations is an NE of the perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta_t)$ (such $\mu$ is an EFCE of the corresponding perturbed, non-extended game). Thus, we only need to provide a polynomial-time separation oracle for the always-infeasible dual of Problem~\ref{prob:primal_ellip_pert}, which reads as: \begin{subequations}\label{prob:dual_ellip_pert} \begin{align} A_t^\top \boldsymbol{y} \hspace{0.3cm} &\leq \hspace{0.3cm}-\boldsymbol{1} \\ B^\top \boldsymbol{y} \hspace{0.3cm}&= \hspace{0.3cm}\textcolor{white}{-}\boldsymbol{0} \\ C_t^\top \boldsymbol{y} \hspace{0.3cm}&\geq \hspace{0.3cm}\textcolor{white}{-}\boldsymbol{0} \\ \boldsymbol{y} \hspace{0.3cm}&\geq \hspace{0.3cm}\textcolor{white}{-} \boldsymbol{0}, \end{align} \end{subequations} where the vector of dual variables $\boldsymbol{y}$ has the same role as in Problem~\ref{prob:dual_ellip}, since the constraints of the primal problems are indexed on the same sets. Notice that there are only polynomially many constraints $C_t^\top \boldsymbol{y} \geq \boldsymbol{0}$. As a result, one can always check whether one of these constraints is violated in polynomial time and, if this is the case, output one such constraint as a violated inequality. This allows us to focus on separation oracles for the other constraints. Then, the required one is given by the following lemma, an analogue of Lemma~\ref{lem:sep_ro}. \begin{restatable}{lemma}{lemmasep}\label{lem:lemmasep} If $\boldsymbol{y} \geq \boldsymbol{0}$ is such that $B^\top \boldsymbol{y} = \boldsymbol{0}$, then there exists $\boldsymbol{\mu}$ encoding a product distribution $\mu \in \Delta_{\Pi}$ such that $\boldsymbol{\mu}^\top A_t^\top \boldsymbol{y} = 0$. % Moreover, $\boldsymbol{\mu}$ can be computed in polynomial time. \end{restatable} \noindent The proof of Lemma~\ref{lem:lemmasep} follows the same lines as that of Lemma~5 by~\citet{huang2008computing} (see~\citep{huang2011equilibrium} for its complete version) and it is based on the CE existence proof by~\citet{hart1989existence}. \section{Introduction}\label{sec:intro} \emph{Nash equilibrium} (NE)~\citep{nash1951non} computation in $2$-player zero-sum games has been the flagship challenge in artificial intelligence for several years (see, \emph{e.g.}, landmark results in poker~\citep{brown2018superhuman,brown2019superhuman}). Recently, increasing attention has been devoted to multi-player games, where equilibria based on \emph{correlation} are now mainstream.
Correlation in games is customarily modeled through a trusted external mediator that privately recommends actions to the players. The mediator acts as a \emph{correlation device} that draws action recommendations according to a publicly known distribution. The seminal notion of \emph{correlated equilibrium} (CE) introduced by~\citet{aumann1974subjectivity} requires that no player has an incentive to deviate from a recommendation. This is encoded by NE conditions applied to an \emph{extended} game where the correlation device plays first by randomly selecting a profile of actions according to the public distribution; then, the original game is played with each player being informed only of the action selected for her. CEs are computationally appealing since they can be implemented in a \emph{decentralized} way by letting players play independently according to no-regret procedures~\citep{hart2000simple}. Computing CEs in sequential (\emph{i.e.}, extensive-form) games with imperfect information has received considerable attention in recent years~\citep{celli2019learning,DBLP:conf/aaai/FarinaBS20}. In this context, various CE definitions are possible, depending on the ways recommendations are revealed to the players. The one that has emerged as the most suitable for sequential games is the \emph{extensive-form correlated equilibrium} (EFCE) of~\citet{von2008extensive}. The key feature of EFCE is that recommendations are revealed to the players only when they reach a decision point where the action is to be played, and, if one player defects from a recommendation, then she stops receiving them in the future. \citet{von2008extensive} show that EFCEs can be characterized by a polynomially-sized \emph{linear program} (LP) in two-player games without chance. In the same restricted setting, \citet{DBLP:conf/nips/FarinaLFS19a} show how to find an EFCE by solving a bilinear saddle-point problem, which can be exploited to derive an efficient no-regret algorithm~\citep{DBLP:conf/nips/FarinaLFS19}. In general $n$-player games, \citet{huang2008computing} prove that \emph{an} EFCE can be computed in polynomial time by means of an \emph{ellipsoid against hope} (EAH) algorithm similar to that introduced by~\citet{papadimitriou2008computing} for CEs in compactly represented games (see also~\citep{DBLP:conf/icml/GordonGM08} for another algorithm). Instead, finding a payoff-maximizing EFCE is $\mathsf{NP}$-hard~\citep{von2008extensive}. Very recently,~\citet{celli2020no} provide an efficient no-regret procedure for EFCE in $n$-player games. One of the crucial weaknesses of standard equilibrium notions, such as NE, in sequential games is that they may prescribe sub-optimal play off the equilibrium path, \emph{i.e.}, at those information sets never reached when playing equilibrium strategies. One way to amend this issue is \emph{trembling-hand perfection}~\citep{selten1975reexamination}, whose rationale is to let players reason about the possibility that they may make mistakes in the future, playing sub-optimal actions with small, vanishing probabilities (a.k.a.~trembles). This idea leads to the NE refinement known as \emph{perfect equilibrium} (PE)~\citep{selten1975reexamination}. Other refinements have been introduced in the literature; \emph{e.g.}, in the \emph{quasi-perfect equilibrium} of~\citet{van1984relation} players only account for opponents' future trembles (see~\citep{van1991stability} for other examples).
Trembles can also be introduced in normal-form games, leading to robust equilibria that rule out weakly dominated strategies~\citep{hillas2002foundations}. Recently, equilibrium refinement has been addressed beyond the NE case, \emph{e.g.}, in Stackelberg settings~\citep{farina2018trembling,marchesi2019quasi}. Trembling-hand perfection for CEs has only been studied from a theoretical viewpoint in normal-form games, by~\citet{dhillon1996perfect}. The authors introduce the concept of \emph{perfect} CE by enforcing PE conditions in the extended game, rather than NE ones. Although equilibrium refinements in sequential games are ubiquitous, no previous work has addressed perfection and correlation together in this setting.~\footnote{Applying the perfect CE by~\citet{dhillon1996perfect} to the normal-form representation of a sequential game does \emph{not} generally solve equilibrium weaknesses. This would lead to a correlated version of the \emph{normal-form} PE, which is known not to guard against sub-optimality off the equilibrium path~\citep{van1991stability}.} \paragraph{Original Contributions} We give an axiomatic definition of \emph{extensive-form perfect correlated equilibrium} (EFPCE), enforcing PE conditions, rather than NE ones, in the extended game introduced by~\citet{von2008extensive} for their original definition of EFCE. Intuitively, this accounts for the possibility that players may make mistakes while following recommendations independently at each information set of the game. Trembles are introduced on players' strategies, while the correlation device is defined as in classical CE notions. First, we show that an EFPCE always exists, since any PE constitutes an EFPCE, and that EFPCE is a refinement of EFCE, as any EFPCE is also an EFCE. Then, we show how \emph{an} EFPCE can be computed in polynomial time in any $n$-player extensive-form game (also with chance). First, we introduce a characterization of the equilibria of perturbed extended games (\emph{i.e.}, extended games with trembles) inspired by the definition of EFCE based on \emph{trigger agents}, introduced by~\citet{DBLP:conf/icml/GordonGM08} and~\citet{DBLP:conf/nips/FarinaLFS19a}. This result allows us to formulate the EFPCE problem as that of finding a limit solution (as $\epsilon \rightarrow 0$) to a suitably defined \emph{trembling} LP parametrized by $\epsilon$, featuring exponentially many variables and polynomially many constraints. To this end, we show how the polynomial-time algorithm for trembling LPs developed by~\citet{farina2018practical} can be adapted to deal with problems having an exponential number of variables. This calls for the solution of a sequence of (non-trembling) LPs with exponentially many variables and polynomially many constraints, which is possible in polynomial time by applying an EAH approach. The latter is inspired by the analogous algorithm of~\citet{huang2008computing} for EFCEs, which is adapted to deal with a different set of dual constraints, requiring a modification of the polynomial-time separation oracle of~\citet{huang2008computing}.~\footnote{All the omitted proofs are in Appendix~\ref{app:proofs}.} \section{Trembling-Hand Perfection and Correlation}\label{sec:perf_corr} We are now ready to show how trembling-hand perfection can be injected into the definition of EFCE so as to amend its weaknesses off the equilibrium path (see the following for an example).
We generalize the approach of~\citet{dhillon1996perfect} (restricted to CEs in normal-form games) to the general setting of EFCEs in EFGs. The core idea is to use the PE rather than the NE in the definition of CE. Thus: \begin{definition}\label{def:efpce} Given an EFG $\Gamma$, a distribution $\mu \in \Delta_{\Pi}$ is an \emph{extensive-form perfect correlated equilibrium (EFPCE)} if following recommendations is a PE of $\Gamma^{\textnormal{ext}}(\mu)$. \end{definition} The definition of EFPCE crucially relies on the introduction of trembles in extended games, \emph{i.e.}, it takes into account the possibility that each player may not follow action recommendations with a small, vanishing probability. In the following, given a perturbed EFG $(\Gamma, \eta)$ and $\mu \in \Delta_\Pi$, we denote with $(\Gamma^{\textnormal{ext}}(\mu), \eta)$ a perturbed extended game in which the probability of playing each action is subject to a lower bound equal to the lower bound $\eta(a)$ of the corresponding action $a \in A$ in $\Gamma$. By recalling the definition of PE (Definition~\ref{def:pe}) and the structure of perturbed extended games, it is easy to infer the following characterization of EFPCEs: \begin{lemma}\label{lem:efpce_limit_point} Given an EFG $\Gamma$, a distribution $\mu \in \Delta_{\Pi}$ is an EFPCE of $\Gamma$ if following recommendations constitutes an NE of each game in at least one sequence of perturbed extended games $\{ (\Gamma^{\textnormal{ext}}(\mu), \eta_t) \}_{t \in \mathbb{N}}$ such that, for all $a \in A$, the lower bounds $\eta_t(a)$ converge to zero as $t \rightarrow \infty$. \end{lemma} We remark that, with an abuse of terminology, we say that players follow recommendations in a perturbed extended game $(\Gamma^{\textnormal{ext}}(\mu), \eta)$ whenever they play strategies which place all the residual probability (given the lower bounds) on recommended actions. In the following sections, we crucially rely on the characterization of EFPCEs given in Lemma~\ref{lem:efpce_limit_point} in order to derive our computational results. First, we show an example of EFPCE and prove some of its properties. \paragraph{Example of EFPCE} Consider the EFG in Figure~\ref{fig:example_game}(\emph{Left}) and lower bounds $\eta_t: A \to (0,1)$ for $t \in \mathbb{N}$, with $\eta_t(a) \rightarrow 0$ as $t \rightarrow \infty$ for all $a \in A$. First, notice that player $1$ is always better off playing action $a$ at the root infoset $\textsc{i}$, since she can guarantee herself a utility of $1$ by selecting $c$ at the following infoset $\textsc{j}$, while she can achieve at most $\frac{1}{2}$ by playing $b$. Thus, any EFPCE of the game (as well as any EFCE) must recommend $a$ at $\textsc{i}$ with probability $1$. Then, in the sub-game reached when playing $a$ at $\textsc{i}$, it is easy to check that recommending the pairs of actions $(c,m)$, $(c,n)$, and $(d,m)$ each with probability $\frac{1}{3}$ is an equilibrium, as no player has an incentive to deviate from a recommendation, even with trembles (see Appendix~\ref{app:example} for more details). The correlation device described so far is sufficient to define an EFCE, as recommendations at infosets $\textsc{y}$, $\textsc{k}$, and $\textsc{l}$ are \emph{not} relevant given that they do not influence players' utilities at the equilibrium ($b$ is never recommended). However, they become relevant for EFPCEs, since, in perturbed extended games, these infosets could be reached due to a tremble with probability $\eta_t(b)$.
Then, player $2$ must be told to play $p$ at $\textsc{y}$, because her utility is always $1$ if she plays $p$, while it is always $0$ for $o$. Moreover, by analogous reasoning, player $1$ must be recommended to play $e$ and $h$ at $\textsc{k}$ and $\textsc{l}$, respectively. In conclusion, the distribution $\mu \in \Delta_{\Pi}$ with $\mu(aceh,mp) = \mu(aceh,np) = \mu(adeh,mp) = \frac{1}{3}$ is an EFPCE. \paragraph{Properties of EFPCEs} We characterize the relation between EFPCEs and other equilibria, also showing that EFPCEs always exist and represent a refinement of EFCEs.\footnote{In the following, we denote the sets of equilibria with their corresponding acronyms (\emph{e.g.}, \textsf{NE} is the set of all NEs of a game).} \begin{restatable}{theorem}{thmrelations}\label{thm:relations} The following relation holds: $ \textnormal{\textsf{PE}} \subseteq \textnormal{\textsf{EFPCE}} \subseteq \textnormal{\textsf{EFCE}}. $ \end{restatable} \begin{restatable}{theorem}{thmrelationsne}\label{thm:relations_ne} The following relations hold: \begin{itemize} \item $\textnormal{\textsf{EFPCE}} \not\subset \textnormal{\textsf{NE}}$ and $\textnormal{\textsf{NE}} \not\subset \textnormal{\textsf{EFPCE}}$; \item $\textnormal{\textsf{EFPCE}} \cap \textnormal{\textsf{NE}} = \textnormal{\textsf{PE}}$. \end{itemize} \end{restatable} \section*{Acknowledgments} This work has been partially supported by the Italian MIUR PRIN 2017 Project ALGADIMAR ``Algorithms, Games, and Digital Market''. \section{Preliminaries}\label{sec:prelim} \input{example_efg} \subsection{Extensive-Form Games} We focus on $n$-player \emph{extensive-form games} (EFGs) with imperfect information. We let $N \coloneqq \{ 1, \ldots, n \}$ be the set of players, and, additionally, we let $c$ be the \emph{chance} player representing exogenous stochasticity. The sequential structure is encoded by a game tree with node set $H$. Each node $h \in H$ is identified by the ordered sequence $\sigma(h)$ of actions encountered on the path from the root to $h$. We let $Z \subseteq H$ be the subset of terminal nodes, which are the leaves of the game tree. For every non-terminal node $h \in H \setminus Z$, we let $P(h) \in N \cup \{ c \}$ be the player who acts at $h$, while $A(h)$ is the set of actions available. The function $p_c: Z \to (0,1]$ assigns to each terminal node the product of the probabilities of the chance moves on the path from the root to that node. For every player $i \in N$, the function $u_i : Z \to \mathbb{R}$ encodes player $i$'s utilities over terminal nodes. Imperfect information is modeled through \emph{information sets} (infosets). An infoset $I \subseteq H \setminus Z$ of player $i \in N$ is a group of player $i$'s nodes indistinguishable to her, \emph{i.e.}, for every $h \in I$, it must be the case that $P(h) = i$ and $A(h) = A(I)$, where $A(I)$ is the set of actions available at the infoset. W.l.o.g., we assume that the sets $A(I)$ are disjoint. We denote with $\mathcal{I}_i$ the collection of infosets of player $i \in N$. For every $i \in N$, we let $A_i \coloneqq \bigcup_{I \in \mathcal{I}_i} A(I)$ be the set of all player $i$'s actions. Moreover, we let $A \coloneqq \bigcup_{i \in N} A_i$. We focus on EFGs with \emph{perfect recall} in which no player forgets what she did or knew in the past. Formally, for every player $i \in N$ and infoset $I \in \mathcal{I}_i$, it must be that every node $h \in I$ is identified by the same ordered sequence $\sigma_i(I)$ of player $i$'s actions from the root to that node.
Given two infosets $I, J \in \mathcal{I}_i$ of player $i \in N$, we say that $J$ \emph{follows} $I$, written $I \prec J$, if there exist two nodes $h \in I$ and $k \in J$ such that $h$ is on the path from the root to $k$. By perfect recall, $\prec$ is a partial order on $\mathcal{I}_i$. We also write $I \preceq J$ whenever either $I = J$ or $I \prec J$. For every infoset $I \in \mathcal{I}_i$, we let $\mathcal{C}(I,a) \subseteq \mathcal{I}_i$ be the set of all infosets that immediately follow $I$ by playing action $a \in A(I)$. \paragraph{Strategies} A player's \emph{pure strategy} specifies an action at each of her infosets. For every $i \in N$, the set of player $i$'s pure strategies $\pi_i$ is $\Pi_i \coloneqq \text{\LARGE $\times$}_{I \in \mathcal{I}_i} A(I)$, with $\pi_i(I) \in A(I)$ being the action at infoset $I \in \mathcal{I}_i$. Moreover, $\Pi \coloneqq \text{\LARGE $\times$}_{i \in N} \Pi_i$ denotes the set of \emph{strategy profiles} specifying a strategy for each player, while, for $i \in N$, we let $ \Pi_{-i} \coloneqq \text{\LARGE $\times$}_{j \neq i \in N} \Pi_j$ be the (partial) strategy profiles defining a strategy for each player other than $i$. Given $\pi_i \in \Pi_i$ and $a \in A_i$, we write $a \in \pi_i$ whenever $\pi_i$ prescribes playing $a$. Analogously, for $\pi \in \Pi$ and $a \in A$, we write $a \in \pi$. Players are allowed to randomize over pure strategies by playing \emph{mixed strategies}. For $i \in N$, we let $\mu_i: \Pi_i \to [0,1]$ be a player $i$'s mixed strategy, where $\sum_{\pi_i \in \Pi_i} \mu_i(\pi_i) = 1$. The perfect recall assumption allows one to work with \emph{behavior strategies}, which define probability distributions locally at each infoset. For $i \in N$, we let $\beta_i : A_i \to [0,1]$ be a player $i$'s behavior strategy, which is such that $\sum_{a \in A(I)} \beta_i(a) = 1$ for all $I \in \mathcal{I}_i$.\footnote{EFGs with perfect recall admit a compact strategy representation called \emph{sequence form}~\citep{von1996efficient}. See Appendix~\ref{app:sequence}.} \paragraph{Additional Notation} We introduce some subsets of $\Pi_i$ (see Figure~\ref{fig:example_game} for some examples). For every action $a \in A_i$ of player $i \in N$, we define $\Pi_i(a) \coloneqq \{ \pi_i \in \Pi_i \mid a \in \pi_i \}$ as the set of player $i$'s pure strategies specifying $a$. For every infoset $I \in \mathcal{I}_i$, we let $\Pi_{i}(I) \subseteq \Pi_i$ be the set of strategies that prescribe playing so as to reach $I$ whenever possible (depending on players' moves up to that point) and \emph{any} action whenever reaching $I$ is \emph{not} possible anymore. Additionally, for every action $a \in A(I)$, we let $\Pi_{i}(I,a) \subseteq \Pi_{i}(I) \subseteq \Pi_i$ be the set of player $i$'s strategies that reach $I$ and play $a$. Given a terminal node $z \in Z$, we denote with $\Pi_i(z) \subseteq \Pi_i$ the set of strategies by which player $i$ plays so as to reach $z$, while $\Pi(z) \coloneqq \text{\LARGE $\times$}_{i \in N} \Pi_i(z)$ and $\Pi_{-i}(z) \coloneqq \text{\LARGE $\times$}_{j \neq i \in N} \Pi_j(z)$. We also introduce the following subsets of $Z$. For every $i \in N$ and $I \in \mathcal{I}_i$, we let $Z(I) \subseteq Z$ be the set of terminal nodes reachable from infoset $I$ of player $i$.
Moreover, $Z(I, a) \subseteq Z(I) \subseteq Z$ is the set of terminal nodes reachable by playing action $a \in A(I)$ at $I$, whereas $Z^\bot(I,a) \coloneqq Z(I,a) \setminus \bigcup_{J \in \mathcal{C}(I,a)} Z(J)$ is the set of those reachable by playing $a$ at $I$ without traversing any other player $i$'s infoset. \subsection{Nash Equilibrium and Its Refinements} Given an EFG, players' behavior strategies $\{ \beta_i \}_{i \in N}$ constitute an NE if no player has an incentive to unilaterally deviate from the equilibrium by playing another strategy~\citep{nash1951non}. The PE defined by~\citet{selten1975reexamination} relies on the idea of introducing \emph{trembles} in the game, representing the possibility that players may take non-equilibrium actions with small, vanishing probability. Trembles are encoded by means of Selten's \emph{perturbed games}, which force lower bounds on the probabilities of playing actions. Given an EFG $\Gamma$, a pair $(\Gamma, \eta)$ defines a perturbed game, where $\eta : A \to (0,1)$ is a function assigning a positive lower bound $\eta(a)$ on the probability of playing each action $a \in A$, with $\sum_{a \in A(I)} \eta(a) < 1$ for every $i \in N$ and $I \in \mathcal{I}_i$. Then: \begin{definition}\label{def:pe} Given an EFG $\Gamma$, $\{ \beta_i \}_{i \in N}$ is a PE of $\Gamma$ if it is a limit point of NEs for at least one sequence of perturbed games $\{(\Gamma,\eta_t)\}_{t \in \mathbb{N}}$ such that, for all $a \in A$, the lower bounds $\eta_t(a)$ converge to zero as $t \rightarrow \infty$. \end{definition} There are only a few computational works on NE refinements. For instance, \citet{miltersen2010computing} characterize quasi-perfect equilibria of $2$-player EFGs using the sequence form (see the recent work by~\citet{gatti2020characterization} for its extension to $n$-player games) and exploit this to compute an equilibrium by solving a linear complementarity problem with trembles defined as polynomials of some parameter treated symbolically. \citet{DBLP:conf/aaai/Farina017} do the same for the PE. Recently,~\citet{farina2018practical} provide a general framework for computing NE refinements in $2$-player zero-sum EFGs in polynomial time. The authors show how to reduce the task to the more general problem of solving \emph{trembling} LPs parametrized by $\epsilon$, \emph{i.e.}, finding their limit solutions as $\epsilon \rightarrow 0$. Then, they provide a general polynomial-time algorithm to find limit solutions to trembling LPs. Other works study the problem of computing (approximate) NE refinements in $2$-player zero-sum EFGs by employing online convex optimization techniques~\citep{DBLP:conf/ijcai/KroerFS17,farina2017regret}. \subsection{Correlation in Extensive-Form Games} We model a correlation device as a probability distribution $\mu \in \Delta_{\Pi}$. In the classical CE by~\citet{aumann1974subjectivity}, the correlation device draws a strategy profile $\pi \in \Pi$ according to $\mu$; then, it privately communicates $\pi_i$ to each player $i \in N$. This notion of CE does \emph{not} fit EFGs well, as it requires the players to reason over the exponentially-sized set $\Pi_i$. \citet{von2008extensive} introduced the EFCE to solve this issue.
The first crucial feature of the EFCE is a different way of giving recommendations: the strategy $\pi_i$ is revealed to player $i$ as the game progresses, \emph{i.e.}, the player is recommended to play the action $\pi_i(I)$ at infoset $I \in \mathcal{I}_i$ only when $I$ is actually reached during play. The second key aspect characterizing EFCEs is that, whenever a player decides to defect from a recommended action at some infoset, she may choose any move at her subsequent infosets and she stops receiving recommendations from the correlation device. The definition of EFCE introduced by~\citet{von2008extensive} (Definition~\ref{def:efce}) requires the introduction of the notion of \emph{extended game} with a correlation device. \begin{definition}\label{def:ext_game} Given an EFG $\Gamma$ and a distribution $\mu \in \Delta_{\Pi}$, the extended game $\Gamma^{\textnormal{ext}}(\mu)$ is a new EFG in which chance first selects $\pi \in \Pi$ according to $\mu$, and, then, $\Gamma$ is played with each player $i \in N$ receiving the recommendation to play $\pi_i(I)$ as a signal, whenever she reaches an infoset $I \in \mathcal{I}_i$. \end{definition} The signaling in $\Gamma^{\textnormal{ext}}(\mu)$ induces a new infoset structure. Specifically, every infoset $I \in \mathcal{I}_i$ of the original game $\Gamma$ corresponds to many new infosets in $\Gamma^{\textnormal{ext}}(\mu)$, one for each combination of possible action recommendations received at the infosets preceding $I$ (including $I$ itself). At each new infoset, player $i$ can only distinguish among chance moves corresponding to strategy profiles $\pi \in \Pi$ that differ in the recommendations at infosets $J \in \mathcal{I}_i: J \preceq I$. Figure~\ref{fig:extended_game} shows a simple EFG with its corresponding extended game. \input{example_extended_game} \begin{definition}\label{def:efce} Given an EFG $\Gamma$, $\mu \in \Delta_{\Pi}$ defines an EFCE of $\Gamma$ if following recommendations is an NE of $\Gamma^{\textnormal{ext}}(\mu)$.\footnote{For EFCEs, one can restrict the attention to distributions $\mu$ over \emph{reduced} strategy profiles, \emph{i.e.}, those in which each player's pure strategy only specifies actions at infosets reachable given that player's moves~\citep{vermeulen1998reduced}. In the following, we stick to general, un-reduced strategy profiles since, as shown in Appendix~\ref{app:reduced}, these are necessary for trembling-hand perfect CEs in order to define the players' behavior off the equilibrium path.} \end{definition} Next, we introduce an equivalent characterization of EFCEs~\citep{DBLP:conf/aaai/FarinaBS20}. It is based on the following concept of \emph{trigger agent}, originally due to~\citet{DBLP:conf/icml/GordonGM08}. \begin{definition}\label{def:trigger} Given an infoset $I \in \mathcal{I}_i$ of player $i \in N$, an action $a \in A(I)$, and a distribution $\hat \mu_i \in \Delta_{\Pi_i(I)}$, an \emph{$(I, a, \hat \mu_i)$-trigger agent for player $i$} is an agent that takes on the role of player $i$ and follows all recommendations unless she reaches $I$ and gets recommended to play $a$. If this happens, she stops committing to recommendations and plays according to a strategy sampled from $\hat \mu_i$ until the game ends.
\end{definition} Then, it follows that $\mu \in \Delta_\Pi$ is an EFCE if, for every $i \in N$, player $i$'s expected utility when following recommendations is at least as large as the expected utility that any $(I, a, \hat \mu_i)$-trigger agent for player $i$ can achieve (assuming the opponents' do not deviate from recommendations). We provide a formal statement in Appendix~\ref{app:trigger}. \paragraph{Computing EFCEs in $n$-player EFGs} The algorithm of~\citet{huang2008computing} relies on the following LP formulation of the problem of finding an EFCE, which has exponentially many variables and polynomially many constraints (for completeness, its derivation is in Appendix~\ref{app:lp}). \begin{subequations}\label{prob:primal_ellip} \begin{align} \max_{\boldsymbol{\mu} \geq \boldsymbol{0}, \boldsymbol{v}} & \quad \sum_{\pi \in \Pi} \mu[\pi] \quad \textnormal{s.t.} \\ & A \boldsymbol{\mu} + B \boldsymbol{v }\geq \boldsymbol{0}, \end{align} \end{subequations} where $\boldsymbol{\mu}$ is a vector of variables $\mu[\pi]$ for $\pi \in \Pi$, encoding a probability distribution $\mu \in \Delta_{\Pi}$. Problem~\ref{prob:primal_ellip} does not enforce any simplex constraint on variables $\mu[\pi]$, and, thus, it is either unbounded or it has an optimal solution with value zero (by setting $\boldsymbol{\mu}$ and $\boldsymbol{v}$ to zero). In the former case, any feasible $\boldsymbol{\mu}$ encodes an EFCE after normalizing it. As a result, since an EFCE always exists~\citep{von2008extensive}, the following dual of Problem~\ref{prob:primal_ellip} is always infeasible: \begin{subequations}\label{prob:dual_ellip} \begin{eqnarray} A^\top \boldsymbol{y} &\leq& -\boldsymbol{1} \\ B^\top \boldsymbol{y} &=& \textcolor{white}{-}\boldsymbol{0} \\ \boldsymbol{y} &\geq& \textcolor{white}{-}\boldsymbol{0}, \end{eqnarray} \end{subequations} where $\boldsymbol{y}$ is a vector of dual variables. The EAH approach applies the ellipsoid algorithm~\citep{grotschel1993geometric} to Problem~\ref{prob:dual_ellip} in order to conclude that it is infeasible. Since there are exponentially many constraints, the algorithm runs in polynomial time only if a polynomial-time separation oracle is available. This is given by the following: \begin{lemma}[Lemma~5, \citep{huang2008computing}]\label{lem:sep_ro} If $\boldsymbol{y} \geq \boldsymbol{0}$ is such that $B^\top \boldsymbol{y} = \boldsymbol{0}$, then there exists $\boldsymbol{\mu}$ encoding a product distribution $\mu \in \Delta_{\Pi}$ such that $\boldsymbol{\mu}^\top A^\top \boldsymbol{y} = 0$. % Moreover, $\boldsymbol{\mu}$ can be computed in polynomial time. \end{lemma} \noindent \citet{jiang2015polynomial} show how, given a product distribution $\mu$ computed as in Lemma~\ref{lem:sep_ro}, it is possible to recover, in polynomial time, a violated constraint for Problem~\ref{prob:dual_ellip}, corresponding to some strategy profile $\pi \in \Pi$. This, together with some additional technical tricks ensuring that $B^\top \boldsymbol{y} = \boldsymbol{0}$ holds (see~\citep{huang2008computing} for more details), allows to apply the ellipsoid algorithm to Problem~\ref{prob:dual_ellip} in polynomial time. Since the problem is infeasible, the algorithm must terminate after polynomially many iterations with a collection of violated constraints, which correspond to polynomially many strategy profiles. Then, solving (in polynomial time) Problem~\ref{prob:primal_ellip} with the variables $\boldsymbol{\mu}$ restricted to these strategy profiles gives an EFCE of the game. 
Let us also remark that the EFCE obtained in this way has support size polynomial in the size of the game.
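For readability, we recapitulate the EAH pipeline just described; the following enumeration is merely our paraphrase of the procedure of~\citet{huang2008computing} and~\citet{jiang2015polynomial}.
\begin{enumerate}
\item Run the ellipsoid algorithm on the (infeasible) dual Problem~\ref{prob:dual_ellip}; answer each separation query by computing a product distribution $\boldsymbol{\mu}$ as in Lemma~\ref{lem:sep_ro} and recovering from it a violated dual constraint, which corresponds to a strategy profile $\pi \in \Pi$.
\item After polynomially many iterations the ellipsoid algorithm certifies infeasibility, having collected polynomially many strategy profiles $\pi^{(1)}, \ldots, \pi^{(k)}$.
\item Solve Problem~\ref{prob:primal_ellip} with $\boldsymbol{\mu}$ restricted to the variables $\mu[\pi^{(1)}], \ldots, \mu[\pi^{(k)}]$, and normalize the resulting $\boldsymbol{\mu}$: the normalized vector encodes an EFCE whose support has polynomial size.
\end{enumerate}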
{ "attr-fineweb-edu": 1.65332, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} \label{sec intro} In a recent paper of Hajac, Reznikoff and Tobolski (\cite{hrt}) the authors give conditions they call \textit{admissibility} on a pair of subgraphs of a directed graph implying that the $C^*$-algebras of the three graphs fit into a pullback diagram. The admissibility implies, in particular, that the $C^*$-algebra of each subgraph is a quotient of the $C^*$-algebra of the ambient graph, or, equivalently, that the complement of the vertex set of each subgraph is \textit{saturated} and \textit{hereditary}. In order to accommodate graphs that are not row finite they also require that the hereditary sets be \textit{unbroken}, meaning that neither hereditary set admits any \textit{breaking vertices} (see Definition \ref{def saturated}). The breaking vertices form a crucial part of the classification of gauge-invariant ideals of graph algebras (\cite{bhrs}). Admissibility in \cite{hrt} implies that the saturated hereditary set corresponding to each subgraph is associated to a unique gauge-invariant ideal in the $C^*$-algebra of the ambient graph, and that the quotient by this ideal is isomorphic to the $C^*$-algebra of the subgraph. It is shown in \cite{bhrs} that in the presence of breaking vertices, a saturated hereditary set is associated with several gauge-invariant ideals, corresponding to the subsets of the breaking vertices. Except in the case where the entire set of breaking vertices is used, the quotient $C^*$-algebra will not equal the algebra of a subgraph. In \cite{bhrs}, for each gauge-invariant ideal a new graph is constructed whose $C^*$-algebra does equal the quotient by that ideal. However, these quotients can in fact be presented as $C^*$-algebras associated to actual subgraphs by considering \textit{relative Toeplitz graph algebras} (see Definition \ref{def relative toeplitz algebra}). In this paper we show that the question of obtaining a pullback diagram of $C^*$-algebras from a pushout diagram of graphs extends naturally to the context of relative Toeplitz graph algebras. This makes use of all of the gauge-invariant ideals of $\mathcal{T} C^*(E)$. We introduce a corresponding category of \textit{relative directed graphs} and a contravariant functor from it to $C^*$-algebras. This is an extension of the results of \cite{bhrs}. We prove that pushouts exist in this category, and then characterize those pushout diagrams that give rise to pullback diagrams of $C^*$-algebras. We call such diagrams \textit{admissible} in imitation of the notion in \cite{hrt}, although our notion is quite different from theirs. Essentially, most of the properties coming from the admissibility conditions of \cite{hrt} are incorporated in our definition of morphism in the category of relative directed graphs. Our admissibility condition has to do with the variety of gauge-invariant ideals available in the case of arbitrary graphs. We briefly describe the contents of the paper. In section \ref{section relative graph algebras} we recall the terminology of directed graphs as used in operator algebras. In particular we recall the relative Toeplitz graph algebras. The standard definition is by means of generators and relations (the \textit{Cuntz-Krieger relations}). However these algebras are also described as $C^*$-algebras of certain \'etale groupoids, and we find it advantageous to work with the groupoid description. In section \ref{section groupoid picture} we recall this description as given in \cite{spi2}.
(This is slightly different from the original groupoid description.) We give the definitions of the usual graph algebras and Toeplitz graph algebras, as well as that of the relative Toeplitz graph algebras, and describe the ideals relevant to this paper in terms of open invariant subsets of the unit space of the groupoid. In section \ref{section gauge-invariant ideals} we prove that these are, in fact, all of the gauge-invariant ideals in the relative Toeplitz graph algebras, giving a natural extension of the results of \cite{bhrs}. We have tried to keep this argument as general as possible for as long as possible, so that the specifics of graph algebras are needed only in the last part of the argument. In the end we give a simple characterization of the gauge-invariant ideals in terms of subsets of the graph, generalizing the characterization of such ideals in graph algebras from \cite{bhrs}. We note that while a more general theorem is proved in \cite{sww} for higher rank graphs, the characterization in that generality is quite complicated, and indeed, even translating it to the case of directed graphs is complicated. Thus we feel that our direct proof and simple characterization for directed graphs justify the inclusion of our treatment. In section \ref{section relative graphs} we introduce a new category of \textit{relative graphs}. The objects are pairs consisting of a directed graph and a certain subset of the vertex set. The definition is exactly what is needed to describe the relative Toeplitz graph algebras in terms of the graphs. We prove that it is indeed a category, and that pushouts exist in this category. In section \ref{section admissible pushouts of relative graphs} we first recall a theorem of Pedersen to characterize pullback diagrams with surjective maps. In the final section \ref{section examples} we consider several examples to illustrate the main theorem. In particular we consider the two extreme situations: where all four algebras are Toeplitz graph algebras, and where all four algebras are graph algebras. In the second of these we compare our theorem with the theorem of \cite{hrt}. We wish to place our results in the context of the paper \cite{kpsw}. That paper treats pullbacks of the Toeplitz and Cuntz-Krieger algebras of finitely aligned higher-rank graphs (without sources) by means of a connected sum of the graphs. Apart from the restriction to sourceless graphs (which is a reasonable restriction in the higher-rank case), this is much more general than the situation of this paper. The new feature in our treatment is to consider pullbacks of relative Toeplitz graph algebras. It would be interesting to work out the pullback structure for relative Toeplitz algebras of higher-rank graphs using the results of \cite{sww}. Throughout this paper we use the \textit{Australian convention} for the $C^*$-algebras of directed graphs, as presented in \cite{rae}. Thus edges of the graph are thought of as morphisms in a (small) category, and concatenation of edges corresponds to composition of morphisms. The result is that when the vertices and edges of a directed graph are represented as projections and partial isometries in a $C^*$-algebra, the source vertex of an edge is represented by the projection corresponding to the initial space of the partial isometry representing that edge. 
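To fix ideas (the following illustration is ours): if $e \in E^1$ is an edge with $s(e) = v$ and $r(e) = w$, then under this convention the partial isometry $S_e$ representing $e$ and the projections $P_v$, $P_w$ representing its endpoints satisfy
\[
S_e^* S_e = P_v, \qquad S_e S_e^* \le P_w,
\]
so the initial space of $S_e$ sits at the source vertex, while the final space sits under the projection at the range vertex. These are precisely the relations (CK1) and (CK3) of the next section.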
\section{Relative Toeplitz graph algebras} \label{section relative graph algebras} \begin{Definition} A \textit{directed graph} is a quadruple $E = (E^0, E^1, r, s)$, where $E^0$ is the set of vertices, $E^1$ is the set of directed edges, and $r, s: E^1 \to E^0$ are the range and source maps, respectively, so that if $e \in E^1$ is an edge from $v$ to $w$, then $r(e) = w$ and $s(e) = v$. We then say that $v$ \textit{emits} the edge $e$, and that $w$ \textit{receives} the edge $e$. We let $E^n$ denote the set of directed paths in $E$ consisting of $n$ edges, $E^* = \bigcup_{n=0}^\infty E^n$, the set of all directed paths of finite nonnegative length, and $E^\infty$ the set of all (semi) infinite directed paths. Thus $\alpha \in E^n$ means that $\alpha = e_1 e_2 \cdots e_n$, where $e_1, \ldots, e_n \in E^1$ and $s(e_i) = r(e_{i+1})$ for $1 \le i < n$, and $x \in E^\infty$ means that $x = e_1 e_2 \cdots$, where for each $n$, $e_1 \cdots e_n \in E^n$. We extend the source and range maps to $E^*$ and $E^\infty$ by setting (in the above) $s(\alpha) = s(e_n)$, $r(\alpha) = r(x) = r(e_1)$. For $\alpha \in E^*$ we write $\alpha E^* := \{\alpha \beta : \beta \in E^*, r(\beta) = s(\alpha) \}$, and similarly for $E^* \alpha$, $\alpha E^\infty$, and if $x \in E^\infty$, for $E^* x$. Thus we may write $r^{-1}(v) = v E^1$ and $s^{-1}(v) = E^1 v$ for $v \in E^0$ to avoid specifying to which version of $r$ or $s$ we refer, and for $A \subseteq E^* \cup E^\infty$ we write $E^* A = \bigcup_{\alpha \in A} E^* \alpha$, etc. We write $|\alpha|$ for the \textit{length} of $\alpha$: $|\alpha| = n$ for $\alpha \in E^n$. A \textit{source} is a vertex $v \in E^0$ that receives no edges, that is, $v E^1 = \emptyset$. An \textit{infinite receiver} is a vertex $v \in E^0$ such that $v E^1$ is infinite. $E$ is called \textit{row-finite} if it has no infinite receivers. A vertex $v$ is said to be \textit{singular} in $E$ if $v$ is either a source or an infinite receiver, and is said to be \textit{regular} in $E$ if it is not singular. We denote the set of singular vertices in $E$ by $\sing{E}$, the set of regular vertices in $E$ by $\reg{E}$, and the set of sources in $E$ by $\source{E}$. \end{Definition} All graphs we consider are directed graphs, so we will usually say ``graph'' rather than ``directed graph.'' \begin{Definition} \label{def relative toeplitz algebra} Let $E$ be a graph and let $A$ be a $C^*$-algebra. A \textit{Cuntz-Krieger E-family} in $A$ consists of a set of projections $\{P_v: v \in E^0\} \subseteq A$ such that $P_v P_w = 0$ whenever $v \neq w$ (we say that $\{P_v\}$ is a set of \textit{mutually orthogonal projections}) and a set of partial isometries $\{S_e : e \in E^1\} \subseteq A$ satisfying the \textit{Cuntz-Krieger relations:} \begin{itemize} \item[(CK1)] for all $e \in E^1$, $S_e^* S_e = P_{s(e)}$ \item[(CK2)] for all $e$, $f \in E^1$, if $e \not= f$ then $S_e^*S_f = 0$ \item[(CK3)] for all $e \in E^1$, $P_{r(e)} S_e = S_e$ \item[(CK4)] for all $v \in \reg{E}$, $\displaystyle P_v = \sum_{e \in v E^1} S_eS_e^*$. \end{itemize} The \textit{Cuntz-Krieger algebra}, or \textit{graph algebra}, of $E$ is the $C^*$-algebra generated by a universal Cuntz-Krieger $E$-family, and is denoted $C^*(E)$. Because of its special significance, we refer to the fourth relation (CK4) as \textit{the Cuntz-Krieger condition (at $v$)}. The \textit{Toeplitz graph algebra} of $E$ is the $C^*$-algebra defined as above but without the relation (CK4), and is denoted $\mathcal{T} C^*(E)$.
Finally, for a subset $A \subseteq \reg{E}$, the \textit{relative Toeplitz graph algebra} is defined as above but with the relation (CK4) applied only at vertices $v \in A$, and is denoted $\mathcal{T} C^*(E,A)$. Thus $\mathcal{T} C^*(E,A)$ is defined by the relations (CK1) - (CK3) together with \begin{itemize} \item[(TCK4)] for all $v \in A$, $\displaystyle P_v = \sum_{e \in v E^1} S_e S_e^*$. \end{itemize} \end{Definition} We note that $\mathcal{T} C^*(E) = \mathcal{T} C^*(E,\varnothing)$, and $C^*(E) = \mathcal{T} C^*(E,\reg{E})$. Central to the ideas of this paper is a contravariant functor from graphs to $C^*$-algebras, specifically relative Toeplitz graph algebras. These algebras were introduced in \cite{spi1} to describe subalgebras corresponding to subgraphs (see also \cite{mt}). From this, in fact, there is also a covariant functor from \textit{relative graphs} (see Definition \ref{definition relative graph}) to $C^*$-algebras (discussed in the purely algebraic context in \cite[Section 1.5]{aas}) but the present work does not require this. To describe the contravariant functor that is of interest to us, we will need some more definitions. \begin{Definition} \label{def hereditary} A set $H \subseteq E^0$ is \textit{hereditary} if whenever $v \in H$ and $w \in s(v E^*)$ then $w \in H$. Let $E$ be a graph and let $H \subseteq E^0$ be a hereditary subset. Define a subgraph $F \equiv F_H$ of $E$ by letting $F^0 = E^0 \setminus H$ and $F^1 = F^0 E^1 F^0$ $(= E^1 F^0$ since $F^0$ is \textit{cohereditary}). For $v \in \reg{F}$ define $p_{v,H} = \sum_{e \in v F^1} s_e s_e^*$. \end{Definition} \begin{Definition} \label{def saturated} $H$ is \textit{saturated} if for each vertex $v \in \reg{E}$, if $v E^1 = v E^1 H$ then $v \in H$, that is, if every vertex that sends an edge to $v$ is in $H$, then $v$ must be in $H$. If $H$ is both hereditary and saturated and $F = F_H$, define $B_H = \reg{F} \cap \sing{E}$, the \textit{breaking vertices} for $H$. Thus $v$ is a breaking vertex for $H$ if $v$ receives infinitely many edges with source in $H$, and receives a finite nonzero number of edges with source in $F^0$ (note that $v$ must be a vertex of $F$ by the hereditary property of $H$). If necessary we may write $B^E_H$ to indicate the ambient graph. (In \cite{bhrs} the set $B_H$ is written $H_\infty^{\text{fin}}$.) \end{Definition} \begin{Remark} \label{remark saturated} The definition of saturated $H \subseteq E^0$ can be stated equivalently as $\reg{E} \cap F_H^0 \subseteq \reg{F_H}$ (or $\reg{E} \cap \source{F_H} = \varnothing$). The definition of hereditary $H$ can be stated equivalently as $H E^* = H E^* H$, or as $H E^1 = H E^1 H$. \end{Remark} We can now begin to describe the contravariant functor mentioned above. Let $E$ be a graph, let $H$ be a saturated hereditary subset of $E^0$, and let $F = F_H$. Let $J_H$ be the ideal in $C^*(E)$ generated by $\{p_v : v \in H\} \cup \{p_v - p_{v,H} : v \in B_H \}$. Then $C^*(E) / J_H \cong C^*(F)$. Thus the inclusion $F \hookrightarrow E$ induces a surjection $C^*(E) \twoheadrightarrow C^*(F)$. This is a special case of \cite[Theorem 3.6]{bhrs}, where all gauge-invariant ideals in $C^*(E)$ are described by means of the hereditary saturated subsets of $E^0$ and the subsets of the breaking vertices for these sets. We will generalize this below (Corollary \ref{cor kernel of quotient map}) using relative Toeplitz graph algebras. In this general setting, the ideals involved require just a hereditary subset of $E^0$. 
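We pause for a small illustration of these notions (the example is ours, included for orientation). Let $E^0 = \{u, v\} \cup \{h_n : n \in \mathbb{N}\}$, with one edge $f$ from $u$ to $v$ and one edge $g_n$ from $h_n$ to $v$ for each $n$. Take $H = \{h_n : n \in \mathbb{N}\}$. Then $H$ is hereditary (no vertex of $H$ receives an edge) and saturated (the saturation condition is imposed only at regular vertices of $E$, and every vertex of $E^0 \setminus H$ is singular: $u$ is a source and $v$ is an infinite receiver). The subgraph $F = F_H$ has $F^0 = \{u, v\}$ and $F^1 = \{f\}$, so $v F^1 = \{f\}$ is finite and nonempty while $v E^1$ is infinite. Thus $v \in \reg{F} \cap \sing{E} = B_H$ is a breaking vertex, with $p_{v,H} = s_f s_f^*$.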
It is only when discussing ideals in Cuntz-Krieger algebras that the notion of saturation is relevant. \section{The groupoid picture} \label{section groupoid picture} We will describe the ideals that are the kernels of the quotient maps between relative Toeplitz algebras of graphs. For this it is convenient to use the description of these algebras using groupoids. Recall that a groupoid is called \textit{ample} if it is \'etale and its unit space is totally disconnected. For a directed graph $E$ we let $G(E)$ be the groupoid of $E$. We will use the description from \cite{spi2}. The unit space is $G(E)^{(0)} = E^\infty \cup E^*$, the set of all infinite and finite directed paths in $E$. For $\alpha \in E^*$ we let $Z(\alpha) = \alpha E^\infty \cup \alpha E^*$ be the set of all infinite and finite paths that extend $\alpha$. Then a base of compact-open sets for the topology of $G(E)^{(0)}$ is given by all sets of the form $Z(\alpha) \setminus \bigcup_{i=1}^n Z(\beta_i)$, for $\alpha$, $\beta_i \in E^*$. This topology is locally compact, Hausdorff, and totally disconnected. The \textit{boundary} of $E$ is the subset $\partial E = E^\infty \cup E^* \sing{E}$, a closed subset. The complement $G(E)^{(0)} \setminus \partial E = E^* \reg{E}$ is a countable discrete open subset. We let $E^* * E^* * G(E)^{(0)} = \{(\alpha,\beta,x) \in E^* \times E^* \times G(E)^{(0)} : s(\alpha) = s(\beta) = r(x) \}$. The groupoid of $E$ is then $G(E) = E^* * E^* * G(E)^{(0)} \bigr/ \sim$, where $(\alpha,\beta,x) \sim (\alpha',\beta',x')$ if there are $\gamma, \gamma' \in E^*$ and $y \in G(E)^{(0)}$ such that $x = \gamma y$, $x' = \gamma' y$, $\alpha \gamma = \alpha' \gamma'$, and $\beta \gamma = \beta' \gamma'$. Inversion is given by $[\alpha,\beta,x]^{-1} = [\beta,\alpha,x]$. Composition is given as follows. It can (easily) be shown that if $\beta x = \alpha' x'$ then there are $y$, $\gamma$ and $\gamma'$ such that $x = \gamma y$, $x' = \gamma' y$, and $\beta \gamma = \alpha' \gamma'$. Then $[\alpha,\beta,x] \cdot [\alpha',\beta',x'] = [\alpha\gamma,\beta'\gamma',y]$ (see \cite{spi2}). The range and source maps are given by $r([\alpha,\beta,x]) = \alpha x$ and $s([\alpha,\beta,x]) = \beta x$ (or by $[r(\alpha),r(\alpha),\alpha x]$ and $[r(\beta),r(\beta),\beta x]$, if we identify $x \in G(E)^{(0)}$ with $[r(x),r(x),x] \in G(E)$). For $(\alpha,\beta) \in E^* * E^*$, and $F \subseteq Z(s(\alpha))$, we write $[\alpha,\beta,F] = \{[\alpha,\beta,x] : x \in F \}$. For $F$ compact-open in $G(E)^{(0)}$, these sets are compact-open bisections that form a base for a topology on $G(E)$ making it an ample Hausdorff groupoid. (The connection with the other formulation of the groupoid is given by using the triple $(\alpha x, |\alpha| - |\beta|, \beta x)$ in place of our triple $[\alpha,\beta,x]$.) It is proved in \cite{spi2} that $C^*(G(E)) = \mathcal{T} C^*(E)$, where $p_v = \chi_{Z(v)}$ (or more precisely, $\chi_{[v,v,Z(v)]}$) and $s_e = \chi_{[e,s(e),Z(s(e))]}$. Moreover, $\partial E$ is a closed invariant subset of $G(E)^{(0)}$, and $C^*(G(E)|_{\partial E}) = C^*(E)$, the usual graph algebra. There is a short exact sequence \[ 0 \to C^*(G(E)|_{E^*\reg{E}}) \to \mathcal{T} C^*(E) \to C^*(E) \to 0 \] (\cite[Remark 4.10]{ren}). For each $v \in \reg{E}$, the set $E^* v$ is open, discrete and invariant, and $G(E)|_{E^* v}$ is a principal transitive groupoid. Thus the ideal $C^*(G(E)|_{E^* \reg{E}})$ is isomorphic to $\bigoplus_{v \in \reg{E}} \mathcal{K}(\ell^2(E^*v))$.
The summand corresponding to $v \in \reg{E}$ is generated as an ideal by $\chi_{[v,v,\{v\}]} = p_v - \sum_{e \in v E^1} s_e s_e^*$, the \textit{gap projection} at $v$. (More explicitly, a typical element of $G|_{E^*v}$ is $[\alpha,\beta,\gamma v] = [\alpha\gamma,\beta\gamma,v]$, so $G|_{E^* v} = \{ [\xi,\eta,v] : \xi,\eta \in E^*v \} \equiv E^* v \times E^*v$.) Let $A \subseteq \reg{E}$. Define $G(E,A)^{(0)} = E^\infty \sqcup (E^* \setminus E^*A) = G(E)^{(0)} \setminus E^* A$. Then $G(E,A)^{(0)}$ is a closed invariant subset of $G(E)^{(0)}$. Now define $G(E,A) = G(E)|_{G(E,A)^{(0)}}$. Then there is an exact sequence \[ 0 \to C^*(G(E)|_{E^* A}) \to C^*(G(E)) \xrightarrow{\pi} C^*(G(E,A)) \to 0. \] \begin{Theorem} \label{thm relative toeplitz algebra as groupoid algebra} $C^*(G(E,A)) \cong \mathcal{T} C^*(E,A)$ (as in Definition \ref{def relative toeplitz algebra}.) \end{Theorem} \begin{proof} Suppose that $\{P_v : v \in E^0 \} \cup \{S_e : e \in E^1 \}$ is a family in a $C^*$-algebra $B$ satisfying the relations for $\mathcal{T} C^*(E,A)$. Then it also satisfies the relations for $\mathcal{T} C^*(E)$, hence there is a $*$-homomorphism $\sigma : \mathcal{T} C^*(E) = C^*(G(E)) \to B$ with $\sigma(p_v) = P_v$ and $\sigma(s_e) = S_e$. Then for $v \in A$, $\sigma(\chi_{[v,v,\{v\}]}) = P_v - \sum_{e \in v E^1} S_e S_e^* = 0$. Therefore $C^*(G(E)|_{E^* A}) \subseteq \ker \sigma$, and hence $\sigma$ factors through $C^*(G(E,A))$. Conversely, suppose that $\sigma : C^*(G(E,A)) \to B$ is a $*$-homomorphism. Then $\sigma \circ \pi : C^*(G(E)) = \mathcal{T} C^*(E) \to B$ is a $*$-homomorphism. Thus the elements $\sigma(\pi(p_v))$ and $\sigma(\pi(s_e))$ satisfy the relations $(CK1)-(CK3)$. Since $C^*(G(E)|_{E^* A}) \subseteq \ker \sigma \circ \pi$, these elements satisfy (CK4) at vertices $v \in A$. Thus $\sigma$ is a representation of $\mathcal{T} C^*(E,A)$. \end{proof} Next let $H \subseteq E^0$ be a hereditary subset, and let $F = F_H$. \begin{Lemma} \label{lem F closed in E} Let $H$ and $F$ be as above. Then $G(F)^{(0)}$ is a closed invariant subset of $G(E)^{(0)}$. \end{Lemma} \begin{proof} We note that \begin{align} E^\infty &= E^* H E^\infty \sqcup E^* F^\infty = E^* H E^\infty \sqcup F^\infty \notag \\ E^* &= E^* H \sqcup E^* F^0 = E^* H \sqcup F^* \label{eqn decompositions} \\ G(E)^{(0)} &= E^* H E^\infty \sqcup E^* H \sqcup G(F)^{(0)}. \notag \end{align} We will show that $G(E)^{(0)} \setminus G(F)^{(0)}$ is open. First let $x \in E^* H E^\infty$. Then $x = \alpha y$ for some $\alpha \in E^*$ and $y \in E^\infty$ such that $s(\alpha) = r(y) \in H$. Then $x \in Z(\alpha) \subseteq E^* H \cup E^* H E^\infty = (G(F)^{(0)})^c$, hence $Z(\alpha)$ is a neighborhood of $x$ disjoint from $G(F)^{(0)}$. Next let $\alpha \in E^* H$. Then again $Z(\alpha) \subseteq (G(F)^{(0)})^c$. Invariance is clear. \end{proof} Let $B \subseteq \reg{F}$. Then $G(F,B)^{(0)}$ is a closed subset of $G(F)^{(0)}$, and hence is a closed invariant subset of $G(E)^{(0)}$. \begin{Theorem} Let $B \subseteq \reg{F}$. Then $G(F,B)^{(0)} \subseteq G(E,A)^{(0)}$ if and only if $A \cap F^0 \subseteq B$. \end{Theorem} \begin{proof} \begin{align*} G(F,B)^{(0)} &= F^\infty \cup F^*(F^0 \setminus B) = F^\infty \sqcup E^*(F^0 \setminus B) \\ G(E,A)^{(0)} &= E^\infty \sqcup E^*(E^0 \setminus A). \end{align*} Since $F^\infty \subseteq E^\infty$, $G(F,B)^{(0)} \subseteq G(E,A)^{(0)}$ if and only if $F^0 \setminus B \subseteq E^0 \setminus A$, or equivalently, if and only if $A \cap F^0 \subseteq B$.
\end{proof} Let $(E,A)$ and $(F,B)$ be as above, and assume that $A \cap F^0 \subseteq B$. Let $U = G(E,A)^{(0)} \setminus G(F,B)^{(0)}$, an open $G(E,A)$-invariant subset. We have an exact sequence \[ 0 \to C^*(G(E,A)|_U) \to \mathcal{T} C^*(E,A) \to \mathcal{T} C^*(F,B) \to 0. \] We will describe the ideal in this sequence by means of a generating family of projections, as is done traditionally (cf. \cite{bhrs}). \begin{Theorem} \label{thm larger ideal} Let $(E,A)$ and $(F,B)$ be as above, and assume that $A \cap F^0 \subseteq B$. Let $U = G(E,A)^{(0)} \setminus G(F,B)^{(0)}$. Then $C^*(G(E,A)|_U)$ is the ideal in $\mathcal{T} C^*(E,A)$ generated by $\{p_v : v \in H \} \cup \{ p_v - p_{v,H} : v \in B \setminus A \}$. \end{Theorem} \begin{proof} We begin by dissecting $U$. Using \eqref{eqn decompositions} we have \begin{align*} G(E,A)^{(0)} &= E^\infty \sqcup E^*(E^0 \setminus A) \\ &= E^* H E^\infty \sqcup F^\infty \sqcup E^* (H \setminus A) \sqcup F^*(F^0 \setminus A) \\ &= E^* H E^\infty \sqcup E^* (H \setminus A) \sqcup F^\infty \sqcup F^*(F^0 \setminus B) \sqcup F^*(B \setminus A) \\ &= E^* H E^\infty \sqcup E^* (H \setminus A) \sqcup F^*(B \setminus A) \sqcup G(F,B)^{(0)}, \\ \intertext{hence} U &= E^* H E^\infty \sqcup E^* (H \setminus A) \sqcup F^*(B \setminus A). \end{align*} We will write this as $U = \bigsqcup_{j=1}^3 U_j$. Now we consider the ideal generated as in the statement of the theorem. For $v \in H$ we have that $p_v = \chi_{Z(v) \cap G(E,A)^{(0)}}$. We observe that \[ Z(v) \cap G(E,A)^{(0)} = v E^\infty \sqcup v E^*(E^0 \setminus A) = v E^\infty \sqcup v E^*(H \setminus A). \] The invariant subset of $G(E,A)^{(0)}$ generated by $\{ Z(v) \cap G(E,A)^{(0)} : v \in H \}$ is then \[ W_1 := \bigcup_{v \in H} E^* (Z(v) \cap G(E,A)^{(0)}) = \bigcup_{v \in H} (E^* v E^\infty \cup E^* v E^* (H \setminus A)) = E^* H E^\infty \cup E^*(H \setminus A). \] For $v \in B \setminus A$, $p_v - p_{v,H} = \chi_{(Z(v) \setminus \cup_{e \in v F^1} Z(e)) \cap G(E,A)^{(0)}}$. We have \begin{align*} \bigl(Z(v) \setminus \cup_{e \in v F^1} Z(e)\bigr) \cap G(E,A)^{(0)} &= \bigl(\{v\} \cup \bigcup_{e \in v E^1 H} Z(e)\bigr) \cap G(E,A)^{(0)} \\ &= \{v\} \cup v E^1 H E^\infty \cup v E^1 H E^* (H \setminus A) \\ &\subseteq \{v\} \cup E^* H E^\infty \cup E^* (H \setminus A). \end{align*} The invariant subset of $G(E,A)^{(0)}$ generated by $\{ (Z(v) \setminus \cup_{e \in v F^1} Z(e)) \cap G(E,A)^{(0)} : v \in B \setminus A \}$ is then \begin{align*} W_2 &:= \bigcup_{v \in B \setminus A} (E^*v \cup E^* v E^1 H E^\infty \cup E^* v E^1 H E^* (H \setminus A)) \\ &= E^*(B \setminus A) \cup W_3, \end{align*} where $W_3 \subseteq W_1$. Therefore the ideal in $\mathcal{T} C^*(E,A)$ generated by $\{ p_v : v \in H \} \cup \{p_v - p_{v,H} : v \in B \setminus A \}$ is the ideal generated by $C_0(W_1 \cup W_2)$, and we have that $W_1 \cup W_2 = E^* H E^\infty \cup E^*(H \setminus A) \cup F^*(B\setminus A) = U$. \end{proof} \begin{Definition} \label{def J(E,A;F,B)} Let $E$, $A$, $F$ and $B$ be as above. We denote by $J(E,A;F,B)$ the ideal $C^*(G(E,A)|_U)$ in Theorem \ref{thm larger ideal}. \end{Definition} \begin{Corollary} \label{cor kernel of quotient map} Let $E$, $H$ and $F$ be as above. Let $A \subseteq \reg{E}$ and $B \subseteq \reg{F}$. There is a (necessarily surjective) homomorphism $\mathcal{T} C^*(E,A) \to \mathcal{T} C^*(F,B)$ determined by $p_v \mapsto p_v$ for $v \in F^0$, $p_v \mapsto 0$ for $v \in H$, $s_e \mapsto s_e$ for $e \in F^1$, and $s_e \mapsto 0$ if $e \not\in F^1$ if and only if $A \cap F^0 \subseteq B$.
In this case, the kernel of the homomorphism is $J(E,A;F,B)$. If $A = \reg{E}$ and $A \cap F^0 \subseteq B$, then $\reg{E} \cap F^0 \subseteq \reg{F}$. Then $\reg{E} \cap \source{F} = \varnothing$, and it follows that $H$ is saturated (see Remark \ref{remark saturated}). If moreover $B = \reg{F}$, then $\mathcal{T} C^*(E,A) = C^*(E)$, $\mathcal{T} C^*(F,B) = C^*(F)$, and the kernel is $J(E,\reg{E};F,\reg{F}) = J_{H,B_H}$ (as in \cite{bhrs}). \end{Corollary} \section{Gauge-invariant ideals} \label{section gauge-invariant ideals} The ideals in $\mathcal{T} C^*(E)$ (and in $\mathcal{T} C^*(E,A)$) that we considered in section \ref{section groupoid picture} are the \textit{gauge-invariant} ideals. The gauge-invariant ideals in $C^*(E)$ were completely described in \cite{bhrs}. This was extended to the case of higher rank graphs, and to the Toeplitz algebras $\mathcal{T} C^*(\Lambda)$, in \cite{sww}. In the groupoid picture these are the ideals that arise from open invariant subsets of $G(E)^{(0)}$ (see Theorem \ref{thm gauge-invariant ideals}), and such sets can be described in terms of the underlying graph. However the description in the generality of higher rank graphs is extremely complicated. Since the case of 1-graphs that we consider is much easier, we think it worthwhile to give the relatively straightforward proofs. Much of the argument can be given in the more general context of a $C^*$-algebra with an action of a compact abelian group, or of the $C^*$-algebra of an \'etale groupoid. We begin with some well-known facts about fixed point algebras of compact abelian group actions. Let $A$ be a $C^*$-algebra, let $K$ be a compact abelian group, and let $\gamma : K \to \text{Aut}(A)$ be a (point-norm) continuous action. Define $P : A \to A$ by $P(x) = \int_K \gamma_t(x) dt$. Then $P$ is a faithful conditional expectation onto the fixed point algebra $A^\gamma := \{ x \in A : \gamma_t(x) = x,\ t \in K \}$. We will use the following notation: if $A$ is a $C^*$-algebra and $S \subseteq A$ we write $\langle S \rangle_A$ for the ideal in $A$ generated by $S$. \begin{Lemma} \label{lem fixed point algebra} If $I$ is a $\gamma$-invariant ideal in $A$ then $P(I) = I \cap A^\gamma$, and $I = \langle I \cap A^\gamma \rangle_A$. \end{Lemma} \begin{proof} Let $I$ be a $\gamma$-invariant ideal in $A$. Then $P(I) \subseteq I$, hence $P(I) \subseteq I \cap A^\gamma$. Conversely, since $P|_{A^\gamma}$ is the identity map, $I \cap A^\gamma = P(I \cap A^\gamma) \subseteq P(I)$. Therefore $P(I) = I \cap A^\gamma$. In particular, $P(I)$ is an ideal in $A^\gamma$. Let $I' = \langle P(I) \rangle_A$. Since $P(I)$ is pointwise fixed by $\gamma$, hence is $\gamma$-invariant, $I'$ is also $\gamma$-invariant. Since $P(I) \subseteq I$, it follows that $I' \subseteq I$. Then $P(I') \subseteq P(I) \subseteq I' \cap A^\gamma = P(I')$, so $P(I') = P(I)$. We claim that $I' = I$. For if not, then $I'$ is a proper ideal in $I$, and there exists $x \ge 0$, $x \in I \setminus I'$. Let $\pi_0 : A \to A/I'$ be the quotient map. Then $\pi_0(x) \not= 0$. We may define $\beta : K \to \text{Aut}(A/I')$ by $\beta_t(a + I') = \gamma_t(a) + I'$, and let $P' : A/I' \to (A/I')^\beta$ the corresponding conditional expectation. Then $P'(\pi_0(x)) \not= 0$, but $P'(\pi_0(x)) = \pi_0(P(x)) = 0$, since $P(x) \in P(I) = P(I') \subseteq I'$. Therefore $I' = I$. \end{proof} In this paper we are concerned with invariant ideals in the $C^*$-algebras of \'etale groupoids.
In particular, we will give conditions under which these ideals correspond to open groupoid-invariant subsets of the unit space. We cite the following elementary fact. A proof is given in \cite[Lemma 3.2]{bonli} for the reduced $C^*$-algebra (which will be sufficient for our use since we will consider amenable groupoids). However that proof still holds for the full $C^*$-algebra. \begin{Lemma} \label{lem ideals and invariant open sets} Let $G$ be a Hausdorff \'etale groupoid, let $U \subseteq G^{(0)}$ be an open $G$-invariant set, and let $I$ be the ideal in $C^*(G)$ generated by $C_0(U)$. Then $I \cap C_0(G^{(0)}) = C_0(U)$. \end{Lemma} A common instance of a compact abelian group action occurs when a groupoid has a continuous homomorphism into a discrete abelian group (such a homomorphism is often called a \textit{cocycle}). Suppose that $G$ is a Hausdorff \'etale groupoid with a continuous homomorphism $c : G \to \Gamma$, where $\Gamma$ is a discrete abelian group. There is a dual action $\gamma$ of $\widehat{\Gamma}$ on $C^*(G)$, defined on $C_c(G)$ by $\gamma_z(f) (g) = \langle z,c(g) \rangle f(g)$. Let $G^c = c^{-1}(0)$. Then $G^c$ is also a Hausdorff \'etale groupoid, with the same unit space as $G$. It is shown in \cite[Proposition 9.3]{spi2} that if $G^c$ is amenable then so is $G$, and in fact $C^*(G)^\gamma = C^*(G^c)$. There are many situations where $G^c$ is an AF groupoid, and hence amenable (\cite[Definition III.1.1]{ren1}); some examples are the $C^*$-algebras of higher rank graphs, and more generally, of certain categories of paths (\cite[Theorem 9.8]{spi2}), and also the Toeplitz Cuntz-Krieger algebras of directed graphs. We give the following result for these situations. \begin{Lemma} \label{lem AF fixed point groupoid} Let $G$ be a Hausdorff ample groupoid with a continuous homomorphism $c : G \to \Gamma$, for some discrete abelian group $\Gamma$. Suppose that the fixed point subgroupoid $G^c$ is an AF groupoid. Let $\gamma$ be the dual action of $K = \widehat{\Gamma}$ on $C^*(G)$. Let $I$ be a $\gamma$-invariant ideal in $C^*(G)$, and let $I \cap C_0(G^{(0)}) = C_0(U)$, where $U$ is an open $G$-invariant subset of $G^{(0)}$. Then $I = \langle C_0(U) \rangle_{C^*(G)}$. \end{Lemma} \begin{proof} It is clear that $I \supseteq \langle C_0(U) \rangle_{C^*(G)}$. Before proving the reverse containment, we will prove the following claim: $I \cap C^*(G)^\gamma = \langle C_0(U) \rangle_{C^*(G^c)}$. Since $C_0(U) \subseteq I$ it is clear that $I \cap C^*(G)^\gamma \supseteq \langle C_0(U) \rangle_{C^*(G^c)}$. For the reverse containment we use the facts that $C^*(G)^\gamma = C^*(G^c)$ and that $G^c$ is an AF groupoid. As in the proof of \cite[Proposition III.1.15]{ren1}, we may write $C^*(G^c) = \overline{\cup_n B_n}$ where $B_n \subseteq B_{n+1}$ are finite dimensional $C^*$-algebras, and we may write $C_0(G^{(0)}) = \overline{\cup_n D_n}$ where $D_n \subseteq D_{n+1}$ and $D_n$ is a maximal abelian subalgebra (masa) of $B_n$. By \cite[Lemma 3.1]{bra} we have that $I \cap C^*(G^c) = \overline{\cup_n C_n}$, where $C_n \subseteq C_{n+1}$ and $C_n$ is an ideal in $B_n$. Then $C_0(U) = I \cap C_0(G^{(0)}) = \overline{\cup_n (C_n \cap D_n)}$. Note that $C_n = \langle C_n \cap D_n \rangle_{B_n}$. Then \[ I \cap C^*(G^c) = \overline{\cup_n C_n} = \overline{\cup_n \langle C_n \cap D_n \rangle_{B_n}} \subseteq \langle I \cap C_0(G^{(0)}) \rangle_{C^*(G^c)} = \langle C_0(U) \rangle_{C^*(G^c)}. \] This finishes the proof of the claim. 
Now we have \begin{align*} I &= \langle I \cap C^*(G^c) \rangle_{C^*(G)}, \text{ by Lemma \ref{lem fixed point algebra},} \\ &= \bigl\langle \langle C_0(U) \rangle_{C^*(G^c)} \bigr\rangle_{C^*(G)} \\ &= \langle C_0(U) \rangle_{C^*(G)}. \qedhere \end{align*} \end{proof} \begin{Corollary} \label{cor AF fixed point groupoid} In the context of Lemma \ref{lem AF fixed point groupoid}, the map $U \mapsto \langle C_0(U) \rangle_{C^*(G)}$ is a bijection between the open $G$-invariant subsets of $G^{(0)}$ and the $\gamma$-invariant ideals of $C^*(G)$. \end{Corollary} \begin{proof} This follows from Lemmas \ref{lem ideals and invariant open sets} and \ref{lem AF fixed point groupoid}. \end{proof} We recall the \textit{gauge action} on the Toeplitz algebra of a directed graph $E$. There is an action $\gamma$ of the circle group $\IT$ on $\mathcal{T} C^*(E)$ defined by $\gamma_z(p_v) = p_v$ for $v \in E^0$, and $\gamma_z(s_e) = z s_e$ for $e \in E^1$. (This is the dual action (as discussed before Lemma \ref{lem AF fixed point groupoid}) to the \textit{length cocycle} $c : G(E) \to \IZ$, defined by $c([\alpha,\beta,x]) = |\alpha| - |\beta|$.) By Corollary \ref{cor AF fixed point groupoid} we know that the gauge-invariant ideals of $\mathcal{T} C^*(E)$ are in one-to-one correspondence with the open invariant subsets of $G(E)^{(0)}$. We now describe the open invariant subsets of $G(E)^{(0)}$. \begin{Lemma} \label{lem open invariant sets one} Let $H \subseteq E^0$ be a hereditary subset and let $F = F_H$ as in Definition \ref{def hereditary}. Let $B \subseteq \reg{F}$. Let $U(H,B) := E^* H E^\infty \cup E^*(H \cup B)$. Then $U(H,B)$ is open and invariant. Moreover, $H = \{ v \in E^0 : Z(v) \subseteq U(H,B) \}$ and $B = F^0 \cap U(H,B)$. \end{Lemma} \begin{proof} It is clear from its definition that $U(H,B)$ is invariant. We show that it is open. Let $x \in U(H,B)$. First suppose that $x \in E^\infty$. Then $x \in E^* H E^\infty$, so we may write $x = \alpha y$ where $s(\alpha) \in H$. Then $Z(s(\alpha)) \subseteq H E^\infty \cup H E^* \subseteq U(H,B)$. Therefore $Z(\alpha) = \alpha Z(s(\alpha)) \subseteq U(H,B)$ is a neighborhood of $x$. Next suppose that $x \in E^* H$. Then $s(x) \in H$, so as before, $x Z(s(x))$ is a neighborhood of $x$ contained in $U(H,B)$. Lastly suppose that $x \in E^* B$. Then $s(x) \in B \subseteq \reg{F}$, hence $s(x) F^1$ is finite. For $e \in s(x) E^1 H$, $Z(s(e)) \subseteq U(H,B)$. Then \[ Z(s(x)) \setminus \bigcup_{e \in s(x) F^1} Z(e) = \{ s(x) \} \cup \bigcup_{e \in s(x) E^1 H} Z(e) = \{ s(x) \} \cup \bigcup_{e \in s(x) E^1 H} e Z(s(e)) \subseteq U(H,B). \] Therefore $Z(x) \setminus \bigcup_{e \in s(x) F^1} Z(xe)$ is a neighborhood of $x$ in $U(H,B)$. For the last statement, it is clear that $H \subseteq \{ v : Z(v) \subseteq U(H,B) \}$ and $B \subseteq F^0 \cap U(H,B)$. We prove the reverse inclusions. First let $v \in E^0$ with $Z(v) \subseteq U(H,B)$. Then $v \in U(H,B)$, so $v \in H \cup B$. To show that $v \in H$, we suppose that $v \in B$ and deduce a contradiction. We know that $v E^* \subseteq Z(v) \subseteq U(H,B)$, so $s(v E^*) \subseteq H \cup B$. Note that since $B \subseteq \reg{F}$, if $\alpha \in v E^* B$ then $s(\alpha) F^1 \not= \varnothing$. Applying this to $v$ we find $e_1 \in v F^1$. Applying it to $e_1$ (note that $s(e_1) \in s(v E^*) \cap F^0 \subseteq B$) we find $e_2 \in s(e_1) F^1$. Inductively we obtain $x = e_1 e_2 \cdots \in v F^\infty$. Since $x \in Z(v) \subseteq U(H,B)$, this is impossible: every infinite path in $U(H,B)$ passes through a vertex of $H$, whereas $x$ lies entirely in $F$. Therefore $v \in H$ as claimed.
Finally, suppose that $v \in F^0 \cap U(H,B)$. Again we must have $v \in H \cup B$. Since $v \in F^0$ it follows that $v \in B$. \end{proof} \begin{Lemma} \label{lem open invariant sets two} Let $U \subseteq G(E)^{(0)}$ be open and invariant. Let $H := \{ v \in E^0 : Z(v) \subseteq U \}$ and $B = U \cap (E^0 \setminus H)$. Then $H$ is hereditary in $E$. Letting $F = F_H$, we have that $B \subseteq \reg{F}$. Finally, $U = U(H,B)$ (where $U(H,B)$ is as in the statement of Lemma \ref{lem open invariant sets one}). \end{Lemma} \begin{proof} We begin by showing that $H$ is hereditary. Let $\alpha \in H E^*$. Then $Z(r(\alpha)) \subseteq U$. Since $Z(\alpha) \subseteq Z(r(\alpha))$ we have that $Z(\alpha) \subseteq U$. But $Z(\alpha) = \alpha Z(s(\alpha))$. Since $U$ is invariant, $Z(s(\alpha)) \subseteq U$, hence $s(\alpha) \in H$. Therefore $H$ is hereditary. Next we show that $B \subseteq \reg{F}$. Let $v \in B$. Then \[ Z(v) = \{ v \} \cup \bigcup_{e \in v E^1} Z(e) = \{ v \} \cup \bigcup_{e \in v F^1} Z(e) \cup \bigcup_{e \in v E^1 H} e Z(s(e)). \] Since $Z(w) \subseteq U$ for $w \in H$, the invariance of $U$ implies that $\{v\} \cup \bigcup_{e \in v E^1 H} e Z(s(e)) \subseteq U$. Since $Z(v) \not\subseteq U$ it follows that $v F^1 \not= \varnothing$. Since $U$ is open there are $\alpha_1,\ldots,\alpha_n \in v E^*$ such that $Z(v) \setminus \bigcup_{i=1}^n Z(\alpha_i) \subseteq U$. Let $e_1,\ldots,e_n \in E^1$ be such that $\alpha_i \in Z(e_i)$ for $1 \le i \le n$. For $e \in v E^1 \setminus \{ e_1,\ldots,e_n \}$ we have $Z(e) \subseteq Z(v) \setminus \bigcup_{i=1}^n Z(\alpha_i) \subseteq U$, hence $s(e) \in H$. Therefore $e \not\in F^1$. Thus $v F^1 \subseteq \{e_1,\ldots,e_n\}$ is finite. It follows that $v \in \reg{F}$. Finally we show that $U(H,B) = U$. The invariance of $U$ implies that $U(H,B) \subseteq U$. We prove the reverse containment. Let $x \in U$. Since $U$ is open there are $\alpha,\beta_1, \ldots,\beta_n \in r(x) E^*$ such that $x \in Z(\alpha) \setminus \bigcup_{i=1}^n Z(\beta_i) \subseteq U$. If $x \in E^\infty$ we may choose $\gamma \in r(x) E^*$ such that $x \in Z(\gamma)$ and $|\gamma| > \max\{|\alpha|, |\beta_1|, \ldots, |\beta_n|\}$. Then $\gamma$ extends $\alpha$, so $Z(\gamma) \subseteq Z(\alpha)$, while $\gamma$ cannot extend any $\beta_i$ (as $x \not\in Z(\beta_i)$), so $Z(\gamma) \cap Z(\beta_i) = \varnothing$ for $1 \le i \le n$. Hence $Z(\gamma) \subseteq U$. But then $Z(s(\gamma)) \subseteq U$ by invariance, so $s(\gamma) \in H$ and $x \in E^* H E^\infty \subseteq U(H,B)$. If $x \in E^*$ then since $U$ is invariant, $s(x) \in U$. Then $s(x) \in H \cup B$, so $x \in E^*(H \cup B) \subseteq U(H,B)$. \end{proof} We summarize the above: \begin{Theorem} \label{thm gauge-invariant ideals} Let $E$ be a directed graph with groupoid $G(E)$. The gauge-invariant ideals in $\mathcal{T} C^*(E)$ are in one-to-one correspondence with the open $G(E)$-invariant subsets of $G(E)^{(0)}$, where an open invariant set $U$ corresponds to the ideal generated by $C_0(U)$. The open invariant subsets of $G(E)^{(0)}$ are in one-to-one correspondence with pairs $(H,B)$, where $H \subseteq E^0$ is a hereditary subset, and $B \subseteq \reg{F_H}$. The open invariant set corresponding to the pair $(H,B)$ is $U(H,B) = E^* H E^\infty \cup E^* (H \cup B)$. \end{Theorem} \begin{Remark} In Theorem \ref{thm gauge-invariant ideals}, the gauge-invariant ideal corresponding to the open invariant set $U(H,B)$ is $J(E,\varnothing;F,B)$ (as in Definition \ref{def J(E,A;F,B)}).
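As a concrete instance (this computation is ours, continuing the illustration in Section \ref{section relative graph algebras}): for the graph with $E^0 = \{u,v\} \cup \{h_n : n \in \mathbb{N}\}$, one edge $f$ from $u$ to $v$, and edges $g_n$ from $h_n$ to $v$, take $H = \{h_n : n \in \mathbb{N}\}$ and $B = \{v\} = \reg{F_H}$. Since this graph has no infinite paths, $U(H,B) = E^*(H \cup \{v\}) = \{v\} \cup \{h_n : n \in \mathbb{N}\} \cup \{g_n : n \in \mathbb{N}\}$, and the corresponding gauge-invariant ideal is generated by $\{p_{h_n} : n \in \mathbb{N}\} \cup \{p_v - s_f s_f^*\}$.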
\end{Remark} \section{Relative graphs} \label{section relative graphs} \vspace*{.1 in} \begin{Definition} \label{definition relative graph} A \textit{relative graph} is a pair $(F,B)$ consisting of a directed graph $F$ and a subset $B$ of the regular vertices of $F$. A \textit{morphism} $\alpha : (F,B) \to (E,A)$ of relative graphs is an injective homomorphism of directed graphs $F \hookrightarrow E$, which we take to be an inclusion, such that \begin{enumerate}[(1)] \item \label{relative graph 1} $H_{F,E} := E^0 \setminus F^0$ is a hereditary set in $E$ \item \label{relative graph 2} $F^1 = F^0 E^1 F^0 \ (= \{e \in E^1 : s(e), r(e) \in F^0 \})$. \item \label{relative graph 3} $A \cap F^0 \subseteq B$ \end{enumerate} \end{Definition} This definition describes the situation where $\mathcal{T} C^*(F,B) \cong \mathcal{T} C^*(E,A) / J(E,A;F,B)$, by Theorem \ref{thm larger ideal}. (Note that $F_{H_{F,E}} = F$.) \begin{Lemma} \label{lemma composition} Let $\alpha_1 : (F_1,A_1) \to (F_2,A_2)$ and $\alpha_2 : (F_2,A_2) \to (F_3,A_3)$ be morphisms of relative graphs. Then $\alpha_2 \circ \alpha_1 : (F_1,A_1) \to (F_3,A_3)$ is a morphism of relative graphs. \end{Lemma} \begin{proof} We verify the conditions for a morphism in Definition \ref{definition relative graph}. Since $F_1 \hookrightarrow F_2$ and $F_2 \hookrightarrow F_3$ we have $F_1 \hookrightarrow F_3$. For convenience we will write $H_{ij}$ for $H_{F_i,F_j}$. We first show that $H_{13}$ is hereditary in $F_3$. Thus we must show that $H_{13} F_3^1 = H_{13} F_3^1 H_{13}$. Note that $H_{13} = F_3^0 \setminus F_1^0 = (F_3^0 \setminus F_2^0) \sqcup (F_2^0 \setminus F_1^0) = H_{23} \sqcup H_{12}$. Then $H_{13} F_3^1 = H_{23} F_3^1 \sqcup H_{12} F_3^1$. Since $\alpha_2$ is a morphism we know that $H_{23}$ is hereditary in $F_3$. Therefore $H_{23} F_3^1 = H_{23} F_3^1 H_{23} \subseteq H_{13} F_3^1 H_{13}$. Next, since $F_3^0 = H_{13} \sqcup F_1^0$, we have $H_{12} F_3^1 = H_{12} F_3^1 H_{13} \sqcup H_{12} F_3^1 F_1^0$. But $H_{12}F_3^1 F_1^0 \subseteq F_2^0 F_3^1 F_2^0 = F_2^1$, by \eqref{relative graph 2} for $\alpha_2$, and hence $H_{12} F_3^1 F_1^0 \subseteq H_{12} F_2^1 F_1^0 = \varnothing$, since $H_{12}$ is hereditary in $F_2$. Therefore $H_{12} F_3^1 \subseteq H_{13} F_3^1 H_{13}$, and hence $H_{13}$ is hereditary in $F_3$. This finishes the verification of \eqref{relative graph 1} for $\alpha_2 \circ \alpha_1$. Next, note that $F_1^0 F_3^1 F_1^0 \subseteq F_2^0 F_3^1 F_2^0 = F_2^1$ by \eqref{relative graph 2} for $\alpha_2$. Then $F_1^0 F_3^1 F_1^0 = F_1^0 F_2^1 F_1^0 = F_1^1$ by \eqref{relative graph 2} for $\alpha_1$. Thus \eqref{relative graph 2} holds for $\alpha_2 \circ \alpha_1$. Finally, to verify \eqref{relative graph 3}, note that $A_3 \cap F_1^0 \subseteq A_3 \cap F_2^0 \subseteq A_2$, hence $A_3 \cap F_1^0 = A_3 \cap A_2 \cap F_1^0 \subseteq A_2 \cap F_1^0 \subseteq A_1$. \end{proof} \begin{Theorem} \label{thm pushout} The category of relative graphs admits pushouts. \end{Theorem} \begin{proof} Let $\alpha_i : (F_0,A_0) \to (F_i,A_i)$, $i = 1$, 2, be morphisms of relative graphs. Define a relative graph $(E,A)$ by \begin{align} E &= F_1 \sqcup_{F_0} F_2 \label{pushout eqn one} \\ A &= (A_1 \setminus F_0^0) \cup (A_2 \setminus F_0^0) \cup (A_1 \cap A_2). 
\label{pushout eqn two} \end{align} (The graph $E$ is defined by $E^0 = (F_1^0 \sqcup F_2^0)/\{\alpha_1(v) = \alpha_2(v) : v \in F_0^0 \}$, $E^1 = (F_1^1 \sqcup F_2^1)/\{\alpha_1(e) = \alpha_2(e) : e \in F_0^1 \}$, $r_E(e) = r_{F_i}(e)$ for $e \in F_i^1$, and $s_E$ defined analogously.) We note that $A_1 \cap A_2 \subseteq F_1^0 \cap F_2^0 = F_0^0$. We further note that \[ H_{F_0,F_1} = F_1^0 \setminus F_0^0 = (F_1^0 \sqcup (F_2^0 \setminus F_0^0)) \setminus (F_0^0 \sqcup (F_2^0 \setminus F_0^0)) = E^0 \setminus F_2^0 = H_{F_2,E}, \] and similarly, $H_{F_0,F_2} = H_{F_1,E}$. We claim that $H_{F_2,E}$ is hereditary in $E$. To see this let $e \in H_{F_2,E} E^1$. Then $r(e) \in H_{F_0,F_1} = F_1^0 \setminus F_0^0 = F_1^0 \setminus F_2^0$. But then $e \not\in F_2^1$, hence $e \in F_1^1$. Since $H_{F_0,F_1}$ is hereditary in $F_1$ we have that $s(e) \in H_{F_0,F_1} = H_{F_2,E}$. This proves the claim. Similarly, $H_{F_1,E}$ is hereditary in $E$. We now show that $A \subseteq \reg{E}$. Let $v \in A$. First suppose that $v \in A_1 \setminus F_0^0 \subseteq \reg{F_1} \cap H_{F_0,F_1}$. Since $H_{F_0,F_1}= H_{F_2,E}$ is hereditary in $E$, $v E^1 = v E^1 H_{F_2,E} \subseteq v E^1 (E^0 \setminus F_2^0) \subseteq v E^1 F_1^0 \subseteq v F_1^1$. Then since $v \in \reg{F_1}$, $v \in \reg{E}$. An analogous argument shows that $A_2 \setminus F_0^0 \subseteq \reg{E}$. Finally, $A_1 \cap A_2 \subseteq \reg{F_1} \cap \reg{F_2} \subseteq \reg{E}$. Therefore $A \subseteq \reg{E}$. Let $\beta_i : (F_i,A_i) \to (E,A)$, $i = 1$, 2, be the inclusions. We claim that $\beta_1$ and $\beta_2$ are morphisms of relative graphs. For the proof, first note that we have already shown that $H_{F_i,E}$ is hereditary in $E$ for $i = 1,2$. Condition \eqref{relative graph 3} for $\beta_1$ and $\beta_2$ is immediate from the definition of $A$. For \eqref{relative graph 2}, we have \begin{align*} F_i^0 E^1 F_i^0 &= (F_0^0 \sqcup H_{F_0,F_i}) E^1 (F_0^0 \sqcup H_{F_0,F_i}) \\ &= F_0^0 E^1 F_0^0 \sqcup F_0^0 E^1 H_{F_0,F_i} \sqcup H_{F_0,F_i} E^1 F_0^0 \sqcup H_{F_0,F_i}E^1 H_{F_0,F_i} \\ &= F_0^1 \sqcup F_0^0 F_i^1 H_{F_0,F_i} \sqcup \varnothing \sqcup H_{F_0,F_i} F_i^1 \\ &= F_i^1. \end{align*} It is clear that $\beta_1 \alpha_1 = \beta_2 \alpha_2$. Now let $(G,B)$ be a relative graph and let $\gamma_i : (F_i,A_i) \to (G,B)$ be morphisms such that $\gamma_1 \alpha_1 = \gamma_2 \alpha_2$. Then the inclusions $F_i \hookrightarrow G$ agree on $F_0$, so there is an inclusion $E \hookrightarrow G$. We claim that $\phi : (E,A) \to (G,B)$ is a morphism. It is clear that then $\gamma_i = \phi \beta_i$ for $i = 1$, 2. We verify \eqref{relative graph 1}--\eqref{relative graph 3} for $\phi$. For \eqref{relative graph 3}, first note that $B \cap E^0 = (B \cap F_1^0) \cup (B \cap F_2^0) \subseteq A_1 \cup A_2$. Next we have $B \cap F_0^0 = B \cap F_1^0 \cap F_2^0 \subseteq A_1 \cap A_2$. Then we have \[ B \cap E^0 = (B \cap E^0 \setminus F_0^0) \cup (B \cap F_0^0) \subseteq ((A_1 \cup A_2) \setminus F_0^0) \cup (A_1 \cap A_2 \cap F_0^0) = A. \] To prove \eqref{relative graph 2}, note that from \eqref{relative graph 2} for $\gamma_1$ and $\gamma_2$ we have that $F_i^0 G^1 F_i^0 = F_i^1$ for $i = 1$, 2. Then \[ E^0 G^1 E^0 = (F_1^0 \cup F_2^0) G^1 (F_1^0 \cup F_2^0) \subseteq E^1 \cup F_1^0 G^1 F_2^0 \cup F_2^0 G^1 F_1^0. \] We claim that $F_1^0 G^1 F_2^0$, $F_2^0 G^1 F_1^0 \subseteq E^1$. For suppose, say, that $e \in F_1^0 G^1 F_2^0$. If $r(e) \in F_0^0$ then $e \in F_2^0 G^1 F_2^0 = F_2^1 \subseteq E^1$, by \eqref{relative graph 2} for $\gamma_2$.
Similarly, if $s(e) \in F_0^0$ then $e \in E^1$. If neither of these two situations occurs, then $e \in H_{F_0,F_1} G^1 H_{F_0,F_2}$. Then $r(e) \in F_1^0 \setminus F_0^0 \subseteq G^0 \setminus F_2^0 = H_{F_2,G}$, which is hereditary by \eqref{relative graph 1} for $\gamma_2$. Thus $s(e) \in H_{F_2,G}$, a set disjoint from $H_{F_0,F_2}$. Thus one of the previous two situations must occur, proving the claim. Therefore $E^0 G^1 E^0 = E^1$. This verifies \eqref{relative graph 2} for $\phi$. For \eqref{relative graph 1}, note first that $H_{E,G} = G^0 \setminus (F_1^0 \cup F_2^0) = (G^0 \setminus F_1^0) \cap (G^0 \setminus F_2^0) = H_{F_1,G} \cap H_{F_2,G}$. Since $H_{F_i,G}$ is hereditary in $G$ for $i = 1$, 2, then $H_{F_1,G} \cap H_{F_2,G}$ is as well. Thus $H_{E,G}$ is hereditary in $G$, verifying \eqref{relative graph 1}. Therefore $\phi$ is a morphism. \end{proof} \begin{Remark} \label{rmk general situation} Let us consider the data involved in a pushout of relative graphs from a larger perspective. Let $F_0 \hookrightarrow F_i$, $i=1,2$, be inclusions of graphs satisfying Definition \ref{definition relative graph}\eqref{relative graph 1} and \eqref{relative graph 2}. For $i = 1,2$ let $A_i \subseteq \reg{F_i}$ be arbitrary. Let $A_{12} = (A_1 \cup A_2) \cap F_0^0$. Then if $A_0 \subseteq \reg{F_0}$, Definition \ref{definition relative graph}\eqref{relative graph 3} implies that the maps $(F_0,A_0) \to (F_i,A_i)$ are morphisms of relative graphs if and only if $A_{12} \subseteq A_0$. Notice that the pushout, $(E,A)$, defined in the proof of Theorem \ref{thm pushout}, does not depend on the choice of $A_0$ satisfying $A_{12} \subseteq A_0 \subseteq \reg{F_0}$. \end{Remark} \begin{Remark} Consider the pushout as defined in the proof of Theorem \ref{thm pushout}. We note the following. \begin{equation} \label{main theorem remark one} H_{F_1,E} \cap H_{F_2,E} = \varnothing \end{equation} Equation \eqref{main theorem remark one} is true because $H_{F_1,E} \cap H_{F_2,E} = (E^0 \setminus F_2^0) \cap (E^0 \setminus F_1^0) = E^0 \setminus (F_2^0 \cup F_1^0) = \varnothing$. \end{Remark} \section{Admissible pushouts of relative graphs} \label{section admissible pushouts of relative graphs} In order to discuss pushouts of relative Toeplitz graph algebras we recall Pedersen's theorem characterizing pullbacks of $C^*$-algebras in the case relevant to this paper. \begin{Theorem} \label{theorem pullback} Let $I$ and $J$ be ideals in a $C^*$-algebra $A$, and consider the commuting square of quotient maps in Figure \ref{figure 1}. This is a pullback diagram if and only if $IJ = 0$. \end{Theorem} \begin{proof} This follows easily from \cite[Proposition 3.1]{ped}, as we show here. According to this proposition, the diagram is a pullback if and only if the following three conditions are satisfied: \begin{enumerate}[(i)] \item $I \cap J = IJ = \{0\}$, \item $q_I^{-1}(q_J(A/J)) = \pi_I(A)$, \item $\pi_J(\ker \pi_I) = \ker q_J$. \end{enumerate} \noindent Condition (ii) is true since all four maps are surjective. To see that condition (iii) is true, note that $\pi_J(\ker \pi_I) = \pi_J(I) = (I + J)/J = \ker q_J$. Therefore the diagram is a pullback if and only if (i) holds.
\end{proof} \begin{figure} \begin{tikzpicture}[scale=3] \node (0_0) at (0,0) [rectangle] {$A/J$}; \node (0_1) at (0,1) [circle] {$A$}; \node (1_0) at (1,0) [rectangle] {$A/(I + J)$.}; \node (1_1) at (1,1) [circle] {$A/I$}; \draw[-latex,thick] (0_1) -- (0_0) node[pos=0.5, inner sep=0.5pt, left=1pt] {$\pi_J$}; \draw[-latex,thick] (0_1) -- (1_1) node[pos=0.5, inner sep=0.5pt, above=1pt] {$\pi_I$}; \draw[-latex,thick] (1_1) -- (1_0) node[pos=0.5, inner sep=0.5pt, right=1pt] {$q_I$}; \draw[-latex,thick] (0_0) -- (1_0) node[pos=0.5, inner sep=0.5pt, below=1pt] {$q_J$}; \end{tikzpicture} \captionof{figure}{} \label{figure 1} \end{figure} By the definition of the category of relative graphs, a pushout in this category determines, on the one hand, a commuting square of relative Toeplitz graph algebras (the outer square of Figure \ref{figure 3}), and on the other hand, a commuting diagram of quotient $C^*$-algebras (the upper left triangle of Figure \ref{figure 3}), where $I_j = \ker \pi_j$ for $j = 1,2$. \begin{figure}[h] \begin{tikzpicture} \node (E-A) at (0,0) [rectangle] {$\mathcal{T} C^*(E,A)$}; \draw (E-A) ++(5,0) node (F_1-A_1) {$\mathcal{T} C^*(F_1,A_1)$}; \draw (E-A) ++(0,-3) node (F_2-A_2) {$\mathcal{T} C^*(F_2, A_2)$}; \draw (E-A) ++(5,-3) node (F_0-A_0) {$\mathcal{T} C^*(F_0,A_0)$}; \draw (E-A) ++(2.5,-1.5) node (quot) {$\displaystyle \frac{\mathcal{T} C^*(E,A)}{I_1 + I_2}$}; \draw (F_0-A_0) ++(2.2,0) node (quot2) {$\cong \displaystyle \frac{\mathcal{T} C^*(E,A)}{I_0}$.}; \begin{scope}[>=latex] \draw[->, thick] (E-A) -- (F_1-A_1) node[pos=0.5, above] {$\pi_1$}; \draw[->, thick] (E-A) -- (F_2-A_2) node[pos=0.5, left] {$\pi_2$}; \draw[->, thick] (F_1-A_1) -- (F_0-A_0); \draw[->, thick] (F_2-A_2) -- (F_0-A_0); \draw[->, thick] (F_1-A_1) -- (quot); \draw[->, thick] (F_2-A_2) -- (quot); \end{scope} \end{tikzpicture} \captionof{figure}{} \label{figure 3} \end{figure} In this section we first prove that this upper left triangle is a pullback diagram of quotient $C^*$-algebras, using Theorem \ref{theorem pullback}. We then give a condition on the pushout that is necessary and sufficient for the inner and outer diagrams to coincide, so that the commuting square is in fact a pullback diagram of relative Toeplitz graph algebras. As in \cite{hrt} we call this condition \textit{admissibility}. Our condition is quite different from the conditions for admissibility in \cite{hrt}. In Example \ref{example compare} we give a detailed comparison in the setting of \cite{hrt}. Generally speaking, the main reason behind our approach is that when arbitrary graphs are considered, the breaking vertices give rise to a rich family of gauge-invariant ideals (as described in Theorem \ref{thm larger ideal}). The problem of characterizing those pushout diagrams of graphs that determine pullback diagrams of graph algebras leads naturally to the same problem in the larger context of relative graphs and relative Toeplitz graph algebras. In this setting it seems to us more appropriate to make the requirements of admissibility from \cite{hrt} part of the structure of the category of relative graphs. Thus we use the term \textit{admissible} for the new feature needed due to the interaction between the hereditary sets and the sets used for the relativizations. \begin{Notation} \label{main proofs notn} For the rest of this section, let $\alpha_i : (F_0,A_0) \to (F_i,A_i)$ be morphisms of relative graphs, for $i = 1, 2$, and let $(E,A)$ be the pushout.
Recall from Remark \ref{rmk general situation} the set $A_{12} = (A_1 \cup A_2) \cap F_0^0$, satisfying $A_{12} \subseteq A_0 \subseteq \reg{F_0}$. For the sake of notational convenience we will write $F_{12} := F_0$. \begin{enumerate}[(1)] \item \label{ideal notn} Let $I_i = J(E,A;F_i,A_i)$ for $i = 0, 1, 2, 12$. \item \label{u notn} Let $U^{(i)} = G(E,A)^{(0)} \setminus G(F_i,A_i)^{(0)}$ for $i = 0, 1, 2, 12$. \item \label{u decomp notn} For each of these four open sets we will make use of the corresponding decomposition as the union of three subsets, as given in the proof of Theorem \ref{thm larger ideal}. That is, for $i = 0,1,2,12$ we have \[ U^{(i)} = \bigsqcup_{j=1}^3 U^{(i)}_j = E^* H_{F_i,E} E^\infty \sqcup E^*(H_{F_i,E} \setminus A) \sqcup E^*(A_i \setminus A). \] \end{enumerate} \end{Notation} \begin{Remark} \label{rmk U's and I's} By \ref{main proofs notn}(\ref{ideal notn}), $\mathcal{T} C^*(F_i, A_i) = \mathcal{T} C^*(E,A)/I_i$ for $i=0,1,2,12$. By \ref{main proofs notn}(\ref{u notn}), $I_i$ is generated as an ideal (in $\mathcal{T} C^*(E,A)$) by $C_0(U^{(i)})$ for $i=0,1,2$, and 12. \end{Remark} \begin{Proposition}\label{prop pullback} $I_1 I_2 = 0$, and thus by Theorem \ref{theorem pullback}, $\mathcal{T} C^*(E,A)$ is the pullback of $\mathcal{T} C^*(F_1,A_1)$ and $\mathcal{T} C^*(F_2,A_2)$ over $\mathcal{T} C^*(E,A)/(I_1 + I_2)$. \end{Proposition} \begin{proof} We prove this by showing that $U^{(1)} \cap U^{(2)} = \varnothing$. It suffices to show that $U^{(1)}_i \cap U^{(2)}_j = \varnothing$ for $i \le j$. Since $U^{(i)}_1 \subseteq E^\infty$ and $U^{(j)}_k \subseteq E^*$ for $k > 1$, $U^{(i)}_1 \cap U^{(j)}_k = \varnothing$ for all $i,j$ and all $k > 1$. By equation \eqref{main theorem remark one}, $U^{(1)}_1 \cap U^{(2)}_1 = \varnothing = U^{(1)}_2 \cap U^{(2)}_2$. Next \[ U^{(1)}_2 \cap U^{(2)}_3 = E^*((H_{F_1,E} \setminus A) \cap (A_2 \setminus A)) = E^*((H_{F_1,E} \cap A_2) \setminus A) = \varnothing, \] since $H_{F_1,E} \cap A_2 = A_2 \setminus F_0^0 \subseteq A$, by equation \eqref{pushout eqn two}. Finally, \[ U^{(1)}_3 \cap U^{(2)}_3 = E^*(A_1 \setminus A) \cap E^*(A_2 \setminus A) = E^*\bigl((A_1 \setminus A) \cap (A_2 \setminus A)\bigr) = E^*\bigl((A_1 \cap A_2) \setminus A\bigr) = \varnothing, \] again by equation \eqref{pushout eqn two}. \end{proof} \begin{Proposition} \label{prop pullback 2} $I_1 + I_2 = I_{12}$, and thus $\mathcal{T} C^*(E,A) / (I_1 + I_2)$ is a relative Toeplitz graph algebra. \end{Proposition} \begin{proof} We prove this by showing that $U^{(1)} \cup U^{(2)} = U^{(12)}$. Since $H_{F_1,E} \cup H_{F_2,E} = H_{F_0,E}$ it follows that $U^{(1)}_1 \cup U^{(2)}_1 = U^{(12)}_1$ and $U^{(1)}_2 \cup U^{(2)}_2 = U^{(12)}_2$. Next, from equation \eqref{pushout eqn two} we have that \[ (A_1 \cup A_2) \setminus A = (A_1 \cup A_2) \cap F_0^0 \setminus A = A_{12} \setminus A. \] Then $U^{(1)}_3 \cup U^{(2)}_3 = E^*\bigl((A_1 \cup A_2) \setminus A\bigr)= E^*(A_{12} \setminus A) = U^{(12)}_3$. \end{proof} \begin{Proposition} \label{prop pullback 3} $I_{12} \subseteq I_0$. \end{Proposition} \begin{proof} Since $A_{12} \subseteq A_0$, Notation \ref{main proofs notn}\eqref{u decomp notn} implies that $U^{(12)} \subseteq U^{(0)}$. This is equivalent to the containment of the proposition. \end{proof} \begin{Definition} \label{def admissible} Let $\alpha_i : (F_0,A_0) \to (F_i,A_i)$ be morphisms of relative graphs, for $i = 1$, 2, and let $(E,A)$ be the pushout relative graph as in Definition \ref{definition relative graph}.
The pair $(\alpha_1,\alpha_2)$ is called \textit{admissible} if \begin{equation} \label{eqn admissible} A_0 \subseteq A_1 \cup A_2. \end{equation} \end{Definition} \begin{Remark} We wish to give some idea of what this condition means and why it is crucial for the pullback construction. Explicitly, equation \eqref{eqn admissible} requires that if the Cuntz-Krieger condition is imposed at a vertex $v$ in $\mathcal{T} C^*(F_0,A_0)$ then it is necessary to impose it at $v$ in at least one of $\mathcal{T} C^*(F_i,A_i)$, $i = 1,2$ (and in particular, $v$ must be regular in at least one of $F_1$ and $F_2$). Let us also try to give a more fundamental explanation. For $i = 0,1,2$ let $I_i$ be the kernel of the quotient map of $\mathcal{T} C^*(E,A)$ onto $\mathcal{T} C^*(F_i,A_i)$. Let $v \in A_0$. Let us write $D_0 = v F_0^1$, $D_i = v (F_i^1 \setminus F_0^1)$ for $i = 1,2$, $q_0 = \sum_{e \in D_0} s_e s_e^*$, and (heuristically) $q_i = \sum_{e \in D_i} s_e s_e^*$. With the groupoid picture in mind we let $\chi_{\{v\}} = p_v - q_0 - q_1 - q_2$. Since $v \in A_0$ we have $p_v - q_0 = p_v - \sum_{e \in v F_0^1} s_e s_e^* = 0$ in $\mathcal{T} C^*(F_0,A_0)$. Therefore $\chi_{\{v\}} + q_1 + q_2 = p_v - q_0 \in I_0$. We need it to also belong to $I_1 + I_2$ (as required by Theorem \ref{theorem pullback}). First suppose that $v \in \sing{E}$. Then $v$ must be singular in at least one of $F_1$ and $F_2$. Suppose for definiteness that $v \in \sing{F_1}$. Then $v \not\in A_1$, since $A_1 \subseteq \reg{F_1}$. For $e \in D_2$ we know that $s(e) \in H_{F_1,E}$, and hence $s_e \in I_1$. Thus if $v \in \reg{F_2}$ then $q_2 \in I_1$. If in addition $v \in A_2$ then $q_1 = p_v - q_0 - q_2 \in I_2$, and it follows that $p_v - q_0 \in I_1 + I_2$. But if $v \not\in A_2$, or if $v \in \sing{F_2}$, i.e. if \eqref{eqn admissible} fails, then $\chi_{\{v\}} + q_1 + q_2$ cannot be apportioned between $I_1$ and $I_2$. Next suppose that $v \in \reg{E}$. We use the same notations as above. Again, since $v \in A_0$ we have $\chi_{\{v\}} + q_1 + q_2 = p_v - q_0 \in I_0$. Since $v \in \reg{E}$ we know that $D_1$ and $D_2$ are finite, so $q_1$ and $q_2$ are not ``heuristic''. Therefore $q_1 \in I_2$ and $q_2 \in I_1$, by the same reasoning used for $q_2$ previously. If $v$ is in $A_1$ or in $A_2$, it follows as before that $p_v - q_0 \in I_1 + I_2$. But again, if $v \not\in A_1 \cup A_2$ then this is not possible. \end{Remark} \begin{Theorem} \label{thm main} Let $\alpha_i : (F_0,A_0) \to (F_i,A_i)$ be morphisms of relative graphs, for $i = 1$, 2, and let $(E,A)$ be the pushout. We use the notation of \ref{main proofs notn}. The following statements are equivalent: \begin{enumerate}[(a)] \item \label{thm main 1} The commuting square of relative Toeplitz graph $C^*$-algebras corresponding to the pushout of $(\alpha_1,\alpha_2)$ is a pullback. (This refers to the outer square in Figure \ref{figure 3}.) \item \label{thm main 2} $(\alpha_1,\alpha_2)$ is admissible. \item \label{thm main 3} $A_0 = A_{12}$. \item \label{thm main 4} $I_0 = I_{12}$. \item \label{thm main 5} $A_0 \cap \sing{F_1} \subseteq A_2$, $A_0 \cap \sing{F_2} \subseteq A_1$, and $A_0 \cap \reg{E} \subseteq A_1 \cup A_2$. \end{enumerate} \end{Theorem} \begin{proof} \textit{\eqref{thm main 2} $\Leftrightarrow$ \eqref{thm main 3}:} Recall from Notation \ref{main proofs notn} that $A_{12} = (A_1 \cup A_2) \cap F_0^0 \subseteq A_0 \subseteq F_0^0$. If $(\alpha_1,\alpha_2)$ is admissible then $A_0 \subseteq A_1 \cup A_2$.
Since $A_0 \subseteq F_0^0$ it follows that $A_0 \subseteq A_{12}$, hence $A_0 = A_{12}$. Conversely, if $A_0 = A_{12}$ then since $A_{12} \subseteq A_1 \cup A_2$ it follows that $(\alpha_1,\alpha_2)$ is admissible. \textit{\eqref{thm main 1} $\Leftrightarrow$ \eqref{thm main 4}:} By Theorem \ref{theorem pullback} and Figure \ref{figure 3}, \eqref{thm main 1} is equivalent to the equality $I_1 + I_2 = I_0$. By Proposition \ref{prop pullback 2} this is equivalent to \eqref{thm main 4}. \noindent \textit{\eqref{thm main 3} $\Leftrightarrow$ \eqref{thm main 4}:} This follows from Remark \ref{rmk U's and I's}. \noindent \textit{\eqref{thm main 3} $\Leftrightarrow$ \eqref{thm main 5}:} Suppose that \eqref{thm main 3} holds. Let $v \in A_0$. Then $v \in A_i \subseteq \reg{F_i}$ for $i=1$ or $i=2$. If $v \in \sing{F_1}$ then $v \not\in A_1$, hence $v \in A_2$. Similarly, if $v \in \sing{F_2}$ then $v \in A_1$. The third condition is immediate. Next suppose that \eqref{thm main 5} holds. Let $v \in A_0$. Since $\sing{E} = \sing{F_1} \cup \sing{F_2}$, if $v \in \sing{E}$ then the first two conditions in \eqref{thm main 5} imply that $v \in A_2 \cup A_1$. If $v \in \reg{E}$ then the third condition implies that $v \in A_1 \cup A_2$. \end{proof} \section{Examples} \label{section examples} We first consider the two extreme possibilities. Let $F_0 \hookrightarrow F_i$ for $i = 1,2$, and let $E$ be as in \eqref{pushout eqn one}. \begin{Example} Let $A_i = \varnothing$ for $i = 0,1,2$. Definition \ref{definition relative graph}\eqref{relative graph 3} reduces to $\varnothing \subseteq \varnothing$, so we do indeed have morphisms of relative graphs. It is clear that equation \eqref{pushout eqn two} holds, so we obtain a pushout diagram in the category of relative graphs. The corresponding commuting square of $C^*$-algebras consists of Toeplitz graph algebras (see Figure \ref{figure 4}). \begin{figure}[h] \begin{tikzpicture}[scale=2.5] \node (0_0) at (0,0) [rectangle] {$\mathcal{T} C^*(F_2)$}; \node (0_1) at (0,1) [rectangle] {$\mathcal{T} C^*(E)$}; \node (1_0) at (1,0) [rectangle] {$\mathcal{T} C^*(F_0)$.}; \node (1_1) at (1,1) [rectangle] {$\mathcal{T} C^*(F_1)$}; \draw[-latex,thick] (0_1) -- (0_0); \draw[-latex,thick] (0_1) -- (1_1); \draw[-latex,thick] (1_1) -- (1_0); \draw[-latex,thick] (0_0) -- (1_0); \end{tikzpicture} \captionof{figure}{} \label{figure 4} \end{figure} The condition in Definition \ref{def admissible} reduces to $\varnothing \subseteq \varnothing$, so we know that this is a pullback diagram of $C^*$-algebras. (This is a special case of \cite[Theorem 3.3]{kpsw}.) \end{Example} \begin{Example} \label{example CK algebras} Let $A_i = \reg{F_i}$ for $i = 0,1,2$. As noted in Corollary \ref{cor kernel of quotient map}, if the inclusion $(F_0,\reg{F_0}) \hookrightarrow (F_i,\reg{F_i})$ is a morphism then $H_{F_0,F_i}$ must be saturated in $F_i$. It then follows that $H_{F_i,E}$ is saturated in $E$ for $i = 1,2$. We must check Definition \ref{definition relative graph}\eqref{relative graph 3}. First, if $v \in \reg{E} \cap F_1^0$, the fact that $H_{F_1,E}$ is saturated in $E$ implies that $v$ is not a source in $F_1$. Therefore $v \in \reg{F_1}$. Similarly we have $\reg{E} \cap F_2^0 \subseteq \reg{F_2}$. Next, if $v \in \reg{F_1} \cap F_0^0$, then the fact that $H_{F_0,F_1} = H_{F_2,E}$ is saturated in $F_1$ implies that $v$ is not a source in $F_0$. Therefore $v \in \reg{F_0}$, and similarly, $\reg{F_2} \cap F_0^0 \subseteq \reg{F_0}$.
Thus $(F_i,\reg{F_i}) \hookrightarrow (E,\reg{E})$ are, in fact, morphisms of relative graphs. Next we check equation \eqref{pushout eqn two} to verify that we have a pushout diagram in the category of relative graphs. We must show that $\reg{E} = (\reg{F_1} \setminus F_0^0) \cup (\reg{F_2} \setminus F_0^0) \cup (\reg{F_1} \cap \reg{F_2})$. For $\supseteq$, note that $\reg{F_1} \setminus F_0^0 \subseteq F_1^0 \setminus F_0^0 = H_{F_0,F_1} = H_{F_2,E}$. So if $v \in \reg{F_1} \setminus F_0^0$ then $v \not\in F_2^0$, hence $v F_2^1 = \varnothing$, i.e. $v F_1^1 = v E^1$, and hence $v \in \reg{E}$. Similarly we have that $\reg{F_2} \setminus F_0^0 \subseteq \reg{E}$. Finally, it is clear that $\reg{F_1} \cap \reg{F_2} \subseteq \reg{E}$. For $\subseteq$, let $v \in \reg{E}$. If $v \in H_{F_2,E}$ then $v E^1 = v F_1^1$, so that $v \in \reg{F_1} \setminus F_0^0$. Similarly, if $v \in H_{F_1,E}$ then $v \in \reg{F_2} \setminus F_0^0$. Finally, let $v \in F_0^0$. If, say, $v \not\in \reg{F_1}$, then it must be a source in $F_1$. This means that $v E^1 = v F_2^1 H_{F_1,E}$. By the saturation property, $v \in H_{F_1,E}$, a contradiction. Thus $v \in \reg{F_1}$, and a similar argument shows that $v \in \reg{F_2}$. Now we consider admissibility. Theorem \ref{thm main}\eqref{thm main 5} becomes: $\reg{F_0} \cap \sing{F_1} \subseteq \reg{F_2}$, $\reg{F_0} \cap \sing{F_2} \subseteq \reg{F_1}$, and $\reg{F_0} \cap \reg{E} \subseteq \reg{F_1} \cup \reg{F_2}$. The third of these is automatically true, while the first two are equivalent to each other, and to the condition $\sing{F_1} \cap \sing{F_2} \cap \reg{F_0} = \varnothing$. In words, a vertex that is regular in $F_0$ cannot receive infinitely many edges from both of $H_{F_1,E}$ and $H_{F_2,E}$ (or equivalently, a vertex cannot be breaking for both of $H_{F_1,E}$ and $H_{F_2,E}$). Thus we find that the commuting square of Cuntz-Krieger algebras in Figure \ref{figure 5} is a pullback diagram if and only if $\sing{F_1} \cap \sing{F_2} \cap \reg{F_0} = \varnothing$. \begin{figure} \begin{tikzpicture}[scale=2.5] \node (0_0) at (0,0) [rectangle] {$C^*(F_2)$}; \node (0_1) at (0,1) [rectangle] {$C^*(E)$}; \node (1_0) at (1,0) [rectangle] {$C^*(F_0)$.}; \node (1_1) at (1,1) [rectangle] {$C^*(F_1)$}; \draw[-latex,thick] (0_1) -- (0_0); \draw[-latex,thick] (0_1) -- (1_1); \draw[-latex,thick] (1_1) -- (1_0); \draw[-latex,thick] (0_0) -- (1_0); \end{tikzpicture} \captionof{figure}{} \label{figure 5} \end{figure} We remark that if we let $A_0 = \reg{F_0} \setminus (\sing{F_1} \cap \sing{F_2})$, then the diagram becomes that of Figure \ref{figure 6}, which is a pullback. \begin{figure} \begin{tikzpicture}[scale=2.5] \node (0_0) at (0,0) [rectangle] {$C^*(F_2)$}; \node (0_1) at (0,1) [rectangle] {$C^*(E)$}; \node (1_0) at (1,0) [rectangle] {$\mathcal{T} C^*(F_0,A_0)$.}; \node (1_1) at (1,1) [rectangle] {$C^*(F_1)$}; \draw[-latex,thick] (0_1) -- (0_0); \draw[-latex,thick] (0_1) -- (1_1); \draw[-latex,thick] (1_1) -- (1_0); \draw[-latex,thick] (0_0) -- (1_0); \end{tikzpicture} \captionof{figure}{} \label{figure 6} \end{figure} \end{Example} \begin{Example} \label{example compare} We now compare our notion of admissible with that in \cite{hrt}. We describe the differences between the two approaches. The situation in \cite{hrt} is that of a graph $E$ and two subgraphs $F_1$ and $F_2$. In order to have a pushout of graphs it is necessary to assume that $E = F_1 \cup F_2$. Then $E$ is the pushout of the diagram $F_1 \cap F_2 \hookrightarrow F_i$, $i = 1,2$.
In order that the inclusions $F_i \hookrightarrow E$ define quotient maps $C^*(E) \to C^*(F_i)$ it is necessary that $E^0 \setminus F_i^0$ be saturated and hereditary in $E$ for $i = 1,2$. In order that the inclusions $F_1 \cap F_2 \hookrightarrow F_i$ define quotient maps $C^*(F_i) \to C^*(F_1 \cap F_2)$ it is necessary that $F_i^0 \setminus (F_1^0 \cap F_2^0) = E^0 \setminus F_j^0$ be saturated and hereditary in $F_i$, for $i = 1,2$ and $i \not= j$. In fact, the quotient of $C^*(F_i)$ by the ideal corresponding to the saturated hereditary set $F_i^0 \setminus (F_1^0 \cap F_2^0)$ equals the algebra of the graph $F_0$ defined by $F_0^0 = F_1^0 \cap F_2^0$ and $F_0^1 = F_0^0 E^1 F_0^0$. Thus it is necessary that $F_1 \cap F_2 = F_0$. In order to impose these requirements, the pair $F_1$ and $F_2$ is called \textit{admissible} (\cite[Definition 2.1]{hrt}) if, adjusting for the Australian convention, and using the graph $F_0$ defined above, \begin{enumerate}[(1)] \item $E = F_1 \cup F_2$ \item $\source{F_1 \cap F_2} \subseteq \source{F_1} \cap \source{F_2}$ \item $F_1^1 \cap F_2^1 = F_i^1 F_0^0$ (equivalently, $F_1^1 \cap F_2^1 = E^1 F_0^0$, as is easily checked) \item $F_i^0 \setminus F_0^0$ has no breaking vertices in $E$ or in $F_i$, $i = 1,2$. \end{enumerate} Given (1), \cite[Lemma 2.2]{hrt} implies that (3) is equivalent to the hereditary property of $F_i^0 \setminus F_0^0$ in $F_i$ (and hence in $E$) together with the condition $F_1 \cap F_2 = F_0$ mentioned above. Furthermore, \cite[Lemma 2.3]{hrt} shows that (2) implies $F_i^0 \setminus F_0^0$ is saturated in $F_i$, $i = 1,2$. Given (1) it also follows that $F_i^0 \setminus F_0^0$ is saturated in $E$. We remark that (2) is stronger than needed for saturation. It may be replaced by the weaker condition: (2)$'$ $\source{F_1 \cap F_2} \subseteq \sing{F_1} \cap \sing{F_2}$. In fact, (2)$'$ is equivalent to the saturation of $F_i^0 \setminus F_0^0$ in $F_i$, $i = 1,2$. In the present paper we begin with two graphs, $F_1$ and $F_2$, and a common subgraph $F_0$. We \textit{construct} the graph $E$ as the pushout of the diagram $F_0 \hookrightarrow F_i$, $i = 1,2$. Then the condition $F_0 = F_1 \cap F_2$ holds by definition. We include the requirement that $F_i^0 \setminus F_0^0$ be hereditary in $F_i$ as part of our definition of morphism of (relative) directed graphs. Thus (1), (2)$'$, (3) are equivalent to our setup (in the special case that $A_i = \reg{F_i}$ for $i = 0,1,2$). Since we allow the graphs to have breaking vertices, we do not include a version of (4). Instead, our more general context requires our version of admissibility, namely, $\reg{F_0} \subseteq \reg{F_1} \cup \reg{F_2}$. Equivalently, a vertex in $F_0$ cannot be a breaking vertex for $F_i^0 \setminus F_0^0$ in $F_i$ for both $i = 1,2$ (as in Example \ref{example CK algebras}). We give the following examples. \end{Example} \begin{Example} \begin{figure} \begin{tikzpicture} \node (0_0) at (0,0) [rectangle] {$u$}; \node (1_0) at (1.7,0) [rectangle] {$w,$}; \node(m1_0) at (-1.7,0) [rectangle] {$v$}; \draw[thick,->] (-1.5,0) to node[above]{$e_i$} (-.2,0); \draw[thick,->] (1.5,0) to node[above]{$f_i$} (.2,0); \draw[->,thick] (0,-.75) .. controls (.5,-.75) and (.5,-.3) .. (0_0) node[pos=0, inner sep=0.5pt, anchor=north] {$d$}; \draw[thick] (0,-.75) .. controls (-.5,-.75) and (-.5,-.3) .. (0_0); \node (more) at (3.5,0) [rectangle] {$i = 1,2,\ldots$}; \end{tikzpicture} \captionof{figure}{} \label{figure 7} \end{figure} Let $E$ be the graph in Figure \ref{figure 7}.
Let $F_0,F_1,F_2$ be the graphs defined by \begin{align*} &F_0^0 = \{u\},\ F_0^1 = \{d\} \\ &F_1^0 = \{u,v\},\ F_1^1 = \{d\} \cup \{e_i : i \in \IN\} \\ &F_2^0 = \{u,w\},\ F_2^1 = \{d\} \cup \{f_i : i \in \IN\}. \end{align*} Here $H_{F_0,F_1} = \{v\}$, $H_{F_0,F_2} = \{w\}$, and $u$ is a breaking vertex in $F_i$ for $H_{F_0,F_i}$, $i = 1,2$. Since there is a breaking vertex, this is not an admissible pushout according to \cite{hrt}. In this paper, the pushout diagram is not admissible because $u \in \reg{F_0} = A_0$ but $u \not\in \reg{F_i} = A_i$ for $i = 1,2$. Theorem \ref{thm main} implies that the corresponding diagram of graph $C^*$-algebras is not a pullback. \end{Example} \begin{Example} \begin{figure} \begin{tikzpicture} \node (0_0) at (0,0) [rectangle] {$u$}; \node (1_0) at (1.7,0) [rectangle] {$w$,}; \node(m1_0) at (-1.7,0) [rectangle] {$v$}; \draw[thick,->] (-1.5,0) to node[above]{$e$} (-.2,0); \draw[thick,->] (1.5,0) to node[above]{$f_i$} (.2,0); \draw[->,thick] (0,-.75) .. controls (.5,-.75) and (.5,-.3) .. (0_0) node[pos=0, inner sep=0.5pt, anchor=north] {$d$}; \draw[thick] (0,-.75) .. controls (-.5,-.75) and (-.5,-.3) .. (0_0); \node (more) at (3.5,0) [rectangle] {$i = 1,2,\ldots$}; \end{tikzpicture} \captionof{figure}{} \label{figure 8} \end{figure} Let $E$ be the graph in Figure \ref{figure 8}. Let $F_0,F_1,F_2$ be the graphs defined by \begin{align*} &F_0^0 = \{u\},\ F_0^1 = \{d\} \\ &F_1^0 = \{u,v\},\ F_1^1 = \{d,e\} \\ &F_2^0 = \{u,w\},\ F_2^1 = \{d\} \cup \{f_i : i \in \IN\}. \end{align*} Again, $H_{F_0,F_1} = \{v\}$, $H_{F_0,F_2} = \{w\}$. This time $u$ is a breaking vertex for $H_{F_0,F_2}$ in $F_2$, but not for $H_{F_0,F_1}$ in $F_1$. Since there is a breaking vertex, this is not an admissible pushout according to \cite{hrt}. In this paper, the pushout diagram is admissible because $u \in \reg{F_0} \cap \reg{F_1}$. Therefore Theorem \ref{thm main} implies that the corresponding diagram of graph algebras is a pullback. \end{Example}
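\begin{Remark} The admissibility test in the two preceding examples is purely combinatorial, so it can be checked mechanically. The following minimal Python sketch (our own illustrative encoding, not taken from \cite{hrt}) records, for each graph, the number of edges received by each vertex, with \texttt{math.inf} standing for an infinite receiver, and evaluates the criterion $\reg{F_0} \subseteq \reg{F_1} \cup \reg{F_2}$ of Example \ref{example CK algebras} on the graphs of Figures \ref{figure 7} and \ref{figure 8}.
\begin{verbatim}
import math

def reg(in_count):
    # Regular vertices of a graph encoded as {vertex: number of
    # incoming edges}; math.inf encodes an infinite receiver.
    return {v for v, k in in_count.items() if 0 < k < math.inf}

# Figure 7: u receives the loop d plus infinitely many edges e_i
# (in F1) and f_i (in F2); v and w are sources.
F0 = {'u': 1}
F1 = {'u': math.inf, 'v': 0}
F2 = {'u': math.inf, 'w': 0}
print(reg(F0) <= reg(F1) | reg(F2))   # False: not admissible

# Figure 8: the edges e_i are replaced by the single edge e,
# so u becomes regular in F1.
F1b = {'u': 2, 'v': 0}
print(reg(F0) <= reg(F1b) | reg(F2))  # True: admissible
\end{verbatim}
\end{Remark}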
\section{Introduction} The two-stage program is one of the most fundamental optimization problems and has broad applications, see e.g., \cite{zhang2018ambulance,9046836}. It is observed that its coefficients are usually uncertain and ignoring their uncertainties may lead to poor decisions \cite{paridari2015robust,9093973}. In the literature, the classical robust optimization (RO) has been proposed to handle the uncertainty in the two-stage program by restricting the uncertain coefficients to some given sets and then minimizing the worst-case cost over all possible realizations \cite{ben2009robust}. However, it ignores the distribution information of stochastic uncertainty and may return a conservative solution \cite{zhao2018inventory}. To this end, the stochastic program (SP) is adopted to address the uncertainty via a distribution function \cite{shapiro2009lectures}, and in practice is solved by using an empirical distribution in the sample-average approximation (SAA) method \cite{shapiro1998simulation}. The SAA method is effective only when adequate and high-quality samples are obtained cheaply \cite{shapiro1998simulation}. If samples are of low quality, the empirical distribution may significantly deviate from the true distribution and the SAA method exhibits poor performance. An alternative approach is to apply the distributionally robust (DR) optimization technique to address stochastic uncertainty by assuming that the true distribution belongs to an ambiguity set of probability distributions \cite{shapiro2002minimax,shi2018distributionally}. This method overcomes inherent drawbacks of the SP and RO as it does not require an exact distribution and can exploit the sample information. In fact, ample evidence implies that the DR method can yield high-quality solutions within a reasonable computation cost \cite{van2015distributionally,Zhang2017Robust,xiong2016distributionally}. Thus, our exposition concentrates on DR two-stage linear programs over an ambiguity set of distributions. The ambiguity set is essential in the DR programs. It should be large enough to include the true distribution with a high probability but cannot be too ``large'' to avoid very conservative decisions. \cite{bertsimas2010models,hanasusanto2016k,ling2017robust} adopt the moment-based ambiguity set, which includes distributions with specified moment constraints. The DR two-stage linear program over the set of distributions with {\em{exactly}} known first- and second-order moments is reformulated either as a semidefinite program \cite{bertsimas2010models} or a mixed-integer linear program of a polynomial size \cite{hanasusanto2016k} under different settings. Observing that the moment mismatch is unavoidable, \cite{ling2017robust} further considers the moment uncertainty, which results in an intractable model. In this work, we study a data-driven DR two-stage linear program over a ball centered at the empirical distribution of a finite sample dataset, where the ball radius reflects our confidence in the empirical distribution. In particular, the lower the confidence, the larger the ball radius. The sample dataset can be utilized in a flexible way to handle the distribution uncertainty, e.g., the degree of conservatism can be controlled by tuning the radius. Moreover, our model applies to the situation where the true distribution is slowly time-varying. Note that the empirical distribution is discrete and the true distribution is usually continuous.
We adopt the 1-Wasserstein metric to measure the distance between distributions, which is different from the Kullback-Leibler divergence in \cite{chen2018distributionally} and the $L^1$-norm in \cite{jiang2018risk}. Then, we obtain the DR two-stage linear program over 1-Wasserstein balls and develop a second-order conic programming (SOCP) approach to solve it. Since the Wasserstein ball contains the true distribution with a high probability \cite{esfahani2018data}, the proposed DR problem is expected to exhibit good out-of-sample performance. Moreover, the Wasserstein ball can asymptotically degenerate to the true distribution as the sample size increases to infinity \cite{esfahani2018data}. This work considers the distribution uncertainty either in the objective function or in the constraints of two-stage linear programs. Specifically, we first study the case with distribution uncertainty only in the objective function and {\em exactly} reformulate it as an SOCP problem, which covers all the results of the conference version of this work \cite{wang2020solving}. Then we proceed to the case with distribution uncertainty only in the constraints and show that such a program is generally NP-hard as it requires solving a norm maximization problem over a polyhedron. The good news is that the resulting program can be reduced to an SOCP problem if the extreme points of the polyhedron are given a priori. Motivated by this and also inspired by \cite{zeng2013solving,ding2015parallel}, we design a novel constraint generation algorithm with provable convergence to approximately solve it. It should be noted that \cite{hanasusanto2018conic} and \cite{xie2019tractable} study the DR two-stage linear programs with the $2$-Wasserstein and $\infty$-Wasserstein metrics, respectively. In \cite{hanasusanto2018conic}, the distribution uncertainty arises simultaneously in the objective function and constraints, which renders the model NP-hard, and co-positive programs are utilized to approximately solve it. \cite{xie2019tractable} reformulates the model as a computationally demanding mixed-integer problem. In comparison, we \textit{exactly} reformulate our model with distribution uncertainty only in the objective as an SOCP problem and design an SOCP approach to approximately solve the NP-hard problem with uncertainty only in the constraints. Moreover, we explicitly derive the distribution achieving the worst-case cost by simply perturbing each sample, based on which we can further assess the quality of an optimal decision. This is clearly in sharp contrast to \cite{bertsimas2018adaptive}, \cite{hanasusanto2018conic} and \cite{xie2019tractable}. Overall, we summarize our contributions as follows: \begin{itemize} \item We propose a novel SOCP approach to solve the data-driven DR two-stage linear programs over 1-Wasserstein balls. \item We \textit{exactly} reformulate the model with uncertainty only in the objective as a solvable SOCP problem. \item The model with uncertainty only in the constraints is shown to be NP-hard. To approximately solve it, we develop an SOCP-based constraint generation algorithm with provable convergence. \item The good out-of-sample performance and the computational complexity of our model are validated by experiments. \end{itemize} The rest of this paper is organized as follows. Section \ref{Pro-For} proposes the DR two-stage linear program over the 1-Wasserstein ball.
Section \ref{uino} reformulates the model with the distribution uncertainty only in the objective function as a tractable SOCP problem. Section \ref{uinc} studies the model with uncertainty only in constraints and presents an SOCP-based constraint generation algorithm. Section \ref{wos-cas} derives the distribution achieving the worst-case cost. Section \ref{sim} reports numerical results to illustrate the performance of the proposed model, and the paper is concluded in Section \ref{con}. \\ {\bf Notation}: We denote the sets of real numbers and positive real numbers by $\mathbb{R}$ and $\mathbb{R}_+$, respectively. A boldface lowercase letter denotes a vector, e.g., $\bm{x} = [x_1,\dots,x_n]^T\in \mathbb{R}^n$. Special vectors include the zero vector $\bm{0}$ and the all-one vector $\bm{e}$. $\|\cdot\|_p$ denotes the $l_p$-norm. Let $[N]=\{1,2,\dots,N\}$ and let $|\mathcal{E}|$ denote the cardinality of $\mathcal{E}$. The letters s.t. are an abbreviation of the phrase ``subject to''. $\text{Diag}(\cdot)$ denotes a diagonal matrix with the vector $(\cdot)$ as its diagonal elements. \section{Problem Formulation} \label{Pro-For} \subsection{The Two-stage Stochastic Linear Optimization}\label{cla-pro} Consider the classical two-stage stochastic linear program \cite{birge2011introduction} \begin{equation} \begin{aligned} \label{classical} \minimize_{\bm{x} \in \mathcal{X}} \ \ \bm{c}^T\bm{x} + \mathbb{E}_\mathbb{F}[Q(\bm{x},\bm{\xi})], \end{aligned} \end{equation} where $\bm{x} \in \mathbb{R}^{n}$ is the first-stage decision vector from a compact set $\mathcal{X}$ and is decided before the realization of a random vector $\bm{\xi}\in \mathbb{R}^{m}$ with the distribution $\mathbb{F}$. The second-stage cost is evaluated based on the expectation of the following recourse problem \begin{equation} \begin{aligned} \label{LO-Two} Q(\bm{x},\bm{\xi}) = &\min \ \bm{z}(\bm{\xi})^T\bm{y} \\ &\begin{array}{ll} \sta & A(\bm{\xi})\bm{x}+B\bm{y} \ge \bm{b}(\bm{\xi})\\ &\bm{y} \in \mathbb{R}^{m}_+, \end{array} \end{aligned} \end{equation} where $B\in \mathbb{R}^{k\times m}$ is the \textit{recourse matrix} and $\bm{z}(\bm{\xi}) \in \mathbb{R}^{m}$, $A(\bm{\xi}) \in \mathbb{R}^{k\times n}$ and $\bm{b}(\bm{\xi}) \in\mathbb{R}^{k}$ depend on the random vector $\bm{\xi}$. In the sequel, we study models with uncertainty only in the objective function or only in the constraints, each motivated by a notable example below; see also \cite{ling2017robust,bertsimas2010models,bertsimas2018adaptive}.\\ \begin{exmp}(\cite{ling2017robust}) \label{exmp-f} Consider a portfolio program with $n$ assets in which investors can invest over two stages. Since the asset returns in the second stage are generally random, a stochastic two-stage portfolio program is designed to maximize the return: \begin{equation} \begin{aligned} \label{port} \minimize_{\bm{e}^T\bm{x} = 1,~\bm{x} \ge \bm{0}} \ \ -(\bm{e}+\bm{c})^T\bm{x} + \mathbb{E}_{\mathbb{F}}[Q(\bm{x},\bm{\xi})], \end{aligned} \end{equation} where $\bm{x},\bm{c}\in \mathbb{R}^n$ are the vectors of invested dollars and returns for the $n$ assets in the first stage, and $Q(\bm{x},\bm{\xi})$ is given by \begin{align} \label{port_q} Q(\bm{x},\bm{\xi})= \min \ \ & -(\bm{e}+\bm{\xi})^T\bm{y} \\ {\rm s.t.} \ \ & \bm{y} \ge \bm{0}, \bm{\Delta}^s \ge 0, \bm{\Delta}^b \ge 0 \nonumber \\ & A\bm{x}+(1-\theta)\bm{\Delta}^b-(1+\theta)\bm{\Delta}^s = \bm{y}, \nonumber \end{align} where $\bm{y},\bm{\xi}\in \mathbb{R}^n$ are the vectors of invested dollars and random returns for the assets in the second stage.
The matrix $A=\text{Diag}(\bm{e}+\bm{c})$, the vectors $\bm{\Delta}^s$ and $\bm{\Delta}^b$ denote the dollars for selling and buying the assets, and $\theta$ is the transaction cost.\\ \end{exmp} \begin{exmp}(\cite{kall1994stochastic}) \label{exmp-c} Consider a material order problem with $n$ raw materials and $m$ desired products. Let $\bm{b}\in \mathbb{R}^m$ denote the market demand vector for products. Let $a_{ij}$ be the amount of product $i$ produced per unit of material $j$ and $A = [a_{ij}]_{m\times n}$ be the matrix of the production amounts for all materials. The market demand is usually time-varying and the uncertainty in the production amount is generally inevitable due to the quality of raw materials. Hence, it is unavoidable to introduce uncertainty $\bm{\xi}$ to the demand vector $\bm{b}$ and the matrix $A$, and the order problem is then formulated as \begin{align} \label{order} &\minimize_{\bm{e}^T\bm{x} \le u, ~ \bm{x} \ge \bm{0}} \left\{\bm{c}^T\bm{x} + \mathbb{E}_\mathbb{F}[Q(\bm{x},\bm{\xi})]\right\}, \end{align} where $u$ is the total capacity of the $n$ materials, $\bm{c}\in \mathbb{R}^n$ is the cost vector of the $n$ materials, and $Q(\bm{x},\bm{\xi})$ is given as \begin{equation} \begin{aligned} \label{order-1} Q(\bm{x},\bm{\xi}) = & \min \ \ \bm{z}^T\bm{y} \\ &\begin{array}{ll} {\rm s.t.} &A(\bm{\xi})\bm{x}+\bm{y} \ge \bm{b}(\bm{\xi})\\ &\bm{y} \in \mathbb{R}^{m}_+, \end{array} \end{aligned} \end{equation} where $\bm{z}\in\mathbb{R}^m$ is the penalty vector per unit of undeliverable products and $\bm{y}\in\mathbb{R}_+^m$ is the corresponding shortage amount vector. \end{exmp} Motivated by the above examples, we consider that $\bm{z}(\bm{\xi})$, $A(\bm{\xi})$ and $\bm{b}(\bm{\xi})$ in \eqref{classical} depend affinely on $\bm{\xi}$, i.e., \begin{equation} \begin{aligned} \label{uncertainc} &\bm{z}(\bm{\xi}) = \bm{z}_0 + \sum\limits_{i=1}^{m}\xi_i\bm{z}_i, ~ A(\bm{\xi}) = A_0 + \sum\limits_{i=1}^{m}\xi_iA_i, \\ &\bm{b}(\bm{\xi}) = \bm{b}_0 + \sum\limits_{i=1}^{m}\xi_i\bm{b}_i, \end{aligned} \end{equation} where $\bm{z}_0,\bm{z}_1,\dots,\bm{z}_{m} \in \mathbb{R}^{m}$, $\bm{b}_0,\bm{b}_1,\dots,\bm{b}_{m} \in \mathbb{R}^{k}$ and $A_0,A_1,\dots,A_{m} \in \mathbb{R}^{k \times n}$ are given a priori. In fact, the affine uncertainty has also been adopted in \cite{bertsimas2018adaptive,ling2017robust}. The following condition guarantees the feasibility of the second-stage problem in \eqref{LO-Two} and is satisfied by many problems, e.g., the production planning problem, the newsvendor problem and its variants \cite{birge2011introduction}. \\ \begin{assum} \label{rel-com} The second-stage problem in \eqref{LO-Two} is always feasible for any $\bm{x} \in \mathcal{X}$ and $\bm{\xi}$. \end{assum} \subsection{Distributionally Robust Two-stage Problems} \label{TDROLOP} The program in \eqref{classical} generally requires an exact distribution $\mathbb{F}$ of $\bm{\xi}$. In practice, $\mathbb{F}$ can only be estimated through a finite sample dataset $\{\widehat{\bm{\xi}}^i\}_{i=1}^N$ and a common idea is to adopt the SAA method, where $\mathbb{F}$ is approximated by an empirical distribution $\mathbb{F}_N$ over the sample dataset, i.e., $$\mathbb{F}_N(\bm{\xi})=\frac{1}{N}\sum_{i=1}^{N}\bm{1}_{\{\widehat{\bm{\xi}}^i\le\bm{\xi}\}},$$ where $\bm{1}_A$ is the indicator of event $A$.
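As an aside, for a fixed $\bm{x}$ and a sample $\widehat{\bm{\xi}}^i$, the recourse value $Q(\bm{x},\widehat{\bm{\xi}}^i)$ in \eqref{LO-Two} under the affine parameterization \eqref{uncertainc} is a single LP and can be evaluated with any off-the-shelf solver. The following minimal Python sketch (the SciPy interface is merely one possible choice, and all problem data are assumed given) also averages these values over the samples, which is exactly the sample-average term appearing in the approximation below.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def recourse(x, xi, z_list, A_list, b_list, B):
    # Assemble z(xi), A(xi), b(xi) as in the affine model, where the
    # 0-th entry of each list is z_0 (resp. A_0, b_0).
    z = z_list[0] + sum(xi[j] * z_list[j + 1] for j in range(len(xi)))
    A = A_list[0] + sum(xi[j] * A_list[j + 1] for j in range(len(xi)))
    b = b_list[0] + sum(xi[j] * b_list[j + 1] for j in range(len(xi)))
    # min z^T y  s.t.  A x + B y >= b,  y >= 0,
    # rewritten as -B y <= A x - b for linprog.
    res = linprog(c=z, A_ub=-B, b_ub=A @ x - b, bounds=(0, None))
    return res.fun

def saa_second_stage(x, samples, z_list, A_list, b_list, B):
    # Empirical average of the recourse values over the N samples.
    return np.mean([recourse(x, xi, z_list, A_list, b_list, B)
                    for xi in samples])
\end{verbatim}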
Then the stochastic linear problem in \eqref{classical} is approximated by \begin{equation} \label{SSP-two} \minimize_{\bm{x} \in \mathcal{X}} \ \ \left\{\bm{c}^T\bm{x} + \frac{1}{N}\sum_{i=1}^{N}Q(\bm{x},\widehat{\bm{\xi}}^i)\right\}. \end{equation} By the Glivenko-Cantelli theorem \cite{cantelli1933sulla}, the distribution $\mathbb{F}_N$ weakly converges to the true distribution $\mathbb{F}$ as $N$ increases to infinity. This implies the asymptotic convergence of \eqref{SSP-two} to the stochastic model \eqref{classical}. Hence, the SAA method is sensible only when $\mathbb{F}_N$ well approximates the true distribution $\mathbb{F}$. However, insufficient and/or low-quality samples may lead to an empirical distribution $\mathbb{F}_N$ far from the true distribution $\mathbb{F}$. Thus, the SAA model \eqref{SSP-two} may be unreliable and exhibit poor out-of-sample performance. As in \cite{esfahani2018data}, a data-driven approach is adopted to address the distribution uncertainty in this work. We assume that $\mathbb{F}$ belongs to an ambiguity set $\mathcal{F}_N$ including all distributions within $\epsilon_N$-distance from the empirical distribution $\mathbb{F}_N$. Here $\epsilon_N$ indicates the confidence in $\mathbb{F}_N$, e.g., the larger the $\epsilon_N$, the lower the confidence. Since the true distribution $\mathbb{F}$ is generally continuous and the empirical distribution $\mathbb{F}_N$ is discrete, the 1-Wasserstein metric \cite{ambrosio2013user} is adopted to measure their distance and consequently a $1$-Wasserstein ball $\mathcal{F}_N$ is obtained. Then we are interested in the worst-case second-stage cost over $\mathcal{F}_N$, i.e., \begin{equation} \begin{aligned} \label{ADRO-Two} \beta(\bm{x})=\sup_{\mathbb{F}\in \mathcal{F}_N} \mathbb{E}_{\mathbb{F}}[Q(\bm{x},\bm{\xi})], \end{aligned} \end{equation} and the DR two-stage linear program is formulated as \begin{equation} \begin{aligned} \label{primal} \minimize_{\bm{x} \in \mathcal{X}} \ \ \bm{c}^T\bm{x} + \beta(\bm{x}). \end{aligned} \end{equation} To evaluate an optimal solution, we also derive the worst-case distribution $\mathbb{F}^*$ that achieves the worst-case second-stage cost, i.e., \begin{equation} \label{worst-dis} \beta(\bm{x})=\sup_{\mathbb{F}\in \mathcal{F}_N} \mathbb{E}_{\mathbb{F}}[Q(\bm{x},\bm{\xi})] = \mathbb{E}_{\mathbb{F}^*}[Q(\bm{x},\bm{\xi})]. \end{equation} \subsection{Ambiguity Set via the $1$-Wasserstein Metric} We introduce the $r$-Wasserstein metric below. \begin{defi}(\cite{ambrosio2013user}) Let $d(\bm{\xi}^1,\bm{\xi}^2)=\|\bm{\xi}^1-\bm{\xi}^2\|_p$ be the $l_p$-norm of $\bm{\xi}^1-\bm{\xi}^2$ on $\mathbb{R}^n$ and $(\Xi,d)$ be a Polish metric space. Given a pair of distributions $\mathbb{F}_1\in \mathcal{M}(\Xi)$ and $\mathbb{F}_2\in \mathcal{M}(\Xi)$, where $\mathcal{M}(\Xi)$ is a set containing all distributions supported on $\Xi$, the $r$-Wasserstein metric $W^r$: $\mathcal{M}(\Xi) \times \mathcal{M}(\Xi) \rightarrow \mathbb{R}_+$ is defined as \begin{equation}\label{defdis} \begin{aligned} & W^r(\mathbb{F}_1,\mathbb{F}_2)= \inf\left\{\left(\int_{\Xi^2} d(\bm{\xi}^1,\bm{\xi}^2)^r K(\mathrm{d}\bm{\xi}^1,\mathrm{d}\bm{\xi}^2)\right)^{1/r}\right.:\\ &\left.\int_{\Xi} K(\bm{\xi}^1,\mathrm{d}\bm{\xi}^2)=\mathbb{F}_1(\bm{\xi}^1), \int_{\Xi} K(\mathrm{d}\bm{\xi}^1,\bm{\xi}^2)=\mathbb{F}_2(\bm{\xi}^2) \right\}, \end{aligned} \end{equation} where $r\ge 1$ and $K$ is a joint distribution with its marginal distributions being $\mathbb{F}_1$ and $\mathbb{F}_2$.
\end{defi} Without sacrificing much modeling power and to obtain a real metric \cite{ambrosio2013user}, we need the following requirement on the set $\mathcal{M}(\Xi)$. \begin{assum} \label{M-set-assum} For any distribution $\mathbb{F} \in \mathcal{M}(\Xi)$, it holds that $$\int_{\Xi}{\| {\bm{\xi}}\|}^r_p \mathbb{F}(\mathrm{d}{\bm{\xi}})<\infty.$$ \end{assum} Different from \cite{hanasusanto2018conic} and \cite{xie2019tractable}, we adopt the 1-Wasserstein metric and the $l_2$-norm, i.e., $r=1$ and $p=2$ in \eqref{defdis}, to construct the ambiguity ball $\mathcal{F}_N$, \begin{equation} \label{Wset} \mathcal{F}_N=\{\mathbb{F}\in \mathcal{M}(\Xi):W^1(\mathbb{F}_N,\mathbb{F})\le \epsilon_N\}, \end{equation} where $\epsilon_N > 0$ is the ball radius, i.e., $\mathcal{F}_N$ is the set of distributions within $\epsilon_N$-distance from $\mathbb{F}_N$. \subsection{Comparisons with the state-of-the-art methods} In \cite{bertsimas2018adaptive}, the ambiguity set of the DR two-stage linear programs is defined as a set of distributions with specified first- and second-order moment constraints. \cite{hanasusanto2018conic} considers DR two-stage linear programs of the form \eqref{primal} with 2-Wasserstein balls, i.e., $r = p = 2$ in \eqref{defdis}, and $Q(\bm{x},\bm{\xi})$ is defined as \begin{equation} \begin{aligned} \label{Kuhn} \begin{array}{ll} Q(\bm{x},\bm{\xi}) = &\min \ \ (Q\bm{\xi}+\bm{q})^T\bm{y} \\ &\sta \ \ T(\bm{x})\bm{\xi}+h(\bm{x})\le B\bm{y} \end{array} \end{aligned} \end{equation} where $T(\cdot)$ and $h(\cdot)$ are two affine functions. In \cite{xie2019tractable}, the DR two-stage program is defined via the $\infty$-Wasserstein metric, i.e., $r = \infty$ and $p=1, \infty$ in \eqref{defdis}, with the uncertainty only in the objective function or the constraints separately, i.e., $T(\bm{x})$ or $Q$ in \eqref{Kuhn} is set to $0$, respectively. Comparisons with those state-of-the-art models are summarized as follows: \begin{itemize} \item \textbf{Model differences:} Clearly, $Q(\bm{x},\bm{\xi})$ in \eqref{LO-Two} of this work and \cite{bertsimas2018adaptive} is different from \eqref{Kuhn} in \cite{hanasusanto2018conic} and \cite{xie2019tractable}. Our model is motivated by a wide range of real applications, see e.g. Examples \ref{exmp-f} and \ref{exmp-c}. Note that this ``minor'' difference may require a completely different solution approach. \item \textbf{Solution approaches:} \cite{hanasusanto2018conic} derives co-positive programs to approximate their NP-hard DR two-stage model. \cite{xie2019tractable} reformulates the model as a computationally demanding mixed-integer problem. \cite{bertsimas2018adaptive} approximates its model by linear decision rule techniques. In this work, we {\em{equivalently}} reformulate our model with distribution uncertainty only in the objective as an SOCP problem and design an SOCP-based constraint generation algorithm for the problem with distribution uncertainty only in constraints. \item \textbf{Approximation gaps:} There is no approximation gap in \cite{hanasusanto2018conic} and \cite{bertsimas2018adaptive}, under the condition that for any $\bm{t} \in \mathbb{R}^k$, there exists a solution $\bm{y}$ to the inequality $B\bm{y} \ge \bm{t}$ (aka {\em complete recourse}). In this work, the zero-gap condition in Assumption \ref{rel-com} (aka {\em relatively complete recourse}) is weaker and satisfied by numerous real application models \cite{birge2011introduction}.
As explicitly stated in \cite{bertsimas2018adaptive}, ``there are also problems that would generally not satisfy complete recourse, such as a production planning problem where a manager determines a production plan today to satisfy all uncertain demands for tomorrow instead of incurring penalty''; see Example \ref{exmp-c}, which satisfies relatively complete recourse. \item \textbf{The worst-case distribution:} In sharp contrast to those state-of-the-art models, this work derives the distribution attaining the worst-case second-stage cost with distribution uncertainty in either the objective function or the constraints. \end{itemize} \section{Uncertainty in the Objective Function} \label{uino} We first consider the distribution uncertainty only in the objective function of \eqref{LO-Two} via the following form \begin{equation} \begin{aligned} \label{Q_un_o} Q(\bm{x},\bm{\xi}) = \min \ \ &\bm{z}(\bm{\xi})^T\bm{y} \\ \sta \ \ &A\bm{x}+B\bm{y} \ge \bm{b}\\ & \bm{y} \in \mathbb{R}^{m}_+, \end{aligned} \end{equation} where $\bm{z}(\bm{\xi})$ is defined in \eqref{uncertainc} of Section \ref{cla-pro}. We convert the problem in \eqref{primal} with $Q(\bm{x},\bm{\xi})$ given by \eqref{Q_un_o} over the 1-Wasserstein ball $\mathcal{F}_N$ to an SOCP problem, which can be solved efficiently by general-purpose commercial-grade solvers such as CPLEX. \begin{theo} \label{theo1} Under Assumptions \ref{rel-com}--\ref{M-set-assum}, the worst-case $\beta(\bm{x})$ with $Q(\bm{x},\bm{\xi})$ in \eqref{Q_un_o} over the 1-Wasserstein ball $\mathcal{F}_N$ is equivalent to the optimal value of an SOCP problem \begin{equation} \begin{aligned} \label{beta_dual_f} \beta(\bm{x}) = \inf \ \ &\left\{\lambda\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s_i\right\}\\ {\rm s.t.}\ \ & \lambda \ge \Vert {Z\bm{y}} \Vert_2 \\ & s_i \ge \bm{z}_0^T\bm{y}+\bm{y}^TZ^T\widehat{\bm{\xi}}^i, \ \forall i\in [N]\\ & A\bm{x}+B\bm{y} \ge \bm{b}, \ \ \bm{y}\ge \bm{0}, \end{aligned} \end{equation} where $Z^T = [\bm{z}_1,\dots,\bm{z}_m]$. Moreover, the associated DR problem (\ref{primal}) is equivalent to the following SOCP problem \begin{equation} \begin{aligned} \label{whole-problem_f} \minimize_{\bm{x} \in \mathcal{X}} ~~~ &\left\{\bm{c}^T\bm{x} + \lambda\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s_i\right\} \\ \st~~& \lambda \ge \Vert {Z\bm{y}} \Vert_2 \\ & s_i \ge \bm{z}_0^T\bm{y} + \bm{y}^TZ^T\widehat{\bm{\xi}}^i, \ \forall i\in [N]\\ & A\bm{x}+B\bm{y} \ge \bm{b}, \ \bm{y}\ge \bm{0}.\\ \end{aligned} \end{equation} \end{theo} \begin{proof} For any feasible first-stage decision vector $\bm{x}$, $\beta(\bm{x})$ over the 1-Wasserstein ball can be obtained by solving a conic linear program \begin{equation} \begin{aligned} \label{beta-linear} \beta(\bm{x}) =\sup \ \ & \sum\limits_{i=1}^{N}\int_{\Xi}Q(\bm{x},\bm{\xi})K(\mathrm{d}\bm{\xi},\widehat{\bm{\xi}}^i) \\ \sta \ \ &\int_{\Xi}K(\mathrm{d}\bm{\xi},\widehat{\bm{\xi}}^i)=\frac{1}{N}, \forall i\in [N]\\ &\int_{\Xi}\sum\limits_{i=1}^{N}d(\bm{\xi},\widehat{\bm{\xi}}^i)K(\mathrm{d}\bm{\xi},\widehat{\bm{\xi}}^i)\le \epsilon_N. \end{aligned} \end{equation} The Lagrange dual function for \eqref{beta-linear} is represented as \begin{align*} &g(\lambda,\bm{s}) \\ &= \sup_{K \ge 0} \left\{\int_{\Xi} \sum\limits_{i=1}^{N}\left(Q(\bm{x},\bm{\xi})-s_i -\lambda d({\bm{\xi}},{\widehat{\bm{\xi}}}^i)\right) K(\mathrm{d}{\bm{\xi}},{\widehat{\bm{\xi}}}^i)\right\} \\ &~~+\frac{1}{N}\sum\limits_{i=1}^{N}{s_i}+ \lambda \epsilon_N.
\end{align*} Consequently, the dual problem of \eqref{beta-linear} is given as \begin{align} \beta(\bm{x}) =\inf \ \ & \lambda\epsilon_N + \frac{1}{N}\sum\limits_{i=1}^{N}s_i \label{beta_dual1_1} \\ \sta \ & \lambda \ge 0 \nonumber\\ &\hspace*{-0.1in} Q(\bm{x},\bm{\xi}) - \lambda d(\bm{\xi},\widehat{\bm{\xi}}^i) \le s_i, \forall i \in [N], ~\bm{\xi}\in \Xi \label{beta_dual1_b_1} . \end{align} Since $\epsilon_N > 0$, $K = \mathbb{F}_N \times \mathbb{F}_N$ is a strictly feasible solution to \eqref{beta-linear}; hence the Slater condition for strong duality between the primal problem \eqref{beta-linear} and its dual problem \eqref{beta_dual1_1} is satisfied \cite{Shapiro2001On}. The constraints in \eqref{beta_dual1_b_1} require a feasible second-stage solution $\widehat{\bm{y}}$ that guarantees the feasibility of the following inequality \begin{equation*} \begin{aligned} \bm{z}(\bm{\xi})^T\widehat{\bm{y}}- \lambda d(\bm{\xi},\widehat{\bm{\xi}}^i) \le s_i, \forall i \in [N],~\bm{\xi}\in \Xi. \end{aligned} \end{equation*} Note that Assumption \ref{rel-com} ensures the existence of such a $\widehat{\bm{y}}$. Hence, \eqref{beta_dual1_b_1} can be expressed as \begin{equation} \begin{aligned} s_i \ge \bm{z}(\bm{\xi})^T\widehat{\bm{y}}- \lambda d(\bm{\xi},\widehat{\bm{\xi}}^i),\ \ \forall i \in [N], ~\bm{\xi}\in \Xi. \end{aligned} \end{equation} Since \begin{equation*} \bm{z}(\bm{\xi})^T\widehat{\bm{y}} = \left(\bm{z}_0+\sum_{i=1}^{m}\xi_i\bm{z}_i\right)^T\widehat{\bm{y}}=\bm{z}_0^T\widehat{\bm{y}}+\bm{\xi}^TZ\widehat{\bm{y}}, \end{equation*} it follows that \begin{equation*} \begin{aligned} &\sup_{\bm{\xi}} \left\{\bm{z}(\bm{\xi})^T\widehat{\bm{y}}-\lambda\Vert \bm{\xi}-\widehat{\bm{\xi}}^i\Vert_2 \right\}\\ &=\sup_{\bm{\xi}} \left\{ \bm{z}_0^T\widehat{\bm{y}}+\bm{\xi}^TZ\widehat{\bm{y}}-\lambda\Vert \bm{\xi}-\widehat{\bm{\xi}}^i\Vert_2\right\} \\ &=\left\{ \begin{array}{ll} \bm{z}_0^T\widehat{\bm{y}}+\widehat{\bm{y}}^TZ^T\widehat{\bm{\xi}}^i, &\text{if} \ {\| Z\widehat{\bm{y}} \|}_2 \le \lambda \\ +\infty, &\text{if} \ {\| Z\widehat{\bm{y}} \|}_2 > \lambda \\ \end{array} \right. \end{aligned} \end{equation*} where the last equality follows from Lemma 1 in \cite{wang2020wasserstein}. Consequently, \eqref{beta_dual1_b_1} admits the equivalent form \begin{equation*} \left\{ \begin{aligned} & s_i \ge \bm{z}_0^T\widehat{\bm{y}}+\widehat{\bm{y}}^TZ^T\widehat{\bm{\xi}}^i, ~\forall i \in [N], \\ &\lambda \ge {\| Z\widehat{\bm{y}} \|}_2. \end{aligned} \right. \end{equation*} Inserting the above into \eqref{beta_dual1_b_1} leads to the equivalence of \eqref{beta_dual_f} and \eqref{ADRO-Two}. Hence, the two-stage problem \eqref{primal} can be equivalently reformulated as the SOCP problem \eqref{whole-problem_f}. \end{proof} Theorem \ref{theo1} shows that the optimization program \eqref{primal} can be reformulated as a tractable SOCP problem. Furthermore, different $l_p$-norms in \eqref{defdis} lead to different equivalent forms of the DR two-stage problem; see Table \ref{form} for details, where LP denotes linear programming.
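To make the reformulation in Theorem \ref{theo1} concrete, the following minimal sketch (not part of the paper's experiments) assembles \eqref{whole-problem_f} with the CVXPY modeling package; the dimensions, the randomly generated data, and the box constraint standing in for the feasible set $\mathcal{X}$ are all illustrative assumptions.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Illustrative data; all shapes and values are placeholders.
rng = np.random.default_rng(0)
N, m, n_x, k = 50, 4, 3, 5            # samples, dim(y), dim(x), rows of b
xi_hat = rng.normal(size=(N, m))      # empirical samples \hat{xi}^i
z0 = rng.normal(size=m)               # nominal second-stage cost z_0
Z = rng.normal(size=(m, m))           # Z^T = [z_1, ..., z_m]
A = rng.normal(size=(k, n_x))
B = rng.normal(size=(k, m))
b = -np.ones(k)                       # keeps (x, y) = 0 feasible
c = rng.normal(size=n_x)
eps_N = 0.1                           # 1-Wasserstein ball radius

x = cp.Variable(n_x)
y = cp.Variable(m, nonneg=True)
lam = cp.Variable(nonneg=True)
s = cp.Variable(N)

cons = [lam >= cp.norm(Z @ y, 2),     # second-order cone constraint
        A @ x + B @ y >= b,
        cp.norm(x, "inf") <= 1]       # box stand-in for the set X
# epigraph constraints: s_i >= z_0^T y + y^T Z^T \hat{xi}^i
cons += [s[i] >= z0 @ y + xi_hat[i] @ (Z @ y) for i in range(N)]

prob = cp.Problem(cp.Minimize(c @ x + lam * eps_N + cp.sum(s) / N), cons)
prob.solve()                          # any SOCP-capable solver, e.g. CPLEX
print(prob.value, x.value)
\end{verbatim}
Note that a single second-stage variable $\bm{y}$ is shared by all $N$ epigraph constraints, exactly as in \eqref{whole-problem_f}.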
\renewcommand{\arraystretch}{1.2} \begin{table}[htb] \centering \caption{Equivalent problems of our DR problem, where $p$ represents the $l_p$-norm in \eqref{defdis}.} \begin{tabu} to 0.48\textwidth{|X[1.5,c]| X[1,c]| X[1,c]| X[1.5,c]| X[3,c]|} \hline Norm&$p=1$&$p=2$&$p=\infty$&Otherwise\\ \hline Problem&LP&SOCP&LP&Convex Program\\ \hline \end{tabu} \label{form} \end{table} \section{Uncertainty in the Constraints} \label{uinc} In this section we consider the distribution uncertainty only in the constraints of \eqref{LO-Two}, i.e., \begin{equation} \begin{aligned} \label{Q_un_cons} Q(\bm{x},\bm{\xi}) = \min \ \ &\bm{z}^T\bm{y} \\ \sta\ \ &A(\bm{\xi})\bm{x}+B\bm{y} \ge \bm{b}(\bm{\xi})\\ & \bm{y} \in \mathbb{R}^{m}_+, \end{aligned} \end{equation} where $A(\bm{\xi})$ and $\bm{b}(\bm{\xi})$ are defined in \eqref{uncertainc} of Section \ref{cla-pro}. \subsection{Reformulation of the DR Problem}\label{CGP} We first prove the NP-hardness of the problem \eqref{primal} with $Q(\bm{x},\bm{\xi})$ given in \eqref{Q_un_cons}. \begin{theo} \label{theo2} Under Assumptions \ref{rel-com}-\ref{M-set-assum}, the worst-case $\beta(\bm{x})$ with $Q(\bm{x},\bm{\xi})$ in \eqref{Q_un_cons} over the 1-Wasserstein ball $\mathcal{F}_N$ can be computed by solving the NP-hard problem \begin{align} \beta(\bm{x}) = \inf \ \ &\left\{\lambda\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s_i \right\}\label{beta_dual_c} \\ {\rm s.t.} \ \ & s_i \ge ({C\bm{p}})^T \widehat{\bm{\xi}}^i + \bm{p}^T(\bm{b}_0-A_0\bm{x}) \label{wh_LP_con}\\ &\lambda \ge \| {C\bm{p}} \|_2, ~\forall i\in [N], ~\bm{p}\in\mathcal{P}, \label{wp1} \end{align} where \begin{equation} \label{matrix-c} C = [\bm{b}_1-A_1{\bm{x}},\dots,\bm{b}_{m}-A_{m}{\bm{x}}]^T \end{equation} and $\mathcal{P}$ is the polyhedron given by \begin{equation} \label{poly} \mathcal{P} = \{\bm{p}\in \mathbb{R}^k_+:B^T\bm{p}\le\bm{z}\}. \end{equation} \end{theo} \begin{proof} Strong duality still holds for $\beta(\bm{x})$, which can be rewritten as \begin{align} \beta(\bm{x}) =\inf \ \ & \lambda\epsilon_N + \frac{1}{N}\sum\limits_{i=1}^{N}s_i \label{beta_dual1_c} \\ \sta \ & \lambda \ge 0 \nonumber \\ &\hspace*{-0.1in} Q(\bm{x},\bm{\xi}) - \lambda d(\bm{\xi},\widehat{\bm{\xi}}^i) \le s_i, \forall i \in [N], ~\bm{\xi}\in \Xi. \label{beta_dual1_c_1} \end{align} By the strong duality of the LP problem, $Q(\bm{x},\bm{\xi})$ in \eqref{Q_un_cons} is equivalent to \begin{equation} \begin{aligned} \label{Qx-dual1} Q(\bm{x},\bm{\xi}) =\max \ \ & \bm{p}^T(\bm{b}(\bm{\xi})-A(\bm{\xi})\bm{x}) \\ \sta \ \ & \bm{z}\ge B^T\bm{p} \\ & \bm{p} \ge 0. \end{aligned} \end{equation} Then the constraints in \eqref{beta_dual1_c_1} can be expressed as \begin{equation} \begin{aligned} \label{beta_dual1_b_2} \hspace{-.3cm}s_i \ge \bm{p}^T\left(\bm{b}(\bm{\xi})-A(\bm{\xi})\bm{x}\right)-\lambda d(\bm{\xi},\widehat{\bm{\xi}}^i),\forall \bm{\xi}\in \Xi, \bm{p}\in \mathcal{P}. \end{aligned} \end{equation} Furthermore, the right-hand side of \eqref{beta_dual1_b_2} is expressed as \begin{equation*} \begin{aligned} &\sup_{ \bm{\xi}}\left\{\bm{p}^T\left(\bm{b}(\bm{\xi})-A(\bm{\xi})\bm{x}\right)-\lambda d(\bm{\xi},\widehat{\bm{\xi}}^i)\right\} \\ &=\sup_{ \bm{\xi}}\left\{(C\bm{p})^T\bm{\xi} + \bm{p}^T(\bm{b}_0-A_0\bm{x})-\lambda d(\bm{\xi},\widehat{\bm{\xi}}^i)\right\}\\ &=\left\{ \begin{array}{ll} ({C\bm{p}})^T\widehat{\bm{\xi}}^i +\bm{p}^T(\bm{b}_0-A_0\bm{x}), &\text{if} \ {\| {C\bm{p}} \|}_2 \le \lambda \\ +\infty, &\text{if} \ {\|{C\bm{p}} \|}_2 > \lambda, \\ \end{array} \right.
\end{aligned} \end{equation*} where $C$ is defined in \eqref{matrix-c} and the second equality follows from Lemma 1 in \cite{wang2020wasserstein}. Consequently, \eqref{beta_dual1_c_1} is equivalent to \begin{equation*} \left\{ \begin{array}{ll} s_i \ge ({C\bm{p}})^T\widehat{\bm{\xi}}^i +\bm{p}^T(\bm{b}_0-A_0\bm{x}), &\forall i \in [N], ~\bm{p}\in \mathcal{P} \\ \lambda \ge {\| {C\bm{p}} \|}_2, &\forall \bm{p}\in \mathcal{P}. \end{array} \right. \end{equation*} Thus, $\beta(\bm{x})$ in \eqref{ADRO-Two} is reformulated as \eqref{beta_dual_c}. The constraint \eqref{wp1} in \eqref{beta_dual_c} can be expressed as $$ \lambda \ge \max_{\bm{p} \in \mathcal{P}}\| {C\bm{p}} \|_2. $$ Since the norm maximization problem over a polyhedron is NP-complete \cite{bodlaender1990computational}, checking the feasibility of constraint \eqref{wp1} is NP-hard. This completes the proof. \end{proof} Theorem \ref{theo2} immediately implies the NP-hardness of the problem in \eqref{primal}. If the extreme point set $\mathcal{E}$ of the polyhedron $\mathcal{P}$ is explicitly known, the problem \eqref{primal} can be reformulated as a solvable SOCP problem. \begin{coro} \label{sepcial-case} Suppose that Assumptions \ref{rel-com}-\ref{M-set-assum} hold and that the extreme point set $\mathcal{E}$ of the polyhedron $\mathcal{P}$ in \eqref{poly} is known. Then the 1-Wasserstein problem \eqref{primal} with $Q(\bm{x},\bm{\xi})$ in \eqref{Q_un_cons} is equivalent to the SOCP problem \begin{equation} \label{whole-problem_c_s} \begin{aligned} \minimize_{\bm{x} \in \mathcal{X}} ~~~ &\left\{\bm{c}^T\bm{x}+\lambda\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s_i \right\}\\ \st ~~ & s_i \ge ({C\bm{p}})^T \widehat{\bm{\xi}}^i + \bm{p}^T(\bm{b}_0-A_0\bm{x}), \\ & \lambda \ge \| {C\bm{p}} \|_2, \ \ \forall i\in [N], ~\bm{p} \in \mathcal{E}.\\ \end{aligned} \end{equation} \end{coro} \begin{proof} Since the LP problem \eqref{Qx-dual1} attains its optimal value at an extreme point of its feasible set $\mathcal{P}$, it holds that \begin{equation*} \begin{aligned} Q(\bm{x},\bm{\xi}) =\max_{\bm{p} \in \mathcal{E}} \ \bm{p}^T(\bm{b}(\bm{\xi})-A(\bm{\xi})\bm{x}). \end{aligned} \end{equation*} Then the constraints in \eqref{wp1} and \eqref{wh_LP_con} can be explicitly expressed as \begin{equation*} \left\{ \begin{array}{ll} s_i \ge (C\bm{p})^T\widehat{\bm{\xi}}^i +\bm{p}^T(\bm{b}_0-A_0\bm{x}), &\forall i \in [N], ~ \bm{p} \in \mathcal{E}, \\ \lambda \ge {\|C\bm{p} \|}_2, &\forall \bm{p} \in \mathcal{E}, \end{array} \right. \end{equation*} which leads to the equivalence of \eqref{whole-problem_c_s} and \eqref{primal}. This completes the proof. \end{proof} Corollary \ref{sepcial-case} shows that we can solve the DR two-stage problem by explicitly enumerating the extreme points of the polyhedron $\mathcal{P}$. Motivated by this, we design an algorithm to approximately solve the NP-hard DR two-stage problem via a constraint generation approach. \subsection{Approximately Solving the DR Two-stage Problem with Uncertainty in Constraints} \label{sol2uc} In this subsection, we propose a constraint generation algorithm to solve \eqref{primal}. By Corollary \ref{sepcial-case}, the DR problem can be efficiently solved given all extreme points of $\mathcal{P}$. Since direct enumeration of all extreme points is computationally demanding, we instead gradually select sets of ``good'' extreme points by solving a sequence of second-stage problems $\beta(\bm{x})$. In particular, we utilize a master-subproblem framework to approximately solve \eqref{primal}.
In the master problem (MP), we find an optimal solution under a selected subset of extreme points. Then a subproblem (SuP) is solved to obtain a better subset of extreme points. We add these points to the subset in the MP as feasibility cuts. Note that the optimal values of the MP and SuP provide lower and upper bounds for \eqref{primal}, respectively. Both the lower and upper bounds converge, and a good solution to \eqref{primal} can be obtained. The algorithm based on such an MP-SuP framework is given in the sequel. By Corollary \ref{sepcial-case}, the MP is an SOCP problem given as \begin{align} \minimize_{\bm{x} \in \mathcal{X}} \ \ &\left\{\bm{c}^T\bm{x}+\lambda\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s_i \right\} \label{MP} \\ \st \ & s_i \ge ({C\bm{p}})^T \widehat{\bm{\xi}}^i + \bm{p}^T(\bm{b}_0-A_0\bm{x}), \nonumber\\ & \lambda \ge \| {C\bm{p}} \|_2, \ \ \forall i\in [N], ~\bm{p} \in \mathcal{E}_s,\nonumber \end{align} where $\mathcal{E}_s$ is a given subset of extreme points of $\mathcal{P}$. After obtaining an optimal solution $\bm{x}^m$ of the MP, the SuP is derived as follows \begin{align} \beta(\bm{x}^m)=\min_{\lambda^s, s^s_i} \ \ &\left\{\lambda^s\epsilon_N+\frac{1}{N}\sum\limits_{i=1}^{N}s^s_i\right\} \label{SP} \\ \sta\ \ \ & s^s_i \ge ({C\bm{p}})^T \widehat{\bm{\xi}}^i + \bm{p}^T(\bm{b}_0-A_0{\bm{x}^m}), \nonumber\\ & \lambda^s \ge \Vert {C\bm{p}} \Vert_2, \ \ \forall i\in [N], ~\bm{p}\in \mathcal{P}. \nonumber \end{align} A weak condition is needed to obtain a good solution of the SuP. \begin{assum} \label{poly-bound} The polyhedron $\mathcal{P}=\{\bm{p}\in \mathbb{R}^k_+:B^T\bm{p}\le\bm{z}\}$ is nonempty and bounded. \end{assum} The decision variables $\lambda^s$ and $\bm{s}^s$ in \eqref{SP} are completely decoupled, and hence we can find their optimal values separately via the following two steps. \begin{enumerate} \item An optimal solution $\bm{s}^s$ to the SuP is obtained by solving a group of linear programs, i.e., \begin{equation} \begin{aligned} \label{sub_lp} s^s_i =\max \ \ & (C\bm{p})^T\widehat{\bm{\xi}}^i+\bm{p}^T(\bm{b}_0-A_0\bm{x}^m) \\ \sta \ \ & \bm{p} \in \mathcal{P}. \\ \end{aligned} \end{equation} \item An optimal $\lambda^s$ is obtained by solving a norm maximization problem, i.e., \begin{equation} \begin{aligned} \label{sub_norm} \lambda^s = &\max \ \ \| C\bm{p} \|_2 \\ &\sta \ \ \bm{p} \in \mathcal{P}. \end{aligned} \end{equation} \end{enumerate} A sequence of optimal solutions $\{\bm{p}_i^*\}_{i=1}^N$ to \eqref{sub_lp} can be added to the extreme point subset $\mathcal{E}_s$ in the MP, since the LP problem \eqref{sub_lp} attains its optimal value at an extreme point of the feasible region $\mathcal{P}$. To solve the non-convex norm maximization problem, we adopt the consensus alternating direction method of multipliers (ADMM) \cite{huang2016consensus}. In particular, \eqref{sub_norm} is reformulated in a consensus form via $m$ auxiliary variables $\{\bm{g}_1,\dots,\bm{g}_m\}$, i.e., \begin{equation} \begin{aligned} \label{sub_norm_cons} -(\lambda^s)^2 = &\min \ \ -\bm{p}^TC^TC\bm{p} \\ &\ \sta \ \ \bm{b}^T_i\bm{g}_i \le z_i, \bm{g}_i\ge \bm{0}\\ & \ \ \ \ \ \ \ \ \bm{g}_i = \bm{p},~ \forall i \in [m], \end{aligned} \end{equation} where $\bm{b}_i$ is the $i$-th column of $B$. Algorithm \ref{algo_norm} provides the detailed consensus-ADMM algorithm. We omit its convergence proof for brevity; it can be found in \cite{huang2016consensus}. \begin{algorithm}[t!]
\caption{The consensus-ADMM for \eqref{sub_norm_cons}} \label{algo_norm} \begin{algorithmic}[1] \REQUIRE {Matrices $B,C$, vectors $\bm{z},\bm{g}_i$ and $\bm{u}_i$, penalty $\rho$, tolerance $\tau$} \ENSURE {Optimal solution $\bm{p}^*$ and optimal value $\lambda^s$} \STATE {Initialize $\bm{g}_i$ and $\bm{u}_i$} \REPEAT \STATE $\bm{p} \leftarrow\left(\frac{-C^TC}{\rho}+mI\right)^{-1}\sum_{i=1}^{m}(\bm{g}_i+\bm{u}_i)$ \FOR{each $i \in [m]$} \STATE { $\bm{g}_i\leftarrow \arg \min_{\bm{g}} \|\bm{g}-\bm{p}+\bm{u}_i\|^2$\\ $\ \ \ \ \ \ \st \ \bm{b}^T_i\bm{g} \le z_i,~\bm{g}\ge \bm{0}$\\ $\bm{u}_i \leftarrow \bm{g}_i+\bm{u}_i-\bm{p}$ } \ENDFOR \UNTIL{The successive difference of $\bm{p}$ is smaller than $\tau$} \STATE{Return $\bm{p}^* \leftarrow \bm{p}$ and $\lambda^s \leftarrow \|C\bm{p}^*\|_2$} \end{algorithmic} \end{algorithm} By Assumption \ref{poly-bound}, a solution $\bm{p}^*$ to \eqref{sub_norm} that is an extreme point of the polyhedron $\mathcal{P}$ is ensured to exist and is then added to the subset $\mathcal{E}_s$ \cite{bodlaender1990computational}. \begin{algorithm}[t!] \caption{Solving the robust program \eqref{primal}} \label{algo_whole} \begin{algorithmic}[1] \REQUIRE{A set of extreme points, $UB = +\infty$, $LB = -\infty$, $k=0$}\\ \ENSURE{Optimal solution $\bm{x}^*$} \REPEAT \STATE{Add extreme points to $\mathcal{E}_s$ in \eqref{MP} and set $k = k+1$} \STATE{Solve \eqref{MP} to obtain an optimal solution $\{{\bm{x}}_k,{\bm{s}}_k,{\lambda}_k\}$ and set $$LB = \bm{c}^T\bm{x}_{k} + {\lambda}_k\epsilon_N + \frac{1}{N}\sum_{i=1}^{N}s_{ki}$$} \STATE {Solve \eqref{SP} to obtain an optimal solution $\{\bm{s}^s_{k},\lambda^s_k\}$ and extreme points $\{\bm{p}_k^{i}\}_{i=1}^N \cup \{\bm{p}_k\}$, and set $$UB = \min\{UB, \bm{c}^T\bm{x}_{k} + \lambda^s _k\epsilon_N+ \frac{1}{N}\sum_{i=1}^{N}s_{ki}^s\}$$} \UNTIL{$UB - LB \le \varepsilon$} \STATE{Return $\bm{x}^* \leftarrow \bm{x}_k$} \end{algorithmic} \end{algorithm} We provide the MP-SuP based algorithm in Algorithm \ref{algo_whole}. Theorem \ref{theo3} shows that Algorithm \ref{algo_whole} terminates in a finite number of iterations. \begin{theo} \label{theo3} Under Assumption \ref{poly-bound}, Algorithm \ref{algo_whole} generates an optimal solution of \eqref{primal} in $O(|\mathcal{E}|)$ iterations. \end{theo} \begin{proof} Let $\{\bm{x}_k,\lambda_k,\bm{s}_k\}$ be an optimal solution of the MP in the $k$-th iteration and $\{\lambda_k^{s},\bm{s}_k^{s}\}$ be an optimal solution of the SuP with $\{\bm{p}_k^{i}\}_{i=1}^N \cup \{\bm{p}_k\}$ being the associated extreme points. We show that $\{\bm{p}_k^{i}\}_{i=1}^N \cup \{\bm{p}_k\} \subseteq \mathcal{E}_s$ implies the convergence of Algorithm \ref{algo_whole}, i.e., $LB = UB$. Step 4 in Algorithm \ref{algo_whole} implies that \begin{equation*} UB \le \bm{c}^T\bm{x}_k + \frac{1}{N}\sum_{i=1}^{N}s_{ki}^{s}+\epsilon_N\lambda_k^{s}. \end{equation*} Since $\{\bm{p}_k^{i}\}_{i=1}^N \cup \{\bm{p}_k\} \subseteq \mathcal{E}_s$, the MP in the $k$-th iteration is identical to that in the $(k-1)$-th iteration. Thus, $\bm{x}_k$ is an optimal solution to the $(k-1)$-th MP as well. By Step 3 in Algorithm \ref{algo_whole}, we find that $LB \ge \bm{c}^T\bm{x}_k + \epsilon_N\lambda_k+\sum_{i=1}^{N}\frac{s_{ki}}{N} \ge \bm{c}^T\bm{x}_k +\epsilon_N\lambda_k^{s}+ \sum_{i=1}^{N}\frac{s_{ki}^{s}}{N},$ where the last inequality holds due to the fact that $\{\bm{p}_k^{i}\}_{i=1}^N \cup \{\bm{p}_k\} \subseteq \mathcal{E}_s$ and hence the related constraints were added to the MP before the $(k-1)$-th iteration. Consequently, we have $UB = LB$.
The convergence in $O(|\mathcal{E}|)$ iterations follows immediately from the finiteness of the number of extreme points of the polyhedron $\mathcal{P}$. \end{proof} \section{The Worst-case Distribution and the Asymptotic Consistency}\label{wos-cas} \subsection{The Worst-case Distribution} In this subsection we derive the distribution achieving the worst-case $\beta(\bm{x})$ in \eqref{ADRO-Two} of Section \ref{TDROLOP} for any feasible vector $\bm{x} \in \mathcal{X}$. \begin{lem} \label{thm_dis} For any feasible first-stage decision vector $\bm{x}$, it holds that \begin{equation} \label{worst-case-B} \beta(\bm{x})=\sup_{\tilde{\bm{\xi}}\in \mathcal{B}}\left\{\frac{1}{N}\sum_{i=1}^{N}Q(\bm{x},\bm{\xi}^{(i)})\right\}, \end{equation} where $$ \label{setB} \mathcal{B} = \left\{(\bm{\xi}^{(1)},\dots,\bm{\xi}^{(N)}) ~|~ \frac{1}{N}\sum_{i=1}^{N}d(\bm{\xi}^{(i)},\widehat{\bm{\xi}}^{i})\le\epsilon_N,\ \bm{\xi}^{(i)}\in \Xi\right\}. $$ \end{lem} \begin{proof} Given a feasible solution $\bm{x}$, it follows from Lemma 2 in \cite{wang2020wasserstein} that \begin{equation} \label{s1} \begin{aligned} \sup_{\tilde{\bm{\xi}}\in \mathcal{B}}\left\{\frac{1}{N}\sum_{i=1}^{N}Q(\bm{x},\bm{\xi}^{(i)})\right\} \le \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \left\{Q(\bm{x},\bm{\xi})\right\}. \end{aligned} \end{equation} By the equivalence between $\beta(\bm{x})$ and \eqref{beta_dual1_1}, for any $\varepsilon > 0$ there exists $\{\tilde{\bm{\xi}}^{(i)}\}_{i\in[N]}\subseteq \Xi$ such that \begin{equation}\label{contradict} \begin{aligned} & \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \left\{Q(\bm{x},\bm{\xi})\right\}-\varepsilon \\ & < \inf_{\lambda \ge 0}\left\{\lambda\epsilon_N + \frac{1}{N}\sum_{i=1}^{N}\left\{Q(\bm{x},\tilde{\bm{\xi}}^{(i)})-\lambda d(\tilde{\bm{\xi}}^{(i)},\widehat{\bm{\xi}}^{i})\right\} \right\}. \end{aligned} \end{equation} If $\left(\tilde{\bm{\xi}}^{(1)},\dots,\tilde{\bm{\xi}}^{(N)}\right) \notin \mathcal{B}$, then for any $\lambda > 0$ it follows that \begin{equation*} \lambda\left\{\epsilon_N-\frac{1}{N} \sum_{i=1}^{N}d(\tilde{\bm{\xi}}^{(i)},\widehat{\bm{\xi}}^i)\right\} < 0. \end{equation*} Increasing $\lambda$ to $+\infty$ in \eqref{contradict} would drive $\sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \{Q(\bm{x},\bm{\xi})\}$ to $-\infty$, which contradicts the fact that $$\sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \{Q(\bm{x},\bm{\xi})\} \ge \mathbb{E}_{\mathbb{F}_N} \{Q(\bm{x},\bm{\xi})\} > -\infty, $$ where the second inequality follows from Assumption \ref{rel-com}. Thus, $\left(\tilde{\bm{\xi}}^{(1)},\dots, \tilde{\bm{\xi}}^{(N)}\right) \in \mathcal{B}$. By Lemma 2 in \cite{wang2020wasserstein}, it holds that \begin{equation*} \begin{aligned} \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \{Q(\bm{x},\bm{\xi})\}-\varepsilon < \sup_{\tilde{\bm{\xi}}\in \mathcal{B}}\left\{\frac{1}{N}\sum_{i=1}^{N}\left\{Q(\bm{x},{\bm{\xi}}^{(i)})\right\} \right\}. \end{aligned} \end{equation*} Letting $\varepsilon$ tend to zero yields \begin{equation*} \begin{aligned} \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \{Q(\bm{x},\bm{\xi})\}\le\sup_{\tilde{\bm{\xi}}\in \mathcal{B}}\left\{\frac{1}{N}\sum_{i=1}^{N}Q(\bm{x},\bm{\xi}^{(i)})\right\}. \end{aligned} \end{equation*} Together with \eqref{s1}, \eqref{worst-case-B} holds. \end{proof} Since $Q(\bm{x},\bm{\xi})$ is continuous with respect to $\bm{\xi}$ and $\mathcal{B}$ is a compact set, \eqref{worst-case-B} admits an optimal solution. A worst-case distribution is then explicitly derived below.
\begin{theo} \label{worst-case-dis-1} For any $\bm{x} \in \mathcal{X}$, let $\bm{\xi}_{\bm{x}}=\left(\bm{\xi}^{(1)}_{\bm{x}},\dots,\bm{\xi}^{(N)}_{\bm{x}}\right)$ be an optimal solution to \eqref{worst-case-B}. Then the distribution \begin{equation*} \begin{aligned} \mathbb{F}^*_{\bm{x}} = \frac{1}{N}\sum_{i=1}^{N}\delta_{\bm{\xi}^{(i)}_{\bm{x}}} \end{aligned} \end{equation*} achieves the worst-case second-stage cost, i.e., \begin{equation*} \begin{aligned} \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F}\left\{Q(\bm{x},{\bm{\xi}})\right\} = \mathbb{E}_{\mathbb{F}^*_{\bm{x}}}\left\{Q(\bm{x},{\bm{\xi}})\right\}. \end{aligned} \end{equation*} \end{theo} \begin{proof} Obviously, the distribution \begin{equation*} \begin{aligned} \Pi_{\bm{x}} = \frac{1}{N}\sum_{i=1}^{N}\delta_{(\bm{\xi}^{(i)}_{\bm{x}},\widehat{\bm{\xi}}^i)} \end{aligned} \end{equation*} is a joint distribution of $\mathbb{F}_N$ and $\mathbb{F}^*_{\bm{x}}$. Then it holds that \begin{equation*} \begin{aligned} W^1(\mathbb{F}_N,\mathbb{F}^*_{\bm{x}}) &\le \int\left\|{\bm{\xi}}-{\bm{\xi}}^{\prime}\right\|_p \Pi_{\bm{x}}\left(\mathrm{d} {\bm{\xi}}, \mathrm{d} {\bm{\xi}}^{\prime}\right)\\ &=\frac{1}{N} \sum_{i=1}^{N} \| {\bm{\xi}}_{\bm{x}}^{(i)}-\widehat{\bm{\xi}}^{i}\|_p \le \epsilon_N, \end{aligned} \end{equation*} where the first inequality follows directly from the definition of the 1-Wasserstein metric and the last inequality follows from the fact that $\left(\bm{\xi}^{(1)}_{\bm{x}},\dots,\bm{\xi}^{(N)}_{\bm{x}}\right) \in \mathcal{B}$. Hence, $\mathcal{F}_N$ includes the distribution $\mathbb{F}^*_{\bm{x}}$. Thus, \begin{equation*} \begin{aligned} \sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \left\{Q(\bm{x},{\bm{\xi}})\right\}&\ge\mathbb{E}_{\mathbb{F}^*_{\bm{x}}}\left\{Q(\bm{x},{\bm{\xi}})\right\} =\frac{1}{N}\sum\limits_{i=1}^{N}Q(\bm{x}, \bm{\xi}^{(i)}_{\bm{x}})\\ &=\sup_{\mathbb{F}\in \mathcal{F}_N}\mathbb{E}_\mathbb{F} \left\{Q(\bm{x},\bm{\xi})\right\}, \end{aligned} \end{equation*} where the last equality follows from Lemma \ref{thm_dis}. Hence, $\mathbb{F}^*_{\bm{x}}$ is the desired worst-case distribution. \end{proof} \subsection{The Asymptotic Consistency} This subsection studies the asymptotic consistency of the DR problem \eqref{primal} under a mild assumption. \begin{assum} \label{light-dis} For the true distribution $\mathbb{F}$, there exists a positive constant $c$ such that \begin{equation*} \int_{\Xi}\exp(\|\bm{\xi}\|^c_2)\mathbb{F}(\mathrm{d}\bm{\xi})< \infty. \end{equation*} \end{assum} Under Assumptions \ref{rel-com}-\ref{light-dis}, we formalize the asymptotic consistency of the proposed DR problem below. \begin{theo} \label{asy_con} Suppose that Assumptions \ref{rel-com}-\ref{light-dis} hold, and select $\beta_N \in (0,1)$ such that $\sum_{N=1}^{\infty}\beta_N < \infty$. Let the 1-Wasserstein ball radius be \begin{equation*} \epsilon_N(\beta_N)=\left\{\begin{array}{ll} \left(\frac{\log(c_1\beta_N^{-1})}{c_2N}\right)^{1/\max\{n,2\}}, & \rm{if} \ \ N \ge \frac{\log(c_1\beta_N^{-1})}{c_2} \\ \left(\frac{\log(c_1\beta_N^{-1})}{c_2N}\right)^{1/c}, & \rm{if} \ \ N < \frac{\log(c_1\beta_N^{-1})}{c_2} \end{array} \right. \end{equation*} where $c_1$ and $c_2$ are positive constants related to the constant $c$ in Assumption \ref{light-dis}. Then the DR problem \eqref{primal} asymptotically converges to the stochastic problem \eqref{classical} almost surely as the sample size increases to infinity.
\end{theo} \begin{proof} For the problem with distribution uncertainty only in the objective function, the relatively complete recourse implies that $Q(\bm{x},\bm{\xi})$ is feasible and finite. Then there exists a finite $\bm{y}$ such that $|~Q(\bm{x},\bm{\xi})~|=|~\bm{z}_0^T\bm{y}+(Z\bm{y})^T\bm{\xi}~| \le \|\bm{z}_0\|_2\|\bm{y}\|_2+\|Z\bm{y}\|_2\|\bm{\xi}\|_2 \le L(1+\| {\bm{\xi}} \|_2)$ for any $\bm{x} \in \mathcal{X}$ and ${\bm{\xi}} \in \Xi$, where $L \ge 0$ is a constant. For the case of the distribution uncertainty only in constraints, the strong duality of the LP problem shows that $Q(\bm{x},\bm{\xi}) = (C\tilde{\bm{p}})^T\bm{\xi}+\tilde{\bm{p}}^T(\bm{b}_0-A_0\bm{x})$, where $C$ is given in \eqref{matrix-c} of Section \ref{CGP} and $\tilde{\bm{p}}$ is an extreme point of the polyhedron $\mathcal{P}$. Assumption \ref{poly-bound} implies that $\|\tilde{\bm{p}}\|$ is bounded, and hence there exists a positive constant $L$ such that $|~Q(\bm{x},\bm{\xi})~| \le \|C\tilde{\bm{p}}\|_2\|\bm{\xi}\|_2 + |\tilde{\bm{p}}^T(\bm{b}_0-A_0\bm{x})| \le L(1+\| {\bm{\xi}} \|_2)$ for $\bm{x} \in \mathcal{X}$ and ${\bm{\xi}} \in \Xi$. Finally, the asymptotic consistency of our model follows from Theorem $3.6$ in \cite{esfahani2018data}. \end{proof} \section{Simulation}\label{sim} This section conducts experiments to evaluate the performance of the proposed model and the constraint generation algorithm. All experiments are performed on a 64-bit PC with an Intel Core i5-7500 CPU at 3.4 GHz and 8 GB RAM. The CPLEX 12.6 optimizer is used to solve the optimization programs. \subsection{The Two-stage Portfolio Program} This subsection is devoted to the two-stage portfolio program with uncertainty only in the objective function, as stated in Example \ref{exmp-f}; see \cite{ling2017robust} for details. \subsubsection{Problem Specification} Consider a portfolio of four assets: (1) Dow Jones Industrial Average Index, (2) Dow Jones Transportation Average Index, (3) Dow Jones Composite Average Index and (4) Dow Jones Utility Average. The daily returns of the above assets from January 2, 2011 to December 31, 2018 are collected from the RESSET database (http://www.resset.cn). Since the first-stage return $\bm{c}$ is unknown in our simulation, we select the data from January 2, 2011 to December 31, 2016 to approximate it by the SAA method, i.e., $\bm{c}=\frac{1}{N}\sum_{i=1}^N \widehat{\bm{\xi}}^1_i$, where $\widehat{\bm{\xi}}^1_i$ is the $i$-th sample of the first-stage return. \subsubsection{Impact of the 1-Wasserstein Radius and the Sample Size} In this subsection, experiments are conducted to test the impact of the 1-Wasserstein radius $\epsilon_N$ and the sample size $N$ on the out-of-sample performance of our model. The out-of-sample performance is measured by the loss of the proposed model on {\em new} samples, i.e., \begin{equation} \label{Out_P} \bm{c}^T\bm{x} + \mathbb{E}_{\mathbb{F}}\{Q(\bm{x},\bm{\xi})\}. \end{equation} We are unable to exactly calculate \eqref{Out_P} due to the unknown true distribution $\mathbb{F}$. Instead, we randomly choose $300$ test samples from the dataset to approximate it, i.e., \begin{equation*} \bm{c}^T\bm{x} + \frac{1}{ N_T}\sum\limits_{i=1}^{N_T}Q(\bm{x},\widehat{\bm{\xi}}_T^i), \end{equation*} where $\widehat{\bm{\xi}}_T^i$ is the $i$-th test sample and $N_T$ is the number of test samples. We first test the impact of the 1-Wasserstein radius $\epsilon_N$ on our model. We conduct $200$ independent experiments, and the averaged out-of-sample performance is illustrated in Figure \ref{radius}.
Experimental results show that the out-of-sample performance first improves as the 1-Wasserstein radius increases, and then deteriorates once the radius exceeds a certain value. \begin{figure*}[htbp] \centering \subfigure[ ]{ \label{fig:30} \includegraphics[width=0.3 \textwidth]{20.pdf} } \subfigure[ ]{ \label{fig:100} \includegraphics[width=0.3 \textwidth]{100.pdf} } \subfigure[ ]{ \label{fig:300} \includegraphics[width=0.3 \textwidth]{200.pdf} } \caption{{The averaged out-of-sample performance under sample datasets of different sizes as a function of the 1-Wasserstein radius, estimated by 200 independent simulation runs. (a) $N=20$, (b) $N=100$, (c) $N = 200$.}} \label{radius} \end{figure*} Experiments on different sample sizes are performed as well. The out-of-sample performance averaged over $200$ independent experiments is presented in Figure \ref{fig:Sam}. Theorem \ref{asy_con} is confirmed by the improvement of the out-of-sample performance with the growing sample size. \begin{figure}[htbp] \centering \includegraphics[scale=0.35]{sample.pdf} \caption{The averaged out-of-sample performance as a function of the sample size $N$ over $200$ independent experiments.} \label{fig:Sam} \end{figure} \subsubsection{Comparisons with the State-of-the-art Methods} In this subsection, we compare the proposed 1-Wasserstein DR model (denoted as DRW) with the SAA method and the DR model with the moment-based ambiguity set (denoted as DRM), where the first- and second-order moment uncertainty sets are borrowed from \cite{ling2017robust}. Let $N \in \{20,30,50,100,200,300\}$. Due to the dependence of the radius $\epsilon_N$ on the sample dataset size, we tune it to ensure a good out-of-sample performance. We adopt the percentage difference $$\left(\frac{\text{DR}}{\text{SAA}}-1\right)\times 100\%$$ to compare the out-of-sample performance of these models, where DR denotes the out-of-sample performance of the DR two-stage problem and SAA denotes that of the SAA method. \renewcommand{\arraystretch}{1} \begin{table}[htb] \centering \caption{Percentage differences of the out-of-sample performance (in $\%$) between the DR models and the SAA} \begin{tabular*}{0.48\textwidth} {@{}@{\extracolsep{\fill}}ccccccc@{}} \toprule[1 pt] $N$ & 20 & 30 & 50 & 100 & 200 & 300 \\ \midrule DRW & 1.1 &1.6& 1.7 & 2.1 & 4.1 & 4.8 \\ DRM & -1.3 & -0.7 & 0.7 & 1.5 & 3.6 & 3.5 \\ \bottomrule[1 pt] \end{tabular*} \label{com_out} \end{table} \renewcommand{\arraystretch}{1} \begin{table}[htb] \centering \caption{Averaged computation time (seconds) of different methods} \begin{tabu} to 0.48\textwidth {X[1,c] X[1,c] X[1,c]X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $N$ & 20 & 30 & 50 & 100 & 200 & 300 \\ \midrule DRW & 0.14 & 0.15& 0.15 & 0.17 & 0.16 & 0.19 \\ DRM & 0.12 & 0.14 & 0.14 & 0.16 & 0.15 & 0.16 \\ SAA & 0.13 & 0.15 & 0.16 & 0.17 & 0.16 & 0.16 \\ \bottomrule[1 pt] \end{tabu} \label{com_tim} \end{table} Comparisons in terms of the out-of-sample performance and computation time are presented in Table \ref{com_out} and Table \ref{com_tim}, respectively. A positive value in Table \ref{com_out} implies a better performance of the DR method than the SAA. Table \ref{com_out} indicates that our proposed method achieves the best out-of-sample performance among all models. Importantly, it can also be solved in an acceptable time even under a large sample dataset. \subsection{The Two-stage Material Order Problem} Algorithm \ref{algo_whole} is applied to solve the DR two-stage ordering problem in Example \ref{exmp-c}.
We omit the comparison with the moment-based model since there is no effective method to solve it \cite{ling2017robust}. \subsubsection{Problem Specification} Consider the crude oil order problem for the gasoline and fuel oil supply stated in \cite{kall1994stochastic}. The oil comes from two countries, and the two sources can be viewed as different materials. The coefficients of the material order problem in Example \ref{exmp-c} are set as \begin{flalign} &\bm{c}=[2,3]^T, \bm{d}=[7,12]^T, u = 100, \ \nonumber \\ & A(\bm{\xi})=\left[ \begin{matrix} 2 + \xi_1 & 3 \\ 6 & 3.4 + \xi_2 \\ \end{matrix} \right], \bm{b}(\bm{\xi})=\left[ \begin{matrix} 180 + \xi_3 \\ 162 + \xi_4 \\ \end{matrix} \right],& \nonumber \end{flalign} where $\bm{\xi} \in \mathbb{R}^4$ is a random vector with an unknown distribution and the recourse matrix $B$ is the identity matrix. We assume that $\bm{\xi}$ follows a Gaussian distribution $\mathcal{N}(\bm{\mu},\bm{\Sigma})$ with $ \bm{\mu} =[0,0,0,0]^T$ and $\bm{\Sigma} = \text{Diag}([9,12,0.21,0.16]^T)$, and generate $N$ samples to construct the 1-Wasserstein ball $\mathcal{F}_N$. \subsubsection{Testing the Tightness of the Bounds} We test the tightness of the proposed bounds in the MP and SuP for the optimal function value (O.F.V.) and the first-stage cost over the 1-Wasserstein ball with different radii $\epsilon_N$. Obviously, the extreme points of the set $\mathcal{P}=\{\bm{p} \ge 0: \bm{p}\le \bm{d}\} = \{\bm{p}\in \mathbb{R}^2_+: p_1 \le 7,p_2 \le 12 \}$ are $[0,0]^T, [0,12]^T, [7,0]^T \ \text{and} \ [7,12]^T$. Hence, we can solve \eqref{primal} directly with explicitly known extreme points and compare with Algorithm \ref{algo_whole}. Let $(x^d_1,x^d_2)$ denote the solution obtained via solving \eqref{primal} directly and $(x^{a}_1,x^{a}_2)$ the one obtained by Algorithm \ref{algo_whole}. Table \ref{tab2} indicates that the two methods obtain identical results under different 1-Wasserstein radii. \renewcommand{\arraystretch}{1.1} \begin{table*}[htb] \centering \caption{The optimal solutions under different methods with different 1-Wasserstein ball radii $\epsilon_N$ when the sample size is $N = 500$} \begin{tabu} to 0.8\textwidth{X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $\epsilon_N$ & \ \ 0.01 \ \ & 0.21 & 0.41 \ \ & 0.61 \ \ & 0.81 \ \ & 1\\ \hline $(x^d_1,x^d_2)$ & (42.7,57.2) & (41.2,50.8) &(38.7,41.5) &(36.2,32.4)& (34.7,26.4) &(33.4,22.5) \\ $(x^{a}_1,x^{a}_2)$ & (42.7,57.2) & (41.2,50.8) &(38.7,41.5)&(36.2,32.4)& (34.7,26.4) &(33.4,22.5) \\ \bottomrule[1 pt] \end{tabu} \label{tab2} \end{table*} The O.F.V. and the first-stage cost, compared to those of the method with known extreme points under $500$ samples, are shown in Fig. \ref{bound_per_OFV} and Fig. \ref{bound_per_first}. We observe that both the lower bound and the upper bound are tight, regardless of the radius of the 1-Wasserstein ball. Thus, these bounds can be viewed as a good reference to verify the performance of our algorithm. \begin{figure}[htb] \centering \subfigure[ ]{ \label{bound_per_OFV} \includegraphics[width=0.35 \textwidth]{OFVbound.pdf} } \subfigure[ ]{ \label{bound_per_first} \includegraphics[width=0.35 \textwidth]{Fcostbound.pdf} } \caption{The averaged performance of the proposed bounds for the O.F.V. and the first-stage cost under the 1-Wasserstein ball with different radii.
(a) O.F.V. (b) the first-stage cost.} \end{figure} \renewcommand\arraystretch{0.3} \begin{table}[htb] \centering \caption{The averaged number of extreme points under different sample sizes} \begin{tabu} to 0.48\textwidth{X[1,c] X[1,c] X[1,c] X[1,c]X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $N$ & 10 & 20 & 30 & 50 & 100 & 200 & 300 & 500 & 1000 \\ \midrule Num & 3.68 & 3.74 & 3.98 & 3.96 & 4 & 4 & 4 & 4 & 4 \\ \bottomrule[1 pt] \end{tabu} \label{tab4} \end{table} \renewcommand\arraystretch{0.3} \begin{table}[!htb] \centering \caption{The averaged number of iterations under different sample sizes} \begin{tabu} to 0.48\textwidth{X[1,c] X[1,c] X[1,c] X[1,c]X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $N$ & 10 & 20 & 30 & 50 & 100 & 200 & 300 & 500 & 1000 \\ \midrule Ite & 3.78 & 3.84 & 3.94 & 3.94 & 4 & 4 & 4 & 4 & 4 \\ \bottomrule[1 pt] \end{tabu} \label{ite-num} \end{table} \begin{figure}[!htb] \centering \includegraphics[scale=0.35]{tendency.pdf} \caption{The convergence of the O.F.V. for the two-stage program with $500$ samples.} \label{tendency_1} \end{figure} Fig. \ref{tendency_1} shows the evolution of the upper and lower bounds for the proposed two-stage program in a single experiment. We record the averaged number of extreme points and iterations in Algorithm \ref{algo_whole} under different sample sizes over $100$ independent experiments in Table \ref{tab4} and Table \ref{ite-num}, both of which validate the effectiveness of Algorithm \ref{algo_whole}. \subsubsection{The Test for High Dimension} A direct enumeration of all extreme points of the polyhedron $\mathcal{P} = \{\bm{p}\in \mathbb{R}^M_+:B^T\bm{p}\le\bm{d}\}$ with a large $M$ is computationally demanding \cite{khachiyan2009generating}. In this subsection, we consider a high-dimensional problem to verify the efficiency of Algorithm \ref{algo_whole}, i.e., \begin{flalign*} & u = 1000, ~\bm{x} \in \mathbb{R}^{20}, ~A(\bm \xi) \in \mathbb{R}^{20\times 20}, ~ \bm{b}(\bm \xi)\in \mathbb{R}^{20}, \\ & \bm{c} = [2,3,1,4,5,2,4,3,4,2,5,4,4,2,6,2,4,3,1,2]^T, \\ & \bm{d} = [7,9,4,6,8,5,6,8,10,7,12,10,6,7,9,5,11,10,5,8]^T, \end{flalign*} where $A(\bm \xi)$ and $\bm{b}(\bm \xi)$ are affinely dependent on the random vector $\bm \xi$ and $B$ is the identity matrix. Fig. \ref{bound_hper_OFV} and Fig. \ref{bound_hper_first} report the averaged performance of our proposed bounds for the O.F.V. and the first-stage cost under different 1-Wasserstein radii $\epsilon_N$ when the sample size is $N = 500$. As in the previous subsection, the proposed bounds are tight as well. \begin{figure}[htb] \centering \subfigure[ ]{ \label{bound_hper_OFV} \includegraphics[width=0.35 \textwidth]{OFVbound_h.pdf} } \subfigure[ ]{ \label{bound_hper_first} \includegraphics[width=0.35 \textwidth]{Fcostbound_h.pdf} } \caption{The averaged performance of the proposed bounds for the O.F.V. and the first-stage cost under the 1-Wasserstein ball with different radii. (a) O.F.V. (b) the first-stage cost.} \end{figure} We record the averaged computation time, the number of extreme points and the number of iterations in Algorithm \ref{algo_whole} over $100$ independent simulations as the sample size $N$ varies from $10$ to $1000$ in Table \ref{tab5}, Table \ref{tab6} and Table \ref{tab7}, respectively. The convergence of the proposed algorithm in a single experiment is also illustrated in Fig. \ref{tendency}.
\renewcommand{\arraystretch}{0.3} \begin{table}[h] \centering \caption{The averaged computation time (seconds) under different sample sizes} \begin{tabu} to 0.48\textwidth{X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $N$ & 10 & 20 & 30 & 50 & 100 & 200 & 300 & 500 & 1000 \\ \midrule Time & 10.9 & 11.6 & 11.6 & 11.6 & 12.3 & 13.9 & 17.2 & 23.3 & 36.2 \\ \bottomrule[1 pt] \end{tabu} \label{tab5} \end{table} \renewcommand{\arraystretch}{0.3} \begin{table}[htb] \centering \caption{The averaged number of extreme points under different sample sizes} \begin{tabu} to 0.48\textwidth{X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1.2,c] X[1.2,c] X[1.2,c] X[1.2,c]} \toprule[1 pt] $N$ & 10 & 20 & 30 & 50 & 100 & 200 & 300 & 500 & 1000 \\ \midrule Num & 35.2 & 46.5 & 49.2 & 60.3 & 72.4 & 100.1 & 123.7 & 156.4 & 181.1 \\ \bottomrule[1 pt] \end{tabu} \label{tab6} \end{table} \renewcommand{\arraystretch}{0.3} \begin{table}[!htb] \centering \caption{The averaged number of iterations under different sample sizes} \begin{tabu} to 0.48\textwidth{X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c] X[1,c]} \toprule[1 pt] $N$ & 10 & 20 & 30 & 50 & 100 & 200 & 300 & 500 & 1000 \\ \midrule Ite & 10.28 & 9.58 & 9.32 & 8.92 & 8.80 & 8.54 & 8.58 & 8.16 & 8.46\\ \bottomrule[1 pt] \end{tabu} \label{tab7} \end{table} \begin{figure}[!htb] \centering \includegraphics[scale=0.35]{tendency_h.pdf} \caption{The convergence of the O.F.V. for the two-stage program with $500$ samples.} \label{tendency} \end{figure} Results show that Algorithm \ref{algo_whole} converges in a reasonable time even for high-dimensional problems with large sample datasets. The number of extreme points required by our algorithm is far smaller than the total number of extreme points. \section{Conclusion} \label{con} We have proposed a novel SOCP approach to solve data-driven DR two-stage linear programs over 1-Wasserstein balls. The model with distribution uncertainty in the objective function is reformulated as a solvable SOCP problem. While the DR model over the moment-based ambiguity set is generally unsolvable, we propose a constraint generation algorithm with provable convergence to approximately solve the NP-hard model with distribution uncertainty only in constraints. We also explicitly derive a distribution achieving the worst-case cost. Numerical results validate the good out-of-sample performance of our model and the high efficiency of the proposed algorithm. \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 1.533203, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} \label{sec:introduction} The penetration of advanced machine learning (ML) methods into physics has led to far-reaching advances in both theoretical predictions and experiments, yielding exciting and interesting new results \cite{carleo2017solving, raissi2019physics, iten2020discovering, choo2020fermionic, gentile2021learning, karniadakis2021physics}. Some of the most interesting progress has come from the solution of inverse problems \cite{tarantola2005inverse} aimed at finding novel experimental setups that produce a desired physical observable \cite{krenn2016automated, melnikov2018active, tamayo2018automatic, malkiel2018plasmonic, molesky2018inverse, yao2019intelligent, minkov2020inverse, jagtap2020conservative, krenn2020computer, colburn2021inverse, wiecha2021deep}. Nevertheless, there are still physical phenomena, particularly in quantum physics, that have yet to benefit from this progress. This may be attributed at least partially to the lack of appropriate computational tools for modelling complex quantum systems, and in some cases to the stochastic dynamics involved in modelling quantum phenomena such as spontaneous processes and fluctuations of quantum fields \cite{sinatra2002truncated, brambilla2004simultaneous, corney2015non, lewis2016approximate, drummond2017higher, weinbub2018recent, trajtenberg2020simulating}. One important branch of quantum physics that might benefit significantly from the adoption of inverse design algorithms is quantum optics \cite{scully1999quantum, garrison2008quantum}. Quantum optics has proven to be an invaluable resource for the realization of many quantum technologies, such as quantum communication \cite{ursin2007entanglement,gisin2007quantum,vallone2015experimental,chen2021integrated}, quantum computing \cite{knill2001scheme,kok2007linear,spring2013boson,zhong2020quantum}, and cryptography \cite{bennett1992experimental,bennett2020quantum,sit2017high,liao2017satellite,pirandola2020advances}. A prominent reason for this is the availability of sources for generating nonclassical light \cite{garrison2008quantum}, which are mainly based on nonlinear interactions \cite{boyd2020nonlinear}. The most prevalent of these processes is spontaneous parametric down-conversion (SPDC) in second-order nonlinear $\chi^{(2)}$ materials \cite{SPDCreview2018}. The nonlinear coefficient of ferroelectric materials can be modulated by electric field poling in two out of the three crystal axes \cite{berger1998nonlinear,broderick2000hexagonally}. Recently, this capability was extended to enable modulation in all three axes using focused laser beams \cite{xu2018three,wei2018experimental,liu2019nonlinear,wei2019efficient,imbrock2020waveguide,liu2020nonlinear,zhang2021nonlinear, chen2021quasi, arie2021storing}. The 3D nonlinear photonic crystals (NLPCs) offer a promising new avenue for shaping and controlling arbitrary quantum correlations between photons. This new technology introduces additional degrees of freedom for tailoring the quantum state of structured photon-pairs \cite{walborn2012generalized,malik2016multi,dosseva2016shaping,kovlakov2017spatial,erhard2018twisted,PhysRevA.98.060301,cui2019wave, erhard2020advances, boucher2021engineering}. Solving the inverse quantum optical design would make it possible to find the optimal physical parameters of the system, such as the pump beam profile and the nonlinear 2D and 3D volume holograms embedded in the NLPC, that yield the desired quantum state.
These capabilities can be used for the generation of maximally entangled photonic states of arbitrary dimensionality that allow stronger violation of generalized Bell's inequalities, the encoding of larger capacities of quantum information on light \cite{brandt2020high}, and improved security in quantum key distribution \cite{krenn2015twisted,sit2017high,sit2018quantum}. If we wish to employ ML methods for problems in quantum optics, it is crucial to have a good physical model of the quantum optical process in question and integrate it into the algorithm itself \cite{raissi2019physics, de2019deep, PhysRevLett.124.010508, jagtap2020conservative, pang2020physics, SIRIGNANO2020109811, karniadakis2021physics, pakravan2021solving}. The model should ideally encompass the relevant conservation laws, physical principles, and phenomenological behaviors. Such physically-constrained models will ensure convergence to physically realizable solutions, reduce the parameter search, improve the predictive accuracy and statistical efficiency of the model, and allow for faster training with improved generalization. However, there are obstacles to incorporating ML into quantum optics while still properly capturing the physics. In order to account for general optical medium geometry, diffraction, dispersion, and non-perturbative effects in non-classical light generation (such as SPDC), accurate simulation schemes must be employed that go beyond the scope of the more frequently used analytic calculations \cite{torres2003quantum, walborn2012generalized, SPDCreview2018,kolobov1999spatial}. However, such models – which are more appealing for the inverse design of complex optical media – are often stochastic \cite{kolobov1999spatial, PhysRevA.69.023802, trajtenberg2020simulating}. The stochastic nature of the problem, also prominent in other physical fields such as those which employ Monte Carlo simulations \cite{binder1993monte}, makes modern descent-based algorithms difficult to employ. In this paper, we solve the inverse design problem of generating structured and entangled photon pairs in quantum optics using tailored nonlinear interactions in the SPDC process. The learned interaction parameters can then be used to predict the generation of the desired quantum state or correlations between structured photon-pairs in future experiments, as illustrated in Fig. \ref{fig:illustration}. Our \emph{SPDCinv} model captures the full dynamics (the governing dynamics derived from Heisenberg's equations of motion), takes into account high-order interaction effects, and can learn every parameter of the quantum optical process. We show how to make an inherently stochastic description of SPDC fully differentiable, making it amenable to descent-based methods of optimization. Furthermore, we use a split-step Fourier (SSF) method \cite{stoffa1990split} to solve our forward model. To the best of our knowledge, this is the first time that a differentiable model has been integrated with SSF -- a feature which is also relevant for many other inverse problems in optics and quantum mechanics (it combines diffraction, or more generally, propagation in space, to solve nonlinear partial differential equations, like the nonlinear Schr\"{o}dinger equation). \textcolor{black}{Our forward model has already been validated against a number of published experimental results, detailed in Refs.
\cite{trajtenberg2020simulating, DiDomenico:21, DiDomenico:Talk}, for the cases of structured pump beams \cite{kovlakov2017spatial, PhysRevA.98.060301, mair2001entanglement} and structured crystals \cite{trajtenberg2020simulating, DiDomenico:21, DiDomenico:Talk}. In this paper, we further validate it against other experiments \cite{kovlakov2017spatial,PhysRevA.69.023802}, obtaining very good agreement both for the on-axis spatial mode correlations and for the quantum state tomography of the generated state. Moreover, we demonstrate the full process of inverse design to obtain the correct relations between crystal length and pump waist, as achieved in the experiments \cite{kovlakov2017spatial}.} We use our model to discover the optimal quantum volume holograms (embedded in 2D \cite{berger1998nonlinear, broderick2000hexagonally, chowdhury2001experimental, ellenbogen2009nonlinear,bloch2012twisting,shapira2012two,hong2014nonlinear,zhu2020high} or 3D NLPCs \cite{xu2018three,wei2018experimental,liu2019nonlinear,wei2019efficient,imbrock2020waveguide,liu2020nonlinear,zhang2021nonlinear, chen2021quasi, arie2021storing}) and the pump structures that generate desired nontrivial quantum correlations (coincidence rate counts) and quantum states (bi-photon density matrices). We demonstrate the generation of high-dimensional maximally entangled photon-pairs and show how the generated quantum state and its correlations can be controlled entirely optically using shaped pump fields interacting with the initially-learned 3D NLPC hologram -- a feature that can find applications in qudit-based quantum key distribution and quantum information protocols that work at high switching rates. Our \emph{SPDCinv} model has been made available at \cite{jax-spdc_inv}.\footnote{A preliminary short abstract of this work was presented at the CLEO 2021 conference \cite{rozenberg2021inverse}.} \begin{figure}[ht] \centering \includegraphics[width=1\textwidth]{figures/Fig1.png} \caption{ \small An illustration of the inverse design problem: Given the desired coincidence rate counts, $G^{(2)}$, and density matrix, $\rho$, the \emph{SPDCinv} algorithm solves the inverse design problem and extracts the optimal quantum volume hologram, embedded in 3D NLPC, and the complex pump beam structure, for generating the desired quantum state of the spontaneously emitted structured photon-pairs.} \label{fig:illustration} \end{figure} \section{Algorithmic Design} \label{sec:algorithm} \subsection{Methodology} The procedure for the study of inverse problems in physical systems can be divided into the following three steps \cite{tarantola2005inverse, aster2018parameter}: (i) identifying a minimal set of model parameters whose values completely characterize the system; (ii) identifying the physical laws and dynamics governing the system; and (iii) using actual results to infer the values of the model parameters. Given a desired observable-set, $\mathbb{O}_d$, describing the quantum state or any related features, our goal is to find the unknown physical parameters, $\Lambda$, that characterize the system, \begin{equation} \label{eq:inverse_model} \Lambda = \mathbf{I}(\mathbb{O}_d), \end{equation} where $\mathbf{I}(\cdot)$ is our inverse solver. We physically constrain our \emph{SPDCinv} model by integrating it with the interaction dynamics of the SPDC process. In this manner, the model captures the interaction properties, such as diffraction, space-dependent nonlinear coupling, vacuum fluctuations and non-perturbative effects.
The dynamics of SPDC is prescribed by the Heisenberg equations of motion: $i\hbar \partial_t \hat{E} = [\hat{E},\hat{H}_{\mathrm{SPDC}}]$, for the field operators $\hat{E}$ evolving under the SPDC Hamiltonian $\hat{H}_{\mathrm{SPDC}}$, where $\hbar$ is the reduced Planck's constant. To solve the dynamics, it is enough to consider two pairs of c-number coupled wave equations along the 3D interaction medium, in terms of the field operator matrix elements \cite{trajtenberg2020simulating}, given as: \begin{equation} \begin{split} i\frac{\partial E_{i}^{out}}{\partial \zeta} = -\frac{\nabla^2_\perp}{2k_i} E_{i}^{out}+\kappa_ie^{-i\Delta k \zeta}(E_{s}^{vac})^* \\ i\frac{\partial E_{i}^{vac}}{\partial \zeta} = -\frac{\nabla^2_\perp}{2k_i}E_{i}^{vac}+\kappa_ie^{-i\Delta k \zeta}(E_{s}^{out})^* \\ i\frac{\partial E_{s}^{out}}{\partial \zeta} = -\frac{\nabla^2_\perp}{2k_s}E_{s}^{out}+\kappa_se^{-i\Delta k \zeta}(E_{i}^{vac})^*\\ i\frac{\partial E_{s}^{vac}}{\partial \zeta} = -\frac{\nabla^2_\perp}{2k_s}E_{s}^{vac}+\kappa_se^{-i\Delta k \zeta}(E_{i}^{out})^* \label{eq:waveeq} \end{split} \end{equation} where $\zeta=z$ is the coordinate along the direction of propagation. In the above equation: $E_{j}^{out},E_{j}^{vac}$ ($j=i,s$ for the idler and signal fields respectively) are the output and vacuum field amplitudes of the generated photons and vacuum fluctuations; $\nabla^2_\perp$ is the transverse Laplacian operator; $k_j$ is the wavenumber; $\kappa_{j} (\textbf{r}, \zeta)=\frac{\omega_j^2}{c^2 k_j} \chi^{(2)} (\textbf{r}, \zeta) \mathcal{E}_{p}(\mathbf{r})$ is the nonlinear-coupling coefficient, where $\textbf{r}=(x,y)$ is a position on the transverse plane; $\chi^{(2)}(\textbf{r}, \zeta)$ stands for the (spatially varying) second-order susceptibility and $\mathcal{E}_{p}(\mathbf{r})$ is the (spatially varying) pump field envelope; $c$ is the speed of light in vacuum; and $\Delta k=k_p-k_s-k_i$ is the phase mismatch. The quantum vacuum noise is emulated by initializing a large number of instances of Gaussian noise in both the idler and signal fields (denoted as $E_i^{vac}$ and $E_s^{vac}$ in Eq. \ref{eq:waveeq}), creating the physical vacuum field uncertainty. We summarize Eq. \ref{eq:waveeq} in a compact fashion by denoting all of the fields as $E = (E_{i}^{out}, E_{i}^{vac}, E_{s}^{out}, E_{s}^{vac})$, and writing \begin{equation} i\frac{\partial E}{\partial \zeta} = \mathcal{L}(\Lambda)E \label{eq:waveeq_compact} \end{equation} where $\mathcal{L}$ is the operator given by the righthand side of Eq. \ref{eq:waveeq}, and $\Lambda$ represents the list of physical parameters described in the previous exposition. In practice, we will be particularly interested in the pump field $\mathcal{E}_{p}$ and second-order susceptibility $\chi^{(2)}$, that is, $\Lambda = (\mathcal{E}_{p}(\cdot), \chi^{(2)}(\cdot))$, with all other parameters being taken as fixed. However, we note that the formulation which follows is general, and does not depend on the parameters of interest. \textcolor{black}{We emphasize that our model is not semiclassical, but instead fully equivalent to the solution of the quantum Heisenberg equations of motion for the field operators \cite{brambilla2004simultaneous, trajtenberg2020simulating}. 
In our case, we assume the signal and idler fields are initially in the vacuum state (which was rigorously shown to justify the random sampling of vacuum fluctuations \cite{brambilla2004simultaneous, trajtenberg2020simulating}), and employ the fact that the generated multimode squeezed vacuum state belongs to the family of Gaussian states (see Eq. \ref{eq:G2}). Further, we note that similar approaches, for example ones that sample the Wigner function at random to calculate quantum observables, are also employed in quantum optics and condensed matter theory \cite{sinatra2002truncated, corney2015non, lewis2016approximate, drummond2017higher, weinbub2018recent}}. We integrate the fields along the direction of propagation according to Eq. \ref{eq:waveeq}, and solve the coupled wave equations for the large ensemble of quantum vacuum realizations in parallel. We use a time-unfolded version \cite{gregor2010learning} of the SSF method \cite{stoffa1990split, agrawal2001applications} to solve for the propagation along the crystal. Then, we derive the second-order statistics to describe the resulting quantum state; an approach that was validated against experimental results for several cases of shaped pump beams and structured crystals \cite{PhysRevA.69.023802, kovlakov2017spatial, trajtenberg2020simulating, DiDomenico:21, DiDomenico:Talk} \textcolor{black}{(see also section \ref{subsec:validation}).} This strategy facilitates differentiation back through the model and enables application of the latest optimization methods for learning its physical parameters, thereby overcoming issues related to the fundamentally stochastic nature of the model. In what follows, we shall refer to the solution of Eq. \ref{eq:waveeq} (or alternatively Eq. \ref{eq:waveeq_compact}), together with the mapping onto a particular set of observables of interest, denoted as $\mathbb{O}$, as our \textit{forward model}. In particular, we write \begin{equation} \label{eq:forward_model} \mathbb{O} = \mathbf{F}(\Lambda). \end{equation} Given a desired observable set, $\mathbb{O}_d$, the general inverse problem involves finding the physical parameters $\Lambda$ which produce it. We specialize by solving a parameterized version of the inverse problem. In particular, suppose that the physical parameters of interest $\Lambda$ depend upon parameters $\theta$ which specify them, i.e. $\Lambda = \Lambda(\theta)$. Such parameters $\theta$ may, for example, be coefficients of basis expansions; we will see concrete examples shortly. In this case, we solve the inverse problem by solving the optimization problem \begin{equation} \label{eq:optimizer} \theta^* = \min_\theta \mathcal{D}\big( \mathbf{F}(\Lambda(\theta)), \mathbb{O}_d \big) \end{equation} In the above, $\mathcal{D}(\cdot, \cdot)$ is a discrepancy measure between two sets of observables. For example, we may take $\mathcal{D}(\mathbb{O}, \mathbb{O}') = \|\mathbb{O} - \mathbb{O}' \|_\beta$, where $\|\cdot\|_\beta$ is the Euclidean $\beta$-norm; alternatively, if the observables are normalized to unit $1$-norm, then $\mathcal{D}$ can be the Kullback-Leibler divergence. In the case where we are measuring the discrepancy between two density matrices, we may take $\mathcal{D}$ to be the Trace Distance \cite{rana2016trace}. In Eq. \ref{eq:optimizer}, we are therefore trying to minimize the discrepancy between the set of observables given by a particular parameter specification $\theta$, and the desired set of observables $\mathbb{O}_d$.
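As a concrete illustration of the differentiable forward step (not the full \emph{SPDCinv} implementation, which is available at \cite{jax-spdc_inv}), the following minimal JAX sketch propagates the four fields of Eq. \ref{eq:waveeq} through a single SSF slice: a half-step of diffraction in the Fourier domain, a nonlinear-coupling step, and a second half-step of diffraction. The grid variables, the leading batch axis over vacuum realizations, and the first-order Euler coupling step are simplifying assumptions made here for brevity.
\begin{verbatim}
import jax.numpy as jnp
from jax import jit

@jit
def ssf_step(E, kappa_i, kappa_s, k_i, k_s, dk, zeta, dz, kx, ky):
    # E = (E_i_out, E_i_vac, E_s_out, E_s_vac); each field has shape
    # (n_vac, Nx, Ny), with a leading batch axis over vacuum realizations.
    E_i_out, E_i_vac, E_s_out, E_s_vac = E

    def diffract(field, k, step):
        # Linear half-step: multiply by exp(-i |k_perp|^2 step / (2k))
        phase = jnp.exp(-1j * (kx**2 + ky**2) * step / (2 * k))
        return jnp.fft.ifft2(jnp.fft.fft2(field) * phase)

    E_i_out = diffract(E_i_out, k_i, dz / 2)
    E_i_vac = diffract(E_i_vac, k_i, dz / 2)
    E_s_out = diffract(E_s_out, k_s, dz / 2)
    E_s_vac = diffract(E_s_vac, k_s, dz / 2)

    # Nonlinear coupling step (first order in dz): each field is
    # driven by the conjugate of its partner, cf. the coupled
    # wave equations above.
    mm = jnp.exp(-1j * dk * zeta)
    dE_i_out = -1j * dz * kappa_i * mm * jnp.conj(E_s_vac)
    dE_i_vac = -1j * dz * kappa_i * mm * jnp.conj(E_s_out)
    dE_s_out = -1j * dz * kappa_s * mm * jnp.conj(E_i_vac)
    dE_s_vac = -1j * dz * kappa_s * mm * jnp.conj(E_i_out)
    E_i_out, E_i_vac = E_i_out + dE_i_out, E_i_vac + dE_i_vac
    E_s_out, E_s_vac = E_s_out + dE_s_out, E_s_vac + dE_s_vac

    return (diffract(E_i_out, k_i, dz / 2), diffract(E_i_vac, k_i, dz / 2),
            diffract(E_s_out, k_s, dz / 2), diffract(E_s_vac, k_s, dz / 2))
\end{verbatim}
Since every operation above is a JAX primitive, gradients with respect to $\kappa_{i,s}$ (and hence with respect to the pump and crystal parameters that enter them) propagate through the entire crystal by composing such slices.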
The inverse model is then given by \begin{equation} \label{eq:model} \mathbf{I}(\mathbb{O}_d) = \Lambda(\theta^*) \end{equation} In order to solve the optimization problem in Eq. \ref{eq:optimizer}, an approach based on gradient descent may be employed. The key is that the forward model of Eq. \ref{eq:waveeq}, while quite complicated, can be expressed in such a way that it is fully differentiable. As a result, any library which can auto-differentiate a system may be used to compute the relevant gradients, thereby allowing for the solution to the optimization problem in Eq. \ref{eq:optimizer}. In practice, we use JAX \cite{jax2018github}, a Python library designed for high-performance numerical computing and automatic differentiation. Finally, given the solution to the inverse problem, we may run the forward model to compute the observables that actually result from the interaction parameters we have computed, that is \begin{equation} \label{eq:inference} \mathbb{O}_i = \mathbf{F}(\Lambda(\theta^*)) \end{equation} where the subscript $i$ indicates \emph{inference}. The degree to which the inferred observables $\mathbb{O}_i$ match the desired observables $\mathbb{O}_d$ will indicate the quality of the inverse algorithm. The overall algorithm is summarized in Fig. \ref{fig:model_paradigm}. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{figures/algorithm/AlgFlowNew.png} \caption{ \small Description of the \emph{SPDCinv} algorithm in two phases. (1) In the training phase (upper panel), the parameterized version of the inverse design is solved. The model receives as input the desired observables and emits the parameterization of the physical parameters that will produce them, by solving the optimization problem. The learning process is described by applying gradient descent (in orange) to the appropriate discrepancy measure, $\mathcal{D}(\cdot, \cdot)$. (2) In the inference phase (lower panel), the model receives the computed physical parameters and emits the observables. The compact notation of the partial differential equation refers to the solution of the Heisenberg equations, Eq. \ref{eq:waveeq}. The quantum vacuum noise is integrated externally (dashed line).} \label{fig:model_paradigm} \vspace{-0.5cm} \end{figure} \parashort{Interaction Parameters:} We may learn any physical parameters $\Lambda$ of the interaction, e.g. wavelength, temperature profile, poling period, poling profile, etc. In this work, the 2D/3D NLPC structure, $\chi^{(2)}(\textbf{r}, \zeta)$, and pump beam profile, $\mathcal{E}_{p}(\textbf{r})$, are the unknown physical parameters we seek to learn, that is $\Lambda = (\mathcal{E}_{p}(\cdot), \chi^{(2)}(\cdot))$. We parameterize the 2D/3D crystal hologram and pump beam profile by the multi-dimensional parameters $\theta_\chi$ and $\theta_\mathcal{E}$, respectively, such that $\Lambda(\theta) = (\mathcal{E}_{p}(\cdot; \theta_\mathcal{E}), \chi^{(2)}(\cdot; \theta_\chi))$. We now discuss in more detail how this parameterization is performed. The parameters we learn can be as general as we want, subject to technological and physical restrictions. To decrease the dimensionality of the learned parameters and ensure smoother convergence of the inverse problem's solution, the continuous functions of the NLPC holograms and pump profile are represented using a finite set of unknowns.
One way to do this is through an expansion in a set of mutually orthogonal basis functions, which may also change as a function of $\zeta$; the parameters $\theta$ then include the coefficients of the expansion. Examples include the Hermite-Gauss (HG) and Laguerre-Gauss (LG) bases, though many other possibilities exist. These basis functions are often scaled according to a transverse length, which for light beams is usually referred to as the waist size, a term which we adopt hereafter for all basis functions. Learning the waist sizes of each of the basis functions individually adds further degrees of freedom to our model. The exact role of the parameters can be seen by formally writing the NLPC structure and the pump profile as a linear combination of the basis functions: \begin{align} \chi^{(2)}(\textbf{r}, \zeta; \theta_\chi) & = \sum_{n=1}^{N_\chi}{\alpha_\chi^n \Phi_\chi^n(\textbf{r}, \zeta; w_\chi^n)} & \quad \theta_\chi = \left\{ \left(\alpha_\chi^n, w_\chi^n \right) \right\}_{n=1}^{N_\chi} \notag \\ \mathcal{E}_{p}(\textbf{r}; \theta_\mathcal{E}) & = \sum_{n=1}^{N_\mathcal{E}}{\alpha_\mathcal{E}^n \Phi_\mathcal{E}^n(\textbf{r}; w_\mathcal{E}^n)} & \quad \theta_\mathcal{E} = \left\{ \left(\alpha_\mathcal{E}^n, w_\mathcal{E}^n \right) \right\}_{n=1}^{N_\mathcal{E}} \label{eq:parameterized} \end{align} where $\alpha_\chi^n, \alpha_\mathcal{E}^n$ are the learned basis coefficients; $w_\chi^n, w_\mathcal{E}^n$ are the learned basis function waist sizes; and $\Phi_\chi^n, \Phi_\mathcal{E}^n$ are the basis functions. Here, the basis function index $n$ runs over both transverse modal numbers, for example the orbital angular momentum $l$- and radial $p$-indices for LG modes. \subsection{Observables}\label{subsec:observables} The set of desired observables describing the generated quantum state is given by the coincidence rate count, $G^{(2)}$, and density matrix of the bi-photon quantum state, $\rho$, such that in general $\mathbb{O}_d = (G^{(2)}_d, {\rho}_d)$. Their evaluation is achieved by first solving the Heisenberg equations of motion for the SPDC Hamiltonian over a large number of independent realizations of the vacuum noise, projecting the output and noise fields onto a desired orthonormal basis of optical modes, and then taking the ensemble average to obtain first-order correlations \cite{PhysRevA.69.023802, trajtenberg2020simulating}, which (for the signal) are given by $G^{(1)}(q_s,q'_s)=\braket{\psi|a^{\dagger}_{q_s}a_{q'_s}|\psi}$. Here, $\ket{\psi}$ denotes the quantum state, $a$ ($a^{\dagger}$) denotes the photon annihilation (creation) operator, and $q_s$ denotes any quantum number of the signal photon, for example, LG modes, HG modes, etc. Second-order correlations are derived using the fact that the quantum state of SPDC, the squeezed vacuum state \cite{wu1986generation}, belongs to the family of Gaussian states, for which all higher-order correlations can be obtained from the first-order ones \cite{gardiner2004quantum}. The coincidence rate is given by the second-order quantum correlation function, which determines the probability of finding an idler photon in mode $q_i$ and a signal photon in mode $q_s$ \begin{equation} G^{(2)}(q_i,q_s,q_s,q_i)=\braket{\psi|a^{\dagger}_{q_i}a^{\dagger}_{q_s}a_{q_s}a_{q_i}|\psi} \label{eq:G2} \end{equation} To extract the optimal model parameters that generate the desired quantum correlations over a given basis, we solve the optimization problem in Eq. \ref{eq:optimizer}.
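A minimal sketch of such a training loop is given below, assuming a differentiable JAX implementation \texttt{forward} of the forward model that maps $\theta$ to coincidence probabilities; the plain gradient-descent update and the representation of complex expansion coefficients by real arrays are illustrative choices rather than a prescription of our released code:

\begin{verbatim}
# Schematic JAX optimization loop for Eq. (optimizer); `forward` is an
# assumed differentiable implementation of the forward model F.
import jax
import jax.numpy as jnp

def kl_div(p, q, eps=1e-12):
    # Kullback-Leibler divergence between normalized coincidence distributions
    return jnp.sum(p * (jnp.log(p + eps) - jnp.log(q + eps)))

def fit(theta0, G2_desired, forward, lr=1e-2, n_epochs=500):
    # theta0: pytree of real arrays, e.g. real/imaginary parts of the
    # expansion coefficients and the waist sizes of Eq. (parameterized)
    def loss(theta):
        G2 = forward(theta)
        return kl_div(G2_desired, G2 / jnp.sum(G2))
    grad_fn = jax.grad(loss)
    theta = theta0
    for _ in range(n_epochs):
        theta = jax.tree_util.tree_map(lambda t, g: t - lr * g,
                                       theta, grad_fn(theta))
    return theta
\end{verbatim}

Any pytree structure for $\theta$ (expansion coefficients and waist sizes, as in Eq. \ref{eq:parameterized}) passes through \texttt{jax.grad} unchanged.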
Here, $\mathcal{D}(\cdot, \cdot)$ is taken as a typical measure of discrepancy between two probability distributions. For example, we may use the Kullback-Leibler divergence \cite{georgiou2003kullback}, the L1 norm \cite{gine2003bm}, or an ensemble of both. To obtain the full quantum state generated by the SPDC process, we use quantum state tomography (QST) \cite{thew2002qudit, agnew2011tomography, toninelli2019concepts}. Eq. \ref{eq:G2} allows for the calculation of any coincidence measurement performed on the system, on any basis of our choice. Since the process of QST involves a sequence of projective coincidence measurements on different bases, we can readily reconstruct the density matrix, $\rho$, of the entangled two-qudit state, through a series of linear operations. Here, naturally, $\mathcal{D}(\cdot, \cdot)$ (in Eq. \ref{eq:optimizer}) is taken to be the Trace Distance \cite{rana2016trace} -- a metric on the space of density matrices that measures the distinguishability between two states. The tomographic reconstruction is performed using the correlation data collected from the projections of the simulated bi-photon state onto orthogonal as well as mutually unbiased bases (MUBs) \cite{toninelli2019concepts, agnew2011tomography}. The density matrix of the bi-photon system can be written as \begin{equation} \rho = \frac{1}{d^{2}} \sum_{m,n=0}^{d^{2}-1}\rho_{mn}\, \sigma_{m}\otimes\sigma_{n} \end{equation} where $\sigma_{m}$ are the set of generators that span the $d$-dimensional tomography space (for example, Pauli and Gell-Mann matrices for $d=2$ and $3$, respectively). The expansion coefficients $\rho_{mn}$ are found via \begin{equation} \rho_{mn} = \sum_{i,j=0}^{d-1} a_{m}^{i}a_{n}^{j}\braket{\lambda_{m}^{i}\lambda_{n}^{j}| \rho|\lambda_{m}^{i}\lambda_{n}^{j}} \end{equation} with $a_{m}^{i}$ and $|\lambda_{m}^{i}\rangle$ denoting the $i^{th}$ eigenvalue and eigenstate of $\sigma_{m}$, respectively \cite{toninelli2019concepts}. The required projections inside the sum are found in a similar manner to Eq. \ref{eq:G2}, with the pure basis states replaced by the MUBs, when necessary. \section{Results} \label{sec:results} The proposed method can be readily employed to generate desired quantum correlations between SPDC structured photon-pairs. Further, by emulating QST integrated into the learning stage, we can tailor specific, high-dimensional quantum states desirable for photonic quantum information and communication. In this section, we use our algorithm to solve the inverse design problem and extract the optimal quantum volume holograms, embedded in 2D or 3D NLPCs, and the complex pump beam structures for generating desired second-order quantum correlations or density matrices. We let our algorithm learn either the NLPC volume holograms, the complex pump beam profiles, or both. We discover that the quantum state of SPDC photons and their correlations can be all-optically controlled, by first learning the crystal volume holograms with a given pump mode, and then changing the initial pump mode in the inference phase. This active optical control has the advantage of altering the quantum state in a non-trivial manner, while retaining its purity. Further, we find that learning the quantum volume hologram and the pump beam profile simultaneously can improve the accuracy of the generated results, in comparison with the desired state. The \emph{SPDCinv} training phase takes about one hour on four NVIDIA T4 16\,GB GPUs, for all configurations involving 1\,mm-long NLPCs.
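As a concrete illustration of the tomographic reconstruction described in Section \ref{subsec:observables}, the following minimal $d=2$ Python sketch assembles $\rho$ from projection probabilities; the array \texttt{p} is assumed to be obtained from Eq. \ref{eq:G2} with the projected modes, and the function names are illustrative (a schematic example rather than our released implementation):

\begin{verbatim}
# Two-qubit state reconstruction from projective coincidence probabilities.
# sig: generators spanning the d = 2 tomography space (identity and Paulis);
# p[m, i, n, j] = <lam_m^i lam_n^j| rho |lam_m^i lam_n^j> would be measured
# by projecting the simulated bi-photon state onto the eigenstates of sig.
import numpy as np

sig = [np.eye(2),
       np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
eig = [np.linalg.eigh(s) for s in sig]   # eigenvalues a_m^i, eigenstates

def reconstruct_rho(p):
    rho = np.zeros((4, 4), complex)
    for m, (am, _) in enumerate(eig):
        for n, (an, _) in enumerate(eig):
            r_mn = sum(am[i] * an[j] * p[m, i, n, j]
                       for i in range(2) for j in range(2))
            rho += r_mn * np.kron(sig[m], sig[n]) / 4.0
    return rho
\end{verbatim}

The eigenbases of the non-diagonal generators supply exactly the mutually unbiased projections mentioned above.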
\subsection{Model Validation}\label{subsec:validation} \textcolor{black}{ Before we delve into inverse design problems, we first validate our model against published experimental results of SPDC shaping \cite{kovlakov2017spatial, PhysRevA.98.060301}. This comes in addition to the multiple previously presented validations of our model \cite{trajtenberg2020simulating}. Fig. \ref{fig:Kovlakov_PRA} presents the inference stage of our model for recovering the experimental results reported by Kovlakov et al. \cite{PhysRevA.98.060301}. We reproduce the coincidence rate counts for a qutrit state, Fig. \ref{fig:Kovlakov_PRA}a, and a ququint state, Fig. \ref{fig:Kovlakov_PRA}c, in the LG basis, generated by a shaped pump field. To show the capability of our model to simulate the QST procedure, we recover the density matrix of the qutrit state, Fig. \ref{fig:Kovlakov_PRA}b, as reported by Kovlakov et al. \cite{PhysRevA.98.060301}. The resulting quantum states, coincidence rates and pump fields (used to recover the result in inference) are in good agreement with experiments (deviations may arise from detection, OAM projection, and coupling imperfections, as acknowledged by Kovlakov et al. \cite{PhysRevA.98.060301}). Next, we follow another result reported by Kovlakov et al. \cite{kovlakov2017spatial} and let our algorithm learn the optimal pump waist size for generating a pure HG spatial Bell state between structured SPDC photon pairs. Fig. \ref{fig:Kovlakov_PRL} shows the convergence of our learning algorithm towards the optimal pump waist, $w_p=\sqrt{L/k_p}$ \cite{kovlakov2017spatial}, for the case of $L=5\,\mathrm{mm}$. As the learning process progresses, the discrepancy measure $\mathcal{D}(\cdot, \cdot)$ of Eq. \ref{eq:optimizer} decreases until the model reaches convergence. Accordingly, the size of the pump waist converges to the required value \cite{kovlakov2017spatial} and a clear Bell state, $(\ket{0,1}+\exp(i\phi)\ket{1,0})/\sqrt{2}$, is generated.} \begin{figure*}[] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/FigKovlakovPRA_ab.png} \\ \includegraphics[trim={0 9cm 0 0},clip,width=0.95\linewidth]{figures/FigKovlakovPRA_c.png} \end{tabular} \caption{ \small \textcolor{black}{Model validation against experimental results reported by Kovlakov et al. \cite{PhysRevA.98.060301}. \textbf{a} LG qutrit state: coincidence rate counts (left), pump intensity (middle) and phase (right). \textbf{b} Density matrix of the generated qutrit: real (left) and imaginary (right) parts of the density matrix. \textbf{c} LG ququint state: coincidence rate counts (left), pump intensity (middle) and phase (right).}} \label{fig:Kovlakov_PRA} \end{figure*} \begin{figure*}[] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/FigKovlakovPRL_d.png} \end{tabular} \caption{ \small \textcolor{black}{ Model validation against experimental results reported by Kovlakov et al. \cite{kovlakov2017spatial} for shaped correlations corresponding to the Bell state $(\ket{0,1}+\exp(i\phi)\ket{1,0})/\sqrt{2}$. The upper-right figure is the discrepancy measure (Eq. \ref{eq:optimizer}) between the generated coincidence rate counts and the desired one \cite{kovlakov2017spatial} vs. the training epoch number. The only learned physical parameter is the pump waist, and we let our algorithm find its optimal value for generating the desired quantum correlations.
We sample the obtained pump waist along the discrepancy curve (red dots and insets) to see the evolution of the generated coincidence count rates under the optimized pump waist. At convergence, the algorithm obtains the correct pump waist value of $w_p=\sqrt{L/k_p}\approx 13.8\,\mu\mathrm{m}$ for $L=5\,\mathrm{mm}$ for generating a pure HG Bell state.}} \label{fig:Kovlakov_PRL} \end{figure*} \subsection{Shaping arbitrary quantum correlations}\label{subsec:results-G2} First, we let our algorithm learn the physical parameters for desired quantum correlations -- that is, the two-photon coincidence rates -- between structured SPDC photon pairs. The learned parameters are the spatial modes of the crystal volume holograms and pump structure, according to Eq. \ref{eq:parameterized}. We use a type-II SPDC process in a 1\,mm-long KTP NLPC, quasi-phase-matched for on-axis generation of photon pairs at 810\,nm from a 405\,nm pump wave. We assume that the pump beam is linearly polarized along the y direction and that the $\chi^{(2)}$ nonlinear coefficient can attain one of two binary values, $+d_{24}$ and $-d_{24}$. We project the generated photons on either the LG modes with the integer quantum numbers $l,p$, standing for the azimuthal and radial numbers, respectively; or the HG modes, with integer quantum numbers $n,m$, standing for the x- and y-axis mode numbers, respectively. When considering the coincidence rate counts, we post-select either the radial index ($p=0$), in the case of the LG basis, or the y-axis modal number ($m=0$), in the case of the HG basis. The discrepancy measure in Eq. \ref{eq:optimizer} is taken as a weighted ensemble of the Kullback-Leibler divergence and the L1 norm. \paragraph{Laguerre-Gauss basis:} Here, we show all-optical coherent control over quantum correlations of SPDC photons, in the LG basis (Fig. \ref{fig:lg1} depicts the results of this section). We use our algorithm to extract the optimal quantum volume holograms, embedded in 3D NLPCs, for generating the desired coincidence rate counts of maximally-entangled two-photon qubit $\ket{\psi}=(\ket{1,-1}+\exp(i\phi)\ket{-1,1})/\sqrt{2}$ and ququart $\ket{\psi}=(\ket{-2,1}+\exp(i\phi_1)\ket{0,-1}+\exp(i\phi_2)\ket{-1,0}+\exp(i\phi_3)\ket{1,-2})/\sqrt{4}$ states, which can later be actively controlled via the pump beam (the indices of the signal and idler photons are the azimuthal indices). We start by letting the algorithm learn the optimal 3D volume crystal hologram with a constant Gaussian pump beam, presented in Fig. \ref{fig:lg1}a(iv)-(v) and b(iv)-(v). The obtained volume holograms (Fig. \ref{fig:lg1}a-b(v)) display intricate structures: concentric rings, Fig. \ref{fig:lg1}a(v), which mark the coupling to radial LG modes ($p>0$), and corkscrew structures, Fig. \ref{fig:lg1}b(v), indicating an intrinsic chirality of the hologram. We find that the coupling to radial modes is essential for quantum destructive and constructive interference in the post-selected subspace ($p=0$), while the crystal handedness is responsible for inducing orbital angular momentum. The generated quantum correlations coincide remarkably well with the target, Fig. \ref{fig:lg1}a(i)-(ii) and b(i)-(ii). \begin{figure*}[] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/Fig3a.png} \\ \includegraphics[width=0.95\linewidth]{figures/Fig4a.png} \end{tabular} \caption{ \small Inverse design and all-optical coherent control over quantum correlations of SPDC photons: maximally-entangled two-photon states in the LG basis. \textbf{a}.
Shaped correlations corresponding to the qubit state $\ket{\psi}=(\ket{1,-1}+\exp(i\phi)\ket{-1,1})/\sqrt{2}$. (i) shows the target coincidence probability. (ii) shows the learned coincidence probability, for an initial Gaussian pump (iv) and the learned 3D NLPC volume hologram (v). In (v), 3 successive unit cells are shown (the z-axis is scaled-up by a factor of 20). All-optical control over the coincidence probability is demonstrated using a $\mathrm{LG_{01}}$ pump mode (vi), with the same learned crystal -- giving quantum correlations that correspond to a new qubit state, $\ket{\psi}=(\ket{0,1}+\exp(i\phi)\ket{1,0})/\sqrt{2}$ (iii). \textbf{b}. Shaped correlations corresponding to the ququart state $\ket{\psi}=(\ket{-2,1}+\exp(i\phi_1)\ket{0,-1}+\exp(i\phi_2)\ket{-1,0}+\exp(i\phi_3)\ket{1,-2})/\sqrt{4}$. (i) to (v) as in \textbf{a}. All-optical control over the coincidence probability is demonstrated using a $\mathrm{LG_{02}}$ pump mode (vi), with the same learned crystal -- giving quantum correlations that correspond to a different ququart state, residing on the $l_i + l_s = +1$ diagonal, $\ket{\psi}=(\ket{2,-1}+\exp(i\phi_1)\ket{0,1}+\exp(i\phi_2)\ket{1,0}+\exp(i\phi_3)\ket{-1,2})/\sqrt{4}$ (iii).} \label{fig:lg1} \end{figure*} The learned volume holograms demonstrate an even richer functionality -- they can span a larger variety of output correlations when the input pump mode is altered from Gaussian ($l=0$) to other LG modes, as depicted in Fig. \ref{fig:lg1}a-b(vi). As we alter the initial pump mode, the new correlations differ significantly from those obtained in the original design, while they still correspond to maximally-entangled states. Moreover, the new correlations keep the high signal-to-noise ratio (SNR) between the primary two-photon modes and the background of the coincidence signal, as can be seen in Figs. \ref{fig:lg1}a-b(iii). For example, by introducing an external pump orbital angular momentum, a qubit state originally on the $l_i + l_s = 0$ diagonal is shifted to a qubit on the $l_i + l_s = 1$ diagonal, Fig. \ref{fig:lg1}a, when $l_p = 1$. Similarly, a ququart on the $l_i + l_s = -1$ diagonal is shifted to the $l_i + l_s = 1$ diagonal when $l_p = 2$, Fig. \ref{fig:lg1}b. Interestingly, by using other learned holograms and superpositions of LG modes in the pump beam, we discover nontrivial pump-induced transformations between a qutrit and a ququart, and between a ququart and a qubit (see Supplementary Material, Section \ref{sup:lg}, Fig. \ref{fig:suppLG}). \paragraph{Hermite-Gauss basis:} In the previous example, in the Laguerre-Gauss basis, the learning step was performed by varying only the crystal parameters. Now, we show that by learning the quantum volume hologram and the pump beam profile simultaneously, we can improve the quality of the generated second-order quantum correlations. In this section, we explore the photon correlations in the HG basis and our target is a two-photon ququart state $\ket{\psi}=(\ket{0,1}+\exp(i\phi_1)\ket{1,0}+\exp(i\phi_2)\ket{1,2}+\exp(i\phi_3)\ket{2,1})/\sqrt{4}$ (the indices of the signal and idler photons are the Hermite-Gauss mode indices in the y direction). We consider designs that use more mature NLPC technologies, such as electric field poling \cite{kazansky1997electric}, which are restricted to 2D nonlinear holograms.
We use our algorithm to simultaneously extract the optimal quantum volume hologram that varies only in the y-direction (embedded in a 2D NLPC) and the pump beam profile that is restricted to vary only in the x-direction, for generating the desired coincidence rate counts of a maximally-entangled two-photon ququart. In Fig. \ref{fig:hg}(ii) we see the generated coincidence rate counts that result from the computed interaction parameters. While the probabilities of the generated ququart state are lower than the desired target, they are equal and significantly larger than the other, unwanted probabilities. This result is notable when taking into account the restrictions imposed by allowing only 2D variation. The obtained volume hologram (Fig. \ref{fig:hg}(iii)) and the pump profile (Fig. \ref{fig:hg}(iv)) display a Cartesian structure. \begin{figure*}[h] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/Fig5f2.PNG} \end{tabular} \caption{ \small Inverse design of quantum correlations of SPDC photons: maximally-entangled two-photon states in the HG basis. Shaped correlations corresponding to the ququart state $\ket{\psi}=(\ket{0,1}+\exp(i\phi_1)\ket{1,0}+\exp(i\phi_2)\ket{1,2}+\exp(i\phi_3)\ket{2,1})/\sqrt{4}$. (i) and (ii) show, respectively, the target and learned coincidence rate counts. (iii) and (iv) show the simultaneously learned quantum volume hologram, embedded in a 2D NLPC, and the complex pump beam profile, restricted to vary only in the x-direction. In (iii), 3 successive unit cells are shown (the z-axis is scaled-up by a factor of 20).} \label{fig:hg} \end{figure*} To better show the importance of combining both the quantum volume hologram and the pump beam profile to obtain the desired maximally-entangled state, we compared the quality of the generated second-order quantum correlations of a ququart state under the following three scenarios, using our algorithm to solve the inverse design problem: 1) extracting the optimal quantum volume hologram, embedded in a 3D NLPC, with a constant Gaussian pump; 2) extracting the complex pump beam profile, with a constant periodically-poled crystal; 3) extracting both the optimal quantum volume hologram, embedded in a 3D NLPC, and the optimal complex pump beam profile. The simultaneous learning of the pump and crystal clearly outperforms the individual learning of either. This is attributed to higher modes created by the product of the modes composing the pump and the crystal structure in the nonlinear coupling coefficient, $\kappa_j$ (in Eq. \ref{eq:waveeq}). Also, we observed no preference in the generated results when separately optimizing either the NLPC or the pump, which reflects the similar roles they play in the nonlinear coupling coefficient. For visual results, see Supplementary Material, Section \ref{sup:hg}, Fig. \ref{fig:suppHG}. \subsection{Shaping arbitrary quantum states} In order to resolve a specific two-photon quantum state generated by the tailored SPDC process, a coincidence measurement will not suffice. For this purpose, we emulate QST and integrate it into our learning stage for evaluating the corresponding density matrix, as detailed in Section \ref{subsec:observables}. The density matrix is used as an observable while the Trace Distance is taken as the discrepancy metric $\mathcal{D}(\cdot, \cdot)$ (Eq. \ref{eq:optimizer}). As a proof-of-concept, we consider two-photon qudit states with dimension $d=3$ in the LG basis.
That is, we focus on the subspace spanned by $\lbrace\ket{-1}, \ket{0}, \ket{1} \rbrace\otimes\lbrace\ket{-1}, \ket{0}, \ket{1}\rbrace$, giving a $9\times 9$ density matrix. Similar to the previous subsection, we use our algorithm to simultaneously extract the optimal quantum volume holograms, embedded in 3D NLPCs, and the pump beam profiles, for generating the desired quantum states. Fig. \ref{fig:rho1}a depicts the results for the maximally-entangled state $\ket{\psi}=(\ket{1,-1} + \ket{-1,1})/\sqrt{2}$ (corresponding to the coincidence rate shown in Fig. \ref{fig:lg1}a(i)), while Fig. \ref{fig:rho1}b depicts the results for the maximally-entangled state $\ket{\psi}=(\ket{1,-1} + \ket{0,0} +\ket{-1,1})/\sqrt{3}$ (corresponding to the coincidence rate shown in the Supplementary Material, Fig. \ref{fig:suppLG}a(i)). The generated density matrices fit the target states well, as evident in Figs. \ref{fig:rho1}a(i),(iii) and b(i),(iii). Our learned pump profiles and crystal holograms demonstrate concentric shapes, Fig. \ref{fig:rho1}a(ii),(iv) and b(ii),(iv). These maintain a total orbital angular momentum of $l_i + l_s = 0$, as expected, while making higher-order radial LG modes possible. These higher-order modes are responsible, for example, for removing the two-photon Gaussian mode $\ket{0,0}$ in the first learned state, Figs. \ref{fig:rho1}a(i) and \ref{fig:lg1}a(ii), through destructive interference, which is otherwise impossible when only using Gaussian pump beams. \begin{figure*}[] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/Qubit.png} \\ \includegraphics[width=0.95\linewidth]{figures/State1.png} \end{tabular} \caption{ \small Inverse design of quantum state density matrices of SPDC photons: maximally-entangled two-photon states in the LG basis. \textbf{a}. The qubit state $\ket{\psi}=(\ket{1,-1}+\ket{-1,1})/\sqrt{2}$. (i) and (iii) show, respectively, the learned and target states (the real part of the density matrix is shown in large, and the imaginary in small). (ii) and (iv) show the simultaneously learned complex pump beam profile and quantum volume hologram embedded in a 3D NLPC. In (iv), 3 successive unit cells are shown (the z-axis is scaled-up by a factor of 20). \textbf{b}. The qutrit state $\ket{\psi}=(\ket{1,-1}+\ket{0,0}+\ket{-1,1})/\sqrt{3}$. (i-iv) as in \textbf{a}.} \label{fig:rho1} \end{figure*} Importantly, the generated quantum two-photon states are sensitive to the relative phase between the modes constructing the pump profile and the learned nonlinear volume holograms. This feature is essential for asserting that the active all-optical control over the coincidence rates, discussed in the previous section, allows also for quantum coherent control over the generated photon qudits. To demonstrate this, we again learn a 3D volume hologram with a fixed pump profile, but this time consisting of a given superposition of LG modes. By changing the relative phase between the LG modes, we expect that the off-diagonal terms in the density matrix change accordingly. Fig. \ref{fig:rho2} depicts the results for the generated maximally-entangled two-photon ququart state $\ket{\psi}=(\ket{-1,0}+\ket{0,-1}+\ket{1,0}+\ket{0,1})/\sqrt{4}$. Initially, we use our algorithm to extract the optimal quantum volume hologram, embedded in a 3D NLPC, with a fixed pump beam of the form $\mathrm{LG_{01}}+e^{i\alpha}\mathrm{LG_{0-1}}$ for $\alpha=0^{\circ}$ (i.e., a $\mathrm{HG}_{10}$ mode, as presented in Fig. \ref{fig:rho2}a(iii)).
The real part of the generated density matrix is shown in Fig. \ref{fig:rho2}a(i) and the imaginary part in Fig. \ref{fig:rho2}a(ii). The generated density matrix fits the desired one. We then use the extracted crystal volume hologram with different superpositions of LG modes of the pump. Figs. \ref{fig:rho2}b(i)-(ii) and c(i)-(ii) show the quantum states achieved through inference with the same learned crystal hologram, but with the pump mode superposition phase angle $\alpha$ changed to $\alpha=120^{\circ}$, Fig. \ref{fig:rho2}b(iii), and $240^{\circ}$, Fig. \ref{fig:rho2}c(iii). This corresponds experimentally to a rotation of the $\mathrm{HG}_{10}$ mode. Note the significant change in the imaginary off-diagonal density matrix elements in Figs. \ref{fig:rho2}b(ii) and c(ii). This indicates the coherent control over the quantum state via the rotation of the pump beam -- a versatile functionality made available by the use of a single volume hologram pumped with different optical modes. \begin{figure*}[h] \centering \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figures/Switching.png} \end{tabular} \caption{ \small Inverse design and all-optical coherent control over the quantum state of SPDC photons: maximally-entangled two-photon ququart state in the LG basis. We use our algorithm to extract the 3D NLPC hologram that generates the desired ququart state $\ket{\psi}=(\ket{-1,0}+\ket{0,-1}+\ket{1,0}+\ket{0,1})/\sqrt{4}$, using the initial constant pump profile $\mathrm{HG_{10}} = \mathrm{LG_{01}}+\mathrm{LG_{0-1}}$ a(iii). The real part of the generated density matrix is shown in a(i) and the imaginary part in a(ii). Next, the pump beam illuminating the learned hologram is rotated to actively control the generated quantum state. b(i) and (ii) show the real and imaginary parts, respectively, of the generated density matrix for the rotated incident beam $\mathrm{LG_{01}}+e^{i120^{\circ}}\mathrm{LG_{0-1}}$ b(iii). c(i)-(ii) show the real and imaginary parts, respectively, of the generated density matrix for the rotated incident beam $\mathrm{LG_{01}}+e^{i240^{\circ}}\mathrm{LG_{0-1}}$ c(iii). } \label{fig:rho2} \end{figure*} \section{Conclusion} \label{sec:conclusion} We have introduced an algorithm for solving the inverse design problem of generating structured and entangled photon pairs in quantum optics, using tailored nonlinear interactions in the SPDC process. The \emph{SPDCinv} algorithm extracts the optimal physical parameters which yield a desired quantum state or correlations between structured photon-pairs, which can then be used in future experiments. To ensure convergence to realizable results and to improve the predictive accuracy, our algorithm obeys physical constraints through the integration of the time-unfolded propagation dynamics generated by the SPDC Hamiltonian. We have shown how we can apply our algorithm to obtain the optimal nonlinear $\chi^{(2)}$ volume holograms (2D/3D) as well as different pump structures for generating the desired maximally-entangled states. \textcolor{black}{The optimal crystal holograms extracted by our model seem to exhibit robustness against imperfections. To mimic crystal fabrication imperfections, we deliberately add errors to the crystal structure to impair the generated coincidence rate counts of the maximally-entangled two-photon qubit.
Then, we show how, with a slight variation of a different system parameter (the pump waist), we can steer the system back and nearly recover the original results (see Supplementary Material, Section \ref{sup:imperfections}).} The high dimensionality of these generated states increases the bandwidth of quantum information, and can improve the security of quantum key distribution protocols \cite{fuchs1997optimal,durt2004security}. We further demonstrate all-optical coherent control over the generated quantum states by actively changing the profile of the pump beam, making our results appealing for a variety of quantum information applications that require fast switching rates. This work can readily be extended to the spectral-temporal domain, by allowing non-periodic volume holograms along the propagation axis -- making it possible to shape the joint spectral amplitude \cite{zielnicki2018joint} of the photon pairs. Furthermore, one can easily adapt our approach to other optical systems, such as: nonlinear waveguides and resonators \cite{QiLi+2020+1287+1320}, $\chi^{(3)}$ effects (e.g. spontaneous four-wave mixing \cite{sharping2006generation}), spatial solitons \cite{stegeman1999optical, chen2012optical}, fiber-optic communication systems \cite{10.1007/3-540-46629-0_9, hager2020physics}, and even higher-order coincidence probabilities \cite{PhysRevA.78.033831}. Moreover, the algorithm can be upgraded to include passive optical elements such as beam-splitters, holograms, and mode sorters \cite{krenn2016automated}, thereby providing greater flexibility for generating and manipulating quantum optical states. \textcolor{black}{Our model can incorporate decoherence mechanisms arising from non-perturbative high-order photon pair generation in the high gain regime \cite{brambilla2004simultaneous, trajtenberg2020simulating}. Other decoherence effects due to losses, such as absorption and scattering, can be incorporated into the model in the future.} Finally, our current scheme can be adapted to other quantum systems sharing a similar Hamiltonian structure, such as superfluids and superconductors \cite{coleman2015introduction}. In light of all this, we believe that this work, along with its complementary code, can contribute to further exciting advancements and discoveries in other quantum and classical systems.
{ "attr-fineweb-edu": 1.902344, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbt_xK7DgtAWezNwV
\section{Introduction}\label{S1} Optimization over distributions is an important topic in various areas. For example, the minimum divergence between a mixture family and an exponential family has been studied via the em algorithm in the areas of machine learning and neural networks (e.g.,~\cite{Amari,Fujimoto,Allassonniere}). The em algorithm is an iterative algorithm to calculate the above minimization, and it is rooted in the study of the Boltzmann machine \cite{Bol}. In particular, the paper \cite{Fujimoto} formulated the em algorithm in the framework of Bregman divergence \cite{Amari-Nagaoka,Amari-Bregman}. This topic has been mainly studied in the communities of machine learning, neural networks, and information geometry. As another iterative algorithm, the Arimoto-Blahut algorithm is known as an algorithm to maximize the mutual information by changing the distribution on the input system \cite{Arimoto,Blahut}. This maximization is needed to calculate the channel capacity \cite{Shannon}. This algorithm has been generalized to various settings including rate distortion theory \cite{Blahut,Csiszar,Cheng,YSM}, the capacity of the wiretap channel \cite{Yasui}, and their quantum extensions \cite{Nagaoka,Dupuis,Sutter,Li,RISB}. In particular, the two papers \cite{YSM,RISB} made very useful generalizations to cover various topics in information theory. This topic has been mainly studied in the community of information theory. However, no study had discussed the relation between these two topics until recently. Recently, the paper \cite{Shoji} pointed out that the maximization of the mutual information can be considered as the maximization of the projected divergence to an exponential family by changing an element of a mixture family. The paper \cite{reverse} generalized this maximization to the framework of Bregman divergence \cite{Amari-Nagaoka,Amari-Bregman} and applied this setting to various problems in information theory. Also, the recent paper \cite{Bregman-em} applied the em algorithm to rate distortion theory, which is a key topic in information theory. In this paper, we focus on a generalized problem setting proposed in \cite{RISB}, which is given as an optimization over the set of input quantum states. In contrast to the former algorithm, the algorithm of \cite{RISB} has an acceleration parameter. By changing this parameter, we can enhance the convergence speed under a certain condition. To obtain a wider applicability, we extend their problem setting to the minimization over a general mixture family. Although they discussed the convergence speed only when there is no local minimizer, our analysis covers the convergence speed to a local minimizer even when there exist several local minimizers. Further, since our setting covers a general mixture family as the set of input variables, our method can be applied to the minimum divergence between a mixture family and an exponential family. There is a possibility that each iteration can be calculated only approximately. To cover such an approximated case, we evaluate the error of our algorithm with approximated iterations. Since the em algorithm has local minimizers in general, it is essential to cover the convergence to a local minimizer. Since our algorithm has the acceleration parameter, our application to the minimum divergence gives a generalization of the em algorithm. Also, our algorithm can be applied to the maximization of the projected divergence to an exponential family by changing an element of a mixture family.
In addition, our algorithm has various applications that were not discussed in the preceding study \cite{RISB}. In channel coding, the decoding error probability goes to zero exponentially under proper random coding when the transmission rate is smaller than the capacity \cite{Gallager}. Also, the probability of correct decoding goes to zero exponentially when the transmission rate is greater than the capacity \cite{Arimoto2}. These exponential rates are written with the optimization of the so-called Gallager function. Recently, the paper \cite{H15} showed that the Gallager function can be written as the minimization of R\'{e}nyi divergence. Using this fact, we apply our method to these optimizations. Further, we apply our algorithm to the capacity of the wiretap channel. In addition, since our problem setting allows a general mixture family as the range of input, we can address the channel capacity with cost constraint. Also, we point out that the calculation of the commitment capacity is given as the minimization of the divergence between a mixture family and an exponential family. Hence, we discuss this application as well. The remaining part of this paper is organized as follows. Section \ref{setup} formulates our minimization problem for a general mixture family. Then, we propose several algorithms to solve the minimization problem. We derive various convergence theorems including the case with approximated iterations. Section \ref{S4} applies our algorithm to various information-theoretic problems. Then, Section \ref{S5} applies our algorithm to the minimum divergence between a mixture family and an exponential family. Section \ref{S6} applies our algorithm to the commitment capacity. Section \ref{S7} applies our algorithm to the maximization of the projected divergence to an exponential family by changing an element of a mixture family. Appendices are devoted to the proofs of the theorems presented in Section \ref{setup}. \section{General setting}\label{setup} \subsection{Algorithm with exact iteration}\label{S2-1} We consider a finite probability space ${\cal X}$ and focus on the set ${\cal P}({\cal X})$ of distributions whose support is ${\cal X}$. Using $k$ functions $f_1, \ldots, f_k$ on ${\cal X}$ and constants $a=(a_1, \ldots, a_k)$, we define the mixture family ${\cal M}_a$ as follows; \begin{align} {\cal M}_a:= \{P \in {\cal P}({\cal X})| P[f_i]=a_i \hbox{ for } i=1, \ldots, k \},\label{MDP} \end{align} where $P[f]:= \sum_{x \in {\cal X}} P(x)f(x)$. When we add $l-k$ additional linearly independent functions $f_{k+1}, \ldots, f_l$ and $|{\cal X}|=l+1$, a distribution $P$ can be parameterized by the mixture parameter $\eta=(\eta_1, \ldots, \eta_l)$ as $ \eta_i= P[f_i]$. That is, the above distribution is denoted by $P_\eta$. Then, we denote the $m$-projection of $P$ to ${\cal M}_a$ by $\Gamma^{(m)}_{{\cal M}_a}[P]$. Given a continuous map $\Psi$ from ${\cal M}_a$ to the set of real-valued functions on ${\cal X}$, we consider the minimization $\min_{P \in {\cal M}_a} {\cal G}(P)$, where \begin{align} {\cal G}(P):= \sum_{x \in {\cal X}} P(x) \Psi[P](x). \end{align} The aim of this paper is to find \begin{align} \overline{{\cal G}}(a):=\min_{P \in {\cal M}_a} {\cal G}(P), \quad P_{*,a}:=\mathop{\rm argmin}\limits_{P \in {\cal M}_a} {\cal G}(P). \end{align} For this aim, with a parameter $\gamma>0$, we define the distribution ${\cal F}_3[Q] $ as ${\cal F}_3[Q](x):= \frac{1}{\kappa[Q]}Q(x)\exp( -\frac{1}{\gamma} \Psi[Q](x))$, where $\kappa[Q]$ is the normalization factor $\sum_{x \in {\cal X}} Q(x)\exp( -\frac{1}{\gamma} \Psi[Q](x))$.
Then, generalizing the algorithm by \cite{RISB}, we propose Algorithm \ref{AL1}. When the calculation of $ \Psi[P]$ and the $m$-projection is feasible, Algorithm \ref{AL1} is feasible. \begin{algorithm} \caption{Minimization of ${\cal G}(P)$} \label{AL1} \begin{algorithmic} \STATE {Choose the initial value $P^{(1)} \in \mathcal{M}_a$;} \REPEAT \STATE Calculate $P^{(t+1)}:=\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[P^{(t)}]] $; \UNTIL{convergence.} \end{algorithmic} \end{algorithm} \if0 In fact, the condition (A2) is closely related to the convexity of ${\cal G}(P)$ as follows. \begin{lemma} When \begin{align} \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[Q](x)) \ge 0 \label{XMZ2} \end{align} holds for $P,Q \in {\cal M}_a$, the map $ P \mapsto {\cal G}(P)$ is convex. \end{lemma} \begin{proof} \begin{align} &\lambda {\cal G}(P)+(1-\lambda) {\cal G}(Q) - {\cal G}(\lambda P+(1-\lambda)Q)\\ =&\lambda \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[\lambda P+(1-\lambda)Q](x))\\ &+(1-\lambda) \sum_{x \in {\cal X}} Q(x) (\Psi[Q](x)- \Psi[\lambda P+(1-\lambda)Q](x))\\ \ge & 0. \end{align} \end{proof} \fi Indeed, Algorithm \ref{AL1} is characterized as the iterative minimization of the following two-variable function, i.e., the extended objective function; \begin{align} J_\gamma(P,Q):=\gamma D(P\|Q)+\sum_{x \in {\cal X}} P(x) \Psi[Q](x). \end{align} To see this fact, we define \begin{align} {\cal F}_1[P] := \mathop{\rm argmin}\limits_{Q \in {\cal M}_a} J_\gamma(P,Q) ,\quad {\cal F}_2[Q] := \mathop{\rm argmin}\limits_{P \in {\cal M}_a} J_\gamma(P,Q) . \end{align} Then, as a generalization of a part of \cite[Lemma 3.2]{RISB}, ${\cal F}_2[Q]$ is calculated as follows. \begin{lemma}\label{L1} We have ${\cal F}_2[Q] =\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]] $, i.e., \begin{align} \min_{P \in {\cal M}_a} J_\gamma(P,Q)&= J_\gamma(\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]],Q) \nonumber \\ &= \gamma D(\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]\|{\cal F}_3[Q]) - \gamma \log \kappa[Q] ,\label{XMY} \\ J_\gamma(P,Q) &=\min_{P' \in {\cal M}_a} J_\gamma(P',Q) +\gamma D(P\| \Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]) \label{XMY2UU} \\ &=J_\gamma(\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]],Q) +\gamma D(P\| \Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]). \label{XMY2} \end{align} \end{lemma} \begin{proof} \begin{align} &J_\gamma(P,Q) =\gamma \sum_{x \in {\cal X}} P(x) (\log P(x)- \log Q(x) + \frac{1}{\gamma} \Psi[Q](x)) \nonumber\\ =&\gamma \sum_{x \in {\cal X}} P(x) (\log P(x)- \log {\cal F}_3[Q](x)- \log \kappa[Q]) \nonumber\\ =&\gamma D(P\| {\cal F}_3[Q])-\gamma \log \kappa[Q] \nonumber\\ =&\gamma D(P\| \Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]) +\gamma D(\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]\|{\cal F}_3[Q]) - \gamma\log \kappa[Q] .\label{ASS4} \end{align} Then, the minimum is given as \eqref{XMY}, and it is realized with $\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$. Applying \eqref{XMY} to the final line of \eqref{ASS4}, we obtain \eqref{XMY2UU}. Since the minimum in \eqref{XMY2UU} is realized when $P'=\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$, we obtain \eqref{XMY2}. \end{proof} As a generalization of another part of \cite[Lemma 3.2]{RISB}, we can calculate ${\cal F}_1[Q]$ as follows. \begin{lemma}\label{L2} Assume that two distributions $P,Q \in {\cal M}_a$ satisfy the following condition; \begin{align} \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[Q](x)) \le \gamma D(P\|Q). \label{BK1+} \end{align} Then, we have ${\cal F}_1[P] =P$, i.e., \begin{align} J_\gamma(P,Q)\ge J_\gamma(P,P).
\end{align} \end{lemma} \begin{proof} Eq. \eqref{BK1+} guarantees that \begin{align} &J_\gamma(P,Q)-J_\gamma(P,P) \nonumber\\ =& \gamma D(P\|Q)-\sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[Q](x)) \ge 0 \label{XMY5}. \end{align} \end{proof} Therefore, when all pairs $(P^{(t+1)},P^{(t)})$ satisfy \eqref{BK1+}, the relations \begin{align} {\cal G}(P^{(t)})=J_\gamma(P^{(t)},P^{(t)})\ge J_\gamma(P^{(t+1)},P^{(t)}) \ge J_\gamma(P^{(t+1)},P^{(t+1)})= {\cal G}(P^{(t+1)}) \label{SAC} \end{align} hold under Algorithm \ref{AL1}. The relation \eqref{SAC} guarantees that Algorithm \ref{AL1} converges to a local minimum. To discuss the detail of Algorithm \ref{AL1}, we focus on the $\delta$-neighborhood $U(P^{0},\delta)$ of $P^{0}$ defined as \begin{align} U(P^{0},\delta):=\{ P \in {\cal M}_a | D(P^{0} \|P)\le \delta \}. \end{align} In particular, we denote ${\cal M}_a$ by $U(P^{0},\infty)$. Then, we address the following conditions for the $\delta$-neighborhood $U(P^{0},\delta)$ of $P^{0}$; \begin{description} \item[(A0)] Any distribution $Q \in U(P^{0},\delta)$ satisfies the inequality \begin{align} {\cal G}({\cal F}_2[Q]) \ge {\cal G}(P^{0}).\label{BK-1} \end{align} \item[(A0')] Any distribution $Q \in U(P^{0},\delta)$ satisfies the inequality \begin{align} {\cal G}(Q) \ge {\cal G}(P^{0}).\label{BK-2} \end{align} \item[(A1)] Any distribution $Q \in U(P^{0},\delta)$ satisfies \begin{align} D_{\Psi}({\cal F}_2[Q] \|Q) \le \gamma D({\cal F}_2[Q] \| Q), \label{BK1} \end{align} where \begin{align} D_{\Psi}(P \|Q):= \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[ Q ](x)). \end{align} \item[(A2)] Any distribution $Q \in U(P^{0},\delta)$ satisfies \begin{align} D_{\Psi}(P^{0} \|Q) \ge 0. \label{XMZ} \end{align} \item[(A3)] Any distribution $Q \in U(P^{0},\delta)$ satisfies \begin{align} \sum_{x \in {\cal X}} P^{0}(x) (\Psi[P^{0}](x)- \Psi[Q](x)) \ge \beta D(P^{0} \| Q). \label{CAU} \end{align} \end{description} The condition (A3) is a stronger version of (A2). When (A1) holds, ${\cal G}(Q) \ge {\cal G}({\cal F}_2[Q])$, i.e., (A0) implies (A0'). Hence, the condition (A0') is a weaker version of (A0). However, the convergence to the global minimum is not guaranteed. As a generalization of \cite[Theorem 3.3]{RISB}, the following theorem discusses the convergence to the global minimum and the convergence speed. \begin{theorem}\label{TH1} Assume that the $\delta$-neighborhood $U(P^{0},\delta)$ of $P^{0}$ satisfies the conditions (A1) and (A2) with $\gamma$, and $P^{(1)} \in U(P^{0},\delta)$. Then, Algorithm \ref{AL1} with $t_0$ iterations has one of the following two behaviors. \begin{description} \item[(i)] There exists an integer $t_1 \le t_0+1$ such that \begin{align} {\cal G}(P^{(t_1)}) < {\cal G}(P^{0}). \end{align} \item[(ii)] Algorithm \ref{AL1} satisfies the conditions $\{P^{(t)}\}_{t=1}^{t_0+1} \subset U(P^{0},\delta)$ and \begin{align} {\cal G}(P^{(t_0+1)}) -{\cal G}(P^{0}) \le \frac{\gamma D(P^{0}\| P^{(1)}) }{t_0}.\label{XME} \end{align} \end{description} When the condition (A0) holds additionally, Algorithm \ref{AL1} with $t_0$ iterations satisfies (ii). \end{theorem} The above theorem is shown in Appendix \ref{S3-2}.
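As a concrete instance of Algorithm \ref{AL1}, consider the calculation of the channel capacity of a channel $W$ with input alphabet ${\cal X}$: choosing ${\cal M}_a={\cal P}({\cal X})$ (so that the $m$-projection reduces to normalization) and $\Psi[P](x)=-D(W_x\|W\cdot P)$ gives ${\cal G}(P)=-I(P,W)$, and Algorithm \ref{AL1} with $\gamma=1$ reduces to the classical Arimoto-Blahut update. The following minimal Python sketch illustrates this special case (the fixed iteration count and the example channel are illustrative choices):

\begin{verbatim}
# Algorithm 1 specialized to the channel capacity, with gamma as the
# acceleration parameter; gamma = 1 recovers the Arimoto-Blahut algorithm.
# All entries of W are assumed positive; divergences are in nats.
import numpy as np

def capacity(W, gamma=1.0, t0=300):
    """W[x, y]: channel transition probabilities; returns (I(P,W), P)."""
    P = np.full(W.shape[0], 1.0 / W.shape[0])          # initial value P^{(1)}
    for _ in range(t0):
        div = np.sum(W * np.log(W / (P @ W)), axis=1)  # D(W_x || W P)
        P = P * np.exp(div / gamma)                    # F_3[P] up to a factor
        P /= P.sum()                                   # m-projection: normalization
    div = np.sum(W * np.log(W / (P @ W)), axis=1)
    return float(P @ div), P

# Example: binary symmetric channel with crossover probability 0.1
C, P_opt = capacity(np.array([[0.9, 0.1], [0.1, 0.9]]))  # C ~ 0.368 nats
\end{verbatim}

Decreasing $\gamma$ speeds up the iteration as long as the condition (A1) remains valid, in accordance with the bound \eqref{XME}.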
Now, we choose an element $P^* \in {\cal M}_a$ to satisfy ${\cal G}(P^*)=\min_{P \in {\cal M}_a} {\cal G}(P)$. Then, $U(P^*,\infty)={\cal M}_a$ with $P^*$ satisfies the conditions (A0) and (A0'). When $U(P^*,\infty)={\cal M}_a$ with $P^*$ satisfies the conditions (A1) and (A2), Theorem \ref{TH1} guarantees the convergence to the minimizer $P^*$ in Algorithm \ref{AL1}. Although we consider the conditions (A1) and (A2), the condition (A2) is essential. When we choose $\gamma>0$ to be sufficiently large and $\delta\neq \infty$, the $\delta$-neighborhood $U(P^*,\delta)$ of $P^*$ satisfies the condition (A1) with $\gamma$ because $U(P^*,\delta)$ is a compact set. Hence, Theorem \ref{TH1} guarantees the convergence of Algorithm \ref{AL1}. That is, it is essential to check the condition (A2). However, as seen in \eqref{XME}, a larger $\gamma$ makes the convergence speed slower. Therefore, it is important to choose $\gamma$ to be small under the condition (A1). Practically, it is better to change $\gamma$ to be smaller when the point $P^{(t)}$ is closer to the minimizer $P^*$. In fact, as a generalization of \cite[Proposition 3.6]{RISB}, we have the following exponential convergence under a stronger condition depending on $\gamma$. In this sense, the parameter is called an acceleration parameter \cite[Remark 3.4]{RISB}. \begin{theorem}\label{TH2} Assume that the $\delta$-neighborhood $U(P^{0},\delta)$ of $P^{0}$ satisfies the conditions (A1) and (A3) with $\gamma$ and $\beta$, and $P^{(1)} \in U(P^{0},\delta)$. Then, Algorithm \ref{AL1} with $t_0$ iterations has one of the following two behaviors. \begin{description} \item[(i)] There exists an integer $t_1 \le t_0+1$ such that \begin{align} {\cal G}(P^{(t_1)}) < {\cal G}(P^{0}). \end{align} \item[(ii)] Algorithm \ref{AL1} satisfies the conditions $\{P^{(t)}\}_{t=1}^{t_0+1} \subset U(P^{0},\delta)$ and \begin{align} {\cal G}(P^{(t_0+1)})-{\cal G}(P^{0}) \le (1-\frac{\beta}{\gamma})^{t_0} D(P^{0} \| P^{(1)}).\label{CAU2} \end{align} \end{description} When the condition (A0) holds additionally, Algorithm \ref{AL1} with $t_0$ iterations satisfies (ii). \if0 the $\delta$-neighborhood $U(P^{0},\delta)$ of $P^{0}$ satisfies the conditions (A0), (A1), and (A3) with $\gamma$ and $\beta$, and $P^{(1)} \in U(P^{0},\delta)$. Then, Algorithm \ref{AL1} satisfies the conditions $\{P^{(t)}\} \subset U(P^{0},\delta)$ and \begin{align} {\cal G}(P^{(t+1)})-{\cal G}(P^*) \le (1-\frac{\beta}{\gamma})^t D(P^* \| P^{(1)}). \end{align} \fi \end{theorem} The above theorem is shown in Appendix \ref{S3-3}. Next, we consider the case when there are several local minimizers $P^*_1, \ldots, P^*_{n} \in {\cal M}_a$ while the true minimizer is $P^*$. These local minimizers are characterized by the following corollary, which is shown in Appendix \ref{S3-2} as a corollary of Theorem \ref{TH1}. \begin{corollary}\label{Cor1} For $i=1, \ldots, n$, we have \begin{align} \sum_{x \in {\cal X}}P^*(x) (\Psi[P^*](x)-\Psi[P^*_i](x))= {\cal G}(P^*)-{\cal G}(P^*_i)<0.\label{Cor2} \end{align} \end{corollary} Hence, if there exist local minimizers, $U(P^*,\infty)={\cal M}_a$ with $P^*$ does not satisfy the condition (A2). In this case, when the $\delta$-neighborhood $U(P^{*,i},\delta)$ of $P^{*,i}$ satisfies the conditions (A0'), (A1), and (A2), Algorithm \ref{AL1} converges to the local minimizer $P^{*,i}$ with the speed \eqref{XME} except for the case (i). Since $P^{*,i}$ is a local minimizer, the $\delta$-neighborhood $U(P^{*,i},\delta)$ of $P^{*,i}$ satisfies the conditions (A0') and (A1) with sufficiently small $\delta>0$.
When the following condition (A4) holds, as shown below, the $\delta$-neighborhood $U(P^{*,i},\delta)$ of $P^{*,i}$ satisfies the condition (A2) with sufficiently small $\delta>0$. That is, when the initial point belongs to the $\delta$-neighborhood $U(P^{*,i},\delta)$, Algorithm \ref{AL1} converges to $P^{*,i}$. \begin{description} \item[(A4)] The function $\eta \mapsto \Psi[P_\eta](x)$ is differentiable, and the relation \begin{align} \sum_{x \in {\cal X}}P_{\eta} (x) \Big(\frac{\partial }{\partial \eta_i}\Psi[P_\eta](x)\Big)= 0 \end{align} holds for $i=1, \ldots, k$. \end{description} \begin{lemma}\label{L6} We consider the following two conditions for a convex subset ${\cal K} \subset {\cal M}_a$. \begin{description} \item[(B1)] The relation \begin{align} \sum_{x \in {\cal X}} P(x) (\Psi[P](x)-\Psi[Q](x)) \ge 0 \end{align} holds for $P,Q \in {\cal K}$. \item[(B2)] ${\cal G}(P)$ is convex with respect to the mixture parameter on ${\cal K}$. \end{description} The condition (B1) implies the condition (B2). In addition, when the condition (A4) holds, the condition (B2) implies the condition (B1). \end{lemma} When the function $\eta \mapsto \Psi[P_\eta](x)$ is twice differentiable and the Hessian of $\eta \mapsto {\cal G}(P_\eta)$ is positive definite at a local minimizer $ P^*_i$, the function ${\cal G}$ is convex in the $\delta$-neighborhood $U(P^{*,i},\delta)$ of $P^{*,i}$ with a sufficiently small $\delta>0$ because the Hessian remains positive definite in the neighborhood. Then, Lemma \ref{L6} guarantees the condition (A2) for the $\delta$-neighborhood $U(P^{*,i},\delta)$. Algorithm \ref{AL1} converges to the local minimizer $P^{*,i}$ with the speed \eqref{XME} except for the case (i). \begin{proofof}{Lemma \ref{L6}} Assume the condition (B1). Then, for $\lambda \in [0,1]$, we have \begin{align} \varphi(\lambda):=&\lambda {\cal G}(P)+(1-\lambda) {\cal G}(Q) - {\cal G}(\lambda P+(1-\lambda)Q)\nonumber \\ =&\lambda \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[\lambda P+(1-\lambda)Q](x)) \nonumber \\ &+(1-\lambda) \sum_{x \in {\cal X}} Q(x) (\Psi[Q](x)- \Psi[\lambda P+(1-\lambda)Q](x))\nonumber \\ \ge & 0, \end{align} which implies (B2). Assume the conditions (A4) and (B2). Since $\varphi(\lambda)\ge 0$ for $\lambda \in [0,1]$, we have \begin{align} 0\le &\frac{d \varphi(\lambda)}{d \lambda}\Big|_{\lambda=0} \nonumber \\ =&{\cal G}(P)- {\cal G}(Q) -\sum_{x \in {\cal X}} (P(x)-Q(x))\Psi[Q](x)\nonumber \\ &-\sum_{x \in {\cal X}} Q(x) \frac{d \Psi[\lambda P+(1-\lambda)Q](x)}{d\lambda}\Big|_{\lambda=0}\nonumber \\ \stackrel{(a)}{=}&{\cal G}(P)- {\cal G}(Q) -\sum_{x \in {\cal X}} (P(x)-Q(x))\Psi[Q](x), \end{align} which implies (B1), where $(a)$ follows from the condition (A4). \end{proofof} \begin{remark} The reference \cite{RISB} considers the same problem setting in the quantum case. Their setting covers the case when ${\cal M}_a$ is given as ${\cal P}({\cal X})$, and does not cover the case of a general mixture family ${\cal M}_a$. To address the case with cost constraint, this extension is essential. The ideas of Theorems \ref{TH1} and \ref{TH2} are quite similar to \cite[Theorem 3.3 and Proposition 3.6]{RISB}. However, these preceding studies consider the case when $P^0$ and $U(P^{0},\delta)$ are $P^*$ and $ {\cal P}({\cal X})$, respectively. That is, they do not cover the case with local minimizers. \end{remark} \subsection{Algorithm with approximated iteration}\label{S2-2} In general, it is not so easy to calculate the $m$-projection $\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$.
We consider the case when it is calculated only approximately. There are two methods to calculate the $m$-projection. One is the method based on the minimization in the given mixture family, and the other is the method based on the minimization in the exponential family orthogonal to the mixture family. In the first method, the $m$-projection $\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$ is the minimizer of the following minimization; \begin{align} \min_{P \in {\cal M}_a} D(P\| {\cal F}_3[Q]).\label{ZMY} \end{align} To describe the second method, we define the exponential family \begin{align} Q_{\theta}(x):= {\cal F}_3[Q](x) e^{\sum_{j=1}^k \theta^j f_j(x)- \phi[Q](\theta)}, \end{align} where \begin{align} \phi[Q](\theta):= \log \sum_{x \in {\cal X}}{\cal F}_3[Q](x) e^{\sum_{j=1}^k \theta^j f_j(x)}. \end{align} The projected element $\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$ is the unique element of the intersection $ \{Q_\theta\}\cap {\cal M}_a$. Then, the $m$-projection $\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[Q]]$ is given as the solution of the following equation; \begin{align} \frac{\partial \phi[Q]}{\partial \theta^j }(\theta) =\sum_{x\in {\cal X}}Q_\theta(x) f_j(x)= a_j\label{ZMX} \end{align} for $j=1, \ldots, k$. The solution of \eqref{ZMX} is given as the minimizer of the following minimization; \begin{align} \min_{\theta \in \mathbb{R}^k} \phi[Q](\theta)- \sum_{j=1}^k \theta^j a_j.\label{ZMX2} \end{align} We discuss the precision of our algorithm when each step has a certain error in the above minimization. \if0 To discuss the first method with error, we consider Algorithm \ref{AL2} instead of Algorithm \ref{AL1}. \begin{algorithm} \caption{Minimization of ${\cal G}(P)$ with $\epsilon$ error in \eqref{ZMY}} \label{AL2} \begin{algorithmic} \STATE {Choose the initial value $P^{(1)} \in \mathcal{M}_a$;} \REPEAT \STATE Calculate $P^{(t+1)}$ to satisfy \begin{align} D(P^{(t+1)}\| {\cal F}_3[P^{(t)}]) \le \min \Big( D(P^{(t)}\| {\cal F}_3[P^{(t)}]), \min_{P \in {\cal M}_a} D(P\| {\cal F}_3[P^{(t)}]) +\epsilon \Big); \end{align} \UNTIL{convergence.} \end{algorithmic} \end{algorithm} \fi \if0 \begin{theorem} When \eqref{XMZ} and \eqref{BK1} hold, we have \begin{align} {\cal G}(P^{(k+1)}) -{\cal G}(P^*) \le \gamma \frac{D(P^*\| P^{(1)}) }{k} + 2 \kappa \sqrt{D(P^*\| P^{(1)}) \epsilon} +(\kappa+1)\epsilon. \label{XZW} \end{align} \end{theorem} \fi However, the first method requires the minimization with the same number of parameters as the original minimization $\min_{P \in {\cal M}_a} {\cal G}(P)$. Hence, it is better to employ the second method. In fact, when ${\cal M}_a$ is given as a subset of ${\cal P}({\cal X})$ with one linear constraint, the minimization \eqref{ZMX2} is written as a one-parameter convex minimization. Since any one-parameter convex minimization can be performed by the bisection method, which needs $O(-\log \epsilon)$ iterations \cite{BV}, the cost of this minimization is much smaller than that of the original minimization $\min_{P \in {\cal M}_a} {\cal G}(P)$. To consider an algorithm based on the minimization \eqref{ZMX2}, we assume that $\Psi$ is defined on ${\cal P}({\cal X})$.
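For instance, when $k=1$, the solution of \eqref{ZMX} (equivalently, the minimizer of \eqref{ZMX2}) can be obtained by the bisection method, as in the following minimal sketch; the bracketing interval and the tolerance are illustrative assumptions, and the target value $a$ must lie strictly between $\min_x f(x)$ and $\max_x f(x)$:

\begin{verbatim}
# One-parameter m-projection: find theta with E_theta[f] = a by bisection,
# where Q_theta(x) is proportional to R(x) exp(theta f(x)) and R = F_3[Q].
# The map theta -> E_theta[f] is nondecreasing since its derivative equals
# the variance of f under Q_theta.
import numpy as np

def m_projection_1d(R, f, a, T=50.0, tol=1e-10):
    """R: the distribution F_3[Q]; f: constraint function; a: target mean."""
    def mean_f(theta):
        s = theta * f
        w = R * np.exp(s - s.max())   # shifted for numerical stability
        w = w / w.sum()
        return float(np.dot(w, f))
    lo, hi = -T, T
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_f(mid) < a:
            lo = mid
        else:
            hi = mid
    s = 0.5 * (lo + hi) * f
    P = R * np.exp(s - s.max())
    return P / P.sum()                # the projected distribution Q_theta
\end{verbatim}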
In the multi-parameter case, we can use the gradient method and the accelerated proximal gradient method \cite{BT,AT,Nesterov,Nesterov2,Nesterov3,Teboulle}. \begin{algorithm} \caption{Minimization of ${\cal G}(P)$ with approximated iteration} \label{AL3} \begin{algorithmic} \STATE {Choose the initial value $P^{(1)} \in \mathcal{M}$;} \REPEAT \STATE Calculate the pair of $P^{(t+1)} \in {\cal M}_a$ and $\bar{P}^{(t+1)}= Q_\theta$ with $Q=P^{(t)}$ to satisfy \begin{align} \phi[\bar{P}^{(t)}](\theta)- \sum_{j=1}^k \theta^j a_j &\le \min_{\theta' \in \mathbb{R}^k} \phi[\bar{P}^{(t)}](\theta')- \sum_{j=1}^k {\theta'}^j a_j +\epsilon_1 \label{AMG} \\ D(\bar{P}^{(t+1)}\| P^{(t+1)} ) &\le \epsilon_2\label{NXP} . \end{align} \UNTIL{$t=t_1-1$.} \STATE {\bf final step:}\quad We output the final estimate $P_f^{(t_1)} :=P^{(t_2)} \in \mathcal{M}$ by using $t_2:= \mathop{\rm argmin}\limits_{t=2, \ldots, t_1} {\cal G}(P^{(t)})- \gamma D(P^{(t)} \| \bar{P}^{(t)})$. \end{algorithmic} \end{algorithm} To consider the convergence of Algorithm \ref{AL3}, we extend the conditions (A1) and (A2). For this aim, we focus on the $\delta$-neighborhood $\bar{U}(P^{0},\delta)$ of $P^{0} \in {\cal M}_a$ defined as \begin{align} \bar{U}(P^{0},\delta):=\{ P \in {\cal P}({\cal X}) | D(P^{0} \|P)\le \delta \}. \end{align} Then, we introduce the following conditions for the $\delta$-neighborhood $\bar{U}(P^{0},\delta)$ of $P^{0}$. \begin{description} \item[(A1+)] A distribution $Q \in \bar{U}(P^{0},\delta)\cap {\cal M}_a ={U}(P^{0},\delta)$ satisfies the following condition. When a distribution $P \in {\cal M}_a$ satisfies $D( P \|{\cal F}_2[Q]) \le \epsilon_2$, we have \begin{align} \sum_{x \in {\cal X}} P(x) (\Psi[P](x)- \Psi[ Q ](x)) \le \gamma D(P \| Q). \label{BK12} \end{align} \item[(A2+)] A distribution $Q \in \bar{U}(P^{0},\delta)$ satisfies \eqref{XMZ}. \end{description} The convergence of Algorithm \ref{AL3} is guaranteed in the following theorem. \begin{theorem}\label{TH8} Assume that the $\delta$-neighborhood $\bar{U}(P^{0},\delta)$ of $P^{0}$ satisfies the conditions (A1+) and (A2+) with $\gamma$, $\epsilon_2$, and $P^{(1)} \in U(P^{0},\delta)$. Then, Algorithm \ref{AL3} satisfies the conditions \begin{align} D(\Gamma^{(m)}_{{\cal M}_a}[{\cal F}_3[\bar{P}^{(t)}]]\| \bar{P}^{(t+1)} ) & \le \epsilon_1 \label{XP8}\\ {\cal G}(P_f^{(t_1)}) -{\cal G}(P^*) & \le \frac{\gamma D(P^*\| P^{(1)}) }{t_1-1} + \epsilon_1 +\gamma \epsilon_2. \label{XZWN} \end{align} \end{theorem} The above theorem is shown in Appendix \ref{S3-4}. We discussed the convergences of Algorithms \ref{AL1} and \ref{AL3} under several conditions. When these conditions do not hold, we cannot guarantee the global convergence, but the algorithms achieve a local minimum. Hence, we need to repeat these algorithms by changing the initial value. \begin{remark} To address the minimization with a cost constraint, the paper \cite{YSM} added a linear penalty term to $f$. However, this method does not guarantee that the obtained result satisfies the required cost constraint. Our method can be applied to any mixture family, including a distribution family with a cost constraint. Hence, our method can be applied directly without the above modification, although we need to calculate the m-projection. As explained in this subsection, this m-projection can be obtained by a convex minimization whose number of variables equals the number of constraints defining the mixture family. If the number of constraints is not so large, the m-projection is still feasible. \end{remark} \subsection{Combination of gradient method and Algorithm \ref{AL1}}\label{S23} Although we can use the gradient method to calculate \eqref{ZMX2} for a general mixture family ${\cal M}_a$, we propose another algorithm that combines the gradient method and Algorithm \ref{AL1}. For this aim, we consider the following functions given by the Legendre transform. For $b=(b_1, \ldots, b_k)$ and $c=(c_1, \ldots, c_{l-k})$, we define \begin{align} {\cal G}_*(b,c) :=& \sup_{P \in {\cal P}({\cal X})} \sum_{i=1}^k b^i P[f_i] +\sum_{j=1}^{l-k} c^j P[f_{k+j}]-{\cal G}(P) , \end{align} and \begin{align} \overline{{\cal G}}_*(b):=&{\cal G}_*(b,0) =\sup_{P \in {\cal P}({\cal X})} \sum_{i=1}^k b^i P[f_i]-{\cal G}(P) = \sup_{a \in \mathbb{R}^k} \sum_{i=1}^k b^i a_i -\overline{{\cal G}}(a).\label{ASS} \end{align} In the following, we consider the calculation of $\overline{{\cal G}}(0)$ by assuming that the function $\eta \mapsto {\cal G}(P_\eta) $ is $C^2$-continuous and convex.
Since $\sup_{b \in \mathbb{R}^k} \sum_{i=1}^k b^i a_i -\overline{{\cal G}}_*(b) =\overline{{\cal G}}(a)$, we have \begin{align} \min_{b \in \mathbb{R}^k} \overline{{\cal G}}_*(b) =-\overline{{\cal G}}(0).\label{ZMQ} \end{align} That is, when we find the minimizer $b_*:= \mathop{\rm argmin}\limits_{b \in \mathbb{R}^k} \overline{{\cal G}}_*(b)$, we can calculate $\overline{{\cal G}}(0)$ as $\overline{{\cal G}}(0)= -\Big(\sup_{P \in {\cal P}({\cal X})} \sum_{i=1}^k b_*^i P[f_i]-{\cal G}(P)\Big) $. To find it, we choose a real number $L$ that upper-bounds the matrix norm of the Hessian of $\overline{{\cal G}}_*$, which implies the uniform Lipschitz condition; \begin{align} \| \nabla \overline{{\cal G}}_*(b)- \nabla \overline{{\cal G}}_*(b')\| \le L \| b-b'\| . \end{align} Then, we apply the following update rule for the minimization of $\overline{{\cal G}}_*(b)$; \begin{align} b_{k+1}:= b_k -\frac{1}{L}\nabla \overline{{\cal G}}_*(b_k). \end{align} The following precision is guaranteed \cite[Chapter 10]{Beck} \cite{BT,Nesterov}; \begin{align} |\overline{{\cal G}}_*(b_k)- \overline{{\cal G}}_*(b_*)| \le \frac{L}{2k} \| b_*-b_0\|^2.\label{XMU} \end{align} However, the calculation of \eqref{NER} requires a large computational cost. Hence, replacing it by a one-step iteration of Algorithm \ref{AL1}, we propose another algorithm. For this aim, we notice that \begin{align} \nabla \overline{{\cal G}}_*(b)= \mathop{\rm argmax}\limits_{a \in \mathbb{R}^k} \sum_{i=1}^k b^i a_i -\overline{{\cal G}}(a)= (Q_b[f_i])_{i=1}^k,\label{ZMR} \end{align} where \begin{align} Q_b :=\mathop{\rm argmax}\limits_{P \in {\cal P}({\cal X})} \sum_{i=1}^k b^i P[f_i]-{\cal G}(P) =\mathop{\rm argmin}\limits_{P \in {\cal P}({\cal X})} \sum_{x \in {\cal X}}P(x) \Big(\Psi[P](x) -\sum_{i=1}^k b^i f_i(x)\Big) \label{NER}. \end{align} Using ${\cal F}_{3}^{b}[Q](x):= \frac{1}{\kappa}Q(x)\exp\Big( -\frac{1}{\gamma} \Big(\Psi[Q](x) -\sum_{i=1}^k b^i f_i(x)\Big)\Big)$ with the normalizing constant $\kappa$, we propose Algorithm \ref{AL4}. \begin{algorithm} \caption{Minimization of ${\cal G}(P)$} \label{AL4} \begin{algorithmic} \STATE {Choose the initial values $P^{(1)} \in \mathcal{M}$ and $b_1 \in \mathbb{R}^k$;} \REPEAT \STATE Calculate $P^{(t+1)}:={\cal F}_{3}^{b_t}[P^{(t)}]$ and $b_{t+1}:= b_t- \frac{1}{L} (P^{(t+1)}[f_i])_{i=1}^k$; \UNTIL{convergence.} \end{algorithmic} \end{algorithm} It is not so easy to evaluate the convergence speed of Algorithm \ref{AL4}. However, when it converges, the convergence point is the true minimizer. \begin{theorem} When the pair $(b,P)$ is a convergence point, we have $b=b_*$ and $P=P_*$. \end{theorem} \begin{proof} Since the pair $(b,P)$ is a convergence point, we have $P={\cal F}_{3}^b[P]$, which implies \begin{align} \sum_{i=1}^k b^i P[f_i]-{\cal G}(P) =\sup_{P' \in {\cal P}({\cal X})} \sum_{i=1}^k b^i P'[f_i]-{\cal G}(P') =\overline{{\cal G}}_*(b). \label{CMR} \end{align} Since the pair $(b,P)$ is a convergence point, we also have $ P[f_i] =0$ for $i=1, \ldots, k$, i.e., the distribution $P$ satisfies the required condition in \eqref{MDP}. The relation \eqref{ZMR} then implies $\nabla \overline{{\cal G}}_*(b)=0$. Hence, \eqref{ZMQ} yields $\overline{{\cal G}}_*(b)=-\overline{{\cal G}}(0)$, which implies $b=b_*$.
Therefore, since $P[f_i]=0$, the relation \eqref{CMR} is rewritten as ${\cal G}(P) =\overline{{\cal G}}(0)$, which implies $P=P_*$. \end{proof} \begin{remark} We compare our algorithm with the general algorithm proposed in \cite{YSM}. The input of the objective function in \cite{YSM} forms a mixture family. The function $f$ given in \cite[(6)]{YSM} satisfies the condition of ${\cal G}$ by considering the second line of \cite[(6)]{YSM} as $\Psi$. Their algorithm is the same as Algorithm \ref{AL1} with $\gamma=1$ when there is no constraint because their extended objective function $g$ defined in \cite[(16)]{YSM} can be considered as $D(P\|Q)+ \sum_{x \in {\cal X}}P(x) \Psi[Q](x)$, where the choice of $q$ in \cite{YSM} corresponds to the choice of $P$ and the choice of $Q_1,\ldots, Q_K$ in \cite{YSM} corresponds to the choice of $Q$. Also, we can show that the function $f$ given in \cite[(6)]{YSM} satisfies the condition (A4). Since the condition (A4) holds, the convexity of $f$ is equivalent to the condition (B1). This equivalence in this case was shown as \cite[Proposition 4.1]{YSM}. They showed the convergence of their algorithm as \cite[Theorem 4.1]{YSM}, which can be considered as a special case of our Theorem \ref{TH1}. However, our treatment of the constraint is different from theirs. They consider the minimization $\min_{P \in {\cal P}({\cal X})} {\cal G}(P)-\sum_{i=1}^k b^i P[f_i] $ without updating the parameter $b$. Hence, their algorithm cannot achieve the minimum with the desired constraint, whereas Algorithms \ref{AL1}, \ref{AL3}, and \ref{AL4} can. Although their algorithm is similar to Algorithm \ref{AL4}, Algorithm \ref{AL4} updates the parameter $b$ to achieve the minimum with the desired constraint. \end{remark} \section{Application to information theoretical problems}\label{S4} \subsection{Channel capacity}\label{S4-1} According to the reference \cite{RISB}, we apply our problem setting to channel coding. A channel is given as a conditional distribution $W_{Y|X=x}$ on the probability space ${\cal Y}$ conditioned on ${\cal X}$, where ${\cal Y}$ is a general probability space with a measure $\mu$ and ${\cal X}$ is a finite probability space. For two absolutely continuous distributions $P_Y$ and $Q_Y$ with respect to $\mu$ on ${\cal Y}$, the Kullback-Leibler divergence $D(P_Y\|Q_Y)$ is given as \begin{align} D(P_Y\|Q_Y):= \int_{{\cal Y}}p_Y(y) (\log p_Y(y)-\log q_Y(y))\mu(dy), \end{align} where $p_Y$ and $q_Y$ are the probability density functions of $P_Y$ and $Q_Y$ with respect to $\mu$. This quantity is generalized to the R\'enyi divergence with order $\alpha>0$ as \begin{align} D_\alpha(P_{Y} \| Q_Y):= \frac{1}{\alpha-1}\log \int_{{\cal Y}} \Big(\frac{p_Y(y)}{q_Y(y)}\Big)^{\alpha-1}p_Y(y) \mu(dy). \end{align} The channel capacity $C(W_{Y|X})$ is given as the maximization of the mutual information $I(P_X,W_{Y|X})$ as \cite{Shannon} \begin{align} C(W_{Y|X})&:=\max_{P_X} I(P_X,W_{Y|X}) \label{CMD}\\ I(P_X,W_{Y|X})&:=\sum_{x \in {\cal X}}P_X(x) D(W_{Y|X=x}\| W_{Y|X} \cdot P_X) \nonumber \\ &= D(W_{Y|X} \times P_X \| (W_{Y|X} \cdot P_X) \times P_X), \end{align} where $W_{Y|X} \cdot P_X$ and $W_{Y|X} \times P_X$ are defined as the following probability density functions $w_{Y|X} \cdot P_X$ and $w_{Y|X} \times P_X$; \begin{align} (w_{Y|X} \cdot P_X)(y) &:= \sum_{x \in {\cal X}}P_X(x) w_{Y|X=x}(y) \\ (w_{Y|X} \times P_X)(x, y)& := P_X(x) w_{Y|X=x}(y) .
\end{align} Moreover, the mutual information $I(P_X,W_{Y|X})$ has another form as \begin{align} I(P_X, W_{Y|X})= \min_{Q_Y} \sum_{x \in {\cal X}}P_X(x) D(W_{Y|X=x}\| Q_{Y}). \label{Eq82} \end{align} When we choose ${\cal M}_a$ and $\Psi$ as ${\cal P}({\cal X})$ and \begin{align} \Psi_{W_{Y|X}}[P_X](x):= - D(W_{Y|X=x}\| W_{Y|X} \cdot P_X), \end{align} $-I(P_X,W_{Y|X})$ coincides with ${\cal G}(P_X)$ \cite{RISB}. Since \begin{align} \sum_{x\in {\cal X}}P_X(x) (\Psi[P_X](x)-\Psi[Q_X](x))= D(W_{Y|X} \cdot P_X\| W_{Y|X} \cdot Q_X) \ge 0, \end{align} the condition (A2) holds with ${\cal P}({\cal X})$. In addition, since the information processing inequality guarantees that \begin{align} D(W_{Y|X} \cdot P_X\| W_{Y|X} \cdot Q_X) \le D(P_X\|Q_X), \end{align} the condition (A1) holds with $\gamma=1$ and ${\cal P}({\cal X})$. In this case, ${\cal F}_3$ is given as \begin{align} {\cal F}_3[Q_X](x)=\frac{1}{\kappa_{W_{Y|X}}[Q_X]}Q_X(x) \exp ( \frac{1}{\gamma} D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X)), \end{align} where the normalizing constant $\kappa_{W_{Y|X}}[Q_X]$ is given as $\kappa_{W_{Y|X}}[Q_X]=\sum_{x \in {\cal X}}Q_X(x) \exp ( \frac{1}{\gamma} D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X))$. When $\gamma=1$, it coincides with the Arimoto-Blahut algorithm \cite{Arimoto,Blahut}. Since ${\cal F}_3[Q_X] \in {\cal P}({\cal X})$, $P_X^{(t+1)}$ is given as ${\cal F}_3[P_X^{(t)}]$.
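The update ${\cal F}_3$ above admits a direct implementation for a discrete memoryless channel. The following is a minimal sketch under the assumption of finite input and output alphabets, so that the integral over ${\cal Y}$ becomes a finite sum; the function name and interface are our own choices, and we assume every output letter is reachable (every column of the transition matrix has a positive entry). Setting \verb|gamma=1| recovers the Arimoto-Blahut update.
\begin{verbatim}
import numpy as np

def capacity_iteration(W, gamma=1.0, n_iter=500, tol=1e-12):
    # W[x, y]: channel transition matrix (rows sum to one)
    nx = W.shape[0]
    P = np.full(nx, 1.0 / nx)        # initial input distribution

    def rel_entropies(P):
        # D(W_{Y|X=x} || W . P) for every input letter x
        PY = P @ W                   # output distribution W . P
        safeW = np.where(W > 0, W, 1.0)
        return np.sum(W * np.where(W > 0, np.log(safeW / PY), 0.0),
                      axis=1)

    for _ in range(n_iter):
        Dx = rel_entropies(P)
        P_new = P * np.exp(Dx / gamma)   # F_3 update
        P_new /= P_new.sum()             # normalizing constant kappa
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    return P, float(P @ rel_entropies(P))
\end{verbatim}
For instance, for a binary symmetric channel, this iteration converges to the uniform input distribution, which is the capacity-achieving input.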
\subsection{Reliability function in channel coding}\label{S4-2} In channel coding, we consider the reliability function, which was originally introduced by Gallager \cite{Gallager} and expresses the exponential decreasing rate of an upper bound of the decoding error probability under random coding. To achieve this aim, for $\alpha > 0 $, we define \begin{align} I_{\alpha}(P_X,W_{Y|X}):= \frac{\alpha}{\alpha-1}\log \Big( \int_{{\cal Y}} \Big(\sum_{x \in {\cal X}} P_X(x) w_{Y|X=x}(y)^{\alpha}\Big)^{\frac{1}{\alpha}} \mu(dy) \Big), \end{align} although this parameterization is different from the original parameterization in \cite{Gallager}. Then, when the code is generated with the random coding based on the distribution $P_X$, the decoding error probability with coding rate $R$ is upper bounded by the following quantity; \begin{align} e^{n \min_{\rho \in [0,1]}\big(\rho R -\rho I_{\frac{1}{1+\rho}}(P_X,W_{Y|X})\big)} \end{align} when we use the channel $W_{Y|X}$ $n$ times. Notice that $e^{-\rho I_{\frac{1}{1+\rho}}(P_X,W_{Y|X})}= \int_{{\cal Y}} \Big(\sum_{x \in {\cal X}} P_X(x) w_{Y|X=x}(y)^{\frac{1}{1+\rho}}\Big)^{1+\rho} \mu(dy)$.
Taking the minimum over the choice of $P_X$, we have \begin{align} \min_{P_X} e^{\min_{\rho \in [0,1]}\big(\rho R -\rho I_{\frac{1}{1+\rho}}(P_X,W_{Y|X})\big)} = \min_{\alpha \in [1/2,1]} \Big(e^{-\frac{\alpha-1}{\alpha} R} \min_{P_X} e^{\frac{\alpha-1}{\alpha} I_{\alpha}(P_X,W_{Y|X})}\Big) \end{align} with $\alpha=\frac{1}{1+\rho}\in [1/2,1]$. Therefore, we consider the following minimization; \begin{align} \min_{P_X} e^{\frac{\alpha-1}{\alpha} I_{\alpha}(P_X,W_{Y|X})} =\min_{P_X} \int_{{\cal Y}} \Big(\sum_{x \in {\cal X}} P_X(x) w_{Y|X=x}(y)^{\alpha}\Big)^{\frac{1}{\alpha}} \mu(dy) \label{AMP}. \end{align} In the following, we discuss the RHS of \eqref{AMP} with $\alpha \in [1/2,1]$. To apply our method, as a generalization of \eqref{Eq82}, we consider another expression of $I_{\alpha}(P_X,W_{Y|X})$; \begin{align} I_{\alpha}(P_X,W_{Y|X})= \min_{Q_Y} D_\alpha(W_{Y|X}\times P_X \| Q_Y\times P_X), \label{MXP} \end{align} which was shown in \cite[Lemma 2]{H15}. Using \begin{align} Q_{Y|\alpha,P_X}:=& \mathop{\rm argmin}\limits_{Q_Y} D_\alpha(W_{Y|X}\times P_X \| Q_Y\times P_X) \nonumber\\ =&\mathop{\rm argmax}\limits_{Q_Y}\sum_{x\in {\cal X}}P_X(x)e^{ (\alpha-1)D_\alpha(W_{Y|X=x} \| Q_{Y})},\label{XCP} \end{align} we have \begin{align} \Big(\min_{P_X} e^{\frac{\alpha-1}{\alpha} I_{\alpha}(P_X,W_{Y|X})}\Big)^\alpha = \min_{P_X} \sum_{x\in {\cal X}}P_X(x)e^{ (\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})}. \label{APT} \end{align} To solve the minimization \eqref{APT}, we apply our method to the case when $\Psi$ is given as \begin{align} \Psi_{\alpha,W_{Y|X}}[P_X](x):= e^{(\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})}. \end{align} Since \eqref{XCP} guarantees that \begin{align} &\sum_{x\in {\cal X}}P_X(x) (\Psi_{\alpha,W_{Y|X}}[P_X](x) -\Psi_{\alpha,W_{Y|X}}[Q_X](x)) \\ =& \sum_{x\in {\cal X}}P_X(x) \Big(e^{ (\alpha-1)D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})} - e^{ (\alpha-1)D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,Q_X})}\Big) \ge 0, \end{align} the condition (A2) holds with ${\cal P}({\cal X})$. The condition (A1) can be satisfied with sufficiently large $\gamma$. In this case, ${\cal F}_3$ is given as \begin{align} {\cal F}_{3,\alpha}[Q_X](x)=\frac{1}{\kappa_{\alpha,W_{Y|X}}[Q_X]}Q_X(x) \exp ( -\frac{1}{\gamma} e^{(\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,Q_X})}), \end{align} where the normalizing constant $\kappa_{\alpha,W_{Y|X}}[Q_X]$ is given as $\kappa_{\alpha,W_{Y|X}}[Q_X]=$\par\noindent $\sum_{x \in {\cal X}}Q_X(x) \exp ( -\frac{1}{\gamma} e^{(\alpha-1) D_\alpha( W_{Y|X=x} \| Q_{Y|\alpha,Q_X})})$. Since ${\cal F}_{3,\alpha}[Q_X] \in {\cal M}_a$, $P_X^{(t+1)}$ is given as ${\cal F}_{3,\alpha}[P_X^{(t)}]$. \subsection{Strong converse exponent in channel coding}\label{S4-3} In channel coding, we also discuss an upper bound of the probability of correct decoding. This probability is upper bounded by the following quantity; \begin{align} \max_{P_X} e^{n \min_{\rho \in [0,1]} \Big(-\rho R +\rho I_{\frac{1}{1-\rho}}(P_X,W_{Y|X})\Big)} \end{align} when we use the channel $W_{Y|X}$ $n$ times and the coding rate is $R$ \cite{Arimoto2}. Therefore, we consider the following maximization; \begin{align} \max_{P_X} e^{\rho I_{\frac{1}{1-\rho}}(P_X,W_{Y|X})} =\max_{P_X} e^{\frac{\alpha-1}{\alpha} I_{\alpha}(P_X,W_{Y|X})} \label{AMP2} \end{align} with $\alpha=\frac{1}{1-\rho}>1$. In the following, we discuss the RHS of \eqref{AMP2} with $\alpha >1$. To apply our method, we consider another expression \eqref{MXP} of $I_{\alpha}(P_X,W_{Y|X})$.
Using \eqref{XCP}, where for $\alpha>1$ the second expression of \eqref{XCP} becomes an $\mathop{\rm argmin}$ because $\frac{1}{\alpha-1}>0$, we have \begin{align} \Big(\max_{P_X} e^{\frac{\alpha-1}{\alpha} I_{\alpha}(P_X,W_{Y|X})}\Big)^\alpha = \max_{P_X} \sum_{x\in {\cal X}}P_X(x)e^{ (\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})}. \label{APT2} \end{align} The maximization \eqref{APT2} can be solved by choosing $\Psi$ as \begin{align} \Psi_{\alpha,W_{Y|X}}[P_X](x):= - e^{(\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})}. \end{align} Since \eqref{XCP} guarantees that \begin{align} &\sum_{x\in {\cal X}}P_X(x) (\Psi_{\alpha,W_{Y|X}}[P_X](x) -\Psi_{\alpha,W_{Y|X}}[Q_X](x)) \nonumber \\ =& \sum_{x\in {\cal X}}P_X(x) \Big(-e^{ (\alpha-1)D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,P_X})} + e^{ (\alpha-1)D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,Q_X})}\Big) \ge 0, \end{align} the condition (A2) holds with ${\cal P}({\cal X})$. Similarly, the condition (A1) can be satisfied with sufficiently large $\gamma$. In this case, ${\cal F}_3$ is given as \begin{align} {\cal F}_{3,\alpha}[Q_X](x)=\frac{1}{\kappa_{\alpha,W_{Y|X}}[Q_X]}Q_X(x) \exp ( \frac{1}{\gamma} e^{(\alpha-1) D_\alpha( W_{Y|X=x} \| Q_{Y|\alpha,Q_X})}), \end{align} where the normalizing constant $\kappa_{\alpha,W_{Y|X}}[Q_X]$ is given as $\kappa_{\alpha,W_{Y|X}}[Q_X]=$\par\noindent $\sum_{x \in {\cal X}}Q_X(x) \exp ( \frac{1}{\gamma} e^{(\alpha-1) D_\alpha(W_{Y|X=x} \| Q_{Y|\alpha,Q_X})})$. Since ${\cal F}_{3,\alpha}[Q_X] \in {\cal M}_a$, $P_X^{(t+1)}$ is given as ${\cal F}_{3,\alpha}[P_X^{(t)}]$. \subsection{Wiretap channel capacity} \subsubsection{General case} Given a pair of a channel $W_{Y|X}$ from ${\cal X}$ to a legitimate user ${\cal Y}$ and a channel $W_{Z|X}$ from ${\cal X}$ to a malicious user ${\cal Z}$, the wiretap channel capacity is given as \cite{Wyner,CK79} \begin{align} C(W_{Y|X},W_{Z|X}):=\max_{P_{VX}} I(P_V, W_{Y|X}\cdot P_{X|V})-I(P_V, W_{Z|X}\cdot P_{X|V})\label{XAT} \end{align} with a sufficiently large discrete set ${\cal V}$. Recent papers showed that the above rate can be achieved even with strong security \cite{Csisz,H06,H11} and semantic security \cite{BTV,HM16}. Furthermore, the paper \cite[Appendix D]{HM16} showed the above result even when the output systems are general continuous systems including Gaussian channels. The wiretap capacity \eqref{XAT} can be calculated via the minimization; \begin{align} \min_{P_{VX}} -I(P_V, W_{Y|X}\cdot P_{X|V})+I(P_V, W_{Z|X}\cdot P_{X|V}).\label{XAT2} \end{align} Here, ${\cal V}$ is an additional discrete probability space. When we choose ${\cal M}_a$ and $\Psi$ as ${\cal P}({\cal X} \times {\cal V})$ and \begin{align} &\Psi_{W_{Y|X},W_{Z|X}}[P_{VX}](v,x)\nonumber \\ := &D(W_{Z|X}\cdot P_{X|V=v}\|W_{Z|X}\cdot P_{X} ) -D(W_{Y|X}\cdot P_{X|V=v}\|W_{Y|X}\cdot P_{X} ), \end{align} $-I(P_V, W_{Y|X}\cdot P_{X|V})+I(P_V, W_{Z|X}\cdot P_{X|V})$ coincides with ${\cal G}(P_{VX})$. Hence, the general theory in Section \ref{setup} can be applied to the minimization \eqref{XAT2}. In this case, it is difficult to clarify whether the conditions (A1) and (A2) hold in general. ${\cal F}_3$ is given as \begin{align} &{\cal F}_3[Q_{VX}](v,x)\nonumber\\ =&\frac{1}{\kappa_{W_{Y|X},W_{Z|X}}[Q_{VX}]}Q_{VX}(v,x) \exp \Big( \frac{1}{\gamma} \Big( D(W_{Y|X}\cdot Q_{X|V=v}\|W_{Y|X}\cdot Q_{X} )\nonumber\\ &-D(W_{Z|X}\cdot Q_{X|V=v}\|W_{Z|X}\cdot Q_{X} ) \Big) \Big), \end{align} where $\kappa_{W_{Y|X},W_{Z|X}}[Q_{VX}]$ is the normalizing constant. Since ${\cal F}_3[Q_{VX}] \in {\cal M}_a$, $P_{VX}^{(t+1)}$ is given as ${\cal F}_3[P_{VX}^{(t)}]$.
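The wiretap update can likewise be implemented directly when ${\cal V}$, ${\cal X}$, and the output alphabets are finite. The following is a schematic sketch of one ${\cal F}_3$ step; the function names and the interface are our own, and, as noted above, the conditions (A1) and (A2) are not guaranteed for this objective, so the iteration may only reach a local minimizer.
\begin{verbatim}
import numpy as np

def kl(p, q):
    # KL divergence on a finite alphabet; assumes q > 0 where p > 0
    mask = p > 0
    return float(np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask]))))

def wiretap_step(Q, Wy, Wz, gamma=1.0):
    # Q[v, x]: current joint distribution on V x X
    # Wy[x, y], Wz[x, z]: channels to legitimate user and eavesdropper
    Qx = Q.sum(axis=0)                 # marginal on X
    Py, Pz = Qx @ Wy, Qx @ Wz          # output marginals
    weight = np.zeros(Q.shape[0])
    for v in range(Q.shape[0]):
        pv = Q[v].sum()
        if pv == 0:
            continue
        Qx_v = Q[v] / pv               # conditional Q_{X|V=v}
        # Psi only depends on v, so exp(-Psi/gamma) rescales the row
        weight[v] = kl(Qx_v @ Wy, Py) - kl(Qx_v @ Wz, Pz)
    Qnew = Q * np.exp(weight / gamma)[:, None]
    return Qnew / Qnew.sum()           # normalizing constant kappa
\end{verbatim}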
\subsubsection{Degraded case}\label{S4-5-2} However, when there exists a channel $W_{Z|Y}$ from ${\cal Y}$ to ${\cal Z}$ such that $ W_{Z|Y} \cdot W_{Y|X}=W_{Z|X}$, i.e., the channel $W_{Z|X}$ is a degraded channel of $W_{Y|X}$, we can define the joint channel $W_{YZ|X}$ with the following conditional probability density function \begin{align} w_{YZ|X}(yz|x):= w_{Z|Y}(z|y)w_{Y|X}(y|x). \end{align} Then, the maximization \eqref{XAT} is simplified as \begin{align} C(W_{YZ|X}):=\max_{P_{X}} I(X;Y|Z)[P_X, W_{YZ|X}] \label{ZLO} \end{align} where the conditional mutual information is given as \begin{align} I(X;Y|Z)[P_X, W_{YZ|X}] := \sum_{x,z} P_{XZ}(x,z) D(P_{Y|X=x,Z=z}\| P_{Y|Z=z}), \label{XATE} \end{align} and the conditional distributions $P_{Y|XZ}$ and $P_{Y|Z}$ are defined from the joint distribution $W_{YZ|X}\times P_X$. To consider \eqref{ZLO}, we consider the following minimization with a general two-output channel $W_{YZ|X}$; \begin{align} \min_{P_{X}} - I(X;Y|Z)[P_X, W_{YZ|X}] \label{ZLO2}. \end{align} When we choose ${\cal M}_a$ and $\Psi$ as ${\cal P}({\cal X})$ and \begin{align} \Psi_{W_{YZ|X}}[P_{X}](x):= - \sum_{z} P_{Z|X=x}(z) D(P_{Y|X=x,Z=z}\| P_{Y|Z=z}), \end{align} $- I(X;Y|Z)[P_X, W_{YZ|X}]$ coincides with ${\cal G}(P_{X})$. Hence, the general theory in Section \ref{setup} can be applied to the minimization \eqref{ZLO2}. In this case, as shown in Subsection \ref{MXT}, the conditions (A1) with $\gamma=1$ and (A2) hold. ${\cal F}_3$ is given as \begin{align} &{\cal F}_3[Q_{X}](x)\nonumber\\ =&\frac{1}{\kappa_{W_{YZ|X}}[Q_{X}]}Q_{X}(x) \exp \Big( \frac{1}{\gamma} \Big( \sum_{z} P_{Z|X=x}(z) D(P_{Y|X=x,Z=z}\| P_{Y|Z=z}) \Big) \Big), \end{align} where the conditional distributions are induced by the joint distribution $W_{YZ|X}\times Q_X$ and $\kappa_{W_{YZ|X}}[Q_{X}]$ is the normalizing constant. Since ${\cal F}_3[Q_X] \in {\cal M}_a$, $P_X^{(t+1)}$ is given as ${\cal F}_3[P_X^{(t)}]$. The above algorithm with $\gamma=1$ coincides with that by \cite{Yasui}. \subsection{Capacities with cost constraint}\label{S4-4} Next, we consider the case when a cost constraint is imposed. Consider a function $f$ on ${\cal X}$ and the following constraint for a distribution $P_X \in {\cal P}({\cal X})$; \begin{align} P_X[f]=a.\label{COS} \end{align} We define ${\cal M}_a$ by imposing the condition \eqref{COS}. The capacity under the cost constraint is given as $\max_{P_X \in {\cal M}_a}I(P_X,W_{Y|X})$. That is, we need to solve the minimization $\min_{P_X \in {\cal M}_a}-I(P_X,W_{Y|X})$. In this case, the $(t+1)$-th distribution $P^{(t+1)}$ is given as $\Gamma_{{\cal M}_a}^m [{\cal F}_3[P_X^{(t)}]]$. Since $\Gamma_{{\cal M}_a}^m [{\cal F}_3[P_X^{(t)}]]$ cannot be calculated analytically, we can use Algorithm \ref{AL3} instead of Algorithm \ref{AL1}. Since the conditions (A1) with $\gamma=1$ and (A2) hold, Theorem \ref{TH8} guarantees the global convergence to the minimum in Algorithm \ref{AL3}. We can consider the cost constraint \eqref{COS} for the problems \eqref{APT} and \eqref{APT2}. In these cases, we have a similar modification by considering $\Gamma_{{\cal M}_a}^m [{\cal F}_3[P_X^{(t)}]]$. \section{em problem}\label{S5} We apply our algorithm to the problem setting of the em-algorithm \cite{Amari,Fujimoto,Allassonniere}, which is a generalization of Boltzmann machines \cite{Bol}. For this aim, we consider a pair of an exponential family ${\cal E}$ and a mixture family ${\cal M}_a$ on ${\cal X}$. We denote the $e$-projection to ${\cal E}$ of $P$ by $\Gamma_{\cal E}^{e}[P]$.
We consider the following minimization; \begin{align} &\min_{P\in {\cal M}_a}\min_{Q\in {\cal E}} D(P\|Q)= \min_{P\in {\cal M}_a} D(P\|\Gamma_{\cal E}^{e}[P]) \nonumber \\ =& \min_{P\in {\cal M}_a} \sum_{x \in {\cal X}}P(x) (\log P(x) - \log \Gamma_{\cal E}^{e}[P](x)).\label{AMR} \end{align} We choose the function $\Psi$ as \begin{align} \Psi_{\rm{em}}[P](x):= \log P(x) - \log \Gamma_{\cal E}^{e}[P](x),\label{ZSP} \end{align} and apply the discussion in Section \ref{setup}. Then, we have \begin{align} &\sum_{x \in {\cal X}} P^{0}(x) (\Psi_{\rm{em}}[P^{0}](x)- \Psi_{\rm{em}}[Q](x)) \nonumber \\ =&\sum_{x \in {\cal X}} P^{0}(x) \Big( (\log P^{0}(x) - \log \Gamma_{\cal E}^{e}[P^{0}](x)) - (\log Q(x) - \log \Gamma_{\cal E}^{e}[Q](x)) \Big) \nonumber \\ =& \sum_{x \in {\cal X}} P^{0}(x)(\log P^{0}(x)-\log Q(x)) \nonumber \\ &+ \sum_{x \in {\cal X}} P^{0}(x) \Big( \log \Gamma_{\cal E}^{e}[Q](x) - \log \Gamma_{\cal E}^{e}[P^{0}](x) \Big) \nonumber \\ =& D (P^{0}\| Q) + D(P^{0} \| \Gamma_{\cal E}^{e}[P^{0}]) -D(P^{0} \| \Gamma_{\cal E}^{e}[Q]) \nonumber \\ =& D (P^{0}\| Q) -D(\Gamma_{\cal E}^{e}[P^{0}] \| \Gamma_{\cal E}^{e}[Q]).\label{ZME} \end{align} The condition (A1) holds with $U(P^0,\infty)={\cal M}_a$ and $\gamma=1$. There is a possibility that the condition (A1) holds with a smaller $\gamma$. In addition, when the relation \begin{align} D (P^0\| Q) \ge D(\Gamma_{\cal E}^{e}[P^0] \| \Gamma_{\cal E}^{e}[Q]) \label{AMO} \end{align} holds for $Q \in U(P^0,\delta)$, the condition (A2) holds with $U(P^0,\delta)$. That is, if the condition \eqref{AMO} holds, Algorithm \ref{AL1} has global convergence to the minimizer. The condition \eqref{AMO} is similar to the condition given in \cite{Bregman-em}. In this case, ${\cal F}_3$ is given as \begin{align} {\cal F}_3[Q](x) =&\frac{1}{\kappa_{\rm{em}}[Q]}Q(x) \exp \Big( - \frac{1}{\gamma} (\log Q(x) - \log \Gamma_{\cal E}^{e}[Q](x))\Big)\nonumber \\ =&\frac{1}{\kappa_{\rm{em}}[Q]}Q(x)^{\frac{\gamma-1}{\gamma}} \Gamma_{\cal E}^{e}[Q](x)^{\frac{1}{\gamma}},\label{AMD} \end{align} where the normalizing constant $\kappa_{\rm{em}}[Q]$ is given as $\kappa_{\rm{em}}[Q]=\sum_{x \in {\cal X}} Q(x)^{\frac{\gamma-1}{\gamma}} \Gamma_{\cal E}^{e}[Q](x)^{\frac{1}{\gamma}}$. Since ${\cal F}_3[Q]$ does not necessarily belong to ${\cal M}_a$, $P^{(t+1)}$ is given as $\Gamma_{{\cal M}_a}^{m}[{\cal F}_3[P^{(t)}]]$. When $\gamma=1$, it coincides with the conventional em-algorithm \cite{Amari,Fujimoto,Allassonniere} because ${\cal F}_3[P^{(t)}]=\Gamma_{\cal E}^{e}[P^{(t)}]$. The above analysis suggests the possibility of choosing $\gamma$ smaller than $1$. That is, there is a possibility that a smaller $\gamma$ improves the conventional em-algorithm. In addition, we may use Algorithm \ref{AL3} instead of Algorithm \ref{AL1} when the calculation of the m-projection is difficult. \begin{lemma}\label{L9} When $\Psi_{\rm{em}}$ is given as \eqref{ZSP}, the condition (A4) holds. \end{lemma} Therefore, by combining Lemmas \ref{L6} and \ref{L9}, the assumptions of Theorem \ref{TH1} hold in the $\delta$-neighborhood of a local minimizer with sufficiently small $\delta>0$. That is, the convergence speed can be evaluated by Theorem \ref{TH1}. \begin{proof} The Pythagorean theorem guarantees \begin{align} &\sum_{x \in {\cal X}}P (x) \big( \log \Gamma_{\cal E}^{e}[Q](x)- \log \Gamma_{\cal E}^{e}[P](x) \big) \nonumber \\ =& D(P\| \Gamma_{\cal E}^{e}[P]) - D(P\| \Gamma_{\cal E}^{e}[Q]) = -D(\Gamma_{\cal E}^{e}[P]\| \Gamma_{\cal E}^{e}[Q]) \label{ZAE}.
\end{align} We introduce the parameterization $P_\eta \in {\cal M}_a$ with mixture parameter $\eta$, and denote $\eta(h,i):=(\eta(0)_1, \ldots, \eta(0)_{i-1},\eta(0)_i+h,\eta(0)_{i+1},\ldots, \eta(0)_k)$. Then, we have \begin{align} &\sum_{x \in {\cal X}}P_{\eta(0)} (x) \Big(\frac{\partial }{\partial \eta_i}\Psi[P_\eta](x)|_{\eta=\eta(0)}\Big) \nonumber\\ =& \sum_{x \in {\cal X}}P_{\eta(0)} (x) \Big( \lim_{h\to 0}\frac{\Psi[P_{\eta(h,i)}](x)-\Psi[P_{\eta(0)}](x)}{h} \Big) \nonumber\\ =& \sum_{x \in {\cal X}}P_{\eta(0)} (x) \Big( \lim_{h\to 0}\frac{ \log P_{\eta(h,i)}(x)-\log P_{\eta(0)}(x)}{h} \nonumber\\ &-\lim_{h\to 0}\frac{ \log \Gamma_{\cal E}^{e}[P_{\eta(h,i)}](x)- \log \Gamma_{\cal E}^{e}[P_{\eta(0)}](x)}{h} \Big) \nonumber\\ \stackrel{(a)}{=} & \sum_{x \in {\cal X}}P_{\eta(0)} (x) \Big( \lim_{h\to 0}\frac{ \log P_{\eta(h,i)}(x)-\log P_{\eta(0)}(x)}{h} \Big)\nonumber\\ &- \sum_{x \in {\cal X}}\Gamma_{\cal E}^{e}[P_{\eta(0)}] (x) \Big( \lim_{h\to 0} \frac{ \log \Gamma_{\cal E}^{e}[P_{\eta(h,i)}](x)- \log \Gamma_{\cal E}^{e}[P_{\eta(0)}](x)}{h} \Big) \nonumber\\ =& \sum_{x \in {\cal X}} \frac{\partial }{\partial \eta_i} P_\eta(x)|_{\eta=\eta(0)} - \sum_{x \in {\cal X}} \frac{\partial }{\partial \eta_i} \Gamma_{\cal E}^{e}[P_\eta](x)|_{\eta=\eta(0)} =0, \end{align} which implies the condition (A4). Here, $(a)$ follows from \eqref{ZAE}. \end{proof} \section{Commitment capacity}\label{S6} Using the same notations as Section \ref{S4}, we address bit commitment via a noisy channel $W_{Y|X}$. This problem setting has several versions. To achieve bit commitment, the papers \cite{BC1,CCDM,W-Protocols} consider interactive protocols with multiple rounds, where each round has one use of the given noisy channel $W_{Y|X}$ and free noiseless communication in both directions. They derived the commitment capacity, which is explained later. The proof is basically composed of two parts. One is the achievability part, which is often called the direct part and shows the existence of a code achieving the capacity. The other is the impossibility part, which is often called the converse part and shows the non-existence of a code exceeding the capacity. For the achievability part, they showed that the commitment capacity can be achieved with a non-interactive protocol, which does not use free noiseless communication during the multiple uses of the given noisy channel $W_{Y|X}$. However, as explained in \cite{HW22}, their proof of the impossibility part skips so many steps that it cannot be followed. Later, the paper \cite{BC3} showed the impossibility part only for non-interactive protocols by applying the wiretap channel. Recently, the paper \cite{H21} constructed a code to achieve the commitment capacity by using a specific type of list decoding. Further, the paper showed the achievability of the commitment capacity even in the quantum setting. In addition, the paper \cite{HW22} showed the impossibility part for interactive protocols by completing the above-mentioned proof. The proof in \cite{HW22} covers the impossibility part for a certain class even in the quantum setting. Given a distribution $P_X$, the Shannon entropy is given as \begin{align} H(X)_{P_X}:=-\sum_{x \in {\cal X}}P_X(x)\log P_X(x). \end{align} Given a joint distribution $P_{XY}$, the conditional entropy is defined as \begin{align} H(X|Y)_{P_{XY}}:= \int_{{\cal Y}} H(X)_{P_{X|Y=y}} p_Y(y) \mu(dy). \end{align} The commitment capacity is given as \begin{align} C_c(W_{Y|X}):=\max_{P_X} H(X|Y)_{W_{Y|X} \times P_X}.
\end{align} \subsection{Use of the em-algorithm} To calculate the commitment capacity, we consider the following mixture and exponential families; \begin{align} {\cal M}_a &:=\{W_{Y|X} \times P_X| P_X \in {\cal P}({\cal X})\} \label{CPA}\\ {\cal E} &:=\{ Q_Y \times P_{X,Uni} | Q_Y \in {\cal P}({\cal Y})\}, \end{align} where $P_{X,Uni}$ is the uniform distribution on ${\cal X}$. Since $\Gamma_{\cal E}^{e}[W_{Y|X} \times P_X]=(W_{Y|X} \cdot P_X) \times P_{X,Uni} $, the commitment capacity is rewritten as \begin{align} \log |{\cal X}|-C_c(W_{Y|X}) =&\min_{P_X} H(X)_{P_{X,Uni}} + H(Y)_{W_{Y|X} \cdot P_X} - H(X Y)_{W_{Y|X} \times P_X}\nonumber\\ =&\min_{P_X} D( W_{Y|X} \times P_X\| (W_{Y|X} \cdot P_X) \times P_{X,Uni} ) \nonumber\\ =&\min_{P_X} D( W_{Y|X} \times P_X\| \Gamma_{\cal E}^{e}[W_{Y|X} \times P_X] ).\label{MOP} \end{align} Hence, the minimization \eqref{MOP} is a special case of the minimization \eqref{AMR}. Since $ \Gamma_{\cal E}^{e}[W_{Y|X} \times Q_X] =(W_{Y|X} \cdot Q_X) \times P_{X,Uni} $, \begin{align} D(\Gamma_{\cal E}^{e}[W_{Y|X} \times P_X] \| \Gamma_{\cal E}^{e}[W_{Y|X} \times Q_X]) =D(W_{Y|X}\cdot P_X\|W_{Y|X}\cdot Q_X) \le D (P_X\| Q_X) = D( W_{Y|X} \times P_X\| W_{Y|X} \times Q_X) , \end{align} which yields the condition \eqref{AMO}. Hence, the global convergence is guaranteed. By applying \eqref{AMD}, ${\cal F}_3$ is calculated as \begin{align} &{\cal F}_3[W_{Y|X} \times Q_X](x,y)\nonumber\\ =&\frac{1}{\kappa_{W_{Y|X}}^{1}[Q_X]} w_{Y|X}(y|x)^{\frac{\gamma-1}{\gamma}} Q_X(x)^{\frac{\gamma-1}{\gamma}} (w_{Y|X} \cdot Q_X)(y)^{\frac{1}{\gamma}} P_{X,Uni}(x)^{\frac{1}{\gamma}},\label{AMD2} \end{align} where $\kappa_{W_{Y|X}}^{1}[Q_X]$ is the normalizer. Then, after a complicated calculation, the marginal distribution of its projection to ${\cal M}_a$ is given as \begin{align} &\int_{{\cal Y}} \Gamma_{{\cal M}_a}^{m}[{\cal F}_3[W_{Y|X} \times Q_X]](x,y)\mu(dy)\nonumber\\ =&\frac{1}{\kappa_{W_{Y|X}}^{2}[Q_X]} Q_X(x)^{1-\frac{1}{\gamma}} \exp ( -\frac{1}{\gamma} D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X)) ,\label{MKD} \end{align} where $\kappa_{W_{Y|X}}^{2}[Q_X]$ is the normalizer. In the algorithm, we update $P_X^{(t+1)}$ as $P_X^{(t+1)}(x):= \int_{{\cal Y}} \Gamma_{{\cal M}_a}^{m}[{\cal F}_3 [W_{Y|X} \times P_X^{(t)}]] (x,y)\mu(dy) $. \subsection{Direct Application} Since the update formula \eqref{MKD} requires a complicated calculation, we derive the same update rule by a simpler derivation as follows. The commitment capacity is rewritten as \begin{align} -C_c(W_{Y|X}) =&\min_{P_X} I(P_X,W_{Y|X})-H(X)_{P_{X}} \nonumber\\ =&\min_{P_X} \sum_{x\in {\cal X}}P_X(x) (D(W_{Y|X=x}\| W_{Y|X}\cdot P_X )+\log P_X(x)). \end{align} We choose $\Psi$ as \begin{align} \Psi_{c,W_{Y|X}}[P_X](x):= D(W_{Y|X=x}\| W_{Y|X}\cdot P_X )+\log P_X(x). \end{align} Then, we have \begin{align} &\sum_{x \in {\cal X}}P_X(x)( \Psi[P_X](x)- \Psi[Q_X](x))\nonumber\\ =& D(P_X\|Q_X)-D(W_{Y|X}\cdot P_X\|W_{Y|X}\cdot Q_X) \ge 0 \end{align} and \begin{align} D(P_X\|Q_X) \ge D(P_X\|Q_X)- D(W_{Y|X}\cdot P_X\|W_{Y|X}\cdot Q_X) . \end{align} Since the condition (A1) with $\gamma=1$ and the condition (A2) hold, Algorithm \ref{AL1} converges with $\gamma =1$.
In this case, ${\cal F}_3$ is given as \begin{align} &{\cal F}_3[Q_X](x) \nonumber\\ &=\frac{1}{\kappa_{W_{Y|X}}^3[Q_X]}Q_X(x) \exp ( - \frac{1}{\gamma} (\log Q_X(x)+ D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X))) \nonumber\\ &=\frac{1}{\kappa_{W_{Y|X}}^3[Q_X]}Q_X(x)^{1-\frac{1}{\gamma}} \exp ( -\frac{1}{\gamma} D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X)) , \end{align} where the normalizing constant $\kappa_{W_{Y|X}}^3[Q_X]$ is given as $\kappa_{W_{Y|X}}^3[Q_X]:=$\par\noindent $\sum_{x \in {\cal X}} Q_X(x)^{1-\frac{1}{\gamma}} \exp ( - \frac{1}{\gamma} D(W_{Y|X=x}\| W_{Y|X} \cdot Q_X))$. Since ${\cal F}_3[Q_X] \in {\cal M}_a$, $P_X^{(t+1)}$ is given as ${\cal F}_3[P_X^{(t)}]$. \section{Reverse em problem}\label{S7} \subsection{General problem description} In this section, given a pair of an exponential family ${\cal E}$ and a mixture family ${\cal M}_a$ on ${\cal X}$, we consider the following maximization; \begin{align} &\max_{P\in {\cal M}_a}\min_{Q\in {\cal E}} D(P\|Q)= \max_{P\in {\cal M}_a} D(P\|\Gamma_{\cal E}^{e}[P]) \nonumber \\ =& \max_{P\in {\cal M}_a} \sum_{x \in {\cal X}}P(x) (\log P(x) - \log \Gamma_{\cal E}^{e}[P](x)),\label{AMR2} \end{align} while Section \ref{S5} considers the minimization of the same value. When ${\cal M}_a$ is given as \eqref{CPA} and ${\cal E}$ is given as ${\cal P}({\cal X})\times {\cal P}({\cal Y})$, this problem coincides with the channel capacity \eqref{CMD}. This problem was first studied for the channel capacity in \cite{Shoji}, and was discussed in a general form in \cite{reverse}. To discuss this problem, we choose the function $\Psi$ as $\Psi_{\rm{rem}}:=-\Psi_{\rm{em}}$, and apply the discussion in Section \ref{setup}. Due to \eqref{ZME}, \eqref{BK1} in the condition (A1) is written as \begin{align} (\gamma+1) D (P^0\| Q) \ge D(\Gamma_{\cal E}^{e}[P^0] \| \Gamma_{\cal E}^{e}[Q]), \end{align} and \eqref{XMZ} in the condition (A2) is written as \begin{align} D(\Gamma_{\cal E}^{e}[P^0] \| \Gamma_{\cal E}^{e}[Q]) \ge D (P^0\| Q) . \end{align} Further, due to Lemma \ref{L9}, the condition (A4) holds. \subsection{Application to wiretap channel}\label{MXT} Now, we apply this problem setting to the wiretap channel in the degraded case discussed in Subsection \ref{S4-5-2}. We choose ${\cal M}_a$ as $\{ W_{YZ|X}\times P_X | P_X\in {\cal P}({\cal X}) \}$ and ${\cal E}$ as the set of distributions with the Markov chain condition $X-Z-Y$ \cite{Toyota}. Then, the conditional mutual information $I(X;Y|Z)[P_X,W_{YZ|X}]$ is given as $D( W_{YZ|X}\times P_X\| \Gamma_{{\cal E}}^e (W_{YZ|X}\times P_X))$. In this application, we have \begin{align} D_{\Psi_{\rm{rem}}}(W_{YZ|X}\times P_X\| W_{YZ|X}\times Q_X)&= D_{\Psi_{W_{YZ|X}}}( P_X \| Q_X) \\ D( W_{YZ|X}\times P_X\| W_{YZ|X}\times Q_X) &=D( P_X \| Q_X). \end{align} To check the conditions (A1) and (A2) for $\Psi_{W_{YZ|X}}$, it is sufficient to check them for $\Psi_{\rm{rem}}$ in this application.
Since we have \begin{align} & D( \Gamma_{{\cal E}}^e (W_{YZ|X}\times P_X)\| \Gamma_{{\cal E}}^e (W_{YZ|X}\times Q_X)) \nonumber\\ =& D( W_{Z|X} \times P_{X} \| W_{Z|X} \times Q_{X}) + D( W_{YZ|X}\cdot P_X \| W_{YZ|X}\cdot Q_X)\nonumber\\ &- D( W_{Z|X}\cdot P_X \| W_{Z|X}\cdot Q_X) \nonumber\\ =& D( P_X \| Q_X) + D( W_{YZ|X}\cdot P_X \| W_{YZ|X}\cdot Q_X) - D( W_{Z|X}\cdot P_X \| W_{Z|X}\cdot Q_X) \nonumber\\ \le & 2D( P_X \| Q_X), \end{align} the LHS of \eqref{BK1} in the condition (A1) is written as \begin{align} & \gamma D (P_X^0\| Q_X) - D( W_{YZ|X}\cdot P_X^0 \| W_{YZ|X}\cdot Q_X) + D( W_{Z|X}\cdot P_X^0 \| W_{Z|X}\cdot Q_X) \nonumber\\ \ge & \gamma D (P_X^0\| Q_X) - D( P_X^0 \| Q_X). \end{align} It is nonnegative when $\gamma \ge 1$. Also, the difference between the LHS and the RHS of \eqref{XMZ} in the condition (A2) is written as \begin{align} D( W_{YZ|X}\cdot P_X \| W_{YZ|X}\cdot Q_X) - D( W_{Z|X}\cdot P_X \| W_{Z|X}\cdot Q_X) \ge 0. \end{align} Hence, the conditions (A1) and (A2) hold with $\gamma \ge 1$. \section{Conclusion} We have proposed iterative algorithms with an acceleration parameter for a general minimization problem over a mixture family. For these algorithms, we have shown their convergence theorems in various forms, one of which covers the case with approximated iterations. Then, we have applied our algorithms to various problem settings, including the em-algorithm and several information-theoretic problem settings. There are several interesting future problems. The first direction is the numerical simulation of our algorithm. In particular, it is unclear how the acceleration parameter $\gamma$ improves the conventional em algorithm for the minimization of the divergence between a mixture family and an exponential family because this setting generally has several local minimizers. Therefore, it is useful to demonstrate this improvement in typical examples for the em algorithm. The second direction is the evaluation of the convergence speed of Algorithm \ref{AL4}, because we could not derive it. The third direction is finding various applications of our methods. Although this paper studied several examples, more useful examples of our algorithm are needed. The fourth direction is the extension of our results. A typical extension is the extension to the quantum setting \cite{Holevo,SW,hayashi}. As a further extension, it is an interesting topic to extend our result to the setting with Bregman divergence. Recently, the Bregman proximal gradient algorithm has been studied for the minimization of a convex function \cite{CT93,T97,ZYS}. Since this algorithm uses the Bregman divergence, it might have an interesting relation with the above extension of our algorithm. Therefore, it is an interesting study to investigate this relation. \section*{Acknowledgments} The author was supported in part by the National Natural Science Foundation of China (Grant No. 62171212) and Guangdong Provincial Key Laboratory (Grant No. 2019B121203002). The author is very grateful to Mr. Shoji Toyota for helpful discussions. In addition, he pointed out that the secrecy capacity can be written as the reverse em algorithm in a similar way to the channel capacity \cite{Toyota} under the degraded condition. \section*{Data availability} Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
\section{Discussion} In this section, we discuss some algorithmic design choices and mention potential limitations of the proposed method. First, the combination of time-domain and frequency-domain information is extensively studied in the field of multi-view learning \cite{zhao2017multi} and its applications. One approach is to simply concatenate separately learned TD and FD features, e.g. \cite{yuan2017multi}. Another approach is to find a joint representation, which needs to take both views into account in an effective way. This can for instance be achieved using adaptive gradient blending \cite{phan2020xsleepnet}. In the context of CPD, it is however a priori unclear how to optimize this joint representation during training. We therefore choose to train the TD and FD autoencoders separately and use a CPD-tailored data-driven weighted concatenation to fuse both views into one representation. From Table \ref{tab:performance}, it is clear that the AUC of TIRE (i.e. with TD and FD combined) is in general only slightly lower than the maximum of the AUCs of TIRE-TD and TIRE-FD, illustrating the good performance of our fusion approach. Second, in this paper we focused on time series with only a few channels. In this setting, we showed that the latent dimension of the autoencoder has little influence on the performance. Our method deliberately targets a lossy reconstruction due to a compressed representation in order to only learn the most important time-invariant properties of the time series segment. For high-dimensional time series data, e.g. supervisory control and data acquisition (SCADA) or electroencephalography (EEG) data, the choice of latent dimension might need further investigation. Alternatively, relevant channels can be selected using an application-specific method, e.g. \cite{bertrand2020}. We demonstrated the performance of TIRE using the AUC, but practitioners need to choose a suitable value of $\tau$ (cf. Section \ref{sec:summary}) in order to use the method. As $\tau$ critically depends on domain knowledge and the needs of the practitioner (e.g. their willingness to make a type I, resp. type II, error), we do not provide explicit guidelines. The tuning of $\tau$ can be facilitated if some prior knowledge is available, e.g. when part of the data is labelled or when an estimate of the number of change points is available. In case such information is not available and in case of doubt, we advise underestimating $\tau$, as our proposed post-processing procedure effectively reduces the number of false positives. It is also worth noting that TIRE can be interpreted as a nonlinear parametric CPD method that learns the relevant parameters from the data. Whereas classical parametric methods are often able to provide an (asymptotically correct) significance level for change point probabilities \cite{davis1995testing, shao2010testing, Cheng2020OnDetection,Truong2020SelectiveMethods}, the interpretation of our change point score is rather limited. These theoretical guarantees for classical parametric methods however only hold under very specific assumptions on the data distribution, which are often not satisfied when real-life data is used. Finally, we showed that the use of filters to smoothen both the features themselves \eqref{eq:smooth-features} and the dissimilarity measure \eqref{eq:matched_filter} generally leads to a significant improvement in AUC (see Table \ref{tab:postprocessing}).
Care should however be taken when the peaks in the unfiltered dissimilarity measure are either skewed or very close to each other. In the first case, the peak location might shift, leading to a false negative when the toleration distance is set too small. In the second case, the two peaks might either be joined into one peak, or one of the two peaks will have a very low prominence-based change point score. Given the good performance of TIRE (Table \ref{tab:performance}), it is however clear that these are only minor concerns. \section{Conclusion} We have proposed a novel distribution-free change point detection method based on autoencoders that learn a partially time-invariant representation that complies with the needs of CPD. Change points are calculated using a dissimilarity measure based on the Euclidean distance between the features learned from consecutive windows. We have mitigated the effect of false positive detections by proposing a postprocessing procedure using a matched filter and a prominence-based change point score. Furthermore, we have explicitly focused on non-iid time series by including temporally localized spectral information in the input of the autoencoder. The resulting method is very flexible, as it allows the user to indicate whether change points should be sought in the frequency domain, time domain or both. Examples of change points that can be detected are abrupt changes in the slope, mean, variance, autocorrelation function and frequency spectrum. Finally, we have shown that the performance of TIRE is consistently superior or highly competitive compared to baseline methods on benchmark data sets. A sensitivity analysis reveals that this good performance does not critically depend on the window size, nor on the latent dimension of the autoencoder. This robustness, together with the lack of distributional assumptions, makes TIRE an easy-to-use change point detection method, whilst still offering a great deal of flexibility. \subsubsection{Simulated data} We consider the one-dimensional autoregressive (AR) model $y(t) = a_1 y(t-1) + a_2 y(t-2) + \epsilon_t $ where $\epsilon_t\sim\mathcal{N}(\mu_t,\sigma_t^2)$ and $y(1)=y(2)=0$. We generate 50 random change points $t_n$ with $t_0=0$, $t_n=t_{n-1}+\lfloor\tau_n\rfloor$ and $\tau_n\sim\mathcal{N}(100,10)$. Following the parameter choices of \cite{Kawahara2012SequentialEstimation, Liu2013Change-pointEstimation, Chang2019KernelModels, Takeuchi2006ASeries}, we create the following data sets, each consisting of ten randomly generated time series. \textbf{Jumping mean (JM)}. For this data set, let $a_1=0.6$, $a_2=-0.5$ and $\sigma_t=1.5$. We set the noise mean as \begin{equation} \mu_t = \begin{cases}0 & 1\leq t\leq t_1 \\ \mu_{t_{n-1}}+n/16 & t_{n-1}+1\leq t \leq t_n. \end{cases} \end{equation} \textbf{Scaling variance (SV)}. For this data set, let $a_1=0.6$, $a_2=-0.5$ and $\mu_t=0$. We set the noise standard deviation as \begin{equation} \sigma_t = \begin{cases}1 & t_{n-1}+1\leq t \leq t_n \text{ and } n \text{ odd} \\ \ln(e+n/4) & t_{n-1}+1\leq t \leq t_n \text{ and } n \text{ even}.\end{cases} \end{equation} \textbf{Changing coefficients (CC)}. We set $a_2=0$, $\mu_t=0$ and $\sigma_t=1.5$. To take the burn-in time into account, we set $\tau_n\sim\mathcal{N}(1000,100)$. For every segment, the coefficient $a_1$ is alternately sampled from $\mathcal{U}([0,0.5])$ and $\mathcal{U}([0.8,0.95])$, leading to clear differences in autocorrelation and frequency content between consecutive segments.
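As an illustration of these generating mechanisms, the following is a minimal sketch for the JM case. The function name, the number of segments, and the reading of the second argument of $\mathcal{N}(100,10)$ as a standard deviation are our own assumptions and may differ from the code used to produce the reported results.
\begin{verbatim}
import numpy as np

def jumping_mean_series(n_segments=49, seed=0):
    rng = np.random.default_rng(seed)
    # segment lengths: floor(tau_n), tau_n ~ N(100, 10)
    lengths = np.floor(rng.normal(100, 10, n_segments)).astype(int)
    mu, mus = 0.0, []
    for n, length in enumerate(lengths, start=1):
        if n > 1:
            mu += n / 16          # jump of the noise mean
        mus.extend([mu] * length)
    y = np.zeros(len(mus))
    for t in range(2, len(y)):    # y(1) = y(2) = 0
        # AR(2) recursion with a1 = 0.6, a2 = -0.5, sigma = 1.5
        y[t] = 0.6 * y[t-1] - 0.5 * y[t-2] + rng.normal(mus[t], 1.5)
    return y
\end{verbatim}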
\textbf{Gaussian mixtures (GM)}. Here we abandon the AR model and instead simulate a piecewise iid sequence alternately sampled from the Gaussian mixtures $0.5\mathcal{N}(-1,0.5^2)+0.5\mathcal{N}(1,0.5^2)$ and $0.8\mathcal{N}(-1,1.0^2)+0.2\mathcal{N}(1,0.1^2)$. Change points are generated using the same mechanism as for JM and SV. \subsubsection{Real-life data sets} \textbf{Bee dance} \cite{Oh2008LearningSystems} is an often-used data set to evaluate CPD algorithms \cite{Xuan2007ModelingSeries, Cheng2020OnDetection, Chang2019KernelModels, Turner2011GaussianDetection, Burg2020AnAlgorithms}. It consists of six three-dimensional time series of the bee's position (location in the 2D plane and angle differences) while it performs a three-stage waggle dance, which is of interest to ethnologists. \textbf{HASC-2011} is a subset of the HASC Challenge 2011 dataset \cite{Kawaguchi2011HASC2011corpus:Recognition}, which provides human activity data from portable three-axis accelerometers. The six activities carried out are staying still, walking, jogging, skipping, and taking the stairs up or down. Following respectively \cite{cheng2020optimal} and \cite{Liu2013Change-pointEstimation}, we use the data from person 671 and convert the data to a 1D time series by taking the $l^2$-norm of the three-dimensional samples. Human activity recognition data is commonly used in CPD literature \cite{cheng2020optimal, Cheng2020OnDetection, Chang2019KernelModels, Kawaguchi2011HASC2011corpus:Recognition, Kawahara2012SequentialEstimation, Liu2013Change-pointEstimation,M-stat}. \textbf{Well log} \cite{389767} consists of nuclear magnetic resonance measurements taken from a drill while drilling a well. Changes in the mean of the time series correspond to changes in rock stratification; outliers should be ignored \cite{Knoblauch2018Doubly-divergences}. Other results on this data set in the context of CPD evaluation include \cite{Adams2007BayesianDetection, Turner2011GaussianDetection, 389767, doi:10.1111/1467-9868.00421, Burg2020AnAlgorithms, Knoblauch2018Doubly-divergences}. \begin{table}[htbp] \caption{Overview of data sets. For data sets consisting of multiple time series, mean and standard deviation are reported. Q10, Q50 and Q90 denote the 10\%, 50\% and 90\% quantile, resp. } \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & & & \multicolumn{3}{c|}{\textbf{CP distances}}\\ \textbf{Data set} & \textbf{Length}& \textbf{\#series}& \textbf{\#CPs} & \textbf{Q10}& \textbf{Q50}& \textbf{Q90}\\ \hline JM, SV, GM & $4900\pm22$ & 10 & 48 & 96& 100& 104\\ CC & $49000\pm70$ & 10 & 48 & 987& 1000& 1013\\ Bee dance & $827\pm202$ & 6 & $20\pm 4$ & 28& 39& 56\\ HASC-2011 & $39397$ & $1$& $39$ & 69& 427& 2509\\ Well log & $4050$ & $1$ & $9$ & 55& 170& 390\\ \hline \end{tabular} \label{tab:datasets} \end{center} \end{table}
\section{Experiments}\label{sec:experiments-setup} \subsection{Evaluation measure} In our setting, the goal of a CPD algorithm is to identify the location of change points as accurately as possible. Given a toleration distance $\delta$, we say that a ground-truth change point $a$ is correctly detected by a detection alarm $b$ if the following three conditions are satisfied \cite{Lee2018TimeLearning}: \begin{enumerate} \item No other ground-truth change point is closer to $b$ than $a$. \item The time distance between $a$ and $b$ does not exceed the toleration distance, i.e. $|a-b|\leq\delta$. \item Every detection alarm can only contribute to the correct detection of at most one ground-truth change point. \end{enumerate} To evaluate the performance of our method, we will construct receiver operating characteristic (ROC) curves and use the area under this curve (AUC) as a performance metric, as is common practice. Following \cite{Kawahara2012SequentialEstimation, Liu2013Change-pointEstimation, Chang2019KernelModels, Lee2018TimeLearning}, we define the true positive rate (TPR) and false positive rate (FPR) of our detection algorithm as \begin{equation} \text{TPR} = \frac{N_{\text{CR}}}{N_{\text{GT}}} \quad \text{and} \quad \text{FPR} = \frac{N_{\text{AL}}-N_{\text{CR}}}{N_{\text{AL}}}, \end{equation} where $N_{\text{GT}}$ denotes the number of ground-truth change points, $N_{\text{AL}}$ denotes the number of all detection alarms issued by the algorithm and $N_{\text{CR}}$ is the number of times a ground-truth change point is correctly detected. We obtain the ROC curve by varying the detection threshold $\tau$. Unlike in the binary classification setting, the ROC curve is not necessarily monotonically increasing, as the FPR need not be a monotonic function of $\tau$. Nevertheless, it still holds that $0\leq \text{AUC} \leq 1$. Moreover, note that a TPR of $1.0$ can be obtained by setting the detection threshold to zero, $\tau=0$ (i.e. all time stamps are detection alarms), though the FPR will always be strictly smaller than $1.0$ for $\tau=0$ when at least one change point is present. We therefore extend the ROC curve by manually adding the point $(\text{FPR},\text{TPR})=(1.0,1.0)$. This ensures that a perfect performance corresponds to an AUC of $1$. \subsection{Data sets}\label{sec:datasets} \input{datasets} \subsection{Parameter settings and baseline methods}\label{sec:parametersetting} For TIRE, we report the results for two different parameter settings. Parameter setting \textit{a} corresponds to the case without instantaneous features: in both the time and frequency domain the autoencoder learns only 1 (time-invariant) feature (i.e. $h^{\text{TD}}=s^{\text{TD}}=h^{\text{FD}}=s^{\text{FD}}=1$). Furthermore we set $K=2$, $\lambda^{\text{TD}}=1$ and $\lambda^{\text{FD}}=1$. Parameter setting \textit{b} corresponds to the case with 1 instantaneous and 2 time-invariant features in the time domain (i.e.
$h^{\text{TD}}=3$, $s^{\text{TD}}=2$) and furthermore we set $h^{\text{FD}}=s^{\text{FD}}=1$, $K=2$, $\lambda^{\text{TD}}=1$ and $\lambda^{\text{FD}}=1$. For TIRE-TD we set $\alpha=1$ and $\beta=0$ in \eqref{eq:combine_shared}, and vice versa for TIRE-FD. For the combined approach, we set $\alpha$ and $\beta$ following \eqref{eq:alpha-beta}. We train all networks for 200 epochs using the Adam optimizer \cite{kingma2014adam} with default settings. For both parameter settings, we choose window sizes and toleration distances based on domain knowledge and sampling frequency. We set $N=20$ and $\delta=15$ for JM, SV and GM; $N=200$ and $\delta=150$ for CC; $N=10$ and $\delta=15$ for bee dance; $N=100$ and $\delta=300$ for HASC-2011 and $N=75$ and $\delta=50$ for well log. The influence of these parameter settings will be discussed in Section \ref{sec:sensitivity}. In terms of postprocessing, we use a matched filter and calculate our proposed prominence score (cf. Section \ref{sec:method}). The advantageous effect of this postprocessing stage is analyzed in Section \ref{sec:postprocessing}. In order to obtain a fair comparison, we also apply these postprocessing steps to all baseline methods described below that do not explicitly define such a procedure. The first baseline method we use is the \textbf{generalized likelihood ratio} (GLR) procedure \cite{brandt, appel_brandt_1983}, which has been shown to perform well at detecting changes in the autocorrelation function or the frequency spectrum \cite{Appel1984AAlgorithms}. A conceptually similar method is described in \cite{davis1995testing}. We use a sliding window approach, where an AR(2)-model is fitted to every pair of neighbouring windows as well as to their union. A generalized log-likelihood ratio is used as dissimilarity measure. For a fair comparison, we use the same window sizes and postprocessing steps as for TIRE. Second, we consider a density-ratio estimation method called \textbf{relative unconstrained least-squares importance fitting} (RuLSIF) that has been applied to CPD \cite{Liu2013Change-pointEstimation}. As with the closely related uLSIF \cite{Kawahara2012SequentialEstimation}, the idea is to estimate and compare the density ratio of two neighbouring windows instead of their individual densities. Because the validation data sets in \cite{Liu2013Change-pointEstimation} largely overlap with ours, we adopt the same parameter choices and postprocessing steps as described in the original paper. Next, \textbf{kernel learning CPD} (KL-CPD) \cite{Chang2019KernelModels} is a recently proposed kernel learning framework for time series CPD that optimizes a lower bound of the test power via an auxiliary generative model. Features are learned using a recurrent neural network and the dissimilarity measure is based on the maximum mean discrepancy. Given the large overlap in benchmark data sets, we use the original default parameter settings of \cite{Chang2019KernelModels} without adaptation (e.g. a window size of 25). We train the networks for 200 epochs, as longer training did not improve the results. For a fair comparison, we use the same postprocessing steps as for TIRE, as none were proposed in \cite{Chang2019KernelModels}. Finally, we compare with the \textbf{autoencoder-based breakpoint detection procedure} (ABD) \cite{Lee2018TimeLearning}. ABD only uses time-domain information and does not include any regularization to promote time-invariant features. We set parameters using the parameter guidelines in the original paper.
This leads to a window size of 96 for JM, SV and GM; 995 for CC; 26 for bee dance; 158 for HASC-2011 and 155 for well log. \section{Results}\label{sec:experiments-results} \subsection{Performance results} In Table \ref{tab:performance}, the performances of all versions of TIRE and the baseline methods are listed. For all data sets, we report the mean AUC and its standard error. All data sets, methods and abbreviations are described in Section \ref{sec:parametersetting}. The highest mean AUC for each data set is shown in bold. In the following, we discuss some important observations. The GLR procedure gives very good results on the simulated data sets, but its performance degrades on the real-life data sets. This confirms the common observation that the performance of model-based CPD procedures heavily relies on how well the actual data can be described by the chosen model. In this case, both the simulated data and the GLR procedure are based on a second-order autoregressive model, which is why GLR performs well on this data. RuLSIF and KL-CPD do not perform well on data sets in which the change points manifest themselves in the frequency domain, since they do not leverage the sequential nature of the time series data, i.e. they assume the data samples to be iid. Note that the AUC values for KL-CPD differ from those in \cite{Chang2019KernelModels}, as CPD is interpreted there as a binary classification problem. Next, ABD generally does not give good results, which can be explained by ABD's inability to detect changes in the frequency domain and the often noisy features (cf. Figure \ref{fig:representation}). In addition, ABD's \textit{normalized} dissimilarity measure (eq. (10) in \cite{Lee2018TimeLearning}), given by \begin{equation} \mathcal{D}_t^{\text{ABD}} = \norm{\mathbf{h}_t- \mathbf{h}_{t+N}}_2/\sqrt{\norm{\mathbf{h}_t}_2\cdot\norm{\mathbf{h}_{t+N}}_2}, \end{equation} where $\mathbf{h}_t$ is the vector of learned time-domain features, is not invariant to a shift of the features (i.e. adding a constant to all features); it even diverges when the norm of one of the feature vectors vanishes, which is not reasonable. For all data sets and both parameter settings \textit{a} and \textit{b}, either TIRE-TD or TIRE-FD outperforms (almost) all baseline methods or has an AUC higher than $0.90$. In many real-life cases, it is a priori clear whether TD (e.g. well log) or FD (e.g. HAR data, audio, \ldots) information should be used. Moreover, our framework for combining the time-invariant features from the time and frequency domain still gives consistently good results even when no change point information is present in one of the two domains. This means that the combined TD-FD approach can always be selected as a safe choice when it is unclear in which domain the change points mainly manifest themselves. Finally, the different parameter settings seem to have no significant influence on the performance of TIRE. The sensitivity of the proposed method to parameter choices will be further discussed in Section \ref{sec:sensitivity}. \begin{table*} \caption{Comparison of the AUC of the proposed Time-Invariant Representation CPD methods (TIRE) with baseline methods. Data set abbreviations are defined in Section \ref{sec:datasets}.
} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline &\textbf{JM} & \textbf{SV}& \textbf{CC}& \textbf{GM} &\textbf{Bee dance} & \textbf{HASC-2011}&\textbf{Well log} & \textbf{Average}\\ \hline GLR \cite{brandt, appel_brandt_1983} &$0.73 \pm 0.02$ & $0.81 \pm 0.02$ & $\textbf{1.00} \pm 0.01$ & $\textbf{0.989} \pm 0.004$ & $0.55 \pm 0.06$ & $0.6431$ & $0.2109$ & $0.71 \pm 0.01$ \\ RuLSIF \cite{Liu2013Change-pointEstimation} &$0.708 \pm 0.008$ & $0.65 \pm 0.02$ & $0.36 \pm 0.02$ & $0.874 \pm 0.007$ & $0.47 \pm 0.06$ & $0.3162$ & $0.798$ & $0.597 \pm 0.009$ \\ KL-CPD \cite{Chang2019KernelModels} &$0.872 \pm 0.007$ & $0.23 \pm 0.02$ & $0.11 \pm 0.01$ & $0.84 \pm 0.07$ & $0.56 \pm 0.07$ & $0.343$ & $0.4247$ & $0.48 \pm 0.01$ \\ ABD \cite{Lee2018TimeLearning} &$0.22 \pm 0.02$ & $0.17 \pm 0.02$ & $0.08 \pm 0.02$ & $0.18 \pm 0.02$ & $0.20 \pm 0.04$ & $0.2487$ & $0.477$ & $0.224 \pm 0.008$ \\ \hline TIRE-TD-\textit{a}&$0.86 \pm 0.01$ & $0.25 \pm 0.01$ & $0.26 \pm 0.01$ & $0.958 \pm 0.009$ & $0.36 \pm 0.05$ & $0.4166$ & $0.8002$ & $0.558 \pm 0.007$ \\ TIRE-FD-\textit{a}&$0.86 \pm 0.01$ & $\textbf{0.85} \pm 0.01$ & $0.96 \pm 0.01$ & $0.83 \pm 0.04$ & $\textbf{0.70} \pm 0.10$ & $\textbf{0.6504}$ & $0.6278$ & $\textbf{0.78} \pm 0.02$ \\ TIRE-\textit{a}&$0.86 \pm 0.01$ & $\textbf{0.85} \pm 0.01$ & $0.74 \pm 0.05$ & $0.92 \pm 0.02$ & $0.65 \pm 0.09$ & $0.6172$ & $0.7656$ & $0.77 \pm 0.02$ \\ \hline TIRE-TD-\textit{b}& $\mathbf{0.882} \pm 0.009$ & $0.26 \pm 0.02$ & $0.26 \pm 0.02$ & $0.965 \pm 0.006$ & $0.42 \pm 0.06$ & $0.4284$ & $\textbf{0.8151}$ & $0.58 \pm 0.01$ \\ TIRE-FD-\textit{b}&$0.86 \pm 0.01$ & $0.84 \pm 0.02$ & $0.95 \pm 0.02$ & $0.74 \pm 0.03$ & $0.69 \pm 0.10$ & $0.6261$ & $0.200$ & $0.70 \pm 0.02$ \\ TIRE-\textit{b}&$0.877 \pm 0.009$ & $0.83 \pm 0.02$ & $0.76 \pm 0.05$ & $0.89 \pm 0.02$ & $0.60 \pm 0.09$ & $0.6258$ & $0.8134$ & $0.77 \pm 0.01$ \\ \hline \end{tabular} \label{tab:performance} \end{center} \end{table*} \subsection{Insight into encoded features and reconstruction}\label{sec:insight} To gain insight into the working of the TIRE method, we investigate what the (partially) time-invariant representation and the corresponding reconstructions look like. We do this by conducting a case study on the jumping mean and bee dance data sets. \begin{figure} \centering \includegraphics[width=\columnwidth]{representation.pdf} \caption{Example of the three-dimensional learned representation on a part of the jumping mean data set for ABD (left) and TIRE-TD (right) with two time-invariant features (in red) and one instantaneous feature (in green). The features were vertically shifted (but not rescaled) for clarity. Blue vertical lines indicate the locations of ground-truth change points. } \label{fig:representation} \end{figure} First, we demonstrate the effect of our proposed penalty in the autoencoder loss function \eqref{eq:training-loss3}. In Figure \ref{fig:representation} we show the non-smoothed encoded features (i.e. without applying \eqref{eq:smooth-features}) for a part of the jumping mean data set for both ABD and TIRE-TD. For both methods, we use three features, of which two are time-invariant in the case of TIRE. Other parameter settings are as in parameter setting \textit{b} of Section \ref{sec:parametersetting}. Whereas the features learned by ABD are highly variable and noisy, the time-invariant features of TIRE-TD are approximately constant within each segment.
For TIRE, the only significant variations in the features occur near the ground-truth change points. These observations are exactly in line with the intention of our proposed loss function. \begin{figure} \centering \includegraphics[width=\columnwidth]{reconstructions.pdf} \caption{Examples of time-domain and frequency-domain windows (dashed lines) and their reconstructions by the autoencoder used in our proposed method (full lines). In the jumping mean data set, the change points consist of abrupt changes in mean. For bee dance, the goal is to detect abrupt changes in slope (upper right) and amplitude (lower right). } \label{fig:reconstruction} \end{figure} Second, we conduct a case study on the reconstruction of both TD and FD windows. Since the number of features we propose to use is very small, these reconstructions might be lossy and deviate from the original windows. We train TD and FD autoencoders with only one (time-invariant) feature following parameter setting \textit{a} (cf. Section \ref{sec:parametersetting}) for the jumping mean and bee dance data. We select four distinct windows and their reconstructions for each data set. The results are shown in Figure \ref{fig:reconstruction} in different colours. In the case of the jumping mean data set, the autoencoder unsurprisingly reconstructs the mean of each interval, ignoring all noise. In the frequency domain, the mean manifests itself in the DC component (first frequency bin). The values of most other frequency bins seem to be encoded in the weights and biases. Next, we consider the bee dance data set. In the time domain, we use one location coordinate of the bee. As the bee moves back and forth in its waggle dance, the location coordinate resembles a triangular wave. The autoencoder can track the bee's location through the variation in the slope of the location coordinate windows. The reconstruction in Figure \ref{fig:reconstruction} indeed shows approximately straight lines with varying slope. In the frequency domain, we only consider the angle of the head of the bee in this case study. As the bee shakes its head in some parts of the waggle dance, the goal is to pick up the presence of high-frequency oscillations. Indeed, the reconstruction only varies notably in the bins corresponding to higher frequencies. As we use only one latent variable, the decoded reconstruction does not fully capture all variations in the frequency spectrum, yet it captures the slope of the upward trend towards higher frequencies. We conclude that autoencoders can automatically identify and construct CPD-relevant features, in contrast to CPD methods based on parametric models, where the relevant parameters need to be chosen in advance. \subsection{Importance of postprocessing}\label{sec:postprocessing} In Section \ref{sec:method}, we argued for the importance of suitable postprocessing steps to mitigate the effect of false positive detection alarms. An example of the effect of our postprocessing steps can be found in Figure \ref{fig:postprocessing}. The use of the prominence as a change point score allows us to automatically retain only one significant detection alarm per peak, whereas a height-based dissimilarity score would lead to a false positive detection alarm even if the detection threshold is set high. Furthermore, the matched filter automatically removes most false positive detections. The use of our proposed prominence score then ensures that the remaining false positive detections have a negligible change point score.
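This behaviour is easy to reproduce with off-the-shelf tooling. The sketch below is our own illustration, assuming SciPy and a synthetic dissimilarity trace (it is not part of the TIRE implementation); it contrasts height-based and prominence-based scores after applying the same triangular filter used in our postprocessing.
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks, peak_prominences

rng = np.random.default_rng(1)
N = 20  # window size, also the half-width of the triangular filter

# Synthetic dissimilarity trace: one broad peak near t = 100 plus noise
t = np.arange(300)
D = np.exp(-0.5 * ((t - 100) / N) ** 2) + 0.05 * rng.standard_normal(300)

# Triangular impulse response v[k] = v[2N-k] = k / N^2
v = np.concatenate((np.arange(1, N + 1), np.arange(N - 1, 0, -1))) / N**2
D_filt = np.convolve(D, v, mode="same")

peaks, _ = find_peaks(D_filt)
prom = peak_prominences(D_filt, peaks)[0]

# Height flags several nearby maxima; prominence singles out one strong alarm
print("heights:    ", np.round(D_filt[peaks], 3))
print("prominences:", np.round(prom, 3))
\end{verbatim}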
\begin{figure} \centering \includegraphics[width=\columnwidth]{pp.pdf} \caption{Example of the peak in our proposed dissimilarity measure (black line) near a ground-truth change point (red vertical line), both without (left) and with (right) the use of a matched filter. Black dots correspond to local maxima (i.e. detection alarms); our proposed prominence measure is shown in blue. The matched filter drastically reduces the number of false positive detection alarms, whereas the prominence measure ensures that there is only one detection alarm with a large change point score. The example was generated using KL-CPD on the Gaussian mixtures data set. } \label{fig:postprocessing} \end{figure} Next, we quantitatively compare peak height and peak prominence \eqref{eq:prominence} as change point scores and investigate the effect of applying a matched filter \eqref{eq:matched_filter}. We report the average and standard deviation of the AUC over all seven data sets for the GLR procedure, RuLSIF, KL-CPD and TIRE in Table \ref{tab:postprocessing}. Both the matched filter and the use of the peak prominence result in an increase in the average AUC, with the best results obtained when both postprocessing techniques are combined. Most notably, our proposed postprocessing approach nearly doubles the average AUC compared to naive peak detection for all methods. \begin{table} \caption{Comparison of the AUC of different postprocessing techniques on dissimilarity-measure-based CPD methods. } \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline &\textbf{Height} & \textbf{Height+MF}& \textbf{Prominence}& \textbf{Prom.+MF} \\ \hline GLR&$0.42 \pm 0.05$ & $0.67 \pm 0.04$ & $0.58 \pm 0.04$ & $\textbf{0.71} \pm 0.04$ \\ RuLSIF&$0.37 \pm 0.07$ & $0.63 \pm 0.05$ & $0.60 \pm 0.07$ & $\textbf{0.64} \pm 0.05$ \\ KL-CPD&$0.28 \pm 0.10$ & $0.46 \pm 0.08$ & $0.44 \pm 0.10$ & $\textbf{0.48} \pm 0.07$ \\ TIRE&$0.40 \pm 0.08$ & $0.67 \pm 0.10$ & $0.56 \pm 0.08$ & $\textbf{0.79} \pm 0.07$ \\ \hline \end{tabular} \label{tab:postprocessing} \end{center} \end{table} \subsection{Run time} We compare the run times of the different methods on the jumping mean data set by reporting the mean and standard deviation of the run times over 10 random seeds. The GLR procedure takes $(6.6\pm0.4)$\si{s}, RuLSIF needs $(69.6\pm1.5)$\si{s}, KL-CPD needs $(390\pm 5)$\si{s} for 200 epochs and TIRE takes $(32.5\pm0.2)$\si{s} for 200 epochs. The run times of all methods scale linearly with the length of the time series. Unsurprisingly, the very simple GLR procedure is by far the fastest method. KL-CPD, which involves the training of a generative adversarial network and a recurrent neural network, is the slowest. Comparing the run times for 200 epochs, we see that TIRE is faster than KL-CPD. Note that the comparison between TIRE and KL-CPD is difficult, as both are iterative methods and convergence rates may differ. In the code accompanying \cite{Chang2019KernelModels}, a stop criterion for KL-CPD is provided, but this criterion was never satisfied within 200 epochs on the data sets used. We conclude that TIRE has a very reasonable run time compared to the other methods, albeit not the best. \subsection{Sensitivity analysis}\label{sec:sensitivity} We investigate to what extent the performance of the proposed method depends on the parameters chosen in Section \ref{sec:parametersetting}. Ideally, each parameter can either be set following some general guidelines, or the method should not be sensitive to the exact parameter value.
First, we examine how the performance depends on the chosen window size. It is clear that a constant window size would in general be an unreasonable demand: when a time series is down- or upsampled, the window size should change accordingly. Some attempts to provide guidelines on how to choose a window size have been made \cite{Lee2018TimeLearning}, but these often give rise to unreasonable choices and poor performance (see ABD in Table \ref{tab:performance}). Moreover, one can even argue that a good window size is inherently dependent on the interpretation and goals of the practitioner, and cannot be deduced from the data alone. For example, this would be the case for a superposition of two CC time series (cf. Section \ref{sec:datasets}) with frequencies at two distinct scales, of which only one is of interest to the practitioner. Following amongst others \cite{Cheng2020OnDetection}, we therefore advise setting the window size based on domain knowledge (cf. Section \ref{sec:parametersetting}). To inspect the sensitivity of TIRE to these choices, we show in Figure \ref{fig:windowsize} the mean AUC and its standard error for all seven data sets for window sizes that are $0.25$, $1/(2\sqrt{2})$, $0.5$, $1/\sqrt{2}$, $1$, $\sqrt{2}$, $2$, $2\sqrt{2}$ and $4$ times the domain-knowledge-based window size as defined in Section \ref{sec:parametersetting}. Furthermore, we again let $K=2$, $\lambda ^{\text{TD}}=1$ and $\lambda ^{\text{FD}}=1$. \begin{figure} \centering \includegraphics[width=\columnwidth]{windowsize_extended.pdf} \caption{Influence of the window size on the performance of TIRE. We report the mean AUC and its standard error for window sizes ranging from one quarter to four times the window size chosen in Section \ref{sec:parametersetting}. } \label{fig:windowsize} \end{figure} The larger standard error for the bee dance data set in Figure \ref{fig:windowsize} is primarily caused by the large variation in difficulty between the different time series, and not by the method. For most data sets only limited variations in AUC are present in the interval $[0.5, 2]$, such that a small to moderate change in window size would not change how the performance of the proposed TIRE method ranks relative to the considered baseline methods. For the changing coefficients (CC) data set and the well log data set, the variations in AUC are more substantial. The AUC for CC increases steadily as the window size grows, since the DFT can better capture the long-range dependencies in the data set, but it decreases sharply when the window size becomes large compared to the distance between the change points. In the well log case, the difficulty is that some change points are very close to each other. When the window size grows large, two nearby peaks in the dissimilarity measure can no longer be resolved. In this case, the use of a matched filter is thus even disadvantageous. This also explains why the AUC decreases sharply for all data sets when an unreasonably large window size is chosen. Second, we investigate the influence of the latent dimension of the used autoencoder. We let the total number of time-domain features $h^{\text{TD}}$ vary from 1 to 10 and set the number of time-invariant features to $s^{\text{TD}} = \max\{h^{\text{TD}}-1,1\}$. Furthermore we let $s^{\text{FD}}=h^{\text{FD}}=1$, $K=2$, $\lambda ^{\text{TD}}=1$ and $\lambda^{\text{FD}}=1$ (cf. parameter settings \textit{a} and \textit{b}).
We use at most one instantaneous feature to prevent the autoencoder from leaking valuable CPD-relevant information into the instantaneous features (cf. Section \ref{sec:method}). We also let the number of frequency-domain features vary analogously and investigate the advantage of using time-invariant features. We do the latter by comparing to TIRE$_{\lambda=0}$, a version of TIRE with $\lambda=0$ in the loss function \eqref{eq:training-loss3} (i.e. no time-invariant features) and without the smoothing of \eqref{eq:smooth-features}, as this is not necessarily a meaningful operation in this case. \begin{figure} \centering \includegraphics[width=\columnwidth]{nr_features.pdf} \caption{Sensitivity of the performance of TIRE to the total number of TD features $h^{\text{TD}}$ and FD features $h^{\text{FD}}$ of the used autoencoder. The average AUC over all data sets and its standard deviation are shown. We also compare to the dependence of TIRE$_{\lambda=0}$ on the latent dimension. Whereas TIRE (with $\lambda=1$) seems on average robust to the number of TD features, the AUC for TIRE$_{\lambda=0}$ decreases. } \label{fig:latentdim} \end{figure} The average AUCs over all data sets are shown in Figure \ref{fig:latentdim}. The large standard deviation stems from the diversity of the different data sets. For TIRE, the average AUC remains very stable when the number of TD features is varied. Furthermore, the performance of TIRE seems optimal for one time-invariant FD feature; the average AUC when two or more FD features are used is lower, but it does not decrease further with the number of FD features. Furthermore, we can observe that the performance of TIRE with $\lambda=1$ is clearly superior to that of TIRE$_{\lambda=0}$. The increase in AUC is more distinct for higher numbers of TD features. This is unsurprising, as a larger latent dimension allows an autoencoder without time-invariant features to encode the features more freely, making the positive effect of adding the time-invariant feature term to the loss function \eqref{eq:training-loss3} all the more pronounced. Next, we determine how sensitive TIRE is to the parameter $K$ in the training loss \eqref{eq:training-loss3}. We let $K$ vary from $1$ to $10$, with other parameters as in the previous experiments, and present the results in Figure \ref{fig:Kdependence}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Ksize.pdf} \caption{Sensitivity of the performance of TIRE to the parameter $K$ in the TIRE training loss \eqref{eq:training-loss3}. We report for each data set the mean AUC and its standard error for $K$ between $1$ and $10$.} \label{fig:Kdependence} \end{figure} For most data sets the performance is stable with respect to changes in $K$; only for CC and bee dance is a decrease in AUC observed for large $K$. As the runtime also increases with $K$, we advise setting $K$ rather small, e.g. $K\in [1,5]$. Finally, we investigate the sensitivity of TIRE with respect to the change magnitude at the change points (relative to the noise level). We do this by varying the standard deviation of the noise in the jumping mean data set (cf. Section \ref{sec:datasets}), leaving the change magnitudes unchanged. The jumps in the mean are of magnitude $1/16, 2/16, \ldots, 3$ and we let the standard deviation of the noise vary from $0.5$ to $3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{meannoise.pdf} \caption{Sensitivity of the performance of TIRE to the standard deviation of the noise in the jumping mean data set.
The average AUC over ten realizations and its standard deviation are shown, together with the fraction of change points for which the change magnitude is larger than the noise standard deviation.} \label{fig:meannoise} \end{figure} Figure \ref{fig:meannoise} shows a decrease of the AUC that roughly tracks the fraction of change points for which the change magnitude is larger than the noise standard deviation. This is in line with expectations. In general, we can conclude that the performance of TIRE does not depend critically on the exact value of the window size $N$, the number of features $h$ or the parameter $K$. \section{Introduction} In the era of big data, where Internet of Things (IoT) devices and other sensors provide endless data streams, the importance of time series analysis techniques can hardly be overestimated. One particular task, which has drawn attention from the statistics and data mining communities for decades \cite{wald2004sequential,brodsky2013nonparametric, gustafsson2000adaptive, basseville1993detection}, is \textit{change point detection} (CPD): the detection of abrupt changes in the temporal evolution of time series data. Change point detection can be a goal in itself, or it can be used as a preprocessing tool to divide a time series into homogeneous segments (which can then be further analysed, clustered or classified). Real-life applications of CPD include, but are not limited to, the analysis of climate data \cite{reeves_chen_wang_lund_lu_2007}, financial market data \cite{pepelyshev_polunchenko_2017, hsu_1982}, genetic data \cite{wang_wu_ji_wang_liang_2011}, sensor network data \cite{tartakovsky2003quickest, tartakovsky2008asymptotically} and medical data \cite{michael1979automatic, malladi2013online}. CPD methods can be categorized according to many different criteria. It is common to make the distinction between online CPD, which provides real-time detections, and retrospective (offline) CPD, which provides more robust detections at the cost of needing more future data. In this paper, we focus on the second category. Many CPD algorithms compare past and future time series intervals by means of a dissimilarity measure. An alarm is issued when the two intervals are sufficiently dissimilar. A first group of methods defines this dissimilarity measure based on the difference in distribution of the two intervals. CUSUM and related methods \cite{basseville1993detection,shao2010testing} track changes in the parameter of a chosen distribution, the generalized likelihood ratio (GLR) procedure \cite{brandt, appel_brandt_1983} monitors the likelihood that both intervals are generated from the same distribution, and subspace methods \cite{ide2007change, kawahara2007change} measure the distance between subspaces spanned by the columns of an observability matrix. All these methods, however, strongly rely on the assumption that the time series data is generated by some parametric probability distribution (CUSUM), autoregressive model (GLR) or state-space model (subspace methods). Bayesian online CPD \cite{Adams2007BayesianDetection} is another notable algorithm that depends on distributional assumptions. Unsurprisingly, the performance of these parametric methods heavily depends on how well the actual data follows the assumed model. Parameter-free alternatives are kernel density estimation \cite{csorgHo198820, brodsky2013nonparametric, Chang2019KernelModels} and the related density ratio estimation \cite{Kawahara2012SequentialEstimation, Liu2013Change-pointEstimation}.
A more complete overview of CPD methods can be found in e.g. \cite{Truong2020SelectiveMethods, namoano2019online, Burg2020AnAlgorithms, Aminikhanghahi2017ADetection}. Following the successful application of deep learning techniques in anomaly detection, a promising approach for CPD was to base the dissimilarity measure on the distance between features automatically learned by an autoencoder \cite{Lee2018TimeLearning}. The main advantages of this approach are the absence of distributional assumptions and the ability of autoencoders to extract complex features from data in a cost-efficient way. There are, however, also some severe drawbacks. First, there are no guarantees that the distance between consecutive features reflects the actual dissimilarity of the intervals, i.e. features may vary significantly even in the absence of a change point. Second, the correlated nature of time series samples is not adequately leveraged by vanilla autoencoders, which makes it challenging to detect abrupt changes in the frequency domain. This shortcoming is not uncommon in the CPD literature \cite{basseville1993detection, Adams2007BayesianDetection, Chang2019KernelModels, cheng2020optimal}. Some methods explicitly focus on abrupt changes in the spectrum \cite{moskvina2003change, Chen2019AutomatedAnalysis}, thereby often ignoring changes in the time domain. Finally, the absence of a postprocessing procedure preceding the detection of peaks in the dissimilarity measure often leads to a high number of false positive detection alarms \cite{Cheng2020OnDetection}. Building on \cite{Lee2018TimeLearning}, we propose a new autoencoder-based CPD method using a partially time-invariant representation (TIRE) that aims to overcome the aforementioned concerns. Our main contributions can be summarized as follows. \begin{itemize} \item We propose a new CPD framework based on a novel adaptation of the autoencoder with a loss function that promotes time-invariant features. Through our choice of loss function, we aim for the autoencoder to learn a representation that is better suited for CPD. Based on this encoding, we define a dissimilarity measure to detect change points. We evaluate the performance of our algorithm on diverse simulated and real-life benchmark data sets and compare with other dissimilarity-measure-based CPD algorithms. \item Whereas many change point algorithms assume the time series data to consist of independent and identically distributed (iid) samples, we explicitly focus on non-iid data. We use the discrete Fourier transform to obtain temporally localized spectral information and propose an approach that combines time-domain and frequency-domain information. When domain knowledge is available, our approach allows the user to focus on only the time or frequency domain. \item Finally, we propose a way of identifying change points from the dissimilarity measure data by applying the notion of topographic prominence \cite{llobera_2001} to the CPD setting. We emphasize the general importance of postprocessing steps in CPD through numerous experiments. \end{itemize} \section{Problem formulation}\label{sec:problem} Let $\mathbf{X}$ be a $d$-channel time series of length $T$ for which there exist time stamps $0=T_0<T_1<\ldots <T_p=T$ such that every subsequence of the form $(\mathbf{X}[T_k+1], \ldots, \mathbf{X}[T_{k+1}])$ is a realisation of a discrete-time weak-sense stationary (WSS) stochastic process, whereas the union of two such consecutive subsequences is not.
The time stamps $T_1, T_2, \ldots$ are referred to as \textit{change points}. The goal of \textit{change point detection} (CPD) is to estimate these change points without any prior knowledge on the number and the locations of the change points \cite{Cheng2020OnDetection}. The piecewise WSS assumption is not a strict prerequisite for the algorithm to work, but it does accurately summarize the kind of change points our proposed algorithm will be able to detect. Examples of violations of the WSS conditions, and therefore examples of change points we wish to detect, are changes in mean, variance and autocorrelation. Note that changes in the latter are also reflected in the frequency spectrum through the Wiener-Khinchin theorem \cite{wiener_1930, khintchine_1934}. We focus on CPD algorithms that are based on a \textit{dissimilarity measure}. Such methods calculate for every time stamp $t$ the dissimilarity between the windows $(\mathbf{X}[t-N+1], \ldots, \mathbf{X}[t])$ and $(\mathbf{X}[t+1], \ldots, \mathbf{X}[t+N])$, where $N$ is a user-defined window size. Our first main goal is to develop a CPD-tailored feature embedding and a corresponding dissimilarity measure $\mathcal{D}_t$, which peaks when the WSS restriction is violated. The nominal approach for identifying change points would then be to determine all local maxima and label each local maximum of which the height exceeds a user-defined detection threshold $\tau$ as a change point \cite{M-stat, Lee2018TimeLearning}. However, given a window size $N$, the width of this peak will theoretically be $2N-1$ time stamps, making it likely that noise will cause multiple detection alarms for each ground-truth change point \cite{Cheng2020OnDetection}. Our second objective is to mitigate the impact of this issue. \section{Autoencoder-based change point detection}\label{sec:method} \subsection{Preprocessing} Let $\mathbf{X}$ be a $d$-channel time series of length $T$, where we denote the $i$-th channel by $\mathbf{X}^i$. We first divide each channel into windows of size $N$, \begin{equation}\label{eq:window-size} \mathbf{x}^i_t = \begin{bmatrix}\mathbf{X}^i[t-N+1],\ldots, \mathbf{X}^i[t]\end{bmatrix}^T \in \mathbb{R}^N. \end{equation} We then combine for every $t$ the corresponding windows of each channel into a single vector, \begin{equation}\label{eq:td-windows} \mathbf{y}_t = \begin{bmatrix}(\mathbf{x}^1_t)^T,\ldots, (\mathbf{x}^d_t)^T\end{bmatrix}^T \in\mathbb{R}^{Nd}. \end{equation} Furthermore, we use the discrete Fourier transform (DFT) on each window $\mathbf{x}^i_t$ to obtain temporally localized spectral information. The length of the transformed window is then cropped to a predefined length $M$. Finally, the modulus of the transformed window is computed. Bundling all these transformations as a single mapping $\mathcal{F}:\mathbb{R}^N\to\mathbb{R}^M$, we obtain the frequency-domain counterpart of $\mathbf{y}_t$: \begin{equation}\label{eq:fd-windows} \mathbf{z}_t = \begin{bmatrix}\mathcal{F}(\mathbf{x}^1_t)^T,\ldots, \mathcal{F}(\mathbf{x}^d_t)^T\end{bmatrix}^T\in\mathbb{R}^{Md}. \end{equation} \subsection{Feature encoding} Building on \cite{Lee2018TimeLearning}, we use autoencoders (AEs) to extract features for change point detection from the time-domain (TD) windows $\{\mathbf{y}_t\}_t$. 
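As a concrete illustration of the preprocessing step above, the following sketch (our own NumPy illustration for a single-channel series; the multi-channel case simply concatenates the per-channel results as in \eqref{eq:td-windows} and \eqref{eq:fd-windows}) constructs the windows $\mathbf{y}_t$ and $\mathbf{z}_t$:
\begin{verbatim}
import numpy as np

def make_windows(X, N, M):
    """TD windows y_t and FD windows z_t for a one-channel series X.

    Each y_t stacks the N samples up to time t; z_t is the modulus of
    the DFT of y_t, cropped to M frequency bins (the mapping F).
    """
    T = len(X)
    Y = np.stack([X[t - N + 1:t + 1] for t in range(N - 1, T)])
    Z = np.abs(np.fft.rfft(Y, axis=1))[:, :M]
    return Y, Z

rng = np.random.default_rng(2)
X = np.sin(0.3 * np.arange(500)) + 0.1 * rng.standard_normal(500)
Y, Z = make_windows(X, N=20, M=10)  # shapes (481, 20) and (481, 10)
\end{verbatim}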
We expand the approach of \cite{Lee2018TimeLearning} by also extracting features from the frequency-domain (FD) windows $\{\mathbf{z}_t\}_t$ and by proposing a new loss function that explicitly promotes time-invariance of the features in consecutive windows. The latter is a relevant property for performing CPD based on a dissimilarity measure. An autoencoder is a type of artificial neural network that aims to learn a low-dimensional encoding (i.e. features) from a higher-dimensional input by reconstructing the input from the encoding as accurately as possible. It is often used as a dimension reduction technique and can be seen as a non-linear generalization of PCA \cite{goodfellow2016deep}. In its simplest form, an autoencoder consists of one hidden layer. The encoder maps the input $\mathbf{y}_t\in\mathbb{R} ^{Nd}$ (resp. $\mathbf{z}_t$) to its encoded form $\mathbf{h}_t\in\mathbb{R}^{h}$ as \begin{equation} \mathbf{h}_t = \sigma(\mathbf{W}\mathbf{y}_t+\mathbf{b}), \end{equation} where $\mathbf{W}$ is the weight matrix, $\mathbf{b}$ is the bias vector and $\sigma$ is a non-linear activation function that is applied element-wise. The decoder then maps the encoded representation back to the original input space, \begin{equation} \Tilde{\mathbf{y}}_t = \sigma'(\mathbf{W'}\mathbf{h}_t+\mathbf{b'}). \end{equation} We choose $\sigma=\sigma'$ to be the hyperbolic tangent function; as a consequence, each channel of the time series should be rescaled to the interval $[-1, 1]$. We use individual instead of joint rescaling to ensure that all channels have a comparable magnitude. The goal of the AE is then to minimize the difference between the input and the output, i.e. minimize $\norm{\mathbf{y}_t-\Tilde{\mathbf{y}}_t}$, by optimizing the choice of $\mathbf{W}, \mathbf{W'}, \mathbf{b}, \mathbf{b'}$. In \cite{Lee2018TimeLearning}, the learned features $\mathbf{h}_t$ are then used for CPD by measuring the dissimilarity between consecutive feature vectors ($\mathbf{h}_t$ vs. $\mathbf{h}_{t-1}$). However, the learned features $\mathbf{h}_t$ will then unavoidably also contain information that is not relevant for CPD (e.g. phase shift or noise information), which may generate large dissimilarities even when there is no actual change point. We try to remedy this by introducing the notions of \textit{time-invariant} and \textit{instantaneous features}. The idea is that features learned from consecutive windows are only useful for CPD when they are approximately equal to each other in the absence of a change point (e.g. mean, amplitude and frequency should not change much within a WSS segment). We will refer to them as \textit{time-invariant} features, as they are intended to be invariant over time within a WSS segment. All other information that is needed for a good reconstruction, but that may differ for consecutive windows, is intended to be encoded in \textit{instantaneous} features. This then gives \begin{equation} \mathbf{h}_t = \begin{bmatrix}(\mathbf{s}_t)^T, (\mathbf{u}_t)^T\end{bmatrix}^T, \end{equation} where $\mathbf{s}_t\in\mathbb{R}^{s}$ are the time-invariant features and $\mathbf{u}_t\in\mathbb{R}^{h-s}$ are the instantaneous features.
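A minimal sketch of this encoder-decoder pair and the feature split is given below, assuming NumPy and randomly initialized parameters (training is discussed next; all variable names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
Nd, h, s = 20, 3, 2   # input size, total features, time-invariant features

# Randomly initialized parameters: W, b (encoder) and W', b' (decoder)
W, b = 0.1 * rng.standard_normal((h, Nd)), np.zeros(h)
W2, b2 = 0.1 * rng.standard_normal((Nd, h)), np.zeros(Nd)

def encode(y):
    return np.tanh(W @ y + b)      # h_t = sigma(W y_t + b)

def decode(h_t):
    return np.tanh(W2 @ h_t + b2)  # y~_t = sigma'(W' h_t + b')

y_t = rng.uniform(-1, 1, Nd)       # channels rescaled to [-1, 1]
h_t = encode(y_t)
s_t, u_t = h_t[:s], h_t[s:]        # time-invariant vs. instantaneous
y_rec = decode(h_t)
\end{verbatim}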
To obtain both a good reconstruction and time-invariant features, we propose to minimize the loss function \begin{equation}\label{eq:training-loss} \sum_t\left( \norm{\mathbf{y}_{t}-\Tilde{\mathbf{y}}_{t}}_2 +\lambda \norm{\mathbf{s}_{t}-\mathbf{s}_{t-1}}_2\right) \end{equation} where $\lambda>0$ controls the amount of regularization of the time-invariant features. Here we make the implicit assumption that the number of terms in \eqref{eq:training-loss} that correspond to a window containing a change point is very small compared to $T$. It is very uncommon in machine learning to directly minimize the loss function \eqref{eq:training-loss}, i.e. to take all $t$ into account for every step of gradient descent. To improve convergence, it is advisable to first randomly partition all time stamps $t$ into $J$ smaller \textit{mini-batches} $\mathcal{T}_j$ \cite{masters2018revisiting}. The mini-batch stochastic gradient descent (SGD) version of minimizing \eqref{eq:training-loss} would then consist of updating the network parameters by calculating the gradient of \begin{equation}\label{eq:training-loss2} \sum_{t\in\mathcal{T}_j}\left( \norm{\mathbf{y}_{t}-\Tilde{\mathbf{y}}_{t}}_2 +\lambda \norm{\mathbf{s}_{t}-\mathbf{s}_{t-1}}_2\right) \end{equation} for some $j$, followed by performing one gradient descent step and repeating this for all other mini-batches. Note that formulation \eqref{eq:training-loss2} would require using time stamps from other mini-batches, i.e. $t\in\mathcal{T}_j$ does not generally imply that $t-1\in\mathcal{T}_j$. However, we choose to generalize \eqref{eq:training-loss2}, and minimize the following loss function for each mini-batch, \begin{equation}\label{eq:training-loss3} \sum_{t\in\mathcal{T}_j}\left( \norm{\mathbf{y}_{t}-\Tilde{\mathbf{y}}_{t}}_2 +\frac{\lambda}{K} \sum_{k=0}^{K-1}\norm{\mathbf{s}_{t-k}-\mathbf{s}_{t-k-1}}_2\right), \end{equation} where $K\in\mathbb{N}$. For $K=1$ this equation reduces to \eqref{eq:training-loss2}. For $K>1$, this approach has the advantage that $K+1$ consecutive features are jointly and simultaneously considered during the computation of the gradient, resulting in an additional smoothing effect of the stochastic gradient in the direction of the minimization of the penalty term in \eqref{eq:training-loss}, thereby further promoting the intended time invariance of the features $\mathbf{s}_t$. It may help to think of \eqref{eq:training-loss3} as $K+1$ parallel autoencoders with identical weights and biases, where the $k$-th autoencoder receives $\mathbf{y}_{t+k-K-1}$ as input and where a subset of the latent variables (i.e. the time-invariant features) of the parallel autoencoders is forced to be close together to obtain a partially time-invariant representation (Figure \ref{fig:pae}). Note that even though the difference in formulation between \eqref{eq:training-loss} and \eqref{eq:training-loss3} impacts the training of the autoencoder, the resulting loss functions are essentially the same when summing over all $t$. To prevent the autoencoder from encoding all information in the unregularized instantaneous features, the number of instantaneous features should be kept as small as possible. Depending on the data, one might add additional regularization terms to the loss function or use a more advanced type of autoencoder (e.g. weight-regularized, deep/stacked, tied-weights, variational or recurrent autoencoder).
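To make the mini-batch objective \eqref{eq:training-loss3} concrete, a sketch of the loss computation is given below; it builds on the `encode'/`decode' functions from the previous sketch and is purely illustrative (in practice, gradients would be handled by an automatic differentiation framework):
\begin{verbatim}
def tire_batch_loss(Y, batch, lam=1.0, K=2, s=2):
    """Loss (eq. training-loss3) for one mini-batch of time stamps.

    Y[t] is the TD window y_t; `batch` holds time stamps with t > K.
    """
    loss = 0.0
    for t in batch:
        h_t = encode(Y[t])
        loss += np.linalg.norm(Y[t] - decode(h_t))  # reconstruction
        # time-invariance penalty over K consecutive feature differences
        for k in range(K):
            s_a = encode(Y[t - k])[:s]              # s_{t-k}
            s_b = encode(Y[t - k - 1])[:s]          # s_{t-k-1}
            loss += (lam / K) * np.linalg.norm(s_a - s_b)
    return loss
\end{verbatim}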
In an entirely similar fashion, we train a second autoencoder on $\{\mathbf{z}_t\}_t$ with a similar loss function to obtain frequency-domain time-invariant features. We will use the superscripts TD and FD to distinguish between parameters and features corresponding to the time and frequency domain, respectively. \begin{figure} \centering \includegraphics[width=\columnwidth]{Autoencoder.pdf} \caption{Visualization of time-invariant feature encoding for $K=1$. The TD autoencoder is shown twice, once with input $\mathbf{y}_{t-1}$ and once with input $\mathbf{y}_{t}$. The corresponding time-invariant features $\mathbf{s}_{t-1}$ and $\mathbf{s}_{t}$ are forced to be approximately equal because of the chosen loss function \eqref{eq:training-loss3}. Frequency-domain time-invariant features are obtained analogously.} \label{fig:pae} \end{figure} \subsection{Postprocessing and peak detection} In this section we first describe how to construct a dissimilarity measure that complies with the needs formulated in Section \ref{sec:problem}, based on the time-invariant features from the previous section. Next, we discuss multiple methods to suppress the number of false positives when determining the detection alarms. \subsubsection{Postprocessing} We first combine the TD and FD time-invariant features into a single time-invariant feature vector, \begin{equation}\label{eq:combine_shared} \mathbf{s}_t = \begin{bmatrix}\alpha\cdot(\mathbf{s}^{\text{TD}}_t)^T,\beta\cdot(\mathbf{s}^{\text{FD}}_t)^T\end{bmatrix}^T, \end{equation} where $\alpha,\beta>0$ are parameters that control the relative contribution of the TD and FD time-invariant features. Next, we use a zero-delay weighted moving average filter to smooth the time-invariant features, as small fluctuations in the features would otherwise affect the performance of the method. The moving average filtering operation can be described as follows, \begin{equation}\label{eq:smooth-features} \Tilde{\mathbf{s}}_t[i] = \sum_{k=-N+1}^{N-1} \mathbf{v}[N-k]\cdot \mathbf{s}_{t+k}[i], \end{equation} with $\mathbf{v}[k] = \mathbf{v}[2N-k] \triangleq k/N^2$ for $1\leq k\leq N$, where $N$ is the window size as defined in \eqref{eq:window-size}, resulting in a triangular-shaped weighting window. We use edge value padding in order for the equation to be defined for all $t$. We then propose the following definition for the dissimilarity measure $\mathcal{D}$: \begin{equation}\label{eq:dissimilarity-measure} \mathcal{D}_t = \norm{\Tilde{\mathbf{s}}_t- \Tilde{\mathbf{s}}_{t+N}}_2, \end{equation} where $N$ is the window size as defined in \eqref{eq:window-size}. In some applications, domain-specific knowledge might suggest that only TD (resp. FD) information is relevant for CPD. This expert knowledge can be incorporated in the dissimilarity measure by setting $\alpha=1$ and $\beta=0$ (resp. $\alpha=0$ and $\beta=1$) in \eqref{eq:combine_shared}. We denote the obtained dissimilarity measure by $\mathcal{D}_t^{\text{TD}}$ (resp. $\mathcal{D}_t^{\text{FD}}$). Using $\mathcal{D}_t^{\text{TD}}$ and $\mathcal{D}_t^{\text{FD}}$, we can also set $\alpha$ and $\beta$ automatically in such a way that the TD and FD time-invariant features contribute in a comparable fashion to $\mathcal{D}_t$. We let \begin{align}\label{eq:alpha-beta} \alpha = Q(\{\mathcal{D}_t^{\text{FD}}\}_t, 0.95) \quad \text{and} \quad \beta = Q(\{\mathcal{D}_t^{\text{TD}}\}_t, 0.95), \end{align} where $Q$ is the quantile function, i.e.
for a set of real numbers $A$ and $0< p \leq 1$ it holds that $Q(A,p)$ is the smallest number such that $p\cdot 100\%$ of the elements of the set $A$ are smaller than $Q(A,p)$. We use the 95th percentile as a measure of the heights of the peaks in the dissimilarity scores, in a way that ignores outliers. By setting $\alpha$ and $\beta$ in \eqref{eq:combine_shared} according to \eqref{eq:alpha-beta}, the peaks in $\{\mathcal{D}_t^{\text{FD}}\}_t$ and $\{\mathcal{D}_t^{\text{TD}}\}_t$ contribute approximately equally to $\{\mathcal{D}_t\}_t$. As all learned features lie in the interval $[-1,1]$, the robustness of this quantile-based fusion approach is guaranteed. \subsubsection{Peak detection} If the time-invariant features are indeed similar across successive windows within a WSS segment, the dissimilarity measure $\mathcal{D}_t$, as defined in \eqref{eq:dissimilarity-measure}, will peak at or near a change point. Determining reasonable detection alarms from these peaks is an often neglected task in the current literature. In some cases, the problem is avoided by focusing on time series containing only one change point \cite{M-stat}. In other cases all local maxima of the dissimilarity measure are considered to be detection alarms \cite{Lee2018TimeLearning}, leading to unreasonably many false positives. Liu et al. \cite{Liu2013Change-pointEstimation} propose to reduce the number of false positives by deleting detections that are too close to the previous detection. As their method might also delete correct detections, it is clearly not optimal. Recently, the use of a matched filter was investigated as a way to improve the detection and localization of change points \cite{cheng2020optimal, Cheng2020OnDetection}. It is however difficult to automatically select a representative peak to base the matched filter on \cite{cheng2020optimal}, nor is it possible to unambiguously derive an asymptotically matched filter \cite{Cheng2020OnDetection} for our dissimilarity measure. We therefore propose to reuse the impulse response $\mathbf{v}$ from the moving average filtering \eqref{eq:smooth-features}, as it has an effect comparable to that of a matched filter as a consequence of its width and shape. This then leads to \begin{equation}\label{eq:matched_filter} \Tilde{\mathcal{D}}_t = \sum_{k=-N+1}^{N-1} \mathbf{v}[N-k]\cdot \mathcal{D}_{t+k}. \end{equation} The detection alarms then correspond to all local maxima of the series $( \Tilde{\mathcal{D}}_N, \Tilde{\mathcal{D}}_{N+1}, \ldots, \Tilde{\mathcal{D}}_{T-N})$ \cite{cheng2020optimal, Cheng2020OnDetection}. Aiming to further improve detection accuracy, we propose to use a different, parameter-free approach for peak detection. In topography, the \textit{prominence} of a peak is the minimum height that one needs to descend in order to be able to ascend to a higher peak \cite{llobera_2001}. The idea is that even though every peak in the dissimilarity measure might consist of multiple local maxima that all have a large height, only one of these maxima will have a large prominence. This measure has previously been successfully applied in the analysis of population data \cite{nelson_mckeon_2019}, super-resolution microscopy data \cite{griffie_boelen_burn_cope_owen_2015} and neural signals \cite{choi_ahn_park_lee_kim_cho_senok_koo_goo_2017}.
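Summarizing the postprocessing pipeline of Eqs.~\eqref{eq:combine_shared}--\eqref{eq:matched_filter} in code form (a minimal NumPy sketch under our own naming conventions, not reference code):
\begin{verbatim}
import numpy as np

def triangular_kernel(N):
    # weights v[N-k] for k = -N+1..N-1 of Eq. (smooth-features)
    return np.concatenate([np.arange(1, N + 1),
                           np.arange(N - 1, 0, -1)]) / N**2

def smooth(features, N):
    """Zero-delay triangular moving average with edge-value padding.
    features: (T, d) array of time-invariant features."""
    v = triangular_kernel(N)
    padded = np.pad(features, ((N - 1, N - 1), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], v, mode="valid")
                     for i in range(features.shape[1])], axis=1)

def dissimilarity(s_td, s_fd, N):
    """Eqs. (combine_shared)-(dissimilarity-measure) with the automatic
    quantile-based alpha/beta of Eq. (alpha-beta), then the matched
    filter of Eq. (matched_filter)."""
    d_td = np.linalg.norm(smooth(s_td, N)[:-N] - smooth(s_td, N)[N:], axis=1)
    d_fd = np.linalg.norm(smooth(s_fd, N)[:-N] - smooth(s_fd, N)[N:], axis=1)
    alpha, beta = np.quantile(d_fd, 0.95), np.quantile(d_td, 0.95)
    s = np.hstack([alpha * smooth(s_td, N), beta * smooth(s_fd, N)])
    D = np.linalg.norm(s[:-N] - s[N:], axis=1)
    return np.convolve(np.pad(D, N - 1, mode="edge"),
                       triangular_kernel(N), mode="valid")
\end{verbatim}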
Given that $\mathcal{D}_{t}$ is a local maximum, we first define the two closest time stamps left and right of $t$ for which the dissimilarity measure is larger than $\mathcal{D}_{t}$, and denote them by $t_L$ and $t_R$ respectively, i.e., \begin{align} t_L &= \max\left\{\sup\{t^* \:|\: \mathcal{D}_{t^*}>\mathcal{D}_{t} \text{ and } t^*<t\},N\right\},\\ t_R &= \min\{\inf\{t^* \:|\: \mathcal{D}_{t^*}>\mathcal{D}_{t} \text{ and } t^*>t\},T-N\}, \end{align} where the $\max$ and $\min$ operators ensure that $t_L$ and $t_R$ stay at a distance $N$ from the boundaries of the time series. We then define the prominence $\mathcal{P}(\mathcal{D}_{t})$ of the local maximum $\mathcal{D}_{t}$ by \begin{equation}\label{eq:prominence} \mathcal{P}(\mathcal{D}_{t}) = \mathcal{D}_{t} - \max\left\{\min_{t_L<t^*<t}\mathcal{D}_{t^*},\min_{t<t^*<t_R}\mathcal{D}_{t^*} \right\}. \end{equation} If $\mathcal{D}_{t}$ is not a local maximum, we set $\mathcal{P}(\mathcal{D}_{t})=0$ by definition. We propose to combine the matched filter \eqref{eq:matched_filter} and the prominence measure \eqref{eq:prominence}, i.e. we calculate the prominences for $\{\Tilde{\mathcal{D}}_t\}_t$ instead of $\{\mathcal{D}_t\}_t$. A change point is then detected if the prominence $\mathcal{P}(\Tilde{\mathcal{D}}_{t})$ is above a predefined threshold $\tau$. \subsection{Summary: the TIRE method}\label{sec:summary} Finally, we summarize all the steps of the proposed Time-Invariant REpresentation (TIRE) change point detection method. If only time-domain or frequency-domain information is used, we will refer to the method using the acronym TIRE-TD or TIRE-FD, respectively. \begin{enumerate} \item Construct time-domain windows $\{\mathbf{y}_t\}_t$ \eqref{eq:td-windows} and frequency-domain windows $\{\mathbf{z}_t\}_t$ \eqref{eq:fd-windows} from a time series $\mathbf{X}$. \item Using these windows as training data sets, train two autoencoders by minimizing the loss function \eqref{eq:training-loss3}. \item Use \eqref{eq:alpha-beta} to determine $\alpha$ and $\beta$, or set one of them to zero based on domain knowledge. Construct the combined time-invariant features according to \eqref{eq:combine_shared}. \item Smooth the time-invariant features according to \eqref{eq:smooth-features}. \item Calculate the dissimilarity measures for all $t$ using \eqref{eq:dissimilarity-measure}. \item Apply a matched filter to the dissimilarity measures following \eqref{eq:matched_filter} and compute the prominence of all local maxima using \eqref{eq:prominence}. \item If the prominence \eqref{eq:prominence} of a local maximum is higher than some user-defined detection threshold $\tau$, a change point has been detected. \end{enumerate} An implementation of our TIRE methods has been made available at \url{https://github.com/deryckt/TIRE}.
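In practice, steps 6 and 7 need not be implemented from scratch: the prominence of \eqref{eq:prominence} coincides with the peak prominence implemented in standard signal-processing libraries. A minimal sketch using SciPy (variable names are ours):
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

def detect_change_points(D_filtered, N, tau):
    """D_filtered: matched-filtered dissimilarities ~D_t (1-D array).
    Returns indices of local maxima with prominence above tau,
    restricted to the valid range [N, T-N] as in the text."""
    peaks, _ = find_peaks(D_filtered, prominence=tau)
    return peaks[(peaks >= N) & (peaks <= len(D_filtered) - N)]
\end{verbatim}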
{ "attr-fineweb-edu": 1.556641, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbwHxK02iP4Wj_LbC
\section{Introduction} The following three well-known problems of modern physics and cosmology, the accelerated expansion of the Universe (Riess {\em et al.\/} \cite{Riess1998}; Perlmutter {\em et al.\/} \cite{Perlm1999}), the very small but non-vanishing cosmological constant or dark energy (Weinberg \cite{Wein1989}), (Carroll \cite{Carr2001}; Padmanabhan (\cite{Padm2003}, \cite{Padm2006})), and the theoretically extraordinarily huge quantum vacuum energy density (Zel`dovich \cite{Zeld1967}), (Volovik (\cite{Vol2005}, \cite{Vol2006})), can be treated as mutually related or as independent problems. An old approach to the issue of the cosmological constant $\Lambda$ utilizes quantum vacuum energy as a solution of this issue, but unfortunately it does not work properly. Namely, it appears that the directly calculated, Casimir-like value of the quantum vacuum energy is more than one hundred orders of magnitude greater than expected. Such a huge value of the quantum vacuum energy is a serious theoretical problem in itself. Lowering the UV cutoff scale from the Planckian one down to the supersymmetric one is only a symbolic improvement (roughly, it cuts the order by two (Weinberg \cite{Wein1989})). A more radical reduction of the cutoff could cure the situation, but it would create new problems. Sometimes, it is claimed that vacuum energy, for one or another reason, does not influence the gravitational field. In this paper, following the ideas presented in (Broda {\em et al.\/} \cite{Broda2008}), we show in what sense quantum vacuum energy influences the gravitational field, and in what sense it does not. Actually, we propose a reasonable derivation of a contribution of quantum vacuum energy which influences the gravitational field, and which assumes an experimentally reasonable value. \section{Quantum vacuum energy} The standard approach (Weinberg \cite{Wein1989}) to estimate the quantum vacuum energy density $\varrho_{\rm vac}$ in the spirit of the Casimir energy yields for a single bosonic scalar mode \begin{align} \varrho_{\rm vac}=\frac{1}{2}\int\limits_{0}^{\Lambda_{\rm \textsc{uv}}} \frac{4\pi c}{(2\pi\hbar)^3}\; \sqrt{(mc)^2+k^2}\;k^2\mathrm{d}k, \label{eq:wrong density 1} \end{align} where $m$ is the mass of the mode. For the Planckian UV cutoff, $\Lambda_{\rm \textsc{uv}}=\Lambda_{\rm \textsc{p}}=\sqrt{\hbar c^3/G}\approx 6.5\rm\,kg\,m/s$, we obtain $\varrho_{\rm vac}\approx 3.1\cdot 10^{111}\rm\, J/m^3$, whereas the experimentally estimated value is of the order of the critical density of the Universe, $\varrho_{\rm crit}={3 {\left(H_{0}c\right)}^{2}}/{8\pi G}\; (\approx 10^{-9} \rm\, J/m^3)$. In our opinion, this manifestly absurd result follows from an erroneous approach. Namely, in classical as well as in quantum theory, interactions are mediated by fields or particles. In Eq.\,\eqref{eq:wrong density 1} no explicit or implicit coupling to the gravitational field appears at any stage. Therefore, by construction, we assume that gravitation does not couple (is insensitive) to the term \eqref{eq:wrong density 1}. As there is no coupling to \eqref{eq:wrong density 1}, its huge value is isolated from the outer world and therefore invisible (non-existent). What we have just said is, so to say, the negative part of our reasoning. In the positive part we should cure the situation by proposing a reasonable solution. In (Broda {\em et al.\/} \cite{Broda2008}) we have sketched our idea and proposed an estimation of the quantum vacuum energy. Actually, our calculations also admit another interpretation.
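Although not part of the original derivation, a quick numerical cross-check of the figures quoted above may be helpful (a Python sketch in SI units with rounded constants; the massless limit $m\to 0$ is taken, which dominates at the Planckian cutoff):
\begin{verbatim}
import math

hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11   # SI units

# Planck momentum: Lambda_P = sqrt(hbar c^3 / G)
Lam = math.sqrt(hbar * c**3 / G)
print(Lam)                       # ~6.5 kg m/s, as quoted above

# Eq. (1) for m = 0: rho_vac = (1/2) 4 pi c / (2 pi hbar)^3 * Lam^4 / 4
rho_vac = 0.5 * 4 * math.pi * c / (2 * math.pi * hbar)**3 * Lam**4 / 4
print(rho_vac)                   # ~3e111 J/m^3, cf. 3.1e111 above

# critical density rho_crit = 3 (H0 c)^2 / (8 pi G) for H0 ~ 70 km/s/Mpc
H0 = 70e3 / 3.0857e22            # in 1/s
rho_crit = 3 * (H0 * c)**2 / (8 * math.pi * G)
print(rho_crit)                  # ~8e-10 J/m^3, i.e. of order 1e-9
\end{verbatim}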
For example, in our opinion, the idea of ``the rearrangement'' of the vacuum motivated by thermodynamics and condensed-matter physics advocated in (Volovik (\cite{Vol2005}, \cite{Vol2006})) could be implemented via just such a reinterpretation. Anyway, our original calculus (Broda {\em et al.\/} \cite{Broda2008}) consists in carefully considering only the contributions coming from attached classical external lines. More precisely, in the first step, we should estimate the quantum vacuum fluctuations of a matter field in the background of an external classical gravitational field. In the next step we should retain the most divergent part and subtract the term without the gravitational field. \section{The estimation} We will work in the formalism of the effective action throughout. A Euclidean version of our approach has been given in (Broda {\em et al.\/} \cite{Broda2008}), and here we present a relativistic one. The full quantum contribution to the effective action coming from a single (non-self-interacting) mode is (DeWitt (\cite{DeWitt1975}, \cite{DeWitt2003})) \begin{align} S_{\rm eff}=\pm\frac{i\hbar}{2}\log\det\mathcal{D}, \label{eq:EffectiveAction1} \end{align} where $\mathcal{D}$ is a second-order differential operator, in general with classical external fields, and the upper (plus) sign corresponds to a boson, whereas the lower (minus) one corresponds to a fermion. The proper-time UV-regularized version of \eqref{eq:EffectiveAction1} in Schwinger's formalism assumes the form (Birrell \& Davies \cite{Birrell1982}), (DeWitt (\cite{DeWitt1975}, \cite{DeWitt2003})) \begin{align} S^{\varepsilon}_{\rm eff}=\mp\frac{i\hbar}{2}\int\limits_{\varepsilon}^{\infty}\frac{i\mathrm{d}s}{is}\;\mathrm{Tr} \;e^{-is\mathcal{D}}. \label{eq:EffectiveActionRegularized} \end{align} Next, we apply the Seeley--DeWitt ``heat-kernel'' expansion in four dimensions (Birrell \& Davies \cite{Birrell1982}), (DeWitt (\cite{DeWitt1975}, \cite{DeWitt2003})), (Ball \cite{Ball1989}), \begin{align} \left<x\right|e^{-is\mathcal{D}}\left|x\right>=i(4 \pi)^{-2}\sum\limits_{n=0}^{\infty}a_{n}(x)(is)^{n-2}. \label{eq:Seeley--DeWitt Expansion} \end{align} The contribution coming from the first term we are interested in, i.e.\ $a_{0}(x)$, is \begin{align} S_{\rm vac}=\mp\frac{\hbar}{2} \frac{1}{2 \varepsilon^2} \frac{1}{\left(4 \pi\right)^2}\;\mathrm{Tr}\;a_{0}(x). \label{eq:a0VacuumActionContribution 1} \end{align} Since $a_{0}(x)=1$, and for the Planckian UV cutoff $\varepsilon=\frac{\hbar G}{c^3}$, we obtain \begin{align} S_{\rm vac}=\mp\frac{1}{4} \frac{c^7}{\left(4 \pi\right)^2 \hbar G^2}\;\int \sqrt{-g}\,{\mathrm{d}}^3x\mathrm{d}t. \label{eq:a0VacuumActionContribution2} \end{align} For simplicity, we confine ourselves to the spatially flat Friedmann--Lema\^{\i}tre--Robertson--Walker metric with the scale factor $a(t)$. To ease our calculus further, we set the present coordinate time to $t=0$, and normalize the scale factor to unity, i.e.\ $a(0)=1$. Expanding $a(t)$ around $t=0$ we have \begin{align} a(t)=1+H_0t-\frac{1}{2}q_0{H_0}^2t^2+\mathcal{O}(t^3), \label{eq:a(t)FactorExpansion1} \end{align} where $H_0$ is the present day Hubble expansion rate, and $q_0$ is the present day deceleration parameter. Hence \begin{align} \sqrt{-g}=\left[a^2(t)\right]^{3/2}=\left[1+2H_0t+\left(1-q_0\right){H_0}^2t^2+\mathcal{O}(t^3)\right]^{3/2}.
\label{eq:DeterminantExpansion1} \end{align} Now, one can easily show that the infinitesimal gauge transformation of the metric, \begin{align} \delta g_{\mu\nu}=\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}, \label{eq:Gauge transformation 1} \end{align} with the gauge parameter \begin{align} \xi_{\mu}=\left(\frac{1}{2}H_{0}{\bold{x}}^2, -H_{0}tx^{i}\right), \label{eq:GaugeParameter} \end{align} cancels the part linear in $t$ in \eqref{eq:DeterminantExpansion1}. There are also general arguments supporting this cancellation given in (Shapiro \cite{Shapiro2007}). Therefore \begin{align} \sqrt{-g}\approx 1+\frac{3}{2}\left(1-q_0\right){H_0}^2t^2, \label{eq:DeterminantExpansion2} \end{align} and \begin{align} S_{\rm vac}\approx \mp\frac{1}{4} \frac{c^7}{\left(4 \pi\right)^2 \hbar G^2}\;\int \left[1+\frac{3}{2}\left(1-q_0\right){H_0}^2t^2\right]\mathrm{d}t \int{\mathrm{d}}^3x. \label{eq:a0VacuumActionContribution3} \end{align} The number one in the bracket corresponds to the term uncoupled to the gravitational field, and it should be subtracted. By the way, such a subtraction is a standard procedure in quantum field theory. As we are interested in a density rather than in a total value, we should get rid of all the integrals. Since the integrand is only time-dependent, we can simply discard the spatial volume $\int{\mathrm{d}}^3x$. As far as the time integral is concerned, we should take into account that our calculus is perturbative in $t$ and valid only in the vicinity of $t=0$. Therefore, we have to take the limit of ``infinitesimal'' time. From the point of view of quantum field theory the ``infinitesimal'' time is the Planck time $T_{\rm \textsc{p}}=\sqrt{{\hbar G}/{c^5}}$. So, our density is a time average, i.e.\ ${T_{\rm \textsc{p}}}^{-1}\int\limits_{0}^{T_{\rm \textsc{p}}}\mathrm{d}t(\cdot)$, and assumes the form \begin{align} \varrho\approx \mp\frac{1}{4} \frac{c^7}{\left(4 \pi\right)^2 \hbar G^2}\frac{1}{2}\left(1-q_0\right){H_0}^2{T_{\rm \textsc{p}}}^2, \label{eq:Final Density 1} \end{align} or finally \begin{align} \varrho\approx \mp\frac{1}{48 \pi}\left(1-q_0\right)\varrho_{\rm crit}, \label{eq:Final Density 2} \end{align} where we have used the relation ${H_0}^2=\frac{8}{3}\pi \frac{G}{c^2}\varrho_{\rm crit}$. For, e.g., $q_0=-0.7$ (Virey {\em et al.\/} \cite{Virey2005}), we get \begin{align} \varrho\approx \mp0.01\varrho_{\rm crit}, \label{eq:Final Density 3} \end{align} a very promising result. \section{Conclusions} In the framework of standard quantum field theory, without any additional more or less exotic assumptions, we are able to derive an experimentally reasonable result \eqref{eq:Final Density 3}. This numeric value corresponds to only a single mode. Therefore, in the real world it should be multiplied by a small natural number. \section*{Acknowledgments} This work was supported in part by the Polish Ministry of Science and Higher Education Grant PBZ/MIN/008/P03/2003 and by the University of {\L}\'od\'z grant. One of the authors (B.B.) would like to thank the organizers for their kind invitation and for their generous support.
{ "attr-fineweb-edu": 1.912109, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUbxg4ukPiEKkD69fI
\section{Introduction} Color confinement in quantum chromodynamics (QCD) is still an important unsolved problem~\cite{CMI:2000mp}. As a picture of color confinement, 't~Hooft~\cite{tHooft:1975pu} and Mandelstam~\cite{Mandelstam:1974pi} conjectured that the QCD vacuum is a kind of magnetic superconducting state caused by condensation of magnetic monopoles, in which an effect dual to the Meissner effect works to confine color charges. However, in contrast to SUSY QCD~\cite{Seiberg:1994rs} or the Georgi-Glashow model~\cite{'tHooft:1974qc,Polyakov:1976fu} with scalar fields, finding the color magnetic monopoles which condense is not straightforward in QCD. An interesting idea to realize this conjecture is to project QCD to the Abelian maximal torus group by a partial (but singular) gauge fixing~\cite{tHooft:1981ht}. In $SU(3)$ QCD, the maximal torus group is Abelian $U(1)^2$. Then color magnetic monopoles appear as topological objects. Condensation of the monopoles causes the dual Meissner effect~\cite{Ezawa:1982bf,Suzuki:1988yq,Maedan:1988yi}. \par Numerically, an Abelian projection in non-local gauges such as the maximally Abelian (MA) gauge~\cite{Suzuki:1983cg,Kronfeld:1987ri,Kronfeld:1987vd} has been found to support the Abelian confinement scenario beautifully~\cite{Suzuki:1992rw,Singh:1993jj,Chernodub:1997ay,Bali:1997cp, Suzuki:1998hc,Koma:2003gq,Koma:2003hv}. Also the Abelian dominance and the dual Meissner effect are observed clearly in local unitary gauges such as the $F12$ and Polyakov (PL) gauges~\cite{Sekido:2007mp}. \par However, although numerically interesting, the idea of the Abelian projection\cite{tHooft:1981ht} is theoretically very unsatisfactory. 1) In non-perturbative QCD, no gauge fixing is necessary at all. There are infinitely many ways of such a partial gauge fixing, and whether the 't Hooft scheme is gauge independent or not is not known. 2) After an Abelian projection, only one (in $SU(2)$) or two (in $SU(3)$) gluons are photon-like with respect to the residual $U(1)$ or $U(1)^2$ symmetry, and the other gluons are massive charged matter fields. Such an asymmetry among gluons is unnatural. 3) How to construct Abelian monopole operators in a gauge-independent way in terms of the original gluon fields is not clear at all. \vspace{.5cm} In this paper, we propose a new theoretical scheme for color confinement based on the dual Meissner effect which is free from the above problems. The idea was first expressed by one of the authors (T.S.) in Ref.\cite{Suzuki:2014wya} and was extended in Ref.\cite{SIB201711}. However, the proofs of the Dirac quantization condition of $g_m^a$ in $SU(2)$ and $SU(3)$ shown in Refs.\cite{Suzuki:2014wya,SIB201711} are incorrect. Without knowing the explicit form of the gauge-field configuration corresponding to VNABI, it is impossible to prove the Dirac quantization condition theoretically. Since the authors expect that VNABI play an important role in color confinement, the Dirac quantization conditions for $g_m^a$ in $SU(2)$ and $SU(3)$ are assumed. Also the simultaneous diagonalization of VNABI $J_{\mu}$ for all $\mu$ cannot be proved from the Coleman-Mandula theorem\cite{Coleman} and Lorentz invariance, contrary to the assertion in Ref.\cite{SIB201711}. When the simultaneous diagonalization of $J_{\mu}$ for all $\mu$ is assumed, the condensation of $J_\mu$ and the electric color invariance of the confinement vacuum can be compatible. Then to check if the above scheme is realized in nature, we study the proposal in the framework of non-Abelian lattice gauge theory.
For simplicity we adopt pure $SU(2)$ lattice gauge theory. First, considering $J_{\mu}(x)=k_{\mu}(x)$ in the continuum, we define VNABI on lattice as an Abelian-like monopole following DeGrand-Toussaint\cite{DeGrand:1980eq}. Then, as the most important point to be clarified, we are going to study if the lattice VNABI has a non-trivial continuum limit, namely if the scaling of the density exists. The lattice monopoles exist as closed loops due to the current conservation law. As shown later explicitly, monopole closed loops are contaminated by lattice artifacts. Hence it is absolutely necessary to introduce various techniques avoiding such large lattice artifacts in order to analyse especially such a quantity as the monopole density, since all lattice artifacts contribute positively to the density. We introduce various techniques of smoothing the thermalized vacuum. Smooth gauge fixings such as the maximal center gauge (MCG)\cite{DelDebbio:1996mh,DelDebbio:1998uu}, block-spin transformations of Abelian-like monopoles and extraction of physically important infrared long monopoles are taken into account. We also employ the tree-level tadpole improved gauge action. \section{A new confinement scheme based on VNABI\label{Sec2}} \subsection{Equivalence of $J_{\mu}$ and $k_{\mu}$} First of all, we prove that the Jacobi identities of covariant derivatives lead us to the conclusion that violation of the non-Abelian Bianchi identities (VNABI) $J_{\mu}$ is nothing but an Abelian-like monopole $k_{\mu}$ defined by violation of the Abelian-like Bianchi identities without gauge-fixing. Define a covariant derivative operator $D_{\mu}=\partial_{\mu}-igA_{\mu}$. The Jacobi identities are expressed as \begin{eqnarray} \epsilon_{\mu\nu\rho\sigma}[D_{\nu},[D_{\rho},D_{\sigma}]]=0. \label{eq-Jacobi} \end{eqnarray} By direct calculations, one gets \begin{eqnarray*} [D_{\rho},D_{\sigma}]&=&[\partial_{\rho}-igA_{\rho},\partial_{\sigma}-igA_{\sigma}]\\ &=&-ig(\partial_{\rho}A_{\sigma}-\partial_{\sigma}A_{\rho}-ig[A_{\rho},A_{\sigma}])+[\partial_{\rho},\partial_{\sigma}]\\ &=&-igG_{\rho\sigma}+[\partial_{\rho},\partial_{\sigma}], \end{eqnarray*} where the second commutator term of the partial derivative operators cannot be discarded, since the gauge fields may contain a line singularity. Actually, it is the origin of the violation of the non-Abelian Bianchi identities (VNABI), as shown in the following. The non-Abelian Bianchi identities and the Abelian-like Bianchi identities are, respectively, $D_{\nu}G^{*}_{\mu\nu}=0$ and $\partial_{\nu}f^{*}_{\mu\nu}=0$. The relation $[D_{\nu},G_{\rho\sigma}]=D_{\nu}G_{\rho\sigma}$ and the Jacobi identities (\ref{eq-Jacobi}) lead us to \begin{eqnarray} D_{\nu}G^{*}_{\mu\nu}&=&\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}D_{\nu}G_{\rho\sigma} \nn\\ &=&-\frac{i}{2g}\epsilon_{\mu\nu\rho\sigma}[D_{\nu},[\partial_{\rho},\partial_{\sigma}]]\nn\\ &=&\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}[\partial_{\rho},\partial_{\sigma}]A_{\nu}\nn\\ &=&\partial_{\nu}f^{*}_{\mu\nu}, \label{eq-JK} \end{eqnarray} where $f_{\mu\nu}$ is defined as $f_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}=(\partial_{\mu}A^a_{\nu}-\partial_{\nu}A^a_{\mu})\sigma^a/2$. Namely, Eq.(\ref{eq-JK}) shows that the violation of the non-Abelian Bianchi identities is equivalent to that of the Abelian-like Bianchi identities. Denote the violation of the non-Abelian Bianchi identities as $J_{\mu}$: \begin{eqnarray} J_{\mu} &=& \frac{1}{2}J_{\mu}^a\sigma^a =D_{\nu}G^*_{\mu \nu}.
\label{nabi} \end{eqnarray} Eq.(\ref{nabi}) is gauge covariant and therefore a non-zero $J_{\mu}$ is a gauge-invariant property. An Abelian-like monopole $k_{\mu}$ without any gauge-fixing is defined as the violation of the Abelian-like Bianchi identities: \begin{eqnarray} k_{\mu}=\frac{1}{2}k_{\mu}^a\sigma^a&=& \partial_{\nu}f^*_{\mu\nu} =\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}f_{\rho\sigma}. \label{ab-mon} \end{eqnarray} Eq.(\ref{eq-JK}) shows that \begin{eqnarray} J_{\mu}=k_{\mu}. \label{JK} \end{eqnarray} Several comments are in order. \begin{enumerate} \item Eq.(\ref{JK}) can be considered as the special case, without any Abelian projection, of the important relation derived by Bonati et al.\cite{Bonati:2010tz} in the framework of Abelian projections. Actually it is possible to prove Eq.(\ref{JK}) directly, without the help of the Jacobi identities: \begin{eqnarray*} J_{\mu}^a-k_{\mu}^a&=& \Tr\sigma^a D_{\nu}G^{*}_{\mu\nu}-\partial_{\nu}f^{*a}_{\mu\nu} \\ &=&-ig\Tr\sigma^a[A_{\nu}, G^{*}_{\mu\nu}]\\ &&-ig\epsilon_{\mu\nu\rho\sigma}\Tr\sigma^a[\partial_{\nu}A_{\rho}, A_{\sigma}]\\ &=&0. \end{eqnarray*} \item VNABI $J_{\mu}$ transforms as an adjoint operator, and so does the Abelian-like monopole current $k_{\mu}$. This can also be proved directly. Consider a regular gauge transformation \begin{eqnarray*} A'_{\mu}&=&VA_{\mu}V^{\dag}-\frac{i}{g}\partial_{\mu}VV^{\dag}. \end{eqnarray*} Then \begin{eqnarray} k'_{\mu}&=&\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}\partial_{\rho}A'_{\sigma}\nn\\ &=&\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}\partial_{\rho}(VA_{\sigma}V^{\dag}-\frac{i}{g}\partial_{\sigma}VV^{\dag})\nn\\ &=&V(\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}\partial_{\rho}A_{\sigma})V^{\dag}\nn\\ &=&Vk_{\mu}V^{\dag}.\label{vkv} \end{eqnarray} \item The above equivalence shows that VNABI is essentially Abelian-like. It was already argued that singularities of gauge fields corresponding to VNABI must be Abelian\cite{DiGiacomo:2008wh}, although the reasoning is different. \item The covariant conservation law $D_{\mu}J_{\mu}=0$ is proved as follows\cite{Bonati:2010tz}: \begin{eqnarray} D_{\mu}J_{\mu}&=&D_{\mu}D_{\nu}G^*_{\nu\mu} =\frac{ig}{2}[G_{\nu\mu},G^*_{\nu\mu}]\nn\\ &=&\frac{ig}{4}\epsilon_{\nu\mu\rho\sigma}[G_{\nu\mu},G_{\rho\sigma}] =0, \label{NA-cons} \end{eqnarray} where \begin{eqnarray} \partial_{\mu}\partial_{\nu}G^{*}_{\mu\nu}=0 \label{PNA} \end{eqnarray} is used. The Abelian-like monopole satisfies the Abelian-like conservation law \begin{eqnarray} \partial_{\mu}k_{\mu}=\partial_{\mu}\partial_{\nu}f^{*}_{\mu\nu}=0\label{PAA} \end{eqnarray} due to the antisymmetric property of the Abelian-like field strength\cite{Arafune:1974uy}. Hence VNABI also satisfies the same Abelian-like conservation law \begin{eqnarray} \partial_{\mu}J_{\mu}=0. \label{A-cons} \end{eqnarray} Both Eqs.(\ref{NA-cons}) and (\ref{A-cons}) are compatible, since the difference between the two quantities vanishes: \begin{eqnarray} [A_{\mu}, J_{\mu}]&=&\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}[A_{\mu},\partial_{\nu}f_{\rho\sigma}]\nn\\ &=&\epsilon_{\mu\nu\rho\sigma}[A_{\mu}, \partial_{\nu}\partial_{\rho}A_{\sigma}]\nn\\ &=&-\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}\partial_{\mu}[A_{\rho},A_{\sigma}]\nn\\ &=&\frac{i}{g}(\partial_{\mu}\partial_{\nu}G^*_{\mu\nu}-\partial_{\mu}\partial_{\nu}f^*_{\mu\nu})\nn\\ &=& 0\nn, \end{eqnarray} where (\ref{PNA}) and (\ref{PAA}) are used. Hence the Abelian-like conservation relation (\ref{A-cons}) is also gauge-covariant.
\item The Abelian-like conservation relation (\ref{A-cons}) gives us three conserved magnetic charges in the case of color $SU(2)$ and $N^2-1$ charges in the case of color $SU(N)$. But these are kinematical relations, following from the identically vanishing double divergence of an antisymmetric tensor~\cite{Arafune:1974uy}. The number of conserved charges is different from that of the Abelian projection scenario~\cite{tHooft:1981ht}, where only $N-1$ conserved charges exist in the case of color $SU(N)$. \end{enumerate} \begin{table*} \caption{\label{comp}Comparison between the 'tHooft Abelian projection studies and the present work in $SU(2)$ QCD. $\hat{\phi}'=V_p^{\dag}\sigma_3 V_p$, where $V_p$ is a partial gauge-fixing matrix of an Abelian projection. $(u_c, d_c)$ is a color-doublet quark pair. MA means maximally Abelian. } \begin{ruledtabular} \begin{tabular}{|c|c|c|c|} &\multicolumn{2}{c|}{The 'tHooft Abelian projection scheme} & This work and Refs.\cite{Suzuki:2007jp,Suzuki:2009xy} \ \ \\ &Previous works\cite{Suzuki:1983cg,Kronfeld:1987ri,Kronfeld:1987vd,Suzuki:1992rw,Singh:1993jj,Chernodub:1997ay,Bali:1997cp,Suzuki:1998hc,Koma:2003gq,Koma:2003hv,Sekido:2007mp} & Reference \cite{Bonati:2010tz} & \\ \hline Origin of $k_{\mu}$ & A singular gauge transformation& $k_{\mu}=\Tr J_{\mu}\hat{\phi}'$ & $k_{\mu}^a=J_{\mu}^a$\\ \hline No. of conserved $k_{\mu}$ & \multicolumn{2}{c|}{$1$} & $3$ \\ Role of $A^a_{\mu}$ &\multicolumn{2}{c|}{One photon $A^3_{\mu}$ with $k^3_{\mu}$ $+$ 2 massive $A^{\pm}_{\mu}$} & Three gluons $A_{\mu}^a$ with $k_{\mu}^a$ \\ Flux squeezing&\multicolumn{2}{c|}{One electric field $E_{\mu}$ }& Three electric fields $E^a_{\mu}$\\ \hline Number of physical mesons &\multicolumn{2}{c|}{ 2 Abelian neutrals, $\bar{u}_cu_c$ and $\bar{d}_cd_c$}& 1 color singlet $\bar{u}_cu_c+\bar{d}_cd_c$\\ \hline Expected confining vacuum &\multicolumn{2}{c|}{Condensation of Abelian monopoles }& Condensation of color-invariant $\lambda_{\mu}$\cite{Suzuki:1988yq}\\ \hline Privileged gauge choice & A singular gauge & MA gauge & No need of gauge-fixing \\ \end{tabular} \end{ruledtabular} \end{table*} \vspace{.5cm} \subsection{Proposal of the vacuum in the confinement phase} Now we propose a new mechanism of color confinement in which VNABI $J_{\mu}$ play an important role in the vacuum. For the scenario to be realized, we make two assumptions concerning the properties of VNABI. \begin{enumerate} \item If VNABI are important physically, they must satisfy the Dirac quantization condition between the gauge coupling $g$ and the magnetic charge $g_m^a$ for $a=1,2,3$ in $SU(2)$ and $a=1\sim 8$ in $SU(3)$. Since we do not know theoretically the properties of VNABI, we have to assume the Dirac quantization conditions: \begin{eqnarray*} gg_m^a=4\pi n^a, \end{eqnarray*} where $n^a$ is an integer. \item The vacuum in the color confinement phase should be electric color invariant. Since VNABI transform as an adjoint operator, we have to extract an electric-color-invariant but magnetically charged quantity from VNABI. One possible way is to assume that VNABI satisfy \begin{eqnarray*} [J_\mu (x),J_{\nu\neq\mu}]=0, \end{eqnarray*} which makes it possible to diagonalize VNABI $J_\mu$ simultaneously for all $\mu$. At present, the authors do not know if the second assumption is the only way to have a magnetically charged but electrically neutral vacuum in the confinement phase.
\end{enumerate} Using the above assumption, VNABI can be diagonalized by a unitary matrix $V_d(x)$ as follows: \begin{eqnarray*} V_d(x)J_{\mu}(x)V_d^{\dag}(x)=\lambda_{\mu}(x)\frac{\sigma_3}{2}, \end{eqnarray*} where $\lambda_{\mu}(x)$ is the eigenvalue of $J_{\mu}(x)$ and is then color invariant but magnetically charged. Then one gets \begin{eqnarray} \Phi(x)&\equiv& V_d^{\dag}(x)\sigma_3 V_d(x), \label{Phi}\\ J_{\mu}(x)&=&\frac{1}{2}\lambda_{\mu}(x)\Phi(x), \label{lambda}\\ \sum_a (J_{\mu}^a(x))^2&=&\sum_a (k_{\mu}^a(x))^2=(\lambda_{\mu}(x))^2. \label{Eigen} \end{eqnarray} Namely the color electrically charged part and the magnetically charged part are separated out. From (\ref{lambda}) and (\ref{A-cons}), one gets \begin{eqnarray} \partial_{\mu}J_{\mu}(x)&=&\frac{1}{2}(\partial_{\mu}\lambda_{\mu}(x)\Phi(x) + \lambda_{\mu}(x)\partial_{\mu}\Phi(x))\nn\\ &=& 0. \end{eqnarray} Since $\Phi(x)^2=1$, \begin{eqnarray*} \partial_{\mu}\lambda_{\mu}(x)&=&-\frac{1}{2}\lambda_{\mu}(x)(\Phi(x)\partial_{\mu}\Phi(x)+\partial_{\mu}\Phi(x)\Phi(x))\\ &=&0. \end{eqnarray*} Hence the eigenvalue $\lambda_{\mu}$ itself satisfies the Abelian conservation law. Furthermore, when use is made of (\ref{vkv}), it is possible to prove that \begin{eqnarray} \frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}f'_{\rho\sigma}(x) &=&\lambda_{\mu}(x)\frac{\sigma_3}{2},\label{lambda3} \end{eqnarray} where \begin{eqnarray*} f'_{\mu\nu}(x)&=&\partial_{\mu}A'_{\nu}(x)-\partial_{\nu}A'_{\mu}(x),\\ A'_{\mu}&=&V_dA_{\mu}V^{\dag}_d-\frac{i}{g}\partial_{\mu}V_dV_d^{\dag}\\ &\equiv& \frac{A^{'a}_{\mu}\sigma^a}{2}. \end{eqnarray*} Namely, \begin{eqnarray} \frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}f^{'1,2}_{\rho\sigma}(x)&=&0,\\ \frac{1}{2}\epsilon_{\mu\nu\rho\sigma}\partial_{\nu}f^{'3}_{\rho\sigma}(x)&=&\lambda_{\mu}(x).\label{eq:Lambda} \end{eqnarray} The singularity appears only in the diagonal component of the gauge field $A'_{\mu}$. It is very interesting to see that $f^{'3}_{\mu\nu}(x)$ is actually the gauge-invariant 'tHooft tensor\cite{'tHooft:1974qc}: \begin{eqnarray*} f^{'3}_{\mu\nu}(x)=\tr\Phi(x)G_{\mu\nu}(x)+\frac{i}{2g}\tr\Phi(x)D_{\mu}\Phi(x)D_{\nu}\Phi(x), \end{eqnarray*} in which the field $\Phi(x)$ (\ref{Phi}) plays the role of the scalar Higgs field in Ref.\cite{'tHooft:1974qc}. Note that the field $\Phi(x)$ (\ref{Phi}) is determined uniquely by VNABI itself in gluodynamics, without any Higgs field. In this sense, our scheme can be regarded as a special Abelian projection scenario with the partial gauge-fixing condition where $J_{\mu}(x)$ are diagonalized. The condensation of the gauge-invariant magnetic currents $\lambda_{\mu}$ does not give rise to a spontaneous breaking of the color electric symmetry. Condensation of the color-invariant magnetic currents $\lambda_{\mu}$ may be a key mechanism of the physical confining vacuum\cite{Suzuki:1988yq, Maedan:1988yi}. The main difference between our new scheme and previous Abelian projection schemes is that in the former there exist $N^2-1$ conserved magnetic currents squeezing $N^2-1$ color electric fields, and color (not charge) confinement is shown explicitly, whereas in the latter there exist only $N-1$ conserved currents, giving charge confinement. In our scheme, the $N^2-1$ conserved magnetic currents are degenerate in the vacuum to $N-1$ color-invariant currents corresponding to the eigenvalues.
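As an elementary numerical illustration of Eqs.~(\ref{Phi})--(\ref{Eigen}) (a sketch with an arbitrarily chosen color vector, not a lattice measurement), diagonalizing $J_{\mu}=\frac{1}{2}k^a_{\mu}\sigma^a$ at a single point indeed yields the color-invariant eigenvalues $\pm\lambda_{\mu}/2$ with $\lambda_{\mu}=\sqrt{\sum_a(k^a_{\mu})^2}$:
\begin{verbatim}
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

k = np.array([2.0, -1.0, 3.0])            # hypothetical k_mu^a at one site
J = sum(ka * sa for ka, sa in zip(k, sigma)) / 2

lam = np.sqrt((k**2).sum())               # Eq. (Eigen): lambda = |k|
evals = np.linalg.eigvalsh(J)             # Hermitian: eigenvalues +-lam/2
print(evals, lam / 2)                     # [-1.8708, 1.8708]  1.8708
\end{verbatim}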
To show the difference of this scheme from the previous 'tHooft Abelian projection with some partial gauge-fixing, we show Table~\ref{comp}, in which the typical differences are summarized. Let us make a comment here on the relation derived by Bonati et al.\cite{Bonati:2010tz}: \begin{eqnarray} k^{AB}_{\mu}(x)&=& \Tr\{J_{\mu}(x)\Phi^{AB}(x)\}, \label{kab} \end{eqnarray} where $k^{AB}_{\mu}(x)$ is an Abelian monopole, $\Phi^{AB}(x)=V^{\dag}_{AB}(x)\sigma_3V_{AB}(x)$ and $V_{AB}(x)$ is a partial gauge-fixing matrix in some Abelian projection like the MA gauge. Making use of Eq.(\ref{lambda}), we get \begin{eqnarray} k^{AB}_{\mu}(x)&=&\lambda_{\mu}(x)\tilde{\Phi}^3(x), \end{eqnarray} where \begin{eqnarray*} \tilde{\Phi}(x)&=&V_{AB}(x)V_d^{\dag}(x)\sigma_3V^{\dag}_{AB}(x)V_d(x)\\ &=&\tilde{\Phi}^a(x)\sigma^a. \end{eqnarray*} The relation (\ref{kab}) is important, since the existence of an Abelian monopole in any Abelian projection scheme is guaranteed by that of VNABI $J_{\mu}$ in the continuum limit. Hence if in any special gauge such as the MA gauge, Abelian monopoles remain non-vanishing in the continuum, as suggested by many numerical data~\cite{Suzuki:1992rw,Singh:1993jj,Chernodub:1997ay,Bali:1997cp, Suzuki:1998hc,Koma:2003gq,Koma:2003hv}, VNABI also remain non-vanishing in the continuum. \vspace{.5cm} \section{Lattice numerical study of the continuum limit} \subsection{Definition of VNABI on lattice} Let us try to define VNABI on lattice. In the previous section, VNABI $J_{\mu}(x)$ was shown to be equivalent in the continuum limit to the violation of the Abelian-like Bianchi identities, $J_{\mu}(x)=k_{\mu}(x)$. On lattice, we have to define a quantity which reproduces the above VNABI in the naive continuum limit. There are two possible such definitions. One is a quantity keeping the adjoint transformation property under the lattice $SU(2)$ gauge transformation $V(s)$: \begin{eqnarray*} U(s,\mu)^{'}=V(s)U(s,\mu)V^{\dag}(s+\mu). \end{eqnarray*} Here $U(s, \mu)$ is a lattice gauge link field. Such a quantity was proposed in Ref.\cite{Skala:1996ar}: \begin{eqnarray*} J_{\mu}(s)&\equiv&\frac{1}{2}\big(U(s,\nu)U_{\mu\nu}(s+\nu)U^{\dag}(s,\nu)-U_{\mu\nu}(s)\big),\\ U_{\mu\nu}(s)&\equiv&U(s,\mu)U(s+\mu,\nu)U^{\dag}(s+\nu,\mu)U^{\dag}(s,\nu), \end{eqnarray*} where $U_{\mu\nu}(s)$ is a plaquette variable corresponding to the non-Abelian field strength. This transforms as an adjoint operator: \begin{eqnarray} J_{\mu}^{'}(s)=V(s)J_{\mu}(s)V^{\dag}(s) \label{eq:trans} \end{eqnarray} and satisfies the covariant conservation law \begin{eqnarray*} \sum_{\mu}D^L_{\mu}J_{\mu}(s)&=& \sum_{\mu}\big(U(s+\mu,\mu)J_{\mu}(s)U^{\dag}(s,\mu)-J_{\mu}(s)\big)\\ &=&0. \end{eqnarray*} However, it does not satisfy the Abelian conservation law \begin{eqnarray} \sum_{\mu}\big(J_{\mu}(s+\mu)-J_{\mu}(s)\big)= 0. \label{eq:acon} \end{eqnarray} Moreover, it does not have a property corresponding to the Dirac quantization condition which the continuum VNABI were assumed to satisfy. The last point is very unsatisfactory, since the topological property as a monopole is essential. Hence we adopt here the second possibility, which can partially reflect the topological property satisfied by VNABI. That is, we define VNABI on lattice as the Abelian-like monopole\cite{Suzuki:2007jp,Suzuki:2009xy} following DeGrand and Toussaint\cite{DeGrand:1980eq}.
First we define Abelian link and plaquette variables: \begin{eqnarray} \theta_{\mu}^a(s)&=&\arctan (U^a_{\mu}(s)/U^0_{\mu}(s))\ \ \ (|\theta_{\mu}^a(s)|<\pi), \label{abel_link}\\ \theta_{\mu\nu}^a(s)&\equiv&\partial_{\mu}\theta_{\nu}^a(s)-\partial_{\nu}\theta_{\mu}^a(s), \label{abel_proj} \end{eqnarray} where $\partial_{\nu}(\partial'_{\nu})$ is a forward (backward) difference. Then the plaquette variable can be decomposed as follows: \begin{eqnarray} \theta_{\mu\nu}^a(s) &=&\bar{\theta}_{\mu\nu}^a(s)+2\pi n_{\mu\nu}^a(s)\ \ (|\bar{\theta}_{\mu\nu}^a|<\pi),\label{abel+proj} \end{eqnarray} where $n_{\mu\nu}^a(s)$ is an integer corresponding to the number of Dirac strings. Then VNABI as Abelian monopoles are defined by \begin{eqnarray} k_{\mu}^a(s)&=& -(1/4\pi)\epsilon_{\mu\alpha\beta\gamma}\partial_{\alpha} \bar{\theta}_{\beta\gamma}^a(s+\hat\mu) \nonumber\\ &=&(1/2)\epsilon_{\mu\alpha\beta\gamma}\partial_{\alpha} n_{\beta\gamma}^a(s+\hat\mu), \nonumber \\ J_{\mu}(s)&\equiv&\frac{1}{2}k_{\mu}^a(s)\sigma^a \label{eq:amon}. \end{eqnarray} This definition (\ref{eq:amon}) of VNABI satisfies the Abelian conservation condition (\ref{eq:acon}) and takes an integer value corresponding to the magnetic charge obeying the Dirac quantization condition. The eigenvalue $\lambda_{\mu}$ is defined from (\ref{Eigen}) as \begin{eqnarray} (\lambda_{\mu}(s))^2=\sum_a(k_{\mu}^a(s))^2.\label{Eigen_L} \end{eqnarray} However, Eq.(\ref{eq:amon}) does not satisfy the transformation property (\ref{eq:trans}) on the lattice. We will demonstrate that this property is recovered in the continuum limit by showing the gauge invariance of the monopole density or the squared monopole density (\ref{Eigen_L}) in the scaling limit. \begin{table}[H] \caption{A typical example of monopole loop distributions (loop length (L) vs loop number (No.)) for various gauges in one thermalized vacuum on the $24^4$ lattice at $\beta=3.6$ in the tadpole improved action. Here $I$ and $L$ denote the color component and the loop length of the monopole loop, respectively.
} \label{Tab:Mdist} \begin{center} \begin{tabular}{|c|c||c|c||c|c|} \hline NGF I=1 & & MCG I=1 & & DLCG I=1 &\\ \hline L & No & L & No & L & No\\ \hline 4 & 154 & 4 & 166 & 4 & 164\\ 6 & 20 & 6 & 64 & 6 & 66\\ 8 & 7 & 8 & 30 & 8 & 28\\ 10 & 2 & 10 & 13 & 10 & 15\\ 14 & 1 & 12 & 11 & 12 & 10\\ 16 & 1 & 14 & 4 & 14 & 3\\ 407824 & 1 & 16 & 5 & 16 & 6\\ & & 18 & 1 & 18 & 2\\ & & 22 & 2 & 20 & 1\\ & & 24 & 2 & 22 & 1\\ & & 28 & 1 & 24 & 2\\ & & 30 & 1 & 26 & 3\\ & & 32 & 1 & 30 & 1\\ & & 34 & 2 & 36 & 1\\ & & 36 & 1 & 44 & 1\\ & & 44 & 1 & 48 & 1\\ & & 46 & 1 & 54 & 1\\ & & 48 & 1 & 58 & 1\\ & & 58 & 1 & 124 & 1\\ & & 124 & 1 & 1106 & 1\\ & & 2254 & 1 & 1448 & 1\\ \hline AWL I=1 & & MAU1 I=1 & & MAU1 I=3 &\\ \hline L & No & L & No & L & No\\ \hline 4 & 142 & 4 & 73 & 4 & 190\\ 6 & 66 & 6 & 32 & 6 & 80\\ 8 & 36 & 8 & 13 & 8 & 22\\ 10 & 8 & 10 & 11 & 10 & 15\\ 12 & 7 & 12 & 6 & 12 & 2\\ 14 & 3 & 14 & 3 & 14 & 3\\ 16 & 3 & 16 & 2 & 16 & 1\\ 18 & 1 & 18 & 3 & 18 & 3\\ 20 & 1 & 20 & 2 & 20 & 3\\ 22 & 3 & 22 & 1 & 24 & 1\\ 26 & 3 & 30 & 2 & 36 & 1\\ 28 & 1 & 34 & 2 & 42 & 1\\ 30 & 2 & 58 & 1 & 60 & 1\\ 32 & 1 & 148 & 1 & 66 & 1\\ 34 & 1 & 5188 & 1 & 146 & 1\\ 40 & 1 & & & 318 & 1\\ 46 & 1 & & & 722 & 1\\ 58 & 1 & & & & \\ 120 & 1 & & & &\\ 308 & 1 & & & &\\ 1866 & 1 & & & &\\ \hline \end{tabular} \end{center} \end{table} \subsection{Simulation details} \subsubsection{Tadpole improved gauge action} First of all, for simplicity, we adopt the tree-level improved action of the form \cite{Alford:1995hw} in $SU(2)$ gluodynamics: \beq S = \beta_{imp} \sum_{pl} S_{pl} - {\beta_{imp} \over 20 u_0^2} \sum_{rt} S_{rt} \label{eq:improved_action} \eeq where $S_{pl}$ and $S_{rt}$ denote plaquette and $1 \times 2$ rectangular loop terms in the action, \beq S_{pl,rt}\ = \ {1\over 2}{\rm Tr}(1-U_{pl,rt}) \, , \label{eq:terms} \eeq and the parameter $u_0$ is the {\it input} tadpole improvement factor, taken here equal to the fourth root of the average plaquette $P=\langle \frac{1}{2} {\mathrm tr} U_{pl} \rangle$. In our simulations we have not included one-loop corrections to the coefficients, for the sake of simplicity. The lattices adopted are $48^4$ for $\beta=3.0\sim3.9$ and $24^4$ for $\beta=3.3\sim3.9$. The latter was taken mainly for studying finite-size effects. The simulations with the action (\ref{eq:improved_action}) have been performed with the parameters given in Table~\ref{t1} in Appendix~\ref{Ap:tadpole}, following a method similar to that adopted in Ref.\cite{Bornyakov:2005iy}. \subsubsection{The non-Abelian string tension} In order to fix the physical lattice scale we need to compute one physical dimensionful observable the value of which is known. For this purpose we choose the string tension $\sigma$. The string tension for the action (\ref{eq:improved_action}) was computed long ago in \cite{Poulis:1997zx,Bornyakov:2005iy}, but we improve this measurement according to present standards. We use the hypercubic blocking (HYP) invented by the authors of Refs.~\cite{Hasenfratz:2001hp,Hasenfratz:2001tw, Gattringer:2001jf,Bornyakov:2004ii} to reduce the statistical errors. After one step of HYP, APE smearing \cite{APE} was applied to the space-like links. The spatial smearing is made, as usual, in order to variationally improve the overlap with a mesonic flux tube state. The results of the measured string tensions are also listed in Table~\ref{t1} in Appendix~\ref{Ap:tadpole}.
\begin{figure}[htb] \caption{\label{fig_b-beta} $b=na(\beta)$ in units of $1/\sqrt{\sigma}$ versus $\beta$.} \includegraphics[width=8cm,height=6.cm]{fig1.eps} \vspace{-0.3cm} \end{figure} \begin{table}[h] \caption{The $n=4$ blocked monopole loop distribution (loop length (L) vs loop number (No.)) in various gauges on the $6^4$ reduced lattice at $\beta=3.6$, in the same vacuum as used in Table~\ref{Tab:Mdist}.}
It is the maximal Abelian Wilson loop gauge (AWL) in which \begin{eqnarray} R&=&\sum_{s,\mu\neq\nu}\sum_a(cos(\theta^a_{\mu\nu}(s)) \label{SAWL} \end{eqnarray} is maximaized. Here $\theta^a_{\mu\nu}(s)$ have been introduced in eq.~(\ref{abel+proj}). Since $cos(\theta^a_{\mu\nu}(s))$ are $1\times 1$ Abelian Wilson loops, the gauge is called as the maximal Abelian Wilson loop gauge (AWL). A similar gauge was proposed in \cite{Suzuki:1996ax}, although only one-color component was considered then in comparison with the maximal Abelian gauge (MAG). Note that even $1\times 1$ small Abelian Wilson loop is enhanced when a smooth gauge condition such as the MA gauge is adopted. The details are presented in the Appendix \ref{AWL}. \item Maximal Abelian and $U(1)$ Landau gauge (MAU1).\\ The fourth is the combination of the maximal Abelian gauge (MAG) and the $U(1)$ Landau gauge\cite{Kronfeld:1987ri,Kronfeld:1987vd}. Namely we first perform the maximal Abelian gauge fixing and then with respect to the remaining $U(1)$ symmetry the Landau gauge fixing is done. This case breaks the global $SU(2)$ color symmetry contrary to the previous three cases (MCG, DLCG and AWL) but nevertheless we consider this case since the vacuum is smoothed fairly well. MAG is the gauge which maximizes \begin{equation} R=\sum_{s,\hat\mu}{\rm Tr}\Big(\sigma_3 U(s,\mu) \sigma_3 U^{\dagger}(s,\mu)\Big) \label{R} \end{equation} with respect to local gauge transformations. Then there remains $U(1)$ symmetry to which the Landau gauge fixing is applied, i.e., $ \sum_{s,\mu} cos \theta^3_{\mu}(s)$ is maximized\cite{Bali:1996dm}. \end{enumerate} \vspace{0.5cm} \subsubsection{Extraction of infrared monopole loops} An additional improvement is obtained when we extract important long monopole clusters only from total monopole loop distribution. Let us see a typical example of monopole loop distributions in each gauge in comparison with that without any gauge fixing starting from a thermalized vacuum at $\beta=3.6$ on $24^4$ lattice. They are shown in Table \ref{Tab:Mdist}. One can find almost all monopole loops are connected and total loop lengths are very large when no gauge fixing (NGF) is applied as shown in the NGF case. On the other hand, monopole loop lengths become much shorter in all smooth gauges discussed here. Also it is found that only one or few loops are long enough and others are very short as observed similarly in old papers in MAG. The long monopole clusters are called as infrared monopoles and they are the key ingredient giving confinement as shown in the old papers\cite{Ejiri:1994uw}. It is important that in addition to MAU1, all other three MCG, DLCG and AWL cases also have similar behaviors. Since small separate monopole loops can be regarded as lattice artifacts, we extract only infrared monopoles alone. Although there observed only one infrared monopole loop in almost all cases, there are some vacua (especially for large beta) having two or three separate long loops which can be seen as infrared one, since they have much longer length than other shorter ones. We here define as infrared monopoles as all loops having loop lengths longer than $10\%$ of the longest one. The cutoff value is not so critical. Actually the definition of infrared loops itself has an ambiguity, since even in the longest loop, we can not separate out some short artifact loops attached accidentally to the real infrared long loop. But such an ambiguity gives us numerically only small effects as seen from the studies of different cutoff values. 
\begin{figure}[htb] \caption{The VNABI (Abelian-like monopoles) density versus $a(\beta)$ in MCG on $48^4$. Top: total density; bottom: infrared density. $n^3$ in the legend means $n$-step blocked monopoles.} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=8cm,height=6.cm]{fig2a.eps} \label{fig_MCG-a} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=8cm,height=6.cm]{fig2b.eps} \label{fig_IF_MCG-a} \end{minipage} \end{figure} \begin{figure*}[tbh] \caption{The VNABI (Abelian-like monopoles) density versus $b=na(\beta)$ in MCG on $48^4$. Top: total density; bottom: infrared density.} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=14cm,height=10.cm]{fig3a.eps} \label{fig_MCG-b} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=14cm,height=10.cm]{fig3b.eps} \label{fig_IF_MCG-b} \end{minipage} \end{figure*} \begin{figure*}[hbt] \caption{The fit of the infrared VNABI (Abelian-like monopoles) density data in MCG on $48^4$ lattice to Eq.(\ref{eq:rho-b}). } \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=10cm,height=7.cm]{fig4.eps} \label{fig_f(b)} \end{minipage} \end{figure*} \begin{figure*}[hbt] \caption{The VNABI (Abelian-like monopole) density at $b=0.5, 1.0, 1.5, 2.0$ for different $n$ in MCG on $48^4$. The data used are derived by a linear interpolation of two nearest data below and above for the corresponding $b$ and $n$. As an example, see the original data at $b=1.0$ in Table\ref{Tab_b=1}.} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=12cm,height=7.cm]{fig5.eps} \label{fig_MCG-b1} \end{minipage} \end{figure*} \begin{table}[htbp] \caption{IF monopole density $\rho_{IF}$ around $b=1.0$ for each blocking steps $n$ in MCG case on $48^4$.\label{Tab_b=1} \begin{center} \begin{tabular}{|r|r|r|r|r|r|} \hline $n$ & $\beta$ & $b=na(\beta)$ & $db$ & $\rho_{IF}$ & error \\ \hline 3 & 3.0 & 1.1184 & 0.0012 & 3.94E-01 & 1.42E-03 \\ 3 & 3.1 & 0.9465 & 0.0024 & 4.82E-01 & 4.06E-03 \\ \hline 4 & 3.2 & 1.052 & 0.0016 & 3.99E-01 & 1.40E-02 \\ 4 & 3.3 & 0.866 & 0.0008 & 5.32E-01 & 2.37E-03 \\ \hline 6 & 3.4 & 1.092 & 0.0012 & 3.93E-01 & 2.80E-03 \\ 6 & 3.5 & 0.9318 & 0.0024 & 4.64E-01 & 7.44E-03 \\ \hline 8 & 3.6 & 1.0712 & 0.0072 & 3.77E-01 & 9.20E-03 \\ 8 & 3.7 & 0.9064 & 0.0008 & 4.75E-01 & 3.78E-03 \\ \hline 12 & 3.8 & 1.1412 & 0.0012 & 3.70E-01 & 4.43E-03 \\ 12 & 3.9 & 0.9948 & 0.0024 & 4.56E-01 & 8.36E-03 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[htb] \caption{The VNABI (Abelian-like monopoles) density versus $b=na(\beta)$ in AWL on $48^4$. Top: total density; bottom: infrared density.} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=8cm,height=6.cm]{fig6a.eps} \label{fig_AWL-b} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=8cm,height=6.cm]{fig6b.eps} \label{fig_IF_AWL-b} \end{minipage} \end{figure} \begin{figure}[hbt] \caption{The VNABI (Abelian-like monopoles) density versus $b=na(\beta)$ in DLCG on $24^4$. } \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=8cm,height=6.cm]{fig7.eps} \label{fig_DLCG-b} \end{minipage} \end{figure} \subsubsection{Blockspin transformation} Block-spin transformation and the renormalization-group method is known as the powerful tool to study the continuum limit. We introduce the blockspin transformation with respect to Abelian-like monopoles. 
The idea was first introduced by Ivanenko et al.\cite{Ivanenko:1991wt} and applied in obtaining an infrared effective monopole action in Ref.\cite{Shiba:1994db}. The $n$ blocked monopole has a total magnetic charge inside the $n^3$ cube and is defined on a blocked reduced lattice with the spacing $b=na$, $a$ being the spacing of the original lattice. The respective magnetic currents are defined as \begin{eqnarray} k_{\mu}^{(n)}(s_n) &=& \frac{1}{2}\epsilon_{\mu\nu\rho\sigma} \partial_{\nu}n_{\rho\sigma}^{(n)}(s_n+\hat{\mu}) \nonumber\\ & = & \sum_{i,j,l=0}^{n-1}k_{\mu}(ns_n \nonumber\\ && +(n-1)\hat{\mu}+i\hat{\nu} +j\hat{\rho}+l\hat{\sigma}), \label{excur}\\ n_{\rho\sigma}^{(n)}(s_n) &=& \sum_{i,j=0}^{n-1} n_{\rho\sigma}(ns_n+i\hat{\rho}+j\hat{\sigma}),\nonumber \end{eqnarray} where $s_n$ is a site number on the reduced lattice. For example, \begin{eqnarray*} k_{\mu}^{(2)}(s_2)&=& \sum_{i,j,l=0}^{1}k_{\mu}(2s_2+\hat{\mu}+i\hat{\nu} +j\hat{\rho}+l\hat{\sigma}),\\ k_{\mu}^{(4)}(s_4)&=&\sum_{i,j,l=0}^{3}k_{\mu}(4s_4+3\hat{\mu}+i\hat{\nu} +j\hat{\rho}+l\hat{\sigma}) \nonumber \\ &=&\sum_{i,j,l=0}^{1}k_{\mu}^{(2)}(2s_4+\hat{\mu}+i\hat{\nu} +j\hat{\rho}+l\hat{\sigma}). \end{eqnarray*} These equations show that the relation between $k_{\mu}^{(4)}(s_4)$ and $k_{\mu}^{(2)}(s_2)$ is similar to that between $k_{\mu}^{(2)}(s_2)$ and $k_{\mu}(s)$ and hence one can see the above equation (\ref{excur}) corresponds to the usual block-spin transformation. After the block-spin transformation, the number of short lattice artifact loops decreases while loops having larger magnetic charges appear. We show an example of the loop length and loop number distribution of the four step ($n=4$ ) blocked monopoles in Table\ref{Tab:Mdist3} with respect to the same original vacuum as in Table\ref{Tab:Mdist}. For reference, we show the relation between the spacing of the blocked lattice and $\beta$ in Fig.\ref{fig_b-beta}. In Fig.1 and in what follows we present spacings $a$ and $b$ in units of $1/\sqrt{\sigma}$. \begin{figure}[hbt] \caption{The VNABI (Abelian-like monopoles) density versus $b=na(\beta)$ for $k^2$ and $k^3$ components in MAU1 on $48^4$. Top: total density; bottom: infrared density.\label{fig_MA_k23}} \begin{minipage}[b]{0.9\linewidth} \centering \hspace*{-1cm} \includegraphics[width=9cm,height=6.cm]{fig8a.eps} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \hspace*{-1cm} \includegraphics[width=9cm,height=6.cm]{fig8b.eps} \end{minipage} \end{figure} \subsection{Numerical results} Now let us show the simulation results with respect to VNABI (Abelian-like monopole ) densities. Since monopoles are three-dimensional objects, the density is defined as follows: \begin{eqnarray} \rho=\frac{\sum_{\mu,s_n}\sqrt{\sum_a(k_{\mu}^a(s_n))^2}}{4\sqrt{3}V_nb^3},\label{eq:Mdensity} \end{eqnarray} where $V_n=V/n^4$ is the 4 dimensional volume of the reduced lattice, $b=na(\beta)$ is the spacing of the reduced lattice after $n$-step blockspin transformation. $s_n$ is the site on the reduced lattice and the superscript $a$ denotes a color component. Note that $\sum_a(k_{\mu}^a)^2$ is gauge-invariant in the continuum limit. Although the global color invariance is exact except in MAU1 gauge, the average of the density of each color component of $|k_{\mu}^a|$ is not equal to the average of the above $\rho$, since two or three colored monopoles can run on the same dual links. In general, the density $\rho$ is a function of two variables $\beta$ and $n$. 
\subsubsection{Scaling} For the purpose of studying the continuum limit, it is usual to analyse scaling behaviors. First of all, let us show the data of MCG case in Fig.\ref{fig_MCG-a}. In this Figure and in what follows we present the monopole density $\rho$ in units of $\sigma^{1.5}$. When the scaling exists for both the string tension and the monopole density, we expect $\rho\to\textrm{const}$ as $a(\beta)\to 0$ and $V\to\infty$, since $a(\beta)$ is measured in unit of the string tension. In the case of total monopole density such a behavior is not seen yet. When infrared monopoles alone and blocked monopoles are considered, the behavior becomes flatter as seen from Fig.\ref{fig_IF_MCG-a}. But still this scaling is not conclusive. We need to study larger $\beta$ regions on larger lattice volumes. These features are very much similar in other smooth gauges as AWL, DLCG and MAU1 and so their data are not shown here. \subsubsection{Scaling under the block-spin transformations} It is very interesting to see that more beautiful and clear scaling behaviors are observed when we plot $\rho(a(\beta),n)$ versus $b=na(\beta)$. As one can see from the figures shown below for various smooth gauges considered in this work, one can see a universal function $\rho(b)$ for $\beta=3.0\sim3.9$ ($\beta=3.3\sim3.7$) and $n=1,2,3,4,6,8,12$ ($n=1,2,3,4,6$) on $48^4$ ($24^4$) lattice. Namely \textit{$\rho(a(\beta),n)$ is a function of $b=na(\beta)$ alone.} Thus we observe clear indication of the continuum ($a(\beta)\to 0$) limit for the lattice VNABI studied in this work. \begin{figure*}[htb] \caption{The VNABI (Abelian-like monopoles) density (\ref{eq:Mdensity}) versus $b=na(\beta)$ in MAU1 on $48^4$. Top: total density; bottom: infrared density.\label{fig_MA-b}} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=12cm,height=8.cm]{fig9a.eps} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=12cm,height=8.cm]{fig9b.eps} \end{minipage} \end{figure*} \subsubsection{MCG case} First we show the case of MCG gauge-fixed vacua in details. As can be seen from Fig.\ref{fig_MCG-b}, data for $\rho(a(\beta),n)$ can be expressed by a function of one argument $b=na(\beta)$ alone. There is a very beautiful scaling behavior for the range of $\beta=3.0\sim3.9$ and $n=1,2,3,4,6,8,12$. When we are restricted to long infrared monopoles alone, the density becomes substantially reduced for small $b<0.5$ region. But the scaling also can be seen except for small $b$ region as shown in Fig.\ref{fig_IF_MCG-b}. The violation of scaling for small $b$ region is mainly due to the ambiguity of extracting infrared monopoles. When we restrict ourselves to the data for $b\ge 0.5$, the scaling function $\rho(b)$ is obtained using the $\chi^2$ fit to a simple function as shown in Fig.\ref{fig_f(b)}: \begin{eqnarray} \rho(b)&=&\exp(a_1 + a_2b + a_3b^2),\label{eq:rho-b}\\ a_1&=& 0.5302(141), a_2=-1.4756(158), a_3= 0.1304(35). \nonumber \end{eqnarray} But the fit is not good enough, since $\chi^2/N_{dof}=12.56$ for $N_{dof}=44$. Here we show the function (\ref{eq:rho-b}) only for the purpose of illustration, since we have not found a simple but better fit. To see in more details, let us consider the data points at $b=0.5, 1.0, 1.5, 2.0$ for each $n$. Especially the data at $b=1.0$ can be fixed from the data at 5 different values of $\beta$ from $3.0\le\beta\le 3.9$ as seen from Fig.\ref{fig_b-beta} and Table\ref{Tab_b=1}. 
The scaling behavior can also be seen clearly in the density plots for different $n$ at $b=1.0, 1.5, 2.0$, as shown in Fig.~\ref{fig_MCG-b1}. However, a scaling violation is seen at $b=0.5$~\cite{footnote3}. \subsubsection{AWL case} Very similar behaviors are seen in the AWL gauge. Again, clear scaling behaviors over the range $\beta=3.0\sim3.9$ and $n=1,2,3,4,6,8,12$ are seen in Fig.~\ref{fig_AWL-b}. For infrared monopoles, however, a scaling violation is observed in the small-$b$ region, as shown in Fig.~\ref{fig_IF_AWL-b}. \subsubsection{DLCG case} Since DLCG gauge fixing requires much computer time on larger lattices, we evaluate the monopole density only on the $24^4$ lattice. As seen from Fig.~\ref{fig_DLCG-b}, a scaling behavior is found, although small deviations exist in the small-$b$ region. \subsubsection{MAU1 case} Now we discuss the MAU1 gauge. In this gauge the global isospin symmetry is broken, so let us first evaluate the monopole density in each color direction, namely \begin{eqnarray} \rho^a=\frac{\sum_{\mu,s_n}|k_{\mu}^a(s_n)|}{4V_nb^3}. \end{eqnarray} As expected, we find $\rho^1\sim\rho^2\neq\rho^3$, so we show $\rho^2$ and $\rho^3$. The results are shown in Fig.~\ref{fig_MA_k23}. The scaling is seen clearly for the off-diagonal $k^2$ currents, but a violation is seen for the diagonal $k^3$ currents, especially in the small-$b$ region. Similar behaviors are found when we restrict ourselves to infrared monopoles. However, when we evaluate the monopole density (\ref{eq:Mdensity}), we observe clear scaling behaviors similar to those in the MCG and AWL cases. They are shown in Fig.~\ref{fig_MA-b}. \begin{figure*}[htb] \caption{Comparison of the VNABI (Abelian-like monopole) densities versus $b=na(\beta)$ in the MCG, AWL, DLCG and MAU1 cases. Only the DLCG data are on the $24^4$ lattice. Here $\rho(b)$ is the scaling function (\ref{eq:rho-b}) determined from the $\chi^2$ fit to the infrared monopole density data in MCG. Top: total density; bottom: infrared density. \label{fig_log-b}} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=12cm,height=8.cm]{fig10a.eps} \end{minipage} \begin{minipage}[b]{0.9\linewidth} \centering \includegraphics[width=12cm,height=8.cm]{fig10b.eps} \end{minipage} \end{figure*} \begin{figure}[htb] \caption{\label{fig_MCG3} Volume dependence of the VNABI (Abelian-like monopole) density in the MCG case on $48^4$ and $24^4$ lattices with the tadpole-improved gauge action. Only the data for $3.0\le\beta\le 3.6$ and $1\le n\le 6$ are plotted for comparison.} \includegraphics[width=8cm,height=6.cm]{fig11.eps} \vspace{-0.3cm} \end{figure} \begin{figure}[htb] \caption{\label{fig_impWil} Gauge-action dependence of the VNABI (Abelian-like monopole) densities in the DLCG case on $24^4$ lattices with the tadpole-improved and Wilson gauge actions. Only the data for $3.3\le\beta\le 3.7$ and $1\le n\le 6$ are plotted.} \includegraphics[width=8cm,height=6.cm]{fig12.eps} \vspace{-0.3cm} \end{figure} \subsection{Gauge dependence} Since $\sum_a(k_{\mu}^a)^2$ should be gauge invariant according to our derivation in Section~\ref{Sec2}, we compare the data obtained in the different smooth gauges. Fig.~\ref{fig_log-b} shows the comparison of the data in the four gauges (MCG, AWL, DLCG and MAU1). The data obtained in these four different gauges are in good agreement with each other, providing a strong indication of gauge independence.
\textit{This is the main result of this work.} Note that in the MAU1 gauge the global color invariance is broken, and the off-diagonal color components of the gauge fields are usually said to contain large lattice artifacts. Here, however, we performed an additional U1 Landau gauge fixing with respect to the $U(1)$ symmetry remaining after MA fixing, which seems to make the vacua as smooth as those in the MCG case. The fact that the scaling function $\rho(b)$ obtained in the MCG gauge reproduces the data of the other three smooth gauges suggests that it is close to the smallest density, corresponding to the continuum limit without large lattice-artifact effects. In other, non-smooth gauges, or without any gauge fixing (NGF), $\rho$ does not satisfy the scaling and actually becomes much larger. This is due to our inability to suppress lattice artifacts in non-smooth gauges or without gauge fixing. \subsection{Volume dependence in the MCG case} The volume dependence is studied by plotting the data on the $48^4$ and $24^4$ lattices in MCG for the same $\beta$ region $(3.0\le\beta\le3.6)$ and blocking steps $(1\le n\le 6)$, as shown in Fig.~\ref{fig_MCG3}. We found sizable finite-volume effects only for $\beta=3.7$ (not shown in the figure), where the physical lattice size for $L=24$ becomes $La < 2.7/\sqrt{\sigma}$. The volume dependence for $3.0\le\beta\le3.6$ is very small, as seen from Fig.~\ref{fig_MCG3}. \subsection{Gauge action dependence\label{sec8}} Let us briefly check how the gauge action adopted here improves the behavior of the density $\rho$ by comparing the data obtained with the tadpole-improved action and with the simple Wilson gauge action. The comparison is shown in Fig.~\ref{fig_impWil}. The density with the Wilson action is higher, especially for $b\le 1.0$, so a considerable improvement is obtained with the tadpole-improved gauge action. \section{Conclusions} In conclusion, we have proposed a new color confinement scheme, which is summarized as follows: \begin{enumerate} \item VNABI is equal to the Abelian-like monopole coming from the violation of the Abelian-like Bianchi identities. \item VNABI satisfies the Abelian-like conservation law as well as the covariant one. Hence there are $N^2-1$ conserved magnetic charges in the case of color $SU(N)$. \item All magnetic charges are assumed to satisfy the Dirac quantization condition. \item VNABI can be defined on the lattice as lattice Abelian-like monopoles. Previous numerical results suggest that the dual Meissner effect due to condensation of VNABI must be the color confinement mechanism of QCD: the role of Abelian monopoles is played by VNABI. This is a new scheme for color confinement in QCD. \item VNABI are assumed to satisfy $[J_\mu, J_{\nu\neq\mu}]=0$, leading to simultaneous diagonalization for all $\mu$. \item Condensation of the color-invariant magnetic currents $\lambda_{\mu}$, which are the eigenvalues of VNABI $J_{\mu}$, may be a key mechanism of the physical confining vacuum. \end{enumerate} To check whether the new confinement scenario is correct in the continuum limit, the densities of VNABI defined on the lattice were studied extensively in this work. Since VNABI is equivalent to Abelian-like monopoles in the continuum, VNABI on the lattice is defined as lattice Abelian-like monopoles following DeGrand and Toussaint~\cite{DeGrand:1980eq}. Even on the lattice, this definition partially retains the topological property that VNABI satisfies in the continuum.
In the thermalized vacuum there are plenty of lattice-artifact monopoles which contribute equally to the density, so we have adopted various improvement techniques to reduce the lattice artifacts. The first is to adopt the tadpole-improved gauge action. The second is to introduce various gauges smoothing the vacuum, although gauge fixing is not necessary at all in the continuum; we have considered four smooth gauges, MCG, DLCG, AWL and MAU1. The third is to perform a block-spin renormalization-group study. With these improvement techniques, we have obtained very clear results. First of all, in the MCG, AWL and MAU1 gauges, clear scaling behaviors are observed up to 12-step block-spin transformations for $\beta=3.0\sim 3.9$: the density $\rho(a(\beta),n)$ is a function of $b=na(\beta)$ alone, i.e. $\rho(b)$. If such scaling behavior persists as $n\to\infty$, the obtained curve, which depends on $b=na(\beta)$ alone, corresponds to the continuum limit $a(\beta)\to 0$; it is just the renormalized trajectory. The second result is the gauge independence of the measured densities, at least with respect to the MCG, AWL and MAU1 smooth gauges on $48^4$ and DLCG on $24^4$ adopted here. Gauge independence is the property expected in the continuum limit, since the observed quantity $\rho$ in (\ref{eq:Mdensity}) is gauge invariant in the continuum. These results suggest that the lattice VNABI adopted here has a continuum limit, and hence that the new confinement scenario can be studied on the lattice with the use of the lattice VNABI. \vspace{.5cm} Let us note that monopole dominance and the dual Meissner effect due to VNABI as Abelian monopoles were partially demonstrated, without any smooth gauge fixing, with the use of random gauge transformations in Refs.~\cite{Suzuki:2007jp,Suzuki:2009xy}, although scaling behaviors were not studied in sufficient detail. More extensive studies of these effects, the derivation of an infrared effective VNABI action using the block-spin transformation in the smooth gauges discussed here, and its application to analytical studies of non-perturbative quantities will appear in the near future. \begin{acknowledgments} The numerical simulations of this work were done using the computer clusters HPC and SX-ACE at the Research Center for Nuclear Physics (RCNP) of Osaka University and the supercomputer at ITEP, Moscow. The authors would like to thank RCNP for their support of computer facilities. The work of VB was supported by Russian Foundation for Basic Research (RFBR) grant 16-02-01146. One of the authors (T.S.) would like to thank Prof. T. Kugo and Prof. H. Tamura for pointing out errors in the original paper and for fruitful discussions. \end{acknowledgments}
{ "attr-fineweb-edu": 1.741211, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUc0HxK7FjYEB4VAzk
\section{Experiments} We evaluate our approach on standard datasets with popular CNNs. We first compare to \textit{random pruning} and \textit{training-from-scratch} baselines to demonstrate the effectiveness of our method. We then compare to two other baselines, \textit{magnitude-based pruning} and \textit{layer-by-layer pruning}, to highlight the contributions of feature ranking and neuron importance score propagation, respectively. Finally, we benchmark the pruning results and compare to existing methods such as \cite{PerforatedCNN,Tucker,learning,pruneweigth}. \subsection{Experimental Setting}\label{sec:expset} We conduct experiments on three datasets, MNIST~\cite{lenet}, CIFAR10 and ImageNet~\cite{imagenet_cvpr09}, for the image classification task. We evaluate using five commonly used CNN architectures: \textit{LeNet} \cite{lenet}, \textit{Cifar-net}\footnote{\scriptsize\url{https://code.google.com/p/cuda-convnet/}.}, \textit{AlexNet} \cite{Alexnet}, \textit{GoogLeNet} \cite{googlenet} and \textit{ResNet} \cite{resnet}. All experiments and time benchmarks are obtained using Caffe~\cite{caffe}. The hyper-parameter of Inf-FS is a loading coefficient $\alpha \in [0, 1]$, which controls the influence of variance and correlation when measuring the importance. We conduct PCA accumulated energy analysis (results shown in the supplementary material), as suggested in \cite{Nonlinear}, to guide our choice of pruning ratios.
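As a sketch of what such an accumulated-energy analysis looks like (our reading of the procedure in \cite{Nonlinear}, with illustrative names): collect a layer's responses over a batch, diagonalize their covariance, and check what fraction of components carries a target share of the total energy.
{\small\begin{verbatim}
import numpy as np

def pca_energy_ratio(responses, target=0.95):
    # responses: (num_samples, num_neurons) activations of one layer.
    # Returns the fraction of components needed to retain `target`
    # of the accumulated PCA energy -- a rough guide to how
    # aggressively the layer can be pruned.
    centered = responses - responses.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / len(responses)
    eigvals = np.linalg.eigvalsh(cov)[::-1]   # descending
    energy = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(energy, target)) + 1
    return k / responses.shape[1]

# Toy example: a 256-neuron layer driven by only 64 latent factors.
rng = np.random.default_rng(0)
responses = rng.normal(size=(1000, 64)) @ rng.normal(size=(64, 256))
print(pca_energy_ratio(responses))  # ~0.25: most neurons look redundant
\end{verbatim}}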
\begin{figure*}[!t] \centering \subfigure[MNIST]{\label{fig:Mnist}\includegraphics[height=3.5cm,width=.24\linewidth]{image/LeNet.pdf}} \subfigure[CIFAR10]{\label{fig:cifar}\includegraphics[height=3.5cm,width=.24\linewidth]{image/Cifar10.pdf}} \subfigure[ImageNet: AlexNet]{\label{fig:alexbaseline}\includegraphics[height=3.5cm,width=.24\linewidth]{image/Alex.pdf}} \subfigure[ImageNet: GoogLeNet]{\label{fig:google}\includegraphics[height=3.5cm,width=.24\linewidth]{image/Google.pdf}} \caption{Learning curves of the random-pruning and training-from-scratch baselines and of NISP for different CNNs on different datasets. The pruning ratio of neurons and filters is 50\%. Networks pruned by NISP (orange curves) converge the fastest with the lowest accuracy loss.}\label{fig:cifar10} \end{figure*} \subsection{Comparison with Random Pruning and Train-from-scratch Baselines}\label{sec:naiveexp} We compare to two baselines: (1) randomly pruning the pre-trained CNN and then fine-tuning, and (2) training from scratch a small CNN with the same number of neurons/filters per layer as our pruned model. We use the same experimental settings for our method and the baselines except for the initial learning rate. For training from scratch, we set the initial learning rate to the original one, while for fine-tuning tasks (both NISP and random pruning) the initial learning rate is reduced by a factor of 10. \textbf{LeNet on MNIST:} We prune half of the neurons in the FC layers and half of the filters in both convolution layers; the results are shown in Fig.~\ref{fig:Mnist}. Our method is denoted as $\textsl{NISP}_\textsl{Half}$, while the baselines that prune randomly or train from scratch are denoted as $\textsl{Random}_\textsl{Half}$ and $\textsl{Scratch}_\textsl{Half}$. Our method outperforms the baselines in three aspects. First, for fine-tuning (after pruning), unlike the baselines, our method has a very small accuracy loss at iteration 0; this implies that it retains the most important neurons, pruning only redundant or less discriminative ones. Second, our method converges much faster than the baselines. Third, our method has the smallest accuracy loss after fine-tuning. For LeNet on MNIST, our method decreases top-1 accuracy by only 0.02\% at a pruning ratio of 50\%, compared with the original network. \textbf{Cifar-net on CIFAR10:} The learning curves are shown in Fig.~\ref{fig:cifar}. Similar to the observations from the LeNet-on-MNIST experiment, our method outperforms the baselines in the same three aspects: the lowest initial accuracy loss, the highest convergence speed and the lowest accuracy loss after fine-tuning. Our method has less than 1\% top-1 accuracy loss at a 50\% pruning ratio for each layer. \textbf{AlexNet on ImageNet:} To demonstrate that our method works on large and deep CNNs, we replicate the experiments on AlexNet with a pruning ratio of 50\% for all convolution and FC layers (denoted as {$\textsl{NISP}_\textsl{CF}$} when we prune both conv and FC layers). Considering the importance of the FC layers in AlexNet, we compare one more scenario in which our approach prunes half of the filters without pruning neurons in the FC layers (denoted as $\textsl{NISP}_\textsl{C}$). We reduce the initial learning rate by a factor of 10, then fine-tune for 90 epochs and report the top-5 accuracy loss. Fig.~\ref{fig:alexbaseline} shows that in both cases (pruning both convolution and FC layers, and pruning only convolution layers), the advantages we observed on MNIST and CIFAR10 still hold.
A layer-wise computational-reduction analysis showing the full-network acceleration can be found in the supplementary material. \textbf{GoogLeNet on ImageNet:} We denote the reduction layers in an inception module as ``Reduce", and the $1 \times 1$ convolution layer without reduction as ``1$\times$1". We use the quick solver from Caffe in training. We compare our method with the baselines for three pruning strategies: (\textsl{Half}) pruning all convolution layers by half; (\textsl{noReduce}) pruning every convolution layer by half except the reduction layers in inception modules; (\textsl{no1x1}) pruning every convolution layer by half except the $1 \times 1$ layers in inception modules. We show results for two of them in Fig.~\ref{fig:google} and observe patterns similar to the experiments on the other CNNs\footnote{See supplementary materials for the results of \textsl{noReduce}.}. For all GoogLeNet experiments, we train/fine-tune for 60 epochs and report the top-5 accuracy loss. \subsection{Feature Selection v.s. Magnitude of Weights} How to define neuron importance is an open problem. Besides using feature ranking to measure neuron importance, other methods \cite{pruneweigth,thinet,DeepCompress} measure it by the magnitude of weights. To study the effects of different importance criteria, we conduct experiments that fix all other parts of NISP and compare only the pruning results under different importance measures: (1) the feature selection method of \cite{Roffo_2015_ICCV} (NISP-FS) and (2) the magnitude of weights (NISP-Mag). For magnitude-based pruning, the importance of a neuron in the final response layer equals the sum of the absolute values of all weights connecting the neuron to the previous layer. To compare only the two importance metrics, we rank the neurons in the final response layer by the magnitude of their weights and propagate their importance to the lower layers. Finally, we prune and fine-tune the model in the same way as in NISP. For the ``NISP-Mag" baseline, we use both the AlexNet and Cifar-net architectures. The learning curves of these baselines are shown in Fig.~\ref{fig:maglbl}. We observe that ``NISP-FS" yields a much smaller accuracy loss at the same pruning ratio than ``NISP-Mag", but ``NISP-Mag" still outperforms the random-pruning and train-from-scratch baselines, which shows the effectiveness of NISP under different measures of importance. In the remainder of this paper, we employ the feature ranking method proposed in \cite{Roffo_2015_ICCV} in NISP. \subsection{NISP v.s. Layer-by-Layer Pruning}\label{LBL} To demonstrate the advantage of NISP's importance propagation, we compare with a pruning method that conducts feature ranking on every layer to measure neuron importance and prunes the unimportant neurons of each layer independently. All other settings are the same as for NISP. We call this method ``Layer-by-Layer" (LbL) pruning. One challenge for the ``LbL" baseline is the huge computational cost of measuring neuron importance on every layer, so we choose a small CNN trained on the CIFAR10 dataset. Fig.~\ref{fig:layer} shows that although the ``LbL" method outperforms the baselines, it performs much worse than NISP in terms of final accuracy loss at the same pruning ratio, which shows the need to measure neuron importance across the entire network, as NISP does.
To further study the advantage of NISP over layer-by-layer pruning, we define the Weighted Average Reconstruction Error (WARE) to measure the change of the retained important neurons' responses in the final response layer after pruning (without fine-tuning): \begin{equation} \text{WARE} = \frac{\sum_{m=1}^{M} \sum_{i=1}^{N} s_{i}\cdot \frac{|{\hat y}_{i,m}- y_{i,m}|}{|y_{i,m}|}}{M \cdot N}, \end{equation} where $M$ and $N$ are the numbers of samples and of retained neurons in the final response layer; $s_i$ is the importance score of the $i^{th}$ neuron; and $y_{i,m}$ and ${\hat y}_{i,m}$ are the responses of the $i^{th}$ neuron on the $m^{th}$ sample before and after pruning. We design Cifar-net-like CNNs with different numbers of Conv layers and apply NISP and LbL pruning with different pruning ratios. We report the WARE on the retained neurons in the final response layer (the ``ip1" layer in the Cifar-net-like CNNs) in Fig.~\ref{fig:LBL}. We observe that: (1) as network depth increases, the WARE of the LbL-pruned network increases dramatically, which reveals the error-propagation problem of layer-by-layer pruning, especially for deep networks, and suggests the need for a global pruning method such as NISP; (2) the WARE of the LbL method becomes much larger when the pruning ratio is large, but is much more stable when the network is pruned with NISP; (3) NISP always yields a lower WARE on the retained neurons than LbL. The small reconstruction errors on the important neurons in the final response layer obtained by NISP provide a better initialization for fine-tuning, which leads to a much lower accuracy loss in the pruned network. \begin{figure}[!t] \centering \subfigure[AlexNet on ImageNet]{\label{fig:mag}\includegraphics[height=3.5cm,width=.49\linewidth]{image/Mag}} \subfigure[Cifar-net on CIFAR10]{\label{fig:layer}\includegraphics[height=3.5cm,width=.49\linewidth]{image/LbL}} \caption{Comparison with the layer-by-layer (LbL) and magnitude-based (Mag) pruning baselines. We prune 50\% of the neurons and filters in all layers of both CNNs. NISP-FS outperforms NISP-Mag and LbL in terms of prediction accuracy.} \label{fig:maglbl} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.65\linewidth]{image/ARE.pdf} \end{center} \caption{Weighted Average Reconstruction Error (WARE) on the final responses without fine-tuning: we set pruning ratios of 25\% and 50\% and evaluate the WARE on the final responses of models with different depths pruned using NISP or LbL. It is clear that networks pruned by NISP have the lowest reconstruction errors.} \label{fig:LBL} \end{figure}
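WARE is straightforward to compute once the final-layer responses are collected before and after pruning; a minimal sketch with illustrative array names (and assuming nonzero original responses) is:
{\small\begin{verbatim}
import numpy as np

def ware(y, y_hat, s):
    # y, y_hat: (M, N) responses of the N retained final-layer
    # neurons on M samples, before and after pruning (no
    # fine-tuning).  s: (N,) importance scores.  Assumes y != 0.
    rel_err = np.abs(y_hat - y) / np.abs(y)
    return float((rel_err * s).sum() / (y.shape[0] * y.shape[1]))
\end{verbatim}}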
\begin{table}[h] \centering \setlength{\tabcolsep}{4pt} \footnotesize \begin{tabular}{@{}llccc@{}} \toprule & Model & Accu.$\downarrow$\% & FLOPs$\downarrow$\% & Params.$\downarrow$\% \\ \midrule \multirow{1}{*}{AlexNet } & NISP-A & \textbf{1.43} & \textbf{67.85} & 33.77 \\ \multirow{1}{*}{on ImageNet }& Perforated \cite{PerforatedCNN} & 2.00 & 50.00 & - \\\cmidrule{2-5} & NISP-B & \textbf{0.97} & \textbf{62.69} & 1.96 \\ & Tucker \cite{Tucker} & 1.70 & 62.55 & - \\\cmidrule{2-5} & NISP-C & \textbf{0.54} & \textbf{53.70} & 2.91 \\ & Learning \cite{learning} & 1.20 & 48.19 & - \\ \cmidrule{2-5} & NISP-D & 0.00 & 40.12 & 47.09 \\\midrule \multirow{1}{*}{GoogLeNet} & NISP & \textbf{0.21} & \textbf{58.34} & \textbf{33.76} \\ \multirow{1}{*}{on ImageNet } & Tucker \cite{Tucker} & 0.24 & 51.50 & 31.88 \\\midrule \multirow{1}{*}{ResNet}& NISP-56 & 0.03 & \textbf{43.61} & \textbf{42.60} \\ \multirow{1}{*}{on CIFAR10 } & 56-A \cite{pruneweigth} & \textbf{-0.06}\tablefootnote{A negative value here indicates an improved model accuracy.} & 10.40 & 9.40 \\ & 56-B \cite{pruneweigth} & -0.02 & 27.60 & 13.70 \\ \cmidrule{2-5} & NISP-110 & 0.18 & \textbf{43.78} & \textbf{43.25} \\ & 110-A \cite{pruneweigth} & \textbf{0.02} & 15.90 & 2.30 \\ & 110-B \cite{pruneweigth} & 0.23 & 38.60 & 32.40 \\ \midrule \multirow{1}{*}{ResNet} & \multirow{1}{*}{NISP-34-A} & \multirow{1}{*}{\textbf{0.28}} & \multirow{1}{*}{27.32} & \multirow{1}{*}{27.14} \\ \multirow{1}{*}{on ImageNet} & \multirow{1}{*}{NISP-34-B} & \multirow{1}{*}{0.92} & \multirow{1}{*}{\textbf{43.76}} & \multirow{1}{*}{\textbf{43.68}} \\ & \multirow{1}{*}{Res34 \cite{pruneweigth}} & \multirow{1}{*}{1.06} & \multirow{1}{*}{24.20} & \multirow{1}{*}{-} \\\cmidrule{2-5} & \multirow{1}{*}{NISP-50-A} & \multirow{1}{*}{\textbf{0.21}} & \multirow{1}{*}{27.31} & \multirow{1}{*}{27.12} \\ & \multirow{1}{*}{NISP-50-B} & \multirow{1}{*}{0.89} & \multirow{1}{*}{\textbf{44.01}} & \multirow{1}{*}{\textbf{43.82}} \\ & \multirow{1}{*}{Res50 \cite{thinet}} & \multirow{1}{*}{0.84} & \multirow{1}{*}{36.79} & \multirow{1}{*}{33.67} \\ \bottomrule \end{tabular} \caption{Compression benchmark. [Accu.$\downarrow$\%] denotes the absolute accuracy loss; [FLOPs$\downarrow$\%] denotes the reduction of computations; [Params.$\downarrow$\%] denotes the reduction in the number of parameters.} \label{table:others} \vspace{3pt} \end{table} \subsection{Comparison with Existing Methods}\label{existing} We compare our method with existing pruning methods on AlexNet, GoogLeNet and ResNet, and show benchmarks of several pruning strategies in Table~\ref{table:others}; additional results are provided in the supplementary materials. In Table~\ref{table:others}, for AlexNet, the pruning ratio is 50\%. NISP-A denotes pruning all Conv layers; NISP-B denotes pruning all Conv layers except Conv5; NISP-C denotes pruning all Conv layers except Conv5 and Conv4; NISP-D denotes pruning the Conv2, Conv3 and FC6 layers. For GoogLeNet, we use pruning ratios for the $3\times3$ layers similar to those in \cite{Tucker}, and we prune 20\% of the reduce layers. Our method is denoted as ``NISP". To compare theoretical speedup, we report the reduction in the number of multiplications and in the number of parameters, following \cite{Tucker} and \cite{PerforatedCNN}, and denote them as [FLOPs$\downarrow$$\%$] and [Params.$\downarrow$$\%$] in the table.
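For reference, the FLOPs bookkeeping behind such numbers counts the multiplications of each layer. The sketch below (with illustrative AlexNet-like layer sizes, our own bookkeeping rather than code from the compared methods) also shows why a uniform 50\% filter-pruning ratio can reduce a middle layer's multiplications by far more than 50\%: both its input and output channels shrink.
{\small\begin{verbatim}
def conv_mults(c_in, c_out, k, h_out, w_out):
    # Multiplications of a k x k convolution with c_in input
    # channels producing an h_out x w_out x c_out output.
    return c_in * c_out * k * k * h_out * w_out

# Illustrative AlexNet-like middle layer: 3x3, 256 -> 384, 13x13 maps.
full = conv_mults(256, 384, 3, 13, 13)
# Pruning 50% of this layer's filters and 50% of the previous
# layer's filters halves both c_out and c_in.
pruned = conv_mults(128, 192, 3, 13, 13)
print(1.0 - pruned / full)   # 0.75 reduction for this layer
\end{verbatim}}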
Pruning a CNN is a trade-off between efficiency and accuracy. We compare different methods by fixing one metric and comparing the other. On AlexNet, while achieving a smaller accuracy loss (1.43\% ours vs. 2.00\% \cite{PerforatedCNN}), our method NISP-A reduces significantly more FLOPs (67.85\%) than the method of \cite{PerforatedCNN} (50\%), denoted as ``Perforated" in the table; compared to the method of \cite{learning} (denoted as ``Learning"), our method NISP-C achieves a much smaller accuracy loss (0.54\% ours vs. 1.20\%) and prunes more FLOPs (53.70\% ours vs. 48.19\%). We achieve zero accuracy loss while reducing over 40\% of the FLOPs and 47.09\% of the parameters (NISP-D). On GoogLeNet, our method achieves a similar accuracy loss with a larger FLOPs reduction (58.34\% vs. 51.50\%). Using ResNet on the CIFAR10 dataset, with top-1 accuracy losses similar to \cite{pruneweigth} (56-A, 56-B, 110-A and 110-B), our method reduces more FLOPs and parameters. We also conduct ResNet experiments on ImageNet \cite{imagenet_cvpr09}, training a ResNet-34 and a ResNet-50 for 90 epochs. For both ResNet models, we prune 15\% and 25\% of the filters in each layer (denoted as ``NISP-X-A" and ``NISP-X-B" in Table~\ref{table:others}, where ``X" indicates the ResNet model) and obtain 27--44\% FLOPs and parameter reductions with a tiny top-1 accuracy loss, showing superior performance compared with the state-of-the-art methods \cite{pruneweigth,thinet}. \subsection{Additional Analysis} Below, we provide case studies and ablation analysis to help understand the proposed NISP pruning algorithm. \textbf{Similar Predictive Power of Networks Before/After Pruning.} To check whether the pruned network performs similarly to the original network, we compare the final classification results of the original AlexNet and the pruned, fine-tuned one on the ILSVRC2012 validation set. 85.9\% of the top-1 predictions of the two networks agree with each other, and 95.1\% of the top-1 predictions of the pruned network can be found in the top-5 predictions of the original network. These experiments show that the network pruned by NISP performs similarly to the original one. \begin{figure} \centering \subfigure[LeNet Prune 75\% and 90\%]{\label{fig:LeNetQ}\includegraphics[height=3.5cm,width=40mm]{image/LeNet_Tenth_quater.pdf}} \subfigure[AlexNet Prune 75\%]{\label{fig:AlexQ}\includegraphics[height=3.5cm,width=40mm]{image/Alex_Quarter_all.pdf}} \caption{Evaluations for different pruning ratios: (a) LeNet, pruning 75\% and 90\%; (b) AlexNet, pruning 75\%. CNNs pruned by NISP converge fastest with the lowest accuracy loss.} \label{SuperALl} \end{figure} \textbf{Sensitivity of pruning ratios.} The selection of per-layer pruning ratios given a FLOPs budget is a challenging open problem with a large search space. Due to time limitations, we either choose a single pruning ratio for all layers or replicate the pruning ratios of baseline methods (\emph{e.g.}, \cite{Tucker}), and NISP achieves a smaller accuracy loss, which shows its effectiveness. In practice, if time and GPU resources permit, one can search for optimal hyper-parameters by trying different pruning-ratio combinations on a validation set. We also evaluate NISP with very large pruning ratios. For LeNet (Fig.~\ref{fig:LeNetQ}), we test pruning ratios of 75\% (denoted as \emph{Quarter} in the figures) and 90\% (denoted as \emph{Tenth}) for both Conv and FC layers. For AlexNet (Fig.
\ref{fig:AlexQ}), we test a pruning ratio of 75\% (\emph{Quarter}) for both convolution and FC layers, with two pruning strategies: (1) pruning 75\% of the neurons in FC layers and of the filters in Conv layers, denoted as \emph{$\text{FC}$}; and (2) pruning only 75\% of the convolution filters without pruning FC layers, denoted as \emph{$\text{C}$}. These experiments show that NISP still significantly outperforms all baselines at large pruning ratios, in terms of both convergence speed and final accuracy. \section{Introduction} CNNs require a large number of parameters and a high computational cost in both the training and testing phases. Recent studies have investigated the significant redundancy in deep networks~\cite{PredictingParameters} and reduced the number of neurons and filters \cite{random,DeepCompress,pruneweigth,thinet} by pruning the unimportant ones. However, most current approaches that prune neurons and filters consider only the statistics of one layer (\emph{e.g.}, pruning neurons with small weight magnitudes \cite{pruneweigth,DeepCompress}) or of two consecutive layers \cite{thinet} to determine the ``importance" of a neuron. These methods prune the ``least important" neurons layer-by-layer, either independently \cite{DeepCompress} or greedily \cite{pruneweigth, thinet}, without considering all neurons in different layers jointly. One problem with such methods is that neurons deemed unimportant in an early layer can, in fact, contribute significantly to the responses of important neurons in later layers. Our experiments (see Sec.~\ref{LBL}) reveal that greedy layer-by-layer pruning leads to significant reconstruction-error propagation, especially in deep networks, which indicates the need for a global measurement of neuron importance across the different layers of a CNN. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{image/system.pdf} \end{center} \caption{We measure the importance of neurons in the final response layer (FRL) and derive Neuron Importance Score Propagation (NISP) to propagate the importance to the entire network. Given a pre-defined pruning ratio per layer, we prune the neurons/filters with the lower importance scores. We finally fine-tune the pruned model to recover its predictive accuracy.} \label{fig:spe} \end{figure} To address this problem, we argue that it is essential for a pruned model to retain the most important responses of the second-to-last layer before classification (the ``final response layer", FRL) in order to retain its predictive power, since those responses are the direct inputs of the classification task (as is also suggested by feature selection methods, \emph{e.g.}, \cite{Roffo_2015_ICCV}). We define the importance of neurons in early layers based on a \textbf{unified goal}: \emph{minimizing the reconstruction errors of the responses produced in the FRL.} We first measure the importance of the responses in the FRL by treating them as features and applying a feature ranking technique (\emph{e.g.}, \cite{Roffo_2015_ICCV}), and then propagate the importance of neurons backwards from the FRL to earlier layers. We prune only nodes with low propagated importance (\emph{i.e.}, those whose removal does not result in a large propagated error). From a theoretical perspective, we formulate the network pruning problem as a binary integer program that minimizes the weighted $\ell^1$ distance (weighted in proportion to the importance scores) between the original final response and the one produced by the pruned network.
We obtain a closed-form solution to a relaxed version of this objective to infer the importance score of every neuron in the network. Based on this solution, we derive the \textit{Neuron Importance Score Propagation} (NISP) algorithm, which computes all importance scores recursively, using only one feature ranking of the final response layer and one backward pass through the network, as illustrated in Fig.~\ref{fig:spe}. The network is then pruned based on the inferred neuron importance scores and fine-tuned to retain its predictive capability. We treat the per-layer pruning ratio as a pre-defined hyper-parameter, which can be determined based on the needs of specific applications (\emph{e.g.}, FLOPs, memory and accuracy constraints). The pruning algorithm is generic, since feature ranking can be applied to any layer of interest and the importance scores can still be propagated. In addition, NISP is not hardware specific. Given a pretrained model, NISP outputs a smaller network of the same type, which can be deployed on hardware devices designed for the original model. We evaluate our approach on MNIST \cite{lenet}, CIFAR10 \cite{CIFAR10} and ImageNet \cite{imagenet_cvpr09} using multiple standard CNN architectures such as LeNet \cite{lenet}, AlexNet \cite{Alexnet}, GoogLeNet \cite{googlenet} and ResNet \cite{resnet}. Our experiments show that CNNs pruned by our approach outperform networks with the same structures that are either trained from scratch or randomly pruned. We demonstrate that our approach outperforms magnitude-based and layer-by-layer pruning. A comparison of the theoretical reductions of FLOPs and numbers of parameters shows that our method achieves faster full-network acceleration and compression with lower accuracy loss; \emph{e.g.}, our approach loses 1.43\% accuracy on AlexNet and reduces FLOPs by 67.85\%, while Figurnov \emph{et al.} \cite{PerforatedCNN} lose more (2\%) and reduce FLOPs less (50\%). With almost zero accuracy loss on ResNet-56, we achieve a 43.61\% FLOPs reduction, significantly higher than the 27.60\% reduction of Li \emph{et al.} \cite{pruneweigth}. \subsection{Contribution} We introduce a generic network pruning algorithm that formulates the pruning problem as a binary integer optimization, and we provide a closed-form solution based on final-response importance. We present NISP to efficiently propagate the importance scores from the final responses to all other neurons. Experiments demonstrate that NISP leads to full-network acceleration and compression for all types of layers in a CNN with small accuracy loss. \section{Conclusion} We proposed an efficient and generic framework for CNN model compression and acceleration based on identifying the importance levels of neurons. Neuron importance scores in the layer of interest (usually the last layer before classification) are obtained by feature ranking. We further formulated the network pruning problem as a binary integer program and obtained a closed-form solution to a relaxed version of the formulation. Based on this solution, we presented the Neuron Importance Score Propagation algorithm, which efficiently propagates the importance to every neuron in the whole network. The network is pruned by removing less important neurons and fine-tuned so that its predictive capability is best preserved. Experimental results demonstrate that our method effectively reduces CNN redundancy and achieves full-network acceleration and compression.
\section*{Acknowledgement} The research was supported by the Office of Naval Research under Grant N000141612713: Visual Common Sense Reasoning for Multi-agent Activity Prediction and Recognition. \section{Related Work} There has been recent interest in reducing the redundancy of deep CNNs to achieve acceleration and compression. In \cite{PredictingParameters}, the redundancy in the parameterization of deep learning models has been studied and demonstrated. Cheng \emph{et al.} \cite{Circulant} exploited properties of structured matrices and used circulant matrices to represent FC layers, reducing storage cost. Han \emph{et al.} \cite{DeepCompress} studied weight sparsity and compressed CNNs by combining pruning, quantization and Huffman coding. Sparsity regularization terms have been used to learn sparse CNN structures in \cite{lasso,SSL,learning}. Miao \emph{et al.} \cite{miao-icde} studied network compression based on float data quantization for the purpose of massive model storage. To accelerate inference in convolution layers, Jaderberg \emph{et al.} \cite{SeperableFilter} constructed a low-rank basis of filters that are rank-1 in the spatial domain by exploiting cross-channel or filter redundancy. Liu \emph{et al.} \cite{slimLiu} imposed a scaling factor in the training process to facilitate channel-level pruning. Figurnov \emph{et al.} \cite{PerforatedCNN} sped up convolutional layers by skipping operations in some spatial positions, based on loop perforation from source-code optimization. In \cite{DentonLeCun,Nonlinear,Tucker}, low-rank approximation methods have been utilized to speed up convolutional layers by decomposing the weight matrix into low-rank matrices. Molchanov \emph{et al.} \cite{Nvidia} prune CNNs based on a Taylor expansion. Focusing on compressing the fully connected (FC) layers, Srinivas \emph{et al.} \cite{Datafree} pruned neurons that are similar to each other. Yang \emph{et al.} \cite{fry} applied the ``Fastfood" transform to reparameterize the matrix-vector multiplication of FC layers. Ciresan \emph{et al.} \cite{random} reduced the number of parameters by randomly pruning neurons. Chen \emph{et al.} \cite{hash} used a low-cost hash function to randomly group connection weights into hash buckets and then fine-tuned the network with back-propagation. Other studies focused on fixed-point computation rather than exploiting CNN redundancy \cite{precision,binary}. Another line of work studied knowledge distillation \cite{Distilling}. Wu \textit{et al.} \cite{Zuxuan} proposed to skip layers to speed up inference. Besides the above work, which focuses on network compression, other methods speed up deep network inference by refining the pipelines of specific tasks \cite{faster,yu1,yu3, ssd}. Our method prunes a pre-trained network and requires only a fast-converging fine-tuning process, rather than re-training a network from scratch. An exact measure of the importance of neurons in a CNN is very hard to obtain given the complexity of the nonlinearities. Some previous works \cite{brain2,brain3,brain} approximate it using a second-order Taylor expansion. Our work is a different approximation, based on the Lipschitz continuity of a neural network. Most similar to our approach, Li \emph{et al.} \cite{pruneweigth} pruned filters by their weight magnitude. Luo \emph{et al.} \cite{thinet} utilized statistics computed from the next layer to guide a greedy layer-by-layer pruning.
In contrast, we measure neuron importance based not only on a neuron's individual weights but also on the properties of the input data and of the other neurons in the network. Moreover, instead of pruning layer-by-layer in a greedy fashion under the assumption that one layer affects only the next, which may cause error propagation, we measure importance across the entire network by propagating it from the final response layer. \section{Our Approach} We follow a straightforward strategy to reduce CNNs: pruning kernels of convolution layers and neurons of fully connected layers. Trivial methods, such as random pruning or changing the numbers of neurons and kernels arbitrarily, may severely degrade the predictive power. We instead determine the importance levels of neurons based on a feature ranking of the final output, and then compute the importance scores of intermediate neurons by measuring their effect on the final response. Based on this analysis, we derive the Neuron Importance Score Propagation algorithm to efficiently propagate the final-response importance throughout the whole network. The framework is illustrated in Fig.~\ref{fig:spe}. Given a trained CNN, we first apply a feature ranking algorithm on the final response layer and obtain the importance score of each output neuron. The Neuron Importance Score Propagation (NISP) algorithm then propagates the importance scores throughout the network. Finally, the network is pruned based on the importance scores of the neurons and fine-tuned to recover its accuracy. \subsection{Feature Ranking on the Final Response Layer}\label{sec:featrank} The predictive capability of a neural network is highly dependent on the quality of the features in the final-layer response. Existing work has demonstrated that a subset of important features, selected based on a feature ranking of the network's final response, still retains good predictive power \cite{Roffo_2015_ICCV}. The intuition is that the final responses should play a key role in full-network pruning, since they are determined by the intermediate neurons of the network. So, in the first step, we apply feature ranking on the final responses. There are three major categories of feature selection: \emph{wrappers}, which score subsets of features using classifiers; \emph{embedding methods}, which implicitly select the features in the learning process of the classifier via regularization; and \emph{filtering methods}, which exploit intrinsic properties of the data, regardless of the classifier \cite{nips2003,Roffo_2015_ICCV}. Our method can be used with any feature selection model that can score the individual features in a set with respect to their classification power. We employ the recently introduced filtering method Inf-FS~\cite{Roffo_2015_ICCV} because of its efficiency and effectiveness for CNN feature selection. Inf-FS first constructs an affinity graph whose nodes are the features to be scored and whose edges represent feature relations such as correlation; a selection of features is represented by a path through the graph passing through those features. The model of Roffo \emph{et al.} \cite{Roffo_2015_ICCV} then utilizes properties of the power series of matrices to efficiently compute the importance of a feature with respect to all the other features, \emph{i.e.}, it integrates the importance of a feature over all paths in the affinity graph\footnote{Details of the method are provided in \cite{Roffo_2015_ICCV} and code for the method was taken from {\scriptsize\url{https://www.mathworks.com/matlabcentral/fileexchange/54763-infinite-feature-selection-2016}}.}.
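A compact sketch of this ranking step, under our simplified reading of \cite{Roffo_2015_ICCV} (Pearson correlation in place of the original rank correlation, illustrative names), is:
{\small\begin{verbatim}
import numpy as np

def inf_fs_scores(features, alpha=0.5):
    # features: (num_samples, num_features) final-layer responses.
    # The affinity mixes per-feature spread with (1 - |correlation|),
    # weighted by the loading coefficient alpha.
    std = features.std(axis=0)
    sigma = np.maximum.outer(std, std)
    corr = np.abs(np.corrcoef(features, rowvar=False))
    A = alpha * sigma / sigma.max() + (1 - alpha) * (1 - corr)
    # Integrate over paths of all lengths via the geometric series
    # of r*A, with r small enough that the series converges.
    r = 0.9 / np.abs(np.linalg.eigvals(A)).max()
    n = A.shape[0]
    S = np.linalg.inv(np.eye(n) - r * A) - np.eye(n)
    return S.sum(axis=1)   # one importance score per feature
    # e.g. scores = inf_fs_scores(final_layer_responses)
\end{verbatim}}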
The model in Roffo \emph{et al.} \shortcite{Roffo_2015_ICCV} utilizes properties of the power series of matrices to efficiently compute the importance of a feature with respect to all the other features, \emph{i.e.}, it is able to integrate the importance of a feature over all paths in the affinity graph\footnote{Details of the method are provided in \cite{Roffo_2015_ICCV} and code for the method was taken from {\scriptsize\url{https://www.mathworks.com/matlabcentral/fileexchange/54763-infinite-feature-selection-2016}}.}. \subsection{Neuron Importance Score Propagation (NISP)}\label{sec:rip} Our goal is to decide which intermediate neurons to delete, given the importance scores of the final responses, so that the predictive power of the network is maximally retained. We formulate this problem as a binary integer program and provide a closed-form approximate solution. Based on this theoretical analysis, we develop the Neuron Importance Score Propagation algorithm to efficiently compute the neuron importance for the whole network. \subsubsection{Problem Definition.}\label{sec:probdef} Our goal is to delete neurons, keeping a fixed number of neurons per layer while minimizing the accuracy loss. Since model accuracy depends on the final responses, we define our objective as minimizing the weighted distance between the original final responses and the final responses after the neurons of a specific layer are pruned. Throughout the rest of the paper, we use bold symbols to represent vectors and matrices. Most neural networks can be represented as a nested function. Thus, we define a network with depth $n$ as a function $F^{(n)}=f^{(n)}\circ f^{(n-1)}\circ\dots\circ f^{(1)}$. The $l$-th layer $f^{(l)}$ is represented using the following general form, \begin{equation} \label{proof1_objective_2} f^{(l)}(\mathbf x)=\sigma^{(l)}(\mathbf w^{(l)}\mathbf x+\mathbf b^{(l)}), \end{equation} where $\sigma^{(l)}$ is an activation function and $\mathbf w^{(l)},\mathbf b^{(l)}$ are the weight and bias. Networks with branch connections, such as the skip connection in ResNet, can be transformed to this representation by merging and expanding layers with added binary (zero or one) weights. We define the \textit{neuron importance score} as a non-negative value w.r.t. a neuron in the network, and use $\mathbf s_l$ to represent the vector of neuron importance scores in the $l$-th layer. Suppose $N_l$ neurons are to be kept in the $l$-th layer after pruning; we define the \textit{neuron prune indicator} of the $l$-th layer to be a binary vector $\mathbf s^*_l$ which is computed based on the neuron importance scores $\mathbf s_l$ such that $s^*_{l,i}=1$ if and only if $s_{l,i}$ is among the highest $N_l$ values in $\mathbf s_l$. \subsubsection{Objective Function.} Let $F^{(n)}$ be a neural network with $n$ layers. Suppose we have a dataset of $M$ samples, each represented by $\mathbf x^{(m)}_0$. For the $m$-th sample, we use $\mathbf x^{(m)}_l$ to represent the response of the $l$-th layer (which is the input to the $(l+1)$-th layer).
So the final output of the network is $\mathbf x^{(m)}_n$ and its corresponding neuron importance is $\mathbf s_n$. We define \begin{align}\label{eq:gnet} G^{(i,j)}=f^{(j)}\circ f^{(j-1)}\circ\cdots\circ f^{(i)} \end{align} as the sub-network of $F^{(n)}$ from the $i$-th layer to the $j$-th layer. Our goal is to compute, for the $l$-th layer, the neuron prune indicator $\mathbf s^*_l$ so that the influence of pruning the $l$-th layer on the important neurons of the final response is minimized. To accomplish this, we define an optimization objective w.r.t. the $l$-th layer neuron prune indicator, i.e., \begin{equation} \label{proof1_objective} \arg\min_{\mathbf s^*_l}\ \sum_{m=1}^M \mathcal{F}(\mathbf s^*_l|\mathbf x^{(m)}_l,\mathbf s_n; G^{(l+1,n)})~, \end{equation} which is accumulated over all samples in the dataset. The objective function for a single sample is defined as \begin{equation}\label{proof1_objective2} \mathcal{F}(\mathbf s^*_l|\mathbf x,\mathbf s_n; F)=\left\langle\ \mathbf s_n,\ |F(\mathbf x)-F(\mathbf s^*_l \odot \mathbf x)|\ \right\rangle, \end{equation} where $\langle\cdot,\cdot\rangle$ is the dot product, $\odot$ is the element-wise product and $|\cdot|$ is the element-wise absolute value. The motivation behind Eq.~\ref{proof1_objective2} is that the difference between the responses produced by the original network and those produced by the pruned network should be minimized w.r.t. the important neurons. \subsubsection{Solution.} The network pruning problem can be formulated as a binary integer program, finding the optimal neuron prune indicator in Eq.~\ref{proof1_objective}. However, it is hard to obtain an efficient analytical solution by directly optimizing Eq.~\ref{proof1_objective}. So we derive an upper bound on this objective, and show that a sub-optimal solution for the prune indicator can be obtained by minimizing the upper bound. More importantly, we find a feasible and efficient formulation for the importance scores of all neurons based on this sub-optimal solution. Recall that the $k$-th layer is defined as $f^{(k)}(\mathbf x)=\sigma^{(k)}(\mathbf w^{(k)}\mathbf x+\mathbf b^{(k)})$. We assume the activation function $\sigma^{(k)}$ is Lipschitz continuous, since this is true for most of the commonly used activations in neural networks such as the identity, ReLU, sigmoid, tanh, PReLU, etc. Then we know that for any $\mathbf x,\mathbf y$, there exists a constant $C_\sigma^{(k)}$ such that \begin{align} |\sigma^{(k)}(\mathbf x)-\sigma^{(k)}(\mathbf y)|\le C_\sigma^{(k)}|\mathbf x-\mathbf y|~. \end{align} Then it is easy to see that \begin{align}\label{lemma2} |f^{(k)}(\mathbf x)-f^{(k)}(\mathbf y)|\le C_\sigma^{(k)}|\mathbf w^{(k)}|\cdot|\mathbf x-\mathbf y|~, \end{align} where $|\cdot|$ is the element-wise absolute value. From Eq.~\ref{eq:gnet}, we see that $G^{(i,j)}=f^{(j)}\circ G^{(i,j-1)}$. Therefore, we have \begin{align} &|G^{(i,j)}(\mathbf x)-G^{(i,j)}(\mathbf y)|\nonumber\\ &~~~~~~~\le C_\sigma^{(j)}|\mathbf w^{(j)}||G^{(i,j-1)}(\mathbf x)-G^{(i,j-1)}(\mathbf y)|~.\label{eq:G} \end{align} By applying Eq.~\ref{lemma2} and then repeatedly applying Eq.~\ref{eq:G} for $j=i+1,\ldots, n$, we have \begin{align} & |G^{(i,n)}(\mathbf x)-G^{(i,n)}(\mathbf y)|\le C_\Sigma^{(i,n)}\mathbf W^{(i,n)}|\mathbf x-\mathbf y|,\label{eq:intermediate} \end{align} where $\mathbf W^{(i,j)}=|\mathbf w^{(j)}||\mathbf w^{(j-1)}|\cdots|\mathbf w^{(i)}|$ and $C_\Sigma^{(i,j)}=\prod_{k=i}^j C_\sigma^{(k)}$.
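For instance, for a ReLU layer ($C_\sigma^{(k)}=1$), the element-wise bound in Eq.~\ref{lemma2} can be verified numerically. The sketch below (our own check, assuming NumPy; the layer sizes are arbitrary) draws random points and confirms the inequality:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(6, 4))
b = rng.normal(size=6)
relu = lambda z: np.maximum(z, 0.0)   # 1-Lipschitz activation
f = lambda x: relu(w @ x + b)         # one layer f(x)

x, y = rng.normal(size=4), rng.normal(size=4)
lhs = np.abs(f(x) - f(y))             # |f(x) - f(y)|
rhs = np.abs(w) @ np.abs(x - y)       # C_sigma |w| |x - y|
assert np.all(lhs <= rhs + 1e-12)     # holds element-wise
\end{verbatim}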
Substituting $\mathbf x = \mathbf x_l^{(m)},\mathbf y=\mathbf s^*_l\odot\mathbf x_l^{(m)},i=l+1$ into Eq.~\ref{eq:intermediate}, we have \begin{align} &|G^{(l+1,n)}(\mathbf x^{(m)}_l)-G^{(l+1,n)}(\mathbf s^*_l\odot\mathbf x^{(m)}_l)|\nonumber\\ &~~~~~~\le C_\Sigma^{(l+1,n)}\mathbf W^{(l+1,n)}|\mathbf x^{(m)}_l-\mathbf s^*_l\odot\mathbf x^{(m)}_l|~. \end{align} Since $\mathbf s_n$ is a non-negative vector, \begin{align} &\mathcal{F}(\mathbf s^*_l|\mathbf x^{(m)}_l,\mathbf s_n; G^{(l+1,n)})\nonumber\\ &~~~~=\langle\mathbf s_n, |G^{(l+1,n)}(\mathbf x^{(m)}_l)-G^{(l+1,n)}(\mathbf s^*_l\odot\mathbf x^{(m)}_l)|\rangle\\ &~~~~\le \langle\mathbf s_n, C_\Sigma^{(l+1,n)}\mathbf W^{(l+1,n)}|\mathbf x^{(m)}_l-\mathbf s^*_l\odot\mathbf x^{(m)}_l|\rangle\\ &~~~~=C_\Sigma^{(l+1,n)}\langle{\mathbf W^{(l+1,n)}}^\intercal\mathbf s_n, (\mathbf 1-\mathbf s^*_l)\odot|\mathbf x^{(m)}_l|\rangle~. \end{align} Let us define $\mathbf r_l={\mathbf W^{(l+1,n)}}^\intercal\mathbf s_n$; then \begin{align} &\sum_{m=1}^M \mathcal{F}(\mathbf s^*_l|\mathbf x^{(m)}_l,\mathbf s_n; G^{(l+1,n)})\nonumber\\ &~~~~~~\le C_\Sigma^{(l+1,n)}\sum_{m=1}^M \langle\mathbf r_l,(\mathbf 1-\mathbf s^*_l)\odot|\mathbf x^{(m)}_l|\rangle\\ &~~~~~~=C_\Sigma^{(l+1,n)}\sum_{m=1}^M \sum_i r_{l,i}(1-s^*_{l,i})|x^{(m)}_{l,i}|\\ &~~~~~~=C_\Sigma^{(l+1,n)}\sum_ir_{l,i}(1-s^*_{l,i})\sum_{m=1}^M|x^{(m)}_{l,i}|~. \end{align} Since $|x^{(m)}_{l,i}|$ is bounded, there must exist a constant $C_x$ such that $\sum_{m=1}^M |x^{(m)}_{l,i}|\le C_x,\forall i$. Thus, we have \begin{align} &\sum_{m=1}^M \mathcal{F}(\mathbf s^*_l|\mathbf x^{(m)}_l,\mathbf s_n; G^{(l+1,n)})\le C\sum_ir_{l,i}(1-s^*_{l,i}),\label{eq:upbound} \end{align} where $C=C_\Sigma^{(l+1,n)}C_x$ is a constant factor.
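Eq.~\ref{eq:upbound} can be sanity-checked numerically on a toy sub-network. In the sketch below (our own construction with arbitrary layer sizes; ReLU makes every $C_\sigma^{(k)}=1$, and the bias is omitted for simplicity), the accumulated objective indeed never exceeds the bound:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
w1, w2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))
relu = lambda z: np.maximum(z, 0.0)
G = lambda x: relu(w2 @ relu(w1 @ x))    # sub-network G^{(l+1,n)}
s_n = rng.uniform(size=3)                # final importance scores
X = rng.normal(size=(100, 4))            # responses x_l^{(m)}
s_star = np.array([1, 0, 1, 1])          # a prune indicator

r = np.abs(w1).T @ (np.abs(w2).T @ s_n)  # r_l = W^T s_n
lhs = sum(s_n @ np.abs(G(x) - G(s_star * x)) for x in X)
C_x = np.abs(X).sum(axis=0).max()        # sum_m |x_{l,i}| <= C_x
assert lhs <= C_x * np.sum(r * (1 - s_star)) + 1e-9
\end{verbatim}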
Eq.~\ref{eq:upbound} gives an upper bound on our objective in Eq.~\ref{proof1_objective}. Thus, we minimize this upper bound, i.e., \begin{align} \arg\min_{\mathbf s^*_l}\sum_ir_{l,i}(1-s^*_{l,i})\Leftrightarrow\arg\max_{\mathbf s^*_l}\ \sum_is^*_{l,i}r_{l,i}~.\label{suboptimal} \end{align} The optimal solution to Eq.~\ref{suboptimal} is sub-optimal with respect to the original objective in Eq.~\ref{proof1_objective}; however, it still captures the importance of neurons. It is easy to see that if we keep $N_l$ neurons in the $l$-th layer after pruning, then the solution to Eq.~\ref{suboptimal} is that $s^*_{l,i}=1$ if and only if $r_{l,i}$ is among the highest $N_l$ values in $\mathbf r_l$. According to the definition of the neuron prune indicator in Sec.~\ref{sec:probdef}, $\mathbf r_l={\mathbf W^{(l+1,n)}}^\intercal\mathbf s_n$ is therefore a feasible choice for the importance scores of the $l$-th layer response. This conclusion can be applied to every layer in the network. Based on this result, we define the neuron importance of a network as follows. \begin{definition}[Neuron importance score]\label{def:ri} Given a neural network $F^{(n)}$ containing $n$ layers and the importance score $\mathbf s_n$ of the last layer response, the importance score of the $k$-th layer response can be computed as \begin{align} \mathbf s_k=|\mathbf w^{(k+1)}|^\intercal|\mathbf w^{(k+2)}|^\intercal\cdots|\mathbf w^{(n)}|^\intercal \mathbf s_n, \end{align} where $\mathbf w^{(i)}$ is the weight matrix of the $i$-th layer. \end{definition} An important property of neuron importance is that it can be computed recursively (or propagated) along the network.
\begin{proposition}[Neuron importance score propagation] \label{def:rip} The importance score of the $k^\text{th}$ layer response can be propagated from the importance score of the $(k+1)^\text{th}$ layer by \begin{align} \mathbf s_k=|\mathbf w^{(k+1)}|^\intercal\mathbf s_{k+1},\label{eq:rip} \end{align} where $\mathbf w^{(k+1)}$ is the weight matrix of the $(k+1)^\text{th}$ layer. \end{proposition} \subsubsection{Algorithm} We propose the \textit{Neuron Importance Score Propagation} (NISP) algorithm based on Proposition~\ref{def:rip}. Initially, we have the importance score of every neuron in the final layer of the network. Definition~\ref{def:ri} shows that the importance score of every other layer in the network is directly correlated with the importance of the final response. However, instead of computing the importance expensively using Definition~\ref{def:ri}, we see from Eq.~\ref{eq:rip} that the importance score of a lower layer can be propagated directly from the adjacent layer above it. An equivalent form of Eq.~\ref{eq:rip} is \begin{equation} \textstyle s_{k,j}=\sum_i |w^{(k+1)}_{i,j}|s_{k+1,i},\label{eq:prop} \end{equation} where $s_{k,j}$ is the importance score of the $j$-th neuron in the $k$-th layer response. We conclude from Eq.~\ref{eq:prop} that the importance of a neuron is a weighted sum over all the subsequent neurons that are directly connected to it. This conclusion also applies when the network has branch connections, i.e., when a layer is directly connected to multiple layers. The importance of batch normalization layers is propagated identically, since the connections between neurons form a one-to-one mapping. An exception is the max-pooling layer, for which we assume a uniform probability for each neuron to be the maximum in its window; in our implementation, NISP therefore treats max-pooling in the same way as average-pooling. Our algorithm starts with the final layer importance and repeats the propagation (Eq.~\ref{eq:prop}) to obtain the importance of all neurons in the network with a single backward pass (Fig.~\ref{fig:spe}). \subsection{Pruning Networks Using NISP} \label{sec:prune} Given target pruning ratios for each layer, we propagate the importance scores, compute the prune indicator of neurons based on their importance scores, and remove neurons with prune indicator value $0$. The importance propagation and layer pruning happen jointly in a single backward pass, and the importance of a pruned neuron is not propagated to any lower layers. For most of the layers, e.g., fully connected layers, we prune the neurons independently. For a convolution layer, we prune a whole channel of neurons at the same time. Suppose a convolution layer has an output tensor with $C$ channels; then the assignment of the prune indicator is essentially optimizing the same objective function as in Eq.~\ref{suboptimal}, but with an additional constraint that the set of neurons within the same channel should share the same prune indicator value. In this case, Eq.~\ref{suboptimal} becomes $\arg\max \sum_{c=1}^C s_{l,c}^\text{chan*}\left( \sum_{j\in idx_{l,c}^\text{chan}}r_{l,j}\right)$, where $s_{l,c}^\text{chan*}$ is the prune indicator of the $c$-th channel of the $l$-th layer and $idx_{l,c}^\text{chan}$ is the set of indices of neurons in the $c$-th channel of the $l$-th layer. Thus, the importance score of a channel can be computed as the summation of the importance scores of the neurons within this channel\footnote{More details can be found in the supplementary materials.}.
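To make the procedure concrete, here is a minimal sketch of NISP for a small fully connected network (the sizes, seed, and function names are our own illustration, not code from the paper; for a convolutional layer one would additionally sum the propagated scores within each channel, as described above):
\begin{verbatim}
import numpy as np

def nisp_scores(weights, s_final):
    # One backward pass of s_k = |w^(k+1)|^T s_{k+1}.
    scores = [s_final]
    for w in reversed(weights):
        scores.append(np.abs(w).T @ scores[-1])
    return scores[::-1]  # scores[k] = importance of layer-k response

rng = np.random.default_rng(0)           # toy MLP: 8 -> 6 -> 5 -> 4
weights = [rng.normal(size=(6, 8)),
           rng.normal(size=(5, 6)),
           rng.normal(size=(4, 5))]
s = nisp_scores(weights, s_final=rng.uniform(size=4))
for k, n_keep in {1: 4, 2: 3}.items():   # neurons kept per layer
    kept = np.sort(np.argsort(s[k])[-n_keep:])
    print("layer", k, "keeps neurons", kept)
\end{verbatim}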
\section{Supplementary Material} Despite the impressive predictive power of deep networks on a wide range of tasks \cite{faster, xu1,xu3,yu1,yu2,peng,yu3,yu6,yu7,xu2,yu4,yu5}, the redundancy in the parameterization of deep learning models has been studied and demonstrated \cite{PredictingParameters}. We present NISP to efficiently propagate the importance scores from the final responses to all other neurons, guiding network pruning to achieve acceleration and compression of a deep network. In this supplementary material, we give the details of how neuron importance is propagated from the final response layer, along with some additional experiments. \subsection{Neuron Importance Score Propagation (NISP)} Given the importance of a neuron, we first identify the positions in the previous layer that are used as its input, then propagate the importance to those positions proportionally to the weights. We only propagate the importance of the selected feature extractors to the previous layers and ignore the pruned ones. The NISP process can be divided into three classes: from a 1-way tensor to a 1-way tensor, e.g., between FC layers; from a 1-way tensor to a 3-way tensor, e.g., from an FC layer to a conv/pooling layer; and from a 3-way tensor to a 3-way tensor, e.g., from a pooling layer to a conv layer. We simplify NISP by ignoring the propagation of the bias. \subsection{NISP: from 1-way tensor to 1-way tensor} Given an FC layer with $M$ input neurons and $N$ output neurons, the $N\times 1$ importance vector of the output feature is $\mathbf{S_{FC_{out}}}=\left [{S_{FC_{out}}}_1,{S_{FC_{out}}}_2 \dots {S_{FC_{out}}}_N\right ]^\text{T}$. We use $\mathbf{W_{FC}}\in \mathbb{R}^{M\times N}$ to denote the weights of the FC layer. The importance vector of the input neurons is: \begin{equation} \label{RI_FC_FC} \mathbf{S_{FC_{in}}}=|\mathbf{W_{FC}}| \cdot \mathbf{S_{FC_{out}}}~, \end{equation} where $|\cdot|$ is the element-wise absolute value. \subsection{NISP: from 1-way tensor to 3-way tensor} Given an FC layer with a 3-way tensor as input and $N$ output neurons, the input has a size of $X \times X \times C$, where $X$ is the spatial size and $C$ is the number of input channels. The input can be the response of a convolutional layer or a pooling layer. We use $\mathbf{W_{FC}}\in \mathbb{R}^{(X^2 \times C)\times N}$ to denote the weights of the FC layer. The flattened importance vector $\mathbf{S_{in}} \in \mathbb{R}^ {(X^2 \times C)\times 1}$ of the input tensor is: \begin{equation} \label{RI_FC_conv} \mathbf{S_{in}}=|\mathbf{W_{FC}}| \cdot \mathbf{S_{FC_{out}}}. \end{equation} \subsection{NISP: from 3-way tensor to 3-way tensor} \subsubsection{Convolution Layer.} We derive NISP for a convolutional layer, which is the most complicated case of NISP between 3-way tensors. NISP for pooling and local response normalization (LRN) can be derived similarly. Consider a convolutional layer with input 3-way tensor $\mathbf{{conv_{in}}} \in \mathbb{R}^ {X\times X \times N }$ and output tensor $\mathbf{{conv_{out}}} \in \mathbb{R}^ {Y\times Y \times F}$, where the filter size is $k$, the stride is $s$, and the number of padded pixels is $p$. During the forward propagation, convolution consists of multiple inner products between a kernel $\mathbf{k}_f \in \mathbb{R}^ {k\times k \times N}$ and the corresponding receptive cubes to produce the output responses.
Fixing input channel $n$ and output channel $f$, the spatial convolutional kernel is $\mathbf{k}_{fn}$. For position $i$ in output channel $f$, the corresponding response is defined in Equation \ref{fw_conv}: \begin{equation} \label{fw_conv} R_f(i)=\sum_n \mathbf{k}_{fn} \cdot \mathbf{in}(i), \end{equation} where $\mathbf{in}(i)$ is the corresponding 2-D receptive field in the $n^{th}$ input channel. Given the importance cube of the output response $\mathbf{{S_{out}}} \in \mathbb{R}^ {Y \times Y \times F}$, we use a similar linear computation to propagate the importance from the output response to the input: \begin{equation} \label{bw_conv} S_n(i)=\sum_f \mathbf{k}_{fn} \cdot \mathbf{S}_{out}(i), \end{equation} where $S_n(i)$ is the importance of position $i$ in the $n^{th}$ input channel, and $\mathbf{S}_{out}(i)$ is the corresponding 2-D matrix containing the output positions whose responses are computed from the value at that input position during forward propagation. We propagate the importance proportionally to the weights as described in Algorithm \ref{convBP}. \begin{algorithm}[!t] \caption{NISP: convolutional layer}\label{convBP} \begin{algorithmic}[1] \State $\mathbf{Input: } \text{ weights of the conv layer } \mathbf{W} \in \mathbb{R}^ {k\times k \times N \times F} $ \State $\text{, flattened importance of the $f^{th}$ output channel }$ \State $\mathbf{S}_{out}^f \in \mathbb{R}^ {1 \times( Y \times Y )}$ \For {n in 1 \dots N} \For {f in 1 \dots F} \State $\mathbf{k}_{fn} \gets |\mathbf{W}[:,:,n,f]|$ \State $\text{Construct } \mathbf{BP}_{conv}^{fn}$ as \eqref{BP_conv} and \eqref{b_c} \State $\mathbf{S}_{in}^{fn} \gets \mathbf{S}_{out}^f \cdot \mathbf{BP}_{conv}^{fn}$ \EndFor \State $\mathbf{S}_{in}^{n} \gets \sum_f \mathbf{S}_{in}^{fn}$ \EndFor \State $\mathbf{S}_{in} \gets [\mathbf{S}_{in}^{1},\mathbf{S}_{in}^{2} \dots, \mathbf{S}_{in}^{N}]$ \State \text{end} \end{algorithmic} \end{algorithm} The propagation matrices used in Algorithm \ref{convBP} are defined in \eqref{BP_conv} and \eqref{b_c}: \begin{equation} \label{BP_conv} \mathbf{BP}_{conv}^{fn}=\left[ \begin{aligned} \mathbf{b}_1^{fn} \dots\ \ \mathbf{b}_j^{fn} \ \ & \dots \,\mathbf{b}_k^{fn} \\ \mathbf{b}_1^{fn} \dots\ \ &\mathbf{b}_j^{fn} \ \ \dots \,\mathbf{b}_k^{fn} \\ \vdots \\ \mathbf{b}_1^{fn} &\dots\ \ \mathbf{b}_j^{fn} \ \ & \dots \,\mathbf{b}_k^{fn} \\ \end{aligned} \right], \end{equation} where $\mathbf{b}_i^{fn}$ is the building block of size $Y \times X$ defined as: \begin{equation} \label{b_c} \mathbf{b}_{i}^{fn}=\left[ \begin{aligned} \mathbf{k}_{fn}[i,1] \dots\ \ & \dots \mathbf{k}_{fn}[i,k] \\ \mathbf{k}_{fn}[i,1]&\dots \ \ \dots \mathbf{k}_{fn}[i,k] \\ \vdots \\ &\mathbf{k}_{fn}[i,1] \dots\ \ \dots \mathbf{k}_{fn}[i,k] \end{aligned} \right]. \end{equation} Equation \ref{bw_conv} implies that the propagation of importance between 3-way tensors in convolutional layers can be decomposed into propagation between 2-D matrices. Fixing the input channel $n$ and the output channel $f$, the input layer size is $X \times X$ and the output size is $Y \times Y$. Given the flattened importance vector $\mathbf{S_{out}}^f \in \mathbb{R}^ {1\times (Y \times Y )}$ of the output layer, the propagation matrix $\mathbf{BP}_{conv}^{fn} \in \mathbb{R}^ { (Y \times Y ) \times (X \times X)}$ is used to map from $\mathbf{S_{out}}^f$ to the importance of the input layer, $\mathbf{S_{in}}^{fn} \in \mathbb{R}^ {1\times (X \times X )}$.
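Equivalently, the propagation can be computed without materializing $\mathbf{BP}_{conv}^{fn}$, by looping over output positions and splitting each output importance over its receptive cube in proportion to $|\mathbf{k}_{fn}|$. The sketch below is our own direct-loop rendering of Algorithm \ref{convBP} (it assumes no padding, ignores the bias, and requires $X=(Y-1)s+k$):
\begin{verbatim}
import numpy as np

def conv_importance_bp(S_out, K, X, s=1):
    # S_out: Y x Y x F output importance; K: F x N x k x k kernels.
    F, N, k, _ = K.shape
    Y = S_out.shape[0]
    S_in = np.zeros((X, X, N))
    for f in range(F):
        w = np.abs(K[f]).transpose(1, 2, 0)   # k x k x N
        for i in range(Y):
            for j in range(Y):
                S_in[i*s:i*s+k, j*s:j*s+k, :] += S_out[i, j, f] * w
    return S_in

# the X = 4, Y = 2, k = 3, s = 1 example, with N = F = 1
rng = np.random.default_rng(0)
print(conv_importance_bp(rng.uniform(size=(2, 2, 1)),
                         rng.normal(size=(1, 1, 3, 3)), X=4)[..., 0])
\end{verbatim}
The block structure of $\mathbf{BP}_{conv}^{fn}$ spelled out next encodes exactly these loop bounds.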
$\mathbf{BP}^{fn}_{conv}(i,j)\neq0$ implies that the $i^{th}$ position in the output layer is computed from a convolution operation involving the $j^{th}$ position in the input layer, and we propagate the importance between the two positions. We use a $Y \times X$ matrix $\mathbf{b}^{fn}_i$ to represent the mapping between a row in the output layer and the corresponding row in the input layer. In each row of $\mathbf{b}^{fn}_i$, there are $k$ non-zeros, since each position in the output layer is obtained from a region of width $k$ of the input layer. The non-zeros of each row of $\mathbf{b}^{fn}_i$ are the $i^{th}$ row of the convolutional kernel $\mathbf{k}_{fn}$. The offset of the beginning of the weights in each row is the stride $s$. The entire propagation matrix $\mathbf{BP}_{conv}^{fn}$ is a block matrix in which each submatrix is a $Y \times X$ matrix, either some $\mathbf{b}_i^{fn}$ or a zero matrix. Each row of $\mathbf{BP}_{conv}^{fn}$ has $\mathbf{b}_1^{fn}$ to $\mathbf{b}_k^{fn}$ because the height of a convolutional kernel is $k$. The offset of the beginning of the $\mathbf{b}$s in each row of $\mathbf{BP}_{conv}^{fn}$ is the stride $s$. We use the case $X=4, Y=2, k=3, s=1$ as an example, shown in Figure \ref{fig:conv}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{image/conv.pdf} \end{center} \caption{Importance propagation: convolutional layer. $X=4, Y=2, k=3, s=1$. Fixing the $f^{th}$ input channel and $c^{th}$ output channel, the upper-left X-by-X grid is the corresponding input feature map, and the upper-right Y-by-Y grid is the output map after convolution is applied. $\mathbf{k}_{fc}$ is the corresponding 2D convolutional kernel. Given the flattened importance vector for the output feature map $\mathbf{S_{out}}^{f,c}$, we use $\mathbf{BP}_{conv}$ to propagate the importance and obtain $\mathbf{S_{in}}^{f,c}$, which contains the importance of the input feature map. The structure of $\mathbf{BP}_{conv}$ is determined by the kernel size $k$ and stride $s$. } \label{fig:conv} \end{figure} \subsubsection{Pooling Layer.}\label{pooling} Assume a pooling layer with an input tensor of size $X \times X \times F$ and an output of size $Y \times Y \times F$. The pooling filter size is $k$ and the stride is $s$. The basic idea of most pooling techniques is the same: use a fixed 2-dimensional filter to abstract local responses within each channel independently. For example, in max pooling each output response is the maximum of $k \times k$ values from the input responses. Due to the large variance of the input data, it is reasonable to assume that the position of the largest value within the receptive field follows a uniform distribution. Consequently, for an output response location, the contributions from the corresponding $k \times k$ values of the input response are equal. Since pooling is a spatial operation that does not cross channels, we can propagate the importance of each channel independently.
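Under this equal-contribution assumption, the propagation for one pooling channel is a uniform un-pooling, with each output importance spread as $1/k^2$ over its window; a minimal sketch of our own (the matrix form $\mathbf{BP}_{pooling}$ is given below):
\begin{verbatim}
import numpy as np

def pool_importance_bp(S_out, X, k, s):
    # Spread each output importance uniformly over its k x k window
    # (one channel; max-pooling is treated like average-pooling).
    Y = S_out.shape[0]
    S_in = np.zeros((X, X))
    for i in range(Y):
        for j in range(Y):
            S_in[i*s:i*s+k, j*s:j*s+k] += S_out[i, j] / (k * k)
    return S_in

# the X = 4, Y = 2, k = 2, s = 2 example used below
print(pool_importance_bp(np.arange(4.0).reshape(2, 2), X=4, k=2, s=2))
\end{verbatim}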
Given the flattened importance vector $\mathbf{{S_{out}}^f} \in \mathbb{R}^ {1\times (Y \times Y )}$ of channel $f$ of the output 3-way tensor, the flattened importance vector of the input tensor is calculated as: \begin{equation} \label{RI_Pooling_Conv} \mathbf{{S_{in}}^f}=\mathbf{{S_{out}}^f}\cdot \mathbf{BP}_{pooling}, \end{equation} where $\mathbf{BP}_{pooling}$ is the back-propagation matrix of size $Y^2\times X^2$ defined as: \begin{equation} \label{BP_pooling} \mathbf{BP}_{pooling}=\left[ \begin{aligned} \mathbf{b}_p \dots\ \ \mathbf{b}_p \ \ & \dots \,\mathbf{b}_p \\ \mathbf{b}_p \dots\ \ &\mathbf{b}_p \ \ \dots \,\mathbf{b}_p \\ \vdots \\ \mathbf{b}_p & \dots\ \ \mathbf{b}_p \ \ \dots \,\mathbf{b}_p \end{aligned} \right], \end{equation} where $\mathbf{b}_p$ is the building block of size $Y \times X$ defined as: \begin{equation} \label{b_p} \mathbf{b}_{p}=\left[ \begin{aligned} 1 \dots\ \ 1 \ \ & \dots \,1 \\ 1 \dots\ \ &1 \ \ \dots \,1 \\ \vdots \\ 1 & \dots\ \ 1 \ \ \dots \,1 \end{aligned} \right]. \end{equation} Consider one channel with input size $X \times X$ and output size $Y \times Y$. Given the flattened importance vector $\mathbf{S_{out}}^f \in \mathbb{R}^ {1\times (Y \times Y )}$ of the output layer, the propagation matrix $\mathbf{BP}_{pooling} \in \mathbb{R}^ { (Y \times Y ) \times (X \times X)}$ is used to map from $\mathbf{S_{out}}^f$ to the importance of the input layer, $\mathbf{S_{in}}^f \in \mathbb{R}^ {1\times (X \times X )}$. If $\mathbf{BP}_{pooling}(i,j)=1$, the $i^{th}$ position in the output layer comes from a pooling operation involving the $j^{th}$ position in the input layer, so we propagate the importance between the two positions. We use a $Y \times X$ matrix $\mathbf{b}_p$ to represent the mapping between a row in the output layer and the corresponding row in the input layer. In each row of $\mathbf{b}_p$, there are $k$ ones, since each element in the output layer is pooled from a region of width $k$ of the input layer. The offset of the beginning of the ones is the stride $s$. The entire propagation matrix $\mathbf{BP}_{pooling}$ is a block matrix in which each submatrix is a $Y \times X$ matrix, either $\mathbf{b}_p$ or a zero matrix. Each row of $\mathbf{BP}_{pooling}$ has $k$ copies of $\mathbf{b}_p$ because the height of the pooling filter is $k$. The offset of the beginning of the $k$ copies of $\mathbf{b}_p$ in each row is the stride $s$. The ones in $\mathbf{b}_p$ are normalized by the number of positions covered by the pooling filter (the same holds for the LRN layers shown below); the other elements are all zeros. We use the case $X=4, Y=2, k=2, s=2$ as an example, shown in Figure \ref{fig:long}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{image/pool.pdf} \end{center} \caption{NISP: pooling layer. $X=4, Y=2, k=2, s=2$. The upper-left X-by-X grid is the $f^{th}$ channel of the input feature map, and the upper-right Y-by-Y grid is the output channel after pooling is applied. Given the importance vector $\mathbf{S_{out}}^f$, we use $\mathbf{BP}_{pooling}$ to propagate the importance and obtain $\mathbf{S_{in}}^f$, which contains the importance of each position of the input feature map. The structure of $\mathbf{BP}_{pooling}$ relates to the kernel size $k$ and stride $s$.} \label{fig:long} \end{figure} \subsubsection{Local Response Normalization Layer.} Krizhevsky \emph{et al.} \cite{Alexnet} proposed Local Response Normalization (LRN) to improve CNN generalization.
For cross-channel LRN, sums over adjacent kernel maps at the same spatial position produce a response-normalized activation at that position. Since LRN is a non-linear operation, it is intractable to conduct exact importance propagation between the input and output tensors. One way to approximate the propagation is to assume, given the large variance of the input data, that the kernel maps at one spatial position contribute equally to the response at that position of the output tensor. Then, consider the $X\times X \times N$ importance tensor for the response of an LRN layer with $\text{local}\_\text{size} = l$, where $l$ is the number of adjacent kernel maps summed at a spatial position. Considering all $N$ channels at a spatial position $(i,j)$, the importance vector of that spatial position is $\mathbf{S}_{out}^{ij} \in \mathbb{R}^{1 \times N}$. The corresponding importance vector of the input $\mathbf{S}_{in}^{ij} \in \mathbb{R}^{1 \times N}$ is: \begin{equation} \label{lrn} \mathbf{S}_{in}^{ij}=\mathbf{S}_{out}^{ij} \cdot \mathbf{BP}_{LRN}, \end{equation} where $\mathbf{BP}_{LRN} \in \mathbb{R}^{N \times N}$ is defined as: \begin{equation} \label{BP_LRN} \mathbf{BP}_{LRN}=\left[ \renewcommand{\arraystretch}{0.6} \setlength{\arraycolsep}{1pt} \begin{array}{cccccccccccc} &1 &1 &\cdots &1 & & & & & & & \\ &1 &1 &\cdots &1 &1 & & & & & & \\ &\cdot &\cdot &\cdot &\cdot &\cdot &\cdot & & & & & \\ & 1 &1 &\cdots &1 &1 &\cdots & & & & & \\ & & 1 &\cdots &1 &1 &\cdots &1 & & & &\\ & & & &\cdot &\cdot &\cdot &1 &1 & & &\\ & & & &1 &1 &\cdots &\cdots &\cdots & & &\\ & & & & &1 &\cdots &1 &1 &\cdots & 1 & \\ & & & & & & &1 &1 &\cdots &1 &1 \\ & & & & & & &\cdot &\cdot &\cdot &\cdot &\cdot \\ & & & & & & &1 &1 &\cdots &1 &1 \\ & & & & & & & &1 &\cdots &1 &1 \\ \end{array} \right]. \end{equation} For a cross-channel LRN, the output response tensor has the same shape as the input. For a spatial position $(i,j)$ of the output tensor, given its importance vector $\mathbf{S}_{out}^{ij}$, we construct an $N \times N$ symmetric matrix $\mathbf{BP}_{LRN}$ to propagate its importance to the corresponding input vector $\mathbf{S}_{in}^{ij}$ at that position. Since the center of the LRN operation is at position $(i,j)$, the operation covers $\frac{l+1}{2}$ positions to the left and to the right (counting the center). When the operation is conducted on the positions at the center of the vector $\mathbf{S}_{out}^{ij}$ (from column $\frac{l+1}{2}$ to $N\text{-}\frac{l+1}{2}+1$), the operation covers $l$ cross-channel positions, so the corresponding columns in $\mathbf{BP}_{LRN}$ have $l$ ones. When the LRN operation is conducted at the margin of the vector, there are missing cross-channel positions, so from column $\frac{l+1}{2}$ down to column 1 (and similarly at the bottom-right corner), the number of ones in the corresponding column of $\mathbf{BP}_{LRN}$ decreases by 1 per step from the center to the margin. We use the case $l=3, N=5$ as an example of a cross-channel LRN layer in Figure \ref{fig:lrn}. \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth]{image/LRN.pdf} \end{center} \caption{Importance propagation: LRN layer (cross-channel). $l=3, N=5$. The red vector is the cross-channel vector at spatial position $(i,j)$ of the input tensor, and the yellow vector is the cross-channel vector at the same position of the output tensor after LRN is applied.
Given $\mathbf{S_{out}}^{ij}$, we use $\mathbf{BP}_{LRN}$ to propagate the importance and obtain $\mathbf{S_{in}}^{ij}$, which contains the importance of each position of the input feature map. The structure of $\mathbf{BP}_{LRN}$ relates to the local size $l$ and the number of channels $N$.} \label{fig:lrn} \end{figure} For within-channel LRN, following our equal-distribution assumption, the importance can be propagated in the same way as for a pooling layer. \subsection{Experiments} \subsubsection{PCA Accumulated Energy Analysis.} One way to guide the selection of the pruning ratio is PCA accumulated energy analysis \cite{Nonlinear} on the responses of a layer before pruning. The analysis shows how many principal components (PCs) are needed for that layer to capture the majority of the variance of the samples, which suggests a proper range for how many neurons/kernels we should keep in that layer. We show the PCA accumulated energy analysis for the last FC layers before the classification part of LeNet (ip1) and AlexNet (fc7) in Figures \ref{fig:PCAL} and \ref{fig:PCAA}. Setting the variance threshold to 0.95, 120 out of 500 PCs are required for LeNet, and 2234 out of 4096 PCs are required for AlexNet, to capture the variance. \begin{figure}[!t] \centering \subfigure[LeNet]{\label{fig:PCAL}\includegraphics[width=.49\linewidth]{image/PCA_mnist.png}} \subfigure[AlexNet]{\label{fig:PCAA}\includegraphics[width=.49\linewidth]{image/PCA_alex.pdf}} \caption{PCA accumulated energy analysis: LeNet on MNIST (a) and AlexNet on ImageNet (b). The y axis measures the PCA accumulated energy. The x axis shows the number of PCs.} \end{figure} \subsubsection{Experiments on AlexNet: Convolutional Layers vs. FC Layers.} From the experiments in the main paper, we found that FC layers have a significant influence on accuracy loss, model size, and memory usage. To explore the impact of pruning FC layers versus convolutional layers, we conduct experiments pruning half of the neurons in the FC layers and in some convolutional layers. We categorize the 5 convolutional layers into three-level feature extractors: low (Conv1-Conv2 layers), middle (Conv3 layer) and high (Conv4-Conv5 layers). Figure \ref{fig:alexThree} displays the learning curves and shows that although FC layers are important in AlexNet, powerful local feature extractors (more kernels in convolutional layers) can compensate for the loss from pruning neurons in FC layers, or even achieve better predictive power (High and Low curves). \begin{figure}[!t] \begin{center} \includegraphics[width=.9\linewidth]{image/Alex_Three_Level.pdf} \end{center} \caption{Learning Curves of AlexNet on ImageNet: The subscript $CF$ means we prune both convolutional kernels and neurons in FC layers, and $C$ means we only prune convolutional kernels. \emph{$\text{High}$}, \emph{$\text{Mid}$} and \emph{$\text{Low}$} mean we prune the entire CNN except for the high/middle/low level convolutional layers (Conv4-Conv5, Conv3 and Conv1-Conv2 respectively). } \label{fig:alexThree} \end{figure} \subsubsection{Experiments on GoogLeNet.} The learning curves for ``no\_Reduce'' are shown in Figure \ref{fig:googRe}. We observe that our importance-based pruning method leads to better initialization, faster convergence, and smaller final accuracy loss. \begin{figure}[!t] \begin{center} \includegraphics[width=.9\linewidth]{image/Reduce.pdf} \end{center} \caption{Learning Curves of GoogLeNet on ImageNet: The pruning ratio is 50\%. We prune all layers but the reduction layers in the inception modules.
Our importance-based pruning method converges much faster and achieves the smallest accuracy loss. } \label{fig:googRe} \end{figure} \subsubsection{Layer-wise Improvements.} In our experiments with AlexNet on a Titan X, the empirical computation time of the intermediate layers (all layers except the convolutional and FC layers) accounts for 17\% of the entire testing time; therefore, those layers must be considered as well when designing an acceleration method. One of our advantages over existing methods is that all layers in the network can be sped up, since the data volume or feature dimension at every layer is reduced. For example, by pruning kernels in convolutional layers, we reduce the number of both output channels of the current layer and input channels of the next layer. In theory, given a pruning ratio of 50\%, except for the first layer, whose input channels cannot be pruned, all of the convolutional layers can be sped up by $4\times$. The intermediate pooling, non-linearity and normalization layers have a theoretical speedup ratio of around $2\times$. The layer-wise acceleration ratios (both theoretical and empirical) of our method when the pruning ratio is 50\% for both convolutional layers and FC layers are shown in Figure \ref{fig:layerwise}. We observe that the theoretical and empirical speedups are almost the same for pooling, non-linearity and normalization. \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{image/layerwise.pdf} \end{center} \caption{Full-Network Acceleration of AlexNet: Pruning Half of the Kernels and Neurons.} \label{fig:layerwise} \end{figure}
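The theoretical ratios above follow from simple channel counting; the sketch below gives the arithmetic (our own illustration, assuming the FLOPs of a convolutional layer scale with the product of its input and output channel counts):
\begin{verbatim}
def conv_speedup(r, first_layer=False):
    # FLOPs ~ C_in * C_out: pruning a fraction r of the channels on
    # both sides keeps (1-r)^2 of the work; the first layer keeps
    # all of its input channels.
    kept_in = 1.0 if first_layer else 1.0 - r
    return 1.0 / (kept_in * (1.0 - r))

print(conv_speedup(0.5))                    # 4.0x, inner conv layers
print(conv_speedup(0.5, first_layer=True))  # 2.0x, first conv layer
print(1.0 / (1.0 - 0.5))                    # ~2x, pooling/ReLU/norm
\end{verbatim}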
{ "attr-fineweb-edu": 1.416992, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUc185qhDACtwISAjd
\section{Introduction} For $p>1$, the $p$-R\'{e}nyi \cite{Re} entropy of a (continuous) random vector $X$ in $\mathbb{R}^d$ distributed with density $f$ is defined by $$h_p(X) = - \frac{1}{p-1}\log \int_{\mathbb{R}^d} f(x)^p d\mu_d(x) = -\frac{1}{p-1}\log \|f\|_p^p, $$ where $\mu_d$ denotes the $d$-dimensional Lebesgue measure. As $p\to 1^+$, $h_p(X)$ converges to the usual Shannon entropy \begin{equation*} h(X) = -\int_{\mathbb{R}^d} f(x)\log f(x) d\mu_d(x)\end{equation*} (provided that the density of $X$ is sufficiently regular to justify passage to the limit). For the entropy power $N(X)=\exp(2h(X)/d)$, the fundamental entropy power inequality (EPI) of Shannon \cite{Shan} asserts that for independent random vectors $X_1$ and $X_2$, \begin{equation*} N(X_1+X_2)\geq N(Z_1+Z_2), \end{equation*} where $Z_1$, $Z_2$ are independent Gaussians satisfying $N(X_i)= N(Z_i)$, $i=1,2$. A firm connection between the EPI, the $p$-R\'{e}nyi entropy, and fundamental results like the Brunn-Minkowski and Young's convolution inequalities goes back to Dembo, Cover and Thomas \cite{DCT}. See Principe \cite{Pr} for more information about where the R\'{e}nyi entropy arises; see also Bobkov and Marsiglietti \cite{BM} for a related discussion. Recently, there has been increasing interest in $p$-R\'{e}nyi entropy inequalities. Interestingly, the following basic mathematical question is still open: \emph{Over all random variables $X$ with $h_p(X)$ some fixed quantity, what are the minimizers of the entropy $h_p(X+X')$, where $X'$ is an independent copy of $X$?} We learnt about this question from the papers of Madiman, Melbourne, Xu, and Wang \cite{MW, MMX}, who studied unifying entropy power inequalities for the R\'{e}nyi entropy, which, in the limit $p\to 1^+$, recover the statement that, over all probability distributions with $h(X)$ fixed, $h(X+X')$ is minimized if (and only if) $X$ is a Gaussian, see e.g. \cite{DCT}. Several closely related questions have been recently addressed involving the $p$-R\'enyi entropy power $N_p(X) = \exp(\tfrac{2}{d}h_p(X))$. Bobkov and Chistyakov \cite{3} showed that there is a constant $c>0$, depending on $d$ and $p$, such that $N_p(\sum_{j=1}^n X_j)\geq c\sum_{j=1}^n N_p(X_j)$ for independent random vectors $X_1, \dots, X_n$. A sharper form of the constant was subsequently found by Ram and Sason \cite{8}. Bobkov and Marsiglietti \cite{1} proved that $N_p(X_1+X_2)^{\alpha}\geq N_p(X_1)^{\alpha} +N_p(X_2)^{\alpha}$ for independent random vectors $X_1,X_2$ if $\alpha\geq \frac{p+1}{2}$. There has been considerable further recent success in extending the EPI to the R\'enyi setting \cite{2, 4, 5, 7, 8, 10}. Following \cite{LYZ, MW, MMX}, for $\beta>0$, consider the \emph{Generalized Gaussian} $$G_{\beta,p}(x) = \alpha (1-\beta|x|^2)_+^{1/(p-1)}, $$ where $\alpha$ is chosen so that $\int_{\mathbb{R}^d} G_{\beta,p}(x) d\mu_d(x) = 1$. The generalized Gaussian is the distribution with the smallest second moment for a given R\'{e}nyi entropy; see the work of Lutwak, Yang, and Zhang \cite{LYZ}, as well as earlier results of Costa, Hero, and Vignat \cite{CHV}. Madiman and Wang made the following bold conjecture (Conjecture IV.3 in \cite{MW}).
\begin{conj}[The Madiman-Wang Conjecture] If $X_j$, $j=1,\dots,n$, are independent random variables with densities $f_j$, and $Z_j$ are independent random variables distributed with respect to $G_{\beta_j,p}$, where $\beta_j$ is chosen so that $h_p(X_j) = h_p(Z_j)$, then $$h_p(X_1+\dots+X_n)\geq h_p(Z_1+\dots+Z_n).$$\end{conj} This conjecture has been confirmed in the case $p=+\infty$, see \cite{6, 12}. In this note we will show that, unfortunately, this conjecture does not hold in the special case when $d=1$, $p=2$, $n=2$ and $X_1$ and $X_2$ are identically distributed, see Section \ref{MWsec}. However, we do suspect that a minimizing distribution is a relatively small perturbation of the generalized Gaussian. Throughout this note we only consider the case where $X_1,\dots ,X_n$ are independent copies of a random variable $X$ with density $f$. The question of finding the minimizer of $h_p(X_1+\dots+ X_n)$ with $h_p(X)$ fixed can then be rephrased as a constrained maximization problem, which we introduce in Section \ref{constrained}. Subsequently, in Section \ref{variation} we take the first variation of this maximization problem. We have not been able to develop a satisfactory theory of the associated Euler-Lagrange equation (\ref{neccond}), but we show in Section \ref{MWsec} that the generalized Gaussian is not a solution to (\ref{neccond}), and so fails to be a maximizer of the extremal problem. We conclude the paper with some elementary remarks and speculation. \medskip \textbf{Acknowledgement.} The first named author is supported by NSF DMS-1830128, DMS-1800015 and NSF CAREER DMS-1847301. The second named author is supported by the NSF CAREER DMS-1753260. The third named author is supported by the NSF DMS-1812240. The fourth named author is supported by the NSF DMS-1612936. The work was partially supported by the National Science Foundation under Grant No. DMS-1440140 while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester. The authors are especially grateful to the reviewers for valuable comments and suggestions, which helped improve the paper and clarify the exposition. \section{The constrained maximization problem}\label{constrained} Denote by $\mathcal{C}_n(f)$ the $(n-1)$-fold convolution of a given function $f$ with itself, that is, $\mathcal{C}_n(f) = f*f*\cdots *f$, where there are $n$ factors of $f$ (and $n-1$ convolutions). Then $\mathcal{C}_1(f)=f$. It will be convenient to set $\mathcal{C}_0(f)= \delta_0$, the Dirac delta measure, so that $g*\mathcal{C}_0(f)=g$ for any measurable function $g$. Throughout the text, we fix $M>0$, $n\in \mathbb{N}$ and $p\in (1,\infty)$. We set $$\mathcal{F} = \bigl\{f \in L^1(\mathbb{R}^d)\cap L^p(\mathbb{R}^d), \, f\geq 0,\, \|f\|_p^p=M,\, \|f\|_1=1\bigl\} $$ and consider the extremal problem \begin{equation}\label{convprob}\begin{cases}\;\text{Maximize } \mathcal{I}(f)\stackrel{\operatorname{def}}{=}\int_{\mathbb{R}^d}[\mathcal{C}_n(f)(x)]^pd\mu_d(x)\\\text{ subject to } f\in \mathcal{F}. \end{cases} \end{equation} Put \begin{equation}\label{lambdadef}\Lambda = \Lambda(p,M) = \sup\{\mathcal{I}(f): f\in \mathcal{F}\}.\end{equation} We begin with a simple scaling lemma, which we will use often in what follows. \begin{lem}\label{scaling} Suppose that $f\in L^1(\mathbb{R}^d)\cap L^p(\mathbb{R}^d)$ is non-negative, and $\|f\|_1>0$.
The function $$\widetilde f = \frac{1}{\lambda^d \|f\|_1}f\Bigl(\frac{\cdot}{\lambda}\Bigl), \text{ with }\lambda = \Bigl(\frac{\|f\|_p^p}{M\|f\|_1^p}\Bigl)^{\tfrac{1}{d(p-1)}},$$ belongs to $\mathcal{F}$, and $$\mathcal{I}(\widetilde f) = \frac{M}{\|f\|_p^p}\frac{1}{\|f\|_1^{p(n-1)}}\mathcal{I}(f). $$ \end{lem} \begin{proof} Observe that, for any $r\in [1,\infty)$, $$\|\widetilde f\|_r^r = \frac{1}{\lambda^{d(r-1)}\|f\|_1^r}\|f\|_r^r. $$ Plugging in $r=1$ and $r=p$ (and recalling the definition of $\lambda$) we see that $\widetilde f\in \mathcal{F}$. Next, observe that $$\mathcal{C}_n(\widetilde f)(x) = \frac{1}{\|f\|_1^n\lambda^d}\mathcal{C}_n(f)\bigl(\frac{x}{\lambda}\bigl)\text{ for any }x\in \mathbb{R}^d. $$ Whence, $$\mathcal{I}(\widetilde{f}) = \frac{1}{\lambda^{d(p-1)}\|f\|_1^{pn}}\mathcal{I}(f),$$ and the proof is complete by recalling the definition of $\lambda$. \end{proof} We next prove that (\ref{convprob}) has a maximizer. A radial function $f$ on $\mathbb{R}^d$ is called decreasing if $f(y)\leq f(x)$ whenever $|y|\geq |x|$. \begin{prop}\label{existence} The problem (\ref{convprob}) has a lower-semicontinuous, radially decreasing, maximizer $Q$. \end{prop} \begin{proof} First observe that for any measurable function $f$, iterating Riesz's rearrangement inequality \cite[Theorem 3.7]{LL} yields $\mathcal{I}(f)\leq \mathcal{I}(f^*)$, where $f^*$ is the symmetric rearrangement of $f$; see \cite[Section 3.4]{B} for related multiple convolution rearrangement inequalities and their equality cases. Also, notice that if $f\in \mathcal{F}$, then $f^*\in \mathcal{F}$. Take non-negative functions $f_j\in \mathcal{F}$ such that $\Lambda =\lim_{j\to\infty}\mathcal{I}(f_j)$ (recall $\Lambda$ from (\ref{lambdadef})). By replacing $f_j$ with its symmetric rearrangement, we may assume that the $f_j$ are radial and decreasing. Passing to a subsequence if necessary, we may in addition assume that $f_j\to f$ weakly in $L^p(\mathbb{R}^d)$. Consequently, $f$ is radial, decreasing, $f\geq 0$, and $\|f\|_p^p\leq M$. (To see this, observe that the set of radial decreasing nonnegative functions with norm at most $M^{1/p}$ is a closed convex set in $L^p(\mathbb{R}^d)$, so by Mazur's Lemma, see e.g. \cite[Theorem 2.13]{LL}, this set is weakly closed.) By modifying $f$ on a set of measure zero if necessary, we may assume that $f$ is lower semi-continuous\footnote{If $f$ is discontinuous at $x\in \mathbb{R}^d$, then define $f(x) = \sup_{|y|>|x|}f(y)$ (i.e. the one-sided radial limit from the right). Then $\{f>\lambda\}$ is open for every $\lambda>0$.}. \begin{cla}\label{claim:pointwise} As $j\to \infty$, $f_j\to f$ $\mu_d$-almost everywhere.\end{cla} \begin{proof} For $r>0$, define $v_j(r) = f_j(x)$ and $v(r)=f(x)$ whenever $|x|=r$. Then, since $f_j$ converges weakly to $f$ in $L^p(\mathbb{R}^d)$, we have that whenever $I$ is a closed interval of finite Lebesgue measure in $(0,\infty)$, $$\lim_{j\to \infty}\int_I v_j(s) d\mu_1(s) = \int_I v(s) d\mu_1(s). $$ Insofar as the function $v$ is non-increasing, it has at most countably many points of discontinuity.
If $r>0$ is a point of continuity of $v$, and $I_{k} = [r-2^{-k}, r]$, then $$v(r) = \lim_{k\to\infty}\frac{1}{2^{-k}}\int_{I_{k}}v(s) d\mu_1(s) = \lim_{k\to\infty}\lim_{j\to\infty}\frac{1}{2^{-k}}\int_{I_{k}}v_j(s) d\mu_1(s), $$ but since $v_j$ is decreasing we have that $v_j(s)\geq v_j(r)$ for $s\in I_{k}$. Thus $$v(r) \geq \limsup_{j\to\infty}v_j(r). $$ Arguing similarly with intervals whose left end-point is $r$, we also have that $$v(r) \leq \liminf_{j\to\infty}v_j(r). $$ Thus $\lim_{j\to \infty}v_j = v$ at every point of continuity of $v$. If $E$ is a countable set in $(0,\infty)$, then $E\times \mathbb{S}^{d-1}$ is a Lebesgue null set in $\mathbb{R}^d$, so the claim follows. \end{proof} Notice that, as a consequence of this claim, Fatou's Lemma ensures that $\|f\|_1\leq 1$. Our next claim is \begin{cla}\label{claim:lq} If $1<q<p$, then $f_j \to f$ strongly in $L^q(\mathbb{R}^d)$ as $j\to \infty$. \end{cla} The proof of this claim is a variant of the Vitali convergence theorem (see e.g. Theorem 9.1.6 of \cite{Ros}), but observe that it does not necessarily hold if one were to remove the radially decreasing property of the functions $f_j$ (just consider a sequence of translates of a fixed function). \begin{proof} Fix $\varepsilon>0$ and $\delta>0$. Insofar as the functions $f_j$ and $f$ are radially decreasing, $$\bigcup_j\{|f_j|\geq \tfrac{\delta}{2}\}\cup\{|f|\geq\tfrac{\delta}{2}\}\subset B, $$ where $B$ is the closed ball centered at $0$ of radius $\bigl(\frac{2}{\mu_d(B(0,1))\delta}\bigl)^{1/d}.$ (Otherwise we would have $\|f_j\|_1>1$ for some $j$, or $\|f\|_1>1$.) On $\mathbb{R}^d\backslash B$, we have $|f_j|<\delta/2$ for every $j$, and $|f|<\delta/2$, whence $$\int_{\mathbb{R}^d\backslash B}|f_j(x)-f(x)|^qd\mu_d(x)\leq \delta^{q-1}\Bigl(\|f_j\|_1+\|f\|_1\Bigl)\leq 2\delta^{q-1}<\frac{\varepsilon}{3} $$ provided $\delta>0$ is chosen sufficiently small. Now fix $\varkappa>0$. Observe that $$\int_{B\cap\{|f_j-f|<\varkappa\}}|f_j(x)-f(x)|^qd\mu_d(x) \leq \mu_d(B)\varkappa^q<\frac{\varepsilon}{3} $$ if $\varkappa$ is chosen sufficiently small. On the other hand, since $B$ has finite measure, one can invoke continuity of measure from above to conclude that $f_j\to f$ in measure on $B$ as $j\to \infty$. From the inequalities \begin{equation}\begin{split}\nonumber\int_{B\cap\{|f_j-f|\geq \varkappa\}}|f_j(x)-f(x)|^qd\mu_d(x) &\leq \mu_d(B\cap \{|f_j-f|\geq \varkappa\})^{1-q/p} \|f_j-f\|_p^q\\ &\leq 2^qM^{q/p} \mu_d(B\cap \{|f_j-f|\geq \varkappa\})^{1-q/p}, \end{split}\end{equation} we infer that there exists $N\in \mathbb{N}$ such that $$\int_{B\cap\{|f_j-f|\geq \varkappa\}}|f_j(x)-f(x)|^qd\mu_d(x)<\frac{\varepsilon}{3} \text{ for all }j\geq N. $$ Bringing these estimates together, it follows that $\|f_j-f\|_q^q<\varepsilon$ for every $j\geq N$. \end{proof} Our next goal is to use this claim in order to show that $\mathcal{I}(f)=\Lambda$. To this end, observe that repeated application of Young's convolution inequality \cite{LL} yields that, for any $n$-tuple of functions $g_1,\dots, g_n$, \begin{equation}\label{nyoung}\Bigl(\int_{\mathbb{R}^d}|g_1*g_2*\cdots *g_n(x)|^p d\mu_d(x) \Bigl)^{1/p}\leq \prod_{j=1}^n\|g_j\|_{(np')'}, \end{equation} where $p' = p/(p-1)$ is the H\"{o}lder conjugate of $p$, so $(np')' = \tfrac{np}{np-p+1}$. Since $n>1$, $(np')'\in (1,p)$.
To apply this inequality, first use Minkowski's inequality to observe that $$|\mathcal{I}(g_1)^{1/p}-\mathcal{I}(g_2)^{1/p}|\leq \Bigl(\int_{\mathbb{R}^d} |\mathcal{C}_n(g_1)(x) - \mathcal{C}_n(g_2)(x)|^p d\mu_d(x)\Bigl)^{1/p},$$ but $$\mathcal{C}_n(g_1)-\mathcal{C}_n(g_2) = \sum_{k=0}^{n-1}\mathcal{C}_k(g_1)*(g_1-g_2)*\mathcal{C}_{n-k-1}(g_2), $$ and hence $$|\mathcal{I}(g_1)^{1/p}-\mathcal{I}(g_2)^{1/p}|\leq\sum_{k=0}^{n-1}\Bigl(\int_{\mathbb{R}^d} |\mathcal{C}_k(g_1)*(g_1-g_2)*\mathcal{C}_{n-k-1}(g_2)(x)|^pd\mu_d(x)\Bigl)^{1/p}. $$ Appealing to (\ref{nyoung}) now yields $$|\mathcal{I}(g_1)^{1/p}-\mathcal{I}(g_2)^{1/p}|\leq\sum_{k=0}^{n-1}\|g_1\|_{(np')'}^k\|g_2\|_{(np')'}^{n-k-1}\|g_1-g_2\|_{(np')'}. $$ Returning to our sequence $f_j$, it is a consequence of H\"{o}lder's inequality that $\|f_j\|_{(np')'}\leq \|f_j\|_1^{\theta}\|f_j\|_p^{1-\theta}$ with some $\theta\in (0,1)$ depending on $n$ and $p$, so $\|f_j\|_{(np')'}\leq C(M,n,p)$ (and the same inequality holds with $f_j$ replaced by $f$). Whence there is a constant $C'(n,p,M)$ such that $$|\mathcal{I}(f_j)^{1/p}-\mathcal{I}(f)^{1/p}| \leq C'(n,p,M)\|f_j-f\|_{(np')'} \,\text{ for every $j$}. $$ Since $(np')'\in (1,p)$, Claim \ref{claim:lq} yields that $f_j \to f$ in $L^{(np')'}$ as $j\to \infty$. Hence $\mathcal{I}(f)=\lim_{j\to \infty}\mathcal{I}(f_j)=\Lambda$. (It follows that $f$ is not identically zero.) It remains to show that $f\in \mathcal{F}$. To this end, we apply Lemma \ref{scaling}: Consider the function $$\widetilde{f} = \frac{1}{\|f\|_1\lambda^d}f\Bigl(\frac{\cdot}{\lambda}\Bigl), \text{ with }\lambda = \Bigl(\frac{\|f\|_p^p}{M\|f\|_1^p}\Bigl)^{\frac{1}{d(p-1)}}.$$ Then $\widetilde f\in \mathcal{F}$ and $\mathcal{I}(\widetilde f ) = \frac{M}{\|f\|_p^p}\frac{1}{\|f\|_1^{p(n-1)}}\Lambda.$ Consequently, if $\|f\|_p^p<M$ or $\|f\|_1<1$, then $\mathcal{I}(\widetilde f)>\Lambda$, which is absurd. Thus $f\in \mathcal{F}$ and the proof of the proposition is complete. \end{proof} \section{The First Variation}\label{variation} With the existence of a maximizer proved, we now wish to analyze it analytically. To introduce the Euler-Lagrange equation associated to (\ref{convprob}) it will be convenient to define, for a function $f$, $\mathcal{T}(f)(x) = f(-x)$. Observe that, if $f,g,h$ are non-negative measurable functions, \begin{equation}\label{threeconv}\int_{\mathbb{R}^d} f(x)(\mathcal{T}(g)*h)(x)d\mu_d(x) = \int_{\mathbb{R}^d}(f*g)(x)h(x) d\mu_d(x). \end{equation} \begin{prop} A lower-semicontinuous function $Q\in \mathcal{F}$ is a maximizer of the problem (\ref{convprob}) if and only if \begin{equation}\label{neccond} [\mathcal{T}(\mathcal{C}_{n-1}(Q))]*[\mathcal{C}_n(Q)]^{p-1} = \frac{\Lambda}{M n} Q^{p-1}+\frac{\Lambda (n-1)}{n} \text{ on }\{Q>0\}. \end{equation} \end{prop} \begin{rem}\label{raddecrem}Observe that if $Q$ is radially decreasing, then $\mathcal{C}_{n-1}(Q)$ is again radially decreasing for any $n\in \mathbb{N}$, so $\mathcal{T}(\mathcal{C}_{n-1}(Q)) = \mathcal{C}_{n-1}(Q)$ in this case.\end{rem} \begin{proof} The sufficiency is easy to show. Integrating both sides of (\ref{neccond}) against $Q$, and recalling that $Q\in \mathcal{F}$, yields $$\int_{\mathbb{R}^d} Q(x)\cdot \bigl([\mathcal{T}(\mathcal{C}_{n-1}(Q))]*[\mathcal{C}_n(Q)]^{p-1}\bigr)(x)\,d\mu_d(x) =\Lambda. $$ But using Tonelli's theorem and (\ref{threeconv}), the left hand side is equal to $\int_{\mathbb{R}^d}(\mathcal{C}_n(Q)(x))^pd\mu_d(x) = \mathcal{I}(Q)$. Conversely, consider a bounded function $\varphi$ compactly supported in the open set $\{Q>0\}$.
Since $Q$ is lower-semicontinuous, $\inf_{\operatorname{supp}(\varphi)} Q>0$. Therefore (insofar as $\varphi$ is bounded), there exists a constant $C>0$ such that \begin{equation}\label{phismall} |\varphi|\leq C Q \text{ on }\mathbb{R}^d, \end{equation} so in particular, there exists $t_0>0$ such that for $|t|\leq t_0$ it follows that $Q_t\stackrel{\operatorname{def}}{=}Q+t\varphi$ is non-negative. In the notation of Lemma \ref{scaling} with $f=Q_t$, we consider the function $${\widetilde Q_t} = \frac{1}{\lambda^d}\frac{(Q + t \varphi)\bigl(\frac{\cdot}{\lambda}\bigl)}{\|Q+t\varphi\|_1},$$ with the corresponding $\lambda>0$ satisfying $\|\widetilde Q_t\|^p_{p} = \|Q\|_{p}^p=M$. Of course we also have $\int_{\mathbb{R}^d}{\widetilde Q_t}(x)\,d\mu_d(x)=1$ regardless of $\lambda$ for $|t|<t_0$. We conclude that ${\widetilde Q_t}$ belongs to $\mathcal{F}$, and therefore \begin{equation}\label{Qtsmaller}\mathcal{I}(\widetilde Q_t)\leq \mathcal{I}(Q)=\Lambda,\text{ for all }|t|<t_0.\end{equation} Moreover, as in Lemma \ref{scaling}, \begin{equation}\label{Qtcalc} \mathcal{I}(\widetilde Q_t) = \frac{1}{\lambda^{d(p-1)}\|Q+t\varphi\|_1^{np}}\int_{\mathbb{R}^d}[\mathcal{C}_n(Q+t\varphi)(x)]^pd\mu_d(x). \end{equation} For $|t|<t_0$, we calculate, using commutativity and associativity of the convolution operator, $$\frac{d}{dt} \mathcal{C}_n(Q+t\varphi)^p = pn[\varphi*\mathcal{C}_{n-1}(Q+t\varphi)][\mathcal{C}_n(Q+t\varphi)]^{p-1}, $$ and \begin{equation}\begin{split}\label{2ndder} \frac{d^2}{dt^2}\mathcal{C}_n(Q+&t\varphi)^p = pn(n-1) [\varphi*\varphi*\mathcal{C}_{n-2}(Q+t\varphi)][\mathcal{C}_n(Q+t\varphi)]^{p-1}\\&+n^2p(p-1)[\varphi*\mathcal{C}_{n-1}(Q+t\varphi)]^2[\mathcal{C}_n(Q+t\varphi)]^{p-2}. \end{split} \end{equation} Crudely employing the bound (\ref{phismall}) in (\ref{2ndder}), we infer that there is a constant $C>0$, depending on $n$, $p$ and $t_0$, such that for all $|t|<t_0$, $$\Bigl|\frac{d^2}{dt^2}\mathcal{C}_n(Q+t\varphi)^p\Bigl|\leq C\mathcal{C}_n(Q)^p. $$ Whence, the second order Taylor formula yields that \begin{equation}\begin{split}\label{pointwiseperturb}|\mathcal{C}_n(Q+&t\varphi)^p - \mathcal{C}_n(Q)^p - npt [\varphi*\mathcal{C}_{n-1}(Q)][\mathcal{C}_n(Q)]^{p-1}| \leq Ct^2\mathcal{C}_n(Q)^p,\end{split}\end{equation} for $|t|< t_0$. Integrating the pointwise inequality (\ref{pointwiseperturb}) yields \begin{equation}\begin{split}\label{b4lambdaperturb} \int_{\mathbb{R}^d}\!&[\mathcal{C}_n(Q+t\varphi)(x)]^pd\mu_d(x)\\&= \!\Lambda\!+\!\!npt\int_{\mathbb{R}^d}[\varphi*\mathcal{C}_{n-1}(Q)(x)][\mathcal{C}_n(Q)(x)]^{p-1} d\mu_d(x)+ O(t^2) \end{split}\end{equation} as $t\to 0$. Now, recalling the definition of $\lambda$, we calculate \begin{equation}\begin{split}\label{scaleexpansion}&\lambda^{d(p-1)}\|Q+t\varphi\|_1^{np} = \frac{\|Q+t\varphi\|_p^p}{M}\|Q+t\varphi\|_1^{(n-1)p}\\ & = \Bigl(1+\frac{pt}{M}\int_{\mathbb{R}^d}\varphi(x) Q(x)^{p-1} d\mu_d(x)+O(t^2)\Bigl)\\ &\;\;\;\cdot\Bigl(1+t(n-1)p\int_{\mathbb{R}^d} \varphi(x) d\mu_d(x)+O(t^2)\Bigl),\end{split}\end{equation} where in the expansion of $\|Q+t\varphi\|_p^p$ we have again used the inequality (\ref{phismall}) to obtain the $O(t^2)$ term.
Plugging the two expansions (\ref{scaleexpansion}) and (\ref{b4lambdaperturb}) into (\ref{Qtcalc}) yields that, as $t\to 0$, \begin{equation}\begin{split}\nonumber\mathcal{I}(\widetilde Q_t) = &\Lambda + pt\Bigl\{n\int_{\mathbb{R}^d}[\varphi*\mathcal{C}_{n-1}(Q)(x)][\mathcal{C}_n(Q)(x)]^{p-1} d\mu_d(x)\\ & - \frac{\Lambda}{M}\int_{\mathbb{R}^d}\varphi(x) Q^{p-1}(x)d\mu_d(x) - (n-1)\Lambda\int_{\mathbb{R}^d} \varphi(x) \,d\mu_d(x)\Bigl\} +O(t^2). \end{split}\end{equation} From (\ref{Qtsmaller}) it follows that $\lim_{t\to 0}\frac{\mathcal{I}(\widetilde Q_t)-\mathcal{I}(Q)}{t}=0$, so the second term in the prior expansion must vanish, that is, $$\int_{\mathbb{R}^d}\varphi(x) \Bigl\{n[\mathcal{T}(\mathcal{C}_{n-1}(Q))]*[\mathcal{C}_n(Q)]^{p-1}(x) - \frac{\Lambda}{M} Q^{p-1} - (n-1)\Lambda\Bigl\}d\mu_d(x)=0, $$ where (\ref{threeconv}) has been used. Since $\varphi$ was an arbitrary bounded function compactly supported in $\{Q>0\}$, we conclude that (\ref{neccond}) holds.\end{proof} \section{On the Madiman-Wang conjecture}\label{MWsec} \begin{prop}\label{keyprop} The generalized Gaussian is not the extremizer for problem (\ref{convprob}). \end{prop} \begin{proof} Consider the simplest case $d=1$, $p=2$, and $n=2$. We shall show that the function $G(x) = \alpha(1-|x|^2)_+$ does not satisfy the equation \begin{equation}\label{3conv}\mathcal{C}_3(f)=af+b\text{ on }[-1,1]\text{ with }a,b>0, \end{equation} and so no function of the form $\frac{c}{\lambda}G(\frac{\cdot}{\lambda})$, with $c,\lambda>0$, satisfies (\ref{neccond}), for any value of $\Lambda$ (recall Remark \ref{raddecrem}). In fact, we shall show that $\mathcal{C}_3(G)=G*G*G$ is not a quadratic polynomial near $0$. For this, observe that $$G'' = 2\alpha(\delta_{-1} - \chi_{[-1,1]} +\delta_{1}). $$ Thus, $(G*G*G)'''''' = G''*G''*G''$ is the threefold convolution of the above measure. The threefold convolution of $-2\alpha\chi_{[-1,1]}$ equals $-8\alpha^3(3-|x|^2)$ on $[-1,1]$, and no other term in the convolution $G''*G''*G''$ is quadratic in $|x|$. Therefore, $G*G*G$ has a sixth derivative that is not identically zero near $0$, whereas the quadratic polynomial $aG+b$ has vanishing sixth derivative there. \end{proof} \begin{rem} Moreover, for any dimension $d$, the random vector $X$ in $\mathbb{R}^d$ with i.i.d. coordinates $X_i$, each distributed according to the generalized Gaussian density, does not constitute the extremizer for this problem: in this case $h_p(X)=dh_p(X_i)$, and it remains to use Proposition \ref{keyprop}. \end{rem} \section{Any radially decreasing solution of (\ref{neccond}) is compactly supported} In this section, we discuss the following \begin{prop} Decreasing radial solutions of (\ref{neccond}) are compactly supported. \end{prop} \begin{proof} Suppose that $Q\in \mathcal{F}$ solves (\ref{neccond}) and $Q$ is not compactly supported. Since $Q$ is non-negative and radially decreasing, its support is $\mathbb{R}^d$. The term $G = \mathcal{C}_{n-1}(Q)*(\mathcal{C}_n(Q))^{p-1}$ on the left hand side of (\ref{neccond}) belongs to $L^r$, where $r=\max(1, 1/(p-1))$. Indeed, if $p\geq 2$ then $\int_{\mathbb{R}^d} G(x) d\mu_d(x) = \int_{\mathbb{R}^d}[\mathcal{C}_n(Q)(x)]^{p-1}d\mu_d(x)$ (recall that $Q\geq 0$ with $\int_{\mathbb{R}^d}Q(x)d\mu_d(x)=1$), but $\int_{\mathbb{R}^d}\mathcal{C}_n(Q)(x)d\mu_d(x)=1$ and $$\int_{\mathbb{R}^d}\mathcal{C}_n(Q)^p(x)d\mu_d(x)=\Lambda <\infty,$$ so $\mathcal{C}_n(Q)\in L^{p-1}(\mathbb{R}^d)$.
If $1<p<2$, then $t\mapsto t^{1/(p-1)}$ is convex, so by Jensen's inequality, $G^{1/(p-1)}\leq \mathcal{C}_{n-1}(Q)*(\mathcal{C}_n(Q)^{p-1})^{1/(p-1)} = \mathcal{C}_{2n-1}(Q)$, whence $\|G\|_{1/(p-1)}\leq 1$ in this case. On the other hand, the right hand side of (\ref{neccond}) belongs to $L^r$ only if $\Lambda=0$, which is absurd, since $\mathcal{F}$ certainly contains non-zero functions. \end{proof} \section{Remarks} In this section we make some remarks that suggest that although the generalized Gaussian is not an optimal distribution for the problem (\ref{convprob}), a reasonably small perturbation of the generalized Gaussian could well be. Beginning with $f_0(x) = \mathbf{1}_{[-1,1]}$, consider the following iteration for $j\geq 1$ $$f_{j}(x) = \frac{\mathcal{C}_3(f_{j-1})(x) - \mathcal{C}_3(f_{j-1})(1)}{\mathcal{C}_3(f_{j-1})(0) - \mathcal{C}_3(f_{j-1})(1)}. $$ Numerically, this iteration converges pointwise to a solution of the equation (\ref{3conv}) for some $a,b>0$ satisfying the constraints $f(0)=1$ and $f(1)=0$ (so the support of $f$ is $[-1,1]$). The resulting function $f$ can then be re-scaled via the transformation $\frac{c}{\lambda}f(\tfrac{\cdot}{\lambda})$ ($c,\lambda>0$) to have any given positive integral and $L^2$-norm. We do not know if the solution of $\mathcal{C}_3(f) = af+b$ is unique (modulo natural invariants in the problem), so we cannot say that this function $f$ corresponds to a solution of the constrained maximization problem (\ref{convprob}). We provide the graphs of $f_1, f_2, f_3$ and $f_4$ (see Figure \ref{fig:iterates} below), and the algebraic expressions for $f_1$, $f_2$ and $f_3$ on $[-1,1]$. \begin{gather*} f_1(x) = 1 - x^2,\; f_2(x) = 1 - \frac{6 x^2}{5} + \frac{x^4}{5}\\ f_3(x) = 1 - \frac{62325 x^2}{50521} + \frac{12810 x^4}{50521} - \frac{1050 x^6}{50521} + \frac{45 x^8}{50521} - \frac{x^{10}}{50521}.\end{gather*} \begin{figure}[h!] \centering \begin{subfigure}[b]{0.2\linewidth} \includegraphics[width=\linewidth]{1stIterate.pdf} \caption{$f_1$.} \end{subfigure} \begin{subfigure}[b]{0.2\linewidth} \includegraphics[width=\linewidth]{2ndIterate.pdf} \caption{$f_2$.} \end{subfigure} \begin{subfigure}[b]{0.2\linewidth} \includegraphics[width=\linewidth]{3rdIterate.pdf} \caption{$f_3$.} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=\linewidth]{4thIterate.pdf} \caption{$f_4$.} \end{subfigure} \caption{The graphs of $f_1, \dots, f_4$ on $[-1,1]$.} \label{fig:iterates} \end{figure} \pagebreak
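For the interested reader, the iteration above is easy to reproduce numerically. The following short Python sketch (ours, not part of any formal argument; the grid spacing and iteration count are arbitrary choices) implements it with discrete convolutions and checks that the limit approximately satisfies $\mathcal{C}_3(f)=af+b$ on $[-1,1]$:
\begin{verbatim}
import numpy as np

h = 1e-3                      # grid spacing
k = int(round(1 / h))         # index offset corresponding to x = 1
x = np.linspace(-1.0, 1.0, 2 * k + 1)
f = np.ones_like(x)           # f_0 = indicator of [-1, 1]

def c3_core(g):
    """Threefold self-convolution of g, restricted to [-1, 1]."""
    full = np.convolve(np.convolve(g, g), g) * h * h  # on [-3, 3]
    mid = (len(full) - 1) // 2                        # index of x = 0
    return full[mid - k: mid + k + 1], full[mid], full[mid + k]

for _ in range(30):
    core, at0, at1 = c3_core(f)
    f = (core - at1) / (at0 - at1)  # renormalize: f(0) = 1, f(1) = 0

# The limit should satisfy C_3(f) = a f + b on [-1, 1] with a, b > 0:
core, _, _ = c3_core(f)
a, b = np.polyfit(f, core, 1)
print(a, b, np.max(np.abs(core - (a * f + b))))  # small residual
\end{verbatim}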
{ "attr-fineweb-edu": 1.741211, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcC3xK1yAgYaM1TWS
\section*{Abstract} Large openly available neuroimaging datasets, although highly valuable, do not typically provide the community with subject-level whole-brain connectivity matrices that are ready to be used for brain connectivity analysis. In the pursuit of open science, we have made available fully processed and parcellated connectomes and timeseries for the Human Connectome Project -- Young Adult (HCP-YA) dataset to the neuroimaging and computational neuroscience community. We have processed and parcellated the 8 fMRI conditions for the $\sim$1200 subjects that are part of the HCP-YA dataset into the Schaefer parcellations, which are further subdivisions of the resting state functional networks. We also provide a detailed fingerprinting analysis of these parcellated connectomes and timeseries data using the differential identifiability framework on three different cohorts: unrelated subjects, monozygotic twins, and dizygotic twins. Overall, results indicate that combining fine-grained parcellations with the differential identifiability framework uncovers fingerprints for all cohorts and fMRI conditions evaluated. \section{Introduction} \label{sec_intro} With the advent of improved neuroimaging acquisition techniques, there has been a surge in the availability of high quality neuroimaging data in recent years. Data repositories such as the different Human Connectome Project (HCP) \cite{van2013wu} datasets (HCP Young Adult \cite{van2013wu}, HCP Aging \cite{bookheimer2019lifespan}, HCP Development \cite{somerville2018lifespan}, etc.), the 1000 Functional Connectomes Project (\url{http://fcon_1000.projects.nitrc.org/}), the UK Biobank \cite{allen2014uk}\cite{miller2016multimodal}, and the Alzheimer's Disease Neuroimaging Initiative (ADNI) \cite{petersen2010alzheimer}, among others, are openly available to the scientific community. These data repositories, although highly valuable, do not always provide the users with ready-to-use processed subject-level whole-brain functional connectomes (FCs). Instead, they provide raw data or data that has been only minimally processed \cite{glasser2013minimal}\cite{makropoulos2018developing}. Hence, it is typically up to the researcher to estimate single-session whole-brain functional connectomes from fMRI and T1 data. This step is critical \cite{power2020critical}\cite{parkes2018evaluation}\cite{power2018ridding}\cite{power2014methods}\cite{power2015recent}\cite{burgess2016evaluation} for subsequent brain connectivity and network neuroscience analyses \cite{fornito2016fundamentals}\cite{sporns2010networks}\cite{sporns2012discovering}. This can be a difficult task due to the knowledge required, as well as the amount of computational power necessary, to process large datasets such as HCP. \par These open-source datasets are usually shared with the community with either no or minimal artefact and/or noise removal. This is an efficient and suitable strategy for neuroimaging data sharing. One of the main reasons is that MRI data processing is constantly evolving: registration, processing, and denoising methods continue to improve \cite{power2020critical}\cite{parkes2018evaluation}\cite{power2018ridding}\cite{power2014methods}\cite{power2015recent}\cite{burgess2016evaluation}, and new brain atlases \cite{schaefer2018local}\cite{glasser2016multi}\cite{salehi2020there} continue to be provided to the community. Providing raw or minimally processed datasets allows for up-to-date processing techniques to be applied to the data.
\par Amongst the many different choices one has to make while processing raw neuroimaging data to obtain subject-level, single-session, whole-brain functional connectomes, the choice of the parcellation is very important \cite{schaefer2018local}\cite{glasser2016multi}\cite{salehi2020there}. The subsequent analysis of the connectomes depends on the level of granularity of the parcellation. The importance of the parcellation granularity has been shown, for instance, when evaluating brain fingerprints \cite{finn2015functional}\cite{abbas2020regularization}. Schaefer et al., 2018 \cite{schaefer2018local} recently published a scheme of parcellations that gives the user the ability to assess up to 10 different levels of granularity (atlases include 100 to 1,000 brain regions, in steps of 100). An added advantage of this parcellation scheme is that all ten levels of granularity are further divisions of the resting state functional networks proposed by Yeo et al., 2011 \cite{yeo2011organization}. \par There is no standard procedure for deciding on the brain parcellation used to estimate functional connectomes. This is also true for the artefact and noise removal steps in the fMRI data processing. To that end, the amount of subject fingerprint present in the resulting functional connectomes is a useful proxy for the overall quality of the experimental design, the acquisition parameters, and the ultimate estimation of the functional connectomes. Many recent studies have established that functional connectomes have an individual fingerprint that can be used to identify an individual from a population (a process known as \textit{fingerprinting} or \textit{subject-identification}) \cite{amico2018quest}\cite{abbas2020geff}\cite{finn2015functional}\cite{kaufmann2017delayed}\cite{miranda2014connectotyping}\cite{noble2017influences}\cite{rajapandian2020uncovering}\cite{mars2018connectivity}\cite{hu2020disentangled}\cite{ngo2020connectomic}. Subject-level fingerprints in the FCs have been found to be reliable and reproducible in high quality datasets \cite{finn2015functional} (e.g., HCP). Moreover, this fingerprint can be improved by using the differential identifiability framework ($\mathbb{I}\mathit{f}$), which relies on performing a group-level decomposition into principal components, followed by an iterative reconstruction that adds components in descending order of explained variance until the differential identifiability score reaches an optimal value \cite{amico2018quest}. Without a high subject-level fingerprint, brain connectomic analyses that are aimed at finding associations between functional connectivity and cognition, behavior, or disease progression are severely compromised \cite{svaldi2019optimizing}\cite{sripada2020boost}.\par Fingerprints are not unique to test/retest sessions of the same individuals. Subjects sharing genetics and/or environment are expected to share, to some extent, a fingerprint. In particular, similar to the subject-level fingerprint in brain functional connectomes, it has been established that a fingerprint also exists in the FCs of monozygotic (MZ) and dizygotic (DZ) twin subjects \cite{ge2017heritability}\cite{de2008electroencephalographic}\cite{kumar2018multi}\cite{gritsenko2020twin}\cite{colclough2017heritability}\cite{demeter2020functional}, albeit to a lower extent than the subject-level fingerprint.
Kumar et al. 2018 \cite{kumar2018multi} have presented a framework based on manifold approximation for generating brain fingerprints from multimodal data using T1/T2-weighted MRI, diffusion MRI, and resting-state fMRI. Their results show a link between the amount of fingerprint and genetic proximity, as MZ twins have more prominent fingerprints than DZ twins or non-twin siblings. Ge et al. 2017 \cite{ge2017heritability} have used a linear mixed effects model to dissociate intra- and inter-subject variation of a phenotype, and computed heritability taking the stable inter-subject variation in fMRI data as the phenotype. Colclough et al. 2017 \cite{colclough2017heritability} have investigated the influence of genetics and common environment on functional connectomes of individuals obtained from fMRI and MEG data in HCP. Demeter et al. 2020 \cite{demeter2020functional} have applied support vector machine classifiers on resting state fMRI to predict retest and co-twin pairs from two twin datasets (adult and pediatric) that include repeat scans. Gritsenko et al. 2020 \cite{gritsenko2020twin} propose a pair-wise twin classification method to identify the zygosity of twin pairs using resting state fMRI. The latest release of the HCP-YA \cite{glasser2016multi} dataset includes unrelated subjects, as well as subjects that are related to each other, including MZ and DZ twins. This affords us the opportunity to assess brain connectivity fingerprints not only by comparing test and retest functional connectomes of the same subject (\textit{subject-level fingerprint}), but also by comparing the functional connectomes of MZ and DZ twins (\textit{twin fingerprint}) across different fMRI conditions.\par The aim of this study is to provide state-of-the-art processed whole-brain, single-session FCs to the scientific community for conducting research in brain connectomics \cite{sporns2012discovering}\cite{fornito2016fundamentals}\cite{rubinov2010complex}. We provide FCs corresponding to all 10 levels of granularity (100 to 1,000 regions) of the Schaefer parcellations. In terms of artefact/noise removal processing steps, we provide FCs at different levels of processing/denoising (e.g., with and without global signal regression \cite{murphy2017towards}\cite{liu2017global}\cite{hayasaka2013functional}\cite{xu2018impact}\cite{gotts2013perils}\cite{saad2012trouble}). In addition, we assess the amount of subject-level fingerprint and twin fingerprint (MZ and DZ) in each fMRI condition (resting-state and 7 tasks), at different levels of granularity of the Schaefer parcellations. We estimate and uncover these fingerprints using an extended version of the differential identifiability framework ($\mathbb{I}\mathit{f}$) \cite{amico2018quest}\cite{bari2019uncovering}. \par \section{Methods} \label{sec_methods} \subsection{The HCP-YA dataset} \label{sec_HCP_data} The functional MRI data processed as a part of this study is available in the \href{http://www.humanconnectome.org/study/hcp-young-adult}{Human Connectome Project-Young Adult (HCP-YA) repository} \cite{van2013wu}. The HCP-YA data consists of behavioural and 3T MRI data from 1206 healthy young adult subjects collected between August 2012 and October 2015. 3T MR structural scans are available for 1113 subjects, out of which 889 subjects have fully complete data for all four 3T MRI modalities: structural (T1w and T2w) data, resting state fMRI, task fMRI, and high angular resolution diffusion MRI data.
The HCP-YA dataset also has extensive family structures, including siblings and twin pairs (monozygotic and dizygotic). All the subjects are within the age range of 22-37 years at the time of scanning. \textit{Table \ref{table_numconns}} summarizes the number of unrelated subjects and monozygotic and dizygotic twin pairs for resting state and all 7 tasks included in HCP-YA. The term \textit{fMRI condition} will be used to indicate both resting state and the tasks included in the dataset. \begin{table}[h] \centering \begin{tabular}{|l|c|c|c|} \hline \textbf{fMRI Condition} & \textbf{Unrelated subjects} & \textbf{MZ twin pairs} & \textbf{DZ twin pairs} \\ \hline REST1 & 435 & 131 & 76 \\ REST2 & 435 & 131 & 76 \\ EMOTION & 416 & 124 & 70 \\ GAMBLING & 438 & 135 & 74 \\ LANGUAGE & 417 & 129 & 72 \\ MOTOR & 438 & 134 & 76 \\ RELATIONAL & 414 & 125 & 69 \\ SOCIAL & 417 & 128 & 71 \\ WORKING MEMORY & 436 & 133 & 77 \\ \hline \end{tabular} \caption{Summary of the number of unrelated subjects, MZ and DZ twin pairs corresponding to each of the fMRI conditions in the HCP-YA dataset} \label{table_numconns} \end{table} \subsubsection{HCP-YA fMRI conditions} We have used the fMRI data from the HCP-YA 1200 subjects release \cite{van2012human}\cite{van2013wu}. The fMRI resting-state data (HCP-YA filenames: rfMRI\_REST1 and rfMRI\_REST2) were acquired in separate sessions on two different days, with two different phase acquisitions (left to right or LR and right to left or RL) per day \cite{van2012human}\cite{van2013wu}\cite{glasser2013minimal}. The seven fMRI tasks are the following: gambling (tfMRI\_GAMBLING), relational (tfMRI\_RELATIONAL), social (tfMRI\_SOCIAL), working memory (tfMRI\_WM), motor (tfMRI\_MOTOR), language (tfMRI\_LANGUAGE, including both a story-listening and arithmetic task), and emotion (tfMRI\_EMOTION). Two runs (LR and RL) were acquired for each task. The working memory, gambling, and motor tasks were acquired on the first day, and the other tasks on the second day \cite{glasser2013minimal}\cite{barch2013function}. \textit{Table \ref{table_task_summary}} summarizes the run time and number of frames per condition: \begin{table}[h!] \centering \begin{tabular}{|l|c|c|c|} \hline \textbf{fMRI Condition} & \textbf{\#Runs} & \textbf{Run time (min:sec)} & \textbf{\#Frames} \\ \hline REST1 & 2 & 14:33 & 1,200\\ REST2 & 2 & 14:33 & 1,200\\ EMOTION & 2 & 2:16 & 176\\ GAMBLING & 2 & 3:12 & 253\\ MOTOR & 2 & 3:34 & 284\\ LANGUAGE & 2 & 3:57 & 316\\ RELATIONAL & 2 & 2:56 & 232\\ SOCIAL & 2 & 3:27 & 274\\ WORKING MEMORY & 2 & 5:01 & 405\\ \hline \end{tabular} \caption{Summary of the number of runs, run time (in minutes and seconds), and number of frames per run for resting state and 7 tasks included in the HCP-YA dataset} \label{table_task_summary} \end{table} The following is a brief description of each fMRI condition. More extensive information may be found in the \href{https://www.humanconnectome.org/storage/app/media/documentation/s1200/HCP_S1200_Release_Reference_Manual.pdf}{HCP S1200 Release Reference Manual}. \begin{itemize} \item \textbf{REST:} Resting state fMRI (rs-fMRI) data was acquired in four runs of approximately 15 minutes each, two runs in each session. The subjects were instructed to keep their eyes open with relaxed fixation on a projected bright cross-hair on a dark background presented in a darkened room.
Within each session, oblique axial acquisitions alternated between phase encoding in a right-to-left (RL) direction in one run and phase encoding in a left-to-right (LR) direction in the other run. A total of 1,200 frames were obtained per run at a TR of 720 ms. \item \textbf{EMOTION:} This task was adapted from the one developed by Hariri et al. \cite{hariri2006preference}. Subjects are shown blocks of trials that either ask them to decide which of two faces on the bottom of the screen match the face at the top, or which of two shapes at the bottom match the shape at the top of the screen. The faces have either an angry or fearful expression. Trials are presented in blocks of 6 trials of the same task (face or shape), with the stimulus presented for 2000 ms and a 1000 ms inter-task interval (ITI). Each block is preceded by a 3000 ms task cue (``shape'' or ``face''). Each of the two runs includes three face blocks and three shape blocks, with 8 seconds of fixation at the end of each run. In total, the emotion processing task fMRI had a run duration of 2:16 minutes per run, with 176 frames per run. \item \textbf{GAMBLING:} This task has been adapted from the one developed by Delgado and Fiez \cite{delgado2000tracking}. Subjects are asked to play a card guessing game in which they guess the number on a mystery card in order to win or lose money. Subjects are told that potential card numbers are between 1 and 9, and to indicate whether they think the mystery card number is more or less than 5 by a button press. Feedback to the subject is the actual number on the card, together with either a green upward arrow with ``\$1'' for a reward or a red downward arrow with ``-\$0.5'' for a loss. If the mystery number is equal to 5, the trial is considered neutral and a grey double-headed arrow is shown. The subjects have 1500 ms to respond with a button press, followed by 1000 ms of feedback. If the subject responds before the 1500 ms is over, a fixation cross is displayed. The task is presented in blocks of 8 trials that are either mostly reward or mostly loss. In each of the two runs, there are 2 mostly reward and 2 mostly loss blocks, interleaved with 4 fixation blocks (15 seconds each). In total, the gambling task fMRI had a run duration of 3:12 minutes per run, with 253 frames per run. \item \textbf{LANGUAGE:} This task was developed by Binder et al. \cite{binder2011mapping} and uses the E-prime scripts provided by them. The task consists of two runs that each interleave four blocks of a story task and four blocks of a math task. The lengths of the blocks vary (average of approximately 30 seconds), but the task was designed so that the math task blocks match the length of the story task blocks, with some additional math trials at the end of the task to complete the 3.8-minute run as needed. The story blocks present participants with brief auditory stories (5-9 sentences) adapted from Aesop's fables, followed by a 2-alternative forced-choice question that asks participants about the topic of the story. The math task also presents trials aurally and requires subjects to complete addition and subtraction problems. Participants push a button to select either the first or the second answer from the options presented. The math task is adaptive to try to maintain a similar level of difficulty across participants. In total, the language task fMRI had a run duration of 3:57 minutes per run, with 316 frames per run. \item \textbf{MOTOR:} This task was adapted from the one developed by Yeo et al., 2011 \cite{yeo2011organization}.
In this task, subjects are shown visual cues asking them to either tap their left or right fingers, squeeze their left or right toes, or move their tongue, in order to map different motor areas in the brain. There are a total of 10 movement blocks, and each movement lasted 12 seconds, preceded by a 3-second cue. In each of the two runs, there are 13 blocks, with two of tongue movements, four of hand movements (2 right and 2 left), and four of foot movements (2 right and 2 left). In addition, there are three 15-second fixation blocks per run. In total, the motor task fMRI had a run duration of 3:34 minutes per run, with 284 frames per run. \item \textbf{RELATIONAL:} This task has been adapted from the work done by Smith et al. \cite{smith2007localizing}. The stimuli are six different shapes filled with one out of six different textures. Subjects are presented with 2 pairs of objects, one at the top of the screen and the other at the bottom. They first have to decide whether shape or texture differs across the top pair, and then they have to decide whether the bottom pair also differs along that same dimension. In the control matching condition, participants are shown two objects at the top of the screen and one at the bottom, and a word in the middle of the screen (either ``shape'' or ``texture''). They are told to decide whether the bottom object matches either of the top two objects on that dimension (e.g., if the word is ``shape'', whether the bottom object is the same shape as either of the top two objects). For both conditions, the subject responds with a button press. For the relational condition, the stimuli are presented for 3500 ms, with a 500 ms inter-task interval, with four trials per block. In the matching condition, stimuli are presented for 2800 ms, with a 400 ms inter-task interval, and there are 5 trials per block. Each type of block (relational or matching) lasts a total of 18 seconds. In each of the two runs of this task, there are three relational blocks, three matching blocks, and three 16-second fixation blocks. In total, the relational processing task fMRI had a run duration of 2:56 minutes per run, with 232 frames per run. \item \textbf{SOCIAL:} Subjects were shown 20-second video clips of objects (squares, circles, triangles) that either interacted in some way, or moved randomly on the screen. These videos were developed by either Castelli et al. \cite{castelli2000movement}\cite{castelli2002autism} or Martin et al. \cite{wheatley2007understanding}\cite{white2011developing}. After each video clip, subjects were asked to judge whether the objects had a mental interaction (an interaction that appears as if the shapes are taking into account each other's feelings and thoughts), whether they were not sure, or whether there was no interaction (i.e., no obvious interaction between the shapes, with the movement appearing random). Each of the two task runs has 5 video blocks (2 Mental and 3 Random in one run, 3 Mental and 2 Random in the other run) and 5 fixation blocks (15 seconds each). In total, the social cognition task fMRI had a run duration of 3:27 minutes per run, with 274 frames per run. \item \textbf{WORKING MEMORY:} The working memory task has been adapted from the one developed by Drobyshevsky et al., 2006 \cite{drobyshevsky2006rapid} and Caceres et al., 2009 \cite{caceres2009measuring}.
A category-specific representation task \cite{downing2001cortical}\cite{peelen2005within}\cite{taylor2007functional}\cite{fox2009defining} and a working memory (WM) task \cite{downing2001cortical}\cite{fox2009defining}\cite{kung2007region}\cite{peelen2005within} were combined into a single task paradigm. Subjects were presented with blocks of trials consisting of pictures of places, tools, faces, and non-mutilated body parts. Within each run, the four different stimulus types were presented in separate blocks. Also, within each run, half of the blocks use a 2-back working memory task and the other half use a 0-back working memory task (as a working memory comparison). A 2.5 second cue indicates the task type (and the target for 0-back) at the start of the block. Each of the two runs contains 8 task blocks (10 trials of 2.5 seconds each, for 25 seconds) and 4 fixation blocks (15 seconds). On each trial, the stimulus is presented for 2 seconds, followed by a 500 ms inter-task interval (ITI). In total, the working memory task fMRI had a run duration of 5:01 minutes per run, with 405 frames per run. \end{itemize} \subsection{HCP-YA preprocessing: FC pipeline} \subsubsection{The HCP-YA minimal processing pipeline overview} Our starting point to process the HCP-YA data is the denominated \textit{minimally processed} dataset, as provided by HCP \cite{glasser2013minimal}. The pipeline includes artifact removal, motion correction, and registration to standard space. The main steps of this pipeline are spatial (\textit{minimal}) preprocessing, in standard volumetric and combined volume and surface spaces. By taking care of the necessary spatial preprocessing once in a standardized fashion, rather than expecting each user to repeat this processing, the minimal preprocessing pipeline avoids duplicate effort and ensures a minimum standard of data quality. The main steps of this minimal processing functional pipeline \cite{glasser2013minimal}\cite{smith2013resting} are described in this section.\par In total, there are six minimal preprocessing pipelines included in the HCP: three structural (\textit{PreFreeSurfer}, \textit{FreeSurfer}, and \textit{PostFreeSurfer}), two functional (\textit{fMRIVolume} and \textit{fMRISurface}), and a \textit{Diffusion Preprocessing} (not covered in this work) pipeline. Following is a brief description of the structural pipelines: \begin{enumerate} \item \textit{\textbf{PreFreeSurfer:}} This pipeline produces an undistorted ``native'' structural volume space for each subject, aligns the T1w and T2w images, performs a bias field correction, and registers the subject's native structural volume space to MNI space. \item \textit{\textbf{FreeSurfer:}} This pipeline is based on FreeSurfer version 5.2. It segments the volume into predefined structures, reconstructs white and pial cortical surfaces, and performs FreeSurfer's standard folding-based surface registration to their surface atlas (\texttt{fsaverage}). \item \textit{\textbf{PostFreeSurfer:}} This pipeline produces all of the NIFTI volume and GIFTI surface files necessary for viewing the data in Connectome Workbench, applies the surface registration to the Conte69 surface template \cite{van2012parcellations}, downsamples registered surfaces for connectivity analysis, and creates the final brain mask and myelin maps. \end{enumerate} There are two volume spaces and three surface spaces in the HCP-YA data.
The volume spaces are the subject's undistorted native volume space and the standard MNI space, which is useful for comparisons across subjects and studies. The surface spaces are the native surface mesh for each individual ($\sim$136k vertices, most accurate for volume-to-surface mapping), the high resolution Conte69 registered standard mesh ($\sim$164k vertices, appropriate for cross-subject analysis of high resolution data like myelin maps), and the low resolution Conte69 registered standard mesh ($\sim$32k vertices, appropriate for cross-subject analysis of low resolution data like fMRI or diffusion). The 91,282 standard grayordinate (CIFTI) space is made up of a standard subcortical segmentation in 2 mm MNI space and the 32k Conte69 mesh of both hemispheres. The functional and diffusion pipelines can be run after completing the structural pipelines described above. Following is a brief description of the two functional pipelines: \begin{enumerate} \item \textit{\textbf{fMRIVolume:}} This pipeline removes the spatial distortions, carries out motion correction by realigning volumes, reduces the bias field, normalizes the 4-dimensional image to a global mean, and masks the data with the final brain mask. There is no overt volume smoothing in this pipeline, as its output is in the volume space and can be used for volume-based fMRI analysis. \item \textit{\textbf{fMRISurface:}} In this pipeline, the volume-based time series is brought into the CIFTI grayordinate standard space. The voxels within the cortical gray matter ribbon are mapped onto the native cortical surface. This transforms the voxels according to the surface registration onto the 32k Conte69 mesh and maps the set of subcortical gray matter voxels from each subcortical region in each subject to a standard set of voxels in each atlas parcel. This gives a standard set of grayordinates in every subject, with 2 mm average surface vertex and subcortical volume spacing. This data is then smoothed with surface- and parcel-constrained smoothing of 2 mm FWHM (full width at half maximum) to regularize the mapping. This pipeline outputs a CIFTI dense time-series (denominated \texttt{\{TASK\}\_\{ACQ\}\_Atlas\_MSMAll.dtseries.nii}, where \texttt{\{TASK\}} refers to the fMRI condition and \texttt{\{ACQ\}} is the acquisition, either LR or RL) that can be used for surface-based fMRI analysis. \end{enumerate} For the resting-state data, in addition to the minimal processing pipeline described above, a 24-parameter motion regression and ICA-FIX \cite{salimi2014automatic}\cite{griffanti2014ica} have also been applied to the data provided by HCP-YA. The 24 parameters included in the motion regression step are the 6 rigid-body parameter timeseries, their backwards-looking temporal derivatives, and the squares of all 12 resulting regressors. The motion regression and ICA-FIX steps applied to the resting state fMRI data produced timeseries denominated \texttt{rfMRI\_REST\_\{ACQ\}\_Atlas\_hp2000\_clean.dtseries.nii}.
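For illustration, the 24-parameter motion design matrix described above can be assembled from the 6 rigid-body parameter timeseries as in the following minimal Python sketch (a hypothetical helper of ours, not part of the HCP pipelines):
\begin{verbatim}
import numpy as np

def motion_regressors_24(rigid_body):
    """rigid_body: (n_frames, 6) array with the 6 rigid-body
    motion parameter timeseries.
    Returns an (n_frames, 24) design matrix: the 6 parameters,
    their backward-looking temporal derivatives, and the
    squares of all 12 resulting regressors."""
    deriv = np.vstack([np.zeros((1, 6)),
                       np.diff(rigid_body, axis=0)])
    twelve = np.hstack([rigid_body, deriv])
    return np.hstack([twelve, twelve ** 2])
\end{verbatim}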
\subsubsection{Additional processing steps} \label{sec_add_processing} We perform the following additional steps on the fMRI data (denominated \texttt{\{TASK\}\_\{ACQ\}\_Atlas\_MSMAll\_ hp2000\_clean.dtseries.nii} for resting state and \texttt{\{TASK\}\_\{ACQ\}\_Atlas\_MSMAll.dtseries.nii} for task-based fMRI in HCP-YA data): \begin{itemize} \item \textbf{\textit{Nuisance regression:}} This step is carried out for the task fMRI data only, where we regress out the 24-parameter motion regressors (6 rigid-body parameter timeseries, their backward-looking derivatives, and all squared resulting regressors), average time-series from the cerebro-spinal fluid (CSF), and the average time-series from the white matter. We have also provided processed data where this step is not performed. \item \textbf{\textit{Global signal regression (GSR):}} This step involves the removal of the global (or average) signal from the time series of each voxel using linear regression \cite{murphy2017towards}. We have produced two sets of connectomes, one where GSR has been performed and one where it has not been, as there is a lack of consensus in the scientific community regarding whether it should be performed or not \cite{liu2017global}\cite{hayasaka2013functional}\cite{xu2018impact}\cite{gotts2013perils}\cite{saad2012trouble}\cite{aquino2020identifying}. The results shown in the main text are based on connectomes where GSR has been performed as part of the preprocessing. This step was performed for all fMRI conditions (if specified). \item \textbf{\textit{Bandpass filtering:}} Lastly, we bandpass filter the time-series data using the following parameters: \begin{itemize} \item Minimum frequency ($f_{min}$) = 0.009 Hz \item Maximum frequency ($f_{max}$) = 0.08 Hz for resting state and 0.25 Hz for task-based fMRI \item Repetition time (\textit{TR}) = 0.72 s \item Butterworth filter order = 4 \end{itemize} This step was always performed on all the fMRI conditions. \end{itemize} \subsubsection{Brain atlases} The brain atlases used in this paper have been developed by Schaefer et al., 2018 \cite{schaefer2018local}. The Schaefer parcellations are further divisions of the resting state functional networks described by Yeo et al., 2011 \cite{yeo2011organization} and have different levels of granularity (100 to 1,000 brain regions, in steps of 100). We have also added the 14 subcortical regions as provided in the HCP-YA dataset (denominated \texttt{Atlas\_ROIs.2.nii.gz}) to each of these atlases, thus making them 114, 214,..., 1,014 brain regions. All the results described in this paper correspond to the Schaefer parcellations. \begin{figure}[h!] \centering \includegraphics[scale=0.3]{figure_panel_sample_connectome.png} \caption{Example of a single-session, single-subject, whole-brain functional connectome (FC) using the Schaefer100 cortical atlas together with 14 subcortical regions. Functional couplings between brain regions are estimated through Pearson's correlation coefficients between their corresponding BOLD time-series. Rows and columns of the FC are ordered by hemisphere (Left and Right), and further divided into resting-state functional networks denoted by different colors.} \label{fig_sample_connectome} \end{figure} \subsubsection{Estimation of functional connectomes} The next step is to extract the functional time series corresponding to each brain region of the parcellation, \textit{z}-score them, and ultimately estimate the functional connectomes. 
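As a schematic illustration, the bandpass filtering, \textit{z}-scoring, and FC estimation steps can be sketched in Python as follows (this snippet is illustrative only, operates on an already-parcellated time-series array, and uses the filter parameters listed above; the released data were produced with the tools described below):
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

TR = 0.72                    # repetition time (s)
fs = 1.0 / TR                # sampling frequency (Hz)
f_min, f_max = 0.009, 0.08   # resting state; f_max = 0.25 for tasks

# 4th-order Butterworth bandpass, applied forward-backward.
b, a = butter(4, [f_min, f_max], btype="bandpass", fs=fs)

def functional_connectome(ts):
    """ts: (n_frames, n_regions) parcellated time series.
    Returns the (n_regions, n_regions) Pearson correlation FC."""
    filtered = filtfilt(b, a, ts, axis=0)
    z = (filtered - filtered.mean(0)) / filtered.std(0)
    return np.corrcoef(z, rowvar=False)
\end{verbatim}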
In the released pipeline, the extraction and \textit{z}-scoring steps are conducted in Connectome Workbench (freely available at \url{https://www.humanconnectome.org/software/connectome-workbench}) with the commands \texttt{cifti-reduce}, \texttt{cifti-math}, and \texttt{cifti-parcellate}. The command \texttt{cifti-reduce} is used to compute the mean and standard deviation of the time series data, while \texttt{cifti-math} computes the \textit{z}-scored time series data using the mean and standard deviation previously computed. Lastly, \texttt{cifti-parcellate} parcellates the voxel-wise time series data into different brain regions as per the Schaefer parcellation, by averaging all the voxel-level time series belonging to each brain region. These parcellated time series have been made available and can be used to perform brain-region level activity and connectivity analysis. \par Finally, whole-brain functional connectomes are generated by computing the Pearson correlation coefficients between the time series of every pair of brain regions in the parcellated time series data computed in the earlier step. These connectomes are square and symmetric matrices that have been made available and can be used to perform functional connectome analyses. \textit{Figure \ref{fig_sample_connectome}} shows a sample connectome parcellated into the Schaefer100 atlas, with 114 brain regions (100 cortical + 14 subcortical). The figure names all the subcortical regions. \subsection{The differential identifiability framework ($\mathbb{I}\mathit{f}$)} \subsubsection{Identifiability matrix} In order to quantify the subject-level fingerprint from a cohort of functional connectomes, Amico and Goñi \cite{amico2018quest} proposed an object called the \textit{identifiability matrix}. This is a non-symmetric correlation matrix that compares the all-to-all test-retest functional connectomes from a cohort of unrelated subjects. Thus, every entry in this matrix is the Pearson correlation between test and retest functional connectomes in their vector form. Typically, the \textit{x}-axis of the identifiability matrix represents the test (or first run, or day 1) and the \textit{y}-axis the retest (or second run, or day 2). Importantly, the ordering of the subjects is kept the same in the rows and columns of this matrix (i.e., test and retest sessions), and hence the main diagonal contains the correlation values between the test and retest connectomes of the same subject. The higher the values in the main diagonal compared to the off-diagonal elements, the better the subject-level fingerprint of the dataset. \par We have further expanded on this intuitive interpretation of the identifiability matrix by using the connectomes from twin pairs instead of test-retest sessions of the same subject. In doing so, we extend the concept of identifiability and fingerprints beyond test/retest of the same subjects. Specifically, we have computed the identifiability matrix for two different cohorts of twins: monozygotic (MZ) twins and dizygotic (DZ) twins. In this case, rows and columns of the identifiability matrix represent the first and second twin of each pair, respectively, with the main diagonal values being the Pearson correlations between the FCs of the twin pairs, and the off-diagonal values being the correlations between the FCs of unrelated subjects from the twin cohort. \textit{Figure \ref{fig_identmat}} shows the differential identifiability matrices for sample cohorts of 20 test-retest, MZ twin pairs, and DZ twin pairs. \par \begin{figure}[h!]
\centering \includegraphics[scale=0.27]{panel_identmats.png} \caption{Differential identifiability matrices for sample cohorts of 20 Unrelated subject test-retest pairs, Monozygotic twin pairs, and Dizygotic twin pairs.} \label{fig_identmat} \end{figure} \subsubsection{Differential identifiability score} Using the identifiability matrix, Amico and Goñi \cite{amico2018quest} have proposed a measure called \textit{differential identifiability} ($I_{diff}$) to quantify the subject-level fingerprint. $I_{diff}$ quantifies the contrast between the self-similarity (main diagonal) and the similarity between different subjects (off diagonal). $I_{diff}$ can be computed as: \begin{equation} \label{eqn_idiff_self} I_{diff}^{subject} = (I_{self} - I_{others}) \times 100 \end{equation} \noindent where \par \noindent $I_{self}$ = self-similarity, the mean of the main diagonal values in the identifiability matrix \par \noindent $I_{others}$ = similarity between different subjects, the mean of the off-diagonal elements in the identifiability matrix \par As discussed above, the differential identifiability score can also be calculated for a twin cohort by pairing FCs of twin subjects instead of test-retest. In this case, the main diagonal elements of the identifiability matrix will be the correlations between the FCs of twin subjects (MZ and DZ) and the off-diagonal elements will be the correlations between unrelated subjects. The \textit{twin} differential identifiability can then be expressed as: \begin{equation} \label{eqn_idiff_twin} I_{diff}^{twin} = (I_{twin} - I_{others}) \times 100 \end{equation} This can be repeated separately for monozygotic (or identical) twins and dizygotic (or fraternal) twins. Monozygotic (MZ) twins are genetically 100\% identical whereas dizygotic (DZ) twins share, on average, 50\% of their genetic material \cite{prescott1995twin}\cite{blokland2013twin}. \subsubsection{PCA-based differential identifiability framework} In order to assess and compare the different Schaefer parcellations with each other in terms of their fingerprints, we have adapted the identifiability framework put forth by Amico and Goñi, 2018 \cite{amico2018quest}. They used group-level principal component analysis (PCA) to decompose functional connectomes into orthogonal \textit{principal components} and then reconstructed them with fewer and fewer principal components in order to find the reconstruction level where the differential identifiability score was maximum. Further developments based on the differential identifiability framework have recently been used to improve FC fingerprints across different scanning sites \cite{bari2019uncovering} as well as in network-derived measurements \cite{rajapandian2020uncovering}. PCA is a statistical procedure that transforms a set of observations of possibly correlated variables into a set of linearly uncorrelated variables, i.e., \textit{principal components}. PCA as a tool is widely used in the exploratory analysis of the underlying structure of data in pattern recognition \cite{deng2013nonlinear}\cite{hsieh2009novel} and denoising \cite{manjon2013diffusion}\cite{de2007denoising}, among other areas. \par \begin{figure}[h] \centering \includegraphics[scale=0.14]{figure_PCA_panel.png} \caption{Workflow scheme of the group-level principal component analysis (PCA) reconstruction procedure of individual functional connectomes.
The upper triangular values (as the matrices are symmetrical) of the test and retest FCs are vectorized, \textit{z}-transformed using the Fisher transform (MATLAB function \texttt{atanh}), and stacked into a matrix. This matrix is then decomposed using PCA to get as many components as connectomes in the cohort. The next step is to incrementally add principal components to the reconstruction, undo the Fisher transform (MATLAB function \texttt{tanh}) to get reconstructed functional connectomes, and compute the differential identifiability at each step. } \label{fig_PCA} \end{figure} We noticed that during the partial reconstructions of the functional connectomes using subsets of principal components, the FCs were not pure correlation matrices, as some of the values in the FCs fell outside the $[-1, 1]$ range. To avoid this numerical issue, we have adapted the identifiability framework as proposed by Amico and Goñi for this study by using the Fisher transform \cite{fisher1915frequency}, as shown in \textit{Figure \ref{fig_PCA}}. As can be seen in the figure, we vectorize the upper triangular values (as the matrices are symmetrical) of each FC (two FCs per subject for test-retest and one FC per subject for twins) before assembling them into a matrix where the columns are separate FCs and the rows are the vectorized connectivity patterns. Before this vectorization, we \textit{z}-transform the FCs by employing the Fisher transform (MATLAB function \texttt{atanh}). The Fisher transform has also been employed previously in several studies focusing on functional connectivity in the human brain \cite{negishi2011functional}\cite{hampson2010functional}\cite{tomasi2012resting}\cite{fox2013identification}. The assembled matrix of Fisher-transformed FCs is then decomposed using PCA into as many principal components as input FCs. In the next step, we reconstruct the FCs by incrementally adding one principal component at a time (in descending order of explained variance) and employing the inverse Fisher transform (MATLAB function \texttt{tanh}) in order to get back the Pearson correlation-based FCs. These FCs at each step of the reconstruction are then used to compute the identifiability matrix and, by extension, the differential identifiability score ($I_{diff}$). Through this procedure, we obtain a curve of $I_{diff}$ values for the whole range of principal components used in the reconstruction. It should also be noted that when all the principal components are used to reconstruct the FCs, we obtain the original input FCs, and thus the resulting $I_{diff}$ score corresponds to that of the original FCs. (A minimal code sketch of this procedure is given below.) \subsection{Assessment of brain fingerprints} Using the identifiability framework, we have assessed the brain fingerprints for three different cohorts -- test-retest of a group of unrelated subjects (for the rest of the paper, we refer to this as \textit{Unrelated subjects}), MZ twin pairs, and DZ twin pairs -- at all the different levels of granularity afforded to us by the Schaefer parcellations, and for all fMRI conditions. We have set up different experimental designs to evaluate the fingerprints at the whole-brain level and at the resting-state functional network level. We also examine the effect of scanning length and repetition time (TR) on the subject-level and twin fingerprints for resting state fMRI. This section contains the description of the different experimental designs.
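As a concrete reference, a minimal Python sketch of the PCA-based identifiability pipeline described in the previous subsection could read as follows (an illustrative re-implementation on our part, in Python rather than MATLAB; the column layout, placeholder data, and variable names are assumptions):
\begin{verbatim}
import numpy as np

def idiff(test, retest):
    """Differential identifiability I_diff (in %) from vectorized FCs,
    each of shape (n_edges, n_subjects), with matched subject order."""
    n = test.shape[1]
    # entry (i, j) = Pearson correlation between retest_i and test_j
    ident = np.corrcoef(retest.T, test.T)[:n, n:]
    i_self = np.mean(np.diag(ident))
    i_others = np.mean(ident[~np.eye(n, dtype=bool)])
    return 100.0 * (i_self - i_others)

# fcs: (n_edges x 2*n_subjects) matrix of vectorized upper-triangular FC
# values, columns ordered [test_1..test_n, retest_1..retest_n]; here we use
# placeholder values in (-1, 1) (6441 edges = Schaefer100 + 14 subcortical)
fcs = np.tanh(np.random.randn(6441, 2 * 20))
Y = np.arctanh(fcs)                            # Fisher z-transform
Y_mean = Y.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)

n_sub = fcs.shape[1] // 2
idiff_curve = []
for k in range(1, len(S) + 1):                 # add one component at a time
    Y_k = Y_mean + U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
    R = np.tanh(Y_k)                           # undo the Fisher transform
    idiff_curve.append(idiff(R[:, :n_sub], R[:, n_sub:]))
\end{verbatim}
Reconstructing with all components returns the original input FCs, so the last point of \texttt{idiff\_curve} corresponds to the $I_{diff}$ of the original data, as noted above; for the twin cohorts, the test/retest columns are simply replaced by the FCs of the two twins of each pair.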
\par For consistency, we have only included in the PCA identifiability framework those subjects from each of the three datasets (unrelated subjects, MZ twins, and DZ twins) for whom all the task test/retest (both runs) functional connectomes are available. Thus, there are 428 unrelated subjects, 116 MZ twin pairs, and 63 DZ twin pairs included in the PCA reconstruction. \subsubsection{Whole brain differential identifiability} For a given parcellation granularity and a given fMRI condition, whole-brain FCs are used to compute the differential identifiability profiles. \subsubsection{Comparison of individual and twin fingerprint} To facilitate a meaningful comparison of the differential identifiability profiles between the Unrelated subjects, MZ twins, and DZ twins, we have run the identifiability framework on a subset of the Unrelated subjects and MZ twin data so that the number of FCs in these cohorts is equal to the number of FCs in the DZ twin dataset (as DZ is the smallest dataset). This analysis was performed only for the Schaefer400 parcellation for all fMRI conditions. We have conducted the analysis for 100 bootstrap runs of 80\% connectome pairs for each of the three cohorts. We hypothesize an ordinal presence of fingerprints that is highest for test-retest, lower for MZ twins, and lowest for DZ twins. \par For each cohort separately, we test a null model where the rows of the identifiability matrix are shuffled before computing the identifiability score. We perform the bootstrap runs for the null models as well. This is to test whether the identifiability values we obtain from the identifiability framework applied to the three cohorts are a matter of chance. \subsubsection{Functional network-specific differential identifiability} In order to quantify the amount of fingerprint specific to a functional network, we assess the differential identifiability profiles by considering only the brain regions inside a specific functional network. For the network definitions, we use the 7 resting-state networks (RSNs) provided by Yeo et al., 2011 \cite{yeo2011organization}. We should highlight that the PCA decomposition for this experiment is identical to when we explore whole-brain differential identifiability as described above, but for the differential identifiability calculation we only include the brain regions that belong to a specific RSN. \subsubsection{Effect of scanning length on differential identifiability} \label{sec_scan_len} In order to test the effect of scanning length (in terms of the number of frames) on differential identifiability, we compute the differential identifiability profiles by gradually increasing the sequential number of fMRI volumes used in constructing the FCs. For this analysis, we have used resting-state scans as they have the longest scan duration (approx. 15 minutes) and the highest number of frames (1200). This allows us to study the effect of scanning length on differential identifiability for a wide range (50 to 1200, in steps of 50). \section{Results} \label{sec_results} \subsection{The HCP-YA Functional Connectomes Data Release} The results of the processing performed in \textit{Section \ref{sec_methods}} have been made publicly available at \censor{\url{https://rdl-share.ucsd.edu/message/0Y3GKJM7a2CR2FgMSbk4st}} (functional connectomes) and \censor{\url{https://rdl-share.ucsd.edu/message/Lqi0Oj0fALIrh4lv0Z5L06}} (parcellated time-series). Here, we outline the specific data products that have been produced and how to access them.
The data release includes FCs and time-series parcellated according to the Schaefer atlases \cite{schaefer2018local} with different levels of granularity. For ease of downloading, we have created separate compressed files for each of the Schaefer parcellations, GSR/non-GSR status, and connectome/timeseries data. Additionally, for the task-based fMRI, we also have FCs with and without the 26 regressors that correspond to motion and average signals from white matter and CSF (see details in \textit{Section \ref{sec_add_processing}}). Each compressed file includes the FCs or time-series (depending on the selection) of all the fMRI conditions (resting state and 7 tasks) included in the HCP dataset that we have parcellated into the selected granularity of Schaefer parcellation. \textit{Figure \ref{fig_data_structure}} shows an example of the data structure for connectome and timeseries data processed with GSR. \par \begin{figure}[h!] \centering \includegraphics[scale=0.4]{panel_data_structure.png} \caption{Sample data structure for functional connectomes and parcellated timeseries.} \label{fig_data_structure} \end{figure} \subsection{Whole brain differential identifiability} \subsubsection{Unrelated subjects test-retest} We first ran and assessed the differential identifiability framework on Unrelated subjects. This assessment is an extension of Amico and Goñi \cite{amico2018quest}, where a smaller cohort was evaluated (100 unrelated subjects, as opposed to 428) using a single parcellation scheme \cite{glasser2016multi}. \textit{Figure \ref{fig_unrelated}} shows the $I_{self}$, $I_{others}$, and $I_{diff}$ profiles for all the Schaefer parcellations for the 7 tasks and resting state included in the HCP-YA dataset. \par Please note that in all the plots ($I_{self}$, $I_{others}$, and $I_{diff}$), the values for the maximum number of principal components correspond to the full reconstruction of the FCs with all the variance retained, i.e., original FCs. The number of principal components for which $I_{diff}$ is highest is considered the optimal point of reconstruction. As can be seen, the $I_{self}$ value decreases with increasing granularity for original FCs. However, as we reconstruct with fewer principal components, the $I_{self}$ values pass through a point of inflection (where the $I_{self}$ values are approximately equal for all granularities) and lead to the optimal point of reconstruction, where the reverse is true; i.e. $I_{self}$ increases with increasing granularity. $I_{others}$, on the other hand, is consistently lower for higher granularity across the whole range of principal components. Finally, $I_{diff}$ values for original FCs are either approximately equal (e.g., in MOTION) or increase negligibly with increasing granularity. However, at the optimal point of reconstruction, the $I_{diff}$ score always increases with increasing granularity and the difference between the $I_{diff}$ curves is much more pronounced. \par \begin{landscape} \begin{figure}[htbp!] \centering \includegraphics[scale=0.15]{panel_unrelated.png} \caption{$I_{self}$, $I_{others}$, and $I_{diff}$ curves for Schaefer 100 to 900 parcellations for all the fMRI conditions in HCP, for unrelated subjects.
The higher the granularity of the Schaefer parcellation, the higher the test-retest identifiability regardless of the fMRI condition.} \label{fig_unrelated} \end{figure} \end{landscape} \subsubsection{Monozygotic twins} Monozygotic (or identical) twins share 100\% of their genetic material \cite{blokland2013twin}\cite{prescott1995twin}. The differences between the MZ twins thus arise from their having different environments. \par Both $I_{twins}$ and $I_{others}$ for MZ twins decrease with increasing granularity for the original FCs. $I_{diff}$ scores are approximately equal across all granularities for original FCs. However, as for the Unrelated subjects, the $I_{diff}$ score increases with increasing granularity and the difference between the $I_{diff}$ profiles for MZ twins is prominent at the optimal reconstruction. \begin{landscape} \begin{figure}[htbp!] \centering \includegraphics[scale=0.15]{panel_MZ.png} \caption{$I_{self}$, $I_{others}$, and $I_{diff}$ curves for Schaefer 100 to 900 parcellations for all the fMRI conditions in HCP, for monozygotic (MZ) twin subjects. The higher the granularity of the Schaefer parcellation, the higher the MZ twin identifiability regardless of the fMRI condition, although the differential identifiability of MZ twins is lower than that of test-retest of the same subject.} \label{fig_MZ_wb} \end{figure} \end{landscape} \subsubsection{Dizygotic twins} Dizygotic (or fraternal) twins share, on average, 50\% of their genetic material \cite{blokland2013twin}\cite{prescott1995twin}. Thus, genetically speaking, they are siblings, but often their environment is more similar than that of non-twin siblings as they are born at the same time \cite{mark2017using}. \par Similar to the results of MZ twins, both $I_{twins}$ and $I_{others}$ for DZ twins also decrease with increasing granularity for the original FCs. $I_{diff}$ scores are approximately equal across all granularities for original FCs and, similar to Unrelated subjects and MZ twins, the $I_{diff}$ score increases with increasing granularity at the optimal point of reconstruction. However, the difference between the $I_{diff}$ profiles for DZ twins is not as prominent as that for Unrelated subjects or MZ twins, but is still noticeable. \begin{landscape} \begin{figure}[htbp!] \centering \includegraphics[scale=0.15]{panel_DZ.png} \caption{$I_{self}$, $I_{others}$, and $I_{diff}$ curves for Schaefer 100 to 900 parcellations for all the fMRI conditions in HCP, for dizygotic (DZ) twin subjects. The higher the granularity of the Schaefer parcellation, the higher the DZ twin identifiability regardless of the fMRI condition. The differential identifiability of DZ twins is lower than that of both MZ twins and test-retest of the same subject.} \label{fig_DZ_wb} \end{figure} \end{landscape} \subsubsection{Comparison of individual and twin fingerprint} In order to facilitate a meaningful comparison across the three cohorts (Unrelated subjects, MZ twins, and DZ twins), we chose a random subset of Unrelated subjects and a separate random subset of MZ twins so that their numbers match the sample size of DZ twins (the DZ twin cohort has the smallest sample size of the three). In \textit{Figure \ref{fig_continuum}}, we have plotted the $I_{diff}$ profiles for the Unrelated subjects, MZ twins, and DZ twins cohorts for all tasks and resting state for the Schaefer400 parcellation. \begin{figure}[htbp!]
\centering \includegraphics[scale=0.25]{panel_continuum.png} \caption{$I_{diff}$ profiles for the three cohorts -- Unrelated subject test-retest (red), Monozygotic twins (blue), and Dizygotic twins (orange) -- for all fMRI conditions using the Schaefer400 parcellation. The cohort sizes have been matched in order to facilitate comparisons between them. The figure also includes results for the null models based on the three cohorts. Shaded areas represent the variability (5-95 percentile) of $I_{diff}$ scores across the 100 samples without replacement.} \label{fig_continuum} \end{figure} As can be seen in \textit{Figure \ref{fig_continuum}}, the $I_{diff}$ scores are the highest for Unrelated subjects across all the fMRI conditions for the entire range of principal components. These are then followed by the $I_{diff}$ scores for MZ twins and DZ twins, respectively. The three curves at the bottom of the figures are results for the null models. As can be seen, the $I_{diff}$ scores for these null models are approximately zero for the entire range of the principal components (two of the curves are mostly hidden behind the third). \subsubsection{Functional network-specific differential identifiability} In order to assess the level of fingerprint in specific functional networks of the brain, we have computed the $I_{diff}$ profiles of each of the 7 resting-state networks, as proposed by Yeo et al. \cite{yeo2011organization}. \textit{Figure \ref{fig_idiff_yeo}} shows the differential identifiability profiles for the 7 RSNs using resting-state FCs for all Schaefer parcellations. Please note that the decomposition/reconstruction based on PCA is carried out on whole-brain FCs and not on isolated functional networks. In other words, results presented in this section belong to the same decomposition/reconstruction procedure as the ones shown in \textit{Figure \ref{fig_unrelated}}. \par When assessing $I_{diff}$ in an isolated fashion on each RSN, it can be observed that increasing the granularity of the parcellation increases $I_{diff}$ at any level of reconstruction. Notice that some RSNs present higher levels of fingerprints than others for all granularities. For any RSN (with the only exception of VISUAL), at the optimal level of reconstruction the $I_{diff}$ scores for any given Schaefer parcellation are higher than those achieved when computing the $I_{diff}$ scores using whole-brain FCs (see \textit{Figure \ref{fig_unrelated}}). \begin{figure}[htbp!] \centering \includegraphics[scale=0.35]{figure_idiff_yeonets.png} \caption{Functional network-specific $I_{diff}$ curves for Schaefer 100 to 900 parcellations for resting state connectomes. The higher the granularity, the higher the differential identifiability in most cases. This does not hold true when the number of brain regions included in a functional network is too small. For example, there are fewer than 10 brain regions included in the limbic functional network for the Schaefer 100 parcellation, which causes the $I_{diff}$ curve to be very unstable. } \label{fig_idiff_yeo} \end{figure} \subsubsection{Effect of scanning length on differential identifiability} \label{sec_scanlen_effect} For the next analysis, we assessed the effect of different lengths of acquisition time on the differential identifiability of functional connectomes. In order to do so, we subsampled different lengths of time series from the resting state fMRI acquisition and applied the identifiability framework on the resulting connectomes.
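For reference, this subsampling step can be sketched as follows (illustrative Python on our part; placeholder data and variable names are assumptions):
\begin{verbatim}
import numpy as np

def fc_from_first_frames(ts, n_frames):
    """FC from the first n_frames volumes of a (time x regions) series."""
    seg = ts[:n_frames]
    seg = (seg - seg.mean(axis=0)) / seg.std(axis=0)
    return np.corrcoef(seg, rowvar=False)

# ts_test, ts_retest: parcellated resting-state series, 1200 frames each
# (placeholder data; in practice they come from the released time series)
ts_test = np.random.randn(1200, 114)
ts_retest = np.random.randn(1200, 114)

for n_frames in range(50, 1201, 50):      # 50 to 1200, in steps of 50
    fc_t = fc_from_first_frames(ts_test, n_frames)
    fc_r = fc_from_first_frames(ts_retest, n_frames)
    # vectorize upper triangles, stack across subjects, and run the
    # PCA identifiability framework as sketched above
\end{verbatim}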
We repeated this experiment for all the Schaefer parcellations. \textit{Figure \ref{fig_reduced_tseries}} shows the original and optimal differential identifiability for different numbers of time points and for all Schaefer parcellations, along with the difference between them. Also observe that the difference between original and optimal $I_{diff}$ increases with the granularity of the brain atlas. From the plot showing the difference between the original and optimal $I_{diff}$ scores, we can also note that, for every combination of parcellation and scanning length, the optimal differential identifiability is always higher than the original one. The different levels of granularity become distinguishable from each other in terms of their $I_{diff}$ scores at shorter scanning lengths for the optimal reconstruction ($>$150 timepoints) than for the original FCs ($>$300 timepoints). \begin{figure}[htbp!] \centering \includegraphics[width=\textwidth]{figure_panel_red_tseries.png} \caption{Original and optimal $I_{diff}$ values for resting state in all Schaefer parcellations for different scanning lengths, along with the difference between the two. For every Schaefer parcellation, we mimic a shorter scanning length by sampling from the entire rs-fMRI scan (50:50:1190 timepoints), construct functional connectomes from these shortened scanning lengths, and run the PCA identifiability framework in order to study their stability. The \textit{x}-axes of the plots show the scanning length, both in terms of minutes and seconds and the number of timepoints. } \label{fig_reduced_tseries} \end{figure} \section{Discussion} In this paper, we have discussed the processing pipeline we have developed to extract brain region-level time-series and the subsequent functional connectomes for each session and all the fMRI conditions of all the subjects in the Human Connectome Project -- Young Adult (HCP-YA) dataset. We have made these time-series and functional connectomes datasets available, parcellated according to the Schaefer atlases \cite{schaefer2018local} that afford us different levels of granularity (100 to 900 brain regions, in steps of 100), combined with subcortical regions (7 regions in each hemisphere; 14 in total). We have also provided a quantification of the individual and twin (monozygotic and dizygotic) fingerprint present in the FCs (of each fMRI condition separately) using an extension of the identifiability framework proposed by Amico and Goñi, 2018 \cite{amico2018quest}. Briefly, results show the presence of fingerprints at three different levels of genetic and environmental similarity (as depicted by Unrelated subjects greater than MZ twins, greater than DZ twins; \textit{Figure \ref{fig_continuum}}). These results are present for all fMRI conditions evaluated and with different sensitivity to parcellation granularity. We also found that the identifiability framework not only uncovers individual and twin fingerprints in FCs, but, importantly, also enables us to benefit from the higher levels of fingerprints present in FCs corresponding to higher levels of granularity (\textit{Figures \ref{fig_unrelated}, \ref{fig_MZ_wb}}, and \textit{\ref{fig_DZ_wb}}). Subsequently, we discovered that different levels of fingerprint are present for the various resting-state networks, with the same pattern of higher levels of granularity enabling us to uncover higher fingerprints (\textit{Figure \ref{fig_idiff_yeo}}).
Finally, we found that the amount of individual fingerprint in resting-state FCs increases with increasing scanning length, but saturates after $\sim$13 minutes of scanning (\textit{Figure \ref{fig_reduced_tseries}}). \par We have assessed the individual-level fingerprint between the test and retest FCs of the cohort of Unrelated subjects using the identifiability framework (see \textit{Figure \ref{fig_unrelated}}). Consistent with previous investigations \cite{amico2018quest}\cite{finn2015functional}\cite{bari2019uncovering}\cite{abbas2020geff}\cite{pallares2018extracting}\cite{liu2018chronnectome}\cite{byrge2019high}\cite{mueller2013individual}\cite{faskowitz2020edge}\cite{venkatesh2020comparing}\cite{satterthwaite2018personalized}\cite{gratton2018functional}\cite{mars2018connectivity}, we have found that FCs have a recurrent and reproducible individual fingerprint across all the fMRI conditions. This is all the more important as, in this study, the sample size of upwards of 400 Unrelated subjects is considerably larger than in previous studies (which usually used 100 unrelated subjects from the HCP-YA dataset). As can be seen in \textit{Figure \ref{fig_unrelated}}, without implementing the identifiability framework and hence assessing original FCs, there is little to no difference in $I_{diff}$ scores between the different parcellations for each of the fMRI conditions. This would suggest that the granularity of the parcellation is inconsequential in terms of individual fingerprint, prompting one to use the smallest parcellation, as it would lead to a lower computational load. However, the difference between the fingerprints of the different parcellations only becomes apparent when the identifiability framework is applied (see \textit{Figure \ref{fig_unrelated}}). In particular, it can be observed that the higher the granularity of a parcellation, the higher the optimum test-retest fingerprint achieved for the cohort of Unrelated subjects. In other words, the potential to uncover fingerprints by fine-grained parcellations of the cortex is only unleashed when using the identifiability framework. On a related note, higher granularity was associated with a larger number of principal components leading to the highest $I_{diff}$ scores. This might be an indication of higher granularity parcellations containing more information about the individual fingerprint. \par A subset of the HCP-YA dataset is made up of monozygotic (MZ) and dizygotic (DZ) twin pairs (see \textit{Table \ref{table_numconns}}), so the next step was to utilize this and quantify the twin-fingerprint in the dataset, which has not been done before. In this analysis, we adapt the identifiability framework by using, for each fMRI condition, one single-session functional connectome from each of the twins in a pair (MZ and DZ, separately) in lieu of test and retest of the same subject. We found the presence of a twin-fingerprint (both for MZ and DZ twins) in all fMRI conditions. It is noteworthy that the twin-fingerprint was much higher than expected by chance, while at the same time being lower than the individual fingerprint (based on test/retest Unrelated subjects; see \textit{Figure \ref{fig_continuum}}). Similar to the cohort of Unrelated subjects, the identifiability framework not only contributed to uncovering higher twin-fingerprints in all fMRI conditions, but also enabled us to utilize the higher granularity of the parcellations to achieve higher $I_{diff}$ scores.
In particular, the $I_{diff}$ profiles are similar across the three cohorts, with the cohort of Unrelated subjects achieving the highest peaks, followed by MZ and DZ twins, respectively (see \textit{Figures \ref{fig_unrelated}, \ref{fig_MZ_wb}, \ref{fig_DZ_wb}}, and \textit{\ref{fig_continuum}}). This ordinal structure of the fingerprint across the three cohorts (namely Unrelated subjects, MZ twins, and DZ twins) can be explained in terms of the genetic and environmental similarity between the pairs of connectomes across two sessions. In particular, for Unrelated subjects, the genetic information is 100\% identical and the environment is also highly shared across sessions, as the scans belong to the same subject. On the other hand, for the MZ twins, even though they share 100\% of their genetic information, the environment is shared to a much lower degree, as the twins are two separate individuals. Lastly, DZ twins share (on average) 50\% of their genetic information and the environment shared is similar to that of MZ twins \cite{koeppen2003twins}. Please note that, in \textit{Figure \ref{fig_continuum}}, we selected a subset of the Unrelated subjects and MZ twins cohorts to match the number of pairs of the DZ twins (69 pairs), in order to facilitate a meaningful comparison. \par The presence of a substantial twin-fingerprint (MZ and DZ) is a compelling argument for utilizing only a cohort of unrelated subjects when conducting studies that rely on brain fingerprinting and differences between individuals. This is because including twin pairs or siblings in such studies can confound the results, as evidenced by the findings of this paper. An alternative strategy could be to keep both twins from a pair either in the training or the validation dataset. If one of the twins is used for training and the other for validation, it might lead to a false increase in the prediction accuracy of the model under consideration \cite{seguin2020network}. \par In their seminal work, Finn et al., 2015 \cite{finn2015functional} observed that some of the functional networks of the brain contained a higher fingerprint than the whole-brain fingerprint at rest and between fMRI tasks (used as test/retest). In order to replicate and extend this result, we quantified the amount of individual fingerprint for the 7 resting-state networks (or RSNs, as proposed by Yeo et al., 2011 \cite{yeo2011organization}) using the (single-session) resting-state fMRI condition across all available levels of granularity. Consistent with previous findings, \textit{Figure \ref{fig_idiff_yeo}} shows that some of the RSNs achieve a higher fingerprint (e.g., somatomotor and frontoparietal) than others (e.g., visual and limbic) using original FCs. Similar to the whole-brain scenarios, the identifiability framework uncovers the individual fingerprint in all the RSNs and enables us to extract the higher fingerprint present in the higher granularity of the parcellations. Another effect of using the identifiability framework is that the amount of fingerprint present in each RSN becomes much more uniform, with all RSNs (except visual) reaching $I_{diff} \approx 35$ for Schaefer900. \par Lastly, \textit{Figure \ref{fig_reduced_tseries}} shows the effect of scanning length on the identifiability of resting-state FCs. We have chosen to run this experiment only on resting-state data as it is the longest fMRI acquisition in the HCP-YA dataset.
As can be seen from the profiles of original $I_{diff}$ scores for all the Schaefer parcellations, the difference between the identifiability of different parcellations is negligible up to 300 timepoints. The same is not true for the optimal $I_{diff}$, as the profiles start diverging from each other as early as 150 timepoints. The optimal $I_{diff}$ achieved is also higher than the original $I_{diff}$ at every scanning length, but the difference is more pronounced for higher granularity parcellations. Overall, we observe that a longer scanning duration leads to a higher fingerprint, saturating at around 13 minutes for resting-state fMRI. In addition, the identifiability framework not only allows us to uncover a higher fingerprint for the same scanning duration, but also enables us to utilize higher granularity parcellations to achieve a higher fingerprint. \par Amongst the limitations of this work is the fact that a dataset such as HCP-YA inherently does not include a large cohort of twin subjects or different age groups. This has limited our ability to ask specific research questions about the fingerprint between twins or across the lifespan. Also, specific to HCP-YA, the task fMRI acquisition lengths are heterogeneous and not as long as the resting state acquisition (see \textit{Table \ref{table_task_summary}}). The availability of longer task-based fMRI sequences would allow researchers to analyze the effect of scanning length and TR on the stability of the connectomes, similar to the analysis of resting state fMRI data carried out in \textit{Section \ref{sec_scanlen_effect}}. Lastly, we have not studied the effect of different processing pipelines on identifiability measures \cite{aquino2020identifying}\cite{vytvarova2017impact}. \par In the future, similar efforts could be pursued to also process the diffusion weighted imaging data included in the Human Connectome Project Young Adult dataset and make the corresponding subject-level structural connectomes available for public use. Researchers could also provide processed versions of other state-of-the-art brain connectivity datasets, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI) \cite{petersen2010alzheimer}, Adolescent Brain Cognitive Development (ABCD), HCP--Lifespan, and HCP--Aging, among others, together with their corresponding fingerprint analyses. \par \section*{Competing Interests} None of the authors have any financial competing interests related to this work. \section*{Acknowledgement} Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. This work was supported in part by NIH R01 EB022574. \par We would like to thank the Lawrence Livermore National Laboratory Data Science Institute's Open Data Initiative and Rushil Anirudh for coordinating the data release, along with the University of California, San Diego Library Digital Collections for hosting the data. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.
\par The authors acknowledge financial support from NIH R01EB022574 (JG and LS), NIH R01MH108467 (JG), Indiana Alcohol Research Center P60AA07611 (JG), Purdue Discovery Park Data Science Award "Fingerprints of the Human Brain: A Data Science Perspective" (JG), and from the SNSF Ambizione project PZ00P2\_185716 "Fingerprinting the brain: network science to extract features of cognition, behavior and dysfunction" (EA). \bibliographystyle{unsrt}
{ "attr-fineweb-edu": 1.381836, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcDHxK7Tt6CA5F7kT
\section{Introduction} The precision measurements in the leptonic sector at LEP1/SLC agree with the Standard Model (SM) predictions at the level of a few permille \cite{LP95}, which leads to drastic constraints on any type of New Physics (NP) manifestation. As of today, the situation in the quark sector is slightly different. Through measurements of the $Z\to b\bar b$ and $Z\to c\bar c$ widths and asymmetries, LEP and SLC have given indications for possible departures from the SM predictions for $b$ and $c$ couplings at the level of a few percent. In the $b\bar b$ case such anomalies could be interpreted as a signal for NP in the heavy quark sector, driven for example by the large value of the top mass, whose effects already appear at standard level \cite{mt2SM}. Several models of this type have been proposed (anomalous top quark properties \cite{Yuan}, \cite{GRV}, ETC models \cite{ETC}, anomalous gauge boson couplings \cite{RVAGC}, supersymmetric contributions, new Higgses, gauginos,..\cite{SUSY}, \cite{Comelli}). A common feature of all these explanations is that they fail to explain the possible existence of $c\bar c$ anomalies, which cannot be enhanced by the large top mass. So it seems more difficult to describe the presence of anomalies in both $b\bar b$ and $c\bar c$ channels, without drastically modifying the fermionic sector, for example through the mixing of quark multiplets with higher fermion representations as proposed in \cite{Ma}.\par In this paper we would like to propose a simple explanation based on the existence of a \underline{hadrophilic Z'} vector boson, i.e. one which would couple universally to quarks more strongly than to leptons. We shall not propose here a specific model, although the concept of a $Z'$ differently coupled to quarks and to leptons has already been considered in the past \cite{Georgi}. We shall limit ourselves to extracting from LEP1/SLC experiments several suggestions about the required $Z'$ properties. To achieve this, we shall first rely on a model independent framework for the analysis of $Z-Z'$ mixing effects. This is available from a previous work \cite{LRV} in which the $Z'$ couplings to each fermion-antifermion pair were left free. Working in this spirit, we will then derive in section 2 experimental information on the $Z-Z'$ mixing angle $\theta_M$ and on the $Z'f\bar f$ couplings, showing that, indeed, the anomalies in $b\bar b$ and $c\bar c$ production can be described by such an hadrophilic $Z'$. In particular, from the absence of anomaly in the total hadronic width $\Gamma_{had}$ at $Z$ peak \underline{we shall explain in a natural way the fact that the SM departures in $\Gamma_b$ and in $\Gamma_c$ have opposite signs}. The next relevant question to be answered is that of whether the values of the $Z'$ couplings that we determined in this way do not contradict any already available experimental constraint. In particular, we shall focus in section 3 on the significant excess of dijet events for large masses (above $500$ GeV) at CDF \cite{CDF}.
We shall show that this phenomenon could be naturally explained in terms of an hadrophilic $Z'$, whose mass lies in the range between $800$ GeV and $1$ TeV and whose couplings are restricted by the requirement that the $Z'$ behave like a not too wide resonance, identifiable in different processes.\par Our second step will then consist of examining in section 4 the consequences of this solution for other processes, in particular possible $Z'$ effects in $e^+e^-\to f\bar f$ at LEP2.\par Here the natural final channels to be considered in our case are the hadronic ones, where the $Z'$ effect would depend on the product of $Z'$ couplings to leptons times $Z'$ couplings to quarks. In this paper, we shall consider the pessimistic case where the leptonic $Z'$ couplings are not sufficiently strong to give rise to visible effects in the leptonic channel. Starting from this conservative assumption, we shall show that it would still be possible to observe effects in hadronic channels. We will proceed in two steps. First, in a model-independent way, we shall establish the domain of $Z'b\bar b$ couplings that would lead to visible deviations in the $b\bar b$ cross section $\sigma_b$ and in the forward-backward asymmetry $A^b_{FB}$. We shall show that this domain largely overlaps with the ones suggested by our analysis of LEP1/SLC and CDF results. We shall then examine the total hadronic cross section $\sigma_{had}$ at LEP2 and we shall find again that the domain of $Z'b\bar b$ and $Z'c\bar c$ couplings leading to visible effects contains the values selected by LEP1/SLC and CDF.\par We can therefore conclude that, if a hadrophilic $Z'$ is at the origin of the presently observed anomalies, a quantitative study of these three hadronic observables at LEP2 would allow us to confirm this relatively simple explanation. In this case, it would become relevant and meaningful to construct a full and satisfactory theoretical model.\par \newpage \section{Analysis of LEP1/SLC results in terms of $Z-Z'$ mixing.} We consider $Z-Z'$ mixing effects at the $Z$ peak in a model independent way following the procedure given in ref.~\cite{LRV}. As is well known, the two relevant effects consist of a modification of the $Z$ couplings to fermions, proportional to a mixing angle $\theta_M$, and of a $Z$ mass shift which induces a contribution to the $\delta_{\rho}$ parameter: \begin{equation} \delta^{Z'}_{\rho} \simeq \theta^2_M {M^2_{Z'}\over M^2_Z} \end{equation} The quantity $\delta^{Z'}_{\rho}$ is \underline{positive} and can be extracted from the ratio $c_w^2 \equiv {M^2_W\over M^2_Z}$ and its comparison to the quantities measured at the $Z$ peak and defined in the conventional way \cite{epsilon}. From the latest available data \cite{LP95} and under the assumption that no other significant contributions to $\delta_{\rho}$ (e.g. from one extra $W'$) exist, we obtain at two standard deviations: \begin{equation} 0 \leq \delta^{Z'}_{\rho} \leq +0.005 \label{rho} \end{equation} \noindent In this way we derive an upper bound on the mixing angle: \begin{equation} |\theta_M| < \sqrt{0.005} {M_Z\over M_{Z'}} \label{tmmax} \end{equation} \noindent Note that for our forthcoming qualitative analysis, values of $\delta^{Z'}_{\rho}$ not unreasonably larger than the limit of eq.~(\ref{rho}) would not modify our conclusions. We shall come back on this point later.
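As a numerical illustration (anticipating the mass range that will emerge from the CDF analysis of section 3), for $M_{Z'}=900$ GeV eq.~(\ref{tmmax}) gives \[ |\theta_M| < \sqrt{0.005}\ \Big({91.19\ {\rm GeV} \over 900\ {\rm GeV}}\Big) \simeq 7\times 10^{-3}. \]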
We then normalize the $Z'f\bar f$ couplings: \begin{equation} -i{e(0)\over 2s_1c_1}\gamma^{\mu}[g'_{Vf}-g'_{Af}\gamma^5] \end{equation} \noindent in the same way as the $Zf\bar f$ ones: \begin{equation} -i{e(0)\over 2s_1c_1}\gamma^{\mu}[g_{Vf}-g_{Af}\gamma^5] \end{equation} \noindent with $g_{Vl}=-{v_1\over2}$; $g_{Al}=-{1\over2}$; $g_{Vf}=I^3_f-2s^2_1Q_f$; $g_{Af}=I^3_f$; $v_1=1-4s^2_1$; $s^2_1\equiv1-c^2_1\simeq 0.2121$ from $s^2_1c^2_1=\pi\alpha(0)/\sqrt2 G_{\mu}M^2_Z$.\par This allows us to define the ratios: \begin{equation} \xi_{Vf} \equiv {g'_{Vf}\over g_{Vf}} \ \ \ \ \ \ \ \ \ \ \xi_{Af} \equiv {g'_{Af}\over g_{Af}} \end{equation} \noindent which will significantly measure the magnitude of the $Z'f\bar f$ couplings. Keeping in mind the fact that $g_{Vl}$ is suppressed by $v_1\simeq0.1516$, we will consider as "natural" (i.e. non enhanced) magnitudes $\xi_{Al}\simeq1$, $\xi_{Vf}\simeq1$, $\xi_{Af}\simeq1$ for $f\neq l$, but $\xi_{Vl}\simeq6$.\par The total fermionic $Z'$ width is given by \begin{equation} \Gamma^{ferm}_{Z'} ={\alpha M_{Z'}\over 12s^2_1c^2_1}\sum_f N_f(1-{4m^2_f\over M^2_{Z'}})^{1/2}[\xi^2_{Vf}g^2_{Vf}(1+{2m^2_f \over M^2_{Z'}})+\xi^2_{Af}g^2_{Af}(1-{4m^2_f\over M^2_{Z'}})] \label{gamzp} \end{equation} \noindent $N_f$ being the lepton ($=1$) or quark ($=3$) colour factor. The $Z-Z'$ mixing effects on $Z$ peak observables ($Z$ partial widths and asymmetries), due to $\delta^{Z'}_{\rho}$ and to the modifications of the $Z$ couplings (of the form $\theta_M g'_{V,A}$), are analyzed in Appendix A. Using the most recent LEP and SLC data \cite{LP95} we obtain information on the $Z'$ couplings. They are summarized below in the form of allowed bands, at two standard deviations, assuming that $|\theta_M|$ saturates the bound, eq.~(\ref{tmmax}) (so in a sense these are minimal bands), with the two possible signs $\eta_M=\pm1$.\par \underline{$Z'l\bar l$ couplings} \begin{equation} \eta_M \xi_{Vl} \simeq (-2.25 \pm 6.25)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \ \ \ \ \ \ \ \ \ \eta_M \xi_{Vl} \simeq (+1.75 \pm 6.25)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \begin{equation} \eta_M \xi_{Al} \simeq (-0.2 \pm 0.5)({M_{Z'}\over 1 TeV}) \end{equation} \underline{$Z'b\bar b$ couplings} \begin{equation} \eta_M \xi_{Vb} \simeq (-3.45 \pm 20.72)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \ \ \ \ \ \eta_M \xi_{Vb} \simeq (-24.24 \pm 25.98)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \label{lep} \end{equation} \begin{equation} \eta_M \xi_{Ab} \simeq (+4.58 \pm 9.84)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \ \ \ \ \ \ \eta_M \xi_{Ab} \simeq (+14.54 \pm 12.47)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \label{slc} \end{equation} \underline{$Z'c\bar c$ couplings} \begin{equation} \eta_M \xi_{Vc} \simeq (-6.94 \pm 26.60)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \ \ \ \ \ \ \eta_M \xi_{Vc} \simeq (-20.38 \pm 40.62)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \begin{equation} \eta_M \xi_{Ac} \simeq (-7.88 \pm 8.46)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \ \ \ \ \ \ \eta_M \xi_{Ac} \simeq (-6.01 \pm 9.70)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} Because of the various uncertainties, both theoretical (the assumption about $|\theta_M|$) and experimental (disagreements for various measurements and large errors in the quark cases), we take these results just as indicative and we call the resulting values \underline{suggested $Z'$ couplings}.
Several important remarks are nevertheless in order.\par First, as expected, lepton couplings are strongly constrained: $\xi_{Vl}$ and $\xi_{Al}$ lie within the "natural" range mentioned above.\par Secondly, on the contrary, there is room for very large values for quark couplings. In one case, from SLC data, a definite non zero value for $\xi_{Ab}$ is suggested. Obviously the extreme quoted values are to be taken as purely indicative. A priori we would not trust values larger, for example, than the QCD strength ($\alpha_s\simeq0.12$), which implies $|\xi_{Af}|<7$ and $|\xi_{Vf}|<7/v_f$, i.e. $|\xi_{Vb}|<10$ and $|\xi_{Vc}|<16$. We will \underline{conventionally} define as "reasonable" the values of the couplings lying within this range. Further restrictions can a priori be set by considering their effects on the total fermionic $Z'$ width, eq.~(\ref{gamzp}). This will be discussed in the next section.\par There is one more important piece of information to be extracted from $Z-Z'$ mixing effects at $Z$ peak. From the very precise measurement of $\Gamma_{had}$, leading to: \begin{equation} {\delta \Gamma_{had}\over \Gamma_{had}} =+0.003\pm 0.0017 \label{gamlep} \end{equation} and from eq.~(A.7), one obtains \begin{equation} \eta_M[4v_c\xi_{Vc} + 12\xi_{Ac} +12v_b\xi_{Vb} +18\xi_{Ab}] = (10.6\pm15.4)({M_{Z'}\over 1 TeV}) \label{correl} \end{equation} \noindent where $v_f=1-4|Q_f|s^2_1$. In practice, up to a small uncertainty, this relation reduces the 4-parameter quark case to a 3-parameter one. This result, valid for the most general type of $Z'$, will introduce a quite useful simplification in our forthcoming calculations.\par From eq.~(\ref{gamlep}) we can derive a strong correlation between $\delta \Gamma_b$ and $\delta \Gamma_c$, which is peculiar to our $Z'$ hypothesis. Our universality assumptions $\delta^{Z'} \Gamma_u = \delta^{Z'} \Gamma_c$ and $\delta^{Z'} \Gamma_d = \delta^{Z'} \Gamma_s = \delta^{Z'} \Gamma_b$ allow us to rewrite eq.~(\ref{gamlep}) as: \begin{equation} {\delta \Gamma_{had}\over \Gamma_{had}} = 2 ({{\delta \Gamma_c}\over \Gamma_c}) ({\Gamma_c\over \Gamma_{had}}) + 3 ({{\delta \Gamma_b}\over \Gamma_b}) ({\Gamma_b\over \Gamma_{had}}) \end{equation} leading to the conclusion: \begin{equation} {\delta \Gamma_b\over \Gamma_b} = - ({{2}\over{3}}) ({{R_c}\over {R_b}}) ({\delta \Gamma_c\over \Gamma_c}) + ({{1}\over{3 R_b}}) ({\delta \Gamma_{had}\over \Gamma_{had}}) \end{equation} Numerically the second term of the right-hand side is negligible in first approximation, which finally gives: \begin{equation} {\delta \Gamma_b\over \Gamma_b} \simeq -0.5 {\delta \Gamma_c\over \Gamma_c} \end{equation} (numerically, with $R_b\simeq0.216$ and $R_c\simeq0.172$, one has $-(2/3)(R_c/R_b)\simeq -0.53$). Thus, in a natural way, the relative shifts in $\Gamma_b$ and in $\Gamma_c$ are predicted to be of opposite sign, with a ratio consistent with the experimental data and errors, which is a peculiar feature of the model, valid for all the values of its quark couplings that obey the universality request. Finally, note that the values of these suggested $Z'$ couplings grow linearly with the mass $M_{Z'}$. This is a natural consequence of assuming a given $Z-Z'$ mixing effect on the $Z$ peak observables. When $M_{Z'}$ grows, $\theta_M$ decreases. Consequently, for a given $Z-Z'$ mixing effect, the required $Z'$ couplings increase. \par Our model independent analysis of the LEP1/SLC constraints on the $Z'$ parameters is thus finished. In the next section, we shall investigate whether the large "suggested" $Z'q\bar q$ couplings are ruled out by the data already available from the hadronic colliders.
\newpage \section{Analysis of CDF dijet events in terms of a $Z'$ resonance.} The CDF collaboration has reported the observation of an excess of events with two-jet mass above $500$ GeV, compared to the QCD prediction. The jets are required to satisfy $|\eta| < 2$ ($\eta$ being the pseudorapidity) and the events are required to have $|\cos{\theta}^{\star}| < \frac{2}{3}$, ${\theta}^{\star}$ being the parton scattering angle in the partonic center of mass frame. This kinematical restriction favors the appearance of NP since the QCD cross section is peaked around $|\cos{\theta}^{\star}|\simeq 1$. Two-jet production in hadronic collisions has been computed at next to leading order in QCD \cite{EKS}. The aim of this section is that of investigating whether the observed dijet excess may, or may not, be explained in terms of a hadrophilic $Z'$, which a priori represents, in our opinion, a reasonably natural possibility. In order to pursue this program we have to calculate the effect of adding the weak contribution to the dominant QCD component. In the SM this comes from W, Z and photon exchanges. In our analysis we will add the extra contribution due to the $Z'$, with couplings taken within the range suggested by the LEP/SLC analysis. The practical calculation is rather lengthy and will be summarized in Appendix B. \par The weak contribution being evaluated at leading order, we shall perform the calculation of the strong part at the same level. It has been shown in \cite{EKS} that the difference between the order $\alpha_s^3$ calculation and the Born calculation is small provided that we fix the arbitrary factorization ($M$) and renormalization ($\mu$) scales to: \begin{equation} M= \mu = \frac{0.5 M_{JJ}}{2 \cosh(0.7\eta_{\star})} \label{scale} \end{equation} where $M_{JJ}$ is the dijet mass and $\eta_{\star}=\frac{|\eta_1 - \eta_2|}{2}$, $\eta_i$ being the pseudorapidity of jet $i$. In the following we will use the prescription given in eq.~(\ref{scale}). The deviation from the QCD prediction appears as a resonance bump in the $700-1000$ GeV $M_{JJ}$ mass range, suggesting therefore an indicative $Z'$ mass range around $700-1000$ GeV. Since the bump is wide, the hadrophilic $Z'$ cannot be narrow. \par The results of our investigation are shown in figures 1 and 2. As one can see, the observed dijet excess can be satisfactorily explained for $M_{Z'}$ around $800-900$ GeV and for reasonable $Z'q\bar q$ values, i.e. $|\xi_{Af}|$ and $|\xi_{Vf}| \simeq 3$. We have checked that these values satisfy the correlation constraint due to $\Gamma_{had}$, eq.~(\ref{correl}), and lead to an acceptable enhancement of the $Z'$ width, eq.~(\ref{gamzp}). Note that $|\xi_{Af}|$ and $|\xi_{Vf}|$ cannot be simultaneously too small (i.e. all $\simeq 1-2$), otherwise the width would be too narrow. To fix a scale in our analysis, we allow the $Z'$ width to lie in the range $\Gamma_{Z'} \simeq 150-200$ GeV. Larger values of the $Z'q\bar q$ couplings would lead to an unreasonably wide resonance and the observed peak would be much less pronounced.\par The excess of dijet events could also be explained by an hadrophilic $Z'$ of mass $M_{Z'}=700$ GeV or even $1$ TeV provided that its quark couplings are all suitably larger, i.e. for $|\xi_{Af}|$ and $|\xi_{Vf}|$ values between $3$ and $5$. For what concerns possible effects at LEP2 these situations would lead to more dramatic consequences.
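As an illustrative numerical cross-check of this width range (an exercise on our part, based on eq.~(\ref{gamzp}) with all six quarks and the leptons included, the representative choice $\xi_{Vq}=\xi_{Aq}=3$ for all quarks, all leptonic $\xi$'s set to $1$, and all fermion masses neglected except the top mass), one can evaluate the total fermionic width as follows:
\begin{verbatim}
import math

alpha, s2 = 1.0 / 128.0, 0.2121     # alpha and s_1^2 (illustrative inputs)
c2 = 1.0 - s2
MZp, mtop = 900.0, 175.0            # GeV (illustrative choices)

def gV(I3, Q):
    return I3 - 2.0 * s2 * Q

# one generation: (I3, Q, colour N_f, mass, xi_V, xi_A), replicated 3 times
fermions = []
for _ in range(3):
    fermions += [(+0.5,  0.0,   1, 0.0, 1.0, 1.0),   # neutrino
                 (-0.5, -1.0,   1, 0.0, 1.0, 1.0),   # charged lepton
                 (+0.5, +2/3.0, 3, 0.0, 3.0, 3.0),   # up-type quark
                 (-0.5, -1/3.0, 3, 0.0, 3.0, 3.0)]   # down-type quark
fermions[10] = (+0.5, +2/3.0, 3, mtop, 3.0, 3.0)     # give the top its mass

width = 0.0
for I3, Q, Nf, m, xiV, xiA in fermions:
    r = (m / MZp) ** 2
    beta = math.sqrt(max(0.0, 1.0 - 4.0 * r))        # phase-space factor
    width += Nf * beta * (xiV ** 2 * gV(I3, Q) ** 2 * (1.0 + 2.0 * r)
                          + xiA ** 2 * 0.25 * (1.0 - 4.0 * r))
width *= alpha * MZp / (12.0 * s2 * c2)
print(round(width))                                  # -> 195 (GeV)
\end{verbatim}
For these illustrative inputs one finds $\Gamma^{ferm}_{Z'}\simeq 195$ GeV, indeed within the $150-200$ GeV range adopted above.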
For this reason, we shall rather concentrate our analysis on the configuration of figures 1 and 2, which corresponds from this point of view to a more conservative attitude.\par A few technical comments about our calculation are now appropriate. We have used the KMRS set B of parton distributions \cite{KMRS}. The uncertainty due to our imperfect knowledge of the structure functions is small since we calculate a ratio. The dominant weak contribution is due to the $Z'$ pole. We are therefore not sensitive to the sign of the $Z'q\bar q$ couplings, and the SM weak vector boson contributions are quite negligible in the high dijet mass range we are interested in.\par This concludes our confrontation of the hadrophilic $Z'$ hypothesis with existing data. We shall now investigate the future prospects from LEP2. \newpage \section{$Z'$ effects in hadronic production at LEP2} In this section, we shall examine possible visible consequences of our assumption that a hadrophilic $Z'$ exists, with "suggested" couplings and mass derived by an overall analysis of LEP/SLC and CDF data. As rather natural experimental quantities to be considered for this purpose, we shall concentrate our attention on the three hadronic observables that will be measured in a very near future at LEP2, i.e. the $b\bar b$ cross section $\sigma_{b}(q^2)$, the $b\bar b$ forward-backward asymmetry $A_{FB,b}(q^2)$ and the total hadronic production cross section $\sigma_{h}(q^2)$, where $\sqrt {q^2}$ is the total center of mass energy that will vary in the range (chosen for theoretical and experimental reasons \cite{LEP2}) $140\ {\rm GeV} \leq \sqrt {q^2} \leq 190\ {\rm GeV}$. The calculated shifts on these three quantities due to a $Z'$ will depend on products of $Z'$ quark couplings with $Z'$ lepton couplings. For the latter ones, we have seen from our previous investigation that no special "suggestion" exists that motivates some anomalously large values. In fact, a more detailed investigation of the constraints on the $Z'$ lepton couplings derived from LEP/SLC would lead to the conclusion that $Z'$ signals in the leptonic channel at LEP2 are not forbidden, but are also not specially encouraged. In particular, in the \underline{extreme} configuration of a saturation of the bound on $|\theta_M|$, the lepton couplings would lie in a domain which corresponds roughly to the domain of non observability for the various leptonic observables at LEP2, which has been derived very recently in another detailed paper \cite{LEP2ZP}. Following a conservative attitude, we shall therefore assume that the leptonic $Z'$ couplings lie in the previous domain of non observability at LEP2. With this input, we shall look for possible effects in the LEP2 hadronic channels, motivated by the suggested anomalously large $Z'$ quark couplings. Of course, should an effect be produced in the leptonic channel, the corresponding situation in the hadronic one would become more favourable than in the configuration that we shall consider from now on.\par The treatment of the $Z'$ shifts on various observables can be performed in several ways. We shall follow in this paper a theoretical approach that has been proposed very recently \cite{univ}, in which this effect can be formally considered as a one loop $Z'$ correction of "box" type to the SM quantities containing conventional $\gamma$ and Z exchanges.
These corrections enter in a non-universal way in certain gauge-invariant combinations of self-energies, vertices and boxes that have been called $\tilde{\Delta}\alpha(q^2)$, $R(q^2)$, $V_{\gamma Z}(q^2)$ and $V_{Z \gamma}(q^2)$, whose contributions to the various observables have been completely derived and thoroughly discussed in section 2 of ref.~\cite{univ}. We shall not repeat here the derivation of these contributions, and defer the interested reader to the aforementioned reference. For our purposes, it will be sufficient to recall that the relevant one-loop corrected expressions of an observable $O_{lf}$ of the process $e^+e^-\to f \bar f$ (where $f$ is a certain quark) will be of the type: \begin{equation} O_{lf}(q^2) = O_{lf}^{(Born)} \lbrack 1 + a_{lf} \tilde{\Delta}^{(lf)}_{\alpha}(q^2) + b_{lf} R^{(lf)}(q^2) + c_{lf} V^{(lf)}_{\gamma Z}(q^2) + d_{lf} V^{(lf)}_{Z \gamma}(q^2) \rbrack \end{equation} \noindent where $(a,b,c,d)_{lf}$ are certain numerical constants given in ref.~\cite{univ} for the various relevant cases and $ O_{lf}^{(Born)}$ is a certain suitably defined "effective" Born approximation. For the case $f=b$, the $Z'$ contributions to the four one loop corrections turn out to be: \begin{equation} \tilde{\Delta}^{(lb)}_{\alpha}(q^2) = -z_{2l}z_{2b} \ \ \ \ \ \ \ \ \ R^{(lb)}(q^2) = z_{1l}z_{1b}\chi^2 \label{sub1} \end{equation} \begin{equation} V^{(lb)}_{\gamma Z}(q^2) = z_{1l}z_{2b}\chi^2 \ \ \ \ \ \ \ \ \ V^{(lb)}_{Z\gamma}(q^2) = z_{2l}z_{1b}\chi^2 \label{sub2} \end{equation} \noindent where we use the reduced couplings: \begin{equation} z_{1b} = \xi_{Ab} \sqrt{{q^2\over M^2_{Z'}-q^2}} \end{equation} \begin{equation} z_{2b} = ({3v_b\over 4s_1c_1})(\xi_{Vb}-\xi_{Ab})\sqrt{{q^2\over M^2_{Z'}-q^2}} \end{equation} \noindent and ${\chi}^2= \frac{(q^2- M^2_{Z})}{q^2}$.\par From these expressions we have computed the relative shifts ${\delta\sigma_{b}(q^2)\over\sigma_{b}}$ and ${\delta A_{FB,b}(q^2)\over A_{FB,b}}$ due to a $Z'$, assuming, as previously discussed, that the lepton couplings lie in the domain of non observability at LEP2. As has been shown in \cite{LEP2ZP}, this corresponds to the following limitations on the leptonic ratios: \begin{equation} |\xi_{Vl}| \roughly< ({0.22\over v_1})\sqrt{{M^2_{Z'}-q^2\over q^2}} \label{vl} \end{equation} \begin{equation} |\xi_{Al}| \roughly< (0.18)\sqrt{{M^2_{Z'}-q^2\over q^2}} \label{al} \end{equation} The calculation of the shifts has been performed without taking into account the potentially dangerous effects of QED radiation. From our previous experience \cite{LEP2ZP} we know that, provided that suitable experimental cuts are imposed, the realistic results will not deviate appreciably from those calculated without QED convolution. This is particularly true if one is interested in large effects, as in our case. We defer the reader to ref.~\cite{LEP2ZP} for a complete discussion of this point. \par From now on, we shall concentrate on the configuration $q^2=(175\ {\rm GeV})^2$ since, for the purposes of $Z'$ searches, it has been shown in \cite{LEP2ZP} that within the three planned realistic LEP2 phases this is the most convenient one.
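To give an idea of the typical sizes involved (an illustrative evaluation on our part), at $\sqrt{q^2}=175$ GeV and for $M_{Z'}=900$ GeV one has $\sqrt{q^2/(M^2_{Z'}-q^2)} \simeq 0.20$ and $\chi^2 \simeq 0.73$, so that the representative choice $\xi_{Ab}=3$ suggested by the CDF analysis corresponds to $z_{1b} \simeq 0.6$.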
In this case, for sufficiently large $M_{Z'}$ (which we are assuming), we can rewrite eq.~(\ref{vl}) and eq.~(\ref{al}) as: \begin{equation} |\xi_{Vl}|\roughly< 8.02({M_{Z'}\over 1 TeV}) \ \ \ \ |\xi_{Al}|\roughly< 1.01({M_{Z'}\over 1 TeV}) \label{unseen} \end{equation} \noindent \par In figures 3 and 4 we present our results for the $Z' b \bar b$ couplings rescaled by the factor ${M_{Z'}\over 1 TeV}$. The observability regions of figure 3 correspond to a relative $Z'$ effect in ${\delta\sigma_{b}\over\sigma_{b}}$ of at least five percent (dark area) and ten percent (grey area). In figure 4, numerical effects of five and ten percent on the relative forward-backward asymmetry ${\delta A_{FB,b}\over A_{FB,b}}$ are depicted. Following the analysis presented in table 2 of ref.\cite{LEP2ZP}, these $Z'$ effects would be visible in the chosen LEP2 configuration. Note that we have restricted the variation domain of the variables in the figures to values that we called "reasonable" in Section 2, i.e. to a domain that in fact contains the strip $|\xi_{Ab}| = |\xi_{Vb}| \simeq 3$ suggested by our previous CDF analysis. Note that we did not fix the $M_{Z'}$ value. To be consistent with our preferred CDF choice $M_{Z'} \simeq 800-900$ GeV, we should in fact rescale the values of the couplings shown in figures 3 and 4 by a (scarcely relevant) $10-20\%$ factor. \par As one can see from an inspection of the two figures, values of the couplings lying in the neighbourhood of the "suggested" representative set of couplings $|\xi_{Ab}| = |\xi_{Vb}| \simeq 3$ would produce in both cases a large effect. In other words, a hadrophilic $Z'$ with such couplings and mass should not escape indirect experimental detection in the final $b \bar b$ channel at LEP2.\par We now discuss the possible $Z'$ effects on the total hadronic cross section $\sigma_{had}$ (hereafter denoted $\sigma_5$) at LEP2. For up quarks we use the reduced couplings: \begin{equation} z_{1c} =\xi_{Ac} \sqrt{{q^2\over M^2_{Z'}-q^2}} \end{equation} \begin{equation} z_{2c} =({3v_c\over 8s_1c_1})(\xi_{Vc}-\xi_{Ac}) \sqrt{{q^2\over M^2_{Z'}-q^2}} \end{equation} \noindent and the quantities corresponding to eq.~(\ref{sub1}) and eq.~(\ref{sub2}) with the replacement of $b$ by $c$. The expression of $\sigma_5(q^2)$ is taken from ref.\cite{univ} and we considered the relative shift ${\delta\sigma_5\over \sigma_5}$ expressed in terms of the eight quantities corresponding to eq.~(\ref{sub1}) and eq.~(\ref{sub2}) for up quarks ($c$) and down quarks ($b$). A priori these depend on the four $Z'$ couplings $\xi_{Vb}$, $\xi_{Ab}$, $\xi_{Vc}$, $\xi_{Ac}$. We imposed the strong correlation eq.~(\ref{correl}) implied by the absence of effect in $\Gamma_{had}$, which in practice reduces the freedom to a small domain around a case with three independent quark parameters. As above we kept the leptonic $Z'$ couplings inside the non-observability domain at LEP2, eq.~(\ref{unseen}).\par With these inputs we looked for visible effects in $\sigma_5(q^2)$. The results are shown in figure 5, demanding ${\delta\sigma_5(q^2)\over \sigma_5}$ larger than 5\%. Following the experimental analysis of ref.\cite{LEP2ZP}, this relative shift would represent a spectacular effect. One sees from this figure that indeed values of couplings $|\xi_{Ab}| = |\xi_{Vb}| = |\xi_{Ac}| = |\xi_{Vc}|\simeq 3$, lying around the suggested CDF ones, would be able to generate a clean and impressive effect both in the $b \bar b$ and in the total hadronic observables.
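\par As an elementary arithmetic illustration of eq.~(\ref{unseen}) (this is only a numerical check of the bounds quoted above, not new input): at $\sqrt{q^2}=175\ GeV$ and for $M_{Z'}=1\ TeV$ one has $\sqrt{(M^2_{Z'}-q^2)/q^2}=\sqrt{(1000)^2-(175)^2}/175\simeq 5.63$, so that eq.~(\ref{al}) gives $|\xi_{Al}|\roughly< 0.18\times 5.63\simeq 1.01$, and the vector bound follows from eq.~(\ref{vl}) in the same way. The "suggested" quark couplings $|\xi_{Vb}|=|\xi_{Ab}|\simeq 3$ are therefore substantially larger than the lepton couplings allowed by the non-observability domain, which is the origin of the large hadronic effects displayed in the figures.\par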
Such a combination of signals would represent, in our opinion, a spectacular confirmation of the $Z'$ origin of the apparent LEP/SLC and CDF anomalies. \section{Conclusions} In order to explain possible $b \bar b$ and $c \bar c$ anomalies observed in LEP1 and SLC experiments at the $Z$ peak, we used a model-independent description of $Z-Z'$ mixing effects starting with arbitrary mixing angle and $Z'f\bar f$ couplings. With this description, using the full set of LEP1/SLC data at the $Z$ peak, we have derived "suggested" $Z'$ couplings to leptons and quarks. The presence of anomalous effects in hadronic channels at the $Z$ peak, as opposed to very stringent constraints in leptonic channels, would be explained by a $Z'$ more strongly coupled to quarks than to leptons, a \underline{hadrophilic $Z'$}. We note, in support of our assumption, that the absence of effect in $\Gamma_{had}$ leads naturally to the prediction of effects with opposite signs in $\Gamma_b$ and in $\Gamma_c$, in agreement with experimental data.\par We considered the consequences of this hypothesis for other processes. We have first investigated the observed excess of high-mass dijet events at CDF. This excess can be naturally explained by the hadrophilic $Z'$ provided that its couplings to quarks are reasonable, its mass lies around $800-900$ GeV and its width is relatively large ($\Gamma_{Z'} \simeq 200$ GeV).\par We have also examined the observability of hadrophilic $Z'$ effects at LEP2. We have checked that for the leptonic channels, the "suggested" strongly constrained leptonic couplings do not particularly motivate $Z'$ effects at LEP2.\par On the contrary, the suggested $Z'b\bar b$ couplings would produce large effects in $e^+e^-\to b\bar b$ (cross section and forward-backward asymmetry) at LEP2. Within the assumption that the $Z'$ leptonic couplings are such that no effect is seen in leptonic observables, we have established model-independent observability domains in the space of vector and axial $Z'b\bar b$ couplings. These domains correspond to visible effects if the $Z'b\bar b$ couplings have a reasonably enhanced magnitude. There is a large overlap with the domains suggested by LEP1/SLC and CDF. So the existence of a hadrophilic $Z'$ producing the LEP1/SLC and CDF anomalies could be confirmed by such measurements at LEP2.\par We have then analysed what information the total hadronic cross section could bring on the $Z'c\bar c$ couplings. The interesting feature is the strong correlation imposed by the absence of effect in $\Gamma_{had}$ at the $Z$ peak. With this constraint included in the analysis of $\sigma_{had}$ at LEP2, we have determined the observability domains in the space of vector and axial $Z'c\bar c$ couplings. We have established them in correlation with various ranges of "reasonable" $Z'b\bar b$ couplings. It appears that visible effects would also be present in $\sigma_{had}$ for similar "reasonable" values of the $Z'c\bar c$ couplings. Should this happen, a deeper theoretical analysis of the origin of such a hadrophilic $Z'$ would become mandatory. \par \vspace{0.5cm} {\bf \underline{Acknowledgements}}\par This work has been partially supported by the EC contract CHRX-CT94-0579. We thank C. Bourrely, J. Ph. Guillet and J. Soffer for discussions. The analysis of the CDF anomaly in terms of a $Z'$ was also independently performed in another publication (G. Altarelli, N. di Bartolomeo, F. Feruglio, R. Gatto, M. Mangano, CERN-TH/96-20, hep-ph/9601324). One of us (C.V.) is indebted to G.
Altarelli for having suggested, at an early stage of this work, considering the role of the CDF dijet excess as a test of the hadrophilic $Z'$ assumption. \newpage \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} {\large {\bf Appendix A : $Z-Z'$ mixing effects on $Z$ peak observables}}\\ \vspace{0.3cm} From the analysis of ref.\cite{LRV} we can derive the shifts to the SM predictions for the various $Z$-peak observables: the partial $Z$ decay widths $\Gamma_f \equiv \Gamma(Z\to f \bar f)$ and the asymmetry factors $A_f$. Systematically neglecting terms that are numerically negligible, we get: \begin{equation} {\delta \Gamma_l\over \Gamma_l} = \delta^{Z'}_{\rho} + 2\theta_M \xi_{Al} \end{equation} \begin{equation} {\delta A_l} = 3\delta^{Z'}_{\rho} + 2\theta_M v_1 \xi_{Vl} \end{equation} \begin{equation} {\delta \Gamma_u\over \Gamma_u} = {8\over5}\delta^{Z'}_{\rho} + {3\over5}\theta_M [v_u\xi_{Vu} + 3\xi_{Au}] \end{equation} \begin{equation} {\delta \Gamma_d\over \Gamma_d} = {19\over13}\delta^{Z'}_{\rho} + {6\over13}\theta_M [2v_d\xi_{Vd} + 3\xi_{Ad}] \end{equation} \begin{equation} {\delta A_u\over A_u} = {12\over5}\delta^{Z'}_{\rho} + {4\over5}\theta_M [3v_u\xi_{Vu} - \xi_{Au}] \end{equation} \begin{equation} {\delta A_d\over A_d} = {15\over52}\delta^{Z'}_{\rho} + {5\over26}\theta_M [3v_d\xi_{Vd} - 2\xi_{Ad}] \end{equation} \noindent Assuming universality with respect to the three families of quarks we also get: \begin{equation} {\delta \Gamma_h\over \Gamma_h} = {89\over59}\delta^{Z'}_{\rho} + {3\over59}\theta_M [4v_u\xi_{Vu} +12\xi_{Au} +12v_d\xi_{Vd} +18\xi_{Ad}] \end{equation} We can solve this set of equations and express the $Z'$ couplings in terms of $\theta_M$, $\delta^{Z'}_{\rho}$ and the experimental values for the shifts to the observables. The values that we shall give below will always correspond to the upper bound, eq.(3), for $|\theta_M|$, with the two possible signs $\eta_M=\pm 1$, and to experimental data taken at two standard deviations.\par \underline{Lepton couplings} are obtained as: \begin{equation} \xi_{Vl} = {1\over2v_1\theta_M}[\delta A_l-3\delta^{Z'}_{\rho}] \end{equation} \begin{equation} \xi_{Al} = {1\over2\theta_M}[{\delta \Gamma_l\over \Gamma_l} -\delta^{Z'}_{\rho}] \end{equation} The experimental measurement $\Gamma_l =83.93\pm0.14\ MeV$ agrees with the SM prediction involving the $\epsilon_i$ parameters, which depend on $m_t$ and $M_H$ \cite{epsilon}. Taking $m_t=180 \pm 12\ GeV$ and $M_H=65-1000\ GeV$ we get at most a total relative shift ${\delta \Gamma_l\over \Gamma_l}=\pm 3\times 10^{-3}$. Combining with $\delta^{Z'}_{\rho}$ given in eq.(2) and the upper bound for $|\theta_M|$ in eq.(3) we obtain: \begin{equation} \eta_M \xi_{Al} \simeq (-0.2\pm0.5)({M_{Z'}\over 1 TeV}) \end{equation} Concerning $A_l$, there is a disagreement between the LEP average $A_l(LEP)=0.147\pm0.004$ and the SLC result $A_{LR}(SLD)=0.1551\pm0.004$, whereas the SM prediction is $A_e(SM)=0.144\pm0.003$. We then consider both cases.
Combining these results with $\delta^{Z'}_{\rho}$ in eq.(A.8), we obtain: \begin{equation} \eta_M \xi_{Vl} \simeq (-2.25 \pm 6.25)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \end{equation} \begin{equation} \eta_M \xi_{Vl} \simeq (+1.75 \pm 6.25)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \underline{b-quark couplings} are obtained from: \begin{equation} \xi_{Vb} = {1\over 30v_b\theta_M} [{325\over 13}{\delta A_b\over A_b} +{10\delta \Gamma_b\over \Gamma_b}-5\delta^{Z'}_{\rho}] \end{equation} \begin{equation} \xi_{Ab} = {1\over 10v_b\theta_M}[{-8\delta A_b\over A_b} +{5\delta \Gamma_b\over \Gamma_b}-5\delta^{Z'}_{\rho}] \end{equation} We used for the $b\bar b$ anomaly the shift ${\delta \Gamma_b\over \Gamma_b} =+0.03\pm 0.008$, but for $A_b$ we have different results from LEP and from SLC to be compared with the SM result $A_b(SM)=0.934$. From $A^b_{FB}$ at LEP, $A_b=0.916\pm0.034$, we obtain ${\delta A_b\over A_b} =-0.02\pm 0.04$ and: \begin{equation} \eta_M \xi_{Vb} \simeq (-3.45 \pm 20.72)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \end{equation} \begin{equation} \eta_M \xi_{Ab} \simeq (+4.58 \pm 9.84)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \end{equation} Using the SLD result, $A_b=0.841\pm0.053$, we obtain ${\delta A_b\over A_b} =-0.1\pm 0.05$ and \begin{equation} \eta_M \xi_{Vb} \simeq (-24.24 \pm 25.98)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \begin{equation} \eta_M \xi_{Ab} \simeq (+14.54 \pm 12.47)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} For \underline{c-quark couplings} the solutions are: \begin{equation} \xi_{Vc} = {1\over 10v_c\theta_M}[{15\over4}{\delta A_c\over A_c} +{5\over3}{\delta \Gamma_c\over \Gamma_c} -{35\over3}\delta^{Z'}_{\rho}] \end{equation} \begin{equation} \xi_{Ac} = {1\over 10\theta_M}[-{5\over4}{\delta A_c\over A_c} +{5\delta \Gamma_c\over \Gamma_c}-5\delta^{Z'}_{\rho}] \end{equation} Experimental data are less precise than for b-quarks. We have $ {\delta \Gamma_c\over \Gamma_c} =-0.1\pm 0.05 $ but for the asymmetry there is again a discrepancy between LEP and SLC. At LEP, from $A^c_{FB}$, $A_c=0.67\pm0.06$, whereas at SLC $A_c=0.606\pm0.09$, to be compared with the SM prediction $A_c=0.67\pm0.002$. So with ${\delta A_c\over A_c} =0\pm 0.1$ at LEP one obtains: \begin{equation} \eta_M \xi_{Vc} \simeq (-6.94 \pm 26.60)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \end{equation} \begin{equation} \eta_M \xi_{Ac} \simeq (-7.88 \pm 8.46)({M_{Z'}\over 1 TeV}) \ \ \ (LEP) \end{equation} whereas with ${\delta A_c\over A_c} =-0.1\pm 0.15 $ at SLC: \begin{equation} \eta_M \xi_{Vc} \simeq (-20.38 \pm 40.62)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \begin{equation} \eta_M \xi_{Ac} \simeq (-6.01 \pm 9.70)({M_{Z'}\over 1 TeV}) \ \ \ (SLC) \end{equation} \noindent Note that all the above results correspond to the upper bound, eq.(3), for $|\theta_M|$ and to experimental data taken with two standard deviations.
\newpage \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} {\large {\bf Appendix B : Dijet invariant mass distribution in hadronic collisions.}}\\ \vspace{0.3cm} The observable that we consider is the dijet invariant mass ($M_{JJ}$) distribution: \begin{equation} \frac{d\sigma}{dM_{JJ}} = \frac{M^2_{JJ}}{2S}\int_{-\eta}^{\eta} d {\eta}_1 \int_{{\eta}_{min}}^{{\eta}_{max}} d{\eta}_2 \sum_{ij} \frac{1}{\cosh^2({\eta}^{\star})} f_i(x_1,M^2) f_j(x_2,M^2) \frac{d\sigma_{ij}}{d{\hat t}} \end{equation} \noindent where the $f_i(x,M^2)$ are the parton distributions evolved to the scale $M^2$; $\eta$ has been defined in Sect.3, ${\eta}_1$ and ${\eta}_2$ are the pseudorapidities of jets 1 and 2, $\eta_{min}=max[-\eta, ln{M_{JJ}\over\sqrt{s}}-\eta_1]$, $\eta_{max}=min[+\eta, -ln{M_{JJ}\over\sqrt{s}}-\eta_1]$, whereas $\frac{d\sigma_{ij}}{d{\hat t}}$ is the partonic cross section for the subprocess $ij \rightarrow 2 jets$. The momentum fractions carried by the initial partons read: \begin{equation} x_1 = \frac{M_{JJ}}{\sqrt S} \exp({\eta}_B) \end{equation} and \begin{equation} x_2 = \frac{M_{JJ}}{\sqrt S} \exp({-\eta}_B) \end{equation} where ${\eta}_B = \frac{{\eta}_1 + {\eta}_2}{2}$. The expressions for the partonic cross sections can be found in \cite{BGS}. The pure QCD terms for $gg \rightarrow gg$, $qg \rightarrow qg$, $gg \rightarrow q \bar q$, $q \bar q \rightarrow gg$ as well as the QCD and $\gamma$, $Z$ and $W$ exchange contributions to the subprocess $qq \rightarrow qq$ are given in eqs.(A1)-(A6) of \cite{BGS}. The subprocess $q \bar q \rightarrow q \bar q $ is obtained by performing the crossing $s \leftrightarrow u$. The QCD and $W$, $Z$, $\gamma$ exchange contributions to $qq' \rightarrow qq'$ are given by eqs.(A7)-(A14) of \cite{BGS}. By crossing $s \leftrightarrow u$ one obtains the $q \bar q' \rightarrow q \bar q'$ subprocess and by crossing $s \leftrightarrow t$ and then $t \leftrightarrow u$ the $q \bar q \rightarrow q' \bar q'$ subprocess. One also has to add the pure $W$ exchange processes involving four distinct quarks: $qq' \rightarrow q''q'''$, $q \bar q''' \rightarrow q'' \bar q'$, as given by eqs.(A15) and (A16) of \cite{BGS}. \par We now have to add the $Z'$ contribution to these various subprocesses. The $Z'Z'$, $Z' \gamma$, $Z'W$ and $Z'g$ squared matrix elements can be directly obtained from the $ZZ$, $Z \gamma$, $ZW$ and $Zg$ ones given in \cite{BGS}, by performing the replacement of $g_{Vq}$ by $\xi_{Vq} g_{Vq}$ and of $g_{Aq}$ by $\xi_{Aq} g_{Aq}$. More precisely one has to replace the $C_L$ and $C_R$ $Z$ couplings to left-handed and right-handed quarks by the following ones: \begin{equation} C'_{q,L} = \frac{1}{2} (g'_{Vq} + g'_{Aq}) = \frac{1}{2} (\xi_{Vq} g_{Vq} + \xi_{Aq} g_{Aq}) \end{equation} \begin{equation} C'_{q,R} = \frac{1}{2} (g'_{Vq} - g'_{Aq}) = \frac{1}{2} (\xi_{Vq} g_{Vq} - \xi_{Aq} g_{Aq}) \end{equation} The contribution due to the interference between the $Z$ and the $Z'$ is the only one that cannot be directly read off from their expressions. We have computed it explicitly.
For the subprocess $qq \rightarrow qq$ we obtain (using the same notations as in \cite{BGS}): \begin{eqnarray} && T_{ZZ'} = 2 \alpha^2_Z \lbrack s^2 (\frac{1}{t_Z t_{Z'}} + \frac{1}{u_Z u_{Z'}} +\frac{1}{3} (\frac{1}{t_Z u_{Z'}} + \frac{1}{u_Z t_{Z'}})) (C_{q,L}^2 C'^{\ 2}_{q,L} + C_{q,R}^2 C'^{\ 2}_{q,R}) \nonumber\\ && + 2 C_{q,L} C'_{q,L} C_{q,R} C'_{q,R} (\frac{u^2}{t_Z t_{Z'}}+ \frac{t^2}{u_Z u_{Z'}}) \rbrack \end{eqnarray} For the subprocess $qq' \rightarrow qq'$ we obtain: \begin{eqnarray} && T_{ZZ'} = 2 \alpha^2_Z \lbrack \frac{s^2}{t_Z t_{Z'}} (C_{q,L} C'_{q,L} C_{q',L} C'_{q',L}+C_{q,R} C'_{q,R} C_{q',R} C'_{q',R}) \nonumber\\ && + \frac{u^2}{t_Z t_{Z'}} (C_{q,L} C'_{q,L} C_{q',R} C'_{q',R} + C_{q,R} C'_{q,R} C_{q',L} C'_{q',L}) \rbrack \end{eqnarray} For subprocesses involving antiquarks the same crossings as previously given have to be performed. The complete expression for $\frac{d\sigma_{ij}}{d{\hat t}}$ is then obtained by summing over the quark flavours (we have not considered top production, since its decay also involves a $W$, leading to a different topology) and adding to $\frac{d\sigma_{ij}}{d{\hat t}}(s,t,u)$ the crossed contribution $\frac{d\sigma_{ij}}{d{\hat t}}(s,u,t)$ due to the indistinguishability of the jets. \newpage
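As an elementary consistency check of the kinematics above, note that eqs.~(B.2) and (B.3) give $x_1x_2=M^2_{JJ}/S$, so that the partonic invariant mass squared $\hat s = x_1x_2S$ is indeed equal to $M^2_{JJ}$; the variable ${\eta}_B$ is then the rapidity of the longitudinal boost connecting the dijet rest frame to the laboratory frame.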
{ "attr-fineweb-edu": 1.52832, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcEs5qsNCPdwhexw_
\section{Introduction} Classical Brill-Noether theory concerns the varieties $W^r_d(C)$, where $C$ is a smooth curve of genus $g$, parameterizing line bundles $L$ with at least $r+1$ linearly independent global sections. One way to understand such loci is to mark two points $p,q$ on $C$, choose a large integer $N$, and view $H^0(C, L)$ as the intersection of two subspaces $P, Q \subseteq H = H^0(C, L(Np+Nq))$, where $P,Q$ consist of sections of $L(Np+Nq)$ vanishing to order at least $N$ at $p$ and $q$, respectively. Provided that $d+N \geq 2g-1$, the dimensions of $P,Q, H$ do not depend on $L$, and $W^r_d(C)$ may be described as the locus where $P,Q$ are sufficiently non-transverse. Globalizing this construction, we obtain a vector bundle $\mathcal{H}$ on $\Pic^{d+2N}(C)$ with two subbundles $\mathcal{P}, \mathcal{Q}$, and $W^r_d(C)$ is isomorphic (via a twist by $Np+Nq$) to the degeneracy locus where the fiber of $\mathcal{P} \cap \mathcal{Q}$ jumps dimension by $g-d+r$. The Brill-Noether and Gieseker-Petri theorems amount to the claim that, for a general curve $C$, $W^r_d(C)$ is nonempty if and only if the expected codimension $(r+1)(g-d+r)$ is at most $g$, has the expected codimension if so, and has singular locus equal to $W^{r+1}_d(C)$. I hope to convince the reader that it is fruitful to view these facts through the following lens: as the line bundle $L$ varies in $\Pic^d(C)$, the \emph{relative position} of the subspaces $P,Q$ deforms versally. This amounts to saying that, up to pullback along smooth morphisms (equivalently, \'etale-locally and up to products with affine space), $\Pic^d(C)$ with this choice of two subbundles is the same as a product of Grassmannians, where dimension and singularity of the relevant degeneracy locus can be found by elementary means. Melody Chan and the author conjectured in \cite{cpRR} that the same phenomenon occurs for flags recording vanishing order at the points $p,q$. \subsection{The versality theorem} Fix a smooth twice-marked curve $(C,p,q)$ over an algebraically closed field $k$, and an integer $d \geq 2g-1$ (for notation conventions about what we mean by ``smooth curve'' and other matters, consult Section \ref{ss:conventions}). Pushing down a Poincar\'e line bundle on $C \times \Pic^d(C)$ gives a rank $n = d+1-g$ vector bundle $\mathcal{H}_d$ on $\Pic^d(C)$, with fibers $(\mathcal{H}_d)_{[L]} \cong H^0(C, L)$. For any divisor $D$ on $C$, denote by $\tw_D: \Pic^d(C) \to \Pic^{d+\deg D}(C)$ the ``twist'' isomorphism given pointwise by $[L] \mapsto [L(D)]$, and define for all $0 \leq a,b \leq d-2g+1$, \begin{eqnarray*} \mathcal{P}^a_d &=& \tw_{ap} \mathcal{H}_{d-a}\\ \mathcal{Q}_d^b &=& \tw_{bq} \mathcal{H}_{d-b}. \end{eqnarray*} Regarding these as subbundles of $\mathcal{H}_d$, we obtain two flags $\mathcal{P}^{\sbu}_d, \mathcal{Q}^{\sbu}_d$ in $\mathcal{H}_d$, both of coranks $[0, d-2g+1] \cap \mathbb{Z}$. We call these the \emph{degree-$d$ Brill-Noether flags of $(C,p,q)$}. We discuss in Section \ref{sec:versality} the notion of a \emph{versal pair of flags}; for now we simply remark that roughly speaking we call a pair of flags \emph{versal} if, up to pullback along smooth morphisms, it is isomorphic to a pair of tautological flags on a product of two flag varieties, and therefore the relative positions of the two flags in any fiber deform ``as generally as possible'' around that fiber.
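Before stating the main theorem, we pause to make the preceding definitions concrete in the trivial case $g=0$, where everything can be computed by hand; this example is included only for orientation and plays no role in the sequel. \begin{eg} Let $C = \mathbb{P}^1$ with homogeneous coordinates $x,y$, and let $p = \{x=0\}$ and $q = \{y=0\}$. Then $\Pic^d(C)$ is a single point, and $\mathcal{H}_d \cong H^0(\mathbb{P}^1, \mathcal{O}(d))$ has basis $\{x^iy^{d-i}:\ 0 \leq i \leq d\}$. Since $x^iy^{d-i}$ vanishes to order $i$ at $p$ and to order $d-i$ at $q$, we have $\mathcal{P}^a_d = \operatorname{span}\{x^iy^{d-i}:\ i \geq a\}$ and $\mathcal{Q}^b_d = \operatorname{span}\{x^iy^{d-i}:\ i \leq d-b\}$, so $\dim \mathcal{P}^a_d \cap \mathcal{Q}^b_d = \max(0,\ d+1-a-b)$ for all $a,b$. In other words, the two flags are transverse, which, since the base is a single point, is exactly what versality requires here (see Section \ref{sec:versality}). \end{eg}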
We prove \cite[Conjecture 6.3]{cpRR}, namely \begin{thm} \label{thm:versality} Fix integers $d,g$, and an algebraically closed field $k$ of any characteristic. If $(C,p,q)$ is a general twice-marked curve of genus $g$ over $k$, then the pair of degree-$d$ Brill-Noether flags $\mathcal{P}^{\sbu}_d, \mathcal{Q}^{\sbu}_d$ of $(C,p,q)$ is versal. \end{thm} Our proof of Theorem \ref{thm:versality} is a degeneration argument, which can be summarized in the slogan: ``twice-marked curves with versal Brill-Noether flags can be chained together.'' We will show that if two twice-marked curves $(C_1, p_1, q_1)$ and $(C_2, p_2, q_2)$ both have versal Brill-Noether flags in \emph{all} degrees, and $q_1$ is glued to $p_2$ to form a twice-marked stable curve $(X, p_1, q_2)$, then a general member of any smoothing of $(X,p_1, q_2)$ has versal Brill-Noether flags in degree $d$ (so a \emph{very general} member has versal Brill-Noether flags in all degrees). More generally, the same principle holds for a chain of twice-marked curves. The base case $g=1$ is provided by twice-marked elliptic curves with marked points not differing by torsion. This strategy is similar in spirit to the proof in \cite{ossSimpleBN} of the Brill-Noether theorem, where the Brill-Noether condition is strengthened to a property of twice-marked curves that is preserved by chaining two such curves together and smoothing. \begin{rem} \label{rem:notThreePoints} One might hope to generalize Theorem \ref{thm:versality} to three or more marked points, but this is impossible, even for $g=0$. A tuple of three or more fixed flags in a vector space of dimension at least $3$ is never versal \cite[Example 3.4]{cpRR}. \end{rem} \begin{rem} The use of chains of elliptic curves in Brill-Noether theory has a long history; such chains possess a curious ability to behave like ``general'' curves, provided that the attachment points on each elliptic curve do not differ by torsion. This remark gives a few examples of that history, but is in no way comprehensive. In contrast to the flag curves used e.g. in \cite{ehgp}, elliptic chains are often useful without the need for a characteristic $0$ hypothesis. The earliest use of elliptic chains in Brill-Noether theory was by Welters \cite{welters}, and they have been used for many applications to Brill-Noether theory of vector bundles on curves, e.g. \cite{teixidorPetri92, teixidorPetri08, teixidorTwistedBN, cltPetriGoodBundles}, and to study other aspects of the geometry of special divisors on curves, e.g. \cite{ctDivisorial, clpt, cpEuler}. The tropical analog of an elliptic chain is a chain of loops; these have found many applications in tropical Brill-Noether theory, e.g. \cite{cdpr, jpGP, pflChains, clrw, manjunathPoincare}. An interesting perspective on why such chains are so applicable to Brill-Noether theory, and what generalizations of them may have similar properties, is in \cite[$\S 5$]{ossermanDimCounts}; see especially Questions 5.4 and 5.5. \end{rem} \subsection{The coupled Petri map} The basic tool used to prove the versality theorem is a generalization of the Petri map. First define, for any line bundle $L$ on a smooth twice-marked curve $(C,p,q)$ and any subset $S \subseteq \mathbb{Z} \times \mathbb{Z}$, \begin{equation} \label{eq:coupledTensors} T^L_{p,q}(S) = \sum_{(a,b) \in S} H^0(C, L(-ap-bq)) \otimes H^0(C, \omega_C \otimes L^\vee(ap+bq)). \end{equation} We write $T^L_{p,q}$ as shorthand for $T^L_{p,q}(\mathbb{Z} \times \mathbb{Z})$.
To define this sum, regard all terms as subspaces of $H^0(C^\ast, L) \otimes H^0(C^\ast, \omega_C \otimes L^\vee)$, where $C^\ast = C \backslash \{p,q\}$. Call the elements of $T^L_{p,q}$ \emph{coupled tensors}. \begin{defn} \label{defn:coupledPetri} The \emph{(fully) coupled Petri map} of $L$ on $(C,p,q)$ is the map $$\mu^L_{p,q}:\ T^L_{p,q} \to H^0(C, \omega_C)$$ given in each summand by the cup product. The restriction $\mu^L_{p,q} |_{T^L_{p,q}(S)}$ will be abbreviated $\mu^L_{p,q} |_S$ and called the \emph{$S$-coupled Petri map}. If $\mu^L_{p,q}$ is injective for every line bundle $L \in \Pic(C)$, then we say that $(C,p,q)$ satisfies the \emph{(fully) coupled Petri condition}; if $\mu^L_{p,q} |_S$ is injective for all $L \in \Pic(C)$ we say that $(C,p,q)$ satisfies the \emph{$S$-coupled Petri condition.} \end{defn} The $\{(0,0)\}$-coupled Petri map is the usual Petri map; the fully coupled Petri map may be viewed as gluing together information about the local geometry of Brill-Noether loci around not only $L$ but all of its twists by multiples of $p$ and $q$. The transpose of a specific type of coupled Petri map, in which only twists by a single point $p$ are used, is studied in \cite[Lemma 3.1]{cht}, where it is used to analyze the geometry of Brill-Noether varieties of once-marked curves. \begin{eg} \label{eg:genus0} (Genus $0$ curves) If $(C,p,q)$ is a twice-marked smooth curve of genus $0$, then every line bundle on $C$ is nonspecial, and $T^L_{p,q}$ is trivial, so $(C,p,q)$ satisfies the coupled Petri condition. \end{eg} \begin{eg} \label{eg:genus1} (Genus $1$ curves) A genus $1$ twice-marked smooth curve $(C,p,q)$ satisfies the coupled Petri condition if and only if $p-q$ is a non-torsion element of the Jacobian. If $(C,p,q)$ is a twice-marked smooth curve of genus $1$, then $\omega_C \cong \mathcal{O}_C$ and the only line bundles $L$ for which $H^0(C, L) \otimes H^0(C, \omega_C \otimes L^\vee) \neq \{0\}$ are those isomorphic to $\mathcal{O}_C$. If $p-q$ is non-torsion, then there is at most one $(a,b) \in \mathbb{Z}^2$ such that $L \cong \mathcal{O}_C(ap+bq)$, and it follows that $T^L_{p,q}$ is either trivial or one-dimensional. Since the image $\mu^L_{p,q}(\sigma \otimes \tau)$ of a simple tensor with $\sigma, \tau \neq 0$ is nonzero, $\mu^L_{p,q}$ is injective for all $L$ in this case. So twice-marked elliptic curves with $p-q$ nontorsion satisfy the coupled Petri condition. On the other hand, if $p-q$ is torsion, then for any line bundle $L \cong \mathcal{O}_C(ap+bq)$, $T^L_{p,q}$ is infinite-dimensional and $\mu^L_{p,q}$ fails spectacularly to be injective. \end{eg} We will prove (Corollary \ref{cor:mudelta}) that $\mathcal{P}^{\sbu}_d, \mathcal{Q}^{\sbu}_d$ is a versal pair at $[L] \in \Pic^d(C)$ if and only if the $[0,d-2g+1]^2$-coupled Petri map of $L$ is injective. Indeed, the transpose of that map is equal (up to isomorphism of the domain and codomain) to the map from the tangent space $T_{[L]} \Pic^d(C)$ to the space of first-order deformations of pairs of flags. This identification is the principal goal of Section \ref{sec:versality}. Our main result about the coupled Petri map is the following; it is proved in Section \ref{sec:proofs}. \begin{thm} \label{thm:coupledPetri} Let $S \subseteq \mathbb{Z} \times \mathbb{Z}$ be finite, and fix an algebraically closed field $k$. A general twice-marked genus $g$ curve $(C,p,q)$ over $k$ satisfies the $S$-coupled Petri condition.
\end{thm} \begin{rem} \label{rem:veryGeneralCoupled} By taking a countable intersection in $\mathcal{M}_{g,2}$, Theorem \ref{thm:coupledPetri} implies that a \emph{very general} twice-marked curve satisfies the fully coupled Petri condition. In fact, our strategy is to prove the existence of such a curve over some field in every characteristic. It is not possible to replace ``very general'' with ``general'' in this statement, however, as Example \ref{eg:genus1} shows for $g=1$. Over $\mathbb{C}$ the locus of genus-$1$ twice-marked curves $(C,p,q)$ with $p-q$ torsion is analytically dense in $\mathcal{M}_{1,2}$, and over the algebraic closure of a finite field this locus is all of $\mathcal{M}_{1,2}$. \end{rem} \begin{rem} \label{rem:dimTL} It is not clear at first that one should even expect $T^L_{p,q}$ to be finite-dimensional in general, and Example \ref{eg:genus1} shows that it need not be for special twice-marked curves. There is a combinatorially interesting way to describe its dimension; we sketch it here without proof since it is not needed elsewhere in this paper. There is a unique permutation $\pi$ of $\mathbb{Z}$ such that for all $a,b \in \mathbb{Z}$, $h^0(C, L(-ap-bq)) = r^\pi(a,b) = \# \{ a' \geq a:\ \pi(a') \geq b\}$. The ``typical'' case is that none of the line bundles $L(-ap-bq)$ are special; in that case $\pi = \omega_{d-g}$, the descending permutation $n \mapsto d-g-n$. The dimension of $T^L_{p,q}$ is equal to the length $\ell( \omega_{d-g} \pi)$, which may be finite or infinite. When $g=1$, the inversions of $\omega_{d-g} \pi$ are in bijection with pairs $(a,b)$ such that $L \cong \mathcal{O}_C(ap+bq)$. Of course, Theorem \ref{thm:coupledPetri} implies that for $(C,p,q)$ very general, $\ell(\omega_{d-g} \pi) \leq g$. \end{rem} \subsection{Background on Brill-Noether theory of marked curves} \label{ss:bnBackground} Eisenbud and Harris \cite{eh86} proved an extension of the classical Brill-Noether theorem to curves with marked points in their development of the theory of limit linear series. This extended Brill-Noether theorem considers a general $n$-marked smooth curve $(C,p_1, \cdots, p_n)$. Fix integers $d,r$ and sequences $a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, \cdots, a^n_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$, called vanishing sequences, where each is a strictly increasing sequence $a^i_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} = (a^i_0, \cdots, a^i_r)$ of nonnegative integers. Define the variety $G^r_d(C, (p_1, a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), \cdots, (p_n, a^n_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}))$, parameterizing linear series $(L,V)$ on $C$ with $\deg L = d, \dim V = r+1$ such that for all $i,j$, $\dim V(-a^i_j\ p_i) \geq r+1 - j$. These varieties are analogs of the Brill-Noether varieties $G^r_d(C)$ for curves with $n$ marked points, which allow for imposed ramification at the marked points. Such varieties are crucial in the theory of limit linear series, because when studying the limit of a linear series as a smooth curve degenerates to a nodal curve, one must control ramification conditions at the nodes. Then the extended Brill-Noether theorem \cite[Theorem 4.5]{eh86} states that, in characteristic $0$, for $(C,p_1, \cdots, p_n)$ general in $\mathcal{M}_{g,n}$, $$ \dim G^r_d(C, (p_1, a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), \cdots, (p_n, a^n_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})) = g - (r+1)(g-d+r) - \sum_{i=1}^n \sum_{j=0}^r (a^i_j - j), $$ when the variety is nonempty.
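To illustrate the dimension formula in its simplest nontrivial instance: with a single marked point ($n=1$) and the vanishing sequence $a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} = (0,1,\cdots,r-1,r+1)$, the correction term is $\sum_{j=0}^r (a^1_j - j) = 1$, so the predicted dimension is $g-(r+1)(g-d+r)-1$: one extra order of vanishing at $p_1$ costs exactly one parameter. The unramified sequence $(0,1,\cdots,r)$ contributes nothing, recovering the classical expected dimension of $G^r_d(C)$.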
The characteristic $0$ hypothesis is used in the base case $g=0$, and indeed this extended Brill-Noether theorem is false in positive characteristic when $n > 2$. However, the theorem is true in all characteristics when $n \leq 2$; a simple proof appears in \cite{ossSimpleBN}. Eisenbud and Harris's proof does not consider the singularities of these varieties. In the case $n=1$, this question is considered in \cite{cht}, where the singular locus is described using a map that is essentially a one-marked-point version of the coupled Petri map described above. In the case $n = 2$ and in all characteristics, an analog of the Gieseker-Petri theorem for $G^r_d(C, (p_1, a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), (p_2, a^2_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}))$ was proved in \cite{cop}. Although the varieties $G^r_d(C, (p_1, a^1_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), (p_2, a^2_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}))$ are not smooth in general, their singular loci can be completely described in terms of singularities of Schubert varieties. The approach in \cite{cop} does not make use of any version of the Petri map, instead analyzing the geometry of the space of limit linear series on a nodal curve. Brill-Noether varieties, for both marked and unmarked curves, have also been studied from the point of view of enumerative geometry. This study began with Castelnuovo's calculation in 1889 of the number of points in $W^r_d(C)$ when it is $0$-dimensional and $C$ is general, under the hypothesis that the Brill-Noether theorem was true. Castelnuovo's approach used a degeneration argument; a nice summary of this argument and how it has been adapted to other situations is in \cite[\S 5A]{hm}. Modern approaches have typically used techniques from the theory of degeneracy loci, beginning with the work of Kempf \cite{kempfSchubert} and Kleiman-Laksov \cite{kleimanLaksov1,kleimanLaksov2}, who computed the intersection-theoretic class $[W^r_d(C)]$ via the Porteous formula. An analogous computation for curves with one marked point may be deduced from the degeneracy locus formulas in \cite{kempfLaksov73}; this computation may be found, for example, in \cite[\S 4]{nonprimitive}, and an analogous theorem for curves with a \emph{moving} marked point can be found in \cite{farkasTarasca}. The case of two marked points with imposed ramification was studied in detail by Anderson, Chen, and Tarasca \cite{actKclasses}, who used degeneracy locus formulas to obtain not just the intersection class of the Brill-Noether loci but classes in K-theory as well. A recent paper \cite{actMotivic} also considers the case of one marked point from a motivic point of view. The present paper focuses, as in \cite{cop}, on the case $n=2$, where results are possible in all characteristics. Some of the works mentioned above focus on parameter spaces (generalizing $G^r_d(C)$), while others focus on loci in $\Pic^d(C)$ (generalizing $W^r_d(C)$) or consider both points of view. We focus entirely on loci in $\Pic^d(C)$ in this paper. The enumerative aspects of this paper, developed in Section \ref{sec:bnDegen}, follow similar methods to those of \cite{actKclasses}.
The principal difference in this paper is that while \cite{actKclasses} considers degeneracy loci indexed by $321$-avoiding permutations, imposing conditions on combinations of marked points requires us to consider degeneracy loci indexed by arbitrary permutations (of finite length). See also Remark \ref{rem:321case}. For example, while vanishing conditions may impose a cusp, flex point, or higher ramification condition at a single marked point, the approach in this paper may also impose nodes, bitangents, and other conditions concerning the interaction of the two marked points. \subsection{Brill-Noether degeneracy loci} We will apply the versality theorem to prove a generalization of the basic theorems about $W^r_d(C)$ to degeneracy loci that we call \emph{Brill-Noether degeneracy loci}. Given a twice-marked smooth curve $(C,p,q)$, integer $d$, and function $r: \mathbb{N}^2 \to \mathbb{N}$, we consider the locus \begin{equation} \label{eq:degenFirstForm} \{ [L] \in \Pic^d(C):\ h^0(C, L(-ap-bq)) \geq r(a,b) \mbox{ for all } a,b \in \mathbb{N}^2 \}. \end{equation} For example, by choosing $r(a,b) = \max(0, r_0+1 -a -b)$ for some fixed number $r_0$, this locus is the usual Brill-Noether locus $W^{r_0}_d(C)$; other choices of function $r(a,b)$ encode information about the geometry of the map $C \to \mathbb{P} H^0(C, L)^\vee$. A warning about an unfortunately confusing aspect of our notation: traditionally in Brill-Noether theory, the number $r$ is used for the \emph{projective} dimension of a linear series, and called the ``rank,'' hence $W^r_d(C)$ is defined by the inequality $h^0(C,L) \geq r+1$. But in (\ref{eq:degenFirstForm}) above, we use $r(a,b)$ to bound the vector space dimension, to be consistent with usage when speaking about degeneracy loci, and also use the word ``rank'' for $r(a,b)$. There does not seem to be a good way to avoid this. Only certain functions $r(a,b)$ are useful in the construction (\ref{eq:degenFirstForm}). In fact, the only functions for which equality $r(a,b) = h^0(C, L(-ap-bq))$ is possible are \emph{rank functions of dot arrays}, defined as follows. \begin{defn} A \emph{dot array} is a finite subset $\Pi$ of $\mathbb{N} \times \mathbb{N}$ such that no two distinct elements of $\Pi$ are in the same row or the same column. When drawing a dot array, we index positions like entries in a matrix: $(a,b)$ is placed in row $a$ and column $b$, with $(0,0)$ in the upper-left corner. The \emph{rank function} of a dot array $\Pi$ is the function $\mathbb{N}^2 \to \mathbb{N}$ defined by \begin{equation} \label{eq:rPi} r^\Pi(a,b) = \# \{ (a',b') \in \Pi:\ a' \geq a \mbox{ and } b' \geq b \}. \end{equation} We call a function $r: \mathbb{N}^2 \to \mathbb{N}$ a \emph{rank function} if it is the rank function of some dot array. Given a dot array $\Pi$, the \emph{row sequence} is the set of first coordinates of elements of $\Pi$ sorted in increasing order and usually denoted $(a_0, \cdots, a_r)$, where $r = |\Pi|-1$, and the \emph{column sequence} is the sorted set of second coordinates, usually denoted $(b_0, \cdots, b_r)$. \end{defn} See \cite[$\S 1.2$]{fultonPragacz} for a nice discussion and many examples. However, note that we use slightly different index conventions since we think of $a,b$ as codimensions rather than dimensions. This better matches our intended applications to Brill-Noether theory, where $a,b$ will be vanishing orders at a point.
For example, we index beginning at $0$ and count ``towards the lower right'' rather than ``towards the upper left.'' \begin{defn} \label{defn:bnDegen} Let $(C,p,q)$ be a twice-marked curve of genus $g$, $d$ a positive integer, and $\Pi$ a dot array such that $|\Pi| \geq d+1-g$. Define set-theoretically $$W^\Pi_d(C,p,q) = \{ [L] \in \Pic^d(C):\ h^0(C, L(-ap-bq)) \geq r^\Pi(a,b) \mbox{ for all } a,b \in \mathbb{N}^2 \}.$$ Give this locus a scheme structure by interpreting each inequality with a Fitting ideal; see Section \ref{sec:bnDegen}. We call $W^\Pi_d(C,p,q)$ a \emph{Brill-Noether degeneracy locus.} \end{defn} In fact, most of the inequalities in Definition \ref{defn:bnDegen} are redundant. Define \begin{equation} \label{eq:essPi} \Ess(\Pi) = \{ (a,b) \in \mathbb{N}^2: r^\Pi(a-1,b) = r^\Pi(a,b-1) = r^\Pi(a,b) > r^\Pi(a+1,b) = r^\Pi(a,b+1) \}, \end{equation} and call this the \emph{essential set} of $\Pi$. To interpret this definition when $a=0$ or $b=0$, define $r^\Pi(-1,b) = r^\Pi(0,b)$ and $r^\Pi(a,-1) = r^\Pi(a,0)$ (this accords with Equation \ref{eq:rPi} by simply allowing $a,b \in \mathbb{Z}$). One may replace ``for all $a,b \in \mathbb{N}^2$'' with ``for all $(a,b) \in \Ess(\Pi)$'' in Definition \ref{defn:bnDegen} without changing the scheme structure. See Section \ref{ss:degenLoci}. It will also be convenient to take a slightly larger set than $\Ess(\Pi)$ for some arguments; define the \emph{essential rows} $\EssR(\Pi)$ and \emph{essential columns} $\EssC(\Pi)$ to be the images of $\Ess(\Pi)$ along the first and second projections to $\mathbb{N}$, respectively. With these definitions, define $$\widetilde{W}^\Pi_d(C,p,q) = \{ [L] \in \Pic^d(C): h^0(C,L(-ap-bq)) \geq r^\Pi(a,b) \mbox{ for all } (a,b) \in \EssR(\Pi) \times \EssC(\Pi) \}.$$ We illustrate the geometric significance of dot arrays and degeneracy loci in Figure \ref{fig:dotArrays}. This figure shows various examples of dot arrays with $3$ dots. The sketch shows a cartoon of a twice-marked curve with a line bundle $[L] \in \widetilde{W}^\Pi_d(C,p,q)$, immersed in $\mathbb{P}^2$ by the complete linear series of $L$ (which we assume to be birational in these sketches). The boxes of the set $\Ess(\Pi)$ are shaded. \newcommand{\grid}{\foreach \x in {0,...,3} \draw (\x-0.5, 0.5) -- (\x-0.5,-3.5); \foreach \y in {0,...,3} \draw (-0.5, 0.5-\y) -- (3.5, 0.5-\y); } \begin{figure} \centering \begin{tabular}{|cc|cc|cc|cc|} \hline \begin{tikzpicture}[scale=0.25] \grid \ess{0}{0} \fdot{0}{2} \fdot{1}{1} \fdot{2}{0} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \coordinate (q) at (4,1); \node [fill=black, circle, inner sep=1pt, label=left:$p$] at (p) {}; \node [fill=black, circle, inner sep=1pt, label=right:$q$] at (q) {}; \draw (-0.5,-2) .. controls (-0.5,-1) .. (0,0) .. controls (1,2) and (3,-1) .. (4,1) .. controls (4.5,2) .. (4.5,3); \end{tikzpicture} &\begin{tikzpicture}[scale=0.25] \grid \ess{0}{0} \ess{3}{0} \fdot{0}{2} \fdot{1}{1} \fdot{3}{0} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \coordinate (q) at (2,2); \node [fill=black, circle, inner sep=1pt, label=left:$p$] at (p) {}; \node [fill=black, circle, inner sep=1pt, label=right:$q$] at (q) {}; \draw (-2,-2) .. controls (2,-3) and (-2,3) .. (2,2) .. controls (4,1.5) and (4,0) .. 
(3,0); \end{tikzpicture} &\begin{tikzpicture}[scale=0.25] \grid \ess{0}{0} \ess{2}{0} \fdot{0}{2} \fdot{2}{1} \fdot{3}{0} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \coordinate (q) at (4,0); \node [fill=black, circle, inner sep=1pt, label=left:$p$] at (p) {}; \node [fill=black, circle, inner sep=1pt, label=right:$q$] at (q) {}; \draw (-3,-3) .. controls (-2,-3) and (-1,-3) .. (0,0) .. controls (-1,-3) and (3,-3) .. (4,0) .. controls (4.3333,1) and (3,2).. (2,1); \end{tikzpicture} &\begin{tikzpicture}[scale=0.25] \grid \fdot{0}{0} \fdot{1}{2} \fdot{2}{1} \ess{1}{1} \ess{0}{0} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \coordinate (q) at (0,0); \node [fill=black, circle, inner sep=1pt, label=right:${p,q}$] at (q) {}; \draw (1.5,2) .. controls (1,1) .. (0,0) .. controls (-3,-3) and (-3,3) .. (0,0) .. controls (1,-1) .. (1.5,-2); \end{tikzpicture}\\ \multicolumn{2}{|c|}{generic} & \multicolumn{2}{c|}{flex point at $p$} & \multicolumn{2}{c|}{cusp at $p$} & \multicolumn{2}{c|}{node joining $p,q$} \\\hline \begin{tikzpicture}[scale=0.25] \grid \fdot{0}{1} \fdot{1}{0} \fdot{2}{2} \ess{0}{0} \ess{2}{2} \end{tikzpicture} & \begin{tikzpicture}[scale=0.15] \coordinate (p) at (0,0); \coordinate (q) at (5,-5); \node [fill=black, circle, inner sep=1pt, label=below left:${p}$] at (p) {}; \node [fill=black, circle, inner sep=1pt, label=above right:${q}$] at (q) {}; \draw[dashed] (-2,2) -- (7,-7); \draw (-1,2) .. controls (-1,1.5) and (-0.5,0.5) .. (0,0) .. controls (1,-1) and (3,0) .. (4,-1) .. controls (5,-2) and (4,-4) .. (5,-5) .. controls (5.5,-5.5) and (6.5,-6) .. (7,-6); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \grid \fdot{0}{0} \fdot{1}{1} \fdot{2}{2} \ess{0}{0} \ess{1}{1} \ess{2}{2} \end{tikzpicture} & \begin{tikzpicture}[scale=0.6] \coordinate (p) at (0,0); \coordinate (q) at (0,0); \node [fill=black, circle, inner sep=1pt, label=above:${p,q}$] at (p) {}; \draw (0.7,1) .. controls (0.5,0) and (0.1,0) .. (0,0) .. controls (-0.5,0) and (-2,1) .. (-2,0) .. controls (-2,-1) and (-0.5,0) .. (0,0) .. controls (0.1,0) and (0.5,0) .. (0.7,-1); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \grid \fdot{3}{3} \fdot{2}{0} \fdot{0}{1} \ess{0}{0} \ess{2}{0} \ess{3}{3} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \coordinate (q) at (4,0); \node [fill=black, circle, inner sep=1pt, label=above:${p}$] at (p) {}; \node [fill=black, circle, inner sep=1pt, label=above:${q}$] at (q) {}; \draw[dashed] (-2,0) -- (6,0); \draw (-1,2) .. controls (-1,1) and (-1,0) .. (0,0) .. controls (-2,0) and (0,-3) .. (2,-3) .. controls (4,-3) and (6,0) .. (4,0) .. controls (2,0) and (2,1) .. (3,2); \end{tikzpicture} &\begin{tikzpicture}[scale=0.25] \grid \fdot{0}{0} \fdot{2}{1} \fdot{3}{2} \ess{0}{0} \ess{2}{1} \ess{3}{2} \end{tikzpicture} & \begin{tikzpicture}[scale=0.2] \coordinate (p) at (0,0); \node [fill=black, circle, inner sep=1pt, label=above right:${p,q}$] at (p) {}; \draw (-1,2) .. controls (-2,1) and (-2,0) .. (0,0) .. controls (-3,0) and (0,-3) .. (2,-3) .. controls (4,-3) and (3,0) .. (0,0) .. controls (-2,0) .. (-4,-1); \end{tikzpicture} \\ \multicolumn{2}{|c|}{bitangent} & \multicolumn{2}{c|}{tacnode joining $p,q$} & \multicolumn{2}{c|}{node and flex on line} & \multicolumn{2}{c|}{cusp on tangent} \\\hline \end{tabular} \caption{Examples of Brill-Noether degeneracy loci, with $|\Pi| = 3$. 
The shaded boxes indicate $\Ess(\Pi)$.} \label{fig:dotArrays} \end{figure} Note for example that for the ``generic'' dot array of size $r+1$, $\Pi = \{ (0,r), (1,r-1), \cdots, (r,0) \}$, $W^\Pi_d(C,p,q) = W^r_d(C)$, $\Ess(\Pi) = \{(0,0)\}$, and $\widetilde{W}^\Pi_d(C,p,q) = W^r_d(C) \backslash W^{r+1}_d(C)$ (the choice of marked points is immaterial in this case). The expected dimension of $W^\Pi_d(C,p,q)$ is the following number. Here $a_0, \cdots, a_r$ and $b_0, \cdots, b_r$ denote the row and column sequences, and $r = |\Pi| -1$. \begin{eqnarray*} \rho_g(d, \Pi) &=& g - (r+1)(g-d+r) - \sum_{i=0}^r (a_i-i) - \sum_{i=0}^r (b_i-i) \\ &&- \# \{ (a,b), (a',b') \in \Pi: a < a' \mbox{ and } b < b'\}. \end{eqnarray*} This generalizes the classical Brill-Noether number $\rho_g(d,r) = g - (r+1)(g-d+r)$ that gives the expected dimension of $W^r_d(C)$; the remaining terms count conditions imposed by the required vanishing sequences at $p$ and $q$ and the interaction between them. Dot arrays $\Pi \subset \mathbb{N}^2$ with $| \Pi | \geq d+1-g$ are in bijection with a type of permutation of $\mathbb{Z}$ that we call \emph{$(d,g)$-confined permutations} and describe in Section \ref{ss:dgConfined}. The $(d,g)$-confined permutation $\pi$ associated to $\Pi$ has the feature that $\omega_{d-g} \pi$ has finite length, and in fact this length is the expected codimension: $\rho_g(d,\Pi) = g - \ell(\omega_{d-g} \pi)$. \begin{thm} \label{thm:bnDegen} Fix positive integers $g,d$ and an algebraically closed field $k$. Let $(C,p,q)$ be a general twice-marked smooth curve of genus $g$. For any dot array $\Pi$ such that $|\Pi| \geq d+1-g$, $W^\Pi_d(C,p,q)$ is nonempty if and only if $\rho_g(d,\Pi) \geq 0$. If it is nonempty, it has pure dimension $\rho_g(d, \Pi)$, and the open subscheme $\widetilde{W}^\Pi_d(C,p,q)$ is smooth and dense. Let $\pi$ be the $(d,g)$-confined permutation associated to $\Pi$, and let $R(\omega_{d-g} \pi)$ denote the set of reduced words for the permutation $\omega_{d-g} \pi$ (or equivalently, saturated chains from the identity to $\omega_{d-g} \pi$ in the Bruhat order). Then the Chow class of $W^\Pi_d(C,p,q)$ is $$\left[ W^\Pi_d(C,p,q) \right] = \frac{ |R(\omega_{d-g} \pi)| }{\ell(\omega_{d-g} \pi )!} \Theta^{\ell(\omega_{d-g} \pi)}.$$ \end{thm} In fact, we prove a stronger version of the smoothness statement in Theorem \ref{thm:bnDegenLocal}, which requires a bit more terminology. Since $\deg \Theta^g = g!$, Theorem \ref{thm:bnDegen} implies in part that, in the case $\rho_g(d,\Pi) = 0$, $W^\Pi_d(C,p,q)$ is a set of reduced points, where the number of points is the number of reduced words for a permutation. \begin{rem} \label{rem:321case} When $\Pi$ has no ascents, i.e. no pairs $(a,b),(a',b') \in \Pi$ with $a < a'$ and $b < b'$, the degeneracy locus $W^\Pi_d(C,p,q)$ is the subscheme of $W^{|\Pi|-1}_d(C)$ where a certain pair of vanishing conditions is imposed on the marked points individually but no conditions are imposed on combinations of the marked points. In this case, a stronger form of the Chow class formula above is proved in Proposition 5.1 of \cite{actKclasses} (see also Theorem B, in the case $\beta = 0$); namely a formula is obtained for the class in K-theory. The condition that $\Pi$ has no ascents is equivalent to the condition that $\omega_{d-g} \pi$ is $321$-avoiding. Also, the smoothness theorem in \cite{cop} may be used to deduce the smoothness statement in Theorem \ref{thm:bnDegen}, in the case where $\Pi$ has no ascents.
\end{rem} \begin{rem} The number of points in a classical Brill-Noether locus of dimension $0$ is the number of standard Young tableaux on a rectangular partition; it was observed in \cite[p. 3]{cdpr} that this statement can be seen by a bijection in an analogous tropical situation using a chain of loops. In \cite[Theorem 1.4]{pflChains} this bijection was extended to non-rectangular tableaux by considering divisors with prescribed ramification at a point. Partitions may be associated with vexillary permutations, and standard young tableaux are then in bijection with reduced words of these permutations. It seems plausible that, upon formulating a tropical analog of the degeneracy loci $W^\Pi_d(C,p,q)$, there should be a nice bijection between the set of reduced words of a given permutation and a $0$-dimensional tropical degeneracy locus on a chain of loops. \end{rem} \begin{qu} Under what circumstances is the degeneracy locus $W^\Pi_d(C,p,q)$ connected? The connectedness of $W^r_d(C)$ for general curves is usually established using the main theorem of \cite{flConnectedness}, which does not apply in this situation. It is possible that a degeneration proof, along the lines of Osserman's proof of the connectedness of $W^r_d(C)$ \cite{ossermanConnectedness}, may work. \end{qu} We will use the versality theorem to prove Theorem \ref{thm:bnDegen}, by demonstrating that the desired statements can all be deduced from the assumption that the degree-$(d+4g-2)$ Brill-Noether flags are versal. This analysis occupies Section \ref{sec:bnDegen}. \subsection{Generalizations of $G^r_d(C)$} Although the details are not developed in this paper, the versality theorem should also be able to provide new proofs of the main results of \cite{cop} about the geometry of the varieties $G^r_d(C, (p, a_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), (q, b_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}))$ parameterizing linear series on $C$ with prescribed vanishing orders at two marked points $p$ and $q$, as described in Section \ref{ss:bnBackground}. This potential application is also discussed in \cite[$\S$6]{cpRR} and \cite[$\S$4]{li2020}. By choosing sufficiently large integers $M$ and $N$ and letting $d' = d + M + N$, this variety can be described set-theoretically as \begin{equation}\begin{split} \{ (L, V) \in G^r_{d'}(C):\ & \dim V \cap (\mathcal{P}^{M+a_i}_{d'})_{[L(Mp+Nq)]} \geq r+1-i \\ & \mbox{ and } \dim V \cap (\mathcal{Q}^{N+b_i}_{d'})_{[L(Mp+Nq)]} \geq r+1-i \mbox{ for all } 0 \leq i \leq r \}. \end{split}\end{equation} This has the form of a \emph{relative Richardson variety}, in the sense of \cite{cpRR}. However, the results of \cite{cpRR} on such varieties work with pairs of \emph{complete} flags, so they do not immediately apply in this situation. Assuming that the results of \cite{cpRR} generalize to partial flags, they would quickly imply the results of \cite{cop} about the smooth locus of a twice-pointed Brill-Noether variety, and also imply that the image of the projection $$G^r_d(C, (p, a_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}), (q, b_{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})) \to \Pic^d(C)$$ is equal to a Brill-Noether degeneracy locus of the type discussed in this paper. Furthermore, the techniques of \cite{li2020} provide natural resolutions of singularities for twice-pointed Brill-Noether varieties, via \emph{relative Bott-Samelson varieties}; see \cite[Conjecture 4.4]{li2020}. 
\subsection{Conventions} \label{ss:conventions} By a \emph{smooth curve} we mean a smooth proper geometrically connected curve over a field $k$ of any characteristic (we will usually, but not always, take $k$ to be algebraically closed). A \emph{point} of a smooth curve will refer to a $k$-valued point unless otherwise stated. A \emph{twice-marked smooth curve} $(C,p,q)$ is a smooth curve with two distinct marked points. For a twice-marked curve $(C,p,q)$, we denote the punctured curve $C \backslash \{p,q\}$ by $C^\ast$, where the marked points $p,q$ will be clear from context. To simplify our notation, intervals $[a,b], (-\infty, b], [a, \infty)$ will always refer to sets of integers $[a,b] \cap \mathbb{Z}, (-\infty, b] \cap \mathbb{Z}, [a, \infty) \cap \mathbb{Z}$. A \emph{(partial) flag} in a vector bundle $\mathcal{H}$ (or vector space) is a sequence of nested subbundles (or subspaces) $\mathcal{P}^{a_0} \supset \cdots \supset \mathcal{P}^{a_\ell}$, where the superscript always denotes corank (i.e. the rank of $\mathcal{H} / \mathcal{P}^a$ is $a$). The set $A = \{a_0, \cdots, a_\ell\}$ is called the set of \emph{coranks} of the flag. A \emph{complete flag} is a flag with set of coranks $[0,n]$, where $n$ is the rank of $\mathcal{H}$. If $\alpha: \mathbb{Z} \xrightarrow{\sim} \mathbb{Z}$ is a permutation fixing all but finitely many $n \in \mathbb{Z}$, then the \emph{length} of $\alpha$ is $\ell(\alpha) = \# \{ (a,b) \in \mathbb{Z}^2: a < b \mbox{ and } \alpha(a) > \alpha(b) \}$, i.e. the number of inversions in $\alpha$. The length is also equal to the minimum length of a factorization of $\alpha$ into a product of simple transpositions; such a factorization is called a \emph{reduced word} for $\alpha$. If a permutation of $\mathbb{Z}$ moves infinitely many points, we say that it has infinite length. Denote by $\omega_m$ the descending permutation of $\mathbb{Z}$ given by $\omega_m(n) = m-n$, which we will sometimes also regard as a permutation of $[0,m]$ rather than all of $\mathbb{Z}$. We also use $\omega_C$ and $\omega_{\mathcal{C} / B}$ to denote the canonical sheaf of a smooth curve and the relative dualizing sheaf of a family of curves, and hope this leads to no confusion. The symbol $\mathbb{N}$ will denote the set of \emph{nonnegative} integers. \subsection{Outline of the paper} Section \ref{sec:degen} carries out the degeneration argument needed to prove Theorem \ref{thm:coupledPetri}. Section \ref{sec:flags} gives some preliminary material about relative positions of flags. Section \ref{sec:versality} is concerned with first-order deformations of a pair of flags, and builds the link between versality of Brill-Noether flags and the coupled Petri map. Section \ref{sec:proofs} completes the proofs of Theorems \ref{thm:versality} and \ref{thm:coupledPetri}. Section \ref{sec:bnDegen} applies the versality theorem to the analysis of Brill-Noether degeneracy loci and proves Theorem \ref{thm:bnDegen}. \section*{Acknowledgements} This work was supported by a Miner D. Crary Sabbatical Fellowship from Amherst College. The work and manuscript were completed during the special semester on Combinatorial Algebraic Geometry at ICERM. I am also grateful to Melody Chan, Montserrat Teixidor i Bigas, and David Anderson for helpful conversations and suggestions.
\section{Degeneration to chains of twice-marked curves} \label{sec:degen} This section is largely independent of the rest of the paper; its object may be summarized by the slogan: ``twice-marked curves satisfying the coupled Petri condition can be chained together.'' The approach here is based on Eisenbud and Harris's proof of the Gieseker-Petri theorem \cite{ehgp}, with two main adaptations. First, we degenerate to a chain of elliptic curves, rather than a flag curve; this is similar to the approach taken in \cite{welters}, and shares some features with the tropical proof of the Gieseker-Petri theorem in \cite{jpGP}. In fact, we formulate our degeneration in a more flexible way. Any chain of smooth curves, all satisfying the coupled Petri condition when the attachment points are marked, will do; elliptic curves are simply a convenient base case. Second, we work throughout with sections of line bundles on twice-punctured curves (i.e. rational sections); this adds the flexibility to consider the fully coupled Petri condition all at once, and also simplifies the argument somewhat. Our aim is to show that, if a chain of twice-marked curves satisfying the coupled Petri condition is smoothed in a family $\mathcal{C} \to B$, then for any finite set $S$ a general member of the family satisfies the $S$-coupled Petri condition (and therefore a very general member satisfies the fully coupled Petri condition). As in \cite{ehgp}, we demonstrate this by working over a discrete valuation ring, and studying the geometric general fiber. An \emph{arithmetic surface} is a proper flat scheme over a discrete valuation ring whose generic fiber is a smooth curve. Throughout this section, we work with an arithmetic surface as described in the following Situation, and illustrated in Figure \ref{fig:chain}.
\begin{figure} \centering \begin{tikzpicture}[thick] \coordinate (p) at (0,0); \coordinate (q) at (0,-5); \coordinate (p1) at (5,0); \coordinate (p2) at (5,-1); \coordinate (p3) at (5,-2); \coordinate (p4) at (5,-3); \coordinate (p5) at (5,-4); \coordinate (q5) at (5,-5); \coordinate (eta) at (0,-7); \coordinate (zero) at (5,-7); \node [fill=black, circle, inner sep=2pt, label=above left:$P_\eta$] at (p) {}; \node [fill=black,circle, inner sep=2pt, label=above left:$Q_\eta$] at (q) {}; \node [fill=black,circle, inner sep=2pt, label=above left:$p_1$] at (p1) {}; \node [fill=black,circle, inner sep=2pt, label=left:{$q_1=p_2$}] at (p2) {}; \node [fill=black,circle, inner sep=2pt, label=left:{$q_2 = p_3$}] at (p3) {}; \node [fill=black,circle, inner sep=2pt, label=left:{$q_3 = p_4$}] at (p4) {}; \node [fill=black,circle, inner sep=2pt, label=left:{$q_{\ell-1} = p_\ell$}] at (p5) {}; \node [fill=black,circle, inner sep=2pt, label=below left:$q_\ell$] at (q5) {}; \node [fill=black,circle, inner sep=2pt, label=below:$\eta$] at (eta) {}; \node [fill=black,circle, inner sep=2pt, label=below:$0$] at (zero) {}; \draw [shorten <= -0.5cm, shorten >= -0.5cm] (p) to[in=-170, out=-10] node[midway, above] {$P = A_0$} (p1); \draw [shorten <= -0.5cm, shorten >= -0.5cm] (q) to[in=-170, out=-10] node[midway, above] {$Q = A_{\ell+1}$} (q5); \draw [shorten <= -0.5cm, shorten >= -0.5cm] ([xshift=-0.05cm]p) to[in=45, out=-135] node[midway, right] {$\mathcal{C}_\eta$} ([xshift=0.05cm]q); \draw [shorten <= -0.2cm, shorten >= -0.2cm] (p1) to[out=-135, in=135] node[midway,right] {$A_1$} (p2); \draw [shorten <= -0.2cm, shorten >= -0.2cm] (p2) to[out=-135, in=135] node[midway,right] {$A_2$} (p3); \draw [shorten <= -0.2cm, shorten >= -0.2cm] (p3) to[out=-135, in=135] node[midway,right] {$A_3$} (p4); \draw [shorten <= -0.2cm, shorten >= -0.2cm] (p5) to[out=-135, in=135] node[midway,right] {$A_\ell$} (q5); \path ([yshift=0.25cm]p4) -- node[midway] {$\vdots$} (p5); \draw[->] (2.5, -5.4) -- (2.5, -6.8); \draw [shorten <= -0.4cm, shorten >= -0.4cm] (eta) -- node[midway, below] {$\on{Spec} R$} (zero); \end{tikzpicture} \caption{The arithmetic surface $\mathcal{C} \to \on{Spec} R$ in Situation \ref{sit:chain}.} \label{fig:chain} \end{figure} \begin{sit} \label{sit:chain} Let $R$ be a discrete valuation ring, with residue field $k$, uniformizer $t$, and fraction field $K$. Assume that $k$ is algebraically closed. Let $\pi: \mathcal{C} \to \on{Spec} R$ be a regular arithmetic surface, with two disjoint sections $P, Q$. We sometimes also denote $P$ by $A_0$ and $Q$ by $A_{\ell+1}$. Assume that the special fiber $\mathcal{C}_k$ is a chain of $\ell$ smooth curves $A_1 \cup \cdots \cup A_\ell$, where $A_n$ and $A_{n+1}$ meet nodally at a point that we denote by both $q_n$ and $p_{n+1}$\footnote{Why give the same point two names? I have found it psychologically helpful to use $q_n$ when treating it as a point of $A_n$, and $p_{n+1}$ when treating it as a point of $A_{n+1}$, so that the central fiber consists of twice-marked curves $(A_n, p_n, q_n)$.}. Furthermore, assume that the section $P$ meets the special fiber at a point $p_1 \in A_1$ distinct from $q_1$, and $Q$ meets the special fiber at a point $q_\ell \in A_\ell$ distinct from $p_\ell$. Denote by $\mathcal{C}^\ast$ the open subscheme given by the complement of $P \cup A_1 \cup \cdots \cup A_\ell \cup Q = A_0 \cup \cdots \cup A_{\ell+1}$, and by $\mathcal{U}_n$ the complement of $A_0 \cup \cdots \cup A_{n-1} \cup A_{n+1} \cup \cdots \cup A_{\ell+1}$. 
Also denote by $A_n^\ast$ the punctured smooth curve $A_n \backslash \{p_n, q_n\}$ (for $1 \leq n \leq \ell$). The generic fiber $\mathcal{C} \times_R \on{Spec} K$ will be denoted $C_\eta$, and the geometric generic fiber $\mathcal{C} \times_R \on{Spec} \overline{K}$ will be denoted by $C_{\overline{\eta}}$. \end{sit} Throughout this section, we will follow the notational convention that if $\mathcal{L},\mathcal{M}$ are line bundles on $\mathcal{C}$, then $L_n, M_n$ denote the restrictions of these line bundles to $A_n$. Our objective in this section is to prove the following. \begin{thm} \label{thm:petriSmoothing} In Situation \ref{sit:chain}, suppose that each twice-marked curve $(A_n, p_n, q_n)$ satisfies the coupled Petri condition. Then $(C_{\overline{\eta}}, P_{\overline{\eta}}, Q_{\overline{\eta}})$ satisfies the coupled Petri condition. \end{thm} It will simplify notation and add no additional difficulty to work in a slightly more general situation. If $\mathcal{L}, \mathcal{M}$ are line bundles on $\mathcal{C}$, denote by $\mu$ the cup product map $$\mu: H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M}) \to H^0(\mathcal{C}^\ast, \mathcal{L} \otimes \mathcal{M}).$$ We will define a notion of coupled tensors for this multiplication map and prove in Theorem \ref{thm:icdegen} an analog of Theorem \ref{thm:petriSmoothing} in this setting, from which we will deduce Theorem \ref{thm:petriSmoothing}. \subsection{Valuation of tensors} \label{ss:valTensor} In Situation \ref{sit:chain}, the divisors $P = A_0, A_1, \cdots, A_\ell, A_{\ell+1} = Q$ determine valuations $\nu_0, \cdots, \nu_{\ell+1}$ on the function field of $\mathcal{C}$. If $\mathcal{L}$ is a line bundle on $\mathcal{C}$, then $\nu_n$ also naturally determines a map $\nu_n: H^0( \mathcal{C}^\ast, \mathcal{L}) \to \mathbb{Z} \cup \{ \infty \}$ giving the vanishing order of a rational section along $A_n$. Following the strategy of Eisenbud and Harris \cite{ehgp}, we further extend $\nu_n$ to tensors, as follows. If $\mathcal{L},\mathcal{M}$ are two line bundles on $\mathcal{C}$, then we define a function $$\nu_n: H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M}) \to \mathbb{Z} \cup \{ \infty \}$$ by setting $\nu_n(\rho)$ to the maximum, taken over all expansions $\rho = \sum \sigma_i \otimes \tau_i$ into rank-$1$ tensors, of $\min_i\{ \nu_n(\sigma_i) + \nu_n(\tau_i) \}$. Importantly, $\nu_n(\rho)$ is \emph{not} always equal to $\nu_n( \mu(\rho))$, although it does provide a lower bound. In the same fashion, for each $n \in [1,\ell]$ we extend the valuations $\nu_{p_n}, \nu_{q_n}$ on the function field of $A_n$ to maps $H^0(A_n^\ast, L_n) \otimes_k H^0(A_n^\ast, M_n) \to \mathbb{Z} \cup \{ \infty \}$. For $n \in [1,\ell]$, $\nu_n |_R$ is the discrete valuation on $R$, so any rank-$1$ tensor $\sigma \otimes \tau$ may be written $t^a \sigma' \otimes \tau'$, where $a \in \mathbb{Z}$ and $\nu_n(\sigma') = \nu_n(\tau') = 0$. It follows that for all $a \in \mathbb{Z}$, $$\big\{ \rho \in H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M}):\ \nu_n(\rho) \geq a \big\} = t^a H^0(\mathcal{U}_n, \mathcal{L}) \otimes_R H^0(\mathcal{U}_n, \mathcal{M}).$$ On the other hand, if $n \in \{0, \ell+1\}$, then $\nu_n$ induces the trivial valuation on $R$. In general, \begin{equation} \label{eq:nuMult} \nu_n( t^a \rho ) = \begin{cases} a + \nu_n(\rho) & \mbox{ if } 1 \leq n \leq \ell\\ \nu_n(\rho) & \mbox{ if } n \in \{0,\ell+1\}.
\end{cases} \end{equation} The exact sequence $0 \to \mathcal{L} (-A_n) \to \mathcal{L} \to L_n \to 0$ induces an exact sequence $0 \to H^0(\mathcal{U}_n, \mathcal{L}(-A_n)) \to H^0(\mathcal{U}_n, \mathcal{L}) \to H^0(A_n^\ast, L_n)$, and the image of the first map may be identified with $t \cdot H^0(\mathcal{U}_n, \mathcal{L})$, hence the second map induces an injection $H^0(\mathcal{U}_n, \mathcal{L}) \otimes_R k \hookrightarrow H^0(A_n^\ast, L_n)$. The same remarks apply to $\mathcal{M}$, and we obtain a restriction homomorphism $$H^0(\mathcal{U}_n, \mathcal{L}) \otimes_R H^0(\mathcal{U}_n, \mathcal{M}) \to H^0(A_n^\ast, L_n) \otimes_k H^0(A_n^\ast, M_n),$$ \noindent the kernel of which consists of all tensors divisible by $t$, i.e. all $\rho$ with $\nu_n(\rho) \geq 1$. We denote the image of a tensor $\rho$ under this homomorphism by $\rho |_{A_n^\ast}$; this is nonzero if and only if $\nu_n(\rho) = 0$. The basic fact we need about valuations of tensors is the following. \begin{prop} \label{prop:nuRestriction} Let $\rho \in H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M})$, and let $n \in [1,\ell]$. If $\nu_n(\rho) = 0$, then the restriction $\rho |_{A_n^\ast}$ is nonzero, and satisfies $$\nu_{p_n} \left( \rho |_{A_n^\ast} \right) \geq \nu_{n-1}(\rho), \mbox{ and } \nu_{q_n} \left( \rho |_{A_n^\ast} \right) \geq \nu_{n+1}(\rho).$$ \end{prop} Proposition \ref{prop:nuRestriction} is plausible enough, but it requires a bit of algebra to prove; we briefly develop the necessary tools in the next subsection. \subsection{Adapted bases} To prove Proposition \ref{prop:nuRestriction}, we make use of a special type of basis for finite-dimensional subspaces of sections of line bundles on $\mathcal{C}^\ast$. These bases play the role of the ``compatible bases'' used in \cite[Lemma 1.2]{ehgp}. \begin{defn} \label{def:adapted} In Situation \ref{sit:chain}, let $\mathcal{L}$ be a line bundle on $\mathcal{C}$, and suppose $n \in [0,\ell+1]$. Let $S \subseteq H^0(\mathcal{C}^\ast, \mathcal{L})$, and let $V = \operatorname{span} S$. The set $S$ is called \emph{adapted to $A_n$} if it is linearly independent (over $K$), and for all $a \in \mathbb{Z}$, the set $\{ c \sigma:\ c \in K,\ \sigma \in S,\ \nu_n(c \sigma) \geq a \}$ generates the $R$-module $V_a = \{v \in V:\ \nu_n(v) \geq a\}$. \end{defn} Adapted bases are convenient for evaluating valuations of sections: if $\{\sigma_i\}$ is adapted to $A_n$, then for any choice of constants $\{c_i\}$, \begin{equation} \label{eq:adaptedValuation} \nu_n\left( \sum c_i \sigma_i \right) = \min \{ \nu_n(c_i \sigma_i) \}. \end{equation} \begin{lemma} \label{lem:adaptedOne} Let $V \subseteq H^0(\mathcal{C}^\ast, \mathcal{L})$ be a finite-dimensional subspace, and $n \in [0, \ell+1]$. There exists a basis of $V$ that is adapted to $A_n$. \end{lemma} \begin{proof} Observe that if $n \in \{0,\ell+1\}$, then each $V_a$ is a $K$-vector space, while if $n \in [1,\ell]$ then $V_a = t^a V_0$ for all $a \in \mathbb{Z}$, $V_0 \otimes_R K \cong V$, and $V_0$ is a free module since it is finitely generated and torsion-free. In these two cases, we may construct bases adapted to $A_n$ as follows. \begin{enumerate} \item If $n \in \{0, \ell+1\}$: choose $\sigma_1, \cdots, \sigma_{\dim_K V }$ inductively so that for each $a \in \mathbb{Z}$, $\{ \sigma_1, \cdots, \sigma_{\dim_K V_a} \}$ is a basis for $V_a$. \item If $n \in [1,\ell]$: choose an $R$-basis $\sigma_1, \cdots, \sigma_{\dim_K V }$ for $V_0$. \end{enumerate} In either case, $\{ \sigma_i \}$ is a $K$-basis for $V$ adapted to $A_n$.
\end{proof} If adapted bases are constructed as in items (1) and (2) above, they lend themselves to modification by the operations of Gaussian elimination. Indeed, if we modify exactly one element of the basis by replacing $\sigma_i$ by $\sigma_i + c \sigma_j$, then we obtain a new adapted basis of the same form provided that \begin{enumerate} \item if $n \in \{0, \ell+1\}$, we choose $c \in K$ and $i > j$, and \item if $n \in [1,\ell]$ we choose $c \in R$ and $i \neq j$. \end{enumerate} \begin{lemma} \label{lem:adaptedTwo} Let $V \subseteq H^0(\mathcal{C}^\ast, \mathcal{L})$ be a finite-dimensional subspace, and let $m,n \in [0,\ell+1]$. There exists a basis of $V$ that is adapted to both $A_m$ and $A_n$. \end{lemma} \begin{proof} Let $\{\sigma_i\}$ be a basis adapted to $A_m$, and $\{\tau_i\}$ a basis adapted to $A_n$, constructed as in the proof of Lemma \ref{lem:adaptedOne}. Let $T$ be the change of basis matrix from the basis $\{ \sigma_i\}$ to the basis $\{\tau_i\}$. We may modify the bases, according to the operations described above, in such a way that $T$ has exactly one nonzero entry in each row and in each column; to do so is an exercise in Gaussian elimination. This proves the lemma, since upon permuting one of the two bases we have that $\sigma_i = c_i \tau_i$ for nonzero constants $c_i \in K$, and it follows that $\{ \sigma_i \}$ is now adapted to both $A_m$ and $A_n$. \end{proof} \begin{lemma} \label{lem:twoNu} Suppose $m,n \in [0,\ell+1]$ and $\rho \in H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M})$. If $\nu_m(\rho) \geq a$ and $\nu_n(\rho) \geq b$, then there exist $\sigma_1, \cdots, \sigma_N \in H^0(\mathcal{C}^\ast, \mathcal{L})$ and $\tau_1, \cdots, \tau_N \in H^0(\mathcal{C}^\ast, \mathcal{M})$ such that \begin{enumerate} \item $\rho = \sum \sigma_i \otimes \tau_i$, \item $\nu_m(\sigma_i) + \nu_m(\tau_i) \geq a$ for all $i$, and \item $\nu_n(\sigma_i) + \nu_n(\tau_i) \geq b$ for all $i$. \end{enumerate} \end{lemma} \begin{proof} By definition of $\nu_m(\rho), \nu_n(\rho)$, there exist two expansions $\rho = \sum_{k=1}^{N_1} \alpha_k \otimes \beta_k = \sum_{k=1}^{N_2} \gamma_k \otimes \delta_k$ such that $\nu_m(\alpha_i) + \nu_m(\beta_i) \geq a$ for all $i \in [1,N_1]$ and $\nu_n(\gamma_i) + \nu_n(\delta_i) \geq b$ for all $i \in [1, N_2]$. Let $V$ be the span of all $\alpha_i$ and $\gamma_i$, and $W$ be the span of all $\beta_i$ and $\delta_i$. By Lemma \ref{lem:adaptedTwo}, we may choose bases $\{\sigma_i\}$ of $V$ and $\{\tau_j\}$ of $W$ adapted to both $A_m$ and $A_n$, and $\{ \sigma_i \otimes \tau_j\}$ is a basis for $V \otimes W$. Expanding an individual term $\alpha_k \otimes \beta_k$ in this basis results in a sum $\sum f_{i,j}^k \sigma_i \otimes \tau_j$, where, by Equation \ref{eq:adaptedValuation}, $\nu_m(f_{i,j}^k) + \nu_m(\sigma_i) + \nu_m(\tau_j) \geq a$ for all $i,j$. Thus when $\rho$ is expanded in this basis we obtain an expression $\rho = \sum f_{i,j}\ \sigma_i \otimes \tau_j$ where $\nu_m(f_{i,j}) + \nu_m(\sigma_i) + \nu_m(\tau_j) \geq a$ for all $i,j$. By linear independence, the same coefficients $f_{i,j}$ are obtained from expanding $\sum_{k=1}^{N_2} \gamma_k \otimes \delta_k$, so a similar argument shows that for all $i,j$, $\nu_n(f_{i,j}) + \nu_n(\sigma_i) + \nu_n(\tau_j) \geq b$, which gives the lemma. \end{proof} We now have the tools to prove Proposition \ref{prop:nuRestriction}. \begin{proof}[Proof of Proposition \ref{prop:nuRestriction}] Suppose that $1 \leq n \leq \ell$ and $\rho \in H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M})$ satisfies $\nu_n(\rho) = 0$.
The claim that $\rho|_{A_n^\ast}$ is nonzero was verified in the paragraph before the proposition statement. By Lemma \ref{lem:twoNu}, there exists an expansion $\rho = \sum \sigma_i \otimes \tau_i$ such that $\nu_{n-1}(\sigma_i) + \nu_{n-1}(\tau_i) \geq \nu_{n-1}(\rho)$ and $\nu_n(\sigma_i) + \nu_n(\tau_i) \geq 0$ for all $i$. Since $\nu_n(t) = 1$, we may move a power of $t$ from one factor to the other in each term so that $\nu_n(\sigma_i), \nu_n(\tau_i) \geq 0$ for all $i$. Thus we have well-defined restrictions $\sigma_i |_{A_n^\ast} \in H^0(A_n^\ast, L_n)$ and $\tau_i |_{A_n^\ast} \in H^0(A_n^\ast, M_n)$, and $\rho |_{A_n^\ast} = \sum (\sigma_i |_{A_n^\ast}) \otimes (\tau_i |_{A_n^\ast}).$ The restriction of $\mathcal{O}_{\mathcal{C}}(A_{n-1})$ to $A_n$ is equal to $\mathcal{O}_{A_n}(p_n)$, and thus $\nu_{p_n}(\sigma_i |_{A_n^\ast}) \geq \nu_{n-1}(\sigma_i)$ and $\nu_{p_n}(\tau_i |_{A_n^\ast}) \geq \nu_{n-1}(\tau_i)$. Therefore $\nu_{p_n} (\rho |_{A_n^\ast}) \geq \min_i \left( \nu_{n-1}(\sigma_i) + \nu_{n-1}(\tau_i) \right) = \nu_{n-1}(\rho)$, as desired. The bound $\nu_{q_n}(\rho |_{A_n^\ast}) \geq \nu_{n+1}(\rho)$ follows by replacing $\nu_{n-1}$ with $\nu_{n+1}$ in this argument. \end{proof} Although we have been working with an arithmetic surface $\mathcal{C} \to \on{Spec} R$ throughout this section, the reader may verify that all arguments regarding the valuations $\nu_0$ and $\nu_{\ell+1}$ work with no modification if we instead consider a smooth curve $C \to \on{Spec} k$ over a field, with valuations $\nu_p, \nu_q$ coming from two distinct rational points. The existence of an adapted basis in that setting is also given by Fact \ref{fact:permBasis}. Adapted to this context, we obtain the following analog of Lemma \ref{lem:twoNu}, which will be useful in the next section. \begin{lemma} \label{lem:adaptedCurve} Let $C \to \on{Spec} k$ be a smooth curve over a field (not necessarily algebraically closed), with distinct rational points $p,q$. Let $L,M$ be two line bundles on $C$, and $\rho \in H^0(C^\ast, L) \otimes_k H^0(C^\ast, M)$. If $A,B \in \mathbb{Z}$ satisfy $\nu_p(\rho) \geq A$ and $\nu_q(\rho) \geq B$, then $$\rho \in \sum_{a,b \in \mathbb{Z}} H^0(C, L((-a-A)p+(-b-B)q)) \otimes_k H^0(C, M(ap+bq)).$$ \end{lemma} \subsection{Injective coupling} \label{ss:injectiveCoupling} We prove in this subsection our basic inductive result. Suppose that $C \to \on{Spec} k$ is a smooth curve over a field (not necessarily algebraically closed), $p,q$ are distinct rational points, and $L,M$ are two line bundles on $C$. Define $$T^{L,M}_{p,q} = \sum_{a,b \in \mathbb{Z}} H^0(C, L(-ap-bq)) \otimes_k H^0(C, M(ap+bq)),$$ and define the \emph{coupled multiplication map} to be the linear map \begin{equation} \label{eq:coupledMult} \mu^{L,M}_{p,q}:\ T^{L,M}_{p,q} \to H^0(C, L \otimes M), \end{equation} given in each summand by the cup product. This generalizes the coupled Petri map. \begin{rem} \label{rem:restrictMuDomain} The set $T^{L,M}_{p,q}$ also has the following two equivalent descriptions. \begin{enumerate} \item We may add the restrictions $-\deg M \leq a+b \leq \deg L$ to the sum defining $T^{L,M}_{p,q}$, since in all other summands one of the line bundles has negative degree. \item By Lemma \ref{lem:adaptedCurve}, $T^{L,M}_{p,q}$ is equal to the set of all $\rho \in H^0(C^\ast, L) \otimes_k H^0(C^\ast,M)$ such that both $\nu_p(\rho) \geq 0$ and $\nu_q(\rho) \geq 0$. 
\end{enumerate} \end{rem} We say that two line bundles $L,M$ on a smooth twice-marked curve $(C,p,q)$ over a field have \emph{injective coupling} if $\mu^{L,M}_{p,q}$ is injective. \begin{thm} \label{thm:icdegen} In Situation \ref{sit:chain}, let $\mathcal{L}, \mathcal{M}$ be line bundles on $\mathcal{C}$. Suppose that for all $1 \leq n \leq \ell$, $L_n(-p_n)$ and $M_n$ have injective coupling on $(A_n,p_n,q_n)$. Then $L_\eta(-P_\eta), M_\eta$ have injective coupling on $(C_\eta, P_\eta, Q_\eta)$. \end{thm} \begin{proof} Suppose that $\rho \in H^0(\mathcal{C}^\ast, \mathcal{L}) \otimes_R H^0(\mathcal{C}^\ast, \mathcal{M})$ is nonzero, lies in the kernel of the multiplication map $\mu$, and satisfies $\nu_P(\rho) \geq 1$. We will demonstrate that $\nu_Q(\rho) < 0$; this suffices, since by Remark \ref{rem:restrictMuDomain} a nonzero element of the kernel of the coupled multiplication map of $L_\eta(-P_\eta), M_\eta$ could be rescaled by a power of $t$ to produce such a tensor $\rho$ with $\nu_Q(\rho) \geq 0$. For each $n$ between $1$ and $\ell$ inclusive, let $\rho_n = t^{-\nu_n(\rho)} \rho$. Proposition \ref{prop:nuRestriction} implies that $\rho_n |_{A_n^\ast}$ is nonzero and satisfies $\nu_{p_n}( \rho_n |_{A_n^\ast} ) \geq \nu_{n-1}(\rho_n)$ and $\nu_{q_n}( \rho_n |_{A_n^\ast} ) \geq \nu_{n+1}(\rho_n)$. Since restriction commutes with the cup product, $\rho_n |_{A_n^\ast}$ lies in the kernel of the multiplication map on $A_n$; if both $\nu_{n-1}(\rho_n) \geq 1$ and $\nu_{n+1}(\rho_n) \geq 0$ held, then by Remark \ref{rem:restrictMuDomain} the tensor $\rho_n |_{A_n^\ast}$ would be a nonzero element of $T^{L_n(-p_n), M_n}_{p_n, q_n}$ in the kernel of the coupled multiplication map. So injective coupling on $A_n$ implies that either $\nu_{n-1}(\rho_n) < 1$ or $\nu_{n+1}(\rho_n) < 0$. By Equation \ref{eq:nuMult}, $$\nu_{n-1}(\rho_n) = \begin{cases} \nu_P(\rho) & \mbox{ if } n = 1 \\ \nu_{n-1}(\rho) - \nu_n(\rho) & \mbox{ if } 2 \leq n \leq \ell \end{cases},$$ and $$\nu_{n+1}(\rho_n) = \begin{cases} \nu_Q(\rho) & \mbox{ if } n = \ell \\ \nu_{n+1}(\rho) - \nu_n(\rho) & \mbox{ if } 1 \leq n \leq \ell-1 \end{cases}.$$ A straightforward induction shows that if $\nu_P(\rho) \geq 1$, then $\nu_{n+1}(\rho) < \nu_n(\rho)$ for $1 \leq n \leq \ell-1$, and $\nu_Q(\rho) < 0$. This establishes the theorem. \end{proof} The following lemma is needed to apply Theorem \ref{thm:icdegen} to the coupled Petri map. \begin{lemma} \label{lem:extraQHarmless} Let $(C,p,q)$ be a twice-marked smooth curve of any genus, and $L$ any line bundle on $C$. The line bundles $L$ and $\omega_C \otimes L^\vee$ have injective coupling if and only if $L(-p)$ and $\omega_C(p+q) \otimes L^\vee$ have injective coupling. \end{lemma} \begin{proof} By the Riemann-Roch formula, for all $a,b$ either $H^0(C, L(-ap-(b-1)q)) = H^0(C,L(-ap-bq))$ or $H^0(C, \omega_C \otimes L^\vee(ap + bq)) = H^0(C, \omega_C \otimes L^\vee(ap + (b-1) q))$. This shows that $T^{L, \omega_C \otimes L^\vee}_{p,q} = T^{L(q), \omega_C \otimes L^\vee}_{p,q}$. Next, observe that $T^{L(q), \omega_C \otimes L^\vee}_{p,q} = T^{L(-p), \omega_C(p+q) \otimes L^\vee}_{p,q}$, simply by replacing $a,b$ with $a+1, b+1$ in the summation. So despite appearances, the coupled multiplication maps for the pair $L, \omega_C \otimes L^\vee$ and the pair $L(-p), \omega_C(p+q) \otimes L^\vee$ are precisely the same map. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:petriSmoothing}] In Situation \ref{sit:chain}, assume that each $(A_n, p_n, q_n)$ satisfies the coupled Petri condition. Let $L$ be any line bundle on $C_{\overline{\eta}}$; we will prove that $\mu^L_{P_{\overline{\eta}}, Q_{\overline{\eta}}}$ is injective. First consider the case where $L$ is defined over $K$, so that $L$ may be regarded as a line bundle on $C_\eta$. Since $\mathcal{C}$ is regular, this line bundle may be extended (non-uniquely) to a line bundle $\mathcal{L}$ on $\mathcal{C}$ (e.g. choose a Cartier divisor $D$ with $L \cong \mathcal{O}_{C_\eta}(D)$, and take its closure to obtain a Cartier divisor in $\mathcal{C}$). Let $\omega_{\mathcal{C}/R}$ be the relative dualizing sheaf.
By the adjunction formula \cite[Theorem 9.1.37]{liu}, together with the fact that $\mathcal{O}_{\mathcal{C}}(A_n) |_{A_n} \cong \mathcal{O}_{A_n}(-p_n-q_n)$, we have $\omega_{\mathcal{C}/R}(P+Q) |_{A_n} \cong \omega_{A_n}(p_n + q_n)$. Let $\mathcal{M} = \omega_{\mathcal{C}/R} (P+Q) \otimes \mathcal{L}^\vee$. Then $M_n = \omega_{A_n}(p_n+q_n) \otimes L_n^\vee$. Since $(A_n, p_n, q_n)$ satisfies the coupled Petri condition, the bundles $L_n$ and $\omega_{A_n} \otimes L_n^\vee$ have injective coupling; by Lemma \ref{lem:extraQHarmless} so do $L_n(-p_n)$ and $M_n$. Now Theorem \ref{thm:icdegen} implies that $L(-P_\eta)$ and $M = \omega_{C_\eta}(P_\eta+Q_\eta) \otimes L^\vee$ have injective coupling on $C_\eta$, and Lemma \ref{lem:extraQHarmless} shows that $L$ and $\omega_{C_\eta} \otimes L^\vee$ have injective coupling. Since the global sections functor commutes with flat base extension, the same is true after $L$ is pulled back from $C_\eta$ to $C_{\overline{\eta}}$. This settles the case of line bundles on $C_{\overline{\eta}}$ defined over $K$. For the general case, note that any line bundle $L$ on $C_{\overline{\eta}}$ is defined over some finite extension $K' \supseteq K$. We may extend $R$ to a discrete valuation ring $R'$ with fraction field $K'$. Upon forming the base extension $\mathcal{C} \times_R \on{Spec} R'$ and resolving singularities, we obtain an arithmetic surface $\mathcal{C}' \to \on{Spec} R'$ with generic fiber $C_\eta \times_K \on{Spec} K'$ and special fiber given by replacing each node $p_n$ ($2 \leq n \leq \ell$) by a chain of rational curves (see e.g. \cite[Corollary 3.25]{liu}). By assumption on $A_1, \cdots, A_\ell$ and Example \ref{eg:genus0}, each component of the central fiber of $\mathcal{C}'$ satisfies the coupled Petri condition. Therefore we may invoke the first case to conclude that $L$ and $\omega_{C_{\overline{\eta}}} \otimes L^\vee$ have injective coupling on $C_{\overline{\eta}}$. \end{proof} \section{Flags and permutations} \label{sec:flags} This section collects, without proofs, some basic facts about how permutations are used to record the relative position of a pair of flags in a fixed vector space and to define degeneracy loci of flags within vector bundles. There are a variety of conventions in the literature, depending on whether flags are indexed by dimension or codimension, whether indexing begins at $0$ or $1$, and other choices, so it is useful to consolidate the facts that we need in the notation used in this paper. \subsection{The permutation associated to a pair of flags} \begin{fact} \label{fact:flagPerm} Let $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ be two flags in a vector space $H$, with sets of coranks $A$ and $B$ respectively, and let $n = \dim H$. There exists a unique permutation $\sigma$ of $[0,n-1]$ such that \begin{enumerate} \item For all $a \in A, b \in B$, $\dim P^a \cap Q^b = \# \{ i \in [0,n-1]: i \geq a \mbox{ and } \sigma(i) \geq b \}$, \item for all $a \in [0,n-1]$ with $a > 0$ and $a \not\in A$, $\sigma(a) < \sigma(a-1)$, and \item for all $b \in [0,n-1]$ with $b >0$ and $b \not\in B$, $\sigma^{-1}(b) < \sigma^{-1}(b-1)$. \end{enumerate} \end{fact} The permutation $\sigma$ is called the \emph{permutation associated to $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$}. We call any permutation of $[0,n-1]$ satisfying conditions (2) and (3) above \emph{compatible with coranks $A, B$}.
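Although nothing later depends on it, Fact \ref{fact:flagPerm} is easy to test concretely. The following sketch (illustrative Python; the encoding of flags by bases and all names are hypothetical choices made here, not notation used elsewhere in this paper) recovers $\sigma$ from the table of intersection dimensions, using the observation that condition (1) makes $\dim P^a \cap Q^b - \dim P^{a+1} \cap Q^b$ the indicator of $\sigma(a) \geq b$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Complete flags in Q^n encoded by bases: P^a = span of rows a..n-1
# of BP, so that P^a has corank a; likewise for BQ.  Generic integer
# data keeps matrix_rank numerically reliable.
BP = rng.integers(-5, 6, (n, n))
BQ = rng.integers(-5, 6, (n, n))

def dim_cap(U, W):
    # dim(U cap W) = dim U + dim W - dim(U + W) for row-span subspaces
    if len(U) == 0 or len(W) == 0:
        return 0
    return len(U) + len(W) - np.linalg.matrix_rank(np.vstack([U, W]))

# D[a][b] = dim(P^a cap Q^b), which condition (1) says equals
# #{ i in [0,n-1] : i >= a and sigma(i) >= b }.
D = [[dim_cap(BP[a:], BQ[b:]) for b in range(n + 1)] for a in range(n + 1)]

# [sigma(a) >= b] = D[a][b] - D[a+1][b]; summing this indicator over
# b in [0,n-1] gives sigma(a) + 1.
sigma = [sum(D[a][b] - D[a + 1][b] for b in range(n)) - 1 for a in range(n)]
assert sorted(sigma) == list(range(n))
print(sigma)
\end{verbatim}

For two flags in general position the script prints the order-reversing permutation $a \mapsto n-1-a$, reflecting the fact that generic flags are transverse.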
\begin{fact} \label{fact:permBasis} Let $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ be a pair of flags with associated permutation $\sigma$. There exists a basis $\{v_0, \cdots, v_{n-1}\}$ of $H$ such that for all $a \in A, b \in B$, $\{v_i: i \geq a \}$ is a basis for $P^a$ and $\{v_i:\ \sigma(i) \geq b\}$ is a basis for $Q^b$. Therefore $\{ v_i:\ i \geq a, \sigma(i) \geq b \}$ is a basis for $P^a \cap Q^b$. \end{fact} Although we will not prove these facts, we sketch a strategy of proof, for the reader who is new to these notions. First, construct a basis $\mathcal{B}$ of $H$ that contains a basis for all subspaces $P^a \cap Q^b$; this can be done inductively, ordering the intersections by dimension. Now sort this basis in two ways: one in order of stratum from $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$, and one in order of stratum from $Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$. Furthermore, ensure that any two elements $v,w \in \mathcal{B}$ are placed in the same order in both orderings whenever possible without violating the previous sentence. Comparing these two orderings gives $\sigma$. We will call a basis as in Fact \ref{fact:permBasis} \emph{adapted to $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$}. \subsection{Degeneracy loci and the essential set} \label{ss:degenLoci} The association of a permutation to a pair of flags can also be reversed, to define \emph{degeneracy loci} for a pair of flags in a vector bundle. We follow the conventions of \cite[$\S 2$]{cpRR} here. Given a permutation $\sigma$ of $[0,n-1]$, define the \emph{rank function} of $\sigma$ to be the function $r^\sigma: \mathbb{N}^2 \to \mathbb{N}$ given by \begin{equation} \label{eq:rsigma} r^\sigma(a,b) = \# \{a' \in [0,n-1]: a' \geq a \mbox{ and } \sigma(a') \geq b \}. \end{equation} If $S$ is a scheme with a rank $n$ vector bundle $\mathcal{H}$ and two complete flags $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ in $\mathcal{H}$, we define a degeneracy locus \begin{equation} \label{eq:dsigmaComplete} D_\sigma(\mathcal{P}^{\sbu}; \mathcal{Q}^{\sbu}) = \big\{ x \in S:\ \dim (\mathcal{P}^a)_x \cap (\mathcal{Q}^b)_x \geq r^\sigma(a,b) \mbox{ for all } a,b \in [0,n-1] \big\}. \end{equation} Of course, the equation above is only a set-theoretic definition. To give a scheme-theoretic definition, translate each inequality into a bound on the rank of the bundle map $\mathcal{Q}^b \to \mathcal{H} / \mathcal{P}^a$, trivialize this bundle map locally, and consider the scheme defined by all the minors of appropriate size in a matrix representing the bundle map; the resulting ideal is independent of the choice of trivialization. Alternatively, one can bound the rank of the difference map $\mathcal{P}^a \oplus \mathcal{Q}^b \to \mathcal{H}$, which results in the same ideal (this can be seen by representing this map in block form and comparing its minors to those of $\mathcal{Q}^b \to \mathcal{H} / \mathcal{P}^a$), and makes the symmetry evident. The scheme structure on $D_\sigma$ is the intersection of these schemes. Although there are a priori $n^2$ inequalities imposed in Equation \ref{eq:dsigmaComplete}, many of them are redundant. For example, if $r^\sigma(a,b) = r^\sigma(a+1,b)$ or $r^\sigma(a,b) = r^\sigma(a-1,b)-1$, then the bound on $\dim (\mathcal{P}^a)_x \cap (\mathcal{Q}^b)_x$ is redundant. A minimal set of rank conditions is provided by the \emph{essential set}, defined in \cite{fultonSchubert}.
Translated into our notation, this set is \begin{equation} \label{eq:essSet} \Ess(\sigma) = \{(a,b): 1 \leq a,b < n,\ \sigma(a-1) < b \leq \sigma(a) \mbox{ and } \sigma^{-1}(b-1) < a \leq \sigma^{-1}(b) \}. \end{equation} The inequalities in Equation \ref{eq:essSet} are equivalent to the equations \begin{equation} \label{eq:essSetV2} r^\sigma(a-1,b) = r^\sigma(a,b-1) = r^\sigma(a,b) > r^\sigma(a+1,b)= r^\sigma(a,b+1), \end{equation} which make the redundancy clearer (cf. Equation \ref{eq:essPi}). By \cite[Lemma 3.10]{fultonSchubert}, the scheme defined in Equation \ref{eq:dsigmaComplete} is the same if we consider only $(a,b) \in \Ess(\sigma)$, rather than all $(a,b) \in [0,n-1]^2$ (this is not hard to see set-theoretically; importantly, it is also true scheme-theoretically). A pleasant consequence of this is that it is also possible to define $D_\sigma$ for \emph{partial} flags, provided that the flags have strata of all coranks mentioned in $\Ess(\sigma)$. \begin{defn} \label{def:dsigma} Let $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ be flags with coranks $A,B$ respectively, and let $\sigma$ be a permutation of $[0,n-1]$ such that $\Ess(\sigma) \subseteq A \times B$. Define $$D_\sigma(\mathcal{P}^{\sbu}; \mathcal{Q}^{\sbu}) = \{ x \in S: \dim (\mathcal{P}^a)_x \cap (\mathcal{Q}^b)_x \geq r^\sigma(a,b) \mbox{ for all } (a,b) \in \Ess(\sigma) \},$$ where the scheme structure is defined in the manner described above. \end{defn} Permutations of $[0,n-1]$ have a partial order, the \emph{Bruhat order}, characterized by $\sigma \leq \tau$ if and only if $r^\sigma \geq r^\tau$. Therefore $\tau \leq \sigma$ implies $D_{\tau}(\mathcal{P}^{\sbu}; \mathcal{Q}^{\sbu}) \subseteq D_{\sigma}(\mathcal{P}^{\sbu}; \mathcal{Q}^{\sbu})$, and $D_{\sigma}(\mathcal{P}^{\sbu}; \mathcal{Q}^{\sbu})$ can also be described (set theoretically) as the locus of points $x \in S$ where the permutation $\tau$ associated to $\mathcal{P}^{\sbu}_x, \mathcal{Q}^{\sbu}_x$ satisfies $\tau \leq \sigma$ in Bruhat order. Denote by $\Fl(A; n)$ the flag variety parameterizing flags $V^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ of coranks $A$ in $k^n$. Taking $\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ to be the tautological flag of coranks $A$ on $\Fl(A;n)$, and $F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ to be a fixed flag in $k^n$ of coranks $B$, we denote $$X_\sigma(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}) = D_\sigma(\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}) \subseteq \Fl(A; n),$$ where $\sigma$ is any permutation compatible with coranks $A, B$. These are \emph{Schubert varieties}, and they are prototypical of all degeneracy loci of pairs of flags. The literature on Schubert varieties is vast; we mention here only a few facts that we need in this paper. \begin{fact} \label{fact:Schubert} Fix a permutation $\sigma$ compatible with coranks $A,B$, and a fixed flag $F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ of coranks $B$ in $k^n$. The Schubert variety $X_\sigma(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ has codimension $\ell( \omega_{n-1} \sigma)$ in $\Fl(A; n)$. Its singular locus is a union of Schubert varieties $X_\tau(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ for certain permutations $\tau \leq \sigma$. All such $\tau$ have $\ell(\omega_{n-1} \tau) \geq \ell(\omega_{n-1} \sigma) + 2$, so Schubert varieties are smooth in codimension $1$.
\end{fact} If $X_{\tau}(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}) \subseteq X_{\sigma}(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})_{\operatorname{sing}}$, we will say that \emph{$\tau$ is singular in $X_\sigma$}. A combinatorial description of those $\sigma, \tau$ for which $\tau$ is singular in $X_\sigma$ was conjectured in \cite{lakshmibai-sandhya-criterion} and proved independently by several groups \cite{billey-warrington-maximal, cortez-singularities, kassel-lascoux-reutenauer-singular, manivel-lieu}. \section{Versality of a pair of flags} \label{sec:versality} This section gives a definition and some preliminary results about versality of flags, following the point of view of \cite{cpRR}, and applies these results to relate versality of Brill-Noether flags to the coupled Petri map. Unlike \cite{cpRR}, we do not assume the flags are complete, and we restrict our attention to versality for \emph{pairs} of flags. Throughout this section, let $S$ be a finite-type scheme over an algebraically closed field $k$ of any characteristic, with a rank $n$ vector bundle $\mathcal{H}$. Suppose further that two (partial or complete) flags $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ are chosen in $\mathcal{H}$, with sets of coranks $A,B$ respectively. We will reserve the letters $a,b$ to denote elements of $A$ or $B$ respectively. Denote by $f: \on{Fr}(\mathcal{H}) \to S$ the frame bundle of $\mathcal{H}$ and by $\Fl(A; n)$ the flag variety parameterizing flags $F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ of coranks $A$ in $k^n$. The flags $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ induce a morphism $p: \on{Fr}(\mathcal{H}) \to \Fl(A;n) \times \Fl(B;n)$. \begin{defn} \label{defn:versal} The pair of flags $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ is called \emph{versal} if the morphism $p: \on{Fr}(\mathcal{H}) \to \Fl(A; n) \times \Fl(B; n)$ described above is a smooth morphism. \end{defn} \begin{rem} \label{rem:stacky} Definition \ref{defn:versal} may be stated stack-theoretically: the pair of flags $\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ is versal if it induces a smooth map from $S$ to the stack quotient $[ \Fl(A; n) \times \Fl(B; n) / \on{GL}_n(k)]$. There is a moduli problem for pairs of flags, which is represented by this stack, and this explains our use of the term ``versal.'' The details are omitted since we do not need this point of view. \end{rem} \subsection{First-order deformations of a pair of flags} Fix a $k$-point $x \in S$. For simplicity, denote by $H, P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ the fibers of $\mathcal{H}, \mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}$ over $x$. Define \begin{equation} \label{eq:Mspace} M_x = \operatorname{coker} \left( \End H \xrightarrow{\Delta} \End H / \Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} \oplus \End H / \Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} \right), \end{equation} where $\Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ denotes $\{ \phi \in \End H:\ \phi(P^a) \subseteq P^a \mbox{ for all $a \in A$}\}$, $\Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ is defined similarly, and $\Delta$ is the diagonal map. Fix a point $y \in f^{-1}(x)$.
This amounts to an isomorphism $k^n \xrightarrow{\sim} H$, which induces an isomorphism $$T_{p(y)} \Fl(A; n) \times \Fl(B; n) \cong \End H / \Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} \oplus \End H / \Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}.$$ Using this identification we define a quotient map $q_y: T_{p(y)} \Fl(A;n) \times \Fl(B;n) \to M_x$. The kernel of $q_y$ is isomorphic to the image of the diagonal map from $\End H$, and may be identified with the tangent space to the $\on{GL}_n(k)$-orbit of $p(y)$. \begin{lemma} \label{lem:deltax} There is a linear map $\delta_x: T_x S \to M_x$, such that the following diagram commutes. In this diagram, the rows are exact, $\on{GL}_n(k)$ is identified with the fiber $f^{-1} (x)$ and the map to $T_y \on{Fr}(\mathcal{H})$ is the differential of the inclusion. \begin{center} \begin{tikzcd} 0 \ar[r] & T_{\operatorname{id}} \on{GL}_n(k) \ar[r] \ar[d, equals] & T_y \on{Fr}(\mathcal{H}) \ar[d, "dp_y"] \ar[r,"df_y"]& T_x S \ar[r] \ar[d,"\delta_x"] & 0 \\ 0 \ar[r] & \End H \ar[r, "\Delta"] & T_{p(y)} \Fl(A; n) \times \Fl(B; n) \ar[r,"q_y"] & M_x \ar[r] & 0\\ \end{tikzcd} \end{center} The map $\delta_x$ does not depend on the choice of $y \in f^{-1}(x)$. \end{lemma} We omit the proof of Lemma \ref{lem:deltax}; see the discussion in \cite[$\S$3]{cpRR}, which generalizes with minor notational changes to the case of incomplete flags. Therefore the linear map $\delta_x$ describes the essential content of the differential $dp_y$, discarding any information that depends on the choice of frame. The utility of $\delta_x$ stems from the following Lemma, which boils down to the first-order criterion for smoothness of morphisms. We omit the proof; see \cite[Proposition 3.2]{cpRR}, which can be adapted to partial flags by minor notational changes. \begin{lemma} \label{lem:deltaxSmooth} The flag pair $\mathcal{P}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, \mathcal{Q}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ is versal on $S$ if and only if $S$ is smooth over $k$ and $\delta_x$ is surjective for all $x \in S$. \end{lemma} We require a more concrete description of the map $\delta_x$ suitable for studying Brill-Noether flags. Let $v \in T_x S$, and regard $v$ as a morphism $v: \on{Spec} k[\varepsilon] \to S$, where $k[\varepsilon]$ is the ring of dual numbers. This gives an extension $0 \to H \to v^\ast \mathcal{H} \to H \to 0$, where the first map is multiplication by $\varepsilon$ and the second is quotient modulo $\varepsilon$. A choice of $y \in f^{-1}(x)$ and $w \in df_y^{-1}(v)$ determines a trivialization $v^\ast \mathcal{H} \xrightarrow{\sim} k[\varepsilon]^n$, which determines a splitting $s: v^\ast \mathcal{H} \to H$. It is possible to inductively construct two splittings $\phi_P, \phi_Q: H \to v^\ast \mathcal{H}$ of the quotient map $v^\ast \mathcal{H} \to H$, such that $\phi_P(P^a) \subseteq v^\ast \mathcal{P}^a$ and $\phi_Q(Q^b) \subseteq v^\ast \mathcal{Q}^b$ for all $a\in A, b \in B$. Although these are not unique, $s \circ \phi_P$ is unique modulo $\Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ and $s \circ \phi_Q$ is unique modulo $\Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$.
In fact, these two maps precisely give the differential of $p$: \begin{equation} \label{eq:dpy} dp_y(w) = ( s \circ \phi_P + \Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}},\ s \circ \phi_Q + \Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}) \in \End H / \Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}} \oplus \End H / \Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}. \end{equation} Now, applying the quotient $q_y$ to this formula, we obtain \begin{lemma} \label{lem:deltaxDesc} For any $x \in S$ and $v \in T_x S$, let $s: v^\ast \mathcal{H} \to H$ be any $k$-linear splitting, and $\phi_P, \phi_Q: H \to v^\ast \mathcal{H}$ be any $k$-linear splittings such that $\phi_P(P^a) \subseteq v^\ast \mathcal{P}^a$ and $\phi_Q(Q^b) \subseteq v^\ast \mathcal{Q}^b$ for all $a \in A, b \in B$. Then $$\delta_x(v) = [ s \circ \phi_P, s \circ \phi_Q ],$$ where the square brackets indicate the coset in $M_x$. \end{lemma} \subsection{A description of $M_x^\vee$} The dual vector space $M^\vee_x$ has a convenient description that will be well-suited to relating it to the coupled Petri map. This description is based on the observation that elements of $M_x$ encode first-order deformations of the relative positions of each pair $P^a,Q^b$ of strata, via the following maps. \begin{defn} \label{defn:rab} For $a \in A, b \in B$, define $r_{a,b}: M_x \to \Hom(P^a \cap Q^b, H / (P^a + Q^b))$ by $$r_{a,b}([\psi_P, \psi_Q]) = q \circ (\psi_P - \psi_Q) \mid_{P^a \cap Q^b},$$ where $q$ denotes the quotient map $H \to H / (P^a + Q^b)$. \end{defn} In the following statement, $\perp$ denotes the annihilator subspace in $H^\vee$, all summands are regarded as subspaces of $H^\vee \otimes H$, and we implicitly identify $\left( H/(P^a + Q^b) \right)^\vee$ with $(P^a + Q^b)^\perp$ and $(P^a + Q^b)^\perp \otimes (P^a \cap Q^b)$ with $\Hom(P^a \cap Q^b, H / (P^a + Q^b))^\vee$. \begin{prop} \label{prop:mdual} There is an isomorphism $$\zeta: \sum_{a \in A, b \in B} (P^a + Q^b)^\perp \otimes (P^a \cap Q^b) \to M_x^\vee$$ such that for all $a,b$ the induced map $(P^a + Q^b)^\perp \otimes (P^a \cap Q^b) \to M_x^\vee$ is equal to ${}^t r_{a,b}$. \end{prop} \begin{proof} First, we claim that the product map $$r = \prod_{a,b} r_{a,b}: M_x \to \prod_{a \in A, b \in B} \Hom(P^a \cap Q^b, H / (P^a + Q^b) )$$ is injective. To see this, first choose a basis $\mathcal{B}$ that is adapted to $P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ (see Fact \ref{fact:permBasis}). Any element of $M_x$ is equal to $[\psi, 0]$ for some $\psi \in \End H$, since $[\alpha, \beta] = [\alpha-\beta, 0]$; let $[\psi, 0] \in M_x$ be any element of $\ker r$. We will show that $[\psi, 0] = 0$, by constructing $\psi_1 \in \Fix P^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, \psi_2 \in \Fix Q^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ such that $\psi = \psi_1 - \psi_2$. We will construct $\psi_1, \psi_2$ by specifying their values on each element of $\mathcal{B}$. For each $v \in \mathcal{B}$, choose $a \in A, b \in B$ maximal such that $v \in P^a \cap Q^b$; the assumption $[\psi, 0] \in \ker r_{a,b}$ implies that $\psi(v) \in P^a + Q^b$, so we may write $\psi(v) = v_1 - v_2$ for some $v_1 \in P^a, v_2 \in Q^b$. Define $\psi_1(v) = v_1,\ \psi_2(v) = v_2$. If $\psi_1, \psi_2$ are constructed in this way, then $[\psi,0] = [\psi_1, \psi_2] = 0 \in M_x$. This establishes that $\prod_{a,b} r_{a,b}$ is injective.
Now, define $\Delta: \End H \to \prod_{a,b} \Hom(P^a \cap Q^b, H / (P^a + Q^b))$ to be the diagonal map, and $\pi: \End H \twoheadrightarrow M_x$ to be the map $\psi \mapsto [\psi, 0]$. Then $r \circ \pi = \Delta$. Dualizing, and using the fact that $r$ is injective and $\pi$ is surjective, we obtain an isomorphism $\zeta: \operatorname{im} {}^t \Delta \xrightarrow{\sim} M_x^\vee$. Identifying $\operatorname{im} {}^t \Delta$ with $\sum_{a,b} (P^a + Q^b)^\perp \otimes (P^a \cap Q^b)$ gives the desired isomorphism. It follows from the construction that the restriction of $\zeta$ to the $(a,b)$ summand is equal to ${}^t r_{a,b}$. \end{proof} The isomorphism $\zeta$ demonstrates that the map $\delta_x$ may be conveniently studied via the compositions $r_{a,b} \circ \delta_x$, which record first-order deformations of the pair $P^a, Q^b$ of subspaces. These maps have a convenient cohomological description, which is shown in Figure \ref{fig:snake}. Consider the difference morphism $\mathcal{P}^a \oplus \mathcal{Q}^b \to \mathcal{H}$ given by $(\alpha, \beta) \mapsto \alpha - \beta$. Given a tangent vector $v \in T_x S$, we obtain a map of short exact sequences from $0 \to P^a \oplus Q^b \to v^\ast \mathcal{P}^a \oplus v^\ast \mathcal{Q}^b \to P^a \oplus Q^b \to 0$ to $0 \to H \to v^\ast \mathcal{H} \to H \to 0$. Now, identifying the kernel of $P^a \oplus Q^b \to H$ with $P^a \cap Q^b$, this map of short exact sequences gives a snake map $P^a \cap Q^b \to H / (P^a + Q^b)$. This snake map can be described explicitly as $q \circ s \circ (\phi_P - \phi_Q) \circ \iota$, where $q: H \to H/(P^a +Q^b)$ is the quotient map, $s, \phi_P, \phi_Q$ are splittings as in Lemma \ref{lem:deltaxDesc}, and $\iota$ is the inclusion $P^a \cap Q^b \to H$. But $q \circ s \circ (\phi_P - \phi_Q) \circ \iota$ is also equal to $r_{a,b}(\delta_x(v))$. This proves \begin{lemma} \label{lem:rabsnake} Let $v \in T_x S$ be a tangent vector. For all $a \in A, b \in B$, the map $r_{a,b} \circ \delta_x(v)$ is equal to the snake map described above and shown in Figure \ref{fig:snake}. \end{lemma} \begin{figure} \centering \begin{tikzcd} &&&0 \ar[d] & \\ && \ar[ddd, phantom, ""{coordinate,name=middle}] &P^a \cap Q^b \ar[d, "\iota"] \ar[dddll, rounded corners, "r_{a,b}\circ \delta_x(v)", to path = { -- ([xshift=12ex]\tikztostart.east) \tikztonodes |- (middle) -| ([xshift=-12ex]\tikztotarget.west) -- (\tikztotarget) } ] \\ 0 \ar[r] & P^a \oplus Q^b \ar[r] \ar[d,crossing over] & v^\ast \mathcal{P}^a \oplus v^\ast \mathcal{Q}^b \ar[r] \ar[d,crossing over] & P^a \oplus Q^b \ar[r] \ar[d, crossing over] \ar[l, bend right, "\phi_P \oplus \phi_Q"'] & 0 \\ 0 \ar[r] & H \ar[r] \ar[d, "q"] & v^\ast \mathcal{H} \ar[r] \ar[l, bend right=15, "s"'] & H \ar[r] & 0\\ & H / (P^a + Q^b) \ar[d] & ~\\ & 0 \end{tikzcd} \caption{The composition $r_{a,b} \circ \delta_x(v)$ as a snake map.} \label{fig:snake} \end{figure} \subsection{First-order deformation of Brill-Noether flags} We now specialize to our intended application: let $(C,p,q)$ be a twice-marked smooth curve over an algebraically closed field $k$, $S = \Pic^d(C)$ for some integer $d \geq 2g-1$, and consider the degree-$d$ Brill-Noether flags $\mathcal{P}^{\sbu}_d, \mathcal{Q}^{\sbu}_d$ in the vector bundle $\mathcal{H}_d$. For convenience, let $n = d+1-g = \operatorname{rank} \mathcal{H}_d$. Fix a point $[L] \in \Pic^d(C)$. In the notation of this section, $H = H^0(C,L)$, $P^a = H^0(C,L(-ap))$, $Q^b = H^0(C,L(-bq))$, $P^a \cap Q^b = H^0(C,L(-ap-bq))$, and the sets of coranks are $A = B = [0, d-2g+1]$.
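As a sanity check on this indexing (a routine Riemann--Roch computation, recorded only for orientation): for $0 \leq a \leq d-2g+1$ the line bundle $L(-ap)$ has degree $d - a \geq 2g-1$, hence is nonspecial, so $$\dim P^a = h^0(C, L(-ap)) = d - a + 1 - g = n - a,$$ and $P^a$ is indeed a stratum of corank $a$ in $H$. Likewise $\dim P^a \cap Q^b = h^0(C, L(-ap-bq)) = n - a - b$ whenever $a + b \leq d - 2g + 1$.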
For all $a \in A, b \in B$, there is an exact sequence of $\mathcal{O}_C$-modules \begin{equation} \label{eq:labSequence} 0 \to L(-ap-bq) \to L(-ap) \oplus L(-bq) \to L \to 0, \end{equation} where the second map is given by $(s,t) \mapsto s-t$ on sections. Taking cohomology, and using the fact that $L, L(-ap)$, and $L(-bq)$ are nonspecial line bundles, gives an isomorphism $$ h_{a,b}: H / (P^a + Q^b) \xrightarrow{\sim} H^1(C, L(-ap-bq)).$$ Dualizing these maps and using functoriality of the long exact sequence gives an isomorphism \begin{equation} \label{eq:quoth1} \theta: \sum_{0 \leq a,b \leq d-2g+1} H^1(C, L(-ap-bq))^\vee \otimes H^0(C, L(-ap-bq)) \xrightarrow{\sim} \sum_{0 \leq a,b \leq d-2g+1} (P^a + Q^b)^\perp \otimes (P^a \cap Q^b). \end{equation} Using Serre duality and identifying the domain of $\theta$ with $T^L_{p,q} |_{[0,d-2g+1]^2}$ gives an isomorphism $$\zeta \circ \theta:\ T^L_{p,q} |_{[0,d-2g+1]^2} \xrightarrow{\sim} M_{[L]}^\vee.$$ Identifying the cotangent space $T^\vee_x S \cong H^1(C,\mathcal{O}_C)^\vee$ with $H^0(C, \omega_C)$, regard ${}^t \delta_{[L]}$ as a map $${}^t \delta_{[L]}: M^\vee_{[L]} \to H^0(C, \omega_C).$$ We may finally link versality of Brill-Noether flags to the coupled Petri map. \begin{prop} \label{prop:etaIsMult} The map $\eta = {}^t \delta_{[L]} \circ \zeta \circ \theta$ is equal to the $[0,d-2g+1]^2$-coupled Petri map $\mu^L_{p,q} |_{[0,d-2g+1]^2}$. \end{prop} \begin{proof} Let $\eta_{a,b} = \eta \circ \iota$ denote the restriction of $\eta$ to $H^0(C,L(-ap-bq)) \otimes H^0(C,\omega_C \otimes L^\vee(ap + bq) )$. Then ${}^t \eta_{a,b} = {}^t \iota \circ {}^t \theta \circ {}^t \zeta \circ \delta_{[L]} = h_{a,b} \circ r_{a,b} \circ \delta_{[L]} = h_{a,b} \circ s$, where $s$ is examined in Lemma \ref{lem:rabsnake}: it is the map $$s: T_{[L]} \Pic^d(C) \to \Hom\big( H^0(C,L(-ap-bq)),\ H / (P^a + Q^b) \big)$$ such that for all $v \in T_{[L]} \Pic^d(C)$, the map $s(v)$ is equal to the snake map in Lemma \ref{lem:rabsnake}. That snake diagram has a cohomological interpretation. Fix $v \in T_{[L]} \Pic^d(C)$, corresponding to an extension $0 \to L \to \mathcal{L} \to L \to 0$ of $L$ to a line bundle on $C \times_k \on{Spec} k[\varepsilon]$, and consider the following map of short exact sequences. \begin{center} \begin{tikzcd} 0 \ar[r] & L(-ap) \oplus L(-bq) \ar[r] \ar[d] & \mathcal{L}(-ap) \oplus \mathcal{L}(-bq) \ar[r] \ar[d] & L(-ap) \oplus L(-bq) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & L \ar[r] & \mathcal{L} \ar[r] & L \ar[r] & 0\\ \end{tikzcd} \end{center} Observe that none of these sheaves have higher cohomology, and the kernels of the vertical maps are $L(-ap-bq), \mathcal{L}(-ap-bq), L(-ap-bq)$, respectively. In other words, this diagram gives an acyclic resolution of the exact sequence \begin{equation} \label{eq:LabSequence} 0 \to L(-ap-bq) \to \mathcal{L}(-ap-bq) \to L(-ap-bq) \to 0. \end{equation} Therefore, applying the global sections functor and forming the snake diagram gives the long exact cohomology sequence of (\ref{eq:LabSequence}), and the snake map of Figure \ref{fig:snake} is equal, up to the isomorphism $h_{a,b}$, to the cohomology boundary map \begin{equation} \label{eq:bdyMap} \partial:\ H^0(C, L(-ap-bq)) \to H^1(C, L(-ap-bq)). \end{equation} In other words, $\partial = h_{a,b} \circ r_{a,b} \circ \delta_{[L]}(v)$.
On the other hand, $\partial$ is also given by taking the cup product with $v$, where we now regard $v$ as an element of $H^1(C, \mathcal{O}_C)$\footnote{This can be seen explicitly by representing the extension $\mathcal{L}$ by a \v{C}ech cocycle, or more abstractly by noting that in the long exact sequence for $\Hom_{\mathcal{O}_C}(\mathcal{O}_C, \bullet)$, the boundary map $\Hom_{\mathcal{O}_C}(\mathcal{O}_C, L) \to \operatorname{Ext}^1_{\mathcal{O}_C}(\mathcal{O}_C, L)$ is given by taking the Yoneda product with the class in $\operatorname{Ext}_{\mathcal{O}_C}^1(L,L)$ of the extension; the latter is identified with $v \in H^1(C, \mathcal{O}_C)$.}. In other words, the map $${}^t \eta_{a,b}: T_{[L]} \Pic^d(C) \cong H^1(C,\mathcal{O}_C) \to \Hom( H^0(C,L(-ap-bq)), H^1(C,L(-ap-bq)))$$ is given by ${}^t \eta_{a,b}(v)(\sigma) = v \cup \sigma.$ Dualizing, and unwinding definitions (including the description of the Serre duality isomorphism as cup product followed by $H^1(C,\omega_C) \xrightarrow{\sim} k$), it follows that $$\eta_{a,b}: H^0(C,\omega_C \otimes L^\vee(ap+bq)) \otimes H^0(C,L(-ap-bq)) \to H^0(C,\omega_C)$$ is the cup product map. Considering all the $\eta_{a,b}$ together shows that $\eta = \mu^L_{p,q} |_{[0,d-2g+1]^2}$. \end{proof} \begin{cor} \label{cor:mudelta} Let $(C,p,q)$ be a twice-marked smooth curve over an algebraically closed field $k$. The degree-$d$ Brill-Noether flags of $(C,p,q)$ are versal at $[L] \in \Pic^d(C)$ if and only if the $[0,d-2g+1]^2$-coupled Petri map $\mu^L_{p,q} |_{[0,d-2g+1]^2}$ is injective. \end{cor} \section{Proof of the versality theorem} \label{sec:proofs} We now have all the pieces in place to prove Theorem \ref{thm:coupledPetri} on the $S$-coupled Petri condition, and the versality Theorem \ref{thm:versality}. All that remains is to use some properties of the moduli space of curves to obtain the desired results for all algebraically closed fields. \begin{proof}[Proof of Theorem \ref{thm:versality}] If $\pi: \mathcal{C} \to B$ is a proper flat family of smooth curves with two disjoint sections $P,Q$, the construction of degree-$d$ Brill-Noether flags globalizes to give flags $\mathcal{P}^{\sbu}_d, \mathcal{Q}^{\sbu}_d$ in a vector bundle $\mathcal{H}_d$ on the relative Picard variety $\Pic^d(\pi) \to B$. Define a morphism $\on{Fr}(\mathcal{H}_d) \to \Fl(A; n) \times \Fl(B;n)$ as in Definition \ref{defn:versal}. The locus where this map fails to be smooth is a closed subscheme, as is its image in $B$. So the locus of $b \in B$ where the degree-$d$ Brill-Noether flags are versal is open. Since $\mathcal{M}_{g,2}$ is irreducible, we need only verify the existence of a twice-marked curve of genus $g$ with versal degree-$d$ Brill-Noether flags over some field in every characteristic. This can be done as follows: find a twice-marked elliptic curve $(C,p,q)$ with $p-q$ non-torsion, chain $g$ such curves together, and deform to an arithmetic surface over a discrete valuation ring with fraction field of the desired characteristic. By Theorem \ref{thm:petriSmoothing}, the geometric generic fiber satisfies the fully coupled Petri condition, so by Corollary \ref{cor:mudelta} its Brill-Noether flags (in every degree) are versal. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:coupledPetri}] Fix a genus $g$, an algebraically closed field $k$, and a finite set $S \subseteq \mathbb{Z} \times \mathbb{Z}$.
Let $\mathcal{V} \subseteq \mathcal{M}_{g,2}$ denote the locus of twice-marked curves satisfying the $S$-coupled Petri condition, and for all $d \in \mathbb{Z}$ let $\mathcal{V}_d$ denote the locus of twice-marked curves for which every degree-$d$ line bundle has injective $S$-coupled Petri map. So $\mathcal{V} = \bigcap_{d \in \mathbb{Z}}\ \mathcal{V}_d$. For all but finitely many $d \in \mathbb{Z}$, all elements of $\{d + a + b: (a,b) \in S \}$ are either less than $0$ or greater than $2g-2$; for line bundles $L$ of these degrees the space $T^L_{p,q}(S)$ is trivial, so the $S$-coupled Petri map is injective. Therefore all but finitely many $\mathcal{V}_d$ are equal to all of $\mathcal{M}_{g,2}$, so it suffices to fix a single integer $d$ and verify that $\mathcal{V}_d$ contains a dense open subset of $\mathcal{M}_{g,2}$. For all $N \in \mathbb{Z}$, the $S$-coupled Petri map of a degree-$d$ line bundle $L$ is equal to the $S'$-coupled Petri map of the degree-$(d+2N)$ line bundle $L(Np+Nq)$, where $S' = \{(a+N,b+N): (a,b) \in S \}$. For $N$ sufficiently large, $S' \subseteq [0,d+2N -2g+1]^2$. By Corollary \ref{cor:mudelta}, $\mathcal{V}_d$ contains the locus of twice-marked curves with versal degree-$(d+2N)$ Brill-Noether flags, which is open and dense by Theorem \ref{thm:versality} and its proof; hence $\mathcal{V}_d$ contains a dense open subset for all $d \in \mathbb{Z}$. \end{proof} \section{Brill-Noether degeneracy loci} \label{sec:bnDegen} This section analyzes the geometry of Brill-Noether degeneracy loci $W^\Pi_d(C,p,q)$ using the versality of Brill-Noether flags, and proves Theorem \ref{thm:bnDegen}. Assume we are in the following situation. \begin{sit} \label{sit:bnDegen} Let $g,d$ be positive integers, $(C,p,q)$ a twice-marked smooth curve of genus $g$, and $\Pi$ a dot pattern of size $r+1$ with $g-d+r \geq 0$. Also assume that $\Pi \subseteq [0,d]^2$. \end{sit} The simplifying assumption $\Pi \subseteq [0,d]^2$ is harmless; see the proof of Theorem \ref{thm:bnDegen}. We begin by specifying the scheme structure on $W^\Pi_d(C,p,q)$ and interpreting it as a degeneracy locus of Brill-Noether flags. First, we recall the construction of the scheme structure of $W^r_d(C)$ (see e.g. \cite[$\S \mathrm{IV}.3$]{acgh}): the set-theoretic locus $\{ [L] \in \Pic^d(C): h^0(C,L) \geq r+1 \} = \{ [L] \in \Pic^d(C): h^1(C,L) \geq g-d+r \}$ has a natural scheme structure given by the $(g-d+r)$th Fitting ideal of $R^1 \nu_\ast \mathcal{L}$, where $\nu: C \times \Pic^d(C) \to \Pic^d(C)$ is the projection and $\mathcal{L}$ is a Poincar\'e line bundle. The Fitting ideal may be computed as a determinantal locus using any resolution by vector bundles. In particular, if we choose any two integers $a,b \geq 2g-1-d$ and let $d' = d+a+b$, we have a resolution $\mathcal{P}^a_{d'} \oplus \mathcal{Q}^b_{d'} \to \mathcal{H}_{d'} \to R^1 \nu_\ast \mathcal{L} \to 0$, and the Fitting ideal is therefore equal to the determinantal ideal defining the standard scheme structure of $\{ x \in \Pic^{d'}(C): \dim (\mathcal{P}^a_{d'})_x \cap (\mathcal{Q}^b_{d'})_x \geq r+1\},$ as described in Section \ref{ss:degenLoci}. Now, we define the scheme structure on $W^\Pi_d(C,p,q)$ by a scheme-theoretic intersection \begin{equation} \label{eq:wPiScheme} W^\Pi_d(C,p,q) = \bigcap_{(a,b) \in \mathbb{N}^2} \tw_{ap+bq} \left( W^{r^\Pi(a,b)-1}_{d-a-b}(C) \right). \end{equation} This in turn may be rephrased as a degeneracy locus of Brill-Noether flags of larger degree.
For any two integers $M,N \geq 2g-1$, we may define $d' = d + M + N$ and write equivalently \begin{equation} \label{eq:wPiFlags} \begin{split} W^\Pi_d(C,p,q) = \tw_{-Mp-Nq} \big( \{ x \in \Pic^{d'}(C): & \dim (\mathcal{P}^{M+a}_{d'})_x \cap (\mathcal{Q}^{N+b}_{d'})_x \geq r^\Pi(a,b)\\& \mbox{ for all } a,b \in [0,d] \} \big). \end{split} \end{equation} The bounds $M,N \geq 2g-1$ ensure that $M+a, N+b \leq d'-2g+1$ for all $a,b \in [0,d]$, so that the flag elements $\mathcal{P}^{M+a}_{d'}, \mathcal{Q}^{N+b}_{d'}$ exist. Now, we wish to write this locus as a degeneracy locus $D_\sigma(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})$ of the form discussed in Section \ref{ss:degenLoci}. To do so, we must convert the dot pattern $\Pi$ to a permutation. This requires a few combinatorial preliminaries. \subsection{$(d,g)$-confined permutations} \label{ss:dgConfined} \begin{figure} \centering \begin{tabular}{cccc} \begin{tikzpicture}[scale=0.25] \foreach \x in {-5,...,6} \draw (\x-0.5, 6.5) -- (\x - 0.5, -6.5); \foreach \y in {-5,...,6} \draw (-6.5, 0.5-\y) -- (6.5, 0.5-\y); \draw[ultra thick] (-0.5, 6.5) -- (-0.5, -6.5); \draw[ultra thick] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-3} \opendot{4}{-4} \opendot{5}{-5} \opendot{6}{-6} \opendot{-1}{-2} \opendot{-2}{-1} \opendot{-3}{2} \opendot{-4}{4} \opendot{-5}{5} \opendot{-6}{6} \draw[ultra thick, dashed] (3.5, -3.5) rectangle (-3.5, 3.5); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \foreach \x in {-5,...,6} \draw (\x-0.5, 6.5) -- (\x - 0.5, -6.5); \foreach \y in {-5,...,6} \draw (-6.5, 0.5-\y) -- (6.5, 0.5-\y); \draw[ultra thick] (-0.5, 6.5) -- (-0.5, -6.5); \draw[ultra thick] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-2} \opendot{4}{-3} \opendot{5}{-4} \opendot{6}{-5} \opendot{-1}{-1} \opendot{-2}{2} \opendot{-3}{4} \opendot{-4}{5} \opendot{-5}{6} \draw[ultra thick, dashed] (3.5, -3.5) rectangle (-2.5, 2.5); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \foreach \x in {-5,...,6} \draw (\x-0.5, 6.5) -- (\x - 0.5, -6.5); \foreach \y in {-5,...,6} \draw (-6.5, 0.5-\y) -- (6.5, 0.5-\y); \draw[ultra thick] (-0.5, 6.5) -- (-0.5, -6.5); \draw[ultra thick] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-1} \opendot{4}{-2} \opendot{5}{-3} \opendot{6}{-4} \opendot{-1}{2} \opendot{-2}{4} \opendot{-3}{5} \opendot{-4}{6} \draw[ultra thick, dashed] (3.5, -3.5) rectangle (-1.5, 1.5); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \foreach \x in {-5,...,6} \draw (\x-0.5, 6.5) -- (\x - 0.5, -6.5); \foreach \y in {-5,...,6} \draw (-6.5, 0.5-\y) -- (6.5, 0.5-\y); \draw[ultra thick] (-0.5, 6.5) -- (-0.5, -6.5); \draw[ultra thick] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{2} \opendot{4}{-1} \opendot{5}{-2} \opendot{6}{-3} \opendot{-1}{4} \opendot{-2}{5} \opendot{-3}{6} \draw[ultra thick, dashed] (3.5, -3.5) rectangle (-0.5, 0.5); \end{tikzpicture} \\ $g-d+r = 2$ & $g-d+r=1$ & $g-d+r=0$ & $g-d+r = -1$\\ \end{tabular} \caption{Extension of a dot pattern with $r+1 = |\Pi| = 3$ to a $(d,g)$-confined permutation, for various values of $d-g$. The dot pattern in the last example is not $(d,g)$-confined. 
The dashed squares are explained in Lemma \ref{lem:bijectiveSquare}.} \label{fig:dpToPerm} \end{figure} We wish to extend a dot pattern $\Pi \subseteq \mathbb{N} \times \mathbb{N}$ to a dot pattern in $\mathbb{Z} \times \mathbb{Z}$ in the ``most efficient way,'' so that a rank function for $L$ may be translated into a rank function for $L(Mp+Nq)$ that results in the same degeneracy locus. \begin{defn} \label{def:dgConfined} A permutation $\pi: \mathbb{Z} \to \mathbb{Z}$ is called \emph{$(d,g)$-confined} if both $\pi, \pi^{-1}$ are decreasing on $(-\infty, -1]$ and $\omega_{d-g} \pi$ has finite length. \end{defn} Note that the definition of $(d,g)$-confined actually depends only on the difference $d-g$. We may extend Equation \ref{eq:rsigma} to $(d,g)$-confined permutations to define, for all $a,b \in \mathbb{Z}$, \begin{equation} \label{eq:rpi} r^\pi(a,b) = \# \{ a' \geq a: \pi(a') \geq b\}, \end{equation} and the requirement that $\omega_{d-g} \pi$ has finite length ensures that $r^\pi(-a,-b) = d+a+b+1-g$ for sufficiently large $a,b$, in accordance with Riemann-Roch. \begin{lemma} \label{lem:uniqueConfined} Fix positive integers $d,g$. For every dot pattern $\Pi$ with $|\Pi| \geq d+1-g$, there is a unique $(d,g)$-confined permutation $\pi$ such that $\Pi = \{ (a, \pi(a)): a \geq 0 \mbox{ and } \pi(a) \geq 0 \}.$ \end{lemma} \begin{proof} This is analogous to an infinite-grid version of the process described in \cite[p. 9]{fultonPragacz}. Let $r = |\Pi|-1$ and $A = \{ a_0, \cdots, a_r \}$ and $B = \{b_0, \cdots, b_r\}$ be the set of rows and set of columns of $\Pi$. A permutation $\pi$ of $\mathbb{Z}$ satisfies the desired conditions if and only if \begin{enumerate} \item $\pi |_A$ is the bijection $A \to B$ with graph $\Pi$, \item $\pi |_{\mathbb{Z} \backslash A}$ is a \emph{decreasing} bijection $\mathbb{Z} \backslash A \to \mathbb{Z} \backslash B$ such that $\pi(n) = d-g-n$ for all but finitely many $n$, and \item $\pi(\mathbb{N} \backslash A)$ is disjoint from $\mathbb{N} \backslash B$. \end{enumerate} For any dot pattern $\Pi \subset \mathbb{N} \times \mathbb{N}$ (regardless of $|\Pi|$), there is a unique $\pi$ satisfying conditions (1) and (2). For $N$ sufficiently large, this permutation induces a bijection $[-N, (d-g)+N] \backslash A \to [-N, (d-g)+N] \backslash B$, and condition (3) is satisfied if and only if $$|[0,(d-g)+N] \backslash A| \leq |[-N, -1]|,$$ i.e. $d-g -r + N \leq N$. So a permutation $\pi$ satisfying (1-3) above exists if and only if $g-d+r \geq 0$, i.e. $|\Pi| \geq d+1-g$. \end{proof} The rightmost grid in Figure \ref{fig:dpToPerm} illustrates where Lemma \ref{lem:uniqueConfined} breaks down when $\Pi$ has too few dots: there is not enough space outside $\mathbb{N} \times \mathbb{N}$ to complete the dot pattern (hence a white dot must be added within $\mathbb{N} \times \mathbb{N}$). \begin{rem} \label{rem:extendRPi} Observe that for all $a,b \in \mathbb{N}$, $r^\Pi(a,b) = r^\pi(a,b)$. So $r^\pi$ may be viewed as the ``most efficient'' extension of $r^\Pi$ to all of $\mathbb{Z} \times \mathbb{Z}$. However, this extension depends on the choice of $(d,g)$ (rather, on $d-g$), which is why we made the ad hoc choice to define $r^\Pi(a,-1) = r^\Pi(a,0), r^\Pi(-1,b) = r^\Pi(0,b)$ when defining $\Ess(\Pi)$ in Equation \ref{eq:essPi}. The following lemma shows that this ad hoc choice makes almost no difference anyway. \end{rem} \begin{lemma} \label{lem:essVersions} Let $\Pi$ be a dot pattern of size $r+1$, and let $a_0, b_0$ be the minimum row and column occurring in $\Pi$. 
Let $\pi$ be its $(d,g)$-confined permutation, where $g-d+r \geq 0$. Then $\Ess(\Pi) = \Ess(\pi)$ unless $g-d+r=0$ and $a_0 = b_0 = 0$. In the case $g-d+r=0$ and $a_0 = b_0 = 0$, we have instead $(0,0) \not\in \Ess(\pi)$ and $\Ess(\Pi) = \Ess(\pi) \cup \{ (0,0) \}$. \end{lemma} In what follows, we will only actually need the inclusion $\Ess(\pi) \subseteq \Ess(\Pi)$, but we prove the sharper statement here partly to justify that the definition of $\Ess(\Pi)$ was sensible. \begin{proof} The monotonicity of $\pi, \pi^{-1}$ on $(-\infty, -1]$ implies that $\Ess(\pi) \subseteq \mathbb{N}^2$. Comparing Equations \ref{eq:essPi} and \ref{eq:essSetV2} and the definitions of $r^\Pi$ and $r^\pi$ shows that $\Ess(\pi) \subseteq \Ess(\Pi)$. Suppose that $(a,b) \in \Ess(\Pi) \backslash \Ess(\pi)$. Since $r^\pi$ and $r^\Pi$ match on $\mathbb{N}^2$, either $a=0$ or $b=0$. Consider first the case where $a>0, b=0$. Then $(a,0) \in \Ess(\Pi)$ implies that $\pi(a-1) < 0 \leq \pi(a)$ and $a \leq \pi^{-1}(0)$; $(a,0) \not\in \Ess(\pi)$ implies that $a \leq \pi^{-1}(-1)$ as well. The construction of $\pi$ in the proof of Lemma \ref{lem:uniqueConfined} shows that $\pi^{-1}(-1)$ is the minimum element of $[-(g-d+r), \infty) \backslash A$, where $A$ is the set of rows occurring in $\Pi$. So $a \leq \pi^{-1}(-1)$ implies that $g-d+r = 0$ and that $0,1, 2, \cdots, a$ are all rows occurring in $\Pi$. But that implies that $\pi(a-1) \geq 0$, a contradiction. So this case is impossible. Switching rows with columns shows that the case $a=0, b>0$ is also impossible. So the only possible element of $\Ess(\Pi) \backslash \Ess(\pi)$ is $(0,0)$. Now, observe that $(0,0) \in \Ess(\Pi)$ if and only if $a_0 = b_0 = 0$. On the other hand, $(0,0) \in \Ess(\pi)$ if and only if $\pi(-1) < 0 \leq \pi(0)$ and $\pi^{-1}(-1) < 0 \leq \pi^{-1}(0)$. Now, $\pi(-1) < 0$ and $\pi^{-1}(-1) < 0$ are both equivalent to $g-d+r > 0$, while $\pi(0) \geq 0$ and $\pi^{-1}(0) \geq 0$ are equivalent to $b_0 = 0$ and $a_0 = 0$, respectively. So $(0,0) \in \Ess(\Pi) \backslash \Ess(\pi)$ if and only if $g-d+r = 0$ and $a_0 = b_0 = 0$. \end{proof} The $(d,g)$-confined permutation associated to $\Pi$ also provides a convenient and concise reformulation of the expected dimension $\rho_g(d,\Pi)$. \begin{lemma} \label{lem:rhoPerm} If the dot pattern $\Pi$ has $|\Pi| \geq d+1-g$ and associated $(d,g)$-confined permutation $\pi$, then $\rho_g(d,\Pi) = g - \ell(\omega_{d-g} \pi)$. \end{lemma} We omit the straightforward proof of Lemma \ref{lem:rhoPerm}, but instead illustrate it visually in Figure \ref{fig:rhoExample}.
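Since the proof of Lemma \ref{lem:uniqueConfined} is constructive, both $\pi$ and the quantity $\ell(\omega_{d-g}\pi)$ appearing in Lemma \ref{lem:rhoPerm} are easy to compute mechanically. The following short Python sketch is our own illustration, not part of the original development: the function names are ours, and it assumes $|\Pi| \geq d+1-g$ and a window size $N$ large enough to contain all dots. It builds $\pi$ on the window $[-N, d-g+N]$ by pairing the unused rows, in increasing order, with the unused columns in decreasing order, and then evaluates $\rho_g(d,\Pi) = g - \ell(\omega_{d-g}\pi)$ by counting the non-inversions of $\pi$, as in the caption of Figure \ref{fig:rhoExample}.

\begin{verbatim}
def confined_perm(dots, d, g, N=20):
    # dots: set of pairs (a, b) with a, b >= 0; returns pi on [-N, d-g+N]
    A = {a for a, b in dots}
    B = {b for a, b in dots}
    window = range(-N, d - g + N + 1)
    pi = dict(dots)                                    # condition (1)
    rows = sorted(a for a in window if a not in A)     # increasing
    cols = sorted((b for b in window if b not in B), reverse=True)
    pi.update(zip(rows, cols))     # decreasing elsewhere: conditions (2)-(3)
    return pi

def rho(dots, d, g, N=20):
    # rho_g(d, Pi) = g - #(non-inversions of pi), cf. Lemma rhoPerm
    pi = confined_perm(dots, d, g, N)
    keys = sorted(pi)
    noninv = sum(1 for i, a in enumerate(keys)
                   for ap in keys[i + 1:] if pi[a] < pi[ap])
    return g - noninv

# e.g. rho({(0, 1), (2, 0), (3, 3)}, d=4, g=5)
\end{verbatim}

For $N$ large enough the count stabilizes, since away from the dots $\pi(n) = d-g-n$ is strictly decreasing and contributes no further non-inversions.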
\begin{figure} \centering \begin{tabular}{cccc} \begin{tikzpicture}[scale=0.25] \draw[] (-0.5, 6.5) -- (-0.5, -6.5); \draw[] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-3} \opendot{4}{-4} \opendot{5}{-5} \opendot{6}{-6} \opendot{-1}{-2} \opendot{-2}{-1} \opendot{-3}{2} \opendot{-4}{4} \opendot{-5}{5} \opendot{-6}{6} \draw[thick] (1,0) to[in=0, out=90] (-1,2); \draw[thick] (1,0) to[in=0, out=90] (-2,1); \draw[thick] (0,-2) to[in=-90, out=180] (-1,2); \draw[thick] (0,-2) to[in=-90, out=180] (-2,1); \draw[thick] (3,-3) to[in=-60, out=135] (-1,2); \draw[thick] (3,-3) to[in=-30, out=135] (-2,1); \end{tikzpicture} & \begin{tikzpicture}[scale=0.25] \draw[] (-0.5, 6.5) -- (-0.5, -6.5); \draw[] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-3} \opendot{4}{-4} \opendot{5}{-5} \opendot{6}{-6} \opendot{-1}{-2} \opendot{-2}{-1} \opendot{-3}{2} \opendot{-4}{4} \opendot{-5}{5} \opendot{-6}{6} \draw[thick] (0,-2) to[in=-90,out=180] (-3,-1); \draw[thick] (3,-3) to[in=-90,out=180] (-3,-1); \end{tikzpicture}& \begin{tikzpicture}[scale=0.25] \draw[] (-0.5, 6.5) -- (-0.5, -6.5); \draw[] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-3} \opendot{4}{-4} \opendot{5}{-5} \opendot{6}{-6} \opendot{-1}{-2} \opendot{-2}{-1} \opendot{-3}{2} \opendot{-4}{4} \opendot{-5}{5} \opendot{-6}{6} \draw[thick] (3,-3) to[in=-60, out=90] (2,3); \end{tikzpicture}& \begin{tikzpicture}[scale=0.25] \draw[] (-0.5, 6.5) -- (-0.5, -6.5); \draw[] (-6.5, 0.5) -- (6.5, 0.5); \fdot{0}{1} \fdot{2}{0} \fdot{3}{3} \opendot{1}{-3} \opendot{4}{-4} \opendot{5}{-5} \opendot{6}{-6} \opendot{-1}{-2} \opendot{-2}{-1} \opendot{-3}{2} \opendot{-4}{4} \opendot{-5}{5} \opendot{-6}{6} \draw[thick] (3,-3) to[in=0,out=135] (0,-2); \draw[thick] (3,-3) to[in=-90,out=135] (1,0); \end{tikzpicture} \\ $(r+1)(g-d+r)$ & $\sum_{i=0}^r (a_i-i) $ & $\sum_{i=0}^r (b_i-i)$ & inversions within $\Pi$ \end{tabular} \caption{The number of non-inversions of $\pi$ is $g-\rho_g(d, \Pi)$.} \label{fig:rhoExample} \end{figure} In the proof of Theorem \ref{thm:bnDegen}, the replacement of a line bundle $L$ by $L(ap + bq)$ has the effect of sliding the graph of $\pi$ down $a$ steps and to the right $b$ steps. We will want to do this in such a way that the portion in $\mathbb{N} \times \mathbb{N}$ is the graph of a permutation of $[0,n]$ for some integer $n$. The dashed squares in Figure \ref{fig:dpToPerm} show what we are looking for: squares that enclose the graph of a permutation. The following lemma formalizes how such squares can be found. \begin{lemma} \label{lem:bijectiveSquare} Let $\pi$ be the $(d,g)$-confined permutation associated to a dot pattern $\Pi$ of size $r+1$, and let $a_r, b_r$ be the largest row and largest column occurring in $\Pi$, respectively. Then $\pi$ restricts to a bijection $[d-g-b_r, a_r] \to [d-g-a_r, b_r]$, and $\pi(n) = d-g-n$ for all $n$ such that $n < d-g-b_r$ or $n > a_r$. Also, $\pi$ is decreasing on $[a_r, \infty)$, and $\pi^{-1}$ is decreasing on $[b_r, \infty)$. \end{lemma} \begin{proof} By the construction in the proof of Lemma \ref{lem:uniqueConfined}, $\pi$ restricts to a decreasing bijection between $(-\infty, d-g-b_r-1] \cup [a_r + 1, \infty)$ and $(-\infty, d-g-a_r-1] \cup [b_r + 1, \infty)$. Such a bijection is determined by its value on a single point, and the fact that $\omega_{d-g} \pi$ has finite length forces $\pi(n) = d-g-n$ for all $n$ in this restricted domain.
This implies that $\pi$ also restricts to a bijection between the complements $[d-g-b_r, a_r]$ and $[d-g-a_r, b_r]$. The fact that $\pi$ is decreasing on $[a_r+1, \infty)$ is clear; note finally that $\pi(a_r) \in [d-g-a_r, b_r]$ and $\pi(a_r+1) = d-g-a_r-1$, so $\pi(a_r) > \pi(a_r+1)$ and $\pi$ is decreasing on all of $[a_r, \infty)$. By the same argument applied to $\pi^{-1}$, $\pi^{-1}$ is decreasing on $[b_r, \infty)$. \end{proof} For fixed $m \in \mathbb{Z}$, denote by $\alpha_m$ the ``add $m$'' permutation $\alpha_m(n) = m+n$. The following corollary will be useful in converting Brill-Noether degeneracy loci to degeneracy loci of Brill-Noether flags. \begin{cor} \label{cor:piSlide} If $M \geq b_r-(d-g)$ and $N \geq a_r - (d-g)$, then the permutation $\alpha_N \pi \alpha_M^{-1}$ restricts to a bijection from $[0, M+N + d-g]$ to itself that is decreasing on $[a_r+M,\infty)$, and its inverse is decreasing on $[b_r+N, \infty)$. \end{cor} \subsection{Geometry of Brill-Noether degeneracy loci} \label{ss:geoDegen} We now have what we need to study Brill-Noether degeneracy loci as degeneracy loci of Brill-Noether flags. With an eye on Equation \ref{eq:wPiFlags}, we add a bit more notation to Situation \ref{sit:bnDegen}. \begin{sit} \label{sit:bnDegen2} In Situation \ref{sit:bnDegen}, also fix integers $M,N \geq 2g-1$, define $d' = d + M + N$, and define $\pi' = \alpha_N \pi \alpha_M^{-1}$. For convenience, define $n = d' +1 - g$ (the rank of $\mathcal{H}_{d'}$). \end{sit} If $a_r, b_r$ are the largest row and column occurring in $\Pi$, the assumption $\Pi \subseteq [0,d]^2$ (from Situation \ref{sit:bnDegen}) ensures $a_r, b_r \leq d$. So Corollary \ref{cor:piSlide} implies that $\pi'$ restricts to a permutation of $[0, n-1]$, and that $\pi'$ and its inverse are both decreasing on $[d' - (2g-1),n-1]$. Both flags $\mathcal{P}^{\sbu}_{d'}, \mathcal{Q}^{\sbu}_{d'}$ have corank sets $[0, d'-(2g-1)]$, so $\pi'$ meets the monotonicity requirements of Definition \ref{def:dsigma}, and there is a well-defined degeneracy locus $D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})$. Furthermore, adding degeneracy conditions for pairs $(a,b)$ not in the essential set does not change the scheme structure, so Equation \ref{eq:wPiFlags}, Lemma \ref{lem:essVersions} and the observation that $\Ess(\pi') = \Ess(\pi) + (M,N)$ imply that \begin{equation} \label{eq:wPiDegen} W^\Pi_d(C,p,q) = \tw_{-Mp-Nq} \left( D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'}) \right). \end{equation} So Brill-Noether degeneracy loci are just degeneracy loci of Brill-Noether flags, hiding behind a twist. We can now use the versality theorem to prove the local statements from Theorem \ref{thm:bnDegen}, including a stronger form of the smoothness statement. \begin{thm} \label{thm:bnDegenLocal} In Situation \ref{sit:bnDegen2}, assume that $(C,p,q)$ has versal degree-$d'$ Brill-Noether flags. Let $[L]$ be any point of $W^\Pi_d(C,p,q)$. The local dimension of $W^\Pi_d(C,p,q)$ at $[L]$ is $\rho_g(d,\Pi)$. Furthermore, let $L' = L(Mp+Nq)$, and let $\sigma \leq \pi'$ be the permutation associated to the flags $\mathcal{P}^{\sbu}_{[L']}, \mathcal{Q}^{\sbu}_{[L']}$. Then $W^\Pi_d(C,p,q)$ is singular at $[L]$ if and only if $\sigma$ is singular in $X_{\pi'}$. If $[L] \in \widetilde{W}^\Pi_d(C,p,q)$ then $W^\Pi_d(C,p,q)$ is smooth at $[L]$.
\end{thm} \begin{proof} By definition of versality and basic properties of smooth morphisms, the local codimension at $[L']$ of $D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'}) \subset \Pic^{d'}(C)$ is equal to the local codimension at a point $y \in \Fl(A; n) \times \Fl(B;n)$ of $D_{\pi'}(\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; \mathcal{W}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$, where $\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, \mathcal{W}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ are the two (pull-backs of) tautological flags, and $y$ is a point where the permutation associated to $\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}_y, \mathcal{W}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}_y$ is equal to $\sigma$. Using the fact that the tautological flag of $\Fl(A; n)$ is versal to any fixed flag $F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$, we can apply the same reasoning to deduce that the codimension at $y$ of $D_{\pi'}(\mathcal{V}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; \mathcal{W}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ is equal to the codimension at $z \in \Fl(A; n)$ of $X_{\pi'}(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$, where $z$ is a point corresponding to a flag $V^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ such that the permutation associated to $V^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ is $\sigma$. The Schubert variety $X_{\pi'}$ has pure codimension $\ell(\omega_{n-1} \pi') = g - \rho_g(d,\Pi)$, so we obtain the desired local dimension statement. Furthermore, since smooth morphisms preserve smoothness of subschemes under inverse image, $[L]$ is a smooth point of $W^\Pi_d(C,p,q)$ if and only if $z$ is a smooth point of $X_{\pi'}(F^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$, which holds if and only if $\sigma$ is not singular in $X_{\pi'}$, as desired. There is one final wrinkle in proving that $\widetilde{W}^\Pi_d(C,p,q)$ is smooth: we must show that if $[L] \in \widetilde{W}^\Pi_d(C,p,q)$ then $\sigma$ is in the smooth locus of $X_{\pi'}$. It is tempting to say that $\pi' = \sigma$, but unfortunately this need not be true if the strata $\mathcal{P}^a_{d'}, \mathcal{Q}^b_{d'}$ with $(a-M,b-N) \not\in \EssR(\Pi) \times \EssC(\Pi)$ meet in dimension larger than $r^{\pi'}(a,b)$. One can resolve this by invoking a combinatorial description of the singular locus of $X_{\pi'}$, but we can also accomplish it with a cheap trick: simply replace $\mathcal{P}^{\sbu}_{d'}, \mathcal{Q}^{\sbu}_{d'}$ by subflags $\hat{\mathcal{P}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}, \hat{\mathcal{Q}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}$ in which we only retain the strata of coranks $A = \{a: \pi'(a) > \pi'(a-1) \}$, $B = \{b: \pi'^{-1}(b) > \pi'^{-1}(b-1) \}$. Observe that $A \times B \supseteq \Ess(\pi')$, so the degeneracy locus $D_{\pi'}(\hat{\mathcal{P}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; \hat{\mathcal{Q}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ is well-defined and equal to $D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})$.
Also, Lemma \ref{lem:essVersions} implies that $A \subseteq \EssR(\Pi) + M$ and $B \subseteq \EssC(\Pi) + N$, so $[L] \in \widetilde{W}^\Pi_d(C,p,q)$ implies that the permutation associated to the subflags $(\hat{\mathcal{P}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; \hat{\mathcal{Q}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ is exactly $\pi'$; thus $[L']$ is a smooth point of $D_{\pi'}(\hat{\mathcal{P}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}}; \hat{\mathcal{Q}}^{\raisebox{1pt}{\scaleto{\bullet}{2.5pt}}})$ and $[L]$ is a smooth point of $W^\Pi_d(C,p,q)$. \end{proof} \begin{rem} The proof above follows the techniques of \cite[$\S$4]{cpRR}, although we work here with possibly partial flags. Using \cite[Theorem 4.4]{cpRR} (with mild modifications for partial flags) one can deduce much stronger results about the nature of the singularities of $W^\Pi_d(C,p,q)$; roughly speaking, they are isomorphic, \'etale-locally and up to products with affine space, to the singularities of Schubert varieties. Together with the results of \cite{billey-warrington-maximal, cortez-singularities, kassel-lascoux-reutenauer-singular, manivel-lieu}, this gives essentially a complete description of the singularities of $W^\Pi_d(C,p,q)$. \end{rem} \subsection{The Chow class of $W^\Pi_d(C,p,q)$} To obtain the existence and intersection-theoretic part of Theorem \ref{thm:bnDegen}, we use the results of \cite{fultonSchubert} about the intersection theory of degeneracy loci (see also the exposition in \cite[$\S$1-2]{fultonPragacz}). We assume throughout this subsection that we are in Situation \ref{sit:bnDegen2}, and furthermore that $W^\Pi_d(C,p,q)$ has the expected dimension $\rho_g(d,\Pi)$ (e.g. it suffices to assume that the degree-$d'$ Brill-Noether flags are versal, by Theorem \ref{thm:bnDegenLocal}). Translating our notation (indexed by codimension, starting at $0$) into the notation of \cite{fultonSchubert} (indexed by dimension, starting at $1$), our permutation $\pi'$ gives rise to a permutation $w$ of $[1,n]$ via the formula $$w(i) = d'+1-g-\pi'(i-1) \mbox{ for } 1 \leq i \leq n.$$ Note that $\ell(w) = \ell(\omega_{d'-g} \pi')$ is the codimension of $D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})$. Let $x_1, \cdots, x_{n}$ be Chern roots of the bundle $\mathcal{H} / \mathcal{P}^{n}$, ordered such that prefixes of this sequence give Chern roots of $\mathcal{H} / \mathcal{P}^a$ for each $a$. Let $y_1, \cdots, y_{n}$ be Chern roots of $\mathcal{H}$ such that prefixes of this sequence give Chern roots of $\mathcal{Q}^b$ for each $b$. As usual, these Chern roots cannot be taken literally as elements of the Chow ring of $\Pic^d(C)$, but any symmetric polynomial in the first $\ell$ of them, for any $\ell \geq g$, is identified with a polynomial in the Chern classes of the bundles involved. Finally, let $\mathfrak{S}_w$ be the double Schubert polynomial of Lascoux and Sch\"utzenberger. Since $\Pic^{d'}(C)$ is smooth and $D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})$ has the expected dimension, \cite[Theorem 8.2]{fultonSchubert} implies that in the Chow ring, \begin{equation} \label{eq:classDSigma} [D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})] = \mathfrak{S}_w( x_1, \cdots, x_n;\ y_1, \cdots, y_n). \end{equation} The calculation in \cite[$\S$VII.4]{acgh} shows that all of the bundles $\mathcal{H}_{d'}, \mathcal{P}^a_{d'}, \mathcal{Q}^b_{d'}$ have the same total Chern class $e^{-\Theta}$, where $\Theta$ is the theta class in $\Pic^d(C)$.
It follows from this that the Chern roots in Equation \ref{eq:classDSigma} simplify considerably. All the quotients $\mathcal{H} / \mathcal{P}^a$ have trivial Chern classes, so $x_1 = \cdots = x_n = 0$. The first $g$ roots $y_1, \cdots, y_g$ are Chern roots of the rank-$g$ bundle $\mathcal{Q}^{n-g}$, which has Chern class $e^{-\Theta}$. If $e_k$ denotes the degree-$k$ elementary symmetric polynomial, this means that $e_k(y_1, \cdots, y_g) = (-1)^k \frac{\Theta^k}{k!}.$ Each quotient $\mathcal{Q}^b / \mathcal{Q}^{b+1}$ has trivial total Chern class, so $y_{g+1} = \cdots = y_{n} = 0$. Using the formula $\mathfrak{S}_w(x,y) = (-1)^{\ell(w)} \mathfrak{S}_{w^{-1}}(y,x)$, and the fact that $\mathfrak{S}_w$ is a polynomial of degree $\ell(w)$ (e.g. \cite[$\S$1.3]{fultonPragacz}), we may rewrite Equation \ref{eq:classDSigma} as \begin{equation} \label{eq:classDSigmaSimp} \begin{split} [D_{\pi'}(\mathcal{P}^{\sbu}_{d'}; \mathcal{Q}^{\sbu}_{d'})] &= (-1)^{\ell(w)} \mathfrak{S}_{w^{-1}}( y_1, \cdots, y_g, 0, \cdots, 0;\ 0, \cdots, 0) \\ &= \mathfrak{S}_{w^{-1}}( -y_1, \cdots, -y_g, 0, \cdots, 0;\ 0, \cdots, 0). \end{split} \end{equation} When all of the second set of variables in a double Schubert polynomial are set to $0$, one obtains an ordinary Schubert polynomial. So we may write this class more simply as $\mathfrak{S}_{w^{-1}}(-y_1, \cdots, -y_g)$. To finish our computation, we need the following fact about Schubert polynomials. \begin{lemma} \label{lem:schubertPoly} Let $w$ be a permutation of $[1,n]$, and $g$ be an integer such that $w(1) < \cdots < w(g)$ and $\ell(w) \leq g$. Let $e_k$ denote the elementary symmetric polynomial of degree $k$ in $g$ variables, and let $\alpha_1, \cdots, \alpha_g, \Theta$ be elements of a ring such that $e_k(\alpha_1, \cdots, \alpha_g) = \frac{\Theta^k}{k!}.$ Then the Schubert polynomial $\mathfrak{S}_w$ satisfies $\mathfrak{S}_w(\alpha_1, \cdots, \alpha_g, 0, \cdots, 0) = \frac{\Theta^{\ell(w)}}{\ell(w)!} |R(w)|,$ where $R(w)$ denotes the set of reduced words for $w$. \end{lemma} \begin{proof} We use a theorem of Billey, Jockusch, and Stanley. Let $\ell = \ell(w)$, denote by $s_a$ the transposition of $a$ and $a+1$, and identify the elements $a \in R(w)$ by $\ell$-tuples $a = (a_1, a_2, \cdots, a_{\ell})$ where $w = s_{a_1} s_{a_2} \cdots s_{a_\ell}$. For each $a \in R(w)$, define the set $K(a)$ to be the set of all sequences $i = (i_1, \cdots, i_{\ell})$ of positive integers such that \begin{eqnarray} i_1 \leq i_2 \leq \cdots \leq i_{\ell}, \label{eq:cond1}\\ i_j \leq a_j \mbox{ for } 1 \leq j \leq \ell, \mbox{ and} \label{eq:cond2}\\ i_j < i_{j+1} \mbox{ if } a_j < a_{j+1}. \label{eq:cond3} \end{eqnarray} With these definitions, \cite[Theorem 1.1]{bjs} states that $\mathfrak{S}_w = \sum_{a \in R(w)} \sum_{i \in K(a)} x_{i_1} \cdots x_{i_{\ell}}.$ Split $\mathfrak{S}_w$ into a sum $\mathfrak{S}_{w,1} + \mathfrak{S}_{w,2} + \mathfrak{S}_{w,3}$ by partitioning $K(a)$ into three sets as follows. Let $K_1(a)$ consist of those sequences $i \in K(a)$ such that $i_1 < \cdots < i_{\ell} \leq g$. Let $K_2(a)$ consist of those sequences $i \in K(a)$ with $i_{\ell} \leq g$ but with at least one index repeated. Let $K_3(a)$ consist of all $i \in K(a)$ such that $i_{\ell} > g$. The assumption that $w$ is increasing on $1,2,\cdots,g$ implies that $\mathfrak{S}_{w}$ is symmetric in the variables $x_1, \cdots, x_g$ (see e.g. \cite[4.3(iii)]{macdonaldSchubert} or \cite[Corollary 2.11]{fultonSchubert}).
The definitions of the three summands $\mathfrak{S}_{w,1}, \mathfrak{S}_{w,2}, \mathfrak{S}_{w,3}$ imply that each one is symmetric in the variables $x_1, \cdots, x_g$ as well; in particular, $\mathfrak{S}_{w,1}$ is an integer multiple of the elementary symmetric function $e_{\ell}$ and $\mathfrak{S}_{w,2}$ is a linear combination of monomial symmetric polynomials with at least one exponent greater than $1$. Clearly $\mathfrak{S}_{w,3}(\alpha_1, \cdots, \alpha_g, 0, \cdots, 0) = 0$. The expression $\mathfrak{S}_{w,2}(\alpha_1, \cdots, \alpha_g, 0, \cdots, 0)$ may be computed using the \emph{exponential substitution}, discussed in \cite[\S 7.8]{stanleyv2}. The fact that $e_k(\alpha_1, \cdots, \alpha_g) = \frac{\Theta^k}{k!}$ uniquely determines the value of all symmetric polynomials evaluated on $\alpha_1, \cdots, \alpha_g$. By \cite[Proposition 7.8.4(b)]{stanleyv2}, all monomial symmetric polynomials except the $e_k$ evaluate to $0$. Therefore $\mathfrak{S}_{w,2}(\alpha_1, \cdots, \alpha_g, 0, \cdots, 0) = 0$. Finally, we claim that $\mathfrak{S}_{w,1} = |R(w)| \cdot e_{\ell}$. It suffices to show that for all $a \in R(w)$, $K(a)$ contains all $\ell$-element increasing subsequences of $\{1,2,\cdots,g\}$. Fix a reduced word $a \in R(w)$. For all $j \in \{1, \cdots, \ell\}$, define $w_j = s_{a_1} s_{a_2} \cdots s_{a_j}$. Let $x_j = w_j(a_j)$ and $y_j = w_j(a_j+1)$. Since $a$ is a reduced word, $x_j > y_j$, and $x_j$ occurs before $y_j$ in each of $w_j, w_{j+1}, \cdots, w_{\ell}$. The position $w_k^{-1}(y_j)$ may increase by at most $1$ at a time when $k$ increases, so $w^{-1}(y_j) \leq w_j^{-1}(y_j) + (\ell-j) = a_j + 1 + \ell - j$. Since $w$ is assumed to be increasing on $\{1, \cdots, g\}$, we must have $w^{-1}(y_j) > g$, and it follows that \begin{equation} \label{eq:abound} a_j \geq g - \ell + j. \end{equation} Now, let $i = (i_1, \cdots, i_{\ell})$ be any sequence of positive integers such that $1 \leq i_1 < \cdots < i_{\ell} \leq g$. Then $i$ automatically satisfies conditions (\ref{eq:cond1}) and (\ref{eq:cond3}). Also, for all $j$ the fact that $i$ is strictly increasing implies that $i_j \leq g - \ell + j$. Equation (\ref{eq:abound}) implies that $i_j \leq a_j$, so condition (\ref{eq:cond2}) holds as well. Therefore $K_1(a)$ includes all $\ell$-element increasing subsequences of $\{1,\cdots,g\}$. The reverse inclusion is clear from definitions; it follows that $\mathfrak{S}_{w,1} = |R(w)| \cdot e_{\ell}$ as claimed, and the lemma follows. \end{proof} We can now prove the existence and intersection-theoretic part of Theorem \ref{thm:bnDegen}. \begin{thm} \label{thm:existence} In Situation \ref{sit:bnDegen2}, suppose that the degree-$d'$ Brill-Noether flags of $(C,p,q)$ are versal. Then in the Chow ring, $[ W^\Pi_d(C,p,q) ] = \frac{ | R(\omega_{d-g} \pi) |}{\ell(\omega_{d-g} \pi)!} \Theta^{\ell(\omega_{d-g} \pi)}.$ \end{thm} \begin{proof} In the notation above, we have $\ell(\omega_{d-g} \pi) = \ell(w)$. Consider first the case $\ell(w) > g$. Then $\rho_g(d,\Pi) < 0$, so by Theorem \ref{thm:bnDegenLocal}, $W^\Pi_d(C,p,q)$ is empty; since $\Theta^{\ell(w)} = 0$ in the Chow ring of the $g$-dimensional variety $\Pic^d(C)$, the formula holds in this case. So assume $\ell(w) \leq g$. Corollary \ref{cor:piSlide} implies that $\pi'^{-1}$ is decreasing on $[d'-(2g-1), n-1]$, so $w^{-1}$ is increasing on $[1,g]$.
So we have verified the hypotheses of Lemma \ref{lem:schubertPoly}, and it follows via Equation \ref{eq:wPiDegen} that $$[W^\Pi_d(C,p,q)] = \frac{ \Theta^{\ell(w^{-1})}}{\ell(w^{-1})!} | R(w^{-1}) |.$$ Since $w$ and $w^{-1}$ have the same length and there is a bijection between their sets of reduced words, we obtain the desired formula. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:bnDegen}] Suppose that $g,d$ are positive integers, $k$ is an algebraically closed field, and $\Pi$ is a dot pattern of size at least $d+1-g$. By Theorem \ref{thm:versality}, a general twice-marked smooth curve $(C,p,q)$ has versal degree-$(d+4g-2)$ Brill-Noether flags, so assume that $(C,p,q)$ is such a curve. First, consider the case where $\Pi \not\subseteq [0,d]^2$. Then either $r^\Pi(d+1,0) > 0$ or $r^\Pi(0,d+1) > 0$, so $W^\Pi_d(C,p,q)$ is empty. But either the largest row $a_r$ or the largest column $b_r$ in $\Pi$ exceeds $d$, so $\rho_g(d,\Pi) < 0$ and the theorem statement is correct in this case. Now assume that $\Pi \subseteq [0,d]^2$, so that we are in Situation \ref{sit:bnDegen}. Let $M = N = 2g-1$ in Situation \ref{sit:bnDegen2}. If $\rho_g(d,\Pi) < 0$ then $\ell(\omega_{d-g} \pi) > g$ and Theorem \ref{thm:existence} implies that $W^\Pi_d(C,p,q)$ is empty, as desired. On the other hand, if $\rho_g(d,\Pi) \geq 0$ then Theorem \ref{thm:existence} shows that $W^\Pi_d(C,p,q)$ is nonempty of the claimed class. Theorem \ref{thm:bnDegenLocal} shows that $W^\Pi_d(C,p,q)$ has pure dimension $\rho_g(d,\Pi)$ and that $\widetilde{W}^\Pi_d(C,p,q)$ is smooth. Since the complement of $\widetilde{W}^\Pi_d(C,p,q)$ is a union of loci $\tw_{-Mp-Nq} \left( D_{\sigma}(\mathcal{P}^{\sbu}, \mathcal{Q}^{\sbu}) \right)$ of larger codimension, $\widetilde{W}^\Pi_d(C,p,q)$ is dense. \end{proof} \bibliographystyle{amsalpha}
{ "attr-fineweb-edu": 1.338867, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcFjxK6-gD5TlccIP
\section{Introduction} There are two basic approaches to heavy quark production in DIS. First is the so-called zero-mass variable flavor number scheme (ZM-VFNS), which treats heavy quarks as massless partons with corresponding parton distribution functions (PDF). This scheme is applicable when the hard scale (taken here as the virtuality of the exchanged boson $Q^{2}$) is much larger than the mass $m_{\mathbf{Q}}$ of a given heavy quark $\mathbf{Q}$. On the other hand, when $Q^{2}$ is of the order of $m_{\mathbf{Q}}$, the so-called fixed flavor number scheme (FFNS) is applicable. It retains the full mass dependence in the coefficient function, and there is no PDF for $\mathbf{Q}$, since to leading power it cannot appear in the soft part. The increasing precision of the data forces us to control also the intermediate region of $Q^{2}$. The methods that address this problem are called general mass schemes (GM) \cite{Aivazis:1993pi,Buza:1996wv,Thorne:1997ga,Forte:2010ta}. They are, however, formulated for inclusive processes only, and a similar method relevant for jets is highly desirable. In the following we briefly describe our solution to this problem \cite{Kotko_phdthesis,Kotko:2012ws}. It is based on the ACOT massive factorization theorem \cite{Aivazis:1993pi,Collins:1998rz} and the massive dipole subtraction method (DSM) \cite{Dittmaier:1999mb,Phaf:2001gc,Catani:2002hc}, which, however, had to be reformulated in order to match the former. \section{Dipole subtraction method with massive partons} Consider the NLO calculation of a cross section for producing $n$ jets in a lepton-hadron reaction. The LO cross section is schematically written as\begin{equation} \sigma_{n}^{\left(\mathrm{LO}\right)}=\mathcal{N}\,\sum_{a}f_{a}\otimes\,\int d\Phi_{n,a}\,\left|\mathcal{M}_{n,a}\right|^{2}F_{n,a},\end{equation} where $\mathcal{N}$ is a normalization factor, $f_{a}$ are PDFs, $d\Phi_{n,a}$ is the $n$-parton phase space (PS) and $\mathcal{M}_{n,a}$ is a tree-level matrix element (ME) with $n$ final state partons and one QCD initial state parton $a$. The jet function $F_{n,a}$ determines the actual observable and is realized by a suitable jet algorithm. It satisfies $F_{n+1,a}=F_{n,a}$ in the singular regions of PS. At NLO the corrections involve loop diagrams living on the $n$-particle PS and additional real emissions belonging to the $\left(n+1\right)$-particle PS. Both contain IR singularities which ultimately cancel; however, the cancellation is non-trivial, as the singularities have different origins. An elegant and exact solution is provided by the DSM. One adds and subtracts an auxiliary contribution $\mathcal{D}_{n,a}$, such that it mimics all the singularities of $\mathcal{M}_{n+1,a}$ and at the same time can be analytically integrated over the singular regions of PS. To be more specific we have\begin{multline} \sigma_{n}^{\left(\mathrm{NLO}\right)}=\mathcal{N}\,\sum_{a}f_{a}\otimes\Bigg\{\int d\Phi_{n+1,a}\,\left[\left|\mathcal{M}_{n+1,a}\right|^{2}F_{n+1,a}-\mathcal{D}_{n,a}F_{n,a}\right]\\ +\int d\Phi_{n,a}\,\left[\mathcal{M}_{n,a}^{\left(\mathrm{loop}\right)}+\int d\phi_{a}\,\mathcal{D}_{n,a}-\mathcal{C}_{n,a}\right]\, F_{n,a}\Bigg\},\end{multline} where the virtual corrections to $\left|\mathcal{M}_{n,a}\right|^{2}$ are symbolically denoted as $\mathcal{M}_{n,a}^{\left(\mathrm{loop}\right)}$. The subspace leading to singularities is $d\phi_{a}$ and fulfils the PS factorization formula $d\Phi_{n+1,a}=d\Phi_{n,a}\otimes d\phi_{a}$.
Thanks to the properties of $\mathcal{D}_{n,a}$ and $F_{n+1,a}$ the first square bracket is integrable in four dimensions, while in the second, the poles resulting from the integral $\int d\phi_{a}\,\mathcal{D}_{n,a}$ are cancelled against the ones in $\mathcal{M}_{n,a}^{\left(\mathrm{loop}\right)}$. However, not all singularities cancel in this way -- there are also collinear poles connected with initial state splittings of massless partons. They are removed by means of the collinear subtraction term $\mathcal{C}_{n,a}$. It has the form\begin{equation} \mathcal{C}_{n,a}=\sum_{b}\mathcal{F}_{ab}\otimes\left|\mathcal{M}_{n,b}\right|^{2},\label{eq:coll_sub_term}\end{equation} where $\mathcal{F}_{ab}$ are renormalized partonic PDFs, i.e. the densities of partons $b$ inside parton $a$. For instance, in the $\overline{\mathrm{MS}}$ scheme $\mathcal{F}_{ab}\left(z\right)=-\frac{1}{\varepsilon}\,\frac{\alpha_{s}}{2\pi}\, P_{ab}\left(z\right)$, where $P_{ab}\left(z\right)$ are the standard splitting functions. Within the DSM the dipole function is realized as a sum of contributions corresponding to single emissions with different combinations of ``emitter'' and ``spectator'' partons\footnote{The notions of ``emitter'' and ``spectator'' are explained in \cite{Catani:1996vz}}. Each such term $D$ has the general form\begin{equation} D=\hat{V}\,\hat{C}\left|\hat{\mathcal{M}}_{n,a}\right|^{2},\label{eq:dipole}\end{equation} where $\hat{V}$ is the so-called dipole splitting matrix (in helicity space) and encodes the information about some of the singularities of $\mathcal{M}_{n+1,a}$. The matrix $\hat{C}$ corresponds to color operators for parton emissions, which act on the matrix element. The notation above is symbolic and means that both quantities are correlated in the color and spin space. For DIS, there are three different classes of dipoles $D$, depending on whether the emitter and the spectator are in the initial or final state. Here we are mainly interested in the case of an initial state emitter and a final state spectator, as those dipoles contain the factorization-related information. When heavy quarks are present, the above general picture remains the same. If, however, a massive parton takes part in a splitting process, there is no collinear singularity. Nevertheless, there are IR sensitive logarithms which become harmful when the external scale becomes large. We shall refer to such terms as \textit{quasi-collinear singularities} \cite{Catani:2000ef} and abbreviate them as q-singularities. \begin{wrapfigure}[17]{r}{0.5\textwidth}% \vspace{-10pt} \begin{centering} \includegraphics[width=0.5\textwidth]{Kotko_Piotr_fig1} \end{centering} \vspace{-20pt} \caption{\small ACOT charm structure function calculated using the MC implementation of our method ($\mathtt{MassJet}$). The calculations are done for $x_{B}=0.05$ and CTEQ5 PDFs. \label{Fig:GeneralMass_Example_2}} \end{wrapfigure}% The first step towards a GM scheme for jets is to construct dipole functions controlling \mbox{q-singularities} for initial state emissions. Moreover, we want to allow for massive initial states, as permitted by the ACOT scheme. This was partially done in \cite{Dittmaier:1999mb} (for $\mathbf{Q}\rightarrow\mathbf{Q}g$ splitting), while in \cite{Catani:2002hc} the splitting processes with heavy quarks are considered in the final states only. Let us look at a particular example. Consider the initial state $g\rightarrow\mathbf{Q}\overline{\mathbf{Q}}$ splitting.
Let us assign the momentum $p_{a}$ to the gluon, $p_{i}$ to the emitted final state quark (or anti-quark), and $p_{j}$ to the spectator. Using these, we construct new momenta which enter $\mathcal{M}_{n,a}$ in (\ref{eq:dipole}): $\tilde{p}_{j}^{\mu}=\tilde{w}\left(p_{i}^{\mu}+p_{j}^{\mu}\right)-\tilde{u}p_{a}^{\mu}$ becomes a new final state and $\tilde{p}_{\underline{ai}}^{\mu}=\left(\tilde{w}-1\right)\left(p_{i}^{\mu}+p_{j}^{\mu}\right)-\left(\tilde{u}-1\right)p_{a}^{\mu}$ becomes a new initial state. The variables $\tilde{u}$, $\tilde{w}$ can be determined from the on-shell conditions for $\tilde{p}_{j}$ and $\tilde{p}_{\underline{ai}}$. Our dipole splitting function reads\begin{equation} \hat{V}_{g\rightarrow\mathbf{Q}\overline{\mathbf{Q}},\, j}=8\pi\mu_{\mathrm{r}}^{2\varepsilon}\alpha_{s}T_{R}\left[1-\frac{1}{1-\varepsilon}\,\left(2\tilde{u}\left(1-\tilde{u}\right)-\frac{\left(1-\tilde{u}\right)m_{\mathbf{Q}}^{2}}{p_{i}\cdot p_{a}}\right)\right],\label{eq:Dipsplit_IEFS_g_QQ_V}\end{equation} where $\mu_{\mathrm{r}}$ is a mass scale needed in $D=4-2\varepsilon$ dimensions. In this case $\hat{V}$ is just diagonal in helicity space. Consider next the integral of (\ref{eq:Dipsplit_IEFS_g_QQ_V}) over the one-particle subspace. It can be conveniently expressed in terms of the rescaled masses $\eta_{l}^{2}=m_{l}^{2}/2\tilde{p}_{j}\cdot p_{a}$ for some parton $l$. In the limit of small $\eta_{\mathbf{Q}}$ we get\begin{equation} \int d\phi\,\hat{V}_{g\rightarrow\mathbf{Q}\overline{\mathbf{Q}},\, j}\left(u\right)=\frac{\alpha_{s}}{2\pi}\Bigg[P_{gq}\left(u\right)\Bigg(\log\frac{u^{2}}{u+\eta_{j}^{2}}-\log\eta_{\mathbf{Q}}^{2}\Bigg)+2T_{R}\, u\left(1-u\right)\Bigg]+\mathcal{O}\left(\eta_{\mathbf{Q}}^{2}\right).\label{eq:V_gQQ_int}\end{equation} We see that there is a term of the form $P_{gq}\left(u\right)\log\eta_{\mathbf{Q}}^{2}$ which becomes harmful when the scale becomes large (in the massless case there would be a pole $1/\varepsilon$). Similar terms appear also in the other dipoles for initial state emissions. \section{General mass scheme for jets} In the spirit of the ACOT scheme, the initial state q-singularities have to be factorized out. This is accomplished by the $\mathcal{C}_{n,a}$ term with the partonic PDFs $\mathcal{F}_{ab}$ calculated in a special way. Let us recall at this point that the latter are defined as certain MEs of light-cone operators and can be calculated order by order using special Feynman rules (see e.g. \cite{Collins:2011zzd}). The results contain UV singularities which have to be renormalized, leading to evolution equations. For the present application we calculate $\mathcal{F}_{ab}$ to one loop with full mass dependence and renormalize them using the $\overline{\mathrm{MS}}$ scheme\footnote{More precisely, we use the Collins-Wilczek-Zee renormalization scheme, which assumes the $\overline{\mathrm{MS}}$ scheme above a certain threshold and zero momentum subtraction below it.}. Since the counterterms are mass independent in this scheme, we ensure that the hadronic PDFs evolve according to the standard massless DGLAP equations.
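As a simple numerical cross-check of this mechanism, the following Python sketch (our illustration only; the coupling, masses and kinematics are illustrative values, and color factors, convolutions and finite scheme-dependent pieces are ignored) evaluates the small-$\eta_{\mathbf{Q}}$ integrated dipole (\ref{eq:V_gQQ_int}) together with the one-loop density $\mathcal{F}_{g\mathbf{Q}}$ listed next, and verifies that the $\log m_{\mathbf{Q}}^{2}$ dependence drops out of the combination $\int d\phi\,\hat{V}-\mathcal{F}_{g\mathbf{Q}}$ entering the NLO formula.

\begin{verbatim}
import math

alpha_s, TR = 0.118, 0.5            # illustrative values only

def P_gq(u):
    # kernel multiplying the mass logarithm: T_R (u^2 + (1-u)^2)
    return TR * (u**2 + (1.0 - u)**2)

def dipole_int(u, eta_j2, eta_Q2):
    # small-eta_Q expansion of the integrated dipole, Eq. (V_gQQ_int)
    return alpha_s / (2.0 * math.pi) * (
        P_gq(u) * (math.log(u**2 / (u + eta_j2)) - math.log(eta_Q2))
        + 2.0 * TR * u * (1.0 - u))

def F_gQ(z, mu_r2, mQ2):
    # one-loop MSbar density of a heavy quark inside a gluon
    return (alpha_s / (2.0 * math.pi) * TR
            * (1.0 - 2.0 * z * (1.0 - z)) * math.log(mu_r2 / mQ2))

u, eta_j2, mu_r2, s2 = 0.3, 0.05, 100.0, 50.0   # s2 stands for 2 p_j.p_a
for mQ2 in (1.5**2, 4.75**2):                   # charm- and bottom-like
    print(mQ2, dipole_int(u, eta_j2, mQ2 / s2) - F_gQ(u, mu_r2, mQ2))
# both masses print the same subtracted value: the mass log cancels
\end{verbatim}

The leftover logarithm, $\log(2\tilde{p}_{j}\cdot p_{a}/\mu_{\mathrm{r}}^{2})$, is the ordinary factorization-scale logarithm that is resummed by the DGLAP evolution of the heavy quark PDF.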
For instance, at one loop we get \begin{gather} \mathcal{F}_{g\mathbf{Q}}\left(z\right)=\frac{\alpha_{s}}{2\pi}\, T_{R}\,\left(1-2z\left(1-z\right)\right)\,\log\frac{\mu_{r}^{2}}{m_{\mathbf{Q}}^{2}},\\ \mathcal{F}_{\mathbf{Q}g}\left(z\right)=\frac{\alpha_{s}}{2\pi}\, C_{F}\,\frac{1+\left(1-z\right)^{2}}{z}\left[\log\frac{\mu_{r}^{2}}{m_{\mathbf{Q}}^{2}}-2\log z-1\right],\\ \mathcal{F}_{\mathbf{Q}\mathbf{Q}}\left(z\right)=\frac{\alpha_{s}}{2\pi}\, C_{F}\left\{ \frac{1+z^{2}}{1-z}\left[\log\frac{\mu_{r}^{2}}{m_{\mathbf{Q}}^{2}}-2\log\left(1-z\right)-1\right]\right\} _{+}.\end{gather} We have checked that the above procedure leads to IR safe dipoles in the limit of vanishing $\eta_{\mathbf{Q}}$. Moreover, the results coincide with those of Ref. \cite{Catani:2002hc} in the $\overline{\mathrm{MS}}$ scheme. In order to perform numerical tests we have partially implemented our method in a dedicated C++ program based on FOAM \cite{Jadach:2002kn}. Using the program, we have calculated the charm structure function $F_{2}$ and compared it with a semi-analytical calculation in the ACOT scheme. This exercise uses three dipoles and two collinear subtraction terms. The virtual corrections are taken from \cite{Kretzer:1998ju}. We find that the soft poles are indeed cancelled by the corresponding poles coming from the integrated dipoles. Moreover, we find agreement with the semi-analytical calculation and observe that our result properly interpolates between the two limiting solutions of the ZM-VFNS and FFNS schemes, as depicted in Fig. \ref{Fig:GeneralMass_Example_2}. Let us stress that the result is obtained by a numerical integration of a fully differential cross section, which provides a severe test of the implementation of our massive dipole formalism. {\raggedright \begin{footnotesize} \bibliographystyle{DISproc}
{ "attr-fineweb-edu": 1.770508, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcFrxK0zjCxh72ObJ
\section{Introduction} \label{secintro} This paper investigates the geometric properties of the set of solutions of the so-called {\em constrained generalized continuous algebraic Riccati equation} associated with the infinite-horizon linear quadratic (LQ) optimal control problem, when the matrix $R$ weighting the input in the cost function is allowed to be singular. This problem, often referred to as the singular LQ problem, has a long history. It has been investigated in several papers and with different techniques, see \cite{Hautus-S-83,Willems-KS-86,Saberi-S-87,Prattichizzo-MN-04,Kalaimani-BC-13} and the references therein. See also the monographs \cite{Abou-Kandil-FIJ-03,Lancaster-95,Ionescu-OW-99,Saberi-SC-95} for a more general discussion. In particular, in the foundational contributions \cite{Hautus-S-83} and \cite{Willems-KS-86} it was proved that an optimal solution of the singular LQ problem exists for all initial conditions if the class of controls is extended to include distributions. A different perspective was offered in \cite{Prattichizzo-MN-04}, where a geometric approach was employed on the Hamiltonian differential equation to study the subspace of initial conditions for which the control law is impulse-free. In discrete time this issue does not arise, and it is an established fact that the solution of regular and singular infinite-horizon LQ problems can be found by resorting to the so-called {\em constrained generalized discrete algebraic Riccati equation}, see \cite{Ferrante-N-12}. Considerable effort has been devoted, also in recent years, to providing a geometric characterization of the set of solutions of this discrete Riccati equation, see e.g. \cite{Stoorvogel-S-98} and \cite{Ferrante-N-12}. A similar characterization for the continuous-time generalized Riccati equation has never been considered. There are several reasons for considering this equation and for analyzing the geometric structure of its solutions. The first, which is our main motivation, is given by the recent results connecting the continuous time generalized Riccati equation with LQ optimal control problems \cite{Ferrante-N-14}. Another reason derives from the fact that this equation is a particular case of an even more general type of Riccati equation that arises in the literature on stochastic optimal control that has flourished in the past twenty years, see e.g. \cite{Abou-Kandil-FIJ-03,Damm-04,Damm-H-01,Dragan-MS-10,Freiling-H-03,Freiling-H-04} and the references cited therein, as well as \cite{zorzi1,zorzi2,zorzi3} for the dual version in filtering problems. These research lines may benefit from our contribution. In fact, the natural approach in this field is based on the study of the corresponding Hamiltonian system, so that our new geometric results may furnish a powerful point of view to deal with these problems and with the associated numerical analysis. In \cite{Ionescu-O-96-1} the constrained generalized continuous algebraic Riccati equation was defined, in analogy with the discrete case, by replacing the inverse of the matrix $R$ appearing in the standard Riccati equation with its pseudo-inverse. In particular, this paper offers a characterization, in terms of deflating subspaces of the Hamiltonian pencil, of the conditions under which the constrained generalized Riccati equation has a stabilizing solution. To the best of our knowledge, the recent papers \cite{Ferrante-N-14,Ferrante-N-14-1} were the first attempts to link this equation to singular LQ optimal control problems.
In \cite{Ferrante-N-14,Ferrante-N-14-1} it was shown that the existence of symmetric solutions of the constrained generalized continuous-time Riccati equation is equivalent to the existence of impulse-free solutions of the associated singular LQ problem from any initial condition. This means, in particular, that an optimal control can always be expressed as a state feedback. Now that the connection between the constrained generalized continuous-time algebraic Riccati equation and the singular LQ problem has been explained, the important issue arises of analyzing the set of solutions of such an equation and the relations of each such solution with the corresponding LQ control problem. In this paper a geometric analysis is carried out on the structure of the symmetric solutions of the constrained generalized continuous-time algebraic Riccati equation. This analysis leads to the following main contributions. First, we show that the dynamics of the closed-loop system can be divided into a part that depends on the particular solution $X$ that we are considering, and one which is independent of it. We also show that the latter dynamics, which is not necessarily stable, is confined to an output nulling subspace, so that it does not contribute to the cost function. The spectrum associated with the reachable part of this dynamics can therefore be assigned without affecting the optimality of the cost. As a consequence, we show that the LQ optimal control problem may admit a stabilizing solution even in cases in which the generalized continuous-time Riccati equation does not admit a stabilizing solution. This is a new feature that has no parallel in regular LQ problems. We finally address the analysis of the structure of the corresponding Hamiltonian system and its relations with the generalized algebraic Riccati equations and the singular LQ optimal control problems: we show that, differently from the regular case, only the eigenvalues of the closed-loop dynamics that depend on the particular solution $X$ correspond -- together with their mirrored values -- to the invariant zeros of the Hamiltonian system. An anonymous reviewer has pointed out that some of the results of this paper may be alternatively obtained by performing a preliminary transformation that brings the system into the so-called {\em special coordinate basis} of \cite{Saberi-S-87}. We believe that a direct derivation of these results will provide additional insight to some readers, as it connects the results with the structure of the Hamiltonian system. \section{The generalized Riccati equation and Linear Quadratic optimal control} \label{LQ} Let $Q\in {\mathbb{R}}^{n \times n}$, $S \in {\mathbb{R}}^{n \times m}$, $R \in {\mathbb{R}}^{m \times m}$. We make the following standing assumption: \begin{equation} \label{equno} \Pi \stackrel{\text{\tiny def}}{=} \left[ \begin{array}{cc} Q & S \\[-1mm] S^\top & R \end{array} \right]=\Pi^\top \ge 0. \end{equation} The triplet $\Sigma \stackrel{\text{\tiny def}}{=} (A,B,\Pi)$ is referred to as a {\em Popov} triple. From the properties of the Schur complement, we recall that the condition $\Pi=\Pi^\top \ge 0$ is equivalent to the simultaneous satisfaction of the three conditions \begin{itemize} \item $R\ge 0$; \item $\ker R \subseteq \ker S$; \item $Q-S\,R^\dagger S^\top \ge 0$. \end{itemize} Dually, $\Pi\ge 0$ if and only if \begin{itemize} \item $Q\ge 0$; \item $\ker Q \subseteq \ker S^\top$; \item $R-S^\top\,Q^\dagger S \ge 0$. \end{itemize} See e.g.
\cite{Rami-CZ-02} or \cite{Ferrante-N-12} for a proof. From these considerations it follows also that if $\Pi=\Pi^\top \ge 0$, then $S\,R^\dagger\,R=S$ and $S^\top\,Q^\dagger\,Q=S^\top$. The classic LQ problem can be stated as the problem of finding the control $u(t)$, $t \ge 0$, that minimizes \begin{equation} \label{costinf} J_\infty(x_0,u)=\int_0^\infty [\begin{array}{cc} x^\top(t) & u^\top(t) \end{array} ] \left[ \begin{array}{cc} Q & S \\[-1mm] S^\top & R \end{array} \right] \left[ \begin{array}{c} x(t) \\[-1mm] u(t) \end{array} \right]\,dt \end{equation} subject to the constraint \begin{equation} \label{eqsys} \dot{x}(t)=A\,x(t)+B\,u(t), \qquad x(0)=x_0 \in {\mathbb{R}}^n \end{equation} where $A\in {\mathbb{R}}^{n \times n}$ and $B \in {\mathbb{R}}^{n \times m}$. When $R$ is positive definite, the optimal control (when it exists) does not include distributions, since in such a case an impulsive control $u$ will always cause $J_\infty(x_0,u)$ to be unbounded for any $x_0\in {\mathbb{R}}^n$. If $R$ is only positive semidefinite, in general the optimal solution can contain the Dirac delta distribution and its derivatives. In the recent literature, it has been shown that important links exist between the existence of the solutions of the constrained generalized continuous algebraic Riccati equation (often denoted by the acronym CGCARE and formally introduced in the next section) and the non-impulsive optimal solutions of the infinite-horizon LQ problem, \cite{Ferrante-N-14,Ferrante-N-14-1}. This point represents a crucial difference between the discrete and the continuous time. Indeed, while in the discrete time the existence of symmetric positive semidefinite solutions of the constrained generalized discrete algebraic Riccati equation is equivalent to the solvability of the infinite-horizon LQ problem, in the continuous-time case this correspondence holds for the so-called {\em regular} solutions, i.e., the optimal controls of the LQ problem that do not contain distributions. LQ problems have been found to be very important as control problems in their own right. On the other hand, in the last thirty years the LQ problem has often been used as a building block to solve different, and usually more articulated, optimal control problems. For example, in the so-called $H_2$ problem \cite{Stoorvogel-92_2} the index to be minimized is the norm of the output of the system \[ y(t)=C\,x(t)+D\,u(t). \] The corresponding LQ problem is obtained by defining $Q=C^\top\,C$, $S=C^\top\,D$ and $R=D^\top\,D$. Since the vast majority of systems (for example, virtually all mechanical systems) are strictly proper, the corresponding LQ problem is usually singular. \section{Generalized CARE} Consider the following matrix equation\footnote{The symbol $M^\dagger$ denotes the Moore-Penrose pseudo-inverse of matrix $M$.} \begin{equation} X\,A+A^\top\,X-(S+X\,B)\,R^{\dagger}\,(S^\top+B^\top X)+Q=0, \label{gcare} \end{equation} where the matrices $Q, A\in {\mathbb{R}}^{n \times n}$, $B,S \in {\mathbb{R}}^{n \times m}$, $R \in {\mathbb{R}}^{m \times m}$ are as defined in the previous section. Equation (\ref{gcare}), where $R$ is allowed to be singular, is often referred to as the {\em generalized continuous algebraic Riccati equation} GCARE($\Sigma$).
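To make these objects concrete, the following Python sketch (our illustration only; the plant data are arbitrary) builds the weight $\Pi$ of a singular LQ problem from an $H_2$-type criterion with $D=0$, and evaluates the residual of (\ref{gcare}) for a candidate $X=X^\top$, using the Moore-Penrose pseudo-inverse exactly as in the equation; it also tests the kernel condition $\ker R \subseteq \ker(S+XB)$ that is introduced next.

\begin{verbatim}
import numpy as np

# Strictly proper toy plant (D = 0), so R = D^T D is singular: this is
# the typical way a singular LQ problem arises from an H2 criterion.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
Q, S, R = C.T @ C, C.T @ D, D.T @ D     # Pi = [[Q, S], [S^T, R]] >= 0

def gcare_residual(X):
    # left-hand side of GCARE with R^dagger realized as pinv(R)
    SX = S + X @ B
    return X @ A + A.T @ X - SX @ np.linalg.pinv(R) @ SX.T + Q

def kernel_condition(X, tol=1e-10):
    # ker R subseteq ker(S + X B), checked via the projector onto ker R
    G = np.eye(R.shape[0]) - np.linalg.pinv(R) @ R
    return np.linalg.norm((S + X @ B) @ G) < tol

X = np.zeros((2, 2))                    # a candidate solution
print(np.linalg.norm(gcare_residual(X)), kernel_condition(X))
\end{verbatim}

With $D=0$ one has $S=0$ and $R=0$, so $R^\dagger=0$ and (\ref{gcare}) degenerates to the Lyapunov equation $XA+A^\top X+Q=0$, while the kernel condition reduces to $XB=0$; the candidate $X=0$ above satisfies the latter but not the former.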
Equation (\ref{gcare}) along with the condition \begin{equation} \ker R \subseteq \ker (S+X\,B), \label{kercond} \end{equation} will be referred to as the {\em constrained generalized continuous algebraic Riccati equation}, and denoted by CGCARE($\Sigma$). In view of the positive semidefiniteness of $\Pi$, as already observed in Section \ref{LQ}, we have $\ker R \subseteq \ker S$, which implies that (\ref{kercond}) is equivalent to $\ker R \subseteq \ker (X\,B)$. The following notation is used throughout the paper. First, let $G \stackrel{\text{\tiny def}}{=} I_m-R^\dagger R$ be the orthogonal projector that projects onto $\ker R$. Moreover, we consider a non-singular matrix $T=[T_{1}\mid T _{2}]$ where $\operatorname{im} T_{1}=\operatorname{im} R$ and $\operatorname{im} T _{2}=\operatorname{im} G$, and we define $B_{1}\stackrel{\text{\tiny def}}{=} BT_{1}$ and $B _{2} \stackrel{\text{\tiny def}}{=} BT _{2}$. Finally, to any $X=X^\top \in {\mathbb{R}}^{n \times n}$ we associate the following matrices \begin{eqnarray} Q _{X}& \stackrel{\text{\tiny def}}{=} & Q+A^\top X+X\,A, \qquad S _{X} \stackrel{\text{\tiny def}}{=} S+X\, B, \label{defgx} \\ K _{X} & \stackrel{\text{\tiny def}}{=} & R^\dagger\, S _{X}^\top, \qquad A _{X} \stackrel{\text{\tiny def}}{=} A-B\,K _{X}, \qquad \Pi _{X} \stackrel{\text{\tiny def}}{=} \left[ \begin{array}{cc} Q _{X} & S _{X} \\[-1mm] S _{X}^\top & R \end{array} \right]. \end{eqnarray} When $X$ is a solution of CGCARE($\Sigma$), $K _{X}$ is the corresponding gain matrix and $A _{X}$ the associated closed-loop matrix. \begin{remark} A symmetric and positive semidefinite solution of the generalized discrete-time algebraic Riccati equation also solves the constrained generalized discrete-time algebraic Riccati equation. This fact does not hold in the continuous time, i.e., not all symmetric and positive semidefinite solutions of GCARE($\Sigma$) are also solutions of CGCARE($\Sigma$). \end{remark} \section{Characterization of the solutions of CGCARE} The purpose of this section is to provide a geometric characterization for the set of solutions of the generalized continuous algebraic Riccati equation. To this end, we first recall some concepts of classical geometric control theory that will be used in the sequel. More details can be found e.g. in \cite{Trentelman-SH-01}. Consider a system described by (\ref{eqsys}) along with the output equation $y(t) = C\,x(t)+D\,u(t)$, which we concisely identify with the quadruple $\Sigma_0=(A,B,C,D)$. The {\em invariant zeros} of $\Sigma_0$, here denoted by ${\cal Z}({A}, {B}, {C}, {D})$, are the values $s \in {\mathbb{C}}$ such that the rank of the Rosenbrock system matrix pencil $\left[ \begin{smallmatrix} A-s\,I_n & B \\[1mm] C & D \end{smallmatrix} \right]$ is smaller than its normal rank, \cite[Def. 3.16]{Zhou-DG-96}. We recall that the {\em reachable subspace} is ${\cal R}_0=\operatorname{im} [\begin{array}{ccccccccc} B && A\,B && \ldots && A^{n-1}\,B \end{array}]$, and coincides with the smallest $A$-invariant subspace of ${\mathbb{R}}^n$ containing the image of $B$, i.e. ${\cal R}_0=\langle A\,|\,\operatorname{im} B \rangle$. An {\em output-nulling subspace} ${\cal V}$ of $\Sigma_0$ is a subspace of ${\mathbb{R}}^n$ for which there exists a matrix $F\,{\in}\,\mathbb{R}^{m\,{\times}\,n}$ such that $(A+B\,F)\,{\cal V}\subseteq {\cal V} \subseteq \ker (C+D\,F)$. Any real matrix $F$ satisfying these inclusions is referred to as a {\it friend \/} of ${\cal V}$.
We denote by $\mathfrak{F}({\cal V})$ the set of friends of ${\cal V}$. We denote by ${\cal V}^\star$ the largest output-nulling subspace of $\Sigma_0$, which represents the set of all initial states $x_0$ of $\Sigma_0$ for which a control input exists such that the corresponding output function is identically zero. Such an input function can always be implemented as a static state feedback of the form $u(t)=F\,x(t)$ where $F \in \mathfrak{F}({\cal V}^\star)$. The so-called {\em output-nulling reachability subspace} on ${\cal V}^\star$, herein denoted by ${\cal R}^\star$, is the smallest $(A\,{+}\,B\,F)$-invariant subspace of ${\mathbb{R}}^n$ containing the subspace ${\cal V}^\star\,{\cap}\,B\,\ker\,D$, where $F\,{\in}\,\mathfrak{F}({\cal V}^\star)$, i.e., ${\cal R}^\star=\langle A+B\,F\,|\, {\cal V}^\star \cap B\,\ker D\rangle$ where $F \in \mathfrak{F}({\cal V}^\star)$. Let $F\in \mathfrak{F}({\cal V}^\star)$. The closed-loop spectrum (viewed as a multiset, with aggregation denoted by $\uplus$) can be partitioned as $\sigma(A+B\,F)=\sigma(A+B\,F\,|\,{\cal V}^\star)\uplus \sigma(A+B\,F\,|\,{\cal X}/{\cal V}^\star)$, where $\sigma(A+B\,F\,|\,{\cal V}^\star)$ is the spectrum of $A+B\,F$ restricted to ${\cal V}^\star$ and $\sigma(A+B\,F\,|\,{\cal X}/{\cal V}^\star)$ is the spectrum of the mapping induced by $A+B\,F$ on the quotient space ${\cal X}/{\cal V}^\star$. The eigenvalues of $A+B\,F$ restricted to ${\cal V}^\star$ can be further split into two disjoint sets: the eigenvalues of $\sigma(A+B\,F |{\cal R}^\star)$ are all freely assignable with a suitable choice of $F$ in $\mathfrak{F}({\cal V}^\star)$. The eigenvalues in $\sigma\,(A+B\,F | {{\cal V}^\star}/{{\cal R}^\star})$ -- which coincide with the {\em invariant zeros} of $\Sigma_0$, see e.g. \cite[Theorem 7.19]{Trentelman-SH-01} -- are fixed for all the choices of $F$ in $\mathfrak{F}({\cal V}^\star)$. Since $\Pi$ is assumed symmetric and positive semidefinite, we can consider a factorization of the form \begin{equation} \label{pifact} \Pi=\left[ \begin{array}{cc} Q & S \\[-1mm] S^\top & R\end{array} \right]=\left[ \begin{array}{cc} C^\top \\[-1mm] D^\top\end{array} \right][ \begin{array}{cc} C & D \end{array} ], \end{equation} where $Q=C^\top C$, $S=C^\top D$ and $R=D^\top D$. Let us define $G(s) \stackrel{\text{\tiny def}}{=} C\,(s\,I_n-A)^{-1}B+D$. Let $G^{\!\!\!\!\phantom{P}^\thicksim}(s) \stackrel{\text{\tiny def}}{=} G^\top(-s)$. The ``spectrum'' or ``spectral density'' $\Phi(s) \stackrel{\text{\tiny def}}{=} G^{\!\!\!\!\phantom{P}^\thicksim}(s)\,G(s)$ can be written as \begin{eqnarray*} \Phi(s)=[ \begin{array}{cc} B^\top (-s\,I_n-A^\top)^{-1} & I_m \end{array} ]\, \left[ \begin{array}{cc} Q & S \\[-1mm] S^\top & R \end{array} \right]\,\left[ \begin{array}{cc} (s\,I_n-A)^{-1}\,B \\[-1mm] I_m \end{array} \right], \end{eqnarray*} which is also referred to as the {\em Popov function}. We recall the following classical result. \begin{lemma} For any $X=X^\top \in{\mathbb{R}}^{n\times n}$, there holds \begin{equation} \label{alpha} \Phi(s)=[ \begin{array}{cc} B^\top (-s\,I_n-A^\top)^{-1} & I_m \end{array} ]\, \Pi _{X}\,\left[ \begin{array}{cc} (s\,I_n-A)^{-1}\,B \\[-1mm] I_m \end{array} \right].
\end{equation} \end{lemma} \noindent{\bf{\em Proof:}\ \ } The statement follows on noticing that \begin{eqnarray*} && \hspace{-1cm} [ \begin{array}{cc} B^\top (-s\,I_n-A^\top)^{-1} & I_n \end{array} ] (\Pi _{X}-\Pi)\,\left[ \begin{array}{cc} (s\,I_n-A)^{-1}\,B \\ I_n \end{array} \right] \\ && =-B^\top (s\,I_n+A^\top)^{-1}[(s\,I_n+A^\top)X-X(s\,I_n-A)](s\,I_n-A)^{-1} B \\ && -B^\top (s\,I_n+A^\top)^{-1} X \,B+B^\top X (s\,I_n-A)^{-1} B =0. \end{eqnarray*} \hspace*{\fill}~\QED\par\endtrivlist\unskip The following important result relates the rank of the spectrum $\Phi(s)$ with that of the matrix $R$, and it provides an explicit expression for a square spectral factor of $\Phi(s)$. \begin{theorem} Let $X=X^\top$ solve CGCARE($\Sigma$). Then, \begin{enumerate} \item the normal rank of $\Phi(s)$ is equal to the rank of $R$; \item $W(s) \stackrel{\text{\tiny def}}{=} R^{\frac{1}{2}} R^\dagger S _{X}^\top (s\,I_n-A)^{-1}B+R^{\frac{1}{2}}$ is a square spectral factor of $\Phi(s)$. \end{enumerate} \end{theorem} \noindent{\bf{\em Proof:}\ \ } As already observed, since $X$ is a solution of CGCARE($\Sigma$), there holds $\ker R \subseteq \ker (XB)$. It follows that $\Pi _{X}$ can be written as $\Pi _{X}=V\,\left[ \begin{smallmatrix} Q _{X}-S _{X}\,R^\dagger S _{X}^\top & 0 \\[1mm] 0 & R\end{smallmatrix} \right] V^\top$, where $V= \left[ \begin{smallmatrix} I_n & S _{X}\,R^\dagger \\[1mm] 0 & I_m \end{smallmatrix} \right]$. Moreover, if $X$ solves CGCARE($\Sigma$), we get $Q _{X}-S _{X}\,R^\dagger S _{X}^\top=0$, and $\Pi _{X}$ can be factored as $\Pi _{X}=\left( V \left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & R^{\frac{1}{2}} \end{smallmatrix} \right] \right)\left( \left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & R^{\frac{1}{2}} \end{smallmatrix} \right] V^\top\right)$. From $\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & R^{\frac{1}{2}} \end{smallmatrix} \right] \left[ \begin{smallmatrix} I_n & 0 \\[1mm] R^\dagger\,S _{X}^\top & I_m \end{smallmatrix} \right]=\left[ \begin{smallmatrix} 0 & 0 \\[1mm] R^{\frac{1}{2}} R^\dagger\,S _{X}^\top & R^{\frac{1}{2}} \end{smallmatrix} \right]$, we find that $\Phi(s)$ can be written as $\Phi(s)=W^\top(-s)\,W(s)$, where $W(s)=R^{\frac{1}{2}} R^\dagger S _{X}^\top (s\,I_n-A)^{-1}B+R^{\frac{1}{2}} = R^{\frac{1}{2}} [I_m+ R^\dagger S _{X}^\top (s\,I_n-A)^{-1}B]$. Thus we can write $W(s)= R^{\frac{1}{2}}\,T _{X}(s)$, where $T _{X}(s) \stackrel{\text{\tiny def}}{=} I_m+ R^\dagger\,S _{X}^\top\,(s\,I_n-A)^{-1}B$ is square and invertible for all but finitely many $s \in {\mathbb{C}}$. Its inverse can be written as $T^{-1} _{X}(s)=I_m-R^\dagger S _{X}^\top (s\,I_n-A _{X})^{-1}B$. Thus, the normal rank of $(T _{X}^{\top}(-s))^{-1} \Phi(s) T _{X}^{-1}(s)= R$ is equal to the normal rank $r$ of $\Phi(s)$. \hspace*{\fill}~\QED\par\endtrivlist\unskip In the following lemma, given a solution of CGCARE($\Sigma$), a subspace that will be shown to play a crucial role in the solution of the associated optimal control problem will be introduced. This subspace is the reachable subspace associated with the pair $(A _{X}, B\,G)$. \begin{lemma} Let $X=X^\top$ solve CGCARE($\Sigma$) and define \begin{equation} \label{defR0X} {\cal R} _{0,X} \stackrel{\text{\tiny def}}{=} \operatorname{im} [\begin{array}{ccccccccccc} B\,G && A _{X}\,B\,G && A _{X}^2\,B\,G&& \ldots&&A _{X}^{n-1}\,B\,G \end{array}]. \end{equation} Let $C _{X}\stackrel{\text{\tiny def}}{=} C-D\,R^\dagger\,S _{X}^\top$. There holds ${\cal R} _{0,X} \subseteq \ker C _{X}$. 
\end{lemma} \noindent{\bf{\em Proof:}\ \ } Since $\Phi(s)=G^\top(-s) G(s)=W^\top(-s)\,W(s)$ with $W(s)= R^{\frac{1}{2}}\,T _{X}(s)$, we find \begin{eqnarray*} G(s)\,T _{X}^{-1}(s) &=& \left(C\,(s\,I_n-A)^{-1}B+D\right) \left( I_m-R^\dagger S _{X}^\top (s\,I_n-A _{X})^{-1}B \right) \\ &=& C\,(s\,I_n-A)^{-1}B+D-C\,(s\,I_n-A)^{-1}B+C\,(s\,I_n-A _{X})^{-1}B \\ &&-D\,R^\dagger\,S _{X}^\top (s\,I_n-A _{X})^{-1}B \\ &=& (C-D\,R^\dagger\,S _{X}^\top) (s\,I_n-A _{X})^{-1}B+D, \end{eqnarray*} where the second equality follows from observing that $B\,R^\dagger S _{X}^\top=A-A _{X}=(s\,I_n-A _{X})-(s\,I_n-A)$, which gives $C\,(s\,I_n-A)^{-1}B\,R^\dagger S _{X}^\top(s\,I_n-A _{X})^{-1}B=C\,(s\,I_n-A)^{-1}B-C\,(s\,I_n-A _{X})^{-1}B$. We have already shown that $(T _{X}^{\top}(-s))^{-1} \Phi(s) T _{X}^{-1}(s)= R$. Thus, $\ker R \subseteq \ker G(s) T _{X}^{-1}(s)$. Hence, $G(s)\, T _{X}^{-1}(s)\, \ker R=C _{X}\,(s\,I_n-A _{X})^{-1}B\,\ker R+D\,\ker R=\{0\}$. Since $D\,\ker R=\{0\}$, it follows that $C _{X}\,(s\,I_n-A _{X})^{-1}B\,\ker R=\{0\}$ for all but finitely many $s\in{\mathbb{C}}$; expanding $(s\,I_n-A _{X})^{-1}$ in negative powers of $s$ yields $C _{X}\,A _{X}^{j}\,B\,\ker R=\{0\}$ for all $j\geq 0$. Since $\operatorname{im}(B\,G)=B\,\ker R$, the subspace ${\cal R} _{0,X}$ must be contained in $\ker C _{X}$. \hspace*{\fill}~\QED\par\endtrivlist\unskip In the case where $X=X^\top$ is the solution of GCARE($\Sigma$) corresponding to the optimal cost, it is intuitive and simple to see that $\ker X$ is output-nulling for the quadruple $(A,B,C,D)$ and the corresponding gain $-K _{X}$ is a friend of $\ker X$, on the basis of the optimality and of the fact that the cost cannot be smaller than zero in view of the positivity of the index. Stated differently, if $x_0\in \ker X$, applying the control $u(t)=-K_X\,x(t)$ ensures that $x(t)\in \ker X$ for all $t \ge 0$, and the cost remains at zero, i.e., \[ \left[ \begin{array}{cc} A-B\,K_X \\ C-D\,K_X \end{array} \right]\,\ker X \subseteq \ker X \oplus \{0\}. \] However, the following much stronger result holds. \begin{theorem} \label{th-inv+friend} Let $X=X^\top$ be a solution of GCARE($\Sigma$). Then, $\ker X$ is an output-nulling subspace of the quadruple $(A,B,C,D)$ and $-K _{X}$ is a friend of $\ker X$, or, equivalently, $\ker X$ is $A _{X}$-invariant and contained in the null-space of $C _{X}$. \end{theorem} \noindent{\bf{\em Proof:}\ \ } { Since $X$ is a solution of GCARE($\Sigma$), the closed-loop Lyapunov equation \begin{equation} \label{clgdare} X\,A _{X}+A _{X}^\top X+Q _{0X}=0 \end{equation} holds, where $Q _{0X} \stackrel{\text{\tiny def}}{=} Q-S\,R^\dagger S^\top +X\,B\,R^\dagger B^\top X= C _{X}^\top C _{X} \ge 0$. Moreover, from the definition of $C _{X}$ we also get $Q _{0X}=C _{X}^\top\,C _{X}=[\,I_n \;\;\; -K _{X}^\top\,]\,\Pi \left[ \begin{smallmatrix} I_n \\[1mm] -K _{X} \end{smallmatrix} \right] \geq 0$. Now, consider the Lyapunov equation $X\,A _{X}+A _{X}^\top X+C _{X}^\top\,C _{X}=0$, and let $\xi \in \ker X$. By multiplying this equation from the left by $\xi^\top$ and from the right by $\xi$ we obtain $C _{X}\,\xi=0$, which says that $\ker X \subseteq \ker C _{X}$. With this fact in mind, we multiply the same equation from the right by $\xi$, and we obtain $X\,A _{X}\,\xi=0$, which says that $\ker X$ is $A _{X}$-invariant. Thus, $\ker X$ is an $A _{X}$-invariant subspace contained in the null-space of $C _{X}$, and is therefore an output-nulling subspace for $(A,B,C,D)$, with $-K _{X}=-R^\dagger S _{X}^\top$ an associated friend. \hspace*{\fill}~\QED\par\endtrivlist\unskip We recall that we have defined the subspace ${\cal R} _{0,X}$ as the reachability subspace of the pair $(A_X,B\,G)$. Since $A_X$ depends on the solution $X=X^\top$ of CGCARE($\Sigma$) considered, at first glance it appears that the subspace ${\cal R} _{0,X}$ also depends on $X$. However, we now prove that this is not the case: the subspace ${\cal R} _{0,X}$ is independent of the particular solution $X=X^\top$ of CGCARE($\Sigma$), and so is the restriction of $A _{X}$ to this subspace.
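Before stating this formally, we note that the results obtained so far are easy to check numerically. The following minimal Python/NumPy sketch (not part of the original development) borrows the data of Example \ref{ex0} below, with $C$ and $D$ chosen as one possible factorization of $\Pi$, and confirms both the spectral factorization above and the output-nulling property of $\ker X$ established in Theorem \ref{th-inv+friend}:
\begin{verbatim}
import numpy as np

# Data of Example (ex0) below; C, D are one choice with C^T C = Q,
# C^T D = S = 0 and D^T D = R = diag(0, 4)  (an assumption for illustration).
A = np.array([[-4., 0.], [2., 6.]])
B = np.array([[0., -7.], [2., -4.]])
C = np.array([[np.sqrt(17)/2, 0.], [0., 0.]])
D = np.array([[0., 0.], [0., 2.]])
X = np.diag([-1., 0.])                    # solves CGCARE(Sigma)

R, SX = D.T @ D, X @ B                    # S = 0, hence S_X = X B
Rp = np.linalg.pinv(R)                    # Moore-Penrose pseudoinverse R^dagger
K_X = Rp @ SX.T
A_X, C_X = A - B @ K_X, C - D @ K_X

# ker X = span{e2} is A_X-invariant and contained in ker C_X:
e2 = np.array([[0.], [1.]])
print(np.allclose(X @ A_X @ e2, 0), np.allclose(C_X @ e2, 0))   # True True

# Spectral factor W(s) = R^{1/2} T_X(s): Phi(s) = W^T(-s) W(s).
Rh = np.diag(np.sqrt(np.diag(R)))         # R^{1/2} (R is diagonal here)
G = lambda s: C @ np.linalg.inv(s*np.eye(2) - A) @ B + D
W = lambda s: Rh @ (np.eye(2) + Rp @ SX.T @ np.linalg.inv(s*np.eye(2) - A) @ B)
s = 0.3 + 1.7j                            # generic non-pole test point
print(np.allclose(G(-s).T @ G(s), W(-s).T @ W(s)))              # True
\end{verbatim}
Analogous checks for the independence claims of the next theorem are reported after Example \ref{ex6.7}.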
\begin{theorem} \label{thee} Let $X=X^\top$ be a solution of CGCARE($\Sigma$), and let ${\cal R} _{0,X}$ be defined by (\ref{defR0X}). Then, \begin{itemize} \item $X\,{\cal R} _{0,X}=\{0\}$; \item ${\cal R} _{0,X}$ is independent of $X$; \item $A _{X}|_{{\cal R} _{0,X}}$ is independent of $X$. \end{itemize} \end{theorem} \noindent{\bf{\em Proof:}\ \ } Since ${\cal R} _{0,X}$ is $A _{X}$-invariant and is contained in $\ker C _{X}$, in a basis of the state space adapted to ${\cal R} _{0,X}$ we have ${\cal R} _{0,X}= \operatorname{im} \left[ \begin{smallmatrix} I_r \\[1mm] 0 \end{smallmatrix} \right]$, $A _{X}=\left[ \begin{smallmatrix} A _{X,11} & A _{X,12} \\[1mm] 0 & A _{X,22} \end{smallmatrix} \right]$, $B _{2}=\left[ \begin{smallmatrix} B _{21} \\[1mm] 0 \end{smallmatrix} \right] $, $C _{X}= [ \begin{array}{cccc} 0 & C _{X,2}\end{array} ]$, where $r=\dim {\cal R} _{0,X}$ and $B _{2}$ denotes any matrix such that $\operatorname{im} B _{2}=\operatorname{im}(B\,G)$ (an explicit choice is made below). If we partition $X$ conformably with this basis as $X=\left[ \begin{smallmatrix} X _{11} & X _{12} \\[1mm] X _{12}^\top & X _{22} \end{smallmatrix} \right]$, we need to show that $X _{11}=0$ and $X _{12}=0$. Due to the structure of $C _{X}$, by pre- and post-multiplying the closed-loop Lyapunov equation $X\,A _{X}+A _{X}^\top X+C _{X}^\top\,C _{X}=0$ by $\left[ \begin{smallmatrix} I_r & 0 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} I_r \\[1mm] 0 \end{smallmatrix} \right]$, respectively, we get $X _{11}\,A _{X,11}+A _{X,11}^\top X _{11}=0$. Now, $\ker R \subseteq \ker (X\,B)$ implies $X\,B _{2}=0$, which in turn implies $X _{11}\,B _{21}=0$ and $X _{12}^\top B _{21}=0$. Therefore, $X _{11}$ satisfies $\left[ \begin{smallmatrix} X _{11}\,A _{X,11}+A _{X,11}^\top X _{11} & X _{11} B _{21} \\[1mm] B _{21}^\top X _{11} & 0 \end{smallmatrix} \right]=0$. Since the pair $(A _{X,11},B _{21})$ is reachable, it is always possible to choose a matrix $K$ in such a way that $A _{X,11}+B _{21}\,K$ has unmixed spectrum. Thus, \begin{eqnarray*} 0& =&\left[ \begin{array}{cc} I_r & K^\top \\[-1mm] 0 & I_{n-r} \end{array} \right] \left[ \begin{array}{cc} X _{11}\,A _{X,11} + A _{X,11}^\top X _{11} & X _{11} B _{21} \\[-1mm] B _{21}^\top X _{11} & 0 \end{array} \right] \left[ \begin{array}{cc} I_r & 0 \\[-1mm] K & I_{n-r} \end{array} \right] \\ & = & \left[ \begin{array}{cc} X _{11}(A _{X,11} + B _{21}K) + (A _{X,11} + B _{21}K)^\top X _{11} & X _{11} B _{21} \\[-1mm] B _{21}^\top X _{11} & 0 \end{array} \right] \end{eqnarray*} gives $X _{11}=0$. The Lyapunov equation (\ref{clgdare}) then reads \begin{eqnarray*} \left[ \begin{array}{cc} 0 & X _{12}\,A _{X,22} \\[-1mm] X _{12}^\top A _{X,11} & \star \end{array} \right]+ \left[ \begin{array}{cc} 0 & A _{X,11}^\top X _{12} \\[-1mm] A _{X,22}^\top X _{12}^\top & \star \end{array} \right]+ \left[ \begin{array}{cc} 0 &0 \\[-1mm] 0 & \star \end{array} \right]=0, \end{eqnarray*} which leads to $X _{12}\,A _{X,22}+A _{X,11}^\top X _{12}=0$. This identity, together with $X _{12}^\top B _{21}=0$, leads to $X _{12}=0$ in view of the observability of the pair $(A _{X,11}^\top,B _{21}^\top)$. Thus, $X\,{\cal R} _{0,X}=\{0\}$. We now want to show that ${\cal R} _{0,X}$ is independent of $X$, where $X=X^\top$ is a solution of CGCARE($\Sigma$).
In a certain basis of the input space, we can write $R=\left[ \begin{smallmatrix} R_{0} && 0 \\[1mm] 0 && 0 \end{smallmatrix} \right]$, where $R_{0}$ is positive definite. Matrix $B$ can be written conformably with this basis as $B=[\,B_{1}\;\;B _{2}\,]$. From (\ref{kercond}), in this basis we must have $B _{2}^\top \,X=0$, i.e., $X\,B\,G=0$. Let us write $A _{X}=F-B\,R^\dagger\,B^\top X$, where $F\stackrel{\text{\tiny def}}{=} A-B\,R^\dagger\,S^\top$. We show that ${\cal R} _{0,X}$, i.e., the reachable subspace of the pair $(A _{X},B\,G)$, coincides with that of the pair $(F,B\,G)$, which is independent of $X$ since $F$ is independent of $X$. First, we observe that $A _{X}\,B _{2} = (F-B\,R^\dagger\,B^\top X)\,B _{2}=F\,B _{2}$, since as already observed $X\,B _{2}=0$. We now prove by induction that $A _{X}^j\,B _{2}=F^j\,B _{2}$ for all $j \in \mathbb{N}$. The statement has been proved for $j=1$. Assume $A _{X}^k\,B _{2}=F^k\,B _{2}$ for some $k \geq 1$. First, in view of Theorem \ref{th-inv+friend}, $\ker X$ is $A _{X}$-invariant, which also implies that $A _{X}^k\,\ker X\subseteq \ker X$, i.e., $X\,A _{X}^k\,\ker X=\{0\}$. On the other hand, since $\operatorname{im} B _{2}\subseteq \ker X$, we have also $X\,A _{X}^k\,B _{2}=X\,F^k\,B _{2}=0$. Thus, $A _{X}^{k+1}\,B _{2}= A _{X}\,F^k\,B _{2}= (F-B\,R^\dagger\,B^\top X)\,F^k\,B _{2}=F\,F^k\,B _{2}=F^{k+1}\,B _{2}$. It is now clear that \[ {\cal R} _{0,X}=\operatorname{im} [ \begin{array}{ccccccccc} B _{2} && A _{X}\,B _{2} && \ldots && A _{X}^{n-1}\,B _{2}\end{array} ]= \operatorname{im} [ \begin{array}{ccccccccc} B _{2} && F\,B _{2} && \ldots &&F^{n-1}\,B _{2}\end{array} ] \] which is independent of $X$. We now prove that $A _{X}|_{{\cal R} _{0,X}}$ is independent of $X$. Now let $Y=Y^\top$ be another solution of CGCARE($\Sigma$), and let $A _{Y}$ be the corresponding closed-loop matrix. We find $A _{X}-A _{Y} = B\,R^\dagger (S _{Y}^\top-S _{X}^\top) =B\,R^\dagger\,B^\top (Y-X)$. We want to show that in this basis we have $A _{Y}=\left[ \begin{smallmatrix} A _{X,11} & A _{Y,12} \\[1mm] 0 & A _{Y,22} \end{smallmatrix} \right]$. From the considerations above, since it has already been proved that ${\cal R} _{0,X}={\cal R}_{0,Y}$, in this basis we have $X=\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & X _{22} \end{smallmatrix} \right]$ and $Y=\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & Y _{22} \end{smallmatrix} \right]$, so that $A _{Y}=A _{X}-\left[ \begin{smallmatrix} \star \;&\; \star \\[1mm] \star \;&\; \star \end{smallmatrix} \right] \left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & X _{22}-Y _{22} \end{smallmatrix} \right]= \left[ \begin{smallmatrix} A _{X,11} & \star\\[1mm] 0 & \star \end{smallmatrix} \right]$, which shows that $A _{X}|_{{\cal R} _{0,X}}=A _{Y}|_{{\cal R}_{0,Y}}$. \hspace*{\fill}~\QED\par\endtrivlist\unskip {The next result shows that the reachable subspace associated with the pair $(A _{X}, B\,G)$, which we denoted by ${\cal R} _{0,X}$, coincides with the largest reachability output-nulling subspace on the output-nulling subspace $\ker X$. In view of Theorem \ref{thee}, such reachability output-nulling subspace (and the corresponding restriction of the closed-loop mapping to it) is therefore independent of the particular solution $X=X^\top$ of CGCARE($\Sigma$) that we consider.} \begin{theorem} \label{the} Let $X=X^\top$ be a solution of CGCARE($\Sigma$). Let ${\cal R}^\star_{\ker X}$ be the largest reachability subspace on $\ker X$. Then, ${\cal R}^\star_{\ker X}={\cal R} _{0,X}$.
\end{theorem} \noindent{\bf{\em Proof:}\ \ } Since ${\cal R} _{0,X}$ is the reachable subspace of the pair $(A _{X},B\,G)$, it is the smallest $A _{X}$-invariant subspace containing $\operatorname{im} (B\,G)=B\,\ker D$. On the other hand, the reachability subspace ${\cal R}^\star_{\ker X}$ on $\ker X$ is the smallest $(A+B\,F)$-invariant subspace containing $\ker X \cap B\,\ker D$, where $F$ is an {\em arbitrary} friend of $\ker X$, i.e., $F$ is any feedback matrix such that $(A+B\,F)\ker X \subseteq \ker X\subseteq \ker (C+D\,F)$, \cite[Theorem 7.14]{Trentelman-SH-01}. Notice that ${\cal R}^\star_{\ker X}$ does not depend on the choice of the friend $F$, \cite[Theorem 7.18]{Trentelman-SH-01}. We have seen in Theorem \ref{th-inv+friend} that $F= -K _{X}$ is a particular friend of $\ker X$. For this choice of $F$, we have $A+B\,F=A-B\,K _{X}=A _{X}$, so that ${\cal R}^\star_{\ker X}$ is the smallest $A _{X}$-invariant subspace containing $\ker X \cap B\,\ker D$. It is easy to see that $\ker X \cap B\,\ker D$ coincides with $B \ker D$, because $\ker X \supseteq B \ker D$ in view of the inclusion $\ker R \subseteq \ker (X\,B)$ following from (\ref{kercond}) and of the identity $\ker D=\ker R$ (recall that $R=D^\top D$). Hence ${\cal R}^\star_{\ker X}$ is the smallest $A _{X}$-invariant subspace containing $B\,\ker D=\operatorname{im}(B\,G)$, i.e., ${\cal R}^\star_{\ker X}={\cal R} _{0,X}$. \hspace*{\fill}~\QED\par\endtrivlist\unskip { \section{The Hamiltonian system} \label{ESDE} The aim of this section is to establish a link between the geometric properties of the solutions of CGCARE($\Sigma$) presented in the previous section and the structure of the so-called Hamiltonian system, which plays a crucial role in the study of the solutions of continuous-time (differential and algebraic) Riccati equations. Recall that the Hamiltonian system associated with the Popov triple $\Sigma$ is defined by the equations \begin{eqnarray} \begin{array}{rcl} \left[ \begin{array}{c} \dot{x}(t) \\[-1mm] \dot{\lambda}(t) \end{array} \right] &=& \left[ \begin{array}{cc} A & 0 \\[-1mm] -Q & -A^\top \end{array} \right] \left[ \begin{array}{c} x(t) \\[-1mm] \lambda(t)\end{array} \right]+\left[ \begin{array}{c} B \\[-1mm] -S \end{array} \right] u(t) \\ y(t)&=&[\begin{array}{cc} S^\top & B^\top \end{array}] \left[ \begin{array}{c} x(t) \\[-1mm] \lambda(t)\end{array} \right]+R\,u(t), \end{array} \label{ham} \end{eqnarray} where the variable $\lambda(t)$ is the costate vector. We define $\hat{A} \stackrel{\text{\tiny def}}{=} \left[ \begin{smallmatrix} A & 0 \\[1mm] -Q & -A^\top \end{smallmatrix} \right]$, $\hat{B} \stackrel{\text{\tiny def}}{=} \left[ \begin{smallmatrix} B \\[1mm] -S \end{smallmatrix} \right]$, $\hat{C} \stackrel{\text{\tiny def}}{=} \left[ \begin{smallmatrix} S^\top & B^\top \end{smallmatrix} \right]$ and $\hat{D} \stackrel{\text{\tiny def}}{=} R$. The Hamiltonian system (\ref{ham}) is identified with the matrix quadruple $(\hat{A}, \hat{B}, \hat{C}, \hat{D})$. The Hamiltonian system has strong relations with the corresponding optimal control problem. Indeed, using an Euler-Lagrange approach, the optimality conditions of an LQ problem can be written as in (\ref{ham}) with the additional constraint $y(t) =0$ for all $t \ge 0$. It is a classic and very well-known result that the set of invariant zeros of the Hamiltonian system is mirrored with respect to the imaginary axis, see e.g. \cite{Prattichizzo-MN-04}. Moreover, given a solution $X$ of the standard continuous-time algebraic Riccati equation, the invariant zeros of the Hamiltonian system (\ref{ham}) are given by the union of the eigenvalues of the closed-loop matrix $A _{X}$ with those of $-A _{X}$. In symbols, \begin{equation} \label{std} {\cal Z}({\hat{A}, \hat{B}, \hat{C}, \hat{D}})= \sigma(A _{X}) \cup \sigma(-A _{X}). \end{equation}
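In the regular case this mirrored structure is immediate to verify numerically. The following minimal sketch (Python, assuming SciPy is available; for simplicity we take $S=0$ and $R=I_m$ with randomly generated stabilizable data) checks \eqref{std}:
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

# Regular case R > 0: the invariant zeros of the Hamiltonian system are
# the eigenvalues of A_X together with their mirror images, Eq. (std).
rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)               # S = 0

X   = solve_continuous_are(A, B, Q, R)    # stabilizing CARE solution
A_X = A - B @ (B.T @ X)                   # K_X = R^{-1} B^T X here
# With D^ = R invertible, the invariant zeros are the eigenvalues of the
# zero dynamics A^ - B^ D^{-1} C^, i.e. of the Hamiltonian matrix:
H = np.block([[A, -B @ B.T], [-Q, -A.T]])

z  = np.sort_complex(np.linalg.eigvals(H))
ax = np.linalg.eigvals(A_X)
print(np.allclose(z, np.sort_complex(np.concatenate([ax, -ax]))))   # True
\end{verbatim}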
The goal of this section is to show that when $R$ is singular but CGCARE($\Sigma$) admits a solution $X$, the set of invariant zeros of the Hamiltonian system (\ref{ham}) is a subset of this union. More precisely, the following result holds. \begin{theorem} \label{thefund} Let $X$ be a solution of CGCARE($\Sigma$). Let the pair $(A _{X},B\,G)$ be written in the reachability form as $\left[ \begin{smallmatrix} A _{X,11} & A _{X,12} \\[1mm] 0 & A _{X,22} \end{smallmatrix} \right]$, $\left[ \begin{smallmatrix} B _{21} \\[1mm] 0 \end{smallmatrix} \right]$, where the pair $(A _{X,11},B _{21})$ is completely reachable. Let $\Gamma _{X} \stackrel{\text{\tiny def}}{=} A _{X,22}$. There holds \[ {\cal Z}(\hat{A}, \hat{B}, \hat{C}, \hat{D})=\sigma(\Gamma _{X}) \cup \sigma(-\Gamma _{X}). \] \end{theorem} \ \\[-5mm] In order to prove Theorem \ref{thefund}, we need the following technical lemmas. \ \\[-5mm] \begin{lemma} \label{zeri} The set of invariant zeros of a quadruple $(A,B,C,D)$ is invariant with respect to state feedback and output injection and with respect to changes of coordinates in the state space, i.e., for any matrices $F$ and $G$ and for any non-singular $T$ of suitable sizes there hold \begin{eqnarray*} {\cal Z}(A,B,C,D)&=&{\cal Z}(A+B\,F,B,C+D\,F,D)\\ &=&{\cal Z}(A+G\,C,B+G\,D,C,D) \\ &=&{\cal Z}(T^{-1}A\,T,T^{-1} B, C\,T,D). \end{eqnarray*} \end{lemma} \noindent{\bf{\em Proof:}\ \ } The first equality follows by observing that for all matrices $F$ and for all $s \in {\mathbb{C}}$ there holds $\left[ \begin{smallmatrix} A+B\,F-s\,I_n & B \\[1mm] C+D\,F & D \end{smallmatrix} \right] = \left[ \begin{smallmatrix} A-s\,I_n & B \\[1mm] C & D \end{smallmatrix} \right] \left[ \begin{smallmatrix} I_n & 0 \\[1mm] F & I_m \end{smallmatrix} \right]$. The second is dual. The third statement follows from $\left[ \begin{smallmatrix} T^{-1}A\,T-s\,I_n & T^{-1}B \\[1mm] C\,T & D \end{smallmatrix} \right] = \left[ \begin{smallmatrix} T^{-1} & 0 \\[1mm] 0 & I_p \end{smallmatrix} \right]\left[ \begin{smallmatrix} A-s\,I & B \\[1mm] C & D \end{smallmatrix} \right] \left[ \begin{smallmatrix} T & 0 \\[1mm] 0 & I_m \end{smallmatrix} \right]$. \hspace*{\fill}~\QED\par\endtrivlist\unskip \begin{lemma} \label{lemnuovo} Let $X=X^\top$ be a solution of CGCARE($\Sigma$). The invariant zeros of the Hamiltonian system (\ref{ham}) coincide with the generalized eigenvalues of the matrix pencil \begin{equation}\label{tiblokdec}\hat{P}(s)=\left[ \begin{array}{ccc} A _{X}-s\,I_n & 0 & B \\[-1mm] 0 & -(A _{X}^\top+s\,I_n) & 0 \\[-1mm] 0 & B^\top & R \end{array} \right]. \end{equation} \end{lemma} \noindent{\bf{\em Proof:}\ \ } We perform a state-feedback transformation in (\ref{ham}).
Let $u(t)=[\begin{array}{cc} -K _{X} & 0 \end{array} ] \left[ \begin{smallmatrix} x(t) \\[1mm] \lambda(t) \end{smallmatrix} \right]+v(t)$, so that \begin{eqnarray*} && \left[ \begin{array}{c} \dot{x}(t) \\[-1mm] \dot{\lambda}(t) \end{array} \right] = \left[ \begin{array}{cc} A-B\,K _{X} & 0 \\[-1mm] -Q+S\,K _{X} & -A^\top \end{array} \right] \left[ \begin{array}{c} x(t) \\[-1mm] \lambda(t)\end{array} \right]+\left[ \begin{array}{c} B \\[-1mm] -S \end{array} \right] v(t) \\ && y(t)=[\begin{array}{cc} S^\top-R\,K _{X} & B^\top \end{array} ]\left[ \begin{array}{c} x(t) \\[-1mm] \lambda(t)\end{array} \right]+R\,v(t) \end{eqnarray*} Now we change coordinates in the state-space of the Hamiltonian system with $T=\left[ \begin{smallmatrix} I_n & 0 \\[1mm] X & I_n \end{smallmatrix} \right]$, and we obtain \begin{eqnarray*} \hat{A}^\prime & = & T^{-1} \hat{A}\,T= \left[ \begin{array}{cc} I_n & 0 \\[-1mm] -X & I_n \end{array} \right] \left[ \begin{array}{cc} A-B\,K _{X} & 0 \\[-1mm] -Q+S\,K _{X} & -A^\top \end{array} \right] \left[ \begin{array}{cc} I_n & 0 \\[-1mm] X & I_n \end{array} \right] \\ &=& \operatorname{diag} \{A _{X},-A^\top\}, \end{eqnarray*} since $-X\,A _{X}-Q+S\,K _{X}-A^\top X=0$ in view of CGCARE($\Sigma$). Moreover $\hat{B}^\prime=T^{-1}\,\hat{B}=\left[ \begin{smallmatrix} I_n & 0 \\[1mm] -X & I_n \end{smallmatrix} \right] \left[ \begin{smallmatrix} B \\[1mm] -S \end{smallmatrix} \right]=\left[ \begin{smallmatrix} B \\[1mm] -X\,B-S \end{smallmatrix} \right]$ and $\hat{C}^\prime = \hat{C}\,T= [\begin{array}{cc} S^\top-R\,K _{X} & B^\top\end{array}] \left[ \begin{smallmatrix} I_n & 0 \\[1mm] X & I_n \end{smallmatrix} \right] = [\begin{array}{ccc} S _{X}^\top-R\,R^\dagger S _{X}^\top && B^\top\end{array}] =[ \begin{array}{ccc} 0 && B^\top \end{array} ]$, where we used $R\,R^\dagger S _{X}^\top=S _{X}^\top$. Finally, $\hat{D}^\prime =R$. In view of Lemma \ref{zeri}, we have ${\cal Z}(\hat{A}, \hat{B}, \hat{C}, \hat{D})={\cal Z}(\hat{A}^\prime, \hat{B}^\prime, \hat{C}^\prime, \hat{D}^\prime)$. Now we perform an output-injection using the matrix $G=\left[ \begin{smallmatrix} 0 \\[1mm] K _{X}^\top \end{smallmatrix} \right]$ (here $G$ denotes an output-injection gain, not to be confused with the basis matrix of $\ker R$) and we obtain \begin{eqnarray*} \hat{A}^{\prime \prime} & = & \hat{A}^\prime +G\,\hat{C}^\prime=\left[ \begin{array}{cc} A _{X} & 0 \\[-1mm] 0 & -A^\top \end{array} \right] +\left[ \begin{array}{c} 0 \\ K _{X}^\top \end{array} \right] [ \begin{array}{cc} 0 & B^\top \end{array} ] = \left[ \begin{array}{cc} A _{X} & 0 \\[-1mm] 0 & -A _{X}^\top \end{array} \right]\\ \hat{B}^{\prime \prime} & = & \hat{B}^\prime +G\,\hat{D}^\prime=\left[ \begin{array}{c} B \\[-1mm] -X\,B-S \end{array} \right]+\left[ \begin{array}{c} 0 \\[-1mm] K _{X}^\top \end{array} \right] R= \left[ \begin{array}{c} B \\[-1mm] 0 \end{array} \right] \\ \hat{C}^{\prime \prime} & = & \hat{C}^\prime= [ \begin{array}{ccc} 0 && B^\top \end{array} ] \quad \qquad \hat{D}^{\prime \prime} = \hat{D}^\prime=R, \end{eqnarray*} where we have used the fact that $-X\,B-S+S _{X}\,R^\dagger R=0$ since $S\,R^\dagger R=S$ and $XB(R^\dagger R-I_m)=0$. Again, in view of Lemma \ref{zeri}, we have ${\cal Z}(\hat{A}, \hat{B}, \hat{C}, \hat{D})={\cal Z}(\hat{A}^{\prime \prime}, \hat{B}^{\prime \prime}, \hat{C}^{\prime \prime}, \hat{D}^{\prime \prime})$. Thus, the invariant zeros of the Hamiltonian system $(\hat{A}, \hat{B}, \hat{C}, \hat{D})$ are the values of $s \in {\mathbb{C}}$ at which the Rosenbrock matrix pencil (\ref{tiblokdec}) loses rank with respect to its normal rank.
\hspace*{\fill}~\QED\par\endtrivlist\unskip It is worth remarking that the generalized eigenvalues of $\hat{P}(s)$ are independent of the solution $X=X^\top$ of CGCARE($\Sigma$), since these coincide with the invariant zeros of the Hamiltonian system. Observe also that when $R$ is non-singular {(i.e., when $X$ is a solution of CARE($\Sigma$))}, this result allows us to re-obtain (\ref{std}), since clearly \begin{equation} \label{division} \sigma(\hat{P}(s))=\sigma(A _{X}-s\,I_n) \cup \sigma (A _{X}^\top+s\,I_n), \end{equation} where the symbol $\sigma(\hat{P}(s))$ stands for the set of generalized eigenvalues of the pencil $\hat{P}(s)$ counting multiplicities. However, (\ref{division}) does not hold when $R$ is singular. } \begin{example} \label{ex0} { Let $A=\left[ \begin{smallmatrix} -4 & 0 \\[1mm] 2 & 6 \end{smallmatrix} \right]$, $B=\left[ \begin{smallmatrix} 0 & -7 \\[1mm] 2 & -4 \end{smallmatrix} \right]$, $Q=\left[ \begin{smallmatrix} \frac{17}{4} & 0 \\ 0 & 0 \end{smallmatrix} \right]$, $S=\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & 0 \end{smallmatrix} \right]$, $R=\left[ \begin{smallmatrix} 0 & 0 \\[1mm] 0 & 4 \end{smallmatrix} \right]$. The matrix $X=\operatorname{diag}\{-1,0\}$ is a solution of CGCARE($\Sigma$) but CARE($\Sigma$) is not defined in this case. The closed-loop matrix is $A _{X}=\left[ \begin{smallmatrix} 33/4 & 0 \\[1mm] 9 & 6 \end{smallmatrix} \right]$. Applying the result in Lemma \ref{lemnuovo} we find that the Rosenbrock matrix associated with the Hamiltonian system can be written as {\small \begin{eqnarray*} \hat{P}(s)=\left[ \begin{array}{cc|cc|cc} \frac{33}{4} - s & 0 & 0 & 0 & \;0 & -7 \\[-1mm] 9 & 6-s & 0 & 0 & \;2 & -4 \\ \hline 0 & 0 & -\frac{33}{4}-s & -9 & \;0 & 0\\[-1mm] 0 & 0 & 0 & -6-s & \;0 & 0\\[-1mm] \hline 0 & 0 & 0 & 2 & \;0 & 0\\[-1mm] 0 & 0 & -7 & -4 &\; 0 & 4 \end{array} \right]. \end{eqnarray*} } The normal rank of $\hat{P}(s)$ is equal to $5$. The eigenvalues of $A _{X}$ are equal to $33/4$ and $6$. While it is true that when $s=\pm 33/4$ the rank of $\hat{P}(s)$ is equal to $4$, for both $s=6$ and $s=-6$ the rank of $\hat{P}(s)$ is equal to $5$. This result says that, unlike the regular case, not all the eigenvalues of $A _{X}$ are invariant zeros of the Hamiltonian system. Specifically, the invariant zeros of the Hamiltonian system are $\pm 33/4$. \hspace*{\fill}~$\square$\par\endtrivlist\unskip } \end{example} \begin{theorem} \label{th0} Let $X$ be a solution of CGCARE($\Sigma$). Two matrices ${U} _{X}$ and ${V} _{X}$ exist such that {\small \begin{eqnarray} && \hspace{-3mm}{{\Large {U} _{X}\,\hat{P}(s)\,{V} _{X} =}} \nonumber \\ && \left[ \begin{array}{cc|c|ccc} A _{X,11}-s\,I_{r} & B _{21} & 0 & A _{X,12} & 0 & B _{11} \\ \hline 0 & 0 & -(A _{X,11}^\top+s\,I_r) & 0 & 0 & 0 \\[-1mm] 0 & 0 & B _{21}^\top & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & A _{X,22}-s\,I_{n-r}\!\!& 0 & B _{12} \\[-1mm] 0 & 0 & -A _{X,12}^\top & 0 & \!\! -(A _{X,22}^\top+s\,I_{n-r}) & 0 \\[-1mm] 0 & 0 & B _{11}^\top & 0 & B _{12}^\top & R_{0} \end{array} \right], \label{fc} \end{eqnarray} } where the pair $(A _{X,11},B _{21})$ is reachable and $R_{0}$ is invertible. 
Moreover, the sub-matrix pencil \[ \hat{P}_{1}(s) \stackrel{\text{\tiny def}}{=} \left[ \begin{smallmatrix} A _{X,22}-s\,I_{n-r}& 0 & B _{12} \\ 0 & -(A _{X,22}^\top+s\,I_{n-r}) & 0 \\ 0 & B _{12}^\top & R_{0} \end{smallmatrix} \right] \] in (\ref{fc}) is regular, and the generalized eigenvalues of the pencil $\hat{P}(s)$ are the generalized eigenvalues of $\hat{P}_{1}(s)$. \end{theorem} \noindent{\bf{\em Proof:}\ \ } Consider an orthogonal change of coordinates in the input space ${\mathbb{R}}^m$ induced by {the $m \times m$ orthogonal matrix $T=[\,T_{1}\;\;T _{2}\,]$ where $\operatorname{im} T_{1}=\operatorname{im} R$ and $\operatorname{im} T_{2}=\operatorname{im} G=\ker R$.} In this basis $R$ is block-diagonal, with the first block being non-singular and the second being zero, i.e., $T^\top R\,T= \operatorname{diag} \{R_{0},0\}$, where $R_{0}$ is invertible. Its dimension is denoted by $m_{1}$. Consider the block matrix $\hat{T}\stackrel{\text{\tiny def}}{=}\operatorname{diag}\{I_n, I_n, T\}$. By multiplying $\hat{P}(s)$ on the left by $\hat{T}^\top$ and on the right by $\hat{T}$, and by defining the matrices $B_{1} \stackrel{\text{\tiny def}}{=} B\,T_{1}$ and $B _{2} \stackrel{\text{\tiny def}}{=} B\,T_{2}$, we get {\small \begin{eqnarray*} \hat{T}^\top\hat{P}(s)\, \hat{T}= \left[ \begin{array}{cccc} A _{X}-s\,I_n & 0 & B_{1} & B _{2} \\[-1mm] 0 & -(A _{X}^\top+s\,I_n) & 0 & 0\\[-1mm] 0 & B_{1}^\top & R_{0} & 0 \\[-1mm] 0 & B _{2}^\top & 0 & 0 \end{array} \right]. \end{eqnarray*} } Notice that $\operatorname{im} B _{2}= \operatorname{im} (B\,G)$ in view of the identity $\ker R=\operatorname{im} G$. Matrix $B_{1}$ has $m_{1}$ columns. Let us denote by $m _{2} \stackrel{\text{\tiny def}}{=} m - m_{1}$ the number of columns of $B _{2}$. Let us now take a matrix $H=[\,H_{1}\;\;H _{2}\,]$ such that $\operatorname{im} H_{1}$ is the reachable subspace from the origin of the pair $(A _{X},B _{2})$, which coincides with the subspace ${\cal R} _{0,X}$, yielding $H^{-1}\,A _{X}\,H=\left[ \begin{smallmatrix} A _{X,11} & A _{X,12} \\ 0 & A _{X,22} \end{smallmatrix} \right] , \; H^{-1}\,B _{2}=\left[ \begin{smallmatrix} B _{21} \\[1mm] 0 \end{smallmatrix} \right]$, $H^{-1}\,B_{1}=\left[ \begin{smallmatrix} B _{11} \\[1mm] B _{12} \end{smallmatrix} \right]$. Let $\hat{H}=\operatorname{diag}\{H,H,I_{m_{1}},I_{m _{2}}\}$ be partitioned conformably with the block structure of the pencil. Reordering $\hat{H}^{-1} \hat{T}^\top\hat{P}(s)\, \hat{T} \hat{H}$ via two suitable unimodular matrices $\Omega_{1}$ and $\Omega_{2}$ yields (\ref{fc}) with ${U} _{X}=\Omega_{1}\,\hat{H}^{-1}\,\hat{T}^\top$ and ${V} _{X}=\hat{T}\,\hat{H}\, \Omega _{2}$, where $r$ is the dimension of the reachable subspace of the pair $(A _{X},B _{2})$. {We now proceed with the computation of the normal rank of $\hat{P}(s)$.} Since the pair $(A _{X,11}, B _{21})$ is reachable by construction, all the $r$ rows of the submatrix $[\,A _{X,11}-s\,I_{r} \;\; B _{21} \,]$ are linearly independent for every $s \in {\mathbb{C}} \cup \{\infty\}$. This also means that of the $r+m _{2}$ columns of $[\,A _{X,11}-s\,I_{r} \;\; B _{21} \,]$, only $r$ are linearly independent; this gives rise to a null-space of $\hat{P}(s)$ whose dimension $m _{2}$ is independent of $s \in {\mathbb{C}} \cup \{\infty\}$.
Thus, \begin{eqnarray*} \operatorname{rank} \hat{P}(s)=r+ \operatorname{rank} \left[ \begin{array}{c|ccc} -(A _{X,11}^\top+s\,I_r) & 0 & 0 & 0 \\[-1mm] B _{21}^\top & 0 & 0 & 0 \\ \hline 0 & A _{X,22}-s\,I_{n-r}& 0 & B _{12} \\[-1mm] -A _{X,12}^\top & 0 & -(A _{X,22}^\top+s\,I_{n-r}) & 0 \\[-1mm] B _{11}^\top & 0 & B _{12}^\top & R_{0} \end{array} \right]. \end{eqnarray*} Again, since the pair $(A _{X,11}, B _{21})$ is reachable, then $(A _{X,11}^\top, B _{21}^\top)$ is observable, and the rank of the submatrix $\left[ \begin{smallmatrix} -(A _{X,11}^\top+s\,I_r) \\ B _{21}^\top \end{smallmatrix} \right]$ is constant and equal to $r$ for every $s \in {\mathbb{C}} \cup \{\infty\}$. Thus, $\operatorname{rank} \hat{P}(s)=2\,r+ \operatorname{rank} \hat{P}_{1}(s)$, where {\small \begin{eqnarray*} \hat{P}_{1}(s) = \left[ \begin{array}{ccc}A _{X,22}-s\,I_{n-r}& 0 & B _{12} \\[-1mm] 0 & -(A _{X,22}^\top+s\,I_{n-r}) & 0 \\[-1mm] 0 & B _{12}^\top & R_{0} \end{array} \right]. \end{eqnarray*} } Since $\det \hat{P}_{1}(s)=\det(A _{X,22}-s\,I_{n-r}) \cdot \det(-(A _{X,22}^\top+s\,I_{n-r}))\cdot \det R_{0}$, a value $s \in {\mathbb{C}}$ can certainly be found for which $\det \hat{P}_{1}(s) \neq 0$. This means that the normal rank of $\hat{P}_{1}(s)$ is equal to $2\,(n-r)+m_{1}$, and therefore $\operatorname{normrank} \hat{P}(s)=2\,r+2\,(n-r)+m_{1}=2\,n+m_{1}$. It also follows that the generalized eigenvalues of the pencil $\hat{P}(s)$ are the values $s \in {\mathbb{C}} \cup \{\infty\}$ for which the rank of $\hat{P}_{1}(s)$ is smaller than its normal rank $2\,(n-r)+m_{1}$. These values are the eigenvalues of $A _{X,22}$ plus their opposites, including possibly the eigenvalue at infinity, whose multiplicity (be it algebraic or geometric) is the multiplicity of the zero eigenvalue of the matrix $P_{\infty}\stackrel{\text{\tiny def}}{=} \operatorname{diag}\{I_{n-r},I_{n-r},0_{m_{1}}\}$. {The last $m_{1}$ columns of $P_{\infty}$ give rise to an eigenvalue at infinity whose multiplicity (algebraic and geometric) is exactly equal to $m_{1}$, since in this case the dimension of $\ker P_{\infty}$ is equal to $m_{1}$. \hspace*{\fill}~\QED\par\endtrivlist\unskip Theorem \ref{thefund} now follows as a corollary of Theorem \ref{th0}, combined with Theorem \ref{the}. Indeed, from (\ref{fc}) we find ${\cal Z}(\hat{A}, \hat{B}, \hat{C}, \hat{D})=\sigma(A _{X,22}) \cup \sigma(-A _{X,22})$. It turns out that, unlike the regular case, not all the eigenvalues of the closed-loop matrix $A _{X}$ are invariant zeros of the Hamiltonian system (\ref{ham}). In particular, the eigenvalues of $A _{X}$ restricted to ${\cal R}^\star_{\ker X}$ -- which are the controllable eigenvalues of the pair $(A _{X},B\,G)$ -- are not invariant zeros of the Hamiltonian system, whereas the eigenvalues induced by $A _{X}$ on ${\mathbb{R}}^n / {\cal R}^\star_{\ker X}$ along with their opposites are invariant zeros of the Hamiltonian system. \begin{example} { Consider Example \ref{ex0}. Using the solution $X=\operatorname{diag} \{-1,0\}$ of CGCARE($\Sigma$) we easily find that $\ker R$ and $\operatorname{im} R$ are respectively spanned by the vectors $\left[ \begin{smallmatrix} 1 \\[1mm] 0 \end{smallmatrix} \right]$ and $\left[ \begin{smallmatrix} 0 \\[1mm] 1 \end{smallmatrix} \right]$. Hence, by taking $T=\left[ \begin{smallmatrix} 0 & 1 \\[1mm] 1 & 0 \end{smallmatrix} \right]$ we obtain $T^\top R\,T=\operatorname{diag}\{ 4,0\}$. Thus, in this case $m_{1}=1$.
Moreover, we partition $B\,T$ as $B\,T=\left[ \begin{smallmatrix} -7 & 0 \\[1mm] -4 & 2 \end{smallmatrix} \right]$, so that $B_{1}=\left[ \begin{smallmatrix} -7 \\[1mm] -4 \end{smallmatrix} \right]$ and $B _{2}=\left[ \begin{smallmatrix} 0 \\[1mm] 2 \end{smallmatrix} \right]$. As expected, the image of $B _{2}=B\,G$ coincides with the reachability subspace ${\cal R}^\star_{\ker X}$ on $\ker X=\operatorname{span} \left\{\left[ \begin{smallmatrix} 0 \\[1mm] 1 \end{smallmatrix} \right]\right\}$, which in this case is the whole of $\ker X$. The normal rank of $\hat{P}(s)$ is equal to $2\,n+m_{1}=5$. The invariant zeros of the Hamiltonian system are given by the uncontrollable eigenvalues of the pair $(A _{X},B _{2})=\left(\left[ \begin{smallmatrix} 33/4 & 0 \\[1mm] 9 & 6 \end{smallmatrix} \right],\left[ \begin{smallmatrix} 0 \\[1mm] 2 \end{smallmatrix} \right]\right)$ plus their opposites, i.e., $33/4$ and $-33/4$. Since $n-r=1$ and $m_{1}=1$, the matrix pencil $\hat{P}(s)$ also has a generalized eigenvalue at infinity, with multiplicity equal to $\dim \ker P_{\infty}=m_{1}=1$. By writing the Rosenbrock matrix pencil associated with the Hamiltonian system in the form given by (\ref{fc}), we get in fact {\small \begin{eqnarray} {U} _{X}\,\hat{P}(s)\,{V} _{X}= \left[ \begin{array}{cc|c|ccc} 6-s & 2 & 0 & 9 & 0 & -4 \\ \hline 0 & 0 & -6-s & 0 & 0 & 0 \\[-1mm] 0 & 0 & 2 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & \frac{33}{4}-s & 0 & -7 \\[-1mm] 0 & 0 & -9 & 0 & -(\frac{33}{4}+s) & 0 \\[-1mm] 0 & 0 & -4 & 0 & -7 & 4 \end{array} \right], \label{tgy} \end{eqnarray} } which shows that $33/4$ and $-33/4$ are indeed the only finite generalized eigenvalues of $\hat{P}(s)$. } \end{example} \begin{remark} { The MATLAB$^{\textrm{\tiny{\textregistered}}}$ routine for the solution of the continuous-time algebraic Riccati equation is {\tt care.m}. This routine requires matrix $R$ to be positive definite, and delivers the stabilizing solution of this equation (which exists if and only if $(A,B)$ is stabilizable and the Hamiltonian matrix has no eigenvalues on the imaginary axis). Thus, {\tt care.m} cannot handle the case where $R$ is singular. Using the decomposition of this section and applying {\tt care.m} to the regular part of the Hamiltonian pencil delivers the solution of CGCARE($\Sigma$) which, loosely speaking, is {\em as stabilizing as possible}. In contrast to the standard case, when no stabilizing solution of CGCARE($\Sigma$) exists it may still be possible to add a further feedback term which stabilizes the system, as the following section will show. } \end{remark}
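In the same spirit, the computations of Example \ref{ex0} are easily reproduced in open-source tools. The following minimal Python/NumPy sketch (SciPy's \texttt{solve\_continuous\_are}, the counterpart of \texttt{care.m}, likewise assumes a nonsingular $R$ to our knowledge, so here the CGCARE solution is checked directly) verifies the solution $X=\operatorname{diag}\{-1,0\}$ and the rank pattern of the pencil $\hat{P}(s)$:
\begin{verbatim}
import numpy as np

# Data of Example (ex0).
A = np.array([[-4., 0.], [2., 6.]])
B = np.array([[0., -7.], [2., -4.]])
Q = np.diag([17/4, 0.])
S = np.zeros((2, 2))
R = np.diag([0., 4.])
X = np.diag([-1., 0.])

Rp  = np.linalg.pinv(R)
S_X = S + X @ B
A_X = A - B @ (Rp @ S_X.T)                     # [[33/4, 0], [9, 6]]

# CGCARE residual and the condition ker R <= ker(XB):
print(np.allclose(X @ A + A.T @ X + Q - S_X @ Rp @ S_X.T, 0))   # True
print(np.allclose(X @ B @ (np.eye(2) - Rp @ R), 0))             # True

# Rank pattern of the pencil P^(s) of Lemma (lemnuovo):
def P_hat(s):
    Z = np.zeros((2, 2))
    return np.block([[A_X - s*np.eye(2), Z, B],
                     [Z, -(A_X.T + s*np.eye(2)), Z],
                     [Z, B.T, R]])

for s in (33/4, -33/4, 6., -6., 0.1234):
    print(s, np.linalg.matrix_rank(P_hat(s)))  # 4, 4, 5, 5, 5
\end{verbatim}
The controllable eigenvalue $6$ of $(A _{X},B\,G)$ leaves the rank untouched, in agreement with Theorem \ref{thefund}.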
{ \section{Stabilization} In the previous sections, we have observed that the eigenvalues of the closed-loop matrix $A _{X}$ restricted to the subspace ${\cal R}_{0,X}$ are independent of the particular solution $X=X^\top$ of CGCARE($\Sigma$) considered. This means that these eigenvalues -- which do not appear as invariant zeros of the Hamiltonian system -- are present in the closed loop regardless of the solution $X=X^\top$ of CGCARE($\Sigma$) that we consider. On the other hand, we have also observed that ${\cal R}_{0,X}$ coincides with the subspace ${\cal R}^\star_{\ker X}$, which is by definition the smallest $(A-B\,K _{X})$-invariant subspace containing $\ker X \cap B\,\ker D=\operatorname{im} (B\,G)$. It follows that it is always possible to find a matrix $L$ that assigns all the eigenvalues of the map $(A _{X}+B\,G\,L)$ restricted to the reachable subspace ${\cal R}^\star_{\ker X}$, by adding a further term $B\,G\,L\,x(t)$ to the feedback control law, because this does not change the value of the cost with respect to the one obtained by $u(t)=-K _{X}\,x(t)$. Indeed, the additional term only affects the part of the trajectory on ${\cal R}^\star_{\ker X}$, which is output-nulling. Moreover, in doing so it may stabilize the closed loop if $\ker X$ is externally stabilized by $-K _{X}$, see \cite{Trentelman-SH-01}. Indeed, since ${\cal R}_{0,X}$ is output-nulling with respect to the quadruple $(A,B,C,D)$, it is also output-nulling for the quadruple $(A-B\,K _{X},B,C-D\,K _{X},D)$, and two matrices $\Xi$ and $\Omega$ exist such that \begin{equation} \label{XU} \left[ \begin{array}{c} A _{X} \\[-1mm] C _{X} \end{array} \right] \,R_{0,X}=\left[ \begin{array}{c} R_{0,X} \\[-1mm] 0 \end{array} \right] \Xi+\left[ \begin{array}{c} B \\[-1mm] D \end{array} \right] \Omega, \end{equation} where $R_{0,X}$ is a basis matrix of ${\cal R}_{0,X}$. In order to find a feedback matrix that stabilizes the system, we solve (\ref{XU}) for $\Xi$ and $\Omega$, so as to find $L$ such that $\left[ \begin{smallmatrix} A _{X}+B\,L \\[1mm] C _{X}+D\,L \end{smallmatrix} \right] \,R_{0,X}=\left[ \begin{smallmatrix} R_{0,X} \\[1mm] 0 \end{smallmatrix} \right] \Xi$, where the eigenvalues of $\Xi$ are the eigenvalues of the map $A _{X}+B\,L$ restricted to ${\cal R}_{0,X}$. We first compute the set of solutions of (\ref{XU}) in $\Xi$ and $\Omega$, i.e., \begin{eqnarray} \label{hu} \left[ \begin{array}{c} \Xi \\[-1mm] \Omega \end{array} \right]=\left[ \begin{array}{c} \hat{\Xi} \\[-1mm] \hat{\Omega} \end{array} \right]+\left[ \begin{array}{c} H_{1}\\[-1mm] H _{2} \end{array} \right] K, \quad \left[ \begin{array}{c} \hat{\Xi} \\[-1mm] \hat{\Omega} \end{array} \right]\stackrel{\text{\tiny def}}{=} \left[ \begin{array}{cc} R_{0,X} & B \\[-1mm] 0& D \end{array} \right]^\dagger \left[ \begin{array}{c} A _{X} \\[-1mm] C _{X} \end{array} \right] R_{0,X} \end{eqnarray} and $\left[ \begin{smallmatrix} H_{1} \\[1mm] H _{2} \end{smallmatrix} \right]$ is a basis matrix of $\ker \left[ \begin{smallmatrix} R_{0,X} & B \\[1mm] 0 & D \end{smallmatrix} \right]$. Since ${\cal R}_{0,X}$ is a controllability subspace, the pair $(\hat{\Xi},H_{1})$ is reachable. This implies that a matrix $K$ in (\ref{hu}) can always be found so that the eigenvalues of $\Xi$ are freely assignable (provided they come in complex conjugate pairs). Hence, we use such $K$ in (\ref{hu}) and then we compute $L=-\Omega\,R_{0,X}^\dagger$. This choice guarantees that only the eigenvalues of $A _{X}$ restricted to ${\cal R}_{0,X}$ are affected by the use of $L$. \begin{example} \label{ex2} { Let $A = \left[ \begin{smallmatrix} -8 & 0\\[1mm] 6 & 0 \end{smallmatrix} \right]$, $B= \left[ \begin{smallmatrix} 0 \\[1mm] -4 \end{smallmatrix} \right]$, $C = [\begin{array}{cc} 4 & 0 \end{array} ]$, $D=0$, so that $Q=C^\top C=\operatorname{diag} \{16,0\}$, $S=C^\top D=0$ and $R=D^\top D=0$. One can directly verify that the set of solutions of GCARE($\Sigma$) is parameterized by $t \in {\mathbb{R}}$ as $X_t= \left[ \begin{smallmatrix} \frac{9}{16}\,t+1 & \frac{3}{4}\,t \\[1mm] \frac{3}{4}\,t & t \end{smallmatrix} \right]$.
Thus, $X=X_{0}=\left[ \begin{smallmatrix} 1 & 0 \\[1mm] 0 & 0 \end{smallmatrix} \right]$ is the only solution of GCARE($\Sigma$) for which $\ker R \subseteq \ker (S+X\,B)$. This implies that $X=X_{0}=\left[ \begin{smallmatrix} 1 & 0 \\[1mm] 0 & 0 \end{smallmatrix} \right]$ is the only solution of CGCARE($\Sigma$), and it is positive semidefinite. Since $R=0$, we find $K _{X}=[\,0\;\;\;0\,]$, which implies $A _{X}=A$. Hence, CGCARE($\Sigma$) does not admit a stabilizing solution. However, we now see that the infinite-horizon problem admits an optimal solution which is also stabilizing. Indeed, we find ${\cal R}_{0,X}=\operatorname{im} \left[ \begin{smallmatrix} 0 \\[1mm] 1 \end{smallmatrix} \right]$. The eigenvalue of $A _{X}$ restricted to ${\cal R}_{0,X}$ is 0, while the eigenvalue induced by the map $A_{X}$ on the quotient space ${\mathbb{R}}^2/{\cal R}_{0,X}$ is $-8$. The optimal trajectory is \[ \left[ \begin{array}{c} x_{1}(t) \\[-1mm] x _{2}(t)\end{array} \right]=e^{A _{X}\,t}\left[ \begin{array}{c} x_{1}(0) \\[-1mm] x _{2}(0)\end{array} \right]=\left[ \begin{array}{cc} e^{-8\,t} & 0 \\[-1mm] \frac{3}{4}\left( 1-e^{-8\,t}\right) & 1 \end{array} \right] \left[ \begin{array}{c} x_{1}(0) \\[-1mm] x _{2}(0)\end{array} \right], \] which implies that the optimal cost is $J^*=x_{1}^2(0)$, i.e., it coincides with $x^\top(0)\,X_{0}\,x(0)$. We can find another optimal solution that assigns the additional eigenvalue of the closed loop to $-1$. In this case, $\left[ \begin{smallmatrix} R_{0,X} & B \\[1mm] 0 & D \end{smallmatrix} \right]^\dagger=\left[ \begin{smallmatrix} 0 & 0 \\ 1 & -4 \\ 0 & 0 \end{smallmatrix} \right]^\dagger=\frac{1}{17} \left[ \begin{smallmatrix} 0 & 1 & 0 \\[1mm] 0 & -4 & 0 \end{smallmatrix} \right]$, so that $\left[ \begin{smallmatrix} \hat{\Xi} \\[1mm] \hat{\Omega}\end{smallmatrix} \right]=\left[ \begin{smallmatrix} 0 \\[1mm] 0 \end{smallmatrix} \right]$ using (\ref{hu}). Moreover, a basis for the null-space of $\left[ \begin{smallmatrix} R_{0,X} & B \\[1mm] 0 & D \end{smallmatrix} \right]$ is $\left[ \begin{smallmatrix} 4 \\[1mm] 1 \end{smallmatrix} \right]$. We find $\left[ \begin{smallmatrix} \Xi \\[1mm] \Omega \end{smallmatrix} \right]=\left[ \begin{smallmatrix} 4 \\[1mm] 1 \end{smallmatrix} \right] \,K$. Imposing $\Xi=-1$ gives $K=-1/4$, which in turn gives $L=-\Omega\,R_{0,X}^\dagger=-K\,\left[ \begin{smallmatrix} 0 \\[1mm] 1 \end{smallmatrix} \right]^\dagger=\frac{1}{4} [\begin{array}{cc} 0 & 1 \end{array}]=[\begin{array}{cc} 0 & \frac{1}{4} \end{array}]$. Thus, $e^{(A+B\,L)\,t}=\left[ \begin{smallmatrix} e^{-8\,t} & 0 \\[1mm] \star & e^{-t} \end{smallmatrix} \right]$, and the value of the cost remains $J^*=x_{1}^2(0)$. This solution is optimal, and is also stabilizing. Thus, we found a stabilizing optimal control even in a situation in which CGCARE($\Sigma$) does not admit a stabilizing solution. } \end{example} \begin{remark} { The same procedure used in Example \ref{ex2} can be used also in examples where the eigenvalues of $A_{X}$ are complex. Consider e.g. $A = \left[ \begin{smallmatrix} 1 & 1 \\[1mm] -1 & 1 \end{smallmatrix} \right]$, $B= \left[ \begin{smallmatrix} 1 \\[1mm] 0 \end{smallmatrix} \right]$, and let $Q$, $S$ and $R$ be zero matrices. The only solution of CGCARE($\Sigma$) is $X = 0$, so that $\sigma(A_{X})=\sigma(A)=\{1\pm i\}$. However, using the same procedure as in Example \ref{ex2} we can find a matrix $L=[\begin{array}{cc} -9 & 19 \end{array}]$ which stabilizes the system since $\sigma(A_{X}+B\,G\,L)=\{-3,-4\}$. Thus, an optimal feedback that is stabilizing exists. } \end{remark}
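The construction above is straightforward to implement. The following sketch (Python; SciPy's \texttt{null\_space} is used for the kernel computation) reproduces the numbers of Example \ref{ex2}:
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

# Example (ex2): R = 0, hence K_X = 0, A_X = A and C_X = C.
A  = np.array([[-8., 0.], [6., 0.]])
B  = np.array([[0.], [-4.]])
C  = np.array([[4., 0.]])
D  = np.array([[0.]])
R0 = np.array([[0.], [1.]])                  # basis matrix of R_{0,X}

M   = np.block([[R0, B], [np.zeros((1, 1)), D]])
hat = np.linalg.pinv(M) @ np.vstack([A, C]) @ R0  # [Xi_hat; Omega_hat] = 0
H   = null_space(M)                               # spans [4, 1]^T (normalized)
K   = (-1.0 - hat[0, 0]) / H[0, 0]                # place the free eigenvalue at -1
Om  = hat[1, 0] + H[1, 0] * K
L   = -Om * np.linalg.pinv(R0)                    # gives [[0, 0.25]]
print(L, np.linalg.eigvals(A + B @ L))            # closed-loop spectrum {-8, -1}
\end{verbatim}
The extra term $B\,L\,x(t)$ only acts inside ${\cal R}_{0,X}$, so the cost is unchanged while the closed loop becomes stable.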
\begin{remark} The case discussed in the previous example is somewhat extreme. In fact, if CGCARE($\Sigma$) admits a solution and $R=0$ (which clearly implies $S=0$), then $BG=B$ and, for any solution $X$ of CGCARE($\Sigma$), $A_X=A$. Therefore, in this case there exists a matrix $L$ such that the system can be stabilized by the feedback $A_X+BGL$ (which does not affect the cost index) if and only if $(A,B)$ is stabilizable. As the following example shows, however, $R=0$ is far from being a necessary condition for the occurrence of cases where CGCARE($\Sigma$) admits solutions, none of which is stabilizing, while there exist a solution $X$ and a matrix $L$ such that $A_X+BGL$ is a stability matrix. \end{remark} \begin{example} \label{ex6.7} { Let $A = \left[ \begin{smallmatrix} 1 & 1 & 1\\[1mm] -3 & 1 & 0 \\[1mm] 1 & 0 & 0 \end{smallmatrix} \right]$, $B= \left[ \begin{smallmatrix} 0 & 2 \\[1mm] 0 & 0\\[1mm] 1 & 0 \end{smallmatrix} \right]$, $Q = \left[ \begin{smallmatrix} 1 & 0 & -1\\[1mm] 0 & 0 & 0 \\[1mm] -1 & 0 & 1 \end{smallmatrix} \right]$, $S=\left[ \begin{smallmatrix} 1 & 0 \\[1mm] 0 & 0 \\[1mm] -1 & 0 \end{smallmatrix} \right]$ and $R=\left[ \begin{smallmatrix} 1 & 0 \\[1mm] 0 & 0 \end{smallmatrix} \right]$. One can directly verify that the only two solutions of CGCARE($\Sigma$) are $X_{0}=0$ and $X_{1}= \operatorname{diag}\{0,0,2\}$. None of these two solutions is stabilizing. Indeed, the eigenvalues of the closed-loop matrix relative to $X_{0}$ are $\{1,1\pm i\,\sqrt{3}\}$, while those of the one relative to $X_{1}$ are $\{-1,1\pm i\,\sqrt{3}\}$. Thus, CGCARE($\Sigma$) does not have a stabilizing solution. Let us consider the solution $X=X_{1}$. We have $$ A_{X_1}=\left[ \begin{array}{cc|c}1\;&\; 1 \; &\; 1 \\ -3 \;&\; 1 \;&\; 0 \\ \hline 0 \;&\; 0 \;&\; -1 \end{array} \right] \qquad \text{and} \qquad BG=\left[ \begin{array}{c|c} 0 \;&\; 2 \\ 0 \;&\; 0 \\ \hline 0 \;&\; 0 \end{array} \right]. $$ Thus, by suitably selecting $L$ we can arbitrarily place the eigenvalues of the north-west corner of $ A_{X_1} +BGL$ while the third eigenvalue is fixed (and stable), and this new feedback does not affect the cost (\ref{costinf}). For example, we can take \[ L=\left[ \begin{array}{ccc} 0 & 0 & 0 \\ -\frac{7}{2} & \frac{3}{2} & 0\end{array} \right] \] so that the overall closed-loop matrix becomes \[ A_{X_1} +B\,G\,L=\left[ \begin{array}{ccc} -6 \;&\; 4 \;&\; 1 \\ -3 \;&\; 1 \;&\; 0 \\ 0 \;&\; 0 \;&\; -1 \end{array} \right], \] whose eigenvalues are $\{-1,-2,-3\}$; hence, it is stable. } \end{example}
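A quick numerical confirmation of this example, including the $X$-independence of ${\cal R}_{0,X}$ asserted in Theorem \ref{thee} (a minimal sketch):
\begin{verbatim}
import numpy as np

# Data of Example (ex6.7); two CGCARE solutions, neither stabilizing.
A = np.array([[1., 1., 1.], [-3., 1., 0.], [1., 0., 0.]])
B = np.array([[0., 2.], [0., 0.], [1., 0.]])
Q = np.array([[1., 0., -1.], [0., 0., 0.], [-1., 0., 1.]])
S = np.array([[1., 0.], [0., 0.], [-1., 0.]])
R = np.diag([1., 0.])
G = np.array([[0.], [1.]])                      # basis of ker R
Rp = np.linalg.pinv(R)

for X in (np.zeros((3, 3)), np.diag([0., 0., 2.])):
    SX = S + X @ B
    AX = A - B @ (Rp @ SX.T)
    print(np.allclose(X @ A + A.T @ X + Q - SX @ Rp @ SX.T, 0))  # True
    print(np.round(np.linalg.eigvals(AX), 6))   # {1, 1+-i*sqrt(3)}, {-1, 1+-i*sqrt(3)}
    # R_{0,X} = reachable subspace of (A_X, BG): the same for both solutions.
    ctrb = np.hstack([np.linalg.matrix_power(AX, j) @ B @ G for j in range(3)])
    print(np.linalg.matrix_rank(ctrb))          # 2 (span of e1, e2), independent of X

# Stabilizing extra feedback for X_1, as in the example.
X1  = np.diag([0., 0., 2.])
AX1 = A - B @ (Rp @ (S + X1 @ B).T)
L   = np.array([[0., 0., 0.], [-3.5, 1.5, 0.]])
print(np.linalg.eigvals(AX1 + B @ L))           # {-1, -2, -3}; B L = B G L here
\end{verbatim}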
\section*{Concluding remarks and future directions} In this paper we investigated some structural properties of CGCARE arising from singular LQ optimal control problems. These considerations revealed that a subspace can be identified that is independent of the particular solution of the Riccati equation considered and, even more importantly, such that the closed-loop matrix restricted to this subspace does not depend on the particular solution of the Riccati equation. If this subspace is nonzero, a further term, which does not affect the value of the cost function, can be added in the optimal control to the state feedback associated with the solution of the Riccati equation. This term can be expressed in state-feedback form, and can be used as a degree of freedom to stabilize the closed loop even in cases in which the Riccati equation admits no stabilizing solution. As in the discrete-time case, see \cite{Ferrante-04,NF-SCL-arxiv}, our analysis is expected to lead to a procedure for the order reduction of CGCARE, which we believe will provide a relevant numerical advantage in its solution.
{ "attr-fineweb-edu": 1.049805, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcJbxK4tBVhvvrMNT
\section{Introduction} Quantum technologies face many challenges, often arising due to the unavoidable coupling of any system to its environment. The prediction of their dynamics requires open quantum system methods that include such coupling effects, for example the Caldeira-Leggett model \cite{Breuer02} and the spin-boson model \cite{Weiss12}. These methods are successfully employed in many physical contexts, e.g., quantum optics \cite{Krinner18, Maniscalco04, Stewart17}, condensed matter \cite{Ronzani18, Semin20, Deffner13, Wilhelm04, Hanson08, Luschen17}, quantum computation \cite{Verstraete09, Kliesch11, Thorwart01}, nuclear physics \cite{Brambilla21} and quantum chemistry \cite{Teh19}. For instance, modelling circuit quantum electrodynamics with the spin-boson model shows that the heat transport of a superconducting qubit within a hybrid environment changes significantly, depending on the qubit-resonator and resonator-reservoir couplings \cite{Ronzani18}. In the mathematical treatment of an open quantum system, a coupling function $\tens{{\cal C}}_{\omega}$ is typically introduced that describes how strongly the system interacts with bath degrees of freedom (DoF). Its functional form determines the temporal memory of the bath and whether the noise is coloured or not \cite{Breuer02,Weiss12,Anders20}, critically affecting the system dynamics \cite{Deffner13, Liu16, Zou20}. A large body of theoretical results exists for various toy models that make specific assumptions on the coupling function $\tens{{\cal C}}_{\omega}$ \cite{Weiss12, Breuer02, Shabani05}. However, a major drawback of these methods is their weak connection to the system- or material-specific characteristics to which they are applied: for a given DoF, in a given material, which coupling function $\tens{{\cal C}}_{\omega}$ should one choose to model its dynamics? An alternative approach is taken in the condensed matter literature, where open quantum systems are usually characterized by the density of states (DOS) $D_{\omega}$ of their environment \cite{Chen05}, and modes in the environment are often assumed to couple to the system with the same strength $g$ \cite{Tritt05}. Measurement of, for example, the phonon DOS is well-established using different inelastic scattering techniques \cite{Ament11,Bayle14}. In this paper, we present a useful relation that translates the coupling function $\tens{{\cal C}}_{\omega}$ of an open quantum system into an experimentally measurable DOS $D_{\omega}$, and vice versa. Our relation paves the way to parametrizing realistic coupling functions for a range of applications, for example, for spins in a magnetic material that experience damping through the coupling to the crystal lattice \cite{Anders20, Reppert20} or for nitrogen vacancy centers, a solid-state analogue of trapped atoms, whose coherence lifetime in optical transitions is also limited by interaction with phonons \cite{Bar-Gill12,Fuchs10}. The link is explicitly established for a generic quantum system that couples locally to a bosonic environment. Extensions to other environments, such as fermionic environments, will be possible using similar arguments. The paper is organised as follows: we first introduce the two approaches involving $D_{\omega}$ and $\tens{{\cal C}}_{\omega}$, respectively. Setting up the dynamics of the environment, we evaluate its memory kernel and establish the link between $D_{\omega}$ and $\tens{{\cal C}}_{\omega}$.
We then demonstrate why the widely used Debye approximation is equivalent to the well-known Ohmic coupling function. While these approximations suffice at low frequencies, experimental DOS show peaks at higher frequencies. By approximating a given DOS with a series of Lorentzians, we show how the frequency dependence of the coupling may be obtained, leading to non-trivial dissipation regimes. As an illustration of the power of the derived link, we parametrize two experimentally measured phonon DOS, those of gold and iron (see Supplementary Material (SM) for the latter) and one theoretically computed phonon DOS of yttrium iron garnet (YIG), and extract key parameters for the corresponding coupling functions. These give direct insight into the impact of memory for any phonon-damped dynamics in these materials. \begin{figure} \includegraphics[width=0.47\textwidth]{fig1-a.jpg} \includegraphics[width=0.44\textwidth]{fig1-b.jpg} \caption{Schematic picture of two equivalent approaches to modelling the open quantum systems. (a) Wave vector approach: Each bath frequency $\omega$ includes several wave vectors $\{{\vect{k}}\}$. The interaction of each bath wave vector $\{{\vect{k}}\}$ with the system is often taken to have the same coupling strength $g$. (b) Frequency approach: Every bath frequency $\omega$ couples to the system with a strength given by $\tens{\cal C}_{\omega}$.} \label{fig1} \end{figure} \section{Two approaches}\label{two-appraoches} The Hamiltonian of a quantum system in contact with a bath is \begin{eqnarray} \hat{{\cal H}}_{tot} = \hat{{\cal H}}_S+ \hat{{\cal H}}_B +\hat{{\cal H}}_{SB}\,, \label{eq:H-tot} \end{eqnarray} where the bath Hamiltonian $\hat{{\cal H}}_B$ and the system Hamiltonian $\hat{{\cal H}}_S$ may contain the internal interactions among their own components. The system-bath interaction is assumed to be of product form, \begin{eqnarray} \hat{{\cal H}}_{SB} = - \hat{\vect{S}} \cdot \hat{\vect{B}}\,, \label{eq:system-bath-H} \end{eqnarray} where $\hat{\vect{S}}$ is a (Hermitian) system operator and $\hat{\vect{B}}$ is a bath operator, each with $d_s$ components. The form of the bath Hamiltonian $\hat{{\cal H}}_{B}$ and of the bath operator $\hat{\vect{B}}$ depends on the context. We consider here a bosonic bath, i.e. an infinite set of harmonic oscillators. In the literature, one can broadly distinguish two representations of the bath, working either in wave vector (WV) or frequency (F) \ space, as illustrated in Fig.\,\ref{fig1}. The wave vector approach\xspace is common in condensed matter physics \cite{Chen05, Weiss12} where the bath Hamiltonian is expressed as a sum over all possible modes ${\vect{k}}$ \begin{eqnarray} \hat{{\cal H}}_B^{WV} = \sum_{{\vect{k}}} \hbar \omega_{\vect{k}} \left(\hat{b}_{\vect{k}}^\dag \hat{b}_{\vect{k}}+\frac{1}{2}\right)\,. \label{eq:bath-H} \end{eqnarray} Here $\omega = \omega_{\vect{k}}$ gives the dispersion relation of a normal mode with wave vector ${\vect{k}}$ and $\hat{b}_{{\vect{k}}}$ ($\hat{b}_{{\vect{k}}}^{\dag}$) are bosonic annihilation (creation) operators of a mode excitation with commutation relations $[ \hat{b}_{{\vect{k}}},\hat{b}_{{\vect{k}}'}^{\dag}] = \delta_{{\vect{k}} {\vect{k}}'}$. Usually one considers a three-dimensional ($3$D) structure with wave vectors ${\vect{k}} = (k_x, k_y, k_z)$. 
For example, in a cubic $3$D lattice with number of lattice sites $N$, lattice constant $a$ and volume $V = Na^3$, each component of ${\vect{k}}$ runs through the range $\left(-\frac{\sqrt[3]{N}-1}{2}, \ldots, 0, \ldots, \frac{\sqrt[3]{N}-1}{2} \right) \, \frac{2 \pi}{\sqrt[3]{N} a}$. For large $N$ and $V$, and for any function $f(\omega_{{\vect{k}}})$ that only depends on the frequency $\omega_{{\vect{k}}}$, one can approximate sums over the wave vectors as \begin{align} \frac{1}{V} \sum_{ {\vect{k}}} f(\omega_{\vect{k}}) \cong \, \int \! \frac{\upd^3k}{(2\pi)^3} \, f(\omega_{\vect{k}}) =: \int \!\upd\omega \, D_{\omega} \, f(\omega)\,. \label{eq:l-omega} \end{align} This equation defines $D_{\omega}$ as the DOS per unit volume of bath modes at frequency $\omega$ \cite{Chen05}. For bosonic baths, we choose the standard interaction \cite{Weiss12} where the bath operator $\hat{\vect{B}}$ is linear in the bosonic mode operators (single phonon processes), \begin{eqnarray} \hat{\vect{B}}^{WV} = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}} \vect{ \xi}_{{\vect{k}}} \, \hat{b}_{\vect{k}} + \text{h.c.}\,, \label{eq:phonon-field-2} \end{eqnarray} where $\vect{ \xi}_{{\vect{k}}} = \left( \hbar g^2/ (2 \omega_{{\vect{k}}}) \right)^{1/2}\!\vect{ \epsilon}_{\vect{k}}$ with $\vect{ \epsilon}_{\vect{k}}$ a $d_s$-dimensional unit polarisation vector \cite{Breuer02}. The coupling constant $g$ is assumed to be mode-independent for simplicity \cite{Tritt05}. Eq.\,\eqref{eq:system-bath-H} may be generalized to the situation in which several system components $\hat{\vect{S}}_m$ are located at different positions $\vect{ R}_m$ by summing over interaction terms, i.e. $\hat{{\cal H}}_{SB} = -\sum_{m}\hat{\vect{S}}_m \cdot \hat{\vect{B}}( \vect{ R}_m )$. The field operators would then be $\vect{ R}$-dependent, i.e. $\hat{\vect{B}}^{WV}(\vect{ R}) = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}} \vect{ \xi}_{{\vect{k}}} \, \hat{b}_{\vect{k}} \, {\rm e}^{ {\rm i} {\vect{k}} \cdot \vect{ R} } + \text{h.c.}$. For simplicity, we will concentrate in the following on just one system site and drop the summation over $m$ again. Another approach to setting up the bath Hamiltonian $\hat{{\cal H}}_B$ and the interaction $\hat{{\cal H}}_{SB}$ is based on a frequency expansion often employed in the open quantum systems literature \cite{Weiss12, Breuer02}. In contrast to Eq.\,\eqref{eq:bath-H}, here $\hat{{\cal H}}_B$ is written directly as a sum or integral over frequencies, \begin{eqnarray} \hat{{\cal H}}_B^{F} = \frac{1}{2}\int_{0}^{\infty} \!\!\!\!\upd\omega \left( \hat{\vect{ P }}_{\omega}^2 + \omega^2\hat{\vect{ X}}_{\omega}^2\right), \label{eq:HB-omega} \end{eqnarray} where $\hat{\vect{ P }}_{\omega}$ and $\hat{\vect{X}}_{\omega}$ are $3$D [in general $d$-dimensional ($d$D)] momentum and position operators, respectively, for the bath oscillator with frequency $\omega$. Their components obey $[\hat{X}_{\omega,j}, \hat{P}_{\omega',l}] = {\rm i}\hbar \,\delta_{jl}\,\delta(\omega - \omega')$. In this approach, the bath operator in Eq.\,\eqref{eq:system-bath-H} is often chosen as \cite{Anders20} \begin{eqnarray} \hat{\vect{B}}^{F} = \int_{0}^{\infty} \!\!\!\!\upd\omega \,\tens{{\cal C}}_{\omega} \hat{\vect{ X}}_{\omega}\,, \label{eq:B-omega} \end{eqnarray} where the coupling function $\tens{{\cal C}}_{\omega}$ (in general a $d_s\times d$ tensor) weights the system-bath coupling at frequency $\omega$. {The system operators couple isotropically to the bath if $\tens{{\cal C}}_{\omega}\tens{{\cal C}}_{\omega}^{T} = \mathbbm{1}_{d_s}\,C_{\omega}^2$.
The scalar coupling function $C_{\omega}$} is related to the bath spectral density $J_{\omega}$, which alternatively quantifies the effect of the environment on the system as $J_{\omega} \propto C^2_{\omega}/\omega$ \cite{Breuer02,Weiss12}. The bath dynamics can be categorised \cite{Weiss12} based on the low-$\omega$ exponent of the spectral density, $J_{\omega}\propto \omega^s$, into three different classes, called Ohmic ($s = 1$), sub-Ohmic ($s < 1$), and super-Ohmic ($s > 1$). The difference between the wave vector approach\xspace and the frequency approach\xspace is that at a fixed frequency $\omega$, there is in Eq.\,\eqref{eq:B-omega} just one bath operator $\hat{\vect{X}}_\omega$ that couples to the system, while according to Eq.\,\eqref{eq:phonon-field-2}, the interaction is distributed over several wave vector modes ${\vect{k}}$ with weighting factors $\vect{ \xi}_{\vect{k}}$, their number being set by the DOS $D_{\omega}$ (see Fig.\,\ref{fig1}). We now want to address the question of the connection between the DOS $D_{\omega}$ and the coupling function $\tens{{\cal C}}_{\omega}$. To achieve this we consider one relevant quantity in both approaches and equate the corresponding formulas. In the following, we choose the memory kernel $\tens{\cal K}$ which encodes the response of the bath to the system operator $\hat{\vect{S}}$. Note that the choice of $\hat{\vect{B}}$ in Eq.\,\eqref{eq:phonon-field-2} restricts the discussion to the linear response of the bath, as is reasonable for a bath that is thermodynamically large \cite{Breuer02, Weiss12}. \section{Memory kernel in both approaches} To obtain the dynamics of the bath operator $\hat{{\vect{B}}}^{WV}$ in Eq.\,\eqref{eq:phonon-field-2} within the wave vector approach\xspace, the starting point is the equation of motion for $\hat{b}_{\vect{k}}$, \begin{eqnarray} \frac{ d \hat{b}_{\vect{k}} }{ dt } = - {\rm i} \omega_{{\vect{k}}} \hat{b}_{\vect{k}} + \frac{{\rm i} }{\hbar \sqrt{V}} \vect{ \xi}_{{\vect{k}}}^{\dag} \cdot \hat{{\vect{S}}} \, , \label{eq:equation-of-motion} \end{eqnarray} whose retarded solution contains two terms \begin{eqnarray} \hat{b}_{{\vect{k}}}(t) & = & \hat{b}_{{\vect{k}}}(0) \, {\rm e}^{- {\rm i} \omega_{{\vect{k}}} t} \\ && {} + \frac{{\rm i} }{\hbar \sqrt{V}} \vect{ \xi}_{{\vect{k}}}^{\dag} \cdot \int_{0}^{t}\!\!\upd t'\, \hat{{\vect{S}}}(t') \, {\rm e}^{- {\rm i} \omega_{{\vect{k}}} (t-t')}\,. \nonumber \label{eq:solution-equation-of-motion} \end{eqnarray} Therefore the time evolution of the bath operator can be written as $\hat{{\vect{B}}}^{WV}(t) = \hat{{\vect{B}}}_{\rm induced}^{WV}(t) + \hat{{\vect{B}}}_{ \rm response}^{WV}(t)$. The first term represents the internally evolving bath, given by $\hat{{\vect{B}}}_{\rm induced}^{WV}(t) = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}}\hat{b}_{{\vect{k}}}(0)\, {\rm e}^{-{\rm i}\omega_{{\vect{k}}} t}\vect{ \xi}_{{\vect{k}}} + \text{h.c.}$, while ${\hat{\vect{B}}}_{\rm response}^{WV}(t)$ contains information about the system's past trajectory, \begin{eqnarray} \hat{{\vect{B}}}_{\rm response}^{WV}(t) = \int_{0}^{\infty}\!\!\!\!\upd t'\, \tens{\cal K}^{WV}(t-t') \,\hat{{\vect{S}}}(t')\,, \label{eq:response-field} \end{eqnarray} where $\tens{\cal K}^{WV}(t-t')$ is the memory kernel (a tensor), \begin{eqnarray} \tens{\cal K}^{WV}(t-t') = \Theta(t-t') \frac{g^2}{ V}\sum_{{\vect{k}}} \vect{ \epsilon}_{{\vect{k}}} \vect{ \epsilon}_{{\vect{k}}}^{\dag} \frac{\sin \omega_{{\vect{k}}}(t-t')}{\omega_{{\vect{k}}}}\,.
\label{eq:kernel-1} \end{eqnarray} Here, the $\vect{ \xi}_{{\vect{k}}}$ have been expressed in terms of the unit polarisation vectors $\vect{ \epsilon}_{{\vect{k}}}$ [see after Eq.\,\eqref{eq:phonon-field-2}] and $\Theta(t-t')$ is the Heaviside function, which ensures that the bath responds only to the past state of the system, i.e. $t' < t$. For large volume $V$ and in the continuum limit, the summation over ${\vect{k}}$ in Eq.\,\eqref{eq:kernel-1} can be transformed into a frequency integration as in Eq.\,\eqref{eq:l-omega}. The projection on polarization vectors, averaged over an isofrequency surface $\Omega$, is taken into account by a ($d_s \times d_s$) positive Hermitian matrix $\tens{{\cal M}}_{\omega} = (\Omega)^{-1}\int \upd \Omega \; \vect{ \epsilon}_{{\vect{k}}} \vect{ \epsilon}^{\dag}_{{\vect{k}}}$, normalized to unit trace. With this notation, the memory tensor in the wave vector approach\xspace is \begin{eqnarray} \tens{\cal K}^{WV}(t-t') = \Theta(t-t') g^2\!\!\!\int_{0}^{\infty}\!\!\!\!\upd\omega\, \tens{{\cal M}}_{\omega} D_\omega \frac{\sin \omega(t-t')}{\omega} \,. \label{eq:kernel1} \end{eqnarray} Turning now to the frequency approach\xspace, the dynamics of the bath operator $\hat{\vect{X}}_{\omega}$ in Eq.\,\eqref{eq:B-omega} follows a driven oscillator equation \begin{eqnarray} \frac{d^2\hat{\vect{X}}_{\omega}}{dt^2}+\omega^2\hat{\vect{X}}_{\omega} = \tens{{\cal C}}_{\omega}^{T}\, \hat{\vect{S}} \, . \label{eq:equation-of-motion-X} \end{eqnarray} Its exact solution is \begin{eqnarray} \hat{\vect{X}}_{\omega}(t)&=& \hat{\vect{X}}_{\omega}(0)\cos{\omega t} + \hat{\vect{ P }}_{\omega}(0)\,\frac{\sin{\omega t}}{\omega}\nonumber\\ &&{} + \int_{-\infty}^{\infty}\!\!\!\!\upd t'G_{\omega}(t-t')\, \tens{{\cal C}}_{\omega}^{T}\, \hat{\vect{S}}(t')\,, \label{eq:X-solution} \end{eqnarray} where $G_{\omega}(t-t') = \Theta(t-t') \sin \omega(t-t')/\omega$ is the retarded Green's function. Inserting this solution in Eq.\,\eqref{eq:B-omega} leads again to induced and response evolution parts given, respectively, by $\hat{\vect{B}}_{\rm induced}^{F}(t)= \int_{0}^{\infty} \!\!\upd\omega\,\tens{{\cal C}}_{\omega}\left(\hat{\vect{X}}_{\omega}(0)\cos{\omega t} + \hat{\vect{ P }}_{\omega}(0)\,\frac{\sin{\omega t}}{\omega}\right)$ and \begin{eqnarray} \hat{\vect{B}}_{\rm response}^{F}(t)= \int_{0}^{\infty} \!\!\!\!\upd\omega\int_{0}^{\infty}\!\!\!\!\upd t'\,G_{\omega}(t-t') \,\tens{{\cal C}}_{\omega}\tens{{\cal C}}_{\omega}^{T} \,\hat{\vect{S}}(t')\,. \label{eq:B-1-omega} \end{eqnarray} Comparing with Eq.\,\eqref{eq:response-field} one can identify the memory kernel tensor in the frequency approach\xspace as \begin{eqnarray} \tens{\cal K}^{F}(t-t') = \int_{0}^{\infty} \!\!\!\!\upd\omega \; \tens{{\cal C}}_{\omega} \tens{{\cal C}}_{\omega}^{T} \,G_{\omega}(t-t')\,. \label{eq:kernel2} \end{eqnarray} \section{Coupling function $\tens{{\cal C}}_{\omega} $ versus DOS $D_{\omega}$} Since Eqs.\,\eqref{eq:kernel1} and \eqref{eq:kernel2} describe the same memory effects, we may set them equal, leading to \begin{eqnarray} \tens{{\cal C}}_{\omega} \tens{{\cal C}}_{\omega}^{T} = g^2\tens{\cal M}_{\omega} D_{\omega} \,. \label{eq:Dw-Cw} \end{eqnarray} This relation links the system-bath couplings in the two approaches, i.e. the DOS $D_{\omega}$ is proportional to the Hermitian ``square'' of the coupling function $\tens{{\cal C}}_{\omega}$. This is the first result of the paper.
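As an illustration, the lattice kernel \eqref{eq:kernel-1} and the frequency-integral kernel \eqref{eq:kernel2} can be compared numerically. The following minimal Python sketch assumes, purely for illustration, a one-dimensional chain with acoustic dispersion $\omega_{\vect{k}}=c|k|$ and scalar coupling ($d_s=1$, $\tens{\cal M}_\omega=1$), so that \eqref{eq:Dw-Cw} reduces to $C_\omega^2=g^2 D_\omega$ with the exact one-dimensional DOS $D_\omega = 1/(\pi c)$ below the zone-boundary frequency:
\begin{verbatim}
import numpy as np

# 1D chain with acoustic dispersion w_k = c|k|, periodic boundaries
N, a, c, g = 4001, 1.0, 1.0, 1.0   # sites, lattice const., sound speed, coupling
V = N * a
n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
wk = c * np.abs(2 * np.pi * n / (N * a))
wk = wk[wk > 0]
t = np.linspace(0.01, 20.0, 200)

# wave vector approach: K(t) = (g^2/V) sum_k sin(w_k t)/w_k
K_WV = (g**2 / V) * np.sum(np.sin(np.outer(t, wk)) / wk, axis=1)
K_WV += (g**2 / V) * t             # k = 0 term: sin(w t)/w -> t

# frequency approach with C_w^2 = g^2 D_w, D_w = 1/(pi c) below w_D
wD = c * np.pi / a                 # zone-boundary (Debye) frequency in 1D
w = np.linspace(1e-6, wD, 20000)
K_F = np.trapz(g**2 / (np.pi * c) * np.sin(np.outer(t, w)) / w, w, axis=1)

print(np.max(np.abs(K_WV - K_F)))  # small: agreement up to O(1/N) corrections
\end{verbatim}
The two kernels coincide up to finite-size corrections, illustrating that the coupling function of the frequency approach\xspace must indeed carry the DOS of the underlying lattice.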
Relation\,\eqref{eq:Dw-Cw} holds under two assumptions: First, we assumed the polarization matrix $\tens{\cal M}_{\omega}$ is sufficient to carry all information about the isofrequency surface in ${\vect{k}}$-space. Second, we considered identical coupling strengths $g$ to all modes of the reservoir (as sketched in Fig.\,\ref{fig1}) -- however, an extension to a $g$ that depends on frequency $\omega$ is straightforward. The result\,\eqref{eq:Dw-Cw} may be applied to any quantum system that interacts linearly with a bosonic bath. For instance, magnetic materials in which spins $\hat{\boldsymbol S}$ relax in contact with a phonon reservoir have been studied extensively \cite{Costi03, Anders20,Thorwart01}. An impurity in a condensate described by a Caldeira-Leggett model in Ref.\cite{Lampo17} is another example. For isotropic coupling to an isotropic bath, we may set $\tens{{\cal C}}_{\omega} = \mathbbm{1}_{d_s}\,C_{\omega}$ with scalar $C_{\omega}$, and $\tens{\cal M}_{\omega} = \mathbbm{1}_{d_s}/d_s$ because the trace of $\tens{\cal M}_{\omega}$ averages the norm square of the unit polarization vectors $\vect{ \epsilon}_{{\vect{k}}}$. With these assumptions, Eq.\,\eqref{eq:Dw-Cw} reduces to a scalar equation \begin{eqnarray} C_{\omega}^2 = \frac{g^2}{d_s} D_{\omega}\,. \label{eq:scalar-Dw-Cw} \end{eqnarray} Note that in general $d_s \leq d $. An example where $d_s = d$ is a $3$D spin vector that couples to a $3$D phononic environment \cite{Anders20}. A rectangular ($d_s \times d$) coupling matrix $\tens{{\cal C}}_{\omega}$ may model a graphene-on-substrate structure, where the electronic system ($d_s = 2$) is in contact with a $3$D phononic bath \cite{Cusati17}. \section{Debye approximation} In condensed-matter physics, the Debye model is used to describe the phonon contribution to a crystal's thermodynamic properties. It assumes an acoustic dispersion, i.e. $\omega = c \vert {\vect{k}}\vert$ with an averaged sound speed $c$, resulting, in $3$D, in \cite{Chen05} \begin{eqnarray} D_{\omega}^{\rm Deb} = \frac{3\,\omega^2}{2\pi^2 c^3}\, \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Debye-DOS} \end{eqnarray} Here $\omega_{\rm D}$ is the Debye frequency, i.e. the maximum bath frequency, which in practice is taken to be near the edge of the Brillouin zone. For example, for gold, see Fig.\,\ref{fig2} (a), the Debye model fits the DOS data reasonably well in frequency region~$I$ up to $\approx 1.4 \un{THz}$. For the Debye DOS, our relation Eq.\,\eqref{eq:scalar-Dw-Cw} implies the coupling function (setting $d_s = d = 3$) \begin{eqnarray} C_{\omega}^{\rm Deb} = \frac{g\,\omega}{\sqrt{2\pi^2 c^3}} \, \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Debye-Cw} \end{eqnarray} The scaling of $C^{\rm Deb}_{\omega}$ implies that the spectral density $J(\omega) \propto C_{\omega}^2/\omega$ is Ohmic, i.e. $J(\omega) \propto \omega$. Hence, the Debye model with constant coupling $g$ in the wave vector approach captures the same relaxation dynamics as an Ohmic bath in the frequency approach. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{fig2.pdf} \caption[]{ (a) Debye DOS (pink solid line, Eq.\,\eqref{eq:Debye-DOS}) and two-peak Lorentzian DOS (blue solid line, Eq.\,\eqref{eq:Dw-Lorentzian-sum}) fitted to a measured phonon DOS for gold (red dots), as reported in Ref.\cite{Munoz13}. The Debye frequency for gold is $\omega_{\rm D}/2\pi = 3.54\un{THz}$, as given in Ref.\cite{Chen05}.
The fitted peak frequencies $\omega_{0,j}$, widths $\Gamma_{j}$ and peak ratios $A_j/A_1$ are given in Table~\ref{tab:fit-to-Au}. The grey dashed lines separate three frequency regimes discussed in the main text. (b) Memory kernels ${\cal K}(t-t')$ corresponding to the Debye DOS and the two-peak Lorentzian DOS. } \label{fig2} \end{center} \end{figure} Beyond $3$D cubic lattices, $D_{\omega}$ will depend on the dimensionality and lattice symmetry. What happens if the lattice is effectively two- or one-dimensional? To answer this, let us imagine a $d$D isotropic lattice with volume $V=Na^d$. The volume element of such a lattice in ${\vect{k}}$-space corresponds to $\upd^{d}k = \Omega_d k^{d-1} \upd k$ where $\Omega_d = 2, 2\pi, 4\pi$ is the $d$D solid angle for $d = 1, 2, 3$, respectively. Analogously to the $3$D lattice, using the acoustic dispersion with an averaged sound speed $c$, one finds the $d$D Debye DOS \begin{eqnarray} D_{\omega}^{(d)} = \frac{ \Omega_d\, \omega^{d-1} }{ (2 \pi c )^d } \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Dw-in-dD} \end{eqnarray} Via Eq.\,\eqref{eq:scalar-Dw-Cw} we obtain the power-law $C_{\omega}\propto \omega^{(d-1)/2}$ for the corresponding coupling functions, which implies spectral densities $J(\omega) \propto \omega^{d-2}$. Thus, isotropic baths in $2$D or $1$D behave in a distinctly sub-Ohmic way. \section{Inferring coupling functions from DOS data} Beyond the conceptually useful Debye model, a structured DOS with several peaks is a generic feature of real materials \cite{Munoz13,Mauger14}. Sums of Lorentzian or Gaussian functions are two convenient candidates to approximate such peaked densities \cite{Lemmer18}. Here, we fit the experimentally measured DOS for gold \cite{Munoz13} (and for iron \cite{Mauger14} in the SM) and the theoretically computed DOS for YIG \cite{Wang20} to a function consisting of multiple Lorentzians, \begin{eqnarray} D_{\omega}^{\rm Lor} = \frac{6\, A_1}{g^2\pi} \sum_{j=1}^{\nu} \frac{A_{j}\Gamma_{j}}{A_1}\frac{\omega^2}{(\omega_{0,j}^2 - \omega^2)^2+\Gamma_{j}^2 \omega^2}\,. \label{eq:Dw-Lorentzian-sum} \end{eqnarray} The fits, see Figs.\,\ref{fig2} (a), \ref{fig3} and the figure in the SM, reveal the material-specific peak frequencies $\omega_{0,j}$, peak widths $\Gamma_j$ and peak ratios $A_j/A_1$, see Table\,\ref{tab:fit-to-Au} and the tables in the SM, while the first peak amplitude $A_1$ remains undetermined. Fixing $A_1$ would require information in addition to the DOS, such as the system's relaxation rate due to the interaction with the phonon bath. Note that phonon DOS are generally slightly temperature dependent \cite{Mauger14}. Hence the fit parameters in Eq.\,\eqref{eq:Dw-Lorentzian-sum} will be (usually weak) functions of temperature, a dependence that only matters when a large range of temperatures is considered. \begin{table}[htbp] \centering {\footnotesize \caption{Fit parameters of the two-peak Lorentzian matched to the experimentally measured DOS for gold reported in Ref.\cite{Munoz13} (see Fig.~\ref{fig2} (a)).} \vspace{2mm} \begin{tabular}{c|ccc} \hline \text{peak}& \text{frequency} & \text{width} & \text{ratio}\\ $j$ & $\omega_{0,j}/2\pi\ [\un{\!THz}]$ & $\Gamma_{j}/2\pi\ [\un{\!THz}]$ & $A_j/A_1$\\ \hline 1 & 2.11 & 1.3 & 1 \\ 2 & 4.05 & 0.56 & 0.15 \\ \hline \end{tabular} \label{tab:fit-to-Au} } \end{table} The peak widths in Eq.\,\eqref{eq:Dw-Lorentzian-sum} determine a characteristic memory time $1/\Gamma_j$.
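For concreteness, the two-peak fit of Table~\ref{tab:fit-to-Au} is easily evaluated numerically. A short Python sketch (the values $g=1$ and $A_1=1$ are arbitrary assumptions, since $A_1$ is not fixed by the DOS alone):
\begin{verbatim}
import numpy as np

def D_lor(w, peaks, A1=1.0, g=1.0):
    """Multi-peak Lorentzian DOS; peaks = [(w0_j, Gamma_j, A_j/A_1), ...]
    in angular frequency units."""
    D = np.zeros_like(w)
    for w0, G, r in peaks:
        D += r * G * w**2 / ((w0**2 - w**2)**2 + G**2 * w**2)
    return 6.0 * A1 / (g**2 * np.pi) * D

THz = 2 * np.pi * 1e12                    # nu [THz] -> omega [rad/s]
gold = [(2.11 * THz, 1.30 * THz, 1.00),   # Table I, peak 1
        (4.05 * THz, 0.56 * THz, 0.15)]   # Table I, peak 2

w = np.linspace(0.01, 6.0, 600) * THz
D = D_lor(w, gold)
print([1e12 / G for _, G, _ in gold])     # memory times 1/Gamma_j in ps
\end{verbatim}
The printed memory times ($\approx 0.12$ and $0.28\un{ps}$) reproduce the picosecond scale discussed next.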
However, beyond this single timescale, the functional dependence of the memory is fully determined by the kernel Eq.\,\eqref{eq:kernel1}, which for multi-peak Lorentzians is proportional to \begin{equation} \tens{\cal K}^{\rm Lor}(t-t') \propto \sum_{j=1}^{\nu} A_j e^{-\frac{\Gamma_{j}(t-t')}{2}} \frac{\sin (\omega_{1,j}(t-t'))}{\omega_{1,j}} \Theta(t-t')\,, \end{equation} with $\omega_{1,j} = \sqrt{\omega_{0,j}^2 - \Gamma_j^2/4}$. For gold, Fig.\,\ref{fig2} (a) shows the phonon DOS measured by Mu\~{n}oz {\it et al.} \cite{Munoz13}, together with our two-peak Lorentzian fit. The fit gives good agreement in all frequency regimes, with a slightly slower decay in region $III$ than the measured DOS. For a system coupled to phonons in gold, the peak widths (see Table~\ref{tab:fit-to-Au}) immediately imply a characteristic memory time in the picosecond range. The relevant kernel is shown (blue) in Fig.\,\ref{fig2} (b) for the two-peak fitted DOS of gold shown in~(a). Using the Debye model instead would give a qualitatively different behaviour: the pink curve shows a distinctly slower long-time tail, due to the sharp cutoff at the Debye frequency. Without any cutoff, it would transform into $\tens{\cal K} (t-t') \propto \partial_{t'}\delta (t - t')$, implying no memory \cite{Anders20}. In contrast, the Lorentzian fit (blue) provides a quantitatively accurate memory kernel. Our approach may provide a more realistic picture of the magnetization dynamics based on actual material data. YIG \cite{Barker20, Barker21} is a typical magnetic insulator in which the relaxation of a spin DoF $\hat{\boldsymbol S}$ is dominated by the coupling to phonons \cite{Sebastian18}, similar to magnetic alloys like Co-Fe \cite{Schoen16}, while in metallic materials, the coupling to electrons is more relevant \cite{Kormann14}. Fig.\,\ref{fig3} illustrates a theoretically computed DOS for YIG \cite{Wang20} with a fit that contains eighteen Lorentzians. (Parameters are displayed in Table~\ref{tab:fit-to-YIG} in the SM.) In this fit, a few negative amplitudes $A_j$ in Eq.\,\eqref{eq:Dw-Lorentzian-sum} are needed to reproduce the gap near $16\un{THz}$. While positive $A_j$ are easily justified as energy transfer from the system to the bath modes, we can understand negative amplitudes as energy flow in the reverse direction, i.e. from the bath to the system. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{fig4.pdf} \caption[]{ Illustration of the eighteen-peak Lorentzian DOS, Eq.\,\eqref{eq:Dw-Lorentzian-sum} (orange curve), fitted to the theoretically predicted phonon DOS $D_{\omega}$ for YIG (cyan curve) reported in Ref.\cite{Wang20}. The fitted peak frequencies $\omega_{0,j}$, widths $\Gamma_{j}$ and amplitude ratios $A_j/A_1$ can be found in Table~\ref{tab:fit-to-YIG} in the SM.} \label{fig3} \end{center} \end{figure} Using additional information on the typical Gilbert damping parameter for this material \cite{Krysztofik17}, the peak amplitude $A_1$ can also be determined (see the SM). More generally, using relation \eqref{eq:Dw-Cw}, the multi-peak DOS \eqref{eq:Dw-Lorentzian-sum} implies coupling functions of the form \begin{eqnarray} C_{\omega}^{\rm Lor} = \sqrt{A_1\sum_{j=1}^{\nu} \frac{2A_{j}\Gamma_{j}}{A_1\pi}\frac{\omega^2}{(\omega_{0,j}^2 - \omega^2)^2+\Gamma_{j}^2 \omega^2}}\,.
\label{eq:Cw-Lorentzian-sum} \end{eqnarray} This allows us, for the first time, to specify the overall magnitude of the coupling of a system to a phononic bath using the above multi-peak Lorentzian fits to the measured DOS in real materials. This second result of our paper should be useful for modelling the Brownian motion of spins \cite{Anders20, Coffey20} and in applications such as quantum information processing with solid-state spin systems \cite{Hegde20}. \section{Conclusion} We have derived the general relation \,\eqref{eq:Dw-Cw} that translates the function ${\cal C}_{\omega}$, determining the coupling of a generic system to a bosonic bath at various frequencies, into the density of states $D_{\omega}$ of the latter. This was achieved by evaluating the memory kernel of dynamical bath variables in two equivalent approaches. Several applications of the relation were then discussed. We demonstrated how for systems damped by phonons in $3$D, Debye's quadratic DOS captures the same physics as a linear coupling function $C_{\omega}$ which corresponds to an Ohmic spectral density. Secondly, we have established how to infer $C_{\omega}$ from the measured DOS of a material, such that it reflects the specific properties of the material and goes beyond a purely mathematical choice of coupling function. Given that real materials have densities of states with multiple peaks, the typical picture which emerges from our general relation \eqref{eq:Dw-Cw} is that the coupling function is non-Ohmic and memory effects in the system dynamics become important. The corresponding time scales (in the ps range, e.g., for gold in Fig.\,\ref{fig2} (b)) can be conveniently determined by fitting multiple Lorentzians to the bath DOS. Future work could address how to extend relation \eqref{eq:Dw-Cw} to systems interacting with multiple independent baths. This should be suitable for non-equilibrium settings involving different temperatures \cite{Millen2014}, as used in heat transport \cite{Dhar08}. The impact of memory may also change the behaviour of systems like superconducting qubits or two-level systems that are in contact with two baths \cite{Senior20, Segal05}. \section{Acknowledgments} We thank Jorge A. Mu\~{n}oz, Lisa M. Mauger and Brent Fultz for sharing their experimental data. We would also like to thank Joseph Barker, Luis Correa, and Simon Horsley for illuminating discussions, and Matias Bargheer for comments on an early draft of the paper. We gratefully acknowledge funding for this research from the University of Potsdam.
{ "attr-fineweb-edu": 1.894531, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcNnxK5YsWTJsOuxA
\section{Introduction} In this paper we present our methodology used in a winning entry for the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014 (GEFCom2014). The competition consisted of twelve weekly tasks which required using historical data for the estimation of $99$ quantiles ($0.01, 0.02, ..., 0.99$) for each hour of the following month. Each forecast is evaluated using the pinball function. For further details on the competition structure and the data the interested reader should refer to the GEFCom2014 introduction paper \cite{GefComIntro2015}. In Section \ref{PrelimAnalysis} we present a preliminary analysis of the data that motivates the development of the main forecasting methods introduced in Section \ref{Methods}. In Section \ref{taskOutlines} we give a short description of our submissions in chronological order to explain the reasoning behind the chosen forecasts and the development of the subsequent forecasts. We present an overall view of the results and conclude in Section \ref{Discussion} with a discussion, lessons learned and future work. \section{Preliminary Analysis} \label{PrelimAnalysis} We start by performing a preliminary analysis to determine our initial forecast methods. We first tested the competition's initial historical data set to confirm that load and temperature are strongly correlated, as shown in other studies \cite{Charlton2012}; see also the GEFCom2014 introduction paper \cite{GefComIntro2015} for the time-series plots of the data. This motivates the development of our kernel density estimation method conditional on the temperature (see Section \ref{sec:CKDT}). We also found that all the weather stations were strongly correlated with each other and the load data. Hence as an initial estimate of the temperature we simply took an average over all $25$ stations. The load data has strong daily, weekly and yearly seasonalities as well as trends \cite{GefComIntro2015}. A visual analysis of the load data showed that certain hours of the day exhibited strong biannual seasonalities (such as $11$pm) whereas others did not (e.g. $3$pm). This could be due to heating and cooling appliances being employed through the seasons. This inspires our choice of biannual model in the quantile regression based forecast (see Section \ref{QR}). Consideration of the autocorrelation and partial autocorrelation plots confirmed the presence of the weekly and daily periodicities. The forecasts described in the following section are designed with these periodicities in mind. \section{Methodology} \label{Methods} In this section we present the main methods implemented for the competitive tasks. \subsection{Kernel Density Estimation (KDE)} \label{KDE} Many of the methods we employ are non-parametric kernel density based estimates, similar to those presented in \cite{Jeon2012} for probabilistic wind forecasting and \cite{Arora2014} for household-level probabilistic load forecasting. This method is motivated by the strong weekly correlations in the data. A simple kernel density estimate produces an estimate of the probability distribution function $f(X)$ of the load $X$ (at a particular future time period) using past hourly observations $\{X_i\}$ (where $i=1$ corresponds to the beginning of the historical load data, $1^{st}$ Jan 2005). It is given by \begin{equation} f(X)=\frac{1}{nh_x}\sum_{i=1}^n K \left( \frac{X-X_i}{h_x} \right), \label{eq:KDE} \end{equation} where $h_x$ is the load bandwidth.
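Since the competition asks for the quantiles $0.01,\ldots,0.99$ rather than the density itself, each density estimate of the form \eqref{eq:KDE} is converted to quantiles by inverting the estimated cumulative distribution function. A minimal Python sketch (the Gaussian kernel, the bandwidth value and the synthetic data are assumptions of this illustration):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def kde_quantiles(X_hist, h_x, taus=np.arange(0.01, 1.0, 0.01)):
    """Quantiles of a Gaussian-kernel KDE: the estimated CDF is a
    mean of normal CDFs, which is inverted by interpolation."""
    grid = np.linspace(X_hist.min() - 4*h_x, X_hist.max() + 4*h_x, 2000)
    cdf = norm.cdf((grid[:, None] - X_hist[None, :]) / h_x).mean(axis=1)
    return np.interp(taus, cdf, grid)

X = np.random.normal(150.0, 20.0, size=500)   # synthetic hourly loads
q = kde_quantiles(X, h_x=5.0)
print(q[[0, 49, 98]])                         # 1%, 50% and 99% quantiles
\end{verbatim}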
We use a Gaussian kernel function, $K(\bullet)$, for all our kernel based forecast methods. Our first method is a KDE with a time decay parameter, $0<\lambda\leq 1$. The role of the decay parameter is to give higher weight to more recent observations. To forecast day $D$ of the week, $D = 1, 2, \dots, 7,$ at hour $h$, $h = 1, 2, \ldots, 24$, we applied a KDE on all historical observations of the same day $D$ and hour $h$. This method only considers observations belonging to the same hourly period of the week, denoted by $w$, $w = 1, \ldots, 168$, and we refer to it as \textit{KDE-W}. This can be expressed as \begin{equation} f(X)=\frac{1}{h_x}\sum_{\substack{ i=1 \\ \{ i\!\!\!\!\!\mod \!s = w \} }}^n \frac{\lambda ^{\alpha(i)}}{\sum_{\substack{j=1 \\ \{j\!\!\!\!\!\mod \!s = w\} }}^n\lambda^{\alpha(j)}} K \left( \frac{X-X_i}{h_x} \right). \label{KDEW} \end{equation} The parameter $s = 168$ is the number of forecasting hours in a week and $\alpha(i)$ is a periodic function given by \footnote{The careful reader should note that the formula \eqref{expoKDEW} might need a further correction by one when $D$ is in a leap year. However this does not affect our results, since we did not forecast leap years. Additionally such an error would have a negligible effect on the weight.} \begin{eqnarray} \alpha(i) &=& \min \left (|\mathcal{D} - (\mathcal{D}(i)-\mathbf{1}_{A}(i))|, \mathcal{T}(i) - |\mathcal{D} - \mathcal{D}(i)|\right ), \label{expoKDEW} \end{eqnarray} where $\mathcal{D}(i) = 1, 2, \ldots, \mathcal{T}(i)$ is the day of the year (consisting of $\mathcal{T}(i)$ days) corresponding to the historical data $X_i$ and $\mathcal{D}$ is the day of the year corresponding to the forecasted day. To correct for leap years we use an indicator function $\mathbf{1}_{A}(i)$ where $A=\{i | \mathcal{D}(i)>28 \text{ and } \mathcal{T}(i)=366 \}$. Expression \eqref{expoKDEW} is simply a periodic absolute value function with annual period, whose minimum values occur annually on the same dates as the forecasted day. This method is similar to the one presented in \cite{Arora2014}; the new feature is the half-yearly symmetry of the time-decay exponential \eqref{expoKDEW}. Since there is an annual periodicity in the load we incorporated it into the time-decay parameter such that observations during similar days of the year influence the forecast more than other, less relevant observations. The decay parameter also helps us to take into account the non-stationary behaviour of demand. This method performed better than a similar KDE-W using only a simple monotonically decreasing time-decay parameter across the year. The model parameters were generated using cross-validation on the month prior to the forecasting month. To find the optimal bandwidth, $h_x$, we used the \textit{fminbnd} function from the optimisation toolbox in Matlab. For the time-decay parameter $\lambda$ we considered different values between $0.92$ and $1$ with $0.01$ increments\footnote{The time-decay parameter must be in the interval $(0,1]$: the smaller the value, the fewer historical observations have a significant influence on the final forecast. After testing over several tasks we found that the decay parameter is bounded below by $0.92$.}. The kernel density based estimate has been used as a benchmark in probabilistic forecast methods applied to household level electricity demand. It serves as a useful starting point for our forecasts \cite{Arora2014}.
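The periodic day distance \eqref{expoKDEW} and the resulting weights $\lambda^{\alpha(i)}$ in \eqref{KDEW} are easily sketched in Python; for brevity the leap-year correction is dropped and a fixed $365$-day year is assumed in this illustration:
\begin{verbatim}
import numpy as np

def alpha(D_obs, D_fc, T=365):
    """Periodic day distance, simplified to a T-day year:
    shortest distance on the calendar circle."""
    d = np.abs(D_fc - D_obs)
    return np.minimum(d, T - d)

def kdew_weights(days_obs, D_fc, lam=0.97):
    """Normalised time-decay weights lambda**alpha(i)."""
    w = lam ** alpha(np.asarray(days_obs), D_fc)
    return w / w.sum()

days = np.arange(1, 366)
w = kdew_weights(days, D_fc=180)
print(int(days[np.argmax(w)]))   # -> 180: same-date observations dominate
\end{verbatim}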
The method has the advantage of being quicker to implement than more complicated kernel based methods, such as the conditional kernel density estimates on independent parameters, which we introduce in the next sections. \subsection{Conditional Kernel Density Estimate on Period of Week (CKD-W)} \label{sec:CKDW} A KDE forecast conditional on the period of the week, denoted by $w$, $w = 1, \ldots, 168$, (CKD-W) \cite{Arora2014} gives a higher weight to observations from similar hourly periods of the week and can be represented as \begin{equation} f(X|w)=\frac{1}{h_x}\sum_{i=1}^{n}\frac{\lambda^{\alpha(i)} K((w_i-w)/h_w)}{\sum_{j=1}^{n}\lambda^{\alpha(j)} K((w_j-w)/h_w)}K \left( \frac{X-X_i}{h_x} \right) \label{CKDW} \end{equation} where $\alpha(i)$ is defined in \eqref{expoKDEW}. This method is similar to the one presented in \cite{Arora2014}; the new feature is the half-yearly symmetric time-decay exponential \eqref{expoKDEW}, which is justified by the yearly periodicity of the load as explained in the previous section. The validation process can be computationally very expensive, especially while searching for multiple optimised parameters (here there are three: the bandwidths for the load and week-period variables, and the time decay). In particular, despite using the Matlab parallelisation toolbox, executing this method on our (conventional) machines\footnote{All forecasts were executed on a machine with Intel Core i7-361QM Quad-Core Processor @ 2.30GHz and 16GB of memory.} required more than a day to complete, which is not practical given the weekly constraints of the competition. In an attempt to reduce the computational cost, we reduced the number of historical observations and the length of the validation period. We only used observations starting from January of $2008$ and we cross-validated our parameters using only one week from the validation month\footnote{Initially we used the first week, but later we used the last week from the validation month because it is closer to the period to be forecasted. However the improvement was minor.}. For the optimisation of the bandwidths we used the \textit{fminsearch} function (implementing a $\log$ transformation to ensure that we only model for positive values) from the optimisation toolbox in Matlab. For the time-decay parameter we looped over different values of $\lambda$ between 0.92 and 1 with 0.01 increments. At the final stages of the competition we used the \textit{fminsearchbnd} function\footnote{\url{http://www.mathworks.com/matlabcentral/fileexchange/8277-fminsearchbnd--fminsearchcon}.}, which improves both the computational time and the accuracy. We call this implementation of the method CKD-W2, see also Section \ref{taskOutlines}. \subsection{Conditional Kernel Density Estimate on Temperature Forecast (CKD-T)} \label{sec:CKDT} Weather information is particularly useful for an accurate load forecast (among many references in the literature see \cite{Jeon2012} in the context of CKD methods, and also a winning entry of GEFCom2012 \cite{Charlton2012}). For this reason we implemented a KDE method conditional on the temperature (CKD-T). We take the explanatory variable to be the mean hourly temperature $T$ from the 25 weather substations.
The conditional probability density is given by \begin{equation} f(X|T)=\frac{1}{h_x}\sum_{i \in \mathcal{A}}\frac{ K((T_i-T)/h_T)}{\sum_{j \in \mathcal{A}}K((T_j-T)/h_T)}K \left( \frac{X-X_i}{h_x} \right), \label{CKDT} \end{equation} where $h_T$ is the bandwidth of the temperature kernel and $T_i$ is the temperature corresponding to the same hour $h$ and day $d$ as the load $X_i$. The index subset $\mathcal{A}$ consists of indices at time $h$ and days $d-5,\ldots,d,\ldots,d+5$ of all previous years. The formula \eqref{CKDT} does not include a time-decay parameter since we assume the temperature is the main driver of seasonality; including one would increase the computational expense for very little gain. For parameter optimisation we used the \textit{fminsearch} function, implementing a $\log$ transformation as with the CKD-W forecast. Since temperature forecasts are inaccurate beyond a few days, this method was only implemented for the first day of a task. As we will shortly describe in Section \ref{Sec:Comb}, the remaining days of a task are forecasted using a weighted combination of CKD-W and a quantile forecast, introduced in Section \ref{QR}. \subsubsection{Temperature Forecast} \label{TempForecastStuff} The CKD-T method requires a forecast of the mean temperature in order to create a load forecast. We follow a simple autoregression forecast method, similar to that presented in \cite{Liu2014}. The model was chosen for its simplicity. In addition, temperature can change rapidly within a couple of days, and without further data (such as wind speeds and directions) or access to complicated numerical weather prediction software, we decided a simple model was appropriate for our purposes. The model consists of a trend, seasonalities (both diurnal and yearly) and lagged temperature variables. We model the temperature $T_j$ at timestep $j$ as \begin{equation} T_j=\beta_0+\beta_1 j+S_j^d+S_j^a+\sum_{k=1}^{25} \alpha_k T_{j-k}. \label{eq:TempForecast} \end{equation} The diurnal seasonal terms are described by \begin{equation} S_j^d=\sum_{p=1}^P\left (\gamma_p \sin\left( 2\pi p \frac{d(j)}{24} \right)+ \delta_p \cos \left( 2\pi p \frac{d(j)}{24} \right) \right ), \end{equation} where $\gamma_p, \delta_p$ are Fourier coefficients (with $P=4$) and $d(j)= j \mod 24$ is the conversion to the hour of the day. The yearly seasonal terms are modelled by \begin{equation} S_j^a=\sum_{m=1}^M\psi_m \sin\left( 2\pi m \frac{(f(j) + \phi)}{365} \right), \end{equation} where $\psi_m$, $m=1, 2, ..., M$ and $M=3$, are the coefficients and $f(j)= j/24$. The method slightly differs from that in \cite{Liu2014}, which uses $f(j)=\lfloor j/24 \rfloor$ (the day of the data). The shift $\phi$ ensures the periodic terms match the period of the data as well as possible. The value $\phi=-85$ was chosen such that the mean absolute percentage error (MAPE) is minimised. We set $j=0$ for the start of the data at midnight on $1^{st}$ January $2005$. The final terms of equation (\ref{eq:TempForecast}) are the lags. By consideration of the autocorrelation, we checked the potential number of lag terms to use and found that the previous $25$ hours gave the minimum MAPE for day ahead and month ahead temperature forecasts over November 2009 (a preliminary task). The values of $M, P$ and $\phi$ were all chosen by cross validation over the month of November $2009$.
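A sketch of the corresponding regressor matrix (with $P=4$, $M=3$, $\phi=-85$ and $25$ lags as above; the ordinary least-squares fit stands in for Matlab's \textit{regress}, and the toy data are an assumption):
\begin{verbatim}
import numpy as np

def design_matrix(T_hist, P=4, M=3, phi=-85.0, lags=25):
    """Regressors of the temperature model: trend, diurnal and
    yearly harmonics, and 25 hourly lags."""
    j = np.arange(lags, len(T_hist))
    cols = [np.ones(len(j)), j.astype(float)]          # beta_0, beta_1 j
    d = j % 24
    for p in range(1, P + 1):                          # diurnal terms
        cols += [np.sin(2*np.pi*p*d/24), np.cos(2*np.pi*p*d/24)]
    f = j / 24.0
    for m in range(1, M + 1):                          # yearly terms
        cols.append(np.sin(2*np.pi*m*(f + phi)/365))
    for k in range(1, lags + 1):                       # lagged temperatures
        cols.append(T_hist[j - k])
    return np.column_stack(cols), T_hist[j]

A, y = design_matrix(10.0 + np.random.randn(2000))     # toy hourly series
coef, *_ = np.linalg.lstsq(A, y, rcond=None)           # OLS fit
\end{verbatim}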
The coefficients $\beta_0, \beta_1, \gamma_p, \delta_p$ and $\psi_m$ were found via the linear regression function in Matlab, \textit{regress}. We attempted to select the most representative and accurate weather stations to improve the day ahead CKD-T forecast. We chose groups of three and six weather stations which gave the best MAPE for a day ahead temperature forecast. Using the average temperature from these stations in (\ref{eq:TempForecast}) did not provide a consistent improvement in the pinball scores. Hence we only used the mean over all weather stations for the CKD-T day ahead forecasts. \subsection{Quantile Regression (QR)} \label{QR} The quantile regression is a generalisation of standard regression where each quantile is found by fitting a linear model to historical observations through the minimisation of a quantile-specific loss function \cite{Koenker1978}. Suppose we have a model of the demand, at time $t=1, ..., n$, given by $f(\mathbf{U}_t, \boldsymbol\beta)$, where $\mathbf{U}_t$ are the independent variables and $\boldsymbol\beta$ are the unknown model parameters. Also suppose we have observations of the load $y_t$ at the same times $t=1, ..., n$. Then for a given quantile $q$ the aim is to find the parameters $\boldsymbol\beta_q$ given by \begin{equation} \boldsymbol\beta_{q}=\underset{\boldsymbol\beta}{\operatorname{argmin}} \sum_{t=1}^{n}\rho_{q}(y_{t}-f(\mathbf{U}_t, \boldsymbol\beta)), \label{eq:Quantileregression} \end{equation} where $\rho_{q}(\bullet)$ is the loss function given by \begin{equation} \rho_{q}(z)=|z(q-\mathbf{1}_{(z<0)})|, \label{eq:lossfunction} \end{equation} where $\mathbf{1}_{(z<0)}$ is the indicator function. We created a simple linear function, for each hour of the day separately, based only on trend and seasonal terms. For each daily hour on day $k$ (with $k=1$ meaning $1^{st}$ Jan $2005$) of the data set, we define our model by \begin{equation} L_k=a_0+a_1 k+\sum_{p=1}^2 b_p \sin\left( 2\pi p \frac{(k +\phi_1)}{365} \right)+ \sum_{m=1}^2 c_m \sin \left( 2\pi m \frac{(k+\phi_2)}{365} \right). \end{equation} The first shift is chosen as $\phi_1=-111$, by minimising the MAPE, and the second shift is $\phi_2=\phi_1-364/2$. The double seasonality offset term was used because of the double yearly period discovered in the load for some hours of the day. The coefficients $a_0, a_1, b_1, b_2, c_1, c_2$ are found for each forecasted quantile via a simple linear programming method. We implemented this using the \textit{optimset} function in Matlab utilising the Simplex algorithm option. To reduce computational cost we only used $500$ days of history to find the parameters. Once the quantile forecasts were found we re-sorted them to ensure there was no crossing of the quantiles \cite{Chernozhukov2010}. \subsection{Mixed Forecasts and Hybrid Forecasts} \label{Sec:Comb} Each of the main forecasts presented had different performance for different forecast horizons. For this reason we created new forecasts which were mixes of our main methods based on their performances over different horizons. We consider two main methods: \begin{itemize} \item Mix 1: This is simply the CKD-W forecast but using the CKD-T forecast for the first day. \item Mix 2: As mix 1 but using the QR forecast from the start of the $8^{th}$ day until the end of the month. \end{itemize} With the success of the mixed forecasts (see Section \ref{taskOutlines}) we also explored combinations of the forecasts.
This has been shown to improve the overall forecast accuracy compared to individual forecast methods \cite{Rangan2008}. We split the forecast into five different time periods. Period one was simply the first day, period two the rest of the first week, period three the second week, period four the third week and period five the rest of the month. For the first period we simply used the CKD-T, which had the best day ahead accuracy of all the forecasts. For each of the other periods we took a weighted average of the quantile time series of the quantile regression forecast, $F_{\text{QR}}$, and the CKD-W forecast, $F_{\text{CKD-W}}$, \begin{equation} F_{\text{Hybrid}}(\tau) = w(\tau) F_{\text{CKD-W}}(\tau) + (1 - w(\tau)) F_{\text{QR}}(\tau), \end{equation} where $\tau = 2,3,4,5$ is the time period and $0 \leq w(\tau) \leq 1$ is the average optimal weight at time period $\tau$. The optimal weight of each past task is found by searching different weighted combinations of the CKD-W with the quantile regression forecasts for each time-period $\tau > 1$ that minimise the pinball scores. We repeat this process for a number of past tasks and then take the average optimal weight for each time period. We call this forecast the \textit{hybrid forecast}. \section{Task Submissions and Results} \label{taskOutlines} We ranked our forecasts using the scores from prior tasks. We used this to understand which methods to persist with and which ones to reject or adapt. In this section we describe our selection procedure for each task in chronological order to justify our methodology and approach. Figure \ref{AllScores} shows graphically the scores for our best submissions, the benchmark and the top scoring forecast for each task\footnote{Tasks 1 to 3 were trial tasks. We focused on searching for patterns, trends and correlations in the load and temperature data and developing our more sophisticated methods. We submitted simple parametric models and the KDE-W method.}. The plot shows our forecasts performing consistently well in all tasks other than tasks four and eight, as we will explain below. We note that the leader is not the same entrant for each task. The benchmark is simply the previous year's load used for all quantiles. \begin{figure*} \begin{center} \includegraphics[scale = 0.3]{Scores_12tasks_3forecasts_grayscale.pdf} \caption{Pinball scores of our submitted forecast, the benchmark and the final leader.} \label{AllScores} \end{center} \end{figure*} In \textbf{tasks 4 and 5} we implemented the KDE-W method (see Section \ref{KDE}). December $2010$ (task 3) appeared to have unusually low temperatures, and since this month was also used for parameter training it could explain the high scores of most entrants in task 4. We note that the simple quantile regression forecast (introduced in task 9) performs very well on this task, scoring 10.36, in fact beating the top entry score. This could be due to being less influenced by the previous, exceptional, month. For \textbf{tasks 6 and 7}, we developed the CKD-W method to take into account weekly effects. This was found to perform better than the KDE-W method. We also submitted a CKD-W for \textbf{task 8} but trained the parameters on the same month of the previous year, rather than the previous month of the same year. The data from the previous year would be less recent but likely related to the current month's behaviour due to the annual periodicity of the load.
In addition, data from the previous month had little influence on forecasts beyond a week, so it made sense to attempt to optimize parameters on data available for the entire period. Although this method performed better than CKD-W for task 7, it did not perform as well as expected for the task 8 submission and was abandoned from then on. We found that the CKD-T method, although poor for forecasting the entire month, was the most successful method for forecasting a day ahead (see Section \ref{sec:CKDT}). In addition we developed the QR forecast, which was performing well, especially at horizons of over a week ahead. Hence, for \textbf{tasks 9, 10 and 11} we implemented our mixed forecasts. Modifying the first day forecast with the CKD-T forecast, to create mix forecast 1, gave us an improved forecast for task 9. Further improvements came with mix forecast 2, which was used as our submission for tasks 10 and 11 (giving us second place in both leaderboards). Further testing of the forecasts on older tasks indeed confirmed the improvement of the methods. Up to this point the mix 2 forecast gave the most consistent best scores for tasks 2 through to 8, with the smallest average pinball score of $ 8.61 $ compared to the next best of the quantile forecast with the CKD-T for the first day of $8.63$ (the benchmark average was $15.28$). This seems to indicate that a major contribution to the improvement came from the quantile regression forecast. For \textbf{tasks 12 to 15} we implemented the hybrid forecast. For these tasks we trained the weights using tasks 6 to 11, 2 to 12, 2 to 13 and 2 to 14 respectively. This forecast performed better for each task compared to our other methods, see also Table \ref{TableScoresFinal}. For task $15$, we initially attempted to model separately the special days, Christmas Eve, Christmas Day and New Year's Day. However we saw no improvement in our forecasts and, since these days all occurred on weekends for task 15, we abandoned this idea. The hybrid models were consistently the best for tasks 12 to 14 with an average pinball score of $5.36$ compared to the next best score of $5.41$ for the CKD-W2 method. However for task $15$ the method did not perform too well, with a pinball score of $9.55$ compared to only $7.844$ for the KDE-W and $8.099$ for the CKD-T (the winning score was $8.229$). In fact the CKD-T method performed surprisingly well for tasks 12 through 14 with an average score of $5.42$, meaning a better score on average than the hybrid forecast for tasks 12 through 15 ($6.089$). This is particularly surprising given the CKD-T method had the worst performance of all methods prior to task 12. This could possibly be the result of relatively stable temperatures for these months. The final scores were calculated as a weighted average of the percentage improvement relative to the benchmark for each task. Each percentage score was given a weight which increased linearly from the fourth to the last task. The scores for selected methods (plus, for comparison, the scores of the leading submission for each task\footnote{The leading submission is the best submission from all teams for each task. Not to be confused with the submission of the winning team.} and the competition's winning team) are shown in Table \ref{TableScoresFinal}. The larger the score the better the forecast.
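For reference, the pinball score against which all entries were evaluated follows directly from the loss \eqref{eq:lossfunction}, averaged over the $99$ quantiles and all forecast hours; a minimal Python sketch with synthetic data:
\begin{verbatim}
import numpy as np

def pinball_score(y, Q, taus=np.arange(0.01, 1.0, 0.01)):
    """Mean pinball loss over all quantiles taus;
    y: (n,) outturns, Q: (n, 99) quantile forecasts."""
    z = y[:, None] - Q
    return np.mean(np.abs(z * (taus[None, :] - (z < 0))))

y = np.random.normal(100.0, 10.0, size=5000)
Q = np.full((y.size, 99), 100.0)   # flat forecast at the true median
print(pinball_score(y, Q))         # ~ E|y - 100|/2, about 4.0 here
\end{verbatim}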
The hybrid forecast uses the weights from the final task and therefore is not a completely accurate representation of the actual hybrid forecast score, since the weights were optimised on the same tasks. However it does show the potential of the method. If we had more time then potentially we could train the weights on a larger sample for each time period using a rolling window rather than, in some cases, fewer than six tasks. The table shows the improvements made with subsequent tasks. We note that, despite the simplicity of the method, the QR forecast is one of the best non-hybrid forecasts on average. However on a few tasks this forecast did not perform as well as the CKD-W and CKD-W2 forecasts (tasks 5, 6, 11, 14) and thus a mixed forecast is perhaps a more reliable choice, since these methods perform well when QR does and reduce the errors when QR does not perform as well. The better score of CKD-W2 over CKD-W shows the importance of using the best optimization programs for the forecast. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c||c|c|c|c|c|c|c|} \hline Forecast & LS & WT & Actual & Hybrid & QR & CKD-W2 & Mix 2 & Mix 1 & CKD-W & KDE-W \\ \hline Score & 54.2 & 50.8 & 48.5 & 51.4 & 48.7 & 48.7 & 48.4 & 47.5 & 47.4 & 44.6 \\ \hline \end{tabular} \caption{Weighted average scores of the leading team of each task (``LS''), the competition's winning team (``WT''), our valid submitted forecasts (``Actual'') and our main methods described in Section \ref{Methods}.} \label{TableScoresFinal} \end{center} \end{table} \section{Summary and Discussion} \label{Discussion} We have described a number of methods for creating probabilistic forecasts and outlined our methodology for adapting these forecasts for each task. We chose and developed these methods based on a number of characteristics including the success of the methods in similar applications, their computational simplicity and their versatility in incorporating the periodic nature of the data. We have created several forecasts which perform well and obtain the lead score in a number of tasks. Our forecasts performed consistently well, too. All forecasts beat the benchmark, with only three of the twelve submissions not improving on the benchmark by at least $40\%$. Overall we obtained five top-two finishes in the twelve tasks, with top position twice and second position on three occasions. This was the second highest number of top-two finishes amongst all final candidates. There are periodicities in the scores, likely due to greater variability in load caused by heating and cooling. The benchmark and forecast scores are correlated due to this. Very large benchmark scores are likely due to large differences in weather conditions. In certain tasks (such as 3 and 4) all teams scored poorly. For example, in task 3 we found that there were very low temperatures which correlated with large forecast errors on the $14^{th}$ December. The strong correlation between the weather and load demand implies that the biggest single improvement in forecast accuracy will come from better long term weather forecasts. Table \ref{TableScoresFinal} illustrates that the hybrid forecast is the best scoring overall. However it is clear that the simple quantile forecast is responsible for much of this improvement, with all forecasts using this method scoring very similarly. Although CKD-W2 and QR perform similarly on average, the CKD-W2 only performs better than the quantile regression on a few tasks.
On those tasks the difference is significant and therefore the hybrid forecast reduces this discrepancy. Further changes could have been made to improve the scores. There are a number of changes which may improve the basic forecasts (CKD-W, CKD-T, QR), such as including weekday identifiers or improving the choice of weather stations. However the simplest modification we could make is to improve the weights used in the hybrid forecasts. In particular we could train the weights on a rolling basis from one day to the next. This means that the most recent (and accurate) weights could be applied, and potentially we could even forecast such weights. In this paper we have reported a simple combination of our two forecast methods to create a hybrid forecast. It has been shown that a simple linear combination is not optimal since, even if the forecasts are properly calibrated, the final forecast will not be \cite{Rangan2008}. Hence we could also consider other methods, for example the beta linear pool method as described in \cite{Rangan2008}. A result of the competition that surprised us was the success of very simple methods. The quantile regression, which only modelled the trend and yearly seasonality, was one of our, and the competition's, best performing forecasts. Such methods could thus be used as benchmarks for more complicated methods.
{ "attr-fineweb-edu": 1.847656, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUcOjxK6nrxpQc0U3Z
\section{Introduction} The Boltzmann kinetic equation is a successfully used dynamical model to describe heavy ion collisions and multifragmentation up to intermediate energies \cite{BCR92,P89,HM87,BKG84,BD88,D84,GRSVR87,Ba92,AB85,KWB92,MLSCN98}. The scattering is added to the Vlasov equation either by the relaxation time \cite{KN84,D84a,K85,AK93,KB89} or by scattering integrals of the Boltzmann-type \cite{BKG84,GRSVR87,KB89,N28,UU33,GBD87,KJS85,SG86,CLG87} or their nonlocal extensions \cite{SLM96,MLSK98,MLNCCT01}, in the same way as it was done in the Landau-Vlasov equation for hot plasmas. All these simulations have in common that the motion of particles between collisions is covered by Hamilton equations or the Vlasov kinetic equation with a selfconsistent potential. Even within collisionless Vlasov codes the onset of fragmentation is described \cite{BKN76,W82,BBS84}. The use of the selfconsistent Vlasov equation is not restricted to nuclear collisions but has been applied successfully to collisions of ions with metal clusters \cite{GG96,PG99,CRSU00}. Despite the fact that the Vlasov equation is a reversible kinetic equation and the initial configuration should be retained after a long enough calculation, fragmentation and energy dissipation are observed. Obviously this is due to the fact that the Poincar{\' e} time is much larger than the time within which the phase space is filled by various trajectories. Therefore one can superficially consider this spreading as an irreversible process in the sense of entropy production. Nevertheless, the underlying dynamics is reversible. The numerical implementation of Vlasov codes demands a certain coarse graining of the space and momentum coordinates. This numerical uncertainty is quite sufficient to generate genuine dissipation and entropy production. This fact has been investigated in \cite{JB96} and will be considered in detail within this paper. It has been argued that the errors due to numerical coarse graining accumulate diffusively. We will demonstrate that these errors lead to a unique equilibration of the Vlasov equation. While the diffusion itself is less affected, the coarse graining leads to a different dynamical evolution and has consequences for the extraction of damping rates from Vlasov-type simulations. Theoretically one can derive kinetic equations by phase-space averaging of the Liouville equation, which itself is exactly of Vlasov type \cite{K75}. The resulting collision integrals represent two different approximations of the nonequilibrium dynamics: (i) the truncation of the coupling to higher order correlations (hierarchy) and (ii) the smoothing procedure which translates the fluctuating stochastic equation into a kinetic equation for the smoothed distribution function. This latter procedure is sometimes also called coarse graining. For the discussion of appropriate collision integrals also in the quantum case see \cite{LSM97}. Here we will focus on a detailed analysis of the consequences of coarse graining. The idea used here dates back to the work of Gibbs and Ehrenfest \cite{G5,EE7}. They suggested coarse graining the entropy definition by means of a rougher distribution function \beq f(p,r,t)={1 \over \Delta(p,r) } \int\limits_{\Delta(p,r)} f_{\delta}(p',r',t) dp' dr'.\label{fd} \eeq The physical meaning is that any observable is an average over a certain area in phase space. It was shown that the entropy computed with this coarse grained distribution increases \cite{S76}.
This means that in a closed system the entropy can rise if we average the observations over small phase space cells. The interpretation is that other phase space points can enter and leave the cell in a way that is not compensated. A phase space mixing occurs \cite{S76} since the two limits cannot be interchanged, i.e. the thermodynamic limit and the limit of vanishing phase space cell. The coarse graining of Ehrenfest can be observed if the thermodynamical limit is carried out first and the limit of small phase space cells afterwards. It therefore does not solve the problem of entropy production, but adds an interesting aspect to entropy production by coarse grained observations \cite{S76}. In this paper we would like to investigate three questions: \begin{itemize} \item Which kinetic equation is really solved numerically if the Vlasov equation is implemented in numerics? \item What are the properties of this kinetic equation, especially which kind of dissipative features appear? \item What are the consequences for practical applications, e.g. damping of giant resonances and binding energies? \end{itemize} The outline of the paper is as follows. Next we derive the kinetic equation which is obeyed by the coarse grained distribution function. Then in chapter III we discuss the entropy production. We demonstrate with the help of two exactly solvable models that this entropy production is due to mixing, i.e. a mode coupling, and not simply due to the spreading of Gaussians. The solution of the stationary Vlasov equation is then presented in chapter IV. We will find that the stationary solution can be represented as an infinite sum of modified Boltzmann distributions. This expansion shows the unique character of the time evolution, which is only determined by the initial distribution. In chapter V we discuss consequences of this result: (i) the thermodynamics becomes modified by spatial correlations; the selfconsistency leads to a modified Thomas-Fermi equation lowering the binding energy, (ii) the structure factor shows a substructure similar to that obtained from vertex corrections and (iii) the damping width of collective resonances is shown to be enhanced by coarse graining. While the centroid energy is lowered by momentum coarse graining, it increases with spatial coarse graining. \section{Coarse grained Vlasov equation} The origin of the coarse graining may be the numerical implementation or the use of averaged distribution functions instead of the fluctuating one. To illustrate the method we examine the quasi-classical Vlasov equation and show which equation is really solved if one is forced, for numerical reasons, to use coarse graining. It will become clear shortly that instead of the Vlasov equation a modified kinetic equation is solved when coarse graining is present. We start from the Vlasov equation \begin{equation} {\pa t} f_{\delta}(prt) +{p \over m} {\pa r} f_{\delta}(prt) -{\pa r} V_{\delta}(r,t) {\pa p} f_{\delta}(prt)=0 \label{vlas} \end{equation} whose solution can be formally represented as an infinite sum of exact test-particles \begin{equation} f_{\delta}(prt)=\sum\limits_{i=1}^{\infty} \delta(r-R_i(t))\,\delta (p-P_i(t))\label{sum} \end{equation} where the test-particle positions and momenta evolve according to the Hamilton equations $ {\dot R_i(t)}=P_i/m$ and $ {\dot P_i(t)}=-\partial_R V(R_i(t))$. In the following we understand $p,r,P$ and $R$ as vectors and suppress their explicit notation.
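For illustration, the test-particle representation (\ref{sum}) amounts to integrating Hamilton's equations for a large ensemble. A minimal Python sketch for a fixed external harmonic potential $V(R)=m\omega^2R^2/2$ (the potential, the symplectic Euler scheme and all parameter values are assumptions of this illustration, not the selfconsistent field discussed below):
\begin{verbatim}
import numpy as np

def propagate(R, P, t_end, dt=1e-3, m=1.0, omega=1.0):
    """Hamilton equations dR/dt = P/m, dP/dt = -dV/dR for all test
    particles at once, here with V(R) = m*omega**2*R**2/2."""
    for _ in range(int(t_end / dt)):
        P = P - dt * m * omega**2 * R   # kick
        R = R + dt * P / m              # drift (symplectic Euler)
    return R, P

rng = np.random.default_rng(1)
R = rng.normal(0.0, 1.0, 10000)         # initial test-particle ensemble
P = rng.normal(0.0, 1.0, 10000)
R, P = propagate(R, P, t_end=5.0)
\end{verbatim}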
For the mean-field term $V_{\delta}$ we assume a Hartree approximation given by a convolution of the density with the two-particle interaction $V_0$ \begin{equation} V_{\delta}(r,t)= \int dr' V_0(r-r') \int {dp' \over (2\pi \hbar)^3} f_{\delta}(p'r't).\label{vd} \end{equation} In practice all numerical calculations rely on two approximations: (i) The infinite number of test-particles is truncated to a finite value. (ii) The test particles have a finite width due to numerical errors and/or the smoothing demands of the procedure. While it was discussed in \cite{RS95a,RS95b,LSR95} that approximation (i) leads to a Boltzmann-like collision integral, approximation (ii) deserves further investigation. In particular, we will show that the finite width of test-particles leads to a coarse graining and a dissipation forcing the system towards a Boltzmann-like distribution. This holds even for an infinite number of test-particles. Therefore we consider the effect of coarse graining as the decisive one for one-body dissipation. The finite width of test particles can be reproduced most conveniently by a convolution of the exact solution (\ref{sum}) with a Gaussian $g_a(x)=(2 \pi \sigma_a^2)^{-3/2} {\rm exp} (-x^2/(2 \sigma_a^2))$, resulting in the coarse grained distribution function $f$ \begin{eqnarray} f(prt)&=&\{f_{\delta}\}_g\nonumber\\ &=&\int {d p' \over (2 \pi \hbar)^3} d r' g_r(r-r') g_p(p-p') f_{\delta}(p'r't)\nonumber\\ &=&\sum\limits_{i=1}^{\infty} g_r(r-R_i(t))g_p(p-P_i(t)).\label{f} \end{eqnarray} To answer the question of which kind of kinetic equation describes this smoothed distribution function, the kinetic equation for (\ref{f}) is derived from (\ref{vlas}) by convolution with a Gaussian. The equation for general coarse grained mean-fields has already been derived in \cite{LSR95}. In order to make the physical content more explicit we calculate the different terms explicitly and present the coarse grained kinetic equation. The free drift term ${p \over m} \partial_r f_{\delta}$ takes the form after convolution \begin{equation} \{{p \over m} \partial_r f_{\delta}\}_g={p \over m} {\pa r} f(prt) +{\sigma_p^2 \over m} {\pa r}{\pa p} f(prt)\label{pm} \end{equation} which is established by partial integration. We see that the free streaming is modified by an additional resistive term which will give dissipative features. The mean-field term takes the form \begin{eqnarray} \{{\pa r} V_{\delta}(r){\pa p} f_{\delta}(prt)\}_g&=&{\pa p}\{{\pa r} V_{\delta}(r) f_{\delta}(prt)\}_g\nonumber\\ &=&{\pa p}\{\{{\pa r} V_{\delta}(r) f_{\delta}(prt)\}_{g_r}\}_{g_p}.
\end{eqnarray} The space convolution with $g_r$ is performed using the relation \cite{T89} \begin{equation} \{AB\}_g=\{A\}_g {\rm exp}(\sigma_r^2 {\leftvector}_r {\rightvector}_r)\{B\}_g\label{rel} \end{equation} with the result \begin{equation} {\pa p}\{{\pa r} V(r,t) \exp (\sigma_r^2 {\leftvector}_r {\rightvector}_r) \{f_{\delta}(prt)\}_{g_r}\}_{g_p}.\label{vp} \end{equation} $V(r,t)$ is the mean-field calculated with the space and momentum coarse grained distribution $f$ instead of $f_{\delta}$, via (\ref{vd}), as can be seen from \begin{eqnarray} V(r,t)&=& \{V_{\delta}(r,t)\}_{g_r}=\int dr'' V_0(r'') \int dr' g_r(r-r''-r')\nonumber\\ &\times& \int {dp' \over (2\pi \hbar)^3} f_{\delta}(p',r',t)\nonumber\\ &=&\int dr'' V_0(r'') \int dr' g_r(r-r''-r') \nonumber\\ &\times&\int {dp' \over (2\pi \hbar)^3} \int {dp \over (2\pi \hbar)^3} g_p(p-p') f_{\delta}(p',r',t)\nonumber\\ &=&\int dr'' V_0(r'') \int {dp \over (2\pi \hbar)^3} f(p,r-r'',t)\label{vr} \end{eqnarray} where the last equality shows the invariance of the particle density under coarse graining. The momentum convolution with the Gaussian is then performed in (\ref{vp}) to yield the momentum and space coarse grained distribution function $f$. We would like to point out that the test-particle method would lead to a further folding of the mean-field potential if it is read off from a finite grid \cite{RS95a}. The coarse grained Vlasov equation now reads \begin{eqnarray} {\pa t} f(prt)&+&{p \over m} {\pa r} f(prt)+{\sigma_p^2 \over m} {\pa r}{\pa p} f(prt)\nonumber\\ &-&{\pa r} V(r,t) \exp (\sigma_r^2 {\leftvector }_r {\rightvector}_r) {\pa p} f(prt)=0.\label{kinetic} \end{eqnarray} This equation is the main result of this chapter and represents the Vlasov equation for the coarse grained distribution; it should be compared with the Husimi representation of \cite{LSR95}. While the distribution function is exactly the Husimi representation of the Wigner function, the corresponding coarse grained kinetic equation has not been given before. Equation (\ref{kinetic}) represents the kinetic equation which is actually solved numerically when one attempts to solve the Vlasov equation. The coarse graining leads to two additional contributions besides the original Vlasov equation. We will see that precisely these terms cause the dissipative features. While we now continue to investigate (\ref{kinetic}) more closely, appendix \ref{app} addresses the question of how the underlying dynamics is modified by coarse graining. We find that in principle the coarse grained equation can be mapped to the original Vlasov equation if one defines new test-particles whose dynamical equations contain a modified potential. The modified potential appears as the inverse folding of the mean-field potential, which is the opposite relation to the one presented in \cite{RS95a,RS95b,LSR95}.
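The resistive term in (\ref{pm}) can be verified numerically in one dimension: for a sum of test particles, the folded drift term $\{\frac{p}{m}\partial_r f_\delta\}_g$ must equal $\frac{p}{m}\partial_r f+\frac{\sigma_p^2}{m}\partial_r\partial_p f$ evaluated with the coarse grained $f$ of (\ref{f}). A minimal Python sketch (the widths and test-particle coordinates are arbitrary choices for this illustration):
\begin{verbatim}
import numpy as np

sr, sp, m = 0.3, 0.4, 1.0
Ri = np.array([-1.0, 0.5, 2.0])
Pi = np.array([0.8, -0.3, 1.5])
r = np.linspace(-4, 4, 201)[:, None]
p = np.linspace(-4, 4, 201)[None, :]
gr = lambda x: np.exp(-x**2/(2*sr**2)) / np.sqrt(2*np.pi*sr**2)
gp = lambda x: np.exp(-x**2/(2*sp**2)) / np.sqrt(2*np.pi*sp**2)

# folding (p/m) d_r f_delta of delta-like test particles gives
# sum_i (P_i/m) g_r'(r-R_i) g_p(p-P_i):
lhs = sum(Pq/m * (-(r-Rq)/sr**2)*gr(r-Rq) * gp(p-Pq)
          for Rq, Pq in zip(Ri, Pi))

# drift plus resistive term built from the coarse grained f:
drf  = sum((-(r-Rq)/sr**2)*gr(r-Rq)*gp(p-Pq) for Rq, Pq in zip(Ri, Pi))
drpf = sum((-(r-Rq)/sr**2)*gr(r-Rq)*(-(p-Pq)/sp**2)*gp(p-Pq)
           for Rq, Pq in zip(Ri, Pi))
rhs = p/m * drf + sp**2/m * drpf

print(np.max(np.abs(lhs - rhs)))   # numerically zero: the identity holds
\end{verbatim}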
\section{Entropy production} To analyze the dissipative features we rewrite (\ref{kinetic}) as \begin{equation} {\pa t} f(prt) +{p \over m} {\pa r} f(prt) -{\pa r} V(r,t) {\pa p} f(prt)=I_{\rm diss} \label{vlas1} \end{equation} with the one-body (collision-integral-like) term \begin{eqnarray} I_{\rm diss}&=&-{\sigma_p^2 \over m} {\pa r}{\pa p} f(prt)\nonumber\\ &+&{\pa r} V(r,t) (\exp (\sigma_r^2 {\leftvector }_r {\rightvector}_r)-1) {\pa p} f(prt)\nonumber\\ &\approx& (\sigma_r^2{\partial^2 \over \partial r^2} V(r,t)-{\sigma_p^2 \over m}) {\pa r}{\pa p} f(prt) + o(\sigma^4).\nonumber\\\label{coll} \end{eqnarray} In order to demonstrate the entropy production explicitly we use the linearized form of (\ref{coll}) and build up the balance equation for the entropy density $S(r,t)=\int {dp \over (2\pi \hbar)^3} f\, {\rm ln} f$ by multiplying (\ref{vlas1}) with ${\rm ln} f$ and integrating over $p$. The entropy balance then reads \begin{equation} {\dot S(r,t)}+{\pa r} \int {dp \over (2\pi \hbar)^3} {p \over m} f {\rm ln}f = \Phi(r,t) \int {dp \over (2\pi \hbar)^3} {\partial_r f \partial_p f \over f}\label{entropy} \end{equation} with $\Phi(r,t)=-\sigma_r^2{\partial^2 \over \partial r^2} V(r,t)+{\sigma_p^2 \over m}$. The entropy density therefore obeys a conservation law with a source term on the right hand side of (\ref{entropy}). In particular, one sees that the total entropy change is equal to \begin{equation} {\dot S(t)}= \int dr {\dot S}(r,t)=\int dr \Phi(r,t) \int {dp \over (2\pi \hbar)^3} {\partial_r f \partial_p f \over f}.\label{entropy1} \end{equation} This expression already allows us to learn some interesting properties of the distribution functions which lead to an entropy increase. For a distribution function symmetric either in $p$ or in $r$, no entropy production occurs, since the corresponding $\partial f$ would be antisymmetric and the integral vanishes. We get an entropy production only for explicitly space dependent distributions, due to the $\partial_r f$ factor. Near equilibrium, where the assumed nonequilibrium distribution functions fall off in space and momentum, the second derivative of the mean-field is negative (due to the finite system) and therefore $\Phi>0$. Consequently we obtain an increase of entropy on average due to the spatial coarse graining. This means that the finite width of test-particles induces an entropy increase, much like irreversibility. The reason for the entropy increase is a nonlinear mode coupling, which can be considered as a general feature of coarse graining. Therefore we rewrite the collision integral in the first line of (\ref{coll}) by Fourier transformation into another form \begin{eqnarray} I_{\rm diss}&=&\int dr' {\pa p} f(r'pt) \left [ \delta(r-r') ({\sigma_p^2 \over m} \leftvector_r-{\pa r} V(r,t) )\right.\nonumber\\ &+&\left. \int dr'' {\pa r''} V(r'') \int {dk_1 dk_2 \over (2\pi \hbar)^6} \right .\nonumber\\ &\times&\left .{\rm exp}(-i(k_1+k_2) r-\sigma_r^2 k_1 k_2+i k_2 r' +i k_1 r'')\right].\nonumber\\ && \label{dissp} \end{eqnarray} We see that due to the spatial coarse graining a nonlinear mode coupling occurs. This is represented by the product $k_1 k_2$ between the modes in the distribution function and the mean field. This latter effect is the reason for the production of entropy and is connected with the spatial coarse graining $\sigma_r$.\footnote{Note that $f$ and $V$ are coarse grained quantities themselves, whose Fourier transforms contain exponentials $k_1^2 \sigma_r^2/2$ and $k_2^2 \sigma_r^2/2$, respectively.
Therefore the total exponent $(k_1+k_2)^2\sigma_r^2/2$ appears and the expression is convergent, while the bare $k_1 k_2$ product above may give the impression of non-convergence.} To understand the physical origin of this entropy production we choose two simple models for illustration. In the first example we assume a fixed external potential. In the second example we give an exactly solvable model including the selfconsistent mean-field potential. \subsection{Free Streaming} The initial condition before folding is assumed as \beq f_\delta(rp0) \propto e^{-\frac{\lambda}{2}p^2}\delta(r). \eeq The Vlasov equation with $V=0$ then yields the time dependent solution \begin{equation}\label{eq:micro} f_\delta(r,p,t) \propto e^{-\frac{\lambda}{2}p^2}\delta(r-{p \over m}t) \end{equation} or, in vector notation ${\bf x}=(r,p)$ and setting $m=1$ for brevity, \begin{eqnarray}\label{eq:microvect} f_\delta(r,p,t) &=& \left(\lambda \over 2\pi\right)^{3/2} \left(\mu \over 2\pi \right)^{3/2} e^{-\frac{\lambda}{2}p^2-\frac{\mu}{2}(r-pt)^2} \\ \nonumber &=& { e^{-\frac{1}{2}{\bf x}^T\hat{{\bf \Lambda}}^{-1}{\bf x}}\over (2\pi )^{3} \sqrt{\mbox{Det}\{\hat{{\bf \Lambda}}\}}} ,\quad \mu \longrightarrow \infty \end{eqnarray} with \begin{equation}\label{eq:deflam} \hat{{\bf \Lambda}}^{-1} = \left(\begin{array}{cc} \mu & -\mu t \\ -\mu t & {\lambda}+\mu t^2 \end{array}\right) \quad. \end{equation} The distribution (\ref{eq:microvect}) is to be folded with \beq\label{eq:Husfold} {\cal G}(r,p) & =& {e^{-\frac{1}{2}{\bf x}^T\hat{{\bf \Sigma}}^{-1}{\bf x}} \over (2\pi)^{3} \sqrt{\mbox{Det}\{\hat{{\bf \Sigma}}\}}} \nonumber\\ \hat{{\bf \Sigma}}^{-1}&=&\left(\begin{array}{cc} \sigma_r^{-2} & 0 \\ 0 & \sigma_p^{-2} \end{array}\right) \quad. \eeq Using the Gaussian folding theorem, we obtain \begin{eqnarray}\label{eq:findis} f(r,p,t) &=& { e^{-\frac{1}{2}{\bf x}^T\hat{\bf M}^{-1}{\bf x}} \over (2\pi)^{3} \sqrt{\mbox{Det}\{\hat{\bf M}\}}} \quad, \\[3pt] \nonumber \hat {\bf M} &=& \hat {\bf \Lambda}+\hat {\bf \Sigma} = \left(\begin{array}{cc} \frac{1}{\mu }+\frac{t^2}{{\lambda}}+\sigma_r^{2} & \frac{t}{{\lambda}} \\ \frac{t}{{\lambda}} & \frac{1}{\lambda}+\sigma_p^{2} \end{array}\right) \end{eqnarray} and the entropy is \beq\label{eq:entrop} S & =& -\int drdp\,f\log{(f)} = \int drdp\,f(r,p,t)\nonumber\\ &\times& \left(\frac{1}{2}{\bf x}\hat{\bf M}^{-1}{\bf x} +3 \log{(2\pi)} +\frac 1 2 \log{\left(\mbox{Det}\{\hat{\bf M}\}\right)} \right). \nonumber\\ && \eeq Using known Gaussian integration rules and performing the limit $\mu\longrightarrow\infty$, we finally obtain \begin{equation}\label{eq:finentrop} S = 3(\log{(2\pi)}+1) +\frac 3 2 \log{\left (\frac{\sigma_r^2}{ \lambda}+\sigma_r^2\sigma_p^2+\frac{t^2\sigma_p^2}{ \lambda}\right )}, \end{equation} which approaches $S(t)=3 \log{\left ({2 \pi e\, \sigma_p \over \sqrt{\lambda}}\, t\right )}$ for large times. Interestingly, the long time limit is only modified by $\sigma_p$. One can see that the entropy is monotonically increasing. The reason is the continuous spreading of the folded distribution. \subsection{Selfconsistent bounded model} \label{IIc} After demonstrating the increase of entropy with time we would like to consider two questions: (i) Is this increase due to the spreading of the Gaussian which we assumed for the space coarse graining? (ii) Can we interchange the procedures {\it coarse graining} and {\it dynamical evolution}?
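Before addressing these questions, a short numerical sketch of our own (hypothetical widths, $m=1$) illustrates the monotonic growth (\ref{eq:finentrop}) just derived for the free-streaming example and its large-time form:
\begin{verbatim}
# Sketch (our own; m = 1, hypothetical widths): coarse-grained entropy
# of the free-streaming Gaussian, S(t) from Det M(t) in (eq:findis).
import numpy as np

lam, sig_r, sig_p = 1.0, 0.3, 0.2
for t in [0.0, 1.0, 3.0, 10.0, 30.0]:
    M = np.array([[t**2 / lam + sig_r**2, t / lam],
                  [t / lam,               1.0 / lam + sig_p**2]])
    S = 3 * (np.log(2 * np.pi) + 1) + 1.5 * np.log(np.linalg.det(M))
    S_asym = (3 * np.log(2 * np.pi * np.e * sig_p * t / np.sqrt(lam))
              if t > 0 else -np.inf)
    print(f"t={t:5.1f}: S={S:8.4f}   large-time form: {S_asym:8.4f}")
\end{verbatim}
The partial outputs grow monotonically and merge with the asymptotic form, confirming that only $\sigma_p$ survives in the long time limit.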
The second question amounts to checking whether we obtain identical results when we first solve the exact Vlasov equation and then coarse grain the solution, or when we coarse grain the Vlasov equation first and solve the modified one afterwards. This question reveals the sensitive dependence on the initial distribution. To answer both questions we consider another exactly solvable model, given in \cite{Mr97}, to which we add an external harmonic oscillator potential. The potential $V(r,t)$ consists of the separable multipole-multipole force $v_{1234}=v g_{12}g_{34}$ for the mean-field and an external harmonic oscillator potential \beq V(\vec r,t)=v g(r) Q(t)+\frac 1 2 m \omega^2 r^2 \eeq where the selfconsistent requirement is \beq Q(t)=\int {d r d p\over (2\pi)^3} g(r) f_{\delta}(r,p,t) \label{self} \eeq according to (\ref{vd}). We may think of the external harmonic potential as a representation of a realistic confining potential expanded around the origin, $U_{\rm ext}(r)\approx U_{\rm ext}(0) (1-(r/R)^2/2)$ with the radius $R$. We have therefore \beq \omega^2\approx -U_{\rm ext}(0)/m R^2.\label{om} \eeq The one-particle distribution function obeys the quasi-classical Vlasov equation (\ref{vlas}). We can solve the Vlasov equation exactly by solving the differential equations for the equipotential lines, which are the Hamilton equations. We choose a form factor of \beq g(r)=a_x r_x +a_y r_y+ a_z r_z. \label{pot} \eeq This model is special for two reasons. Firstly, for this model the linearization (\ref{coll}) is exact, because spatial derivatives of higher than second order vanish. Secondly, within this model the quantum Vlasov equation agrees with the semiclassical Vlasov equation investigated here. The Hamilton equations of the trajectories which correspond to this model, \beq \partial_t p&=&-v a Q(t) -m \omega^2 r\nonumber\\ \partial_t r&=&{p \over m}, \label{hamilt} \eeq are solved as \beq \left (\matrix{r\cr p}\right )&=& \left(\matrix{\sin{\omega t} & \cos{\omega t}\cr m \omega \cos{\omega t}&-m\omega \sin{\omega t}}\right) \left(\matrix{c_1\cr c_2}\right) \nonumber\\ &-&v a\int\limits_0^{t} dt' Q(t') \left(\matrix{{1 \over m\omega} \sin{\omega (t-t')}\cr\cos{\omega(t-t')}}\right).\label{coord} \eeq Now we know that the constants of motion $c_1,c_2$ are constant at any time of the evolution. Therefore we can relate them to the initial momenta and positions, which gives \beq \left(\matrix{c_1\cr c_2}\right)=\left(\matrix{\frac{p_0}{m\omega}\cr r_0}\right). \eeq From equation (\ref{coord}) we now express $r_0$ and $p_0$ as functions of $(r,p,t)$, which results in \beq \left (\matrix{r_0\cr p_0}\right )&=& \left(\matrix{\cos{\omega t} & -{1\over m \omega}\sin{\omega t}\cr m \omega \sin{\omega t}&\cos{\omega t}}\right) \nonumber\\ &\times&\left( \left(\matrix{r\cr p}\right)+v a\int\limits_0^{t} dt' Q(t') \left(\matrix{{1 \over m\omega} \sin{\omega (t-t')}\cr\cos{\omega(t-t')}}\right)\right )\nonumber\\ &\equiv&\hat {\bf A} \left (\matrix{r\cr p}\right )+{\bf C}. \label{coord1} \eeq Given an initial distribution $f_0(r_0,p_0)$, we can express the solution at any time as \beq f_{\delta}(r,p,t)&=&f_0(r_0,p_0) =f_0(\hat {\bf A} {\bf x}+{\bf C}) \label{sol} \eeq with ${\bf x}=(r,p)$, where the matrix $\hat {\bf A}$ and the vector ${\bf C}$ can be read off from (\ref{coord1}).
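As a quick sanity check of the inversion (\ref{coord1}) — a sketch of our own, with an arbitrarily prescribed $Q(t)$ and hypothetical parameters rather than the selfconsistent one — we integrate the trajectories (\ref{hamilt}) forward and verify that the transformation recovers the initial phase-space point:
\begin{verbatim}
# Sketch (our own check; hypothetical parameters, prescribed Q(t)):
# integrate (hamilt) forward, then invert with (coord1).
import numpy as np
from scipy.integrate import solve_ivp, quad

m, omega, v, a = 1.0, 1.3, 0.7, 0.5
Q = lambda t: np.cos(0.9 * t)          # arbitrary prescribed driving

def rhs(t, y):                          # Hamilton equations (hamilt)
    r, p = y
    return [p / m, -v * a * Q(t) - m * omega**2 * r]

r0, p0, T = 0.4, -0.2, 3.0
r, p = solve_ivp(rhs, (0.0, T), [r0, p0], rtol=1e-10, atol=1e-12).y[:, -1]

# driving integrals appearing in (coord)/(coord1)
I_r = quad(lambda s: Q(s) * np.sin(omega * (T - s)) / (m * omega), 0, T)[0]
I_p = quad(lambda s: Q(s) * np.cos(omega * (T - s)), 0, T)[0]

c, s = np.cos(omega * T), np.sin(omega * T)
A = np.array([[c, -s / (m * omega)], [m * omega * s, c]])
x0 = A @ (np.array([r, p]) + v * a * np.array([I_r, I_p]))
print(x0, "should equal", (r0, p0))
\end{verbatim}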
The selfconsistency requirement (\ref{self}) leads to a determination of $Q(t)$, \beq Q(t)&=&a <r_0> {\rm cosh}\sqrt{\Omega^2-\omega^2} \,t \nonumber\\ &+&{a \over m\omega\sqrt{\Omega^2-\omega^2}}<{p \over m}>{\rm sinh}\sqrt{\Omega^2-\omega^2} \, t \label{se} \eeq where \beq \Omega^2&=&-{v a^2\over m} N\nonumber\\ N&=&\int {drdp \over (2\pi \hbar)^3} f_0(r,p)\nonumber\\ < r_0>&=&\int {drdp \over (2\pi \hbar)^3} r f_0(r,p)\nonumber\\ <{ p_0 \over m}>&=&\int {drdp \over (2\pi \hbar)^3} { p \over m} f_0(r,p)\nonumber\\ \eeq are expressed by moments of the initial distribution. We have thus given an exact solution of the selfconsistent 3-D Vlasov equation. It is interesting that for $\Omega>\omega$ we have a positive Lyapunov exponent $\sqrt{\Omega^2-\omega^2}$, while in the opposite case the solution oscillates. We would like to point out that (\ref{sol}) represents a general solution of any Vlasov equation if we understand $\hat {\bf A} {\bf x} +{\bf C}$ as a nonlinear transformation solving the corresponding Hamilton equations. In analogy to the foregoing chapter we can now coarse grain this exact solution. In this way we are able to check whether the entropy increase observed so far is due to the spreading of the test-particles with Gaussian width. If it were, we would observe an entropy increase even if we start with an equilibrium solution. We will now demonstrate that this is not the case; instead, we have a nonlinear mode coupling. The actual calculation is performed in appendix \ref{melfc}. We choose a Gaussian initial distribution. The entropy finally reads, from (\ref{entra}), \beq &&S = 3\log{(2\pi e) } +\frac 3 2 \log \left \{ \frac 1 2 (\sigma_p^2+m^2 \omega^2 \sigma_r^2)({1\over \lambda m^2\omega^2}+{1\over \mu})\right .\nonumber\\ &&\left . + ({1 \over \lambda \mu}\!+\!\sigma_p \sigma_r)\!+\! \frac 1 2 (m^2 \omega^2 \sigma_r^2\!-\!\sigma_p^2)({1\over \lambda m^2\omega^2}\!-\!{1\over \mu})\cos(2 \omega t) \right \}.\nonumber\\&& \label{entrb} \eeq The entropy merely oscillates around a stationary value: the Gaussian initial distribution leads to {\it no} stationary solution. This shows that the entropy increase observed so far is not due to the spreading of Gaussians but due to the form of the initial distribution. Other initial distributions would lead to an increase of entropy, which can be checked e.g. with ground state Fermi functions. Now we turn to the second question, concerning the interchange of {\it coarse graining} and {\it solution of the kinetic equation}. While so far we have solved the exact Vlasov equation and coarse grained the solution afterwards, we now reverse the procedure and solve the coarse grained Vlasov equation (\ref{vlas1}) with (\ref{coll}). Recall that the linearization with respect to the width is exact for the model considered here. The entropy in this case is calculated from (\ref{solb}) with the result \beq &&S = 3\log{2 \pi e } +\frac 3 2 \log \left \{ {1\over \lambda \mu}+\frac{1}{2} (\sigma_p^2-m^2 \omega^2 \sigma_r^2) \right .\nonumber\\ &&\times \left . ({1\over \lambda m^2\omega^2} -{1\over \mu}-{\sigma_p^2\over m^2 \omega^2}+\sigma_r) (1-\cos(2 \omega t)) \right \}. \eeq Comparing with (\ref{entrb}) we see a different expression; however, the oscillating behavior remains. Within linear order in $\sigma$ both expressions differ by a constant $2{\sigma_p^2 \over \mu}+2 {\sigma_r^2 \over \lambda}$.
The difference between these two expressions, which corresponds to the interchange of coarse graining and dynamics, is explained by the use of the same initial distribution in both cases; we then obtain a different dynamical behavior. If we additionally coarse grained the initial distribution (\ref{ftfo}) we would have to replace $\hat {\bf \Lambda}$ by $\hat {\bf \Lambda} +\hat {\bf \Sigma}^{-1}$. Instead of (\ref{M}) we would obtain $\hat M=(\hat {\bf A}^T (\hat {\bf \Lambda} +\hat {\bf \Sigma}^{-1})\hat {\bf A})^{-1} -\hat {\bf B} $, and the resulting entropy agrees exactly with (\ref{entrb}). As already pointed out, this would not be the case for nonlinear models other than (\ref{pot}), since then the selfconsistent condition (\ref{self}) is affected by the coarse graining itself. \subsection{Consequences} This observation has some practical consequences. Since all numerical calculations solve the coarse grained Vlasov equation instead of the exact one, which corresponds to first coarse graining and then solving, one should not expect the correct dynamical behavior when starting from a fixed initial distribution. Instead, the correct procedure is to first coarse grain the initial distribution with the Gaussians of the test-particles used, and then solve the time evolution of this distribution numerically. The refolding at any time step then yields the exact solution of the Vlasov equation for such simple models as described here, where the selfconsistency condition is not altered by the coarse graining. It is important to stress again that the two orderings of solving and coarse graining are only equivalent in the linear model (\ref{pot}), which we have solved exactly. For more realistic potentials the selfconsistency condition (\ref{self}) is itself altered by the coarse graining, which leads to an essential nontrivial change in the dynamics; the two methods will then lead to different results. The nonlinear dynamics is changed non-trivially depending on whether one coarse-grains first and solves afterwards, or solves first and coarse-grains afterwards. However, in order to diminish the numerical error due to coarse graining, the refolding should at least be performed. \subsubsection{Time scale of entropy production} From the chapter above we see that the entropy production due to coarse graining in a harmonic external potential oscillates like $1-\cos{(2 \omega t)}$. We can use this fact to define the typical time scale of entropy production \beq \tau_c={\pi \over 2 \omega}\approx {\pi R \over 2}\sqrt{{m \over U_{\rm ext}(0)}} \eeq where we used (\ref{om}). For a typical nuclear situation we obtain therefore $\tau_c\approx 4.8 A^{1/3}$fm/c, such that for $^{16}O$ we get $\tau_c\approx 12$fm/c and for $^{208}Pb$ we have $\tau_c\approx 28$fm/c. It is remarkable that the time scale on which the entropy changes is independent of the internal interaction used. This clearly underlines the Landau-damping type of the dissipation: coarse graining produces no genuine dissipation. \section{Stationary solution of coarse grained Vlasov equation} We will demonstrate in this chapter that the solution of the general coarse grained Vlasov equation approaches a modified Boltzmann limit for long times, provided a stationary solution is approached at all. The example in chapter \ref{IIc} has shown that this is not always the case.
Only for appropriate initial conditions, which will be characterized in chapter \ref{stab}, will we obtain stationary solutions due to the additional terms in the Vlasov equation (\ref{vlas1}). \subsection{Solution of coarse grained Vlasov equation} We want to consider the solution of the kinetic equation (\ref{vlas1}) under the assumption of arbitrary time dependent mean-fields. Therefore we solve the following partial differential equation, neglecting the selfconsistency in $V$ at first and restoring it later. We consider \begin{eqnarray} {\pa t} f(prt) &+&\Phi(r,t) {\pa r}{\pa p} f(prt) +{p \over m} {\pa r} f(prt) \nonumber\\ &-&{\pa r} V(r,t) {\pa p} f(prt)=0 \label{vlas2} \end{eqnarray} with $\Phi(r,t)=({\sigma_p^2 \over m}-\sigma_r^2{\partial^2 \over \partial r^2} V(r,t))$ and the boundary conditions $f(r,p,0)=f_0(rp), \int dp dr f(rpt)/(2 \pi \hbar)^3=N$. This equation is of parabolic type and, as an initial value problem, is well defined with a unique solution. If a stationary solution is approached, it is therefore unique; due to the dependence of $V$ on the distribution function itself, i.e. the selfconsistency, it is however only given in implicit form. Because the problem is uniquely defined, it is enough to find a special representation of the solution. \subsubsection{Representation of stationary solution} The stationary solution of (\ref{vlas2}) can be found by separation of variables. Assuming $f_{\rm stat}(pr)=f_P(p)f_R(r)$ we have \begin{equation} f_{\rm stat}^n(pr)={\rm const} \times {\rm exp} \left [-c_n({p^2 \over 2m}+\int\limits^r dr' {\partial_r' V(r') \over 1 - c_n \Phi(r')})\right ],\label{solut} \end{equation} with the separation constant $c_n$, and this form holds for vectors $p,r$. The general stationary solution is given by superposition of these $c_n$-dependent expressions (\ref{solut}). Linearizing in the coarse graining $\sigma$ leads us to \beq f_{\rm stat}(p,r)&=&\sum\limits_n a_n {\rm e}^{-c_n ({p^2 \over 2 m}+V(r))} \nonumber\\ &\times&(1- c_n^2 ({\sigma_p^2 \over m} V(r) -{\sigma_r^2 \over 2}(V'(r))^2 ) )\label{soluta} \eeq from which we deduce the equilibrium density distribution to have the form \beq n(r)&=&\sum\limits_n a_n ({m \over 2 \pi \hbar^2 c_n})^{3/2} {\rm e}^{-c_n V(r)} \nonumber\\ &\times&(1- c_n^2 ({\sigma_p^2 \over m} V(r) -{\sigma_r^2 \over 2}(V'(r))^2 ) ). \eeq Note that the summation can also be replaced by an integration, which would translate into continuous functions $c_n$ and $a_n$. For legibility we discuss only the discrete case in the following. With this solution we have achieved our goal of presenting a stationary solution of the modified Vlasov equation. Because the initial problem was uniquely defined, this solution is the unique stationary one. It has to be remarked that due to the selfconsistency (the dependence of $V$ on the distribution function itself) equation (\ref{soluta}) is an implicit representation of the solution. \subsection{Determination of $c_n$} \label{cn} The open expansion coefficients $c_n$ are completely determined by the initial distributions. This can be seen as follows. To solve the Vlasov equation we can rewrite the solution in terms of Hamilton equations with a time dependent potential. This time dependence comes from the selfconsistent potential and/or from an external potential. The selfconsistency is then represented by a nonlinear determining equation similar to (\ref{self}).
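As an aside, the separable form (\ref{solut}) is easy to verify numerically. The following sketch of our own (one dimension, a hypothetical fixed potential, a single $c_n$) checks by finite differences that it annihilates the stationary part of (\ref{vlas2}):
\begin{verbatim}
# Sketch (our own; 1D, hypothetical potential): check that the ansatz
# (solut) solves the stationary limit of (vlas2).
import numpy as np
from scipy.integrate import cumulative_trapezoid

m, sig_r, sig_p, c = 1.0, 0.15, 0.2, 0.8      # c plays the role of one c_n
r = np.linspace(-3, 3, 1201)
p = np.linspace(-3, 3, 1201)

V   = 1.0 / (1.0 + r**2)                       # fixed external potential
Vp  = np.gradient(V, r)                        # V'(r)
Vpp = np.gradient(Vp, r)                       # V''(r)
Phi = sig_p**2 / m - sig_r**2 * Vpp            # Phi(r) of (vlas2)

# g(r) = int^r dr' V'(r') / (1 - c Phi(r')), the exponent in (solut)
g = cumulative_trapezoid(Vp / (1 - c * Phi), r, initial=0.0)
f = np.exp(-c * (p[None, :]**2 / (2 * m) + g[:, None]))

fr  = np.gradient(f, r, axis=0)
fp  = np.gradient(f, p, axis=1)
frp = np.gradient(fr, p, axis=1)

residual = Phi[:, None] * frp + p[None, :] / m * fr - Vp[:, None] * fp
print("max |residual| / max f =", np.abs(residual).max() / f.max())
# small, and it vanishes with grid refinement
\end{verbatim}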
The Hamilton equations for the trajectories can in principle be solved with integration constants $c_i$, as we demonstrated in the model. Because these are constants at any time, we obtain a transformation between the initial coordinates and the coordinates at a later time, including the time dependence, as $(r_0,p_0)=A[r,p,t]$. Since we want to solve the Vlasov equation as an initial value problem with a given initial distribution $f_0(r_0,p_0)$, the time dependent solution is given formally by $f(r,p,t)=f_0(A[r,p,t])$. {\it If} this solution approaches a stationary state, which depends on the initial distribution as well as on the interaction potential, we obtain with the result of the foregoing chapter \beq f_0(A[r,p,\infty])&=&\sum\limits_n a_n {\rm e}^{-c_n ({p^2 \over 2 m}+V(r))} \nonumber\\ &\times&(1- c_n^2 ({\sigma_p^2 \over m} V(r) -{\sigma_r^2 \over 2}(V'(r))^2 ) ).\label{56} \eeq By integrating over $r$ and performing an inverse Laplace transform it is possible to extract the $c_n$ uniquely. In the next chapter we will find that only those initial distributions which obey $c_n<{m\over 2 \sigma_p^2}$ lead to stationary solutions. Since the $c_n$ are determined by the initial distribution and the potential used, we can decide which initial distributions will lead to stationary solutions for a given interaction potential and coarse graining $\sigma_p$. \subsection{Global stability of the stationary solution} \label{stab} Since the general possible solution of the stationary coarse grained Vlasov equation covers a range of unphysical solutions which will never be reached, we have to ask which solutions are the stable ones. Therefore we employ a linear response analysis of the Vlasov equation (\ref{vlas2}) with respect to an external potential $U_{ext}$. We assume a homogeneous equilibrium characterized by the distribution $f_0(p)$. Then equation (\ref{vlas2}) can be linearized according to $f(prt)=f_0(p)+\delta f(prt)$ as \beq (-i \omega + i {p k \over m}) \delta f(pk\omega) &=& i V'(n_0) k \partial_p f_0 \delta n(k,\omega) \nonumber\\ &+& i k \partial_p f_0 U_{ex}(k\omega)\nonumber\\ &-&i {\sigma_p^2 \over m} k \partial_p \delta f(pk\omega), \eeq where we have Fourier transformed the $r, t$ coordinates. This can easily be solved for $\delta f$. Integrating this solution over $p$, an algebraic solution for $\delta n$ is obtained, \beq \delta n = U_{ex} {\Pi(k\omega) \over 1- V'(n_0) \Pi(k\omega)} \label{dn} \eeq with the polarization function \beq \Pi(k\omega)&=&\int {dp \over (2 \pi \hbar)^3}(1- {\sigma_p^2 \over m} {k \partial_p \over \omega-{p k\over m}}) {k \partial_p f_0 \over \omega-{p k \over m}}\nonumber\\ &=&\int {dp \over (2 \pi \hbar)^3}(1+ {k^2 \sigma_p^2 \over m^2} {1 \over (\omega-{p k\over m})^2}) {k \partial_p f_0 \over \omega-{p k \over m}}. \label{Pi} \eeq We see that the usual RPA response function is modified by a structure function \beq M^2(kp\omega)=1+{k^2 \sigma_p^2 \over m^2} {1 \over (\omega-{p k\over m})^2}. \label{appr} \eeq This expresses the fact that the elementary particles considered here (the test-particles) have a finite width, i.e. an internal structure. For large distances, $k\rightarrow 0$, we see that $M$ approaches 1, which means that this structure is only resolvable at smaller distances. A known approach to find the structure functions ${\tilde M}$ inside the RPA polarization function is to include vertex corrections.
The resulting response functions can generally be written as \beq \Pi(k \omega)=\int {dp \over (2 \pi \hbar)^3} {\tilde M}^2(kp\omega) {k \partial_p f_0 \over \omega-{p k \over m}}. \eeq By approximating this structure function $\tilde M$ by (\ref{appr}) we may thus simulate higher order correlations by a finite momentum width of the test-particles. The influence of finite size can also be compared with this expression \cite{MVFLS98,M99}. The equilibrium solution is stable as long as the increment of the collective mode does not change sign; otherwise the collective mode would increase exponentially with time and the solution would be unstable with respect to small perturbations. The collective mode is given in linear response by the complex solution of $1-V'(n_0) \Pi(k\omega)=0$ of eq. (\ref{dn}). For small increments the sign of the imaginary part is completely determined by the sign of ${\rm Im} \Pi$ of (\ref{Pi}). Let us check this stability condition for the stationary solution. From (\ref{solut}) we have \beq f_{\rm stat}(p)&=&\int dr f_{\rm stat}(p,r)=\sum\limits_n a_n {\rm e}^{-c_n ({p^2 \over 2 m})} d_n \eeq where $d_n$ is the spatial integral over the potential dependent part. From (\ref{Pi}) we obtain as criterion of stability \beq &&{\rm Im} \Pi=-(1+{k^2\sigma_p^2\over m^2}{\partial^2\over \partial \omega^2}){m^2 \omega \over 2 \pi k}\sum\limits_n a_n d_n {\rm e}^{-{c_n\over 2 m} \left ({m \omega\over k}\right )^2}\ge 0.\nonumber\\&&\label{ssum} \eeq To ensure stable solutions we demand that each term in the sum be positive. We now observe two sources of possible instability: (i) The expansion coefficients $a_n$ can, even without coarse graining, become negative for sufficiently pathological initial distributions. For such initial distributions we would not obtain stable solutions from the Vlasov dynamics itself. (ii) Provided the original distribution is stable, $a_n>0$, we find an additional criterion for stability if we use coarse graining. Demanding the coefficients to be positive leads us from (\ref{Pi}) to the condition \beq 1+\sigma_p^2 (-{3 c_n\over m} +c_n^2 ({\omega \over k})^2)\ge 0. \eeq In the long wave length limit ${m\omega \over k}\rightarrow \infty$ we obtain the most restrictive condition \footnote{The less restrictive condition for stability reads \beq c_n\not\in {3 k^2\over 2 m \omega^2}\left (1- \sqrt{1-{4 m^2\omega^2\over 9 k^2 \sigma_p^2}},1+ \sqrt{1-{4 m^2 \omega^2\over 9 k^2 \sigma_p^2}}\right ). \label{co} \eeq In the general case with arbitrary wave length we see that (\ref{co}) is fulfilled if \beq \sigma_p<\frac 2 3 {m |\omega| \over k}. \label{c2} \eeq We conclude that all solutions (\ref{solut}) are stable if the coarse graining is smaller than the typical scales in the system (\ref{c2}). If (\ref{c2}) is not fulfilled, then only those solutions (\ref{solut}) whose expansion coefficients $c_n$ are smaller than the inverse coarse graining via (\ref{c1}) are stable. Therefore (\ref{c1}) is the most restrictive condition.} \beq c_n<{m\over 2 \sigma_p^2}. \label{c1} \eeq If this condition is not fulfilled we can find $m \omega/k$ combinations such that the sum in (\ref{ssum}) changes sign and the solution is unstable. In other words, (\ref{c1}) is a necessary condition for the stability of the stationary solution.
Since the $c_n$ are determined by the initial distribution according to (\ref{56}) and by the potential, as discussed in chapter \ref{cn}, we see that with coarse graining only special initial distributions can lead to stationary long time distributions for a given potential. Interestingly, for a Maxwellian initial distribution, $c_n=1/k_B T$ and $a_n={\rm const}\, \delta_{n,0} $, the coarse graining has to obey \beq \sigma_p^2<{m \over 2} k_B T \eeq to provide stable solutions. This gives the intuitively clear interpretation that the coarse graining energy $\sigma_p^2/2 m$ should be smaller than a quarter of the kinetic energy in order to render numerical investigations stable. \section{Thermodynamic consequences} We have reached our goal of showing that phase space coarse graining leads to dissipative features of reversible kinetic equations, and that the special coarse graining with Gaussian width forces the system towards a sum of modified Boltzmann-like distributions. We have shown algebraically the numerically observed fact that in ordinary Boltzmann codes two different limiting values of the distribution functions are approached \cite{RS95b}: from the one-particle dynamics a Boltzmann-like distribution evolves, and from the quantum Boltzmann collision integrals a Fermi distribution. In contrast to earlier work we find that even for an infinite number of test-particles the Boltzmann limit is approached, due to the finite width of the test-particles. From the stationary solution of the coarse grained Vlasov equation found above we can now derive thermodynamic consequences. We assume for simplicity a Maxwellian initial distribution which may evolve in time via the coarse grained Vlasov equation. Then the coefficients $c_n$ of the stationary solution (\ref{soluta}) reduce to the single parameter $c_n=\beta=1/k_B T$, and \beq &&a_n=\delta_{n,0} {{N \lambda_T^3 \over g} \over \int d r {\rm e}^{-\beta V(r)} \left \{1-\beta^2 [V(r) {\sigma_p^2 \over m}- {\sigma_r^2 \over 2} (\partial_r V(r))^2]\right \}} \nonumber\\&& \label{64} \eeq by the requirement of particle conservation, namely that the integral of (\ref{soluta}) over momentum and space should equal the particle number $N$. Here $g$ describes the spin-isospin degeneracy and $\lambda_T^2=2 \pi \hbar^2/m k_B T$, and we use the linearization in $\sigma$ since the original equation (\ref{vlas2}) is valid only up to orders $o(\sigma^4)$. From the expression (\ref{soluta}) we see that the coarse graining leads to a modification factor of the distribution function $f_{\rm stat}(pr)$ in comparison with the Maxwell-Boltzmann distribution function $f_M(pr)={\rm const \,exp} (-\beta p^2/2m -\beta V(r))$, \beq &&{f_{\rm stat}(pr) \over f_M(pr)} = {1-\beta^2 \left (V(r){\sigma_p^2 \over m}- {\sigma_r^2 \over 2}(\partial_r V(r))^2\right ) \over 1-\beta^2 < \left (V(r){\sigma_p^2 \over m}- {\sigma_r^2 \over 2}(\partial_r V(r))^2\right )> } \nonumber\\ &=&1-\beta^2 \left ({\sigma_p^2 \over m}\left [V(r)-<V(r)>\right ] \right . \nonumber\\ &&\left . \qquad \qquad - {\sigma_r^2 \over 2}\left [(\partial_r V(r))^2-<(\partial_r V(r))^2>\right ]\right ) \label{thermo}\nonumber\\&& \eeq where the spatial average is abbreviated as\\ $<a>=\int dr {\rm e}^{-\beta V(r)} a/\int dr {\rm e}^{-\beta V(r)}$. As a consequence, no contribution of coarse graining occurs for mean values of momentum dependent observables within the lowest order of the coarse graining width.
However, for space dependent observables one obtains a modified thermodynamics: $\sigma_p$ couples to the mean spatial fluctuation of the potential and $\sigma_r$ couples to the fluctuation of the potential derivative. Consequently, only the {\it spatial} thermodynamical quantities will be influenced by the underlying coarse graining. We can consider this modification factor as the expression of spatial correlations induced by the momentum and spatial coarse graining. \begin{figure} \psfig{file=tf.eps,width=9cm,angle=-90} \caption{The influence of different coarse graining parameters on the selfconsistent potential. In the upper panel the space coarse graining is zero, and in the lower panel the momentum coarse graining is set to zero. While the momentum coarse graining leads to an increase of the selfconsistent potential, the spatial coarse graining enhances the gradient and produces a skin.} \label{tf} \end{figure} \subsection{Selfconsistency} Now we return to the question of selfconsistency. We have so far tacitly assumed that (\ref{vd}) or (\ref{vr}) are completed by a selfconsistent potential $V(r)$. Without coarse graining this would be accomplished by solving the Thomas--Fermi-like equation (\ref{vd}) with $f\propto \exp\{-\beta (p^2/2m+V(r))\}$, \beq V(r)={N \over g}{\int dr' V_0(r-r') {\rm e}^{-\beta V(r')}\over \int dr' {\rm e}^{-\beta V(r')}} \label{tf1} \eeq and analogously for the degenerate case. For the coarse grained case we now see from (\ref{soluta}) and (\ref{64}) that the potential in the distribution function $f$ has to be replaced by \beq V_{\rm eff}(r)=V(r)+\beta \left [ {\sigma_p^2 \over m}V(r)-{\sigma_r^2 \over 2} (\partial_r V(r))^2 \right ]. \label{pots} \eeq Therefore one can solve the Thomas--Fermi-like equation (\ref{tf1}) as before, but replace $V\rightarrow V_{\rm eff}$ afterwards. In the following let us assume that we have solved the Thomas--Fermi equation (\ref{tf1}), and let us discuss the influence of the coarse graining. We see that the coarse graining in momentum space increases the potential (\ref{pots}) globally. In contrast, the coarse graining in space causes a widening of the potential. This is demonstrated in figure \ref{tf}, where we assume a Woods--Saxon potential for $V(r)$ in $^{208}Pb$ and plot the change of the potential for different coarse graining parameters at a temperature of $10$ MeV. While the momentum coarse graining leads to an overall increase, the spatial graining sharpens the gradient of the potential and causes the appearance of a skin. One can consider this as a modification of the Thomas--Fermi equation by the finite width of the phase space graining. The coarse graining obviously produces stronger binding and larger rms radii. The appearance of a skin is important to note as a relic of the test-particle width. \subsection{Consequences on collective modes} As a practical application we now show that the coarse graining, which is numerically unavoidable, leads to false predictions concerning the energy and the width of giant resonances. For illustrative purposes we restrict ourselves to the low temperature $T=0$ case. The collective mode is given by the complex zeros $\omega=E-i\Gamma$ of the denominator of (\ref{dn}). This solution provides us with the centroid energy $E$ and the damping $\Gamma$ of the collective mode.
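Before turning to the dispersion relation, we note that the effective potential (\ref{pots}) behind figure \ref{tf} is easy to reproduce. The sketch below (our own, with generic Woods--Saxon parameters rather than the ones used for the figure) separates the two effects:
\begin{verbatim}
# Sketch (our own; generic 208Pb-like Woods--Saxon) of V_eff from (pots):
# momentum graining deepens V globally, spatial graining steepens it.
import numpy as np

T, m = 10.0, 938.0                  # temperature [MeV], mass [MeV/c^2]
beta = 1.0 / T
r = np.linspace(0.0, 12.0, 600)     # radius [fm]
V = -44.0 / (1.0 + np.exp((r - 6.6) / 0.55))   # Woods--Saxon [MeV]
Vp = np.gradient(V, r)              # V'(r) [MeV/fm]

def V_eff(sig_p, sig_r):            # Eq. (pots); sig_p [MeV/c], sig_r [fm]
    return V + beta * (sig_p**2 / m * V - 0.5 * sig_r**2 * Vp**2)

for sp, sr in [(0.0, 0.0), (60.0, 0.0), (0.0, 0.5)]:
    w = V_eff(sp, sr)
    print(f"sig_p={sp:5.1f}, sig_r={sr:3.1f}: depth={w.min():7.2f} MeV, "
          f"max|grad|={np.abs(np.gradient(w, r)).max():6.2f} MeV/fm")
\end{verbatim}
With $\sigma_p\neq 0$ the well becomes deeper everywhere (stronger binding), while with $\sigma_r\neq 0$ the surface gradient grows and a dip — the skin — appears where $(V')^2$ is largest.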
We find for $T=0$ the dispersion relation \beq &&1-V \Pi(q,\omega)\equiv 1+F_0 (1+{\sigma_p^2\over 2 p_f^2 }{\partial^2\over \partial u^2})\nonumber\\ &&\times \left [-1+{u \over 2} {\rm ln}\left ({1+u \over 1-u}\right)-i{\pi \over 2}u \Theta(1-u) \right ]=0\label{lint0} \eeq where we have introduced $u=m\omega/(p_f k)$, the Landau parameter $F_0={2 V'(n_0) m p_f \over \pi^2 \hbar^3}$ and the Fermi momentum $p_f$. \begin{figure} \psfig{file=width.eps,width=9cm} \caption{The centroid energy and width of the collective motion versus the pseudoparticle width in the low-temperature linear regime (\protect\ref{lint0}). The values are scaled to the free ones ($\sigma_p=0$).} \label{width} \end{figure} In Figure \ref{width} we give the ratio of the centroid energy and damping to the corresponding un-coarse-grained ones for a typical nuclear situation. We see that with increasing width of the coarse graining the centroid energy decreases and the width increases. This is understandable: due to coarse graining we lower the particle-hole threshold, resulting in a lower centroid energy and an artificial damping at the same time. Since coarse graining is unavoidable in numerical implementations, the extraction of the damping width of giant resonances should be critically revised with respect to the pseudoparticle width used. In contrast to the coarse graining in momentum space, we find a different behavior in the spatial domain. In figure \ref{ca} we plot a realistic numerical solution of the full Vlasov equation describing monopole resonances in $^{40}Ca$. Here the spatial width of the test-particles has been varied. We see that the centroid energy increases with increasing test-particle width. This behavior can be understood from figure \ref{tf}. The spatial coarse graining enlarges the gradient of the potential and therefore increases the restoring force. This translates into a higher incompressibility and a higher collective energy. \begin{figure} \psfig{file=ca.eps,width=9cm} \caption{The centroid energy of the monopole resonance in $^{40}Ca$ versus the spatial test-particle width within a Vlasov simulation. The centroid energy increases with increasing width.} \label{ca} \end{figure} A more complete discussion of the solution of the Vlasov equation, and also an application to giant resonances far from the stability line, can be found in \cite{MFW00}. \section{Summary} The dissipative features of coarse grained Vlasov equations are investigated. The coarse graining arises from the numerical simulation techniques. We have calculated explicitly the entropy production, which is due to nonlinear mode coupling but not due to genuine dissipation. We find that a sum of modified Boltzmann distributions is approached by the coarse graining. Examples are shown where no stationary solution is approached at all, since the solution oscillates, and examples are given where a stationary solution can be reached. The different behavior is completely determined by the initial distribution and the potential used. The stability analysis leads to a criterion, depending on the width of the test-particles, which singles out the initial distributions that can lead to stationary solutions. We have demonstrated with a special model that the two steps, coarse graining and dynamical evolution of the distribution function, are only interchangeable if the initial distribution is also coarse grained. It is argued that this property does not hold in general, due to the feedback of the selfconsistent potential.
Because the coarse graining is unavoidable in numerical implementations, the Vlasov codes should be critically revised with respect to the question whether they really start from a coarse grained initial distribution. Thermodynamical consequences are discussed. The correlated contribution to any thermodynamical observable is calculated explicitly; it is given in terms of the space and momentum widths. We find that the selfconsistently determined nuclear potential is overestimated by test-particle simulations with a finite width of the test-particles. This should have implications for Thomas--Fermi calculations, where an overbinding is found. The linear response function is calculated for homogeneous systems and the spectrum of density fluctuations is presented. It is found that the RPA polarization function becomes modified due to the finite momentum width of the test-particles. These modifications can be understood as an internal structure of the particles. This structure function is formally compared with vertex corrections to the RPA. It is pointed out that the higher order vertex corrections beyond RPA can be cast into similar structure functions. Therefore we suggest a method of simulating higher order correlations in one-body treatments by choosing an appropriate momentum width of the test-particles. As a practical consequence the collective mode is analyzed. We find that the coarse graining enhances the damping width and lowers the centroid energy of collective modes, e.g. giant resonances. The dependence on the coarse graining width is given quantitatively, and the corresponding simulations should be revised. \section{Acknowledgments} The authors are especially indebted to P.G. Reinhard for many discussions and critical comments. The model IIIA has been contributed by him. J. Dorignac is thanked for critical reading and helpful comments. The friendly and hospitable atmosphere of LPC in Caen is gratefully acknowledged.
{ "attr-fineweb-edu": 1.922852, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction}\label{sec:intro} We study in this letter the dynamics of a chain obtained performing the continuous limit of a system of links and beads. This kind of problems has been addressed in the seminal paper of Edwards and Goodyear \cite{EdwGoo} using an approach based on the Langevin equation. Related papers dedicated to the statistical mechanics of a freely jointed chain in the continuous limit are for example \cite{kraemers,grosberg,mazars, a-e}. In Ref. \cite{FEPAVI1} it has been shown that it is possible to investigate the dynamics of such a chain in a path integral framework. The model which describes the fluctuations of the chain is a generalization of the nonlinear sigma model \cite{nlsigma} called the generalized nonlinear sigma model or simply GNLSM. In \cite{FEPAVI1} it has also been discussed the relation between the GNLSM and the Rouse model \cite{rouse,doiedwards}. Applications of the GNLSM have been the subject of Ref.~\cite{FEPAVI2}, in which the dynamic structure factor of the chain has been computed in the semiclassical approximation. The goal of the present work is to derive the probability distribution $Z(\mathbf r_{12})$ which measures the probability that in a given interval of time $\Delta t$ the average distance between two points of the chain $P_1$ and $P_2$ is $\mathbf r_{12}$. With respect to Ref.~\cite{FEPAVI2}, we use to perform the calculation a totally different approximation, which linearizes the GNLSM. Moreover, a background field method is adopted, in which the effects of the thermal fluctuations are considered in the background of a fixed chain configuration chosen among the classical solutions of the linearized equations of motion. We find out that classical configurations are particularly stable against the changes due to fluctuations. The latter become relevant only if they act on the chain for a significant amount of time. The material presented in this letter is divided as follows. The GNLSM is linearized exploiting a gaussian approximation of the functional Dirac delta function. Next, the probability distribution $Z(\mathbf r_{12})$ is computed within this approximation. The asymptotic form of $Z(\mathbf r_{12})$, which is valid in the case in which the chain is stiff, is derived. Finally, a discussion of the obtained result is presented. \section{The generalized nonlinear sigma model in the gaussian approximation}\label{sec2} Let us consider the partition function of the GNLSM: \beq{ Z=\int{\cal D}\mathbf R(t,\sigma) e^{-\tilde c\int_0^{t_f}\int_0^Nd\sigma \dot\mathbf R^2(t,\sigma)} \delta(|\mathbf R'|-\ell) }{parfun} with $\dot\mathbf R=\partial \mathbf R/\partial t$ and $\mathbf R'=\partial \mathbf R/\partial \sigma$. The boundary conditions at $t=0$ and $t=t_f$ of the field $\mathbf R$ are respectively given by $ \mathbf R(0,\sigma)=\mathbf R_0(\sigma)$ and $\mathbf R(t_f,\sigma)=\mathbf R_f(\sigma)$, where $\mathbf R_0(\sigma)$ and $\mathbf R_f(\sigma)$ represent given static conformations of the chain. The boundary conditions with respect to $\sigma$ are periodic: $ \mathbf R(t,\sigma)=\mathbf R(t,\sigma+N)$. It was shown in Refs. \cite{FEPAVI1} and \cite{FEPAVI2} that the above partition function describes the dynamics of a closed chain that is the continuous version of a freely jointed chain consisting of links and beads. 
The constant $\tilde c$ appearing in Eq.~\ceq{parfun} is given by: \beq{ \tilde c= c\ell\qquad\mbox{with}\qquad c=\frac{M}{4k_BT\tau L} }{ctildedef} Here $k_B$ denotes the Boltzmann constant, $T$ is the temperature and $\tau$ is the relaxation time which characterizes the ratio of the decay of the drift velocity of the beads. $M$ and $L$ represent the total mass and the total length of the chain respectively. Let us note that in Eq. \ceq{parfun} the trajectory of the chain has been parametrized with the help of the dimensionless parameter $\sigma$, which is related to the arc--length used in Refs. \cite{FEPAVI1} and \cite{FEPAVI2} by the relation $s=\ell \sigma$. The scale of length $\ell$ introduced with this parametrization is connected to the quantities $L$ and $N$ by the identity $N=\frac{L}{\ell}$. We remark also that, with respect to Refs.~\cite{FEPAVI1} and \cite{FEPAVI2}, the constraint $\mathbf R'^2=\ell^2$ has been replaced by the constraint $|\mathbf R'|=\ell$ exploiting the property of the functional Dirac delta function $\delta(\mathbf R'^2-\ell^2 )=\delta( |\mathbf R'|-\ell) $. A proof of this equation can be found in Ref.~\cite{FEPAVI1}. In order to deal with the delta function in Eq.~\ceq{parfun} we use the following gaussian approximation \cite{FEPAVI1}: \beq{ \delta(|\mathbf R'|-\ell)\sim \exp\left(\int_0^{t_f}dt\int_0^Nd\sigma\frac\nu2 \mathbf R^{\prime\, 2}\right) }{deltaapproxtwo} which is valid when the parameter $\nu$ is large, while $\ell$ is small. As a consequence, the partition function $Z$ becomes: \beq{ Z=\int{\cal D}\mathbf R\exp\left[-\int_0^{t_f}dt\int_0^Nd\sigma \left( \tilde c\dot\mathbf R^2+\frac \nu2\mathbf R^{\prime\, 2} \right)\right] }{parfunapp} After the approximation \ceq{deltaapproxtwo} the chain may be regarded as a gaussian chain consisting in a set of $N$ segments of average length $\ell$. \section{The probability distribution of the average position between two points of the chain} At this point we pick up two points $P_1$ and $P_2$ of the chain, for instance $\mathbf R(t,\sigma_1)$ and $\mathbf R(t,\sigma_2)$. We wish to study how the relative position between these two points changes due to thermal fluctuations. To this purpose, we introduce the following probability distribution: \begin{eqnarray} Z(\mathbf r_{12})&=&\int {\cal D} \mathbf R \exp\left[ -\int_0^{t_f}dt\int_0^Nd\sigma\left( \tilde c\dot\mathbf R^2+\frac\nu 2\mathbf R^{\prime\,2} \right) \right]\nonumber\\ &\times&\delta\left( \mathbf r_{12}-\int_{t_1}^{t_2}\frac{dt}{\Delta t} (\mathbf R(t,\sigma_2)-\mathbf R(t,\sigma_1)) \right)\label{zetar12} \end{eqnarray} We suppose that $0\le t_1\le t_2\le t_f$. To evaluate the path integral in Eq.~\ceq{zetar12} it is convenient to perform the splitting: $ \mathbf R(t,\sigma)=\mathbf R_{cl}(t,\sigma)+\mathbf r(t,\sigma)$. Here $\mathbf R_{cl}(t,\sigma)$ is a solution of the classical equations of motion $ \left[ \tilde c\frac{\partial^2}{\partial t^2} +\frac\nu2\frac{\partial^2}{\partial \sigma^2} \right]\mathbf R_{cl}=0$. The fluctuations around the classical background $\mathbf R_{cl}(t,\sigma)$ are taken into account by $\mathbf r(t,\sigma)$. The boundary conditions for $\mathbf R_{cl}(t,\sigma)$ at the initial and final instants are: $ \mathbf R_{cl}(t_f,\sigma)=\mathbf R_f(\sigma)$ and $\mathbf R_{cl}(0,\sigma)=\mathbf R_0(\sigma)$. With respect to $\sigma$ periodic boundary conditions are assumed: $ \mathbf R_{cl}(t_f,\sigma)=\mathbf R_{cl}(t,\sigma+N)$. 
The solution of the classical equations of motion is complicated due to the presence of the non-trivial boundary conditions and will not be reported here. It can be found in standard books on mathematical methods in physics, see for instance \cite{morsfesh}. The only important information related to the background fields that is needed in the probability distribution $Z(\mathbf r_{12})$ is the average relative position $\mathbf r_{cl}$ of the points $P_1$ and $P_2$ for a given classical conformation $\mathbf R_{cl}(t,\sigma)$: \beq{ \mathbf r_{cl}=\int_{t_1}^{t_2}\frac{dt}{\Delta t}\left( \mathbf R_{cl}(t,\sigma_2)-\mathbf R_{cl}(t,\sigma_1) \right) }{smarcldef} For the fluctuations $\mathbf r(t,\sigma)$ we choose Dirichlet boundary conditions in time, $ \mathbf r(t_f,\sigma)=0$, $\mathbf r(0,\sigma)=0$. The boundary conditions with respect to $\sigma$ are instead periodic: $ \mathbf r(t,\sigma)=\mathbf r(t,\sigma+N)$. After a few calculations, it is possible to rewrite $Z(\mathbf r_{12})$ in the form: \beq{ Z(\mathbf r_{12})=\int_{-\infty}^{\infty}d^3\mathbf k e^{i\mathbf k\cdot(\mathbf r_{12}-\mathbf r_{cl} )}e^{-A_{cl}}Z(\mathbf k) }{zr12castone} where $\mathbf r_{cl}$ and $ A_{cl}=\int_0^{t_f}dt\int_0^Nd\sigma\left( \tilde c\dot\mathbf R^2_{cl}+\frac{\nu}2\mathbf R^{\prime\,2}_{cl} \right)$ take into account the contribution of the classical background, while the fluctuations appear in $Z(\mathbf k)$: \beq{ Z(\mathbf k)=\int{\cal D}\mathbf r\exp\left[ -\int_0^{t_f}dt\int_0^Nd\sigma\left( \tilde c\dot\mathbf r^2+\frac\nu2\mathbf r^{\prime\,2}+i\mathbf J\cdot\mathbf r \right) \right] }{zr12zk} In Eq.~\ceq{zr12zk} the external current $\mathbf J$ is given by: \beq{\mathbf J(t,\sigma)=\frac{\mathbf k}{\Delta t} \left( \delta(\sigma-\sigma_2)-\delta(\sigma-\sigma_1) \right) \theta(t_2-t)\theta(t-t_1) } {extcurdef} This current has been introduced to provide a convenient way of rewriting the integral $\int_{t_1}^{t_2}\frac{dt}{\Delta t}(\mathbf r(t,\sigma_2)-\mathbf r(t,\sigma_1))$. Let us now concentrate on the computation of the partition function of the fluctuations $Z(\mathbf k)$. The gaussian integration over the fields $\mathbf r(t,\sigma)$ may easily be carried out and gives as a result: \beq{ Z(\mathbf k)=Ce^{W(\mathbf k)} }{ztildek} where $C$ is a constant and \beq{ W(\mathbf k)= \frac 12\int_0^{t_f}dtdt'\int_0^Nd\sigma d\sigma' G(t,\sigma;t',\sigma')\mathbf J(t,\sigma)\cdot \mathbf J(t',\sigma') }{wtildek} In the above equation $G(t,\sigma;t',\sigma')$ denotes the Green function satisfying the equation: \beq{ \left[ \tilde c\frac{\partial^2}{\partial t^2}+\frac{\nu}{2}\frac{\partial^2}{\partial \sigma^2} \right]G(t,\sigma;t',\sigma')=\delta(t-t')\delta(\sigma-\sigma') }{grefundefequ} According to our settings, we choose for $G(t,\sigma;t',\sigma')$ Dirichlet boundary conditions at the instants $t=t_f$ and $t=0$, and periodic boundary conditions in $\sigma$ and $\sigma'$.
To solve Eq.~\ceq{grefundefequ}, we decompose $G(t,\sigma;t',\sigma')$ in a Fourier series as follows: \beq{ G(t,\sigma;t',\sigma')=\sum_{n=-\infty}^{+\infty}g_n(t,t')e^{-2\pi i\frac\ell L(\sigma-\sigma')n} }{gexp} Substituting also the Fourier expansion of the periodic Dirac delta function $\delta(\sigma-\sigma')=\sum_{n=-\infty}^{+\infty} \frac\ell L e^{-2\pi i\frac\ell L(\sigma-\sigma')n} $ in Eq.~\ceq{grefundefequ} and solving for $g_n(t,t')$, we obtain for $n=0$: \beq{g_0(t,t') =\frac 1{Lc}\theta(t'-t)t\frac{t'-t_f}{t_f}+ \frac1{Lc}\theta(t-t')t'\frac{t-t_f}{t_f} }{gzttp} and for $n\ne 0$: \begin{eqnarray} g_n(t,t')&=&A_n\theta(t-t')\sinh\beta_nt'\sinh\beta_n(t-t_f)\nonumber\\ &+&A_n\theta(t'-t)\sinh\beta_nt\sinh\beta_n(t_f-t')\label{gnttp} \end{eqnarray} with \beq{ \beta_n=\frac{|n|\pi}{L}\sqrt{\frac{2\nu \ell}{c}} }{betandef} \beq{ A_n=-\frac{1}{\sqrt{2\nu\ell c}}\,\, \frac{1}{|n|\pi\sinh\beta_nt_f} }{Andef} In Eqs.~\ceq{gzttp} and \ceq{gnttp} $\theta(t)$ denotes the Heaviside $\theta$-function: $\theta(t)=1$ for $t\ge 0$ and $\theta(t)=0$ for $t<0$. To complete the calculation of $Z(\mathbf k)$ in Eq.~\ceq{ztildek} we rewrite $W(\mathbf k)$ as follows: \beq{ W(\mathbf k)=\frac 12\sum_{n=-\infty}^{+\infty}\int_0^{t_f}dtdt'\int_0^Nd\sigma d\sigma' g_n(t,t')e^{-2\pi i \frac \ell L (\sigma-\sigma')n}\mathbf J(t,\sigma)\cdot \mathbf J(t',\sigma') }{wkbar} It is easy to show that the only non-zero contributions to the above integral are those with $n\ne 0$. After integrating over the variables $\sigma,\sigma'$ and over one of the time variables, we obtain: \begin{eqnarray} W(\mathbf k)&=&\frac{2\mathbf k^2}{(\Delta t)^2}\sum_{n\ne 0} \frac {A_n}{\beta_n} \left[ 1-\cos 2\pi n\left(\frac {\ell_2-\ell_1}{L}\right) \right]\nonumber\\ &\times&\int_{t_1}^{t_2}dt\sinh\beta_n(t_f-t)(\cosh\beta_nt- \cosh\beta_nt_1) \label{wkbartwo} \end{eqnarray} where we have put $ \ell_2=\ell\sigma_2$ and $\ell_1=\ell\sigma_1$. Remembering that the coefficients $A_n$ defined in Eq.~\ceq{Andef} are all strictly negative, it is easy to realize that $W(\mathbf k)$ is negative too. This property of $W(\mathbf k)$ will be necessary in order to perform the remaining integration over $\mathbf k$ in the probability distribution $Z(\mathbf r_{12})$ of Eq.~\ceq{zr12castone}. A last integration over $dt$ in Eq.~\ceq{wkbartwo} yields the following result: \begin{eqnarray} W(\mathbf k)&=&\frac{2\mathbf k^2}{(\Delta t)^2} \sum_{n\ne 0} \frac{A_n}{\beta_n} \left[ 1-\cos 2\pi n\left( \frac{\ell_2-\ell_1}{L} \right) \right]\left[ \frac 12(t_2-t_1)\sinh\beta_nt_f-\frac{\cosh\beta_nt_f}{2\beta_n} \right.\nonumber\\ &-&\frac 1{4\beta_n}(\cosh\beta_n(t_f-2t_2) +\cosh\beta_n(t_f-2t_1))\label{wkbarexact} \\ &+&\left. \frac{1}{2\beta_n}(\cosh\beta_n(t_f+t_1-t_2)+\cosh\beta_n(t_f-(t_1+t_2))) \right]\nonumber \end{eqnarray} Substituting Eq.~\ceq{wkbarexact} back in Eq.~\ceq{ztildek} we get an exact expression for $Z(\mathbf k)$. To obtain the probability distribution $Z(\mathbf r_{12})$ of Eq.~\ceq{zr12castone} in closed form, a gaussian integration over $\mathbf k$ is sufficient. \section{Calculation of the probability distribution $Z(\mathbf r_{12})$ in the limit of a stiff chain} To simplify our calculations of the previous Section, we compute the infinite sum in Eq.~\ceq{wkbarexact} using an approximation.
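Before introducing that approximation, a quick numerical sanity check may be useful. The sketch below (our own; all chain parameters are hypothetical) evaluates the exact mode sum of Eq.~\ceq{wkbarexact}, confirming that $W(\mathbf k)<0$ and that the series converges after a few terms:
\begin{verbatim}
# Sketch (our own; hypothetical chain parameters): exact mode sum
# W(k)/k^2 of (wkbarexact); check W(k) < 0 and fast convergence in n.
import numpy as np

c, nu, ell, L = 1.0, 4.0, 0.5, 10.0
t1, t2, tf = 1.0, 2.0, 5.0
dl = 3.0                               # ell_2 - ell_1
dt = t2 - t1

W = 0.0
for n in range(1, 51):                 # n and -n contribute equally
    beta = n * np.pi / L * np.sqrt(2 * nu * ell / c)
    A = -1.0 / (np.sqrt(2 * nu * ell * c) * n * np.pi
                * np.sinh(beta * tf))
    geom = 1.0 - np.cos(2 * np.pi * n * dl / L)
    brack = (0.5 * dt * np.sinh(beta * tf)
             - np.cosh(beta * tf) / (2 * beta)
             - (np.cosh(beta * (tf - 2 * t2))
                + np.cosh(beta * (tf - 2 * t1))) / (4 * beta)
             + (np.cosh(beta * (tf + t1 - t2))
                + np.cosh(beta * (tf - t1 - t2))) / (2 * beta))
    W += 2 * (2.0 / dt**2) * (A / beta) * geom * brack
    if n in (1, 2, 5, 10, 50):
        print(f"n<={n:2d}: W/k^2 = {W:.6e}")
# the partial sums settle after a few modes and the limit is negative
\end{verbatim}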
First of all, we give a physical meaning to the parameter $\nu$ by putting: \beq{\nu=\frac{\alpha}{k_BT\tau\ell}}{nuphysint} After performing the above substitution in the partition function of Eq.~\ceq{parfunapp}, the term proportional to $\nu$ appearing in the action of the fields $\mathbf R$ becomes exactly the term introduced in the GNLSM in Ref.~\cite{FEPAVI2} in order to take into account the bending energy of the chain. This connection with the bending energy is confirmed by the fact that $\alpha$ has the dimensions of an energy per unit of length, i.~e. $[\alpha]=[\mbox{energy}]\cdot[\mbox{length}]^{-1}$. As was shown in Ref.~\cite{FEPAVI2}, large values of $\alpha$ correspond to a stiff chain. Indeed, it is possible to check from Eq.~\ceq{zr12zk} that in this case the corrections to the tangent vectors $\mathbf R_{cl}'$ coming from the fluctuations $\mathbf r'$ are strongly suppressed in $Z(\mathbf k)$. In the following, we will work in the limit of a stiff chain, i.~e.: \beq{\alpha>>1}{stiffchain} From Eqs.~\ceq{nuphysint} and \ceq{stiffchain} it turns out that the quantity $\nu\ell=\frac {\alpha}{k_BT\tau}$ appearing in the coefficients $\beta_n$ and $A_n$ (see Eqs.~\ceq{betandef} and \ceq{Andef}) is very large. As a consequence, within the approximation \ceq{stiffchain} the coefficients $\beta_n$ are large, while the coefficients $A_n$ are exponentially small. Taking into account these facts, we may write the following asymptotic expression for $W(\mathbf k)$: \beq{ W(\mathbf k)=\frac{2\mathbf k^2}{(\Delta t)^2}\frac{L}{2\pi^2\nu\ell} \left[ -A\frac{(t_2-t_1)}{2}+B\frac{L}{\pi}\sqrt{\frac{c}{2\nu\ell}} \right]+{\cal O}\left( e^{-\frac{2\pi}{L}\sqrt{\frac{2\nu\ell}{c}}(t_2-t_1)} \right) }{wkbarapprox} In Eq.~\ceq{wkbarapprox} we have introduced the convenient notation: \begin{eqnarray} A&=&\sum_{n\ne 0}\frac 1{n^2}\left[ 1-\cos 2\pi n\left(\frac{\ell_2-\ell_1}{L}\right) \right]\\ B&=&\frac 12\sum_{n\ne 0}\frac 1{|n|^3}\left[ 1-\cos 2\pi n\left( \frac{\ell_2-\ell_1}{L} \right) \right] \end{eqnarray} It is possible to check that the other contributions to $W(\mathbf k)$, denoted in Eq.~\ceq{wkbarapprox} with the symbol ${\cal O}\left( e^{-\frac{2\pi}{L}\sqrt{\frac{2\nu\ell}{c}}(t_2-t_1)} \right)$, decay at least as fast as $e^{-\frac{2\pi}{L}\sqrt{\frac{2\nu\ell}{c}}(t_2-t_1)}$. These terms become negligibly small when $\nu\ell$ is large, provided of course that: \beq{ t_2-t_1>>\frac12\frac L\pi\sqrt{\frac{c}{2\nu\ell}} }{condneg} Substituting Eq.~\ceq{wkbarapprox} in Eq.~\ceq{ztildek}, we obtain for $Z(\mathbf k)$ the approximate expression: \beq{ Z(\mathbf k)=Ce^{-\frac{\kappa\mathbf k^2}{2}} }{zkfinal} where \beq{ \kappa=\frac 4{(\Delta t)^2}\frac{L}{\pi^22\nu\ell}\left[ A\frac{t_2-t_1}{2}-B\frac L\pi\sqrt{\frac{c}{2\nu\ell}} \right] }{alphapardef} The condition \ceq{condneg} guarantees that $\kappa>0$. This is what is needed to compute the integral over $\mathbf k$ in Eq.~\ceq{zr12castone}. After a few calculations we arrive at the final result: \beq{ Z(\mathbf r_{12})=Ce^{-A_{cl}} \left( \frac{2\pi}{\kappa} \right)^{\frac 32}\exp\left[ -\frac 12\frac{(\mathbf r_{12}-\mathbf r_{cl})^2}{\kappa} \right] }{probdistfina} Eq.~\ceq{probdistfina} has a straightforward interpretation. The quantity $\kappa$ is inversely proportional to the product $\nu\ell$, which is supposed to be large. Thus $\kappa$ is very small.
As a consequence, Eq.~\ceq{probdistfina} implies that the probability distribution of the relative position of the two points $P_1$ and $P_2$ exhibits a sharp peak around the classical value, i.~e. at $\mathbf r_{12}=\mathbf r_{cl}$. From Eq.~\ceq{probdistfina} it also turns out that the changes of the classical background configuration $\mathbf R_{cl}(t,\sigma)$ due to the fluctuations $\mathbf r(t,\sigma)$ are relatively small. This is due to the fact that there is no contribution to the parameter $\kappa$ which is of zeroth order in $\nu\ell$. Potentially, a zeroth order contribution could come from the term proportional to $g_0(t,t')$ in Eq.~\ceq{wkbar}, but we have seen that this term vanishes identically. This suggests that the effects of thermal fluctuations are weak on short time scales. Of course, as the time interval $\Delta t=t_2-t_1$ over which the average of the relative position of $P_1$ and $P_2$ is measured increases, the fluctuations have more and more time to act on the chain and their influence becomes more and more important. This intuitive prediction is confirmed by Eq.~\ceq{probdistfina}. Indeed, the coefficient $\kappa$ grows linearly with $\Delta t$ and, for larger values of $\kappa$, the peak around the classical point $\mathbf r_{12}=\mathbf r_{cl}$ in the probability distribution $Z(\mathbf r_{12})$ becomes less sharp, as expected. \section{Conclusions} In this work the GNLSM of Refs.~\cite{FEPAVI1} and \cite{FEPAVI2} has been applied to compute the distribution function $Z(\mathbf r_{12})$ of the relative position between two points of a chain with rigid constraints. The calculation has been performed using the approximation \ceq{deltaapproxtwo} of the functional Dirac delta function which is needed to impose the constraints. After this approximation, the finest details of the chain are lost, a fact that has already been noted in the case of a static chain \cite{EdwGoo}. A closed form of the probability distribution \ceq{probdistfina} may be obtained starting from the exact computation of the contribution of the fluctuations to $Z(\mathbf r_{12})$ given in Eq.~\ceq{wkbarexact}. The final formula for $Z(\mathbf r_{12})$ computed in this way is however complicated. For this reason the physical meaning of $Z(\mathbf r_{12})$ has been investigated, at the end of the previous Section, in the case in which the energy needed to bend the chain is large. In this approximation the contributions of the fluctuations which decay exponentially when $\alpha$ becomes large are neglected. Concluding, it would be interesting to explore the connections between the GNLSM and other models of the dynamics of a chain, in which, instead of rigid constraints, suitable potentials are added in order to prevent the breaking of the chain \cite{FEMIROVI}. \section{Acknowledgements} This work has been financed by the Polish Ministry of Science and Higher Education, scientific project N202 156 31/2933. F. Ferrari gratefully acknowledges also the support of the action COST~P12 financed by the European Union and the hospitality of C. Schick at the University of Rostock. The authors would like to thank V. G. Rostiashvili for fruitful discussions.
{ "attr-fineweb-edu": 1.734375, "attr-cc_en_topic": 12, "domain": "arxiv" }
\subsubsection*{Abstract} This work is devoted to the global stability theory of solutions for a general isothermal model of capillary fluids derived by C. Rohde in \cite{4Ro}, which can be used as a phase transition model.\\ This chapter is structured in the following way: first of all, inspired by the result of P.-L. Lions in \cite{4L2} on the compressible Navier-Stokes system, we show the global stability of weak solutions for our system, first with an isentropic pressure and next with a general pressure. We then consider perturbations close to a stable equilibrium, as in the case of strong solutions. \section{Introduction} \subsection{Presentation of the model} The correct mathematical description of liquid-vapor phase interfaces and of their dynamical behavior in compressible fluid flow has a long history. We are concerned with compressible fluids endowed with internal capillarity. One of the first models taking into consideration the variation of the density at the interface between two phases originates from the XIXth century work by Van der Waals and Korteweg \cite{4Ko}. It was actually derived in its modern form in the 1980s using the second gradient theory, see for instance \cite{4JL,4TN}. Korteweg suggested a modification of the Navier-Stokes system to account for phase transition phenomena by introducing a capillarity term. He assumed that the thickness of the interfaces was not zero, in contrast with the {\it sharp interface approach}. This is called the {\it diffuse interface approach}.\\ Korteweg-type models are based on an extended version of nonequilibrium thermodynamics, which assumes that the energy of the fluid depends not only on the standard variables but also on the gradient of the density. In terms of the free energy, this principle takes the form of a generalized Gibbs relation, see \cite{4TN}.\\ In the present chapter, we follow a new approach introduced by Coquel, Rohde and their collaborators in \cite{4CR}. They remark that the local diffuse interface approach requires more regular solutions than the original sharp interface approach. Indeed the interfaces are assumed to be of nonzero thickness, so that the density varies continuously across them, whereas in the sharp interface models the interfaces are zones of discontinuity for the density. Coquel, Rohde and their collaborators present an alternative model with a capillarity term which avoids spatial derivatives. The model reads: $$ \begin{aligned} \begin{cases} &\partial_{t}\rho+{\rm div}(\rho u)=0\\ &\partial_{t}(\rho\,u)+{\rm div}(\rho u\otimes u)-\mu\Delta u-(\lambda+\mu)\nabla{\rm div} u+\nabla(P(\rho))=\kappa\rho\nabla D[\rho]\\ &(\rho_{t=0},u_{t=0})=(\rho_{0},u_{0})\\ \end{cases} \end{aligned} \leqno{(NSK)} $$ with: $$\mu>0\;\;\mbox{and}\;\;\lambda+2\mu>0$$ where $\rho$ denotes the density of the fluid and $u\in\mathbb{R}^{N}$ the velocity; $\mu$ and $\lambda$ are the viscosity coefficients, $\kappa$ is a capillarity coefficient and $P$ is a general pressure function. We are particularly interested in van der Waals type pressures: $$ \begin{aligned} &P:(0,b)\rightarrow(0,+\infty)\\ &P(\rho)=\frac{RT_{*}\rho}{b-\rho}-a\rho^{2}\\ \end{aligned} $$ where $a$, $b$, $R$, $T_{*}$ are positive constants, $R$ being the specific gas constant.
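As a quick consistency check (this elementary computation is ours, not taken from \cite{4Ro}), one can determine for which reference temperatures this pressure actually possesses a decreasing branch. Differentiating,
$$P^{'}(\rho)=\frac{RT_{*}b}{(b-\rho)^{2}}-2a\rho,$$
so that $P^{'}(\rho)<0$ somewhere in $(0,b)$ if and only if $RT_{*}b<2a\rho(b-\rho)^{2}$ for some $\rho\in(0,b)$. Since $\rho\mapsto\rho(b-\rho)^{2}$ reaches its maximum $\frac{4b^{3}}{27}$ at $\rho=\frac{b}{3}$, a non-monotone (phase transition) region exists as soon as
$$T_{*}<\frac{8ab^{2}}{27R}.$$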
For fixed values of $a$ and $b$, we choose the constant reference temperature $T_{*}$ so small that $P$ is monotone decreasing on some non-empty interval.\\ Further we impose the conditions: \begin{equation} u(t,x)\rightarrow0,\;\;\rho(t,x)\rightarrow 0\;\;\mbox{as}\;\;|x|\rightarrow+\infty. \label{4bord} \end{equation} In the last section, we also consider more general situations: monotone pressures, and boundary conditions at infinity other than (\ref{4bord}), namely: \begin{equation} u(t,x)\rightarrow u_{\infty},\;\;\rho(t,x)\rightarrow \rho_{\infty}\;\;\mbox{as}\;\;|x|\rightarrow+\infty, \label{4bord2} \end{equation} where $\rho_{\infty}$ is a given nonnegative constant.\\ The term $\kappa\rho\nabla D[\rho]$ corresponds to the capillarity, which is supposed to model capillarity effects close to phase transitions \cite{4Ko}. The classical Korteweg capillarity term is $D[\rho]=\Delta\rho$.\\ Based on Korteweg's original ideas, Coquel, Rohde and their collaborators in \cite{4CR} and Rohde in \cite{4Ro} choose a nonlocal capillarity term $D$ which penalizes rapid variations of the density field close to the interfaces. They introduce the following capillarity term: $$D[\rho]=\phi*\rho-\rho$$ where $\phi$ is chosen so that: $$\phi\in L^{\infty}(\mathbb{R}^{N})\cap C^{1}(\mathbb{R}^{N})\cap W^{1,\,1}(\mathbb{R}^{N}),\;\;\;\int_{\mathbb{R}^{N}}\phi(x)dx=1,\;\;\phi \;\;\mbox{even},\;\mbox{and}\;\;\phi\geq0. $$ This choice of capillarity term allows solutions with jumps, i.e. with sharp interfaces. Before tackling the global stability theory for the system $(NSK)$, let us derive formally the uniform bounds available on $(\rho,u)$. \subsection{Energy spaces} To simplify, we assume that $P(\rho)=a\rho^{\gamma}$ with $\gamma\geq1$. Let $\Pi$ (the free energy) be defined by: \begin{equation} \Pi(s)=s\biggl(\int^{s}_{0}\frac{P(z)}{z^{2}}dz\biggl), \label{4pressionpi} \end{equation} so that $P(s)=s\Pi^{'}(s)-\Pi(s)$, and if we renormalize the mass equation: $$\partial_{t}\Pi(\rho)+{\rm div}(u\Pi(\rho))+P(\rho){\rm div}(u)=0\;\;\mbox{in}\;\;{\cal D}^{'}((0,T)\times\mathbb{R}^{N}).$$ Notice that $\Pi$ is convex. Multiplying the momentum conservation equation by $u$ and integrating by parts over $\mathbb{R}^{N}$, we obtain the following energy estimate:\\ \begin{equation} \begin{aligned} &\int_{\mathbb{R}^{N}}\Big(\frac{1}{2}\rho |u|^{2}+\Pi(\rho)+E_{global}[\rho(\cdot,t)]\Big)(x)dx+\int_{0}^{t}\int_{\mathbb{R}^{N}}\big(\mu D(u):D(u)\\ &\hspace{2cm}+(\lambda+\mu)|{\rm div} u|^{2}\big)dx\,dt' \leq\int_{\mathbb{R}^{N}}\big(\frac{|m_{0}|^{2}}{2\rho_{0}}+\Pi(\rho_{0})+E_{global}[\rho_{0}]\big)dx, \end{aligned} \label{4inegaliteenergie} \end{equation} where we have: $$E_{global}[\rho(\cdot,t)](x)=\frac{\kappa}{4}\int_{\mathbb{R}^{N}}\phi(x-y)(\rho(y,t)-\rho(x,t))^{2}dy.$$ The only non-standard term is the energy term $E_{global}$, which comes from the product of $u$ with the capillarity term $\kappa\rho\nabla(\phi*\rho-\rho)$.
Indeed we have: $$ \begin{aligned} \kappa\int_{\mathbb{R}^{N}}u(t,x)\rho(t,x)\cdot\nabla&([\phi*\rho(t,\cdot)](x)-\rho(t,x))dx\\ &\;\;\;=-\kappa\int_{\mathbb{R}^{N}}{\rm div}(u(t,x)\rho(t,x))([\phi*\rho(t,\cdot)](x)-\rho(t,x))dx,\\ &\;\;\;=\kappa\int_{\mathbb{R}^{N}}\frac{\partial}{\partial t}\rho(t,x)([\phi*\rho(t,\cdot))](x)-\rho(t,x))dx,\\ &\;\;\;=-\frac{d}{dt}\int_{\mathbb{R}^{N}}E_{global}[\rho(t,\cdot)](x)dx\;.\\ \end{aligned} $$ To derive the last equality we use the relation: $$ \begin{aligned} \frac{d}{dt}\int_{\mathbb{R}^{N}}E_{global}[\rho(t,\cdot)](x)dx&=\frac{\kappa}{2} \int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\phi(x-y)(\rho(t,y)-\rho(t,x))\frac{\partial}{\partial t}\rho(t,y)dydx\\ &\;\;\;\;\;\;\;\;\;\;\;+\frac{\kappa}{2}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\phi(y-x)(\rho(t,x)- \rho(t,y))\frac{\partial}{\partial t}\rho(t,x)dydx,\\[2mm] &=\kappa\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\phi(x-y)(\rho(t,y)-\rho(t,x))\frac{\partial}{\partial t}\rho(t,y)dydx,\\[2mm] &=-\kappa\int_{\mathbb{R}^{N}}([\phi*\rho(t,\cdot)](x)-\rho(t,x))\frac{\partial}{\partial t}\rho(t,x)dx\;,\\ \end{aligned} $$ where we have just used integrations by parts.\\ In the sequel we will denote: \begin{equation} {\cal E}(\rho,\rho u)(t)=\int_{\mathbb{R}^{N}}\Big(\frac{1}{2}\rho |u|^{2}+\Pi(\rho)+E_{global}[\rho(\cdot,t)]\Big)(x)dx. \label{4defenergie} \end{equation} We now use the above energy inequality to determine the functional space in which we must work.\\ Expanding $E_{global}[\rho(\cdot,t)](x)$, we get:\\ $$E_{global}[\rho(t,\cdot)](x)=\frac{\kappa}{4}\big(\rho^{2}+\phi*\rho^{2}-2\rho\,(\phi*\rho)\big).$$ Since the mass equation ensures that $\rho$ is bounded in $L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$ as soon as $\rho_{0}\in L^{1}$, and since we have assumed $\phi\in L^{\infty}(\mathbb{R}^{N})$, the product $\rho\,(\phi*\rho)$ is bounded in $L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$. Hence $\rho^{2}+\phi*\rho^{2}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$ and, as $\phi\geq 0$ and $\rho\geq0$, we get a control of $\rho$ in $L^{\infty}(0,T;L^{2}(\mathbb{R}^{N}))$ (a property which turns out to be important in order to take advantage of the theory of renormalized solutions: indeed $\rho\in L^{\infty}(0,T;L^{2}(\mathbb{R}^{N}))$ implies that $\rho\in L^{2}_{loc}(\mathbb{R}^{+}\times\mathbb{R}^{N})$, so that we can use the Diperna-Lions theorem on renormalized solutions, see \cite{4L1}).\\ In view of (\ref{4inegaliteenergie}), we can specify the initial conditions $\rho_{/t=0}=\rho_{0}$ and $\rho u_{/t=0}=m_{0}$, where we assume that: \begin{equation} \begin{aligned} &\bullet\;\;\rho_{0}\geq 0\;\;\mbox{a.e in}\;\;\mathbb{R}^{N},\;\rho_{0}\in L^{1}(\mathbb{R}^{N})\cap L^{s}(\mathbb{R}^{N})\;\;\;\mbox{with}\;\;s=\max(2,\gamma),\hspace{6cm}\\ &\bullet\;\;m_{0}=0\;\;\mbox{a.e on}\;\;\{\rho_{0}=0\},\\ &\bullet\;\;\frac{|m_{0}|^{2}}{\rho_{0}}\;\;\mbox{(defined to be $0$ on $\{\rho_{0}=0\}$) is in}\;\;L^{1}(\mathbb{R}^{N}).\\ \end{aligned} \label{4donneeinitiale} \end{equation} We deduce the following a priori bounds, which define the energy space in which we shall work: \begin{itemize} \item $\rho\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N})\cap L^{s}(\mathbb{R}^{N}))$, \item $\rho|u|^{2}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$, \item $\nabla u\in L^{2}((0,T)\times\mathbb{R}^{N})^{N}$. \end{itemize} We will use these uniform bounds in our compactness result.
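For completeness, the bound on $\rho\,(\phi*\rho)$ used above follows from a one-line computation combining H\"older's inequality, Young's convolution inequality and the conservation of mass (the constants below are ours, under the stated assumptions on $\phi$):
$$\|\rho\,(\phi*\rho)(t)\|_{L^{1}(\mathbb{R}^{N})}\leq\|\rho(t)\|_{L^{1}}\,\|\phi*\rho(t)\|_{L^{\infty}}\leq\|\phi\|_{L^{\infty}}\,\|\rho(t)\|_{L^{1}}^{2}=\|\phi\|_{L^{\infty}}\,\|\rho_{0}\|_{L^{1}}^{2}.$$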
Let us emphasize at this point that the above a priori bounds do not provide any control on $\nabla\rho$, in contrast with the case $D[\rho]=\Delta\rho$ (see \cite{4DD}).\\ \subsection{Notion of weak solutions} We now explain what we mean by renormalized weak solutions, weak solutions and bounded energy weak solutions of problem $(NSK)$.\\ Multiplying the mass equation by $b^{'}(\rho)$, we obtain the so-called renormalized equation (see \cite{4L1}): \begin{equation} \frac{\partial}{\partial t}b(\rho)+{\rm div}(b(\rho)u)+(\rho b^{'}(\rho)-b(\rho)){\rm div}u=0, \label{4renormalis} \end{equation} with: \begin{equation} b\in C^{0}([0,+\infty))\cap C^{1}((0,+\infty)),\;\;|b^{'}(t)|\leq ct^{-\lambda_{0}},\;\;t\in(0,1],\;\,\lambda_{0}<1 \label{4foncb1} \end{equation} and growth conditions at infinity: \begin{equation} |b^{'}(t)|\leq ct^{\lambda_{1}},\;\;t\geq1,\;\;\mbox{where}\;\;c>0,\;-1<\lambda_{1}<\frac{s}{2}-1. \label{4foncb2} \end{equation} \begin{definition} A couple $(\rho,u)$ is called a renormalized weak solution of problem $(NSK)$ if: \begin{itemize} \item The mass equation holds in ${\cal D}^{'}(\mathbb{R}\times\mathbb{R}^{N})$. \item Equation (\ref{4renormalis}) holds in ${\cal D}^{'}(\mathbb{R}\times\mathbb{R}^{N})$ for any function $b$ satisfying (\ref{4foncb1}) and (\ref{4foncb2}). \end{itemize} \label{4defrenormal} \end{definition} \begin{definition} Let the couple $(\rho_{0},u_{0})$ satisfy: \begin{itemize} \item $\rho_{0}\in L^{1}(\mathbb{R}^{N})$, $\Pi(\rho_{0})\in L^{1}(\mathbb{R}^{N})$, $E_{global}[\rho_{0}]\in L^{1}(\mathbb{R}^{N})$, $\rho_{0}\geq 0$ a.e in $\mathbb{R}^{N}$, \item $\rho_{0}u_{0}\in (L^{1}(\mathbb{R}^{N}))^{N}$ such that $\rho_{0}|u_{0}|^{2}1_{\rho_{0}>0}\in L^{1}(\mathbb{R}^{N})$, \item and such that $\rho_{0}u_{0}=0$ whenever $x\in\{\rho_{0}=0\}$, \end{itemize} where the quantity $\Pi$ is defined in (\ref{4pressionpi}). We have the following definitions: \begin{enumerate} \item A couple $(\rho,u)$ is called a weak solution of problem $(NSK)$ on $\mathbb{R}$ if: \begin{enumerate} \item $\rho\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N})\cap L^{s}(\mathbb{R}^{N}))$ for all $T>0$, \item $P(\rho)\in L^{\infty}(L^{1}(\mathbb{R}^{N}))$, $\rho\geq0$ a.e in $\mathbb{R}\times\mathbb{R}^{N}$, \item $\nabla u\in L^{2}(L^{2}(\mathbb{R}^{N}))$, $\rho|u|^{2}\in L^{\infty}(L^{1}(\mathbb{R}^{N}))$. \item The mass equation holds in ${\cal D}^{'}(\mathbb{R}\times\mathbb{R}^{N})$.\label{4vrai1} \item The momentum equation holds in ${\cal D}^{'}(\mathbb{R}\times\mathbb{R}^{N})^{N}$.\label{4vrai2} \item $\lim_{t\rightarrow 0^{+}}\int_{\mathbb{R}^{N}}\rho(t)\varphi=\int_{\mathbb{R}^{N}}\rho_{0}\varphi$, $\forall\varphi\in{\cal D}(\mathbb{R}^{N})$,\label{4vrai3} \item $\lim_{t\rightarrow 0^{+}}\int_{\mathbb{R}^{N}}\rho u(t)\cdot\psi=\int_{\mathbb{R}^{N}}\rho_{0} u_{0}\cdot\psi$, $\forall\psi\in{\cal D}(\mathbb{R}^{N})^{N}$.\label{4vrai4} \end{enumerate} \item A couple $(\rho,u)$ is called a bounded energy weak solution of problem $(NSK)$ if, in addition to (\ref{4vrai1}), (\ref{4vrai2}), (\ref{4vrai3}), (\ref{4vrai4}), we have: \begin{itemize} \item The quantity ${\cal E}_{0}$ is finite and inequality (\ref{4inegaliteenergie}), with ${\cal E}$ defined by (\ref{4defenergie}) and with ${\cal E}_{0}$ in place of ${\cal E}(\rho(0),\rho u(0))$, holds a.e in $\mathbb{R}$. \end{itemize} \end{enumerate} \label{4defsolutionsfaibles} \end{definition} \subsection{Mathematical results} We wish to prove global stability results for $(NSK)$ with $D[\rho]=\phi*\rho-\rho$ in functional spaces very close to the energy spaces.
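Before recalling the known results, let us note a simple example (ours, for illustration) of an admissible renormalizing function in Definition \ref{4defrenormal}, namely the one used repeatedly in the compactness proof below: for $0<\varepsilon<1$,
$$b(t)=t^{\varepsilon},\qquad |b^{'}(t)|=\varepsilon\, t^{-(1-\varepsilon)}\;\;(0<t\leq1),\qquad |b^{'}(t)|=\varepsilon\, t^{\varepsilon-1}\;\;(t\geq1),$$
so that (\ref{4foncb1}) holds with $\lambda_{0}=1-\varepsilon<1$ and (\ref{4foncb2}) holds with $\lambda_{1}=\varepsilon-1\in(-1,\frac{s}{2}-1)$, since $\varepsilon<1\leq\frac{s}{2}$. With this choice, the renormalized equation (\ref{4renormalis}) becomes
$$\partial_{t}\rho^{\varepsilon}+{\rm div}(\rho^{\varepsilon}u)=(1-\varepsilon)\rho^{\varepsilon}{\rm div}u,$$
which is exactly the identity used for the approximate solutions in (\ref{4e1}) below.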
In the non-capillary case with $P(\rho)=a\rho^{\gamma}$, P.-L. Lions in \cite{4L2} proved the global existence of weak solutions $(\rho,u)$ to $(NSK)$ with $\kappa=0$ (which then becomes the isothermal compressible Navier-Stokes system) for $\gamma> \frac{N}{2}$ if $N\geq 4$, $\gamma\geq \frac{3N}{N+2}$ if $N=2,3$, and initial data $(\rho_{0},m_{0})$ such that: $$\rho_{0},\;\;\rho_{0}^{\gamma},\;\;\frac{|m_{0}|^{2}}{\rho_{0}}\in L^{1}(\mathbb{R}^{N}),$$ where we agree that $m_{0}=0$ on $\{x\in\mathbb{R}^{N}/\;\rho_{0}(x)=0\}$. More precisely, he obtains the existence of global weak solutions $(\rho,u)$ to $(NSK)$ with $\kappa=0$ such that for all $T\in(0,+\infty)$: \begin{itemize} \item $\rho\in L^{\infty}(0,T;L^{\gamma}(\mathbb{R}^{N}))$ and $\rho\in C([0,T],L^{p}(\mathbb{R}^{N}))$ if $1\leq p<\gamma$, \item $\rho\in L^{q}((0,T)\times\mathbb{R}^{N})$ for $q=\gamma-1+\frac{2\gamma}{N}>\gamma$, \item $\rho|u|^{2}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$ and $Du\in L^{2}((0,T)\times\mathbb{R}^{N})$. \end{itemize} Notice that the main difficulty in proving Lions' theorem consists in exhibiting strong compactness properties of the density $\rho$ in $L^{p}_{loc}(\mathbb{R}^{+}\times\mathbb{R}^{N})$ spaces, which are required to pass to the limit in the pressure term $P(\rho)=a\rho^{\gamma}$.\\ Let us mention that Feireisl in \cite{4F} generalized the result to $\gamma>\frac{N}{2}$ by establishing that one can obtain renormalized solutions without imposing $\rho\in L^{2}_{loc}(\mathbb{R}^{+}\times\mathbb{R}^{N})$ (which Lions needed in dimension $N=2,3$; this is why he requires $\gamma-1+\frac{2\gamma}{N}\geq2$, i.e. $\gamma\geq\frac{3N}{N+2}$); for this, Feireisl introduces the concept of oscillation defect measure, evaluating the loss of compactness. We refer to the book of Novotn\'y and Stra$\check{\mbox{s}}$kraba for more details (see \cite{4NS}).\\ Let us mention here that the existence of strong solutions with $D[\rho]=\Delta\rho$ has been known since the works by Hattori and Li \cite{4Ha1}, \cite{4Ha2} in the whole space $\mathbb{R}^{N}$. In \cite{4DD}, Danchin and Desjardins study the well-posedness of the problem in the isothermal case with constant coefficients in critical Besov spaces. We recall too the result of Rohde in \cite{4Ro}, who obtains existence and uniqueness in finite time for two-dimensional initial data in $H^{4}(\mathbb{R}^{2})\times H^{4}(\mathbb{R}^{2})$. \\ In the present chapter, we aim at showing the global stability of weak solutions in the energy spaces for the system $(NSK)$. This work is composed of four parts. The first one concerns estimates on the density, providing the gain of integrability needed to pass to the weak limit in the pressure and capillarity terms. The second part is the passage to the weak limit in the nonlinear terms in the density and the velocity, following Lions' method; the idea is to use renormalized solutions in order to test the weak limit against convex functions. In this part we concentrate on the case of a simple pressure of the type $P(\rho)=a\rho^{\gamma}$. We get the following theorem, where $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ is a sequence of global regular approximate solutions of the problem $(NSK)$. \begin{theorem} \label{4principal} Let $N\geq2$.
Let $\gamma>N/2$ if $N\geq4$ and $\gamma\geq1$ otherwise.\\ Let the couple $(\rho_{0}^{n},u_{0}^{n})$ satisfy: \begin{itemize} \item $\rho_{0}^{n}$ is uniformly bounded in $L^{1}(\mathbb{R}^{N})\cap L^{s}(\mathbb{R}^{N})$ with $s=\max (\gamma,2)$, and $\rho_{0}^{n}\geq 0$ a.e in $\mathbb{R}^{N}$, \item $\frac{|\rho_{0}^{n}u_{0}^{n}|^{2}}{\rho_{0}^{n}}$ is uniformly bounded in $L^{1}(\mathbb{R}^{N})$, \item and $\rho_{0}^{n}u_{0}^{n}=0$ whenever $x\in\{\rho_{0}^{n}=0\}$. \end{itemize} In addition we suppose that $\rho_{0}^{n}$ converges in $L^{1}(\mathbb{R}^{N})$ to $\rho_{0}$. Then, up to a subsequence, $(\rho_{n},u_{n})$ converges strongly to a weak solution $(\rho,u)$ of the system $(NSK)$ satisfying the initial condition $(\rho_{0},u_{0})$ as in (\ref{4donneeinitiale}). Moreover we have the following convergences: \begin{itemize} \item $\rho_{n}\rightarrow_{n}\rho$ in $C([0,T],L^{p}(\mathbb{R}^{N}))\cap L^{r}((0,T)\times K)$ for all $1\leq p<s$, $1\leq r<q$, with $q=s+\varepsilon$ ($\varepsilon$ being given by Theorem \ref{4integration} below), if $N\geq3$; here $K=\mathbb{R}^{N}$, except when $N\geq 4$ and $\gamma<\frac{N}{2}(1+\frac{1}{N})$, in which case $K$ is an arbitrary compact set. \item $\rho_{n}\rightarrow_{n}\rho$ in $C([0,T],L^{p}(\mathbb{R}^{2}))\cap L^{r}((0,T)\times K)$ for all $1\leq p<s$, $1\leq r<q$, with $K$ an arbitrary compact set of $\mathbb{R}^{2}$, if $N=2$. \end{itemize} In addition we have: \begin{itemize} \item $\rho_{n}u_{n}\rightarrow\rho u$ in $L^{p}(0,T;L^{r}(\mathbb{R}^{N}))$ for all $1\leq p<+\infty$ and $1\leq r<\frac{2s}{s+1}$, \item $\rho_{n}(u_{i})_{n}(u_{j})_{n}\rightarrow\rho u_{i}u_{j}$ in $L^{p}(0,T;L^{1}(K))$ for all $1\leq p<+\infty$, $1\leq i,j\leq N$, if $N\geq3$; here $K=\mathbb{R}^{N}$, except when $N\geq 4$ and $\gamma<\frac{N}{2}(1+\frac{1}{N})$, in which case $K$ is an arbitrary compact set. \item $\rho_{n}(u_{i})_{n}(u_{j})_{n}\rightarrow\rho u_{i}u_{j}$ in $L^{p}(0,T;L^{1}(\Omega))$ for all $1\leq p<+\infty$, $1\leq i,j\leq N$, with $\Omega$ an arbitrary bounded open set of $\mathbb{R}^{2}$, if $N=2$. \end{itemize} \end{theorem} In the third part we focus on general pressure laws, in particular van der Waals' pressure. In the fourth part we concentrate on the case of initial data close to a constant $\bar{\rho}$, and we work in Orlicz spaces; this setting is the most adapted to strong solutions, since it enables one to control the vacuum, so that the ellipticity of the momentum equation can be exploited. \section{Existence of weak solutions for an isentropic pressure law} \subsection{A priori estimates on the density} In this part we are interested in getting a gain of integrability on the density, in the case $P(\rho)=a\rho^{\gamma}$. This will enable us to pass to the weak limit in the pressure and Korteweg terms. It is expressed by the following theorem: \begin{theorem} \label{4integration} Let $N\geq2$ and $\gamma\geq1$, with in addition $\gamma>\frac{N}{2}$ if $N\geq4$. Let $(\rho,u)$ be a regular solution of the system $(NSK)$ with $\rho\geq 0$ and $\rho\in L^{\infty}(L^{1}\cap L^{s+\varepsilon})$, where $\varepsilon$ is defined below. Then, if $\gamma\geq \frac{N}{2}(1+\frac{1}{N})$ when $N\geq4$, we have: $$ \begin{aligned} \int_{(0,T)\times\mathbb{R}^{N}}\big(\rho^{\gamma+\varepsilon}+\rho^{2+\varepsilon}\big)dxdt\leq M&\;\;\mbox{for any}\;\;0<\varepsilon\leq \frac{2}{N}\gamma-1\;\;\;\mbox{if} \;\;N\geq4,\\ &\;\;\mbox{and}\;\;\;0<\varepsilon\leq\frac{4}{N}-1\;\;\;\mbox{if}\;\; N=2,3.
\end{aligned} $$ with $M$ depending only on the initial conditions and on the time $T$.\\ If $N\geq4$ and $\gamma<\frac{N}{2}(1+\frac{1}{N})$, we have: $$ \begin{aligned} \int_{(0,T)\times K}\big(\rho^{\gamma+\varepsilon}+\rho^{2+\varepsilon}\big)dxdt\leq M^{'}&\;\;\mbox{for any}\;\;0<\varepsilon\leq \frac{2}{N}\gamma-1,\\ \end{aligned} $$ for any compact set $K$, with $M^{'}$ depending only on the initial conditions, on $K$ and on the time $T$. \end{theorem} {\bf Proof:} \\ \\ We begin with the case $N\geq3$; the specific case $N=2$ is treated afterwards. \subsubsection*{Case $N\geq3$:} We apply the operator $(-\Delta)^{-1}{\rm div}$ to the momentum equation, in order to isolate the pressure, and we get: \begin{equation} \begin{aligned} a\rho^{\gamma}=\frac{\partial}{\partial t}(-\Delta)^{-1}{\rm div}(\rho u)+ (-\Delta)^{-1}\partial^{2}_{i,j}(\rho u_{i}u_{j})+&(2\mu+\lambda){\rm div} u\\ &-\kappa(-\Delta)^{-1}{\rm div}\big(\rho\nabla(\phi*\rho-\rho)\big),\\ \end{aligned} \label{4premiereeq} \end{equation} and, multiplying by $\rho^{\varepsilon}$ with $0<\varepsilon\leq\min (\frac{1}{N},\frac{2}{N}s-1)$ in order to estimate $\rho^{\gamma+\varepsilon}$, we get: \begin{equation} \begin{aligned} a\rho^{\gamma+\varepsilon}+\frac{\kappa}{2}\rho^{2+\varepsilon}=&-\kappa\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho(\nabla\phi*\rho))+\rho^{\varepsilon}(-\Delta)^{-1}\partial^{2}_{ij}(\rho u_{i}u_{j})\\ &+\frac{\partial}{\partial t}\big(\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)\big)-[\frac{\partial}{\partial t}\rho^{\varepsilon}](-\Delta)^{-1}{\rm div}(\rho u)+(\mu+\zeta)({\rm div} u)\rho^{\varepsilon}\;,\\ \end{aligned} \end{equation} where we have set $\zeta=\lambda+\mu$, so that $\mu+\zeta=2\mu+\lambda$. We now rewrite the previous equality as follows: \begin{equation} \begin{aligned} &a\rho^{\gamma+\varepsilon}+\frac{\kappa}{2}\rho^{2+\varepsilon}=-\kappa\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho(\nabla\phi*\rho))+\rho^{\varepsilon}(-\Delta)^{-1}\partial^{2}_{ij}(\rho u_{i}u_{j})\\ &\hspace{1,3cm}+\frac{\partial}{\partial t}\big(\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)\big)+{\rm div}[u\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)]+(\mu+\zeta)({\rm div} u)\rho^{\varepsilon}\\ &\hspace{2,8cm}-\rho^{\varepsilon}u\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho u)+(1-\varepsilon)({\rm div}u)\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)\;.\\ \end{aligned} \label{42.14} \end{equation} \\ Next we integrate (\ref{42.14}) in time on $[0,T]$ and in space, so we obtain: \begin{equation} \begin{aligned} &\int_{(0,T)\times\mathbb{R}^{N}}(a\rho^{\gamma+\varepsilon}+\frac{\kappa}{2}\rho^{2+\varepsilon})dx\,dt= \int_{(0,T)\times\mathbb{R}^{N}}\biggl(\frac{\partial}{\partial t}[\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)]+ (\mu+\zeta)({\rm div}u)\rho^{\varepsilon}\\[2mm] &\hspace{2,9cm}+(1-\varepsilon)({\rm div} u)\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)+\rho^{\varepsilon}[R_{i}R_{j} (\rho u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho u_{j})]\\[2mm] &\hspace{3,8cm}+{\rm div}[u\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)]-\kappa\rho^{\varepsilon}(-\Delta)^{-1} {\rm div}(\rho\nabla(\phi*\rho))\biggl)dx\,dt\;,\\[2mm] \end{aligned} \label{4eq3} \end{equation} where $R_{i}$ is the classical Riesz transform.\\ Now we want to control the term $\int^{T}_{0}\int_{\mathbb{R}^{N}}\big(\rho^{\gamma+\varepsilon}+\frac{\kappa}{2}\rho^{2+\varepsilon}\big)dxdt$. As $\rho$ is nonnegative, this will enable us to control $\|\rho\|_{L_{t,x}^{\gamma+\varepsilon}}$ and $\|\rho\|_{L_{t,x}^{2+\varepsilon}}$.
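For the reader's convenience, the rearrangement performed in (\ref{42.14}) only combines the renormalized mass equation (with $B(x)=x^{\varepsilon}$) with two elementary product rules; writing $G=(-\Delta)^{-1}{\rm div}(\rho u)$, the identities used are:
$$\partial_{t}\rho^{\varepsilon}+{\rm div}(u\rho^{\varepsilon})=(1-\varepsilon)\rho^{\varepsilon}{\rm div}u,\qquad
\rho^{\varepsilon}\,\partial_{t}G=\partial_{t}(\rho^{\varepsilon}G)-(\partial_{t}\rho^{\varepsilon})\,G,\qquad
{\rm div}(u\rho^{\varepsilon})\,G={\rm div}(u\rho^{\varepsilon}G)-\rho^{\varepsilon}u\cdot\nabla G.$$
The same algebra is applied to the approximate solutions $(\rho_{n},u_{n})$ in (\ref{45})--(\ref{46}) below.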
This may be achieved by controlling each term on the right-hand side of (\ref{4eq3}).\\ \\ We start with the term $\int_{(0,T)\times\mathbb{R}^{N}}\frac{\partial}{\partial t}[\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)]$. We thus need to control $\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)$ in $L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$ and $\rho^{\varepsilon}_{0}(-\Delta)^{-1}{\rm div}(\rho_{0}u_{0})$ in $L^{1}(\mathbb{R}^{N})$, because: $$ \begin{aligned} \int_{(0,T)\times\mathbb{R}^{N}}\frac{\partial}{\partial t}[\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)](t,x)dt\,dx=&\int_{\mathbb{R}^{N}}[\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)](T,x)dx\\ &\hspace{1cm}-\int_{\mathbb{R}^{N}}[\rho_{0}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{0} u_{0})](x)dx.\\ \end{aligned} $$ We recall that $\rho$, $\rho^{2}$, $\rho^{\gamma}$ and $\rho |u|^{2}$ are bounded in $L^{\infty}(L^{1})$, while $Du$ is bounded in $L^{2}((0,T)\times\mathbb{R}^{N})$ and $u$ is bounded in $L^{2}(0,T;L^{\frac{2N}{N-2}}(\mathbb{R}^{N}))$ by Sobolev embedding. In particular, by H\"older's inequalities, $\rho u$ is bounded in $L^{\infty}(0,T;(L^{\frac{2\gamma}{\gamma+1}}\cap L^{\frac{4}{3}})(\mathbb{R}^{N}))$. Thus, using H\"older's inequalities and Sobolev embedding, we get $\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)\in L^{\infty}(0,T;L^{1} \cap L^{\alpha})$ with: $$\frac{1}{\alpha}=\frac{\varepsilon}{s}+\min(\frac{\gamma+1}{2\gamma},\frac{3}{4})-\frac{1}{N} <1.$$ The fact that $\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho u)\in L^{\infty}(0,T;L^{1})$ is obtained by interpolation, since $\rho\in L^{\infty}(L^{1})$, using a weaker integrability in the Sobolev embedding. \\ We have the same type of estimate for $\|\rho^{\varepsilon}_{0}(-\Delta)^{-1}{\rm div}(\rho_{0}u_{0})\|_{L^{1}(\mathbb{R}^{N})}$.\\ Finally, (\ref{4eq3}) can be rewritten, using the Green formula, in the following form:\\ $$ \begin{aligned} \int^{T}_{0}\int_{\mathbb{R}^{N}}\big(\rho^{\gamma+\varepsilon}&+\frac{\kappa}{2}\rho^{2+\varepsilon}\big)dxdt\leq C\big(1+ \int^{T}_{0}\int_{\mathbb{R}^{N}}\big[\,|{\rm div}u|\rho^{\varepsilon}(1+|(-\Delta)^{-1}{\rm div}(\rho u)|)\\[3mm] &+\rho^{\varepsilon}|R_{i}R_{j}(\rho u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho u_{j})| +\kappa\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho\nabla(\phi*\rho))|\big]dt\,dx\big).\\ \end{aligned} $$ Now we treat each term of the right-hand side. All the terms are handled with the same type of estimates as those of P.-L. Lions in \cite{4L2}, except the capillarity term. \\ \\ We start with the term $|{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|$, for which we have: $$|{\rm div}u|\in L^{2}(L^{2}),\;\; \rho^{\varepsilon}\in L^{\infty}(L^{\frac{s}{\varepsilon}}),\;\; \rho\,u\in L^{2}(0,T;L^{r}(\mathbb{R}^{N}))$$ with $\frac{1}{r}=\frac{1}{s}+\frac{N-2}{2N}$, and by Sobolev embedding $|(-\Delta)^{-1}{\rm div}(\rho u)|\in L^{2}(L^{s^{'}})$ with $\frac{1}{s^{'}}=\frac{1}{r}-\frac{1}{N}$ (this is possible only if $r<N$).
We are in a critical case for the Sobolev embedding (i.e $r\geq N$) only when $N=3$ and $\gamma\geq 6$; this case is treated separately below.\\ By H\"older's inequalities we thus get $|{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|\in L^{1}(L^{s_{1}})$ with: $\frac{1}{s_{1}}=\frac{1}{s^{'}}+\frac{\varepsilon}{s}+\frac{1}{2}=1-\frac{2}{N}+\frac{1+\varepsilon}{s}\leq1$, as we have $s>\frac{N}{2}$.\\ Moreover, by interpolation, $|{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|$ belongs to $L^{1}(0,T;L^{1}(\mathbb{R}^{N}))$.\\ We now treat the case $N=3$ and $\gamma\geq6$, where we choose $\varepsilon=\frac{2}{N}\gamma-1$ in order to make this case explicit: $$ \begin{aligned} &\||{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|\|_{L^{1}}\leq \|Du\|_{L^{2}(L^{2})}\|\rho\|_{L^{\gamma+\varepsilon}}^{\varepsilon}\|\rho u\|_{L^{\frac{2(\gamma+\varepsilon)}{\gamma-\varepsilon}} (L^{\frac{6(\gamma+\varepsilon)}{5\gamma-\varepsilon}})}\\ &\leq C\|\rho\|^{\varepsilon}_{L^{\gamma+\varepsilon}}\|\rho u\|_{L^{\frac{10\gamma-6}{\gamma+3}} (L^{\frac{3(10\gamma-6)}{13\gamma+3}})}\leq\, C\|\rho\|^{\varepsilon}_ {L^{\gamma+\varepsilon}}\|\rho u\|^{\frac{\gamma+3}{5\gamma-3}}_{L^{2} (L^{\frac{6\gamma}{\gamma+6}})}\|\rho u\|^{\frac{2(2\gamma-3)}{5\gamma-3}}_{L^{\infty}(L^{2})}\\ &\leq C\|\rho\|^{\varepsilon}_{L^{\gamma+\varepsilon}}\|\rho u\|_{L^{2}(L^{\frac{6\gamma}{\gamma+6}})}^{\frac{5\gamma} {5\gamma-3}}\|\rho u\|_{L^{\infty}(L^{\frac{2\gamma}{\gamma+1}})}^{\frac{2(2\gamma-3)}{5\gamma-3}}\\ &\leq C\|\rho\|^{\varepsilon}_{L^{\gamma+\varepsilon}}, \end{aligned} $$ \\ since we have $\frac{1}{2}+\frac{\varepsilon}{\gamma+\varepsilon}+\frac{\gamma-\varepsilon}{2(\gamma+\varepsilon)}=1$, $\frac{1}{2}+\frac{\varepsilon} {\gamma+\varepsilon}+\frac{5\gamma-\varepsilon}{6(\gamma+\varepsilon)}-\frac{1}{3}=1$, and $\frac{6(\gamma+\varepsilon)}{5\gamma-\varepsilon}=3\frac{10\gamma-6} {13\gamma+3}<3$.\\ \\ We now treat the term $\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div }(\rho\nabla(\phi*\rho))|$. We have $\rho\nabla(\phi*\rho)=\rho(\nabla\phi*\rho)\in L^{\infty}(L^{1}\cap L^{\frac{s}{2}})$, by H\"older's inequalities and the facts that $\rho\in L^{\infty}(L^{1}\cap L^{s})$ and $\nabla\phi\in L^{1}$.\\ Then we get $\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho\nabla(\phi*\rho))\in L^{\infty}(L^{r_{1}})$ with: $\frac{1}{r_{1}}=\frac{\varepsilon}{s}+\frac{2}{s}-\frac{1}{N}=\frac{2+\varepsilon}{s}-\frac{1}{N}<1$.\\ We conclude that $\rho^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho\nabla(\phi*\rho))$ is in $L^{\infty}(L^{1})$ by interpolation. Indeed we have $\rho\nabla(\phi*\rho)\in L^{\infty}(L^{1})$ and, choosing $\varepsilon=\frac{2}{N}s-1$, we have: $1-\frac{1}{N}+\frac{2}{N}s-1\geq1$ as soon as $s\geq\frac{N}{2}(1+\frac{1}{N})$. This is the case when $N=2,3$, and also when $N\geq 4$ and $\gamma\geq\frac{N}{2}(1+\frac{1}{N})$.\\ In the remaining case we need to work on arbitrary compact sets. \\ Next we consider the term $({\rm div}u)\rho^{\varepsilon}$. We recall that $\rho^{\varepsilon}$ is in $L^{\infty}(L^{\frac{1}{\varepsilon}}\cap L^{\frac{s}{\varepsilon}})$. If $\varepsilon\geq\frac{1}{2}$ (i.e $s\geq\frac{3}{4}N$), the bound is obvious because $\frac{1}{2}+\varepsilon\geq1$ and $\frac{1}{2}+\frac{\varepsilon}{s}<1$; we can then conclude by interpolation.
On the other hand, this rather simple term presents a technical difficulty when $\varepsilon\leq\frac{1}{2}$, since we do not know in that case whether ${\rm div}u\,\rho^{\varepsilon}\in L^{1}(\mathbb{R}^{N}\times(0,T))$. One way to get round the difficulty is to multiply (\ref{4premiereeq}) by $\rho^{\varepsilon}1_{\{\rho\geq1\}}$. Then we obtain an estimate on $\rho^{s+\varepsilon}1_{\{\rho\geq1\}}$ in $L^{1}((0,T)\times\mathbb{R}^{N})$, as $\rho^{\varepsilon}1_{\{\rho\geq1\}}|{\rm div}u|\leq \rho|{\rm div}u|\in L^{1}((0,T)\times\mathbb{R}^{N})$ (recall that $\varepsilon\leq\frac{1}{2}$), and we can conclude since $0\leq\rho^{s+\varepsilon}1_{\{\rho<1\}}\leq\rho$ on $(0,T)\times\mathbb{R}^{N}$ and $\rho\in L^{\infty}(L^{1})$.\\ \\ We end with the term $\rho^{\varepsilon}(R_{i}R_{j}(\rho\,u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho\,u_{j}))$. In the same way as in the previous inequalities, $\rho^{\varepsilon}R_{i}R_{j}(\rho\,u_{i}u_{j})$ is bounded in $L^{1}(0,T;L^{1}(\mathbb{R}^{N}))$. Indeed, by H\"older's inequalities and the continuity of $R_{i}$ from $L^{p}$ to $L^{p}$ for $1<p<+\infty$, we have: $\frac{1}{s}+2\frac{N-2}{2N}+\frac{\varepsilon}{s}=1-\frac{2}{N}+\frac{1+\varepsilon}{s}\leq1$ (because $s>\frac{N}{2}$), and we conclude by interpolation. We treat the term $\rho^{\varepsilon}u_{i}R_{i}R_{j}(\rho\,u_{j})$ similarly. \\ \\ We now have to treat the case $N=2$, where the estimates must be modified in the critical cases of the Sobolev embedding. \subsubsection*{Case $N=2$:} In the case $N=2$ most of the proof given above remains valid, except for the slightly more delicate terms $\rho^{\varepsilon}{\rm div}u|(-\Delta)^{-1}{\rm div}(\rho u)|$ and $\rho^{\varepsilon}(R_{i}R_{j}(\rho\,u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho\,u_{j}))$.\\ We start with the term $|\rho^{\varepsilon}{\rm div}u(-\Delta)^{-1}{\rm div}(\rho u)|$. In our previous estimate it was possible to use the Sobolev embedding on the term $(-\Delta)^{-1}{\rm div}(\rho u)$ only if $r<N$ (see the notation above), so in the case $N=2$ we are in a critical case for the Sobolev embedding when $\gamma\geq2$.\\ This may be overcome by using that, by virtue of Sobolev embedding, we have: $$ \begin{aligned} &\||{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|\|_{L^{1}}\leq C\|\rho\|_{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})}^{\varepsilon} \|\rho u\|_{L^{2(\gamma+\varepsilon)}(L^{\frac{2(\gamma+\varepsilon)}{\gamma+\varepsilon+1}})}.\\ \end{aligned} $$ Indeed, for the space exponents, $\frac{1}{2}+\frac{\varepsilon}{\gamma+\varepsilon}+\frac{\gamma+\varepsilon+1}{2(\gamma+\varepsilon)}-\frac{1}{2}=\frac{1}{2}+\frac{2\varepsilon+1}{2(\gamma+\varepsilon)}\leq1$, and, for the time exponents, $\frac{1}{2}+\frac{\varepsilon}{\gamma+\varepsilon}+\frac{1}{2(\gamma+\varepsilon)}\leq1$. Moreover, writing $\rho u=\sqrt{\rho}\sqrt{\rho}u$, we have: $$\|\rho u\|_{L^{2(\gamma+\varepsilon)}(L^{\frac{2(\gamma+\varepsilon)}{\gamma+\varepsilon+1}})}\leq C\| \rho\|_{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})}^{\frac{1}{2}},$$ and then: $$\||{\rm div}u|\rho^{\varepsilon}|(-\Delta)^{-1}{\rm div}(\rho u)|\|_{L^{1}(L^{1})}\leq C\|\rho\|^ {\varepsilon+\frac{1}{2}}_{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})}.$$ \\ Next we are interested in the term $\rho^{\varepsilon}(R_{i}R_{j}(\rho\,u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho\,u_{j}))$. We use the fact that $u$ is bounded in $L^{2}(0,T;\dot{H}^{1})$ and thus in $L^{2}(0,T;BMO)$.
Then, by the Coifman-Rochberg-Weiss commutator theorem of \cite{4CRW}, we have:\\ $$\|R_{i}R_{j}(\rho\,u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho\,u_{j})\|_{L^{\frac{2(\gamma+\varepsilon)}{\gamma+\varepsilon+1}}(L^{\frac{2(\gamma+\varepsilon)}{\gamma+\varepsilon+1}})} \leq C\|u\|_{L^{2}( BMO)}\|\rho\,u\|_{L^{2(\gamma+\varepsilon)}(L^{\frac{2(\gamma+\varepsilon)}{\gamma+\varepsilon+1}})},$$ so that:\\ $$\|\rho^{\varepsilon}(R_{i}R_{j}(\rho\,u_{i}u_{j})-u_{i}R_{i}R_{j}(\rho\,u_{j}))\|_{L^{1}}\leq C\|\rho\|^{\varepsilon+\frac{1}{2}} _{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})}.$$ \\ In view of the previous inequalities we finally get: $$\|\rho\|_{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})}^{\gamma+\varepsilon}\leq C(1+\|\rho\|_{L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})} ^{\frac{1}{2}+\varepsilon}),$$ and the $L^{\gamma+\varepsilon}(L^{\gamma+\varepsilon})$ bound on $\rho$ is proven, since $\frac{1}{2}+\varepsilon<\gamma+\varepsilon$. \hfill{$\Box$} \\ \subsection{Compactness results for compressible Navier-Stokes equations of Korteweg type in the case of an isentropic pressure} In the sequel we do not treat in detail the case $N\geq4$ with $\gamma<\frac{N}{2}(1+\frac{1}{N})$; we just remark that the proof is the same as in the case $N=2$: it suffices to localize, because Theorem \ref{4integration} then provides the gain of integrability only on compact sets $K$.\\ Following Theorem \ref{4integration}, we assume that $\gamma>\frac{N}{2}$ if $N\geq4$ and $\gamma\geq1$ otherwise, so that if $(\rho,u)$ is a regular solution then $\rho\in L^{q}((0,T)\times\mathbb{R}^{N})$ with $q=s+\varepsilon$, $\varepsilon$ being given by Theorem \ref{4integration}. We can observe that in this case $q>s=\max(\gamma,2)$. This will be very useful in the sequel to justify the passage to the weak limit in several terms. Indeed the key point in proving the existence of weak solutions is the passage to the limit in the pressure term and in the capillarity term $\rho\nabla(\phi*\rho-\rho)$.\\ First, we assume that a sequence $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ of approximate weak solutions has been constructed by a mollifying process, with enough regularity to justify the formal estimates, such as the energy estimate and Theorem \ref{4integration}. The initial data of $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ are those of Theorem \ref{4principal}, with uniform bounds. Moreover $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ satisfies the energy inequality (\ref{4inegaliteenergie}) and Theorem \ref{4integration}; we thus have: \begin{itemize} \item $\rho_{n}$ is bounded uniformly in $L^{\infty}(0,T;L^{1}\cap L^{s}(\mathbb{R}^{N}))\cap C([0,T];L^{p}(\mathbb{R}^{N}))$ for $1\leq p<\max(2,\gamma)$, \item $\rho_{n}\geq0$ a.e and $\rho_{n}$ is bounded uniformly in $L^{q}((0,T)\times\mathbb{R}^{N})$ with $q>s$, \item $\nabla u_{n}$ is bounded in $L^{2}(0,T;L^{2}(\mathbb{R}^{N}))$, $\rho_{n} |u_{n}|^{2}$ is bounded in $L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$, \item $u_{n}$ is bounded in $L^{2}(0,T;L^{\frac{2N}{N-2}}(\mathbb{R}^{N}))$ for $N\geq3$.
\end{itemize} Passing to the weak limit in the previous bounds, and extracting subsequences if necessary, we have: \begin{itemize} \item $\rho_{n}\rightarrow\rho$ weakly in $L^{s}((0,T)\times\mathbb{R}^{N})$, \item $u_{n}\rightarrow u$ weakly in $L^{2}(0,T,\dot{H}^{1}(\mathbb{R}^{N}))$, \item $\rho_{n}^{\gamma}\rightarrow \overline{\rho^{\gamma}}$ weakly in $L^{r}((0,T)\times\mathbb{R}^{N})$ for $r=\frac{q}{\gamma}>1$, \item $\rho_{n}^{2}\rightarrow \overline{\rho^{2}}$ weakly in $L^{r_{1}}((0,T)\times\mathbb{R}^{N})$ for $r_{1}=\frac{q}{2}>1$. \end{itemize} \begin{notation} We will always write in the sequel $\overline{B(\rho)}$ for the weak limit of a sequence $B(\rho_{n})$ bounded in an appropriate space, to be made precise in each case. \end{notation} We recall that the main difficulty is to pass to the limit in the pressure and capillary terms. The idea of the proof is to test the convergence of the sequence $(\rho_{n})_{n\in\mathbb{N}}$ against convex functions $B$, in order to use their lower semi-continuity properties with respect to the weak topology of $L^{1}(\mathbb{R}^{N})$. To this end we use the theory of renormalized solutions introduced by Diperna and Lions in \cite{4L1}. We will thus obtain strong convergence of $\rho_{n}$ in appropriate spaces. \subsection{Idea of the proof\label{4idee}} We give here a sketch of the proof of Theorem \ref{4principal}. In this spirit, we can rewrite the mass equation satisfied by the regular solutions $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ in the form: $$\frac{\partial}{\partial t}(B(\rho_{n}))+{\rm div}(u_{n}B(\rho_{n}))=(B(\rho_{n})-\rho_{n} B^{'}(\rho_{n})){\rm div}u_{n}.$$ Assuming that $B(\rho_{n})$ is bounded in an appropriate space, we can pass to the weak limit, where in the energy space $\rho_{n}\rightharpoonup\rho$ and $u_{n}\rightharpoonup u$, and we get: \begin{equation} \frac{\partial}{\partial t}(\overline{B(\rho)})+{\rm div}(u \overline{B(\rho)})=\overline{(B(\rho)-\rho B^{'}(\rho)){\rm div}u}. \label{41} \end{equation} We will set: $b(\rho)=B(\rho)-\rho B^{'}(\rho).$\\ Next, considering the mass equation satisfied by the approximate solutions $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ and passing directly to the limit via weak convergence, arguing as P.-L. Lions in \cite{4L2} p. 13, we get: \begin{equation} \frac{\partial}{\partial t}\rho+{\rm div}(\rho u)=0. \label{4a1} \end{equation} It then only remains to verify the passage to the limit in the product $\rho u$. Next we use the Diperna-Lions theorem on renormalized solutions of \cite{4L1} applied to (\ref{4a1}), recalling that $\rho\in L^{\infty}(L^{2})$. So we get: \begin{equation} \frac{\partial}{\partial t}(B(\rho))+{\rm div}(uB(\rho))=b(\rho){\rm div}(u). \label{42} \end{equation} Subtracting (\ref{42}) from (\ref{41}), we obtain: \begin{equation} \frac{\partial}{\partial t}(\overline{B(\rho)}-B(\rho))+{\rm div}(u(\overline{B(\rho)}-B(\rho)))=\overline{b(\rho){\rm div}u}-b(\rho){\rm div}u. \label{4a2} \end{equation} Consequently, in order to estimate the difference $\overline{B(\rho)}-B(\rho)$, which tests the convergence of $\rho_{n}$, we need to estimate the difference $\overline{b(\rho){\rm div}(u)}-b(\rho){\rm div}(u)$. We then choose $B$ a concave function, so that: $$\overline{B(\rho)}-B(\rho)\leq 0.$$ The goal will now be to prove the reverse inequality, in order to justify that $B(\rho_{n})$ tends to $B(\rho)$ a.e.\\ So we now aim at estimating the difference $\overline{b(\rho){\rm div}u}-b(\rho){\rm div}u$.
This may be achieved by introducing, following D. Hoff \cite{4H1}, the effective viscous pressure $\mbox{P}_{eff}=P-(2\mu+\lambda){\rm div}u$, which satisfies some important weak convergence properties.\\ In fact, owing to the capillarity term, we adapt Hoff's concept to our equation by setting: $$\widetilde{P}_{eff}=P+\frac{\kappa}{2}\rho^{2}-(2\mu+\lambda){\rm div}u.$$ {\bf Proof of Theorem \ref{4principal}} \\ \\ We begin with the case $N\geq3$; we will then complete the proof with the case $N=2$, specifying the changes to be made.\\ Before getting into the heart of the proof, we first recall that the convergence in the distribution sense of $\rho_{n}u_{n}$ to $\rho u$ and of $\rho_{n}(u_{n})_{i}(u_{n})_{j}$ to $\rho u_{i}u_{j}$ is obtained easily. We refer to the classical result by Lions (see \cite{4L2}) or to the book of Novotn\'y and Stra$\check{\mbox{s}}$kraba \cite{4NS}. \subsubsection*{Case $N\geq3$} We have seen in Subsection \ref{4idee} that our goal is to compare $\overline{B(\rho)}$ and $B(\rho)$ for suitable concave functions $B$. From the mass equation we have obtained: \begin{equation} \partial_{t}(\overline{B(\rho)}-B(\rho))+{\rm div}(u(\overline{B(\rho)}-B(\rho)))=\overline{b(\rho){\rm div}(u)}-b(\rho){\rm div}(u). \label{4a3} \end{equation} \\ So, before comparing $\overline{B(\rho)}$ and $B(\rho)$, we have to investigate the expression $\overline{b(\rho){\rm div}(u)}-b(\rho){\rm div}(u)$. By virtue of Theorem \ref{4integration}, which gives a gain of integrability, we can take the function $B(x)=x^{\varepsilon}$, as we control $\rho^{s+\varepsilon}$ for $\varepsilon$ small enough. Our goal now is to exhibit the effective pressure $\widetilde{P}_{eff}$ and to multiply it by $\rho^{\varepsilon}$ in order to extract $\overline{{\rm div}u\, b(\rho)}$; we will see in the sequel how to compare it with $b(\rho){\rm div}(u)$. So we focus on the convergence of the pressure and capillarity terms. \subsubsection*{Control of the term $\overline{{\rm div}u\, b(\rho)}$} We apply the divergence operator to the momentum equation satisfied by the regular solutions. We get: \begin{equation} \begin{aligned} \frac{\partial}{\partial t}{\rm div}(\rho_{n} u_{n})+\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n})-\zeta\Delta{\rm div} u_{n}+\Delta(a\rho^{\gamma}_{n})=&\kappa{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))-\frac{\kappa}{2}\Delta(\rho^{2}_{n}),\\ \label{410.1} \end{aligned} \end{equation} with $\zeta=\lambda+2\mu$.
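The splitting of the capillary term in (\ref{410.1}) simply uses the pointwise identity $\rho\nabla\rho=\nabla\frac{\rho^{2}}{2}$ (a one-line check):
$$\kappa\,{\rm div}\big(\rho_{n}\nabla(\phi*\rho_{n}-\rho_{n})\big)=\kappa\,{\rm div}\big(\rho_{n}(\nabla\phi*\rho_{n})\big)-\kappa\,{\rm div}\Big(\nabla\frac{\rho_{n}^{2}}{2}\Big)=\kappa\,{\rm div}\big(\rho_{n}(\nabla\phi*\rho_{n})\big)-\frac{\kappa}{2}\Delta(\rho_{n}^{2}),$$
which is why the nonlocal part and the local part $-\frac{\kappa}{2}\Delta(\rho_{n}^{2})$ appear separately on the right-hand side of (\ref{410.1}).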
Applying the operator $(-\Delta)^{-1}$ to (\ref{410.1}), we obtain: \begin{equation} \begin{aligned} &\frac{\partial}{\partial t}(-\Delta)^{-1}{\rm div}(\rho_{n} u_{n})+(-\Delta)^{-1}\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n})+[\zeta{\rm div} u_{n}-a\rho^{\gamma}_{n}-\frac{\kappa}{2}\rho^{2}_{n}]\\ &\hspace{8cm}=\kappa(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))\;.\\ \end{aligned} \label{43} \end{equation} Next we multiply (\ref{43}) by $\rho_{n}^{\varepsilon}$, with $\varepsilon\in(0,1)$ chosen small enough: \begin{equation} \begin{aligned} &[\zeta{\rm div} u_{n}-a\rho_{n}^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2}]\rho_{n}^{\varepsilon}=\kappa\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))\\ &\hspace{4cm}-\rho_{n}^{\varepsilon}\frac{\partial}{\partial t}(-\Delta)^{-1}{\rm div}(\rho_{n} u_{n})-\rho_{n}^{\varepsilon}(-\Delta)^{-1}\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n}).\\ \end{aligned} \label{44} \end{equation} Rewriting (\ref{44}), we have: \begin{equation} \begin{aligned} &[\zeta{\rm div} u_{n}-a\rho_{n}^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2}]\rho_{n}^{\varepsilon}=\kappa\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))-\rho_{n}^{\varepsilon}(-\Delta)^{-1}\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n})\\[2mm] &\hspace{4,6cm}-\frac{\partial}{\partial t}\big(\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})\big)+\big[\frac{\partial}{\partial t}\rho_{n}^{\varepsilon}\big](-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}).\\ \end{aligned} \end{equation} Next we have: \begin{equation} \begin{aligned} &[\zeta{\rm div} u_{n}-a\rho_{n}^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2}]\rho_{n}^{\varepsilon}=\kappa\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))-\rho_{n}^{\varepsilon}(-\Delta)^{-1}\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n})\\[2mm] &\hspace{4,1cm}-\frac{\partial}{\partial t}[\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})]- {\rm div}[ u_{n}\rho_{n}^{\varepsilon} (-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})]\\[2mm] &\hspace{2,5cm}+\rho_{n}^{\varepsilon}u_{n}\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}) +(1-\varepsilon)({\rm div}u_{n})\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}),\\[2mm] \end{aligned} \label{45} \end{equation} or finally: \begin{equation} \begin{aligned} &[\zeta{\rm div} u_{n}-a\rho_{n}^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2}]\rho_{n}^{\varepsilon}=\kappa\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))\\[2mm] &\hspace{0,5cm}-\frac{\partial}{\partial t}[\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})]-{\rm div}[ u_{n}\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})]\\[2mm] &\hspace{3cm}+\rho_{n}^{\varepsilon}[u_{n}\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})-(-\Delta)^{-1}\partial^{2}_{ij}(\rho_{n} u^{i}_{n}u^{j}_{n})]\\[2mm] &\hspace{6,3cm}+(1-\varepsilon)({\rm div}u_{n})\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})\;.\\ \end{aligned} \label{46} \end{equation} \\ Now, as in Lions \cite{4L2}, we want to pass to the limit in the distribution sense in (\ref{46}), in order to estimate $\overline{{\rm div}u\,\rho^{\varepsilon}}$. \subsubsection*{Passage to the weak limit in (\ref{46})} In this goal we use the following lemma of P.-L. Lions in \cite{4L2} to express the weak limit of the nonlinear terms. \begin{lemme} \label{4lemme1} Let $\Omega$ be an open subset of $\mathbb{R}^{N}$.
Let $(g_{n},h_{n})$ converge weakly to $(g,h)$ in $L^{p_{1}}(0,T,L^{p_{2}}(\Omega))\times L^{q_{1}}(0,T,L^{q_{2}}(\Omega))$, where $1\leq p_{1},p_{2},q_{1},q_{2}\leq+\infty$ satisfy:\\ $$\frac{1}{p_{1}}+\frac{1}{q_{1}}=\frac{1}{p_{2}}+\frac{1}{q_{2}}=1\;.$$ We assume in addition that: \begin{equation} \frac{\partial g^{n}}{\partial t}\;\;\mbox{is bounded in}\;\; L^{1}(0,T,W^{-m,1}(\Omega))\;\;\mbox{for some}\;\;m\geq0\;\;\mbox{independent of}\;\;n,\label{4101}\\ \end{equation} and that: \begin{equation} \|h^{n}-h^{n}(\cdot,\cdot+\xi)\|_{L^{q_{1}}(0,T,L^{q_{2}}(\Omega))}\rightarrow0\;\;\;\;\mbox{as}\;\; |\xi|\rightarrow0,\,\mbox{uniformly in }n.\\ \label{4101a} \end{equation} Then $g^{n}h^{n}$ converges to $gh$ in the sense of distributions on $\Omega\times(0,T)$. \end{lemme} So we use the above lemma to pass to the weak limit in the four following nonlinear terms of (\ref{46}):\\ $$ \begin{array}{ll} T^{1}_{n}=u_{n}\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}),\;\;\;&T^{2}_{n}=\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}),\\[2mm] T^{3}_{n}=\rho_{n}^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n})),\;\;\;& T^{4}_{n}=({\rm div}u_{n})(\rho_{n})^{\varepsilon}(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})\;.\\ \end{array} $$ We choose the various $g_{n}^{i}$ and $h_{n}^{i}$ as follows: $$ \begin{aligned} &\mbox{for}\;\;T^{1}_{n}\hspace{1cm}g_{n}^{1}=u_{n}(\rho_{n})^{\varepsilon}\hspace{1,3cm}g^{1}=u\overline{\rho^{\varepsilon}}\hspace{1,3cm} h_{n}^{1}=(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})\hspace{3cm}\\ &\mbox{for}\;\;T^{2}_{n}\hspace{1cm}g_{n}^{2}=\rho_{n}^{\varepsilon}\hspace{2,15cm}g^{2}=\overline{\rho^{\varepsilon}}\hspace{1,5cm} h_{n}^{2}=(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n})\\ &\mbox{for}\;\;T^{3}_{n}\hspace{1cm}g_{n}^{3}=\rho_{n}^{\varepsilon}\hspace{2,15cm}g^{3}=\overline{\rho^{\varepsilon}}\hspace{1,5cm} h_{n}^{3}=(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))\\ &\mbox{for}\;\;T^{4}_{n}\hspace{1cm}g_{n}^{4}=({\rm div}u_{n})(\rho_{n})^{\varepsilon}\;\;\hspace{0,3cm}g^{4}= \overline{{\rm div}u\,\rho^{\varepsilon}}\;\;\;\hspace{0,3cm}h_{n}^{4}=(-\Delta)^{-1}{\rm div}(\rho_{n}u_{n}).\\ \end{aligned} $$ To show that $u_{n}(\rho_{n})^{\varepsilon}$ converges in the distribution sense to $u\overline{\rho^{\varepsilon}}$, we simply apply Lemma \ref{4lemme1} with $h_{n}=u_{n}$ and $g_{n}=\rho_{n}^{\varepsilon}$. We now examine each term and apply the above lemma to pass to the weak limit.\\ We start with the first term $T^{1}_{n}$. We have $\rho_{n}^{\varepsilon}u_{n}\in L^{\infty}(L^{q})\cap L^{2}(L^{r})$ with $\frac{1}{q}=\frac{\varepsilon}{2s}+\frac{1}{2}$ and $\frac{1}{r}=\frac{N-2}{2N}+\frac{\varepsilon}{s}=\frac{1}{2}-\frac{1}{N}+\frac{\varepsilon}{s}$. In addition the hypothesis (\ref{4101}) is immediately verified (use the momentum equation).\\ We now verify the hypothesis (\ref{4101a}): $h^{1}_{n}$ belongs to $L^{\infty}(W_{loc}^{1,q^{'}}(\mathbb{R}^{N}))\cap L^{2}(W_{loc}^{1,r^{'}}(\mathbb{R}^{N}))$ with $\frac{1}{q^{'}}=\frac{1}{2}+\frac{1}{2s}$ and $\frac{1}{r^{'}}=\frac{N-2}{2N}+\frac{1}{s}=\frac{1}{2}+\frac{1}{s}-\frac{1}{N}$. This enables us to verify the hypothesis (\ref{4101a}) by Sobolev embedding.\\ So we can choose (with the notation of the above lemma) $q_{1}=2$ and $q_{2}\in (r^{'},\frac{Nr^{'}}{N-r^{'}})$, $p_{1}=2$, and $p_{2}$ such that $\frac{1}{p_{2}}=1-\frac{1}{q_{2}}$, which is possible by interpolation.
Indeed we have: $\frac{1}{r^{'}}+\frac{1}{r}=1-\frac{2}{N}+\frac{1+\varepsilon}{s}\leq1$.\\ We proceed in the same way for $T^{2}_{n}$ and $T^{4}_{n}$. \\ We can similarly examine $T^{3}_{n}$: since $\rho_{n}^{\varepsilon}\in L^{\infty}(L^{\frac{1}{\varepsilon}}\cap L^{\frac{s}{\varepsilon}})$ and $\rho_{n}(\nabla\phi*\rho_{n})\in L^{\infty}(L^{1}\cap L^{\frac{s}{2}})$, we can choose $p_{2}=\frac{1}{\varepsilon}$; we then have $(-\Delta)^{-1}{\rm div}(\rho_{n}(\nabla\phi*\rho_{n}))\in L^{\infty}(0,T;W^{1,\frac{s}{2}})$, so that we can choose $q_{1}=2$ and $q_{2}\in(1,\frac{Ns}{2N-s})$. We conclude by interpolation. \\ \\ Finally we have to study the last nonlinear term, which we treat similarly to P.-L. Lions in \cite{4L2}: $$A_{n}=\rho_{n}^{\varepsilon}[u_{n}\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho_{n} u_{n})-(-\Delta)^{-1}\partial_{ij}^{2}(\rho_{n} (u_{i})_{n}(u_{j})_{n})].$$ We can express this term $A_{n}$ as follows: $$A_{n}=\rho_{n}^{\varepsilon}[u^{j}_{n},R_{ij}](\rho_{n} u^{i}_{n}),$$ where $R_{ij}=(-\Delta)^{-1}\partial_{ij}^{2}=R_{i}R_{j}$, with $R_{i}$ the classical Riesz transform.\\ Next, we use a result of Coifman and Meyer on this type of commutator (see \cite{4M}) to take advantage of the regularity of $[u^{j}_{n},R_{ij}](\rho_{n} u^{i}_{n})$. \begin{theorem} The following map is continuous for any $N\geq 2$: \begin{equation} \begin{aligned} &W^{1,r_{1}}(\mathbb{R}^{N})^{N}\times L^{r_{2}}(\mathbb{R}^{N})\rightarrow W^{1,r_{3}}(\mathbb{R}^{N})^{N}\\ &\hspace{3cm}(a,b)\rightarrow[a_{j},R_{i}R_{j}]b_{i}\\ \end{aligned} \label{4applicationlineaire} \end{equation} with: $\frac{1}{r_{3}}=\frac{1}{r_{1}}+\frac{1}{r_{2}}$. \end{theorem} To pass to the weak limit in $A_{n}$ we use the previous lemma. We start with the case $s>3$. The quantity $[u^{j}_{n},R_{ij}](\rho_{n} u^{i}_{n})$ belongs to the space $L^{1}(W^{1,q})$, provided that $D u_{n}\in L^{2}(L^{2})$ and $\rho_{n} u^{j}_{n}\in L^{2}(L^{r})$ with $\frac{1}{r}=\frac{N-2}{2N}+\frac{1}{s}=\frac{1}{2}-\frac{1}{N}+\frac{1}{s}$, in which case $\frac{1}{q}=\frac{1}{r}+\frac{1}{2}=1-\frac{1}{N}+\frac{1}{s}\leq1$.\\ Then we can use the above lemma applied to $h_{n}=[R_{ij},u^{j}_{n}](\rho_{n}u^{i}_{n})$ and $g_{n}=\rho_{n}^{\varepsilon}$. Using Lemma \ref{4lemme1} again, one can show easily that $h_{n}$ converges in the distribution sense to $[R_{ij},u^{j}](\rho u^{i})$.\\ So we can take: $q_{1}=1$, $p_{1}=+\infty$, $q_{2}\in (q,\frac{qN}{N-q})$ and $p_{2}$ such that $\frac{1}{p_{2}}=1-\frac{1}{q_{2}}$; this is possible because we may use interpolation, and we may localize as we wish since the limit is taken in the distribution sense.\\ In the case where $s\leq 3$, a simple interpolation argument can be used to accommodate the general case: it suffices to fix $L^{r_{2}}(\mathbb{R}^{N})$ in the map (\ref{4applicationlineaire}) and to use the Riesz-Thorin theorem. \\ Finally, passing to the limit in (\ref{46}), we get: \begin{equation} \begin{aligned} &[\zeta\overline{{\rm div} u\,\rho^{\varepsilon}}-a\overline{\rho^{\gamma+\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2+\varepsilon}}]=\kappa\overline{\rho^{\varepsilon}}\,(-\Delta)^{-1}{\rm div}(\overline{\rho(\nabla\phi*\rho)})-\frac{\partial}{\partial t}[\overline{\rho^{\varepsilon}}(-\Delta)^{-1}{\rm div}(\rho u)]\\ &\hspace{2,4cm}-{\rm div}[\overline{\rho^{\varepsilon}}u(-\Delta)^{-1}{\rm div}(\rho u)]+\overline{\rho^{\varepsilon}}[u\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho u)-(-\Delta)^{-1}\partial^{2}_{ij}(\rho u_{i}u_{j})]\\ &\hspace{9cm}+(1-\varepsilon)\overline{{\rm div} u\,\rho^{\varepsilon}}\,(-\Delta)^{-1}{\rm div}(\rho u).
\label{4111a}\\ \end{aligned} \end{equation} \subsubsection*{Inequality between the terms $\overline{\rho^{\varepsilon}}\,{\rm div}u$ and $\overline{{\rm div} u\,\rho^{\varepsilon}}$} We are now interested in the term $\overline{\rho^{\varepsilon}}\,{\rm div}u$, in order to compare the quantities $\overline{{\rm div} u\,\rho^{\varepsilon}}$ and $\overline{\rho^{\varepsilon}}\,{\rm div}u$, before considering the quantity $\rho^{\varepsilon}\,{\rm div}u-\overline{{\rm div} u\,\rho^{\varepsilon}}$. We pass to the weak limit directly in (\ref{43}) and, using Lemma \ref{4lemme1} again, we get: \begin{equation} \begin{aligned} \frac{\partial}{\partial t}(-\Delta)^{-1}{\rm div}(\rho u)+(-\Delta)^{-1}\partial_{ij}^{2}(\rho u_{i}u_{j})+&[\zeta{\rm div} u-a\overline{\rho^{\gamma}}-\frac{\kappa}{2}\overline{\rho^{2}}]\\ &=\kappa(-\Delta)^{-1}{\rm div}(\overline{\rho(\nabla\phi*\rho)}). \label{410a1} \end{aligned} \end{equation} Now we multiply (\ref{410a1}) by $\overline{\rho^{\varepsilon}}$; one can check that each term makes sense in the distribution sense.\\ Proceeding in the same way as before, we get: \begin{equation} \begin{aligned} &[\zeta{\rm div} u\,\overline{\rho^{\varepsilon}}-a\overline{\rho^{\gamma}}\;\overline{\rho^{\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2}}\,\overline{\rho^{\varepsilon}}]= \kappa\overline{\rho^{\varepsilon}}(-\Delta)^{-1}{\rm div}(\overline{\rho(\nabla\phi*\rho)})\\[2mm] &\hspace{0,3cm}-\frac{\partial}{\partial t}[\overline{\rho^{\varepsilon}}(-\Delta)^{-1}{\rm div} (\rho u)]+\overline{\rho^{\varepsilon}}[u\cdot\nabla(-\Delta)^{-1}{\rm div}(\rho u)-(-\Delta)^{-1}\partial^{2}_{ij}(\rho u_{i}u_{j})]\\[2mm] &\hspace{3,2cm}-{\rm div}[\overline{\rho^{\varepsilon}}u(-\Delta)^{-1}{\rm div}(\rho u)]+(1-\varepsilon)\overline{{\rm div} u\,\rho^{\varepsilon}}\,(-\Delta)^{-1}{\rm div}(\rho u).\\ \end{aligned} \label{411a} \end{equation} Subtracting (\ref{411a}) from (\ref{4111a}), we get: $$ \begin{aligned} &\zeta\,\overline{{\rm div} u\,\rho^{\varepsilon}}-a\overline{\rho^{\gamma+\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2+\varepsilon}}=\zeta\,{\rm div} u\, \overline{\rho^{\varepsilon}}-a\overline{\rho^{\gamma}}\,\overline{\rho^{\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2}}\, \overline{\rho^{\varepsilon}}\;\;\;\mbox{a.e}\;.\\ \end{aligned} $$ Next we observe that, by concavity of $x\mapsto x^{\theta}$ for $0<\theta<1$: $$(\overline{\rho^{\gamma+\varepsilon}})^{\frac{\varepsilon}{\gamma+\varepsilon}}\geq\overline{\rho^{\varepsilon}},\;\;\; (\overline{\rho^{\gamma+\varepsilon}})^{\frac{\gamma}{\gamma+\varepsilon}}\geq\overline{\rho^{\gamma}}\;\;\;\;\mbox{a.e}\;,$$ so that $\overline{\rho^{\gamma}}\,\overline{\rho^{\varepsilon}}\leq\overline{\rho^{\gamma+\varepsilon}}$ and, similarly, $\overline{\rho^{2}}\,\overline{\rho^{\varepsilon}}\leq\overline{\rho^{2+\varepsilon}}$. So we get: \begin{equation} \overline{{\rm div}u\,(\rho)^{\varepsilon}}\geq{\rm div}u\,\overline{\rho^{\varepsilon}}. \label{420a} \end{equation} \subsubsection*{Comparison between $\rho$ and $\overline{\rho^{\varepsilon}}^{\frac{1}{\varepsilon}}$} Since the $(\rho_{n},u_{n})$ are regular solutions, we have the equality (\ref{42}) applied to $B(x)=x^{\varepsilon}$: \begin{equation} \frac{\partial}{\partial t}\rho_{n}^{\varepsilon}+{\rm div}(u_{n}\rho_{n}^{\varepsilon})=(1-\varepsilon)({\rm div} u_{n})\rho_{n}^{\varepsilon}. \label{4e1} \end{equation} Passing to the weak limit in (\ref{4e1}), we get: \begin{equation} \frac{\partial}{\partial t}\overline{\rho^{\varepsilon}}+{\rm div}(u\,\overline{\rho^{\varepsilon}})=(1-\varepsilon)\overline{{\rm div} u\,\rho^{\varepsilon}}.
\label{4102} \end{equation} Combining this with (\ref{420a}), we get:\\ \begin{equation} \frac{\partial}{\partial t}\overline{\rho^{\varepsilon}}+{\rm div}(u\overline{\rho^{\varepsilon}})\geq(1-\varepsilon){\rm div}u\, \overline{\rho^{\varepsilon}}. \label{412} \end{equation} Now we wish to conclude that $\rho_{n}$ converges pointwise, by proving that $(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}}=\rho$; to finish we will use the following theorem (see \cite{4F} p. 34), applied to the strictly concave function $\varphi(x)=x^{\varepsilon}$. \begin{theorem} \label{4convexite} Let $(v_{n})_{n\in\mathbb{N}}$ be a sequence of functions bounded in $L^{1}(\mathbb{R}^{N})$ such that: $$v_{n}\rightharpoonup v\;\;\;\;\mbox{weakly in}\;\;L^{1}(\mathbb{R}^{N}).$$ Let $\varphi:\mathbb{R}\longrightarrow[-\infty,+\infty)$ be an upper semi-continuous strictly concave function such that $\varphi(v_{n})\in L^{1}(\mathbb{R}^{N})$ for any $n$, $$\varphi(v_{n})\rightharpoonup\overline{\varphi(v)}\;\;\;\mbox{weakly in}\;\;L^{1}(\mathbb{R}^{N}),$$ and $\overline{\varphi(v)}=\varphi(v)$ a.e. Then: $$v_{n}(y)\rightarrow v(y)\;\;\mbox{a.e.},$$ up to the extraction of a subsequence. \end{theorem} Now we want to use a Diperna-Lions type theorem on the inequality (\ref{412}). Our goal is to renormalize this inequality with the function $B(x)=x^{\frac{1}{\varepsilon}}$, so that one can compare $\rho$ and $\overline{\rho^{\varepsilon}}^{\frac{1}{\varepsilon}}$. Although (\ref{412}) does not correspond exactly to the mass equation, we can use the same techniques to renormalize it, provided that $\rho\in L^{\infty}(L^{2})$, which is the case. In our setting it is very important that $\rho\in L^{\infty}(L^{2})$: indeed, it avoids supplementary conditions on the exponent $\gamma$, unlike for the compressible Navier-Stokes system in \cite{4L2}. We recall the Diperna-Lions theorem on renormalized solutions for the mass equation. \begin{theorem} Suppose that $(\rho,u)$, with $\rho\in L^{\infty}(L^{2})$ and $\nabla u\in L^{2}((0,T)\times\mathbb{R}^{N})$, solves the mass equation in the distribution sense, and let $\beta\in C^{1}([0,\infty);\mathbb{R})$.\\ We then have: $$\frac{\partial \beta(\rho)}{\partial t}+{\rm div}(\beta(\rho)\,u)=(\beta(\rho)-\rho\beta^{'}(\rho)){\rm div}u$$ in the distribution sense.
\end{theorem} We now want to adapt this theorem to our inequality (\ref{412}) with $\beta(x)=x^{\frac{1}{\varepsilon}}$, so we regularize by $\omega_{\alpha}$ (with $\omega_{\alpha}=\frac{1}{\alpha^{N}}\omega(\frac{\cdot}{\alpha})$ where $\omega\in C_{0}^{\infty}(\mathbb{R}^{N})$, $\mbox{supp}\;\omega\in B_{1}$ and $\int\omega dx=1$) and find for all $\beta\in C^{\infty}_{0}([0,+\infty))$: $$ \begin{aligned} &\frac{\partial}{\partial t}(\overline{\rho^{\varepsilon}}*\omega_{\alpha})+{\rm div}[u\,\overline{\rho^{\varepsilon}}*\omega_{\alpha}]\geq (1-\varepsilon){\rm div}u\, \overline{\rho^{\varepsilon}}*\omega_{\alpha}+R_{\alpha}\\ \end{aligned} $$ where: $$R_{\alpha}={\rm div}[u\,\overline{\rho^{\varepsilon}}*\omega_{\alpha}]-{\rm div}(u\,\overline{\rho^{\varepsilon}})*\omega_{\alpha}+(1-\varepsilon)[{\rm div}u\, \overline{\rho^{\varepsilon}}]*\omega_{\alpha}-(1-\varepsilon){\rm div}u\, \overline{\rho^{\varepsilon}}*\omega_{\alpha}.$$ Multiplying by $\beta^{'}(\overline{\rho^{\varepsilon}}*\omega_{\alpha})$ and using the chain rule, we obtain: $$ \begin{aligned} &\frac{\partial}{\partial t}(\beta(\overline{\rho^{\varepsilon}}*\omega_{\alpha}))+{\rm div}[u\,\beta(\overline{\rho^{\varepsilon}}*\omega_{\alpha})]\geq (1-\varepsilon){\rm div}u\, \overline{\rho^{\varepsilon}}*\omega_{\alpha}\,\beta^{'}(\overline{\rho^{\varepsilon}}*\omega_{\alpha})\\[2mm] &\hspace{3.6cm}+({\rm div}u)[\beta(\overline{\rho^{\varepsilon}}*\omega_{\alpha})-\overline{\rho^{\varepsilon}}*\omega_{\alpha}\beta^{'} (\overline{\rho^{\varepsilon}}*\omega_{\alpha})]+R_{\alpha}\beta^{'}(\overline{\rho^{\varepsilon}}*\omega_{\alpha})\\[2.5mm] &\hspace{5cm}=-\varepsilon({\rm div}u)\overline{(\rho)^{\varepsilon}}\beta^{'}(\overline{\rho^{\varepsilon}})+({\rm div}u)\,\beta(\overline{\rho^{\varepsilon}})+R_{\alpha}\beta^{'}(\overline{\rho^{\varepsilon}}*\omega_{\alpha})\;.\\ \end{aligned} $$ We then pass to the limit as $\alpha\rightarrow0$ and see that $R_{\alpha}$ tends to $0$, using the regularization lemma in \cite{4L1}, p.~43. This looks like a rather innocent manipulation, but it is at this point that we need to control $\rho$ in $L^{2}((0,T)\times\mathbb{R}^{N})$. In our case we therefore do not need to impose $\gamma>\frac{N}{2}$ for $N=2,3$.
Hence: $$ \begin{aligned} \frac{\partial}{\partial t}(\beta(\overline{(\rho)^{\varepsilon}}))+{\rm div}[u\,\beta(\overline{(\rho)^{\varepsilon}})]\geq &-\varepsilon({\rm div}u)\overline{\rho^{\varepsilon}}\beta^{'}(\overline{\rho^{\varepsilon}})+({\rm div}u)\beta(\overline{\rho^{\varepsilon}}).\\ \end{aligned} $$ \\ We then choose $\beta=(\Psi_{M})^{\frac{1}{\varepsilon}}$ where $\Psi_{M}=M\Psi(\frac{\cdot}{M})$, $M\geq1,\,\Psi\in C^{\infty}_{0}([0,+\infty)),\,\Psi(x)=x$ on $[0,1]$, $\mbox{supp}\Psi\subset[0,2]$, and we obtain:\\ $$ \begin{aligned} &\frac{\partial}{\partial t}(\Psi_{M}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}})+{\rm div}[u\,\Psi_{M}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}}]\\[2mm] &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\geq-({\rm div}u)\Psi_{M}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}-1}\Psi_{M}^{'}(\overline{\rho^{\varepsilon}})\overline{\rho^{\varepsilon}}+({\rm div}u)\Psi_{M}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}}\\[2mm] &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;={\rm div}u\,\Psi_{M}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}-1} [\Psi_{M}(\overline{\rho^{\varepsilon}})-\Psi_{M}^{'}(\overline{\rho^{\varepsilon}})\overline{\rho^{\varepsilon}}] 1_{(\overline{\rho^{\varepsilon}}>M)}\\[2mm] &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\geq -C_{0}|{\rm div}u|M^{\frac{1}{\varepsilon}}1_{(\overline{\rho^{\varepsilon}}>M)},\\ \end{aligned} $$ \\ where $C_{0}=\sup\{|\Psi(x)|^{\frac{1}{\varepsilon}-1}|\Psi(x)-x\Psi^{'}(x)|, \;x\in[0,+\infty)\}$.\\ Now we claim that: \begin{equation} \frac{\partial}{\partial t}(\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}})+{\rm div}(u\,\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}})\geq0.\\ \label{413} \end{equation} To prove this, we notice that by convexity $\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}}\leq\rho$, so we get: $$\||{\rm div}u|M^{\frac{1}{\varepsilon}}1_{\overline{(\rho)^{\varepsilon}}>M}\|_{L^{1}_{T}(L^{1}(\mathbb{R}^{N}))}\leq\|{\rm div}u\|_{L^{2}_{T}(L^{2}(\mathbb{R}^{N}))}\|\rho\,1_{\rho>M^{\frac{1}{\varepsilon}}}\|_{L^{2}_{T}(L^{2}(\mathbb{R}^{N}))}\rightarrow0\;\; \mbox{as}\;\;M\rightarrow +\infty,$$ and we conclude by dominated convergence. \\ At this stage we subtract (\ref{413}) from the mass equation and, setting $r=\rho-\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}}$, we get:\\ \begin{equation} \frac{\partial}{\partial t}(r)+{\rm div}(ur)\leq0.\\ \label{414} \end{equation} We now want to integrate and to use the fact that $r\geq0$ to get that $r=0$ a.e. To justify the integration we test our inequality against a cut-off function of the form $\varphi(\frac{\cdot}{R})$ where $\varphi\in C^{\infty}_{0}(\mathbb{R}^{N})$, $\varphi=1$ on $B(0,1)$, $\mbox{supp}\,\varphi\subset B(0,2)$ and $R>1$.
We test the equation $(\ref{414})$ against $\varphi_{R}$ and we get: \begin{equation} \int_{[0,T]\times\mathbb{R}^{N}}\frac{\partial}{\partial t}[r(t,x)]\varphi_{R}(x)-u(t,x)r(t,x)\frac{1}{R}\nabla\varphi(\frac{x}{R})dt\,dx\leq0.\\ \label{415} \end{equation} Next we notice that: $$ \begin{aligned} &|\int_{[0,T]\times\mathbb{R}^{N}}u(t,x)r(t,x)\frac{1}{R}\nabla\varphi(\frac{x}{R})dt\,dx|\leq\|u\|_{L^{2}(0,T;L^{\frac{2N}{N-2}}(\mathbb{R}^{N}))} \|r\|_{L^{2}(0,T;L^{\frac{2N}{N+2}}(\mathbb{R}^{N}))}\\ &\hspace{11cm}*\frac{1}{R}\|\nabla\varphi\|_{L^{\infty}(\mathbb{R}^{N})}.\\ \end{aligned} $$ This implies: $$\int_{[0,T]\times\mathbb{R}^{N}}u(t,x)r(t,x)\frac{1}{R}\nabla\varphi(\frac{x}{R})dt\,dx\rightarrow0\;\;\;\mbox{as} \;\;R\rightarrow+\infty.$$ We then have: $$\int_{[0,T]\times\mathbb{R}^{N}}\frac{\partial}{\partial t}r(t,x)\varphi_{R}(x)dt\,dx=\int_{\mathbb{R}^{N}}r(T,x)\varphi_{R}(x)dx- \int_{\mathbb{R}^{N}}r(0,x)\varphi_{R}(x)dx.$$ It remains to verify that $r(0,\cdot)=0$ in order to conclude. Indeed, we will then obtain: $$\lim_{R\rightarrow+\infty}\int_{\mathbb{R}^{N}}r(T,x)\varphi_{R}(x)dx=\int_{\mathbb{R}^{N}}r(T,x)dx\leq0\;\;\;\mbox{while} \;\;\;r\geq0,$$ hence $r=0$.\\ We know that $\rho_{n}$ is uniformly bounded in $L^{\infty}(L^{1}\cap L^{s}(\mathbb{R}^{N}))$, so $\rho_{n}^{\varepsilon}$ is relatively compact in $C([0,T];L^{p}-w)$ with $1<p<s$ (where $L^{p}-w$ denotes the space $L^{p}$ endowed with its weak topology). Moreover, $(\rho_{0}^{\varepsilon})_{n}$ converges to $\rho_{0}^{\varepsilon}$; we then deduce that $r(0)=0$ a.e. \\ Now, since $r=0$, we conclude using Theorem \ref{4convexite} that $\rho_{n}$ converges a.e.\ to $\rho$, and hence that $\rho_{n}$ converges to $\rho$ in $L^{p}([0,T]\times B_{R})$ for all $p\in[1,q)$ and in $L^{p_{1}}(0,T,L^{p_{2}}(B_{R}))$ for all $p_{1}\in[1,+\infty)$, $p_{2}\in[1,s)$ and for all $R\in(0,+\infty)$. \subsubsection*{Conclusion} We now wish to conclude and obtain the convergence of our theorem on the whole space. \\ We aim at proving the convergence of $\rho_{n}$ in $C([0,T],L^{p}(\mathbb{R}^{N}))\cap L^{q^{'}}(\mathbb{R}^{N}\times (0,T))$ for all $1\leq p<s,\,1\leq q^{'}<q$. It suffices to show the convergence of $\rho_{n}$ to $\rho$ in $C([0,T],L^{1}(\mathbb{R}^{N}))$. To this end, we introduce $d_{n}=\sqrt{\rho_{n}}$, which clearly converges to $d=\sqrt{\rho}$ in $L^{2p_{1}}(0,T,L^{2p_{2}}(B_{R}))\cap L^{2p}(B_{R}\times (0,T))$ for all $R\in(0,+\infty)$.\\ We next remark that $\rho\in C([0,T],L^{1}(\mathbb{R}^{N}))$ and thus $d\in C([0,T],L^{2}(\mathbb{R}^{N}))$.
Indeed, using once more the regularization lemma in \cite{4L1}, we obtain the existence of a bounded $\rho_{\alpha}\in C([0,T],L^{1}(\mathbb{R}^{N}))$, smooth in $x$ for all $t$, satisfying: $$\frac{\partial \rho_{\alpha}}{\partial t}+{\rm div}(u\rho_{\alpha})=r_{\alpha},$$ with $r_{\alpha}={\rm div}(u\rho_{\alpha})-{\rm div}(\rho u)*w_{\alpha}$ (where $w$ is defined as in the previous part) and $r_{\alpha}\rightarrow0$ in $L^{1}((0,T)\times\mathbb{R}^{N})$ as $\alpha \rightarrow 0_{+}$. Moreover $\rho_{\alpha}\rightarrow\rho$ in $L^{1}(\mathbb{R}^{N}\times(0,T))$ and $\rho_{\alpha}/ _{t=0}\rightarrow\rho/ _{t=0}$ in $L^{1}(\mathbb{R}^{N})$ as $\alpha\rightarrow0_{+}$.\\ From these facts, it is straightforward to deduce that: $$\frac{\partial}{\partial t}|\rho_{\alpha}-\rho_{\eta}|+{\rm div}(u|\rho_{\alpha}-\rho_{\eta}|)\leq|r_{\alpha}-r_{\eta}|$$ and thus: $$\sup_{[0,T]}\int_{\mathbb{R}^{N}}|\rho_{\alpha}-\rho_{\eta}|dx\leq\int_{0}^{T}\int_{\mathbb{R}^{N}}|r_{\alpha}-r_{\eta}|dx\,dt.$$ Since $\rho\in C([0,T],L^{p}(B_{R})-w)$ (for all $R\in(0,+\infty)$, $1<p<s$), we may then deduce that $\rho_{\alpha}$ converges to $\rho$ in $C([0,T],L^{1}(\mathbb{R}^{N}))$.\\ Next, we observe that we can justify, as we did above, that $d_{n}$ and $d$ satisfy:\\ $$\frac{\partial d_{n}}{\partial t}+{\rm div}(u_{n}d_{n})=\frac{1}{2}d_{n}{\rm div}(u_{n}),$$ $$\frac{\partial d}{\partial t}+{\rm div}(ud)=\frac{1}{2}d{\rm div}(u).$$ Therefore, once more, $d_{n}$ converges to $d$ in $C([0,T],L^{2}(\mathbb{R}^{N})-w)$.\\ Thus, in order to conclude, we just have to show that whenever $t_{n}\in[0,T]$, $t_{n}\rightarrow t$, then $d_{n}(t_{n})\rightarrow d(t)$ in $L^{2}(\mathbb{R}^{N})$, or equivalently that $\int_{\mathbb{R}^{N}}d_{n}(t_{n})^{2}dx=\int_{\mathbb{R}^{N}}\rho_{n}(t_{n})dx\rightarrow_{n}\int_{\mathbb{R}^{N}}d(t)^{2}dx= \int_{\mathbb{R}^{N}}\rho(t)dx$. This is the case since, integrating the mass equation over $\mathbb{R}^{N}$ and justifying the integration exactly as previously, we deduce that: $$\int_{\mathbb{R}^{N}}\rho_{n}(t_{n})dx=\int_{\mathbb{R}^{N}}(\rho_{0})_{n}dx\rightarrow_{n}\int_{\mathbb{R}^{N}}\rho_{0}dx= \int_{\mathbb{R}^{N}}\rho(t)dx.$$ We then conclude by uniform continuity that $\|\rho_{n}(t_{n})-\rho_{n}(t)\|_{L^{1}}$ tends to $0$. \subsubsection*{Case $N=2$} First of all, the main difficulty is the fact that we no longer have global $L^{p}$ bounds on $u_{n}$. This is why most of the proof is in fact local; we know that $u_{n}$ is bounded in $L^{2}(0,T;L^{p}(B_{R}))$ for all $p\in[1,+\infty)$, $R\in(0,+\infty)$.\\ As we need to localize the argument, we get the following limit: $$ \begin{aligned} &((\mu+\xi){\rm div}u_{n}-a\rho_{n}^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2})\rho_{n}^{\varepsilon}\rightharpoonup_{n} ((\mu+\xi){\rm div}u-a\overline{\rho^{\gamma}}-\frac{\kappa}{2}\overline{\rho^{2}})\,\overline{\rho^{\varepsilon}}\;\; \mbox{in}\;\;{\cal D}^{'}(\mathbb{R}^{N}\times[0,T]).\\ \end{aligned} $$ Let $\varphi\in C_{0}^{\infty}(\mathbb{R}^{N})$, $0\leq\varphi\leq1$, $\mbox{supp}\,\varphi\subset K$ for an arbitrary compact set $K\subset\mathbb{R}^{N}$.
We apply the operator $(-\Delta)^{-1}{\rm div}$ to the localized momentum equation and pass directly to the weak limit: \begin{equation} \begin{aligned} \frac{\partial}{\partial t}(-\Delta)^{-1}{\rm div}(\varphi\rho u)+R_{ij}(\varphi\rho u_{i}u_{j})+&[(\mu+\xi){\rm div} u\,\varphi-\overline{a\rho^{\gamma}}\varphi]\\ &=\kappa(-\Delta)^{-1}{\rm div}(\varphi\overline{\rho(\nabla\phi*\rho)}) -\frac{\kappa}{2}\varphi\overline{\rho^{2}}+(-\Delta)^{-1}\overline{R}, \label{410} \end{aligned} \end{equation} with: $$ \begin{aligned} &\overline{R}=\partial_{i}\varphi\partial_{j}(\rho u_{i}u_{j})+(\partial_{ij}\varphi)\rho u_{i}u_{j}-(\mu+\xi)\Delta\varphi{\rm div}u-(2\mu+\xi)\nabla\varphi\cdot \nabla{\rm div}u+\mu\Delta u\cdot\nabla\varphi\\ &+\Delta\varphi a\overline{\rho^{\gamma}}+a\nabla\varphi\cdot\nabla\overline{\rho^{\gamma}} +\frac{\kappa}{2}\Delta\varphi \overline{\rho^{2}}+\frac{\kappa}{2}\nabla\varphi\cdot\nabla\overline{\rho^{2}} .\\ \end{aligned} $$ Now we multiply (\ref{410}) by $\overline{\rho^{\varepsilon}}$ and verify that each term makes sense.\\ Proceeding in the same way as before, we can verify that $(\rho_{n})^{\varepsilon}(-\Delta)^{-1}R_{n}$ converges in the sense of distributions to $\overline{\rho^{\varepsilon}}(-\Delta)^{-1}\overline{R}$ for small enough $\varepsilon$. We get, as in the previous case $N\geq3$: $$ \begin{aligned} &\varphi[(\mu+\xi)\,\overline{{\rm div} u(\rho)^{\varepsilon}}-a\overline{\rho^{\gamma+\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2+\varepsilon}}]=\varphi[(\mu+\xi)({\rm div} u)\, \overline{\rho^{\varepsilon}}-a\overline{\rho^{\gamma}}\,\overline{\rho^{\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2}}\, \overline{\rho^{\varepsilon}}]\;\;\;\mbox{a.e}\;.\\ \end{aligned} $$ We then deduce the following inequality as in the previous proof: $$\frac{d}{dt}(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}}+{\rm div}(u(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}})\geq0\;\;\;\mbox{in}\;\;{\cal D}^{'}((0,T)\times\mathbb{R}^{N}).$$ We see that the only point left to check is the justification of the integration over $\mathbb{R}^{2}$ of terms like ${\rm div}(\overline{\rho^{\varepsilon}}^{\frac{1}{\varepsilon}}u)$ or ${\rm div}(\rho u)$, and more precisely that the integral vanishes. This is in fact straightforward provided we use the bounds $\rho\in L^{\infty}(L^{1}(\mathbb{R}^{N}))$ and $\rho |u|^{2}\in L^{\infty}(L^{1}(\mathbb{R}^{N}))$, so that $\rho u\in L^{\infty}(L^{1}(\mathbb{R}^{N}))$. Then, let $\varphi\in C_{0}^{\infty}(\mathbb{R}^{2})$, $0\leq\varphi\leq1$, $\varphi=1$ on $B(0,1)$ and $\varphi=0$ on $B(0,2)^{c}$. Setting $\varphi_{R}(\cdot)=\varphi(\frac{\cdot}{R})$ for $R\geq1$, we have, similarly as in the previous case: $$\begin{aligned} &|\int^{T}_{0}dt\int_{\mathbb{R}^{2}}ru\cdot\nabla\varphi_{R}(x)dx|\leq\|\nabla\varphi\|_{L^{\infty}(\mathbb{R}^{2})}\frac{1}{R}\|\rho u\|_{L^{1}((0,T)\times\mathbb{R}^{2})}\rightarrow0\;\;\;\mbox{as}\;\;R\rightarrow+\infty.\\ \end{aligned} $$ We can then conclude as in the previous proof.\\ \hfill{$\Box$} \subsubsection*{Proof of the convergence assertion on $\rho_{n}u_{n}$} We now want to show the convergence of $\rho_{n}u_{n}$ in order to obtain information on the strong convergence of $u_{n}$ away from the vacuum.
In this part we recall some classical inequalities used to get the convergence of $\rho_{n}u_{n}$; for more details see Lions \cite{4L2}. We use once more a mollifier $k_{\alpha}=\frac{1}{\alpha^{N}}k(\frac{\cdot}{\alpha})$ where $k\in C_{0}^{\infty}(\mathbb{R}^{N})$, and we let $g_{\alpha}=g*k_{\alpha}$ for an arbitrary function $g$. We first observe that we have for all $\frac{N}{2}<p<s$: $$ \begin{aligned} |\big((\rho_{n}u_{n})_{\alpha}-\rho_{n}u_{n}\big) (x)|=\big|\int_{\mathbb{R}^{d}}[\rho_{n}(t,y)-\rho_{n}(t,x)]&u_{n}(t,y)k_{\alpha}(x-y)dy\\ &+\rho_{n}(t,x)\big((u_{n})_{\alpha}-u_{n}\big)(t,x)\big|.\\ \end{aligned} $$ Using the H\"older inequality with the measure $k_{\alpha}(x-y)dy$, we get: $$ \begin{aligned} &|\big((\rho_{n}u_{n})_{\alpha}-\rho_{n}u_{n}\big) (x)|\leq\big[\int_{\mathbb{R}^{d}}|\rho_{n}(t,y)-\rho_{n}(t,x)|^{p}k_{\alpha}(x-y)dy\big]^{\frac{1}{p}}\,\big( |u_{n}|^{\frac{p}{p-1}}\big)_{\alpha}^{\frac{p-1}{p}}\\ &\hspace{10cm}+\rho_{n}|(u_{n})_{\alpha}-u_{n}|(t,x).\\ \end{aligned} $$ Hence for all $t\geq 0$: $$ \begin{aligned} &\int_{\mathbb{R}^{d}}|\big((\rho_{n}u_{n})_{\alpha}-\rho_{n}u_{n}\big) (x)|dx\leq\big[\int_{\mathbb{R}^{d}} dx\int_{\mathbb{R}^{d}}|\rho_{n}(t,y)-\rho_{n}(t,x)|^{p}k_{\alpha}(x-y)dy \big]^{\frac{1}{p}}\\ &\hspace{6cm} *\|\big( |u_{n}|^{\frac{p}{p-1}}\big)_{\alpha}\|_{L^{1}}^{\frac{p-1}{p}}+\|\rho_{n}\|_{L^{p}}\|(u_{n})_{\alpha}-u_{n} \|_{L^{\frac{p}{p-1}}},\\[3mm] &\hspace{3cm}\leq\big[\sup_{|z|\leq\alpha}\|\rho_{n}(\cdot+z)-\rho_{n}\|_{L^{p}}\big]\|u_{n}\|_{L^{\frac{p}{p-1}}} +\|\rho_{n}\|_{L^{p}}\|(u_{n})_{\alpha}-u_{n} \|_{L^{\frac{p}{p-1}}}.\\ \end{aligned} $$ Next, if we choose $p>\frac{2N}{N+2}$, so that $\frac{p}{p-1}<\frac{2N}{N-2}$, then $\|(u_{n})_{\alpha}-u_{n} \|_{L^{2}(0,T;L^{\frac{p}{p-1}})}$ converges to $0$ as $\alpha$ goes to $0_{+}$, uniformly in $n$. In addition, the convergence of $\rho_{n}$ ensures that $\sup_{|z|\leq\alpha}\|\rho_{n}(\cdot+z)-\rho_{n}\|_{L^{p}}$ converges to $0$ as $\alpha$ goes to $0_{+}$, uniformly in $n$. In conclusion, $(\rho_{n}u_{n})_{\alpha}-\rho_{n}u_{n}$ converges to $0$ in $L^{2}(0,T;L^{1})$ as $\alpha$ goes to $0_{+}$, uniformly in $n$.\\ Next, $(\rho_{n}u_{n})_{\alpha}$ is smooth in $x$, uniformly in $n$ and in $t\in[0,T]$. Therefore, remarking that $\frac{\partial}{\partial t}(\rho_{n}u_{n})_{\alpha}$ is bounded in $L^{2}(0,T;H^{m})$ for any $m\geq0$ (for each fixed $\alpha$), we deduce that $(\rho_{n}u_{n})_{\alpha}$ converges to $(\rho u)_{\alpha}$ as $n$ goes to $+\infty$ in $L^{1}((0,T)\times\mathbb{R}^{N})$ for each $\alpha$. Then, using the bound on $\rho_{n}u_{n}$ in $L^{\infty}(L^{\frac{2s}{s+1}})$, we deduce that $\rho_{n}u_{n}$ converges to $\rho u$ in $L^{1}((0,T)\times\mathbb{R}^{N})$, and we can conclude by interpolation.\\ The last convergence is a consequence of the strong convergence of $\rho_{n}$ and $\rho_{n}u_{n}$. \section{Existence of weak solutions with general pressure} In the sequel we restrict ourselves to the cases $N=2,3$. We now want to extend our previous result to more general and physical pressure laws. In particular we are interested in two cases: the first concerns monotone pressure laws (close, in a sense that we will make precise, to the $\rho^{\gamma}$ pressure); the second is the case of a slightly modified Van der Waals pressure.\\ The techniques of proof are very similar to the previous ones; only technical points change.
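Before turning to these two cases, it may help to keep in mind a worked instance (our own illustrative addition) of the pressure potential $\Pi$ introduced in the next subsection: for the model law $P(\rho)=a\rho^{\gamma}$ with $\gamma>1$, the defining relation $\frac{d}{ds}\big(\frac{\Pi(s)}{s}\big)=\frac{P(s)}{s^{2}}$ gives: $$\Pi(\rho)=\rho\int^{\rho}_{0}\frac{a s^{\gamma}}{s^{2}}\,ds=\frac{a\rho^{\gamma}}{\gamma-1},$$ which corresponds to the potential energy term $\frac{a}{\gamma-1}\rho^{\gamma}$ used in the previous sections.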
\subsection{Monotone pressure} In this section, we shall investigate an extension of the preceding results to the case of a general monotone pressure $P$, i.e.\ $P$ is assumed to be a $C^{1}$ non-decreasing function on $[0,+\infty)$ vanishing at $0$.\\ We first state our energy inequality in this general situation; we recall the inequality (\ref{4inegaliteenergie}): $$ \begin{aligned} &\int_{\mathbb{R}^{N}}(\frac{1}{2}\rho |u|^{2}+\Pi(\rho)+E_{global}[\rho(\cdot,t)])(x)dx+\int_{0}^{t}\int_{\mathbb{R}^{N}}(\mu D(u):D(u)\\ &\hspace{2cm}+(\lambda+\mu)|{\rm div} u|^{2})dx \leq\int_{\mathbb{R}^{N}}\big(\frac{|m_{0}|^{2}}{2\rho}+\Pi(\rho_{0})+E_{global}[\rho_{0}]\big)dx, \end{aligned} $$ where $\Pi$ is defined by: $\frac{d}{ds}(\frac{\Pi(s)}{s})=\frac{P(s)}{s^{2}}$ for all $s>0$.\\ There are two cases worth considering. First, if $P$ is such that $\int^{1}_{0}\frac{P(s)}{s^{2}}ds<+\infty$, then we can choose $\Pi(\rho)=\rho\int^{\rho}_{0}\frac{P(s)}{s^{2}}ds$.\\ In the other case, i.e.\ $\lim_{s\rightarrow0}\frac{P(s)}{s}=c>0$, we can choose $\Pi(\rho)=\rho\int^{1}_{\rho}\frac{P(s)}{s^{2}}ds$, and $\Pi$ behaves like $\rho\log\rho$ as $\rho$ goes to $0$. \\ We now consider a sequence of solutions $(\rho_{n},u_{n})$ and make the same assumptions on this sequence as in the previous section, except that we need to modify the assumptions on $\rho_{n}$. We assume as before that $\rho_{n}$ is bounded in $C([0,T],L^{1}(\mathbb{R}^{N}))$ and that $P(\rho_{n})$ is bounded in $L^{\infty}(0,T,L^{1}(\mathbb{R}^{N}))$, together with: $$(\rho_{n})_{n\geq1}\;\;\mbox{is bounded in}\;\; L^{\infty}(0,T,L^{2}(\mathbb{R}^{N})),$$ and we also assume that: $$(\rho_{n}^{\varepsilon}P(\rho_{n}))_{n\geq1}\;\;\mbox{is bounded in}\;\;L^{1}(K\times(0,T))$$ for some $\varepsilon>0$, where $K$ is an arbitrary compact set included in $\mathbb{R}^{N}$. \\ \begin{theorem} Let the assumptions of theorem \ref{4principal} be satisfied with, in addition, $P$ a monotone pressure.\\ Then there exists a renormalized finite energy weak solution to problem $(NSK)$ in the sense of definitions \ref{4defrenormal} and \ref{4defsolutionsfaibles}. Moreover $P(\rho_{n})$ converges to $P(\rho)$ in $L^{1}(K\times(0,T))$ for any compact set $K$. \label{4T32} \end{theorem} {\bf Proof:}\\ \\ In this situation, the proof of theorem \ref{4T32} is based on the same compactness argument as in theorem \ref{4principal}. In particular, there is essentially one observation which allows us to adapt the proof of theorem \ref{4principal}. Namely, we still obtain the following identity for the effective viscous flux: $$\beta(\rho_{n})((\mu+\xi){\rm div}u_{n}-P(\rho_{n})-\frac{\kappa}{2}\rho_{n}^{2})\rightharpoonup_{n}\overline{\beta(\rho)}((\mu+\xi){\rm div}u-\overline{P(\rho)}-\frac{\kappa}{2}\overline{\rho^{2}}),$$ with $\beta(\rho)=\rho^{\varepsilon}$ for $\varepsilon$ small enough. We have then: \begin{equation} (\mu+\xi)\overline{\rho^{\varepsilon}{\rm div}u}-\overline{P(\rho)\rho^{\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2+\varepsilon}}=(\mu+\xi)({\rm div}u)\,\overline{\rho^{\varepsilon}}-\overline{P(\rho)}\,\overline{\rho^{\varepsilon}}-\frac{\kappa}{2}\overline{\rho^{2}}\,\overline{\rho^{\varepsilon}}. \label{4effectifmono} \end{equation} We can now recall a lemma of P.-L.\ Lions \cite{4L2}: \begin{lemme} Let $p_{1}$, $p_{2}\in C([0,\infty))$ be non-decreasing functions.
We assume that $p_{1}(\rho_{n})$, $p_{2}(\rho_{n})$ are relatively weakly compact in $L^{1}(K\times(0,T))$ for any compact set $K\subset\mathbb{R}^{N}$. Then, we have: $$\overline{p_{1}(\rho)p_{2}(\rho)}\geq\overline{p_{1}(\rho)}\;\overline{p_{2}(\rho)}\;\;\mbox{a.e}.$$ \label{4lemme2} \end{lemme} Finally, as in the proof of theorem \ref{4principal}, using lemma \ref{4lemme2} we get: $$\overline{{\rm div}u\,\rho^{\varepsilon}}\geq{\rm div}u\,\overline{\rho^{\varepsilon}}.$$ The remaining arguments of the proof of theorem \ref{4principal} then apply to conclude. \hfill{$\Box$} \subsection{Pressure of Van der Waals type} In this section we are interested in pressures of Van der Waals type, which consequently are not necessarily non-decreasing. This is why, in the following proof, we proceed slightly differently.\\ We thus consider the pressure law: $$ \begin{aligned} P(\rho)&=\frac{RT_{*}\rho}{b-\rho}-a\rho^{2}\;\;\mbox{for}\;\;\rho\leq b-\theta\;\;\;\mbox{for some small}\;\;\theta>0,\\[2mm] \end{aligned} $$ and we extend $P$ to a strictly increasing function on $\rho\geq b-\theta$.\\ We then have: \begin{enumerate} \item $P^{'}$ is bounded from below, that is: $$P^{'}(\rho)\geq-\bar{\rho}\;\;\;\mbox{for all}\;\;\rho>0.$$ \item $P$ is a strictly increasing function for $\rho$ large enough. \end{enumerate} Under the above conditions, it is easy to see that the pressure can be written as: $$P(\rho)=P_{1}(\rho)-P_{2}(\rho),$$ with $P_{1}$ a non-decreasing function of $\rho$, and $$P_{2}\in C^{2}[0,+\infty),\;\;P_{2}\geq0,\;\;P_{2}=0\;\;\mbox{for}\;\;\rho\geq\overline{\rho}.$$ \begin{rem} The a priori energy estimate gives us the bound of $\rho$ in $L^{\infty}(L^{2})$; we then have: $$|\{(t,x)\in(0,T)\times\mathbb{R}^{N}/|\rho(t,x)|>b\}|\leq\frac{T\|\rho\|^{2}_{L^{\infty}(L^{2}(\mathbb{R}^{N}))}}{b^{2}}.$$ We can therefore control in measure the set where $P$ differs from the Van der Waals pressure, and it is a set of finite measure. \end{rem} We obtain the following theorem. \begin{theorem} If, in addition to the above assumptions, we assume that $\rho^{0}_{n}$ converges in $L^{1}(\mathbb{R}^{N})$ to $\rho_{0}$, then $(\rho,u)$ is a weak solution of the system $(NSK)$ satisfying the initial condition, and we have:\\ $$ \begin{aligned} \rho_{n}\rightarrow\rho\;\;\mbox{in}\;\;C([0,T],L^{p}(\mathbb{R}^{N}))\cap L^{r}((0,T)\times\mathbb{R}^{N})\;\; &\mbox{for all}\;\;1\leq p<2,\,1\leq r<1+\frac{4}{N}.\\ \end{aligned} $$ \label{4theorem3.8} \end{theorem} {\bf Proof: }\\ \\ Most of the proof of theorem \ref{4theorem3.8} is similar to that of theorem \ref{4principal}. We will use an approximation sequence $T_{k}$ (introduced by Feireisl in \cite{F}) of $\rho$ by concave bounded functions.
\begin{definition} We define the function $T\in C^{\infty}(\mathbb{R})$ as follows: $$ \begin{aligned} &T(z)=z\;\;\;\mbox{for}\;\;z\in[0,1],\\ &T(z)\;\;\mbox{concave on}\;\;[0,+\infty),\\ &T(z)=2\;\;\;\mbox{for}\;\;z\geq3,\\ &T(z)=-T(-z)\;\;\;\mbox{for}\;\;z\in(-\infty,0],\\ \end{aligned} $$ and $T_{k}$ is the cut-off function: $$T_{k}(z)=kT(\frac{z}{k}).$$ \end{definition} Following the proof of theorem \ref{4principal}, we get: $$ \begin{aligned} &\frac{\partial}{\partial t}(\overline{L_{k}(\rho)}-L_{k}(\rho))+{\rm div}((\overline{L_{k}(\rho)}-L_{k}(\rho))u)+\overline{T_{k}(\rho){\rm div}u}-\overline{T_{k}(\rho)}{\rm div}u\\ &\hspace{7cm}=(T_{k}(\rho)-\overline{T_{k}(\rho)}){\rm div}u\;\;\;\mbox{in}\;\;{\cal D}^{'}((0,T)\times\mathbb{R}^{N}),\\ \end{aligned} $$ where $L_{k}(\rho)=\rho\log(\rho)$ for $0\leq\rho\leq k$, and $0\leq L_{k}(\rho)\leq \rho\log(\rho)$ otherwise.\\ Integrating in time over $[t_{1},t_{2}]$, we get: $$ \begin{aligned} &\int_{\mathbb{R}^{N}}\big(\overline{L_{k}(\rho)}-L_{k}(\rho)\big)(t_{2})dx-\int_{\mathbb{R}^{N}}\big(\overline{L_{k}(\rho)}-L_{k}(\rho)\big)(t_{1})dx\\ &\hspace{1,5cm}+\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}\overline{T_{k}(\rho){\rm div}u}-\overline{T_{k}(\rho)}{\rm div}u\,dxdt=\int^{t_{2}}_{t_{1}} \int_{\mathbb{R}^{N}}(T_{k}(\rho)-\overline{T_{k}(\rho)}){\rm div}u\,dxdt.\\ \end{aligned} $$ We can show that: $$\|T_{k}(\rho)-\overline{T_{k}(\rho)}\|_{L^{2}((0,T)\times\mathbb{R}^{N})}\rightarrow_{k\rightarrow+\infty}0.$$ To prove this, we first observe that $\|T_{k}(\rho)-\overline{T_{k}(\rho)}\|_{L^{1}((0,T) \times\mathbb{R}^{N})}\rightarrow0$ as $k\rightarrow+\infty$, and then conclude by interpolation with $T_{k}(\rho)-\overline{T_{k}(\rho)}\in L^{q}((0,T)\times\mathbb{R}^{N})$ for some $q>2$. By the H\"older inequality we obtain: $$\int^{t_{2}}_{t_{1}} \int_{\mathbb{R}^{N}}(T_{k}(\rho)-\overline{T_{k}(\rho)}){\rm div}u\,dxdt\rightarrow0\;\;\;\mbox{as}\;k\rightarrow+\infty.$$ We then have: \begin{equation} \begin{aligned} &\lim_{k\rightarrow+\infty}\int_{\mathbb{R}^{N}}\big(\overline{L_{k}(\rho)}-L_{k}(\rho)\big)(t_{2})dx- \int_{\mathbb{R}^{N}}\big(\overline{L_{k}(\rho)}-L_{k}(\rho)\big)(t_{1})dx\\ &\hspace{3cm}=-\lim_{k\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}\overline{T_{k}(\rho){\rm div}u}-\overline{T_{k}(\rho)}{\rm div}u\,dxdt. \end{aligned} \end{equation} We set: $$ \begin{aligned} &dft[\rho_{n}\rightarrow\rho](t)=\lim_{k\rightarrow+\infty}\int_{\mathbb{R}^{N}}\big(\overline{L_{k}(\rho)}-L_{k}(\rho)\big)(t)dx,\\ &A(k,\rho)=\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}\overline{T_{k}(\rho){\rm div}u}-\overline{T_{k}(\rho)}{\rm div}u\,dxdt.\\ \end{aligned} $$ As in the previous proof of theorem \ref{4principal}, we can show that: $$ \begin{aligned} &\int^{t_{2}}_{t_{1}}\int_{K}\overline{T_{k}(\rho){\rm div}u}-\overline{T_{k}(\rho)}{\rm div}u\,dxdt\\ &\hspace{2cm}=\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}} \int_{K}\big((P(\rho_{n})+\frac{\kappa}{2}\rho_{n}^{2})T_{k}(\rho_{n})-\overline{(P(\rho)+\frac{\kappa}{2} \rho^{2})}\,\overline{T_{k}(\rho)}\big)dxdt \end{aligned} $$ for any compact $K\subset\mathbb{R}^{N}$.
\\ Using lemma \ref{4lemme2} we deduce that: $$\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}(P_{1}(\rho_{n})+\frac{\kappa}{2} \rho_{n}^{2})T_{k}(\rho_{n})-(\overline{P_{1}(\rho)}+\frac{\kappa}{2}\overline{\rho^{2}})\overline{T_{k}(\rho)}\,dxdt \geq0.$$ We then have: $$ \begin{aligned} dft[\rho_{n}\rightarrow\rho](t_{2})-dft[\rho_{n}\rightarrow\rho](t_{1})\leq\lim_{k\rightarrow+\infty} \biggl(\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}P_{2}&(\rho_{n})T_{k}(\rho_{n})\\ &-\overline{P_{2}(\rho)}\,\overline{T_{k}(\rho)}dxdt\biggr).\\ \end{aligned} $$ As the sequence $(\rho_{n})_{n\in\mathbb{N}}$ is bounded in $L^{\infty}(0,T,L^{2}(\mathbb{R}^{N}))$ and $P_{2}$ is a bounded function, we have: $$ \begin{aligned} &\lim_{k\rightarrow+\infty}\biggl(\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}} P_{2}(\rho_{n})T_{k}(\rho_{n})-\overline{P_{2}(\rho)}\overline{T_{k}(\rho)}dxdt\biggr)\\ &\hspace{5cm}=\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}P_{2}(\rho_{n})\rho_{n} -\overline{P_{2}(\rho)}\,\overline{\rho}dxdt. \end{aligned} $$ Since the function $P_{2}$ is twice continuously differentiable and compactly supported in $[0,+\infty)$, there exists $\Lambda>0$ large enough such that both $\rho\mapsto\Lambda\rho \log\rho-\rho P_{2}(\rho)$ and $\rho\mapsto\Lambda\rho \log\rho+\rho P_{2}(\rho)$ are convex functions of $\rho$; indeed, their second derivatives are positive.\\ As a consequence of the weak lower semi-continuity of convex functionals, we obtain: $$ \begin{aligned} &\lim_{n\rightarrow+\infty}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}P_{2}(\rho_{n})\rho_{n}-\overline{P_{2}(\rho)}\, \overline{\rho}\,dx\,dt\\ &\hspace{2cm}\leq\Lambda\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}(\overline{\rho\,\log\rho}-\rho\,\log\rho)dxdt+ \int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}(P_{2}(\rho)-\overline{P_{2}(\rho)})\rho\, dx\,dt.\\ \end{aligned} $$ Furthermore we have: $$ \begin{aligned} \int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}(P_{2}(\rho)-\overline{P_{2}(\rho)})\rho\, dxdt&\leq\int^{t_{2}}_{t_{1}}\int_{\rho\leq\rho_{r}}(P_{2}(\rho)-\overline{P_{2}(\rho)})\rho\,dxdt\\ &\leq\Lambda\int^{t_{2}}_{t_{1}}\int_{\rho\leq\rho_{r}}(\overline{\rho\,\log\rho}-\rho\,\log\rho)dxdt\\ & \leq\Lambda\rho_{r}\int^{t_{2}}_{t_{1}}\int_{\mathbb{R}^{N}}(\overline{\rho\,\log\rho}-\rho\,\log\rho)dxdt. \end{aligned} $$ The previous relations give: $$dft[\rho_{n}\rightarrow\rho](t_{2})\leq dft[\rho_{n}\rightarrow\rho](t_{1})+\omega\int^{t_{2}} _{t_{1}}dft[\rho_{n}\rightarrow\rho](t)\,dt.$$ Applying Gr\"onwall's lemma, we infer: $$dft[\rho_{n}\rightarrow\rho](t_{2})\leq dft[\rho_{n}\rightarrow\rho](t_{1})\exp(\omega(t_{2}-t_{1})).$$ We conclude that $dft[\rho_{n}\rightarrow\rho](t)=0$ for all $t$, because $\rho^{0}_{n}$ converges strongly in $L^{1}$ to $\rho_{0}$. \hfill{$\Box$} \section{Weak solutions with data close to a stable equilibrium} We consider in this section a situation which is rather different from the three cases considered in the preceding sections. This situation is relevant for practical applications and realistic flows, and it involves conditions at infinity different from those studied above.\\ We wish to investigate the system $(NSK)$ under hypotheses close to those of the theorem for strong solutions. We thus study the system with a density close to a stable equilibrium, with the goal of being able to choose initial data which avoid the vacuum.
We now look for a solution $(\rho,u)$ defined on $\mathbb{R}\times\mathbb{R}^{N}$ of the system $(NSK)$ (where $P(\rho)=a\rho^{\gamma}$) with $\rho\geq0$ on $\mathbb{R}\times\mathbb{R}^{N}$.\\ In addition we require $(\rho,u)$ to satisfy the following conditions at infinity: $$(\rho,u)(x,t)\rightarrow (\bar{\rho},0)\;\;\mbox{as}\;\;|x|\rightarrow +\infty,\;\;\mbox{for all}\;\;t>0,$$ where $\bar{\rho}>0$. \\ Such an analysis requires the use of Orlicz spaces. We define the Orlicz space $L^{q}_{p}(\mathbb{R}^{N})$ as follows: $$L^{q}_{p}(\mathbb{R}^{N})=\{f\in L^{1}_{loc}(\mathbb{R}^{N})/f 1_{\{|f|\leq\delta\}}\in L^{p}(\mathbb{R}^{N}),\;\; f 1_{\{|f|\geq\delta\}}\in L^{q}(\mathbb{R}^{N})\}$$ for some fixed $\delta>0$ (the space does not depend on the choice of $\delta$). For the following properties, we can also use the equivalent definition below. \begin{definition} We define $\Psi$ as a convex function on $[0,+\infty)$ which is equal (or equivalent) to $x^{p}$ for $x$ small and to $x^{q}$ for $x$ large, and set: $$L^{q}_{p}(\mathbb{R}^{N})=\{f\in L^{1}_{loc}(\mathbb{R}^{N})/\Psi(f)\in L^{1}(\mathbb{R}^{N})\}.$$ We can check that $L^{q}_{p}(\mathbb{R}^{N})$ is a linear space. We now endow $L^{q}_{p}(\mathbb{R}^{N})$ with a norm so that $L^{q}_{p}(\mathbb{R}^{N})$ is a separable Banach space:\\ $$\|f\|_{L^{q}_{p}(\mathbb{R}^{N})}=\inf\big\{t>0/\;\;\int_{\mathbb{R}^{N}}\Psi(\frac{f}{t})dx\leq 1\big\}.$$ \end{definition} We recall now some useful properties of the Orlicz spaces: \begin{corollaire} We have: \begin{enumerate} \item Embedding: $$L^{q}_{p}(\mathbb{R}^{N})\subset L^{q_{1}}_{p_{1}}(\mathbb{R}^{N})\;\;if\;\;1\leq q_{1}\leq q<+\infty,\;\;1\leq p\leq p_{1}<+\infty.$$ \item Topology: $f_{n}\rightarrow_{n} 0$ in $L^{q}_{p}(\mathbb{R}^{N})$ if and only if $\int_{\mathbb{R}^{N}}\Psi(f_{n})dx\rightarrow_{n} 0$; moreover: $$\int_{\mathbb{R}^{N}}\Psi(\frac{f}{\|f\|_{L^{q}_{p}(\mathbb{R}^{N})}})dx=1\;\;\;\mbox{if}\;\;f\ne 0.$$ \end{enumerate} \end{corollaire} We also recall the following properties: \begin{proposition} \label{4compositionorlicz} We have: \begin{itemize} \item Dual space: If $p>1$ then $(L^{q}_{p}(\mathbb{R}^{N}))^{'}=L^{q^{'}}_{p^{'}}(\mathbb{R}^{N})$, where $q^{'}=\frac{q}{q-1},\,p^{'}=\frac{p}{p-1}$. \item Let $F$ be a continuous function on $\mathbb{R}$ such that $F(0)=0$, $F$ is differentiable at $0$ and $F(t)|t|^{-\theta}\rightarrow\alpha\ne 0$ as $t\rightarrow +\infty$. Then, if $q\geq\theta$,\\ $$F(f)\in L^{\frac{q}{\theta}}_{p}(\mathbb{R}^{N})\;\;\mbox{if}\;\;f\in L^{q}_{p}(\mathbb{R}^{N}).$$ \end{itemize} \end{proposition} Our goal is now to obtain energy estimates. We have to face a new difficulty.
Indeed, $\rho,\,\rho|u|^{2},\,\rho^{\gamma}$ need not belong to $L^{1}$.\\ We first want to explain how it is possible to obtain natural a priori bounds which correspond to energy-like identities.\\ We write the following formal identities: \begin{equation} \begin{aligned} &\frac{1}{\gamma-1}\frac{d}{dt}\big(\rho^{\gamma}-\bar{\rho}^{\gamma} -\gamma\bar{\rho}^{\gamma-1}(\rho-\bar{\rho})\big)+{\rm div}[u\frac{\gamma}{\gamma-1}(\rho^{\gamma}-\bar{\rho}^{\gamma-1}\rho)]=u\cdot\nabla(\rho^{\gamma}),\\[2mm] &\rho\frac{d}{dt}\frac{|u|^{2}}{2}+\rho u\cdot\nabla\frac{|u|^{2}}{2}-\mu\Delta u\cdot u-\xi\nabla{\rm div}u\cdot u+au\cdot\nabla\rho^{\gamma}=\kappa\rho u\cdot\nabla(\phi*\rho-\rho).\\ \label{426} \end{aligned} \end{equation} We may then integrate the identities (\ref{426}) in space, and we get: \begin{equation} \begin{aligned} &\big(\int_{\mathbb{R}^{N}}\rho\frac{|u|^{2}}{2}+\frac{a}{\gamma-1}(\rho^{\gamma}+(\gamma-1)\bar{\rho}^{\gamma} -\gamma\bar{\rho}^{\gamma-1}\rho)+E_{global}[\rho-\bar{\rho}]dx\big)(t)\\ &+\int_{0}^{t}ds\int_{\mathbb{R}^{N}}2\mu|D u|^{2}+2\xi|{\rm div}u|^{2}dx\leq\int_{\mathbb{R}^{N}}\rho_{0}\frac{|u_{0}|^{2}}{2}+\frac{a}{\gamma-1}(\rho_{0}^{\gamma}+(\gamma-1)\bar{\rho}^{\gamma} -\gamma\bar{\rho}^{\gamma-1}\rho_{0})\\ &\hspace{11cm}+E_{global}[\rho_{0}-\bar{\rho}]dx.\\ \end{aligned} \end{equation} \begin{notation} In the sequel we will write: $$j_{\gamma}(\rho)=\rho^{\gamma}+(\gamma-1)\bar{\rho}^{\gamma} -\gamma\bar{\rho}^{\gamma-1}\rho.$$ \end{notation} We can now recall a theorem (see \cite{4L2}) on Orlicz spaces concerning this quantity: \begin{theorem} $j_{\gamma}(\rho)\in L^{1}(\mathbb{R}^{N})$ if and only if $\rho-\bar{\rho}\in L^{\gamma}_{2}.$ \end{theorem} By this theorem and our energy estimate, we get that $\rho-\bar{\rho}\in L^{\infty}(0,T;L^{\gamma}_{2}(\mathbb{R}^{N}))$ for all $T\in(0,+\infty)$.\\ Moreover we have: $$E_{global}[\rho(t,\cdot)-\bar{\rho}](x)=\frac{\kappa}{4}\big((\rho-\bar{\rho})^{2}+\phi*(\rho-\bar{\rho})^{2}- 2(\rho-\bar{\rho})\,(\phi*(\rho-\bar{\rho}))\big).$$ Then, using the fact that $\rho-\bar{\rho}\in L^{\infty}(0,T;L^{\gamma}_{2}(\mathbb{R}^{N}))$ with $L^{\gamma}_{2}(\mathbb{R}^{N})\hookrightarrow L^{2}(\mathbb{R}^{N}) +L^{\gamma}(\mathbb{R}^{N})$, and interpolation on $\nabla\phi$, we get that $\rho-\bar{\rho}\in L^{\infty}(0,T;L^{2}(\mathbb{R}^{N}))$.\\ We may now turn to our compactness result. First of all, we consider sequences of solutions $(\rho_{n},u_{n})$ of the system corresponding to initial conditions $(\rho^{0}_{n},u^{0}_{n})$.\\ In view of the above energy inequalities, we assume that $j_{\gamma}(\rho^{0}_{n})$, $E_{global}[\rho^{0}_{n}-\bar{\rho}]$ and $\rho^{0}_{n}|u^{0}_{n}|^{2}$ are bounded in $L^{1}(\mathbb{R}^{N})$, and that $\rho^{0}_{n}-\bar{\rho}$ converges weakly in $L^{\gamma}_{2}(\mathbb{R}^{N})$ to some $\rho_{0}-\bar{\rho}$.\\ \\ We now assume that: $$j_{\gamma}(\rho_{n}),\,E_{global}[\rho_{n}-\bar{\rho}],\,\rho_{n}|u_{n}|^{2}\;\;\mbox{are bounded in}\;\;L^{\infty}(0,T,L^{1}(\mathbb{R}^{N})).$$ Moreover we have, for all $T\in(0,+\infty)$ and for all compact sets $K\subset\mathbb{R}^{N}$: $$\rho_{n}-\bar{\rho}\in L^{\infty}(L^{2}(\mathbb{R}^{N}))\;\;\;\mbox{and}\;\;\;\rho_{n}\;\;\mbox{is bounded in} \;\;L^{q}((0,T)\times K),$$ for some $q>s$.
$$ \begin{aligned} &Du_{n}\;\;\mbox{is bounded in}\;\;L^{2}(\mathbb{R}^{N}\times (0,T)),\\ &u_{n}\;\;\mbox{is bounded in}\;\;L^{2}(0,T,H^{1}(B_{R}))\;\;\mbox{for all}\;\;R,T\in (0,+\infty).\\ \end{aligned} $$ \\ Extracting subsequences if necessary, we may assume that $\rho_{n},\,u_{n}$ converge weakly, respectively in $L^{2}((0,T)\times B_{R})$ and $L^{2}(0,T;H^{1}(B_{R}))$, to $\rho,\,u$ for all $R,T\in(0,+\infty)$. We also extract subsequences for which $\rho_{n}u_{n},\,\rho_{n}u_{n}\otimes u_{n}$ converge weakly as previously. \begin{rem} We notice that the conditions at infinity are implicitly contained in the fact that $(\rho_{n}-\bar{\rho})^{2}$ and $\rho_{n}|u_{n}|^{2}\in L^{1}(\mathbb{R}^{N})$. \end{rem} We then have the following theorem. \begin{theorem} Let $\gamma\geq1$. We assume that $\rho^{0}_{n}$ converges in $L^{1}(B_{R})$ (for all $R\in(0,+\infty)$) to $\rho_{0}$. Then $(\rho_{n},u_{n})_{n\in\mathbb{N}}$ converges in the sense of distributions to $(\rho,u)$, a solution of $(NSK)$.\\ Moreover we have for all $R,T\in(0,+\infty)$: $$ \begin{aligned} &\rho_{n}\rightarrow_{n}\rho\;\;\mbox{in}\;\;C([0,T],L^{p}(B_{R}))\cap L^{s_{1}}(B_{R}\times(0,T))\:\;\mbox{for all}\;\;1\leq p<s,\,1\leq s_{1}<q,\\ \end{aligned} $$ with $q=s+\frac{4}{N}-1$. \end{theorem} {\bf Proof:} \\ As in theorem \ref{4principal}, we want to test the strong convergence of $\rho_{n}$ against concave functions $B$. Since the proof is purely local, we have again for small enough $\varepsilon>0$: \begin{equation} \begin{aligned} &(\rho_{n})^{\varepsilon}\big((\mu+\xi){\rm div}u_{n}-a(\rho_{n})^{\gamma}-\frac{\kappa}{2}\rho_{n}^{2}\big)\rightharpoonup_{n}\overline{(\rho)^{\varepsilon}} \big((\mu+\xi){\rm div}u-a\overline{\rho^{\gamma}}-\frac{\kappa}{2}\overline{\rho^{2}}\big)\;\;\;\\ &\hspace{11cm}\mbox{in}\;\;{\cal D}^{'}\big((0,\infty)\times\mathbb{R}^{N}\big),\\[2mm] &\frac{d}{dt}(\overline{\rho^{\varepsilon}})+{\rm div}(u\overline{\rho^{\varepsilon}})\geq (1-\varepsilon)({\rm div}u)\overline{\rho^{\varepsilon}}\;\;\;\mbox{in}\;\;{\cal D}^{'}\big((0,\infty)\times\mathbb{R}^{N}\big).\\ \end{aligned} \label{4H40} \end{equation} Next, since $(\overline{\rho^{\varepsilon}})^{\frac{1}{\varepsilon}}\in L^{2}(B_{R}\times(0,T))$ for all $R,T\in(0,+\infty)$, as in theorem \ref{4principal}, using a DiPerna-Lions type result on renormalized solutions, we get: \begin{equation} \label{4H41} \frac{d}{dt}(\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}})+{\rm div}(u\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}})\geq 0\;\;\;\mbox{in}\;\; {\cal D}^{'}\big((0,\infty)\times\mathbb{R}^{N}\big), \end{equation} while we have $\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}}\leq\rho$ a.e.\ in $\mathbb{R}^{N}\times(0,+\infty)$ and $\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}}_{/t=0}=\rho_{/t=0}$ in $\mathbb{R}^{N}$.\\ Now, subtracting (\ref{4H41}) from the mass equation and setting $f=\rho-\overline{\rho^{\varepsilon}}^{\frac{1}{\varepsilon}}$, we have: \begin{equation} \frac{d}{dt}(f)+{\rm div}(uf)\leq0,\;f\geq0\;\;\mbox{a.e},\;f_{/t=0}=0\;\;\mbox{in}\;\; \mathbb{R}^{N}. \label{4H42} \end{equation} We now want to show from (\ref{4H42}) that $f=0$, by integrating (\ref{4H42}) and using the fact that $f\geq0$ to conclude.
The difference with the proof of theorem \ref{4principal} is the justification of the integration by parts, as we work in a different energy space.\\ We need a cut-off function. We introduce $\varphi\in C^{\infty}_{0}(\mathbb{R}^{N}),\,0\leq\varphi\leq1,\,\mbox{supp}\,\varphi\subset B_{2},\,\varphi=1\;\mbox{on}\;B_{1}$, and we set $\varphi_{R}=\varphi(\frac{x}{R})$ for $R\geq1$. Multiplying (\ref{4H42}) by $\varphi_{R}(x)$, we obtain: \begin{equation} \begin{aligned} &\frac{d}{dt}\int_{\mathbb{R}^{N}}f\varphi_{R}(x)dx=\int_{\mathbb{R}^{N}}\frac{1}{R}fu\cdot\nabla\varphi(\frac{x}{R})dx.\\ \end{aligned} \label{4H43} \end{equation} Next, if $T>0$ is fixed, we see that $\mbox{supp}\,\nabla\varphi(\frac{\cdot}{R})\subset\{R\leq|x|\leq2R\}$; therefore, for $R$ large enough we have: \begin{equation} \begin{aligned} \frac{d}{dt}\int_{\mathbb{R}^{N}}f\varphi_{R}(x)dx&=\int_{\mathbb{R}^{N}}\frac{1}{R}fu\cdot\nabla\varphi(\frac{x}{R})dx\\ &\leq\frac{C}{R}\int_{\mathbb{R}^{N}}f|u|1_{(R\leq|x|\leq2R)}dx,\;\;\mbox{for}\;t\in(0,T).\\ \end{aligned} \label{4H44} \end{equation} To conclude that $f=0$, we only have to prove that: \begin{equation} \frac{1}{R}\int_{\mathbb{R}^{N}}f|u|1_{(R\leq|x|\leq2R)}dx\rightarrow0\;\;\mbox{as}\;R\rightarrow+\infty. \label{4H45} \end{equation} We now use the fact that $f\in L^{\infty}(0,T,L^{2}(\mathbb{R}^{N}))$ and $f|u|^{2}\in L^{\infty}(0,T,L^{1}(\mathbb{R}^{N}))$ for all $T\in(0,+\infty)$ to control (\ref{4H45}). The second fact is obvious since $0\leq f\leq\rho$ and $\rho|u|^{2}\in L^{\infty}(0,T;L^{1}(\mathbb{R}^{N}))$.\\ In order to prove the first claim, we only have to show that $\overline{(\rho)^{\varepsilon}}^{\frac{1}{\varepsilon}}-\bar{\rho}\in L^{\infty}(0,T,L^{2}(\mathbb{R}^{N}))$. We rewrite $(\rho_{n})^{\varepsilon}-(\bar{\rho})^{\varepsilon}=(\bar{\rho}+(\rho_{n}-\bar{\rho}))^{\varepsilon}- (\bar{\rho})^{\varepsilon}$, which is bounded in $L^{\infty}(0,T,L^{2}(\mathbb{R}^{N}))$ by proposition \ref{4compositionorlicz} with $F(x)=(\bar{\rho}+x)^{\varepsilon}-(\bar{\rho})^{\varepsilon}$. So we have $\sqrt{f}\in L^{\infty}(L^{4}(\mathbb{R}^{N}))$ and we get: $$ \begin{aligned} \frac{1}{R}\int^{T}_{0}dt\int_{\mathbb{R}^{N}}f|u|&1_{(R\leq|x| \leq2R)}dx\leq \frac{C_{0}}{R}\,\mbox{meas} (C(0,R,2R))^{\frac{1}{4}}.\\ \end{aligned} $$ We recall that: $$\mbox{meas}(C(0,R,2R))\thicksim_{R\rightarrow+\infty}C(N)R^{N}.$$ Then we get: $$\frac{d}{dt}\int_{\mathbb{R}^{N}}f\varphi_{R}(x)dx\rightarrow_{R\rightarrow +\infty}0,$$ and we conclude as in the proof of theorem \ref{4principal}.\\ At this stage, it only remains to show that, for instance, $\rho_{n}$ converges to $\rho$ in $C([0,T],L^{1}(B_{R}))$ for all $R,T\in(0,+\infty)$. In order to do so, we just have to localize the corresponding argument in the proof of theorem \ref{4principal}.\\ Therefore, for $R,T\in (0,+\infty)$ fixed, we choose $\varphi\in C_{0}^{\infty}(\mathbb{R}^{N})$ such that $\varphi=1$ on $B_{R}$ and $0\leq\varphi$ on $\mathbb{R}^{N}$.
Then, we observe that we have: $$ \begin{aligned} &\frac{\partial}{\partial t}(\varphi^{2}\rho_{n})+{\rm div}(u_{n}(\varphi^{2}\rho_{n}))=\rho_{n}u_{n}\cdot\nabla\varphi^{2},\;\frac{\partial}{\partial t}(\varphi^{2}\rho)+{\rm div}(u(\varphi^{2}\rho))=\rho u\cdot\nabla\varphi^{2},\\ &\frac{\partial}{\partial t}(\varphi\sqrt{\rho_{n}})+{\rm div}(u_{n}(\varphi\sqrt{\rho_{n}}))=\frac{1}{2}({\rm div}u_{n})\varphi\sqrt{\rho_{n}}+\sqrt{\rho_{n}}u_{n}\cdot\nabla\varphi,\\ &\frac{\partial}{\partial t}(\varphi\sqrt{\rho})+{\rm div}(u(\varphi\sqrt{\rho}))=\frac{1}{2}({\rm div}u)\varphi\sqrt{\rho}+\sqrt{\rho}u\cdot\nabla\varphi.\\ \end{aligned} $$ From these equations, we deduce, as in the proof of the previous theorem, that $\varphi^{2}\rho\in C([0,+\infty),L^{1}(\mathbb{R}^{N}))$, $\varphi\sqrt{\rho}\in C([0,+\infty),L^{2}(\mathbb{R}^{N}))$ and that $\varphi\sqrt{\rho_{n}}$ converges weakly in $L^{2}(\mathbb{R}^{N})$, uniformly in $t\in[0,T]$. Therefore, in order to conclude, we just have to show that we have:\\ $$\int_{\mathbb{R}^{N}}\varphi^{2}\rho_{n}(t_{n})dx\rightarrow\int_{\mathbb{R}^{N}}\varphi^{2}\rho(\bar{t})dx$$ whenever $t_{n}\in[0,T]$, $t_{n}\rightarrow_{n}\bar{t}$, and this is straightforward since we have, in view of the above equations:\\ $$ \begin{aligned} \int_{\mathbb{R}^{N}}\varphi^{2}\rho_{n}(t_{n})dx&=\int_{\mathbb{R}^{N}}\varphi^{2}(\rho_{0})_{n}dx+\int^{t_{n}}_{0}ds\int_{\mathbb{R}^{N}} \rho_{n}u_{n}\cdot\nabla\varphi^{2}\,dx\\ &\longrightarrow_{n}\int_{\mathbb{R}^{N}}\varphi^{2}\rho_{0}dx+\int^{\bar{t}}_{0}ds\int_{\mathbb{R}^{N}} \rho u\cdot\nabla\varphi^{2}\,dx=\int_{\mathbb{R}^{N}}\varphi^{2}\rho(\bar{t})dx.\\ \end{aligned} $$ \hfill{$\Box$}
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs/teaser} \caption{FLOPs-Accuracy spectrum of \textbf{single} US-MobileNet v1 model, compared with \textbf{four individual} MobileNet v1 models.} \label{figs:teaser} \end{figure} The ability to run neural network models within latency budget is of paramount importance for applications on mobile phones, augmented reality glasses, self-driving cars, security cameras and many others~\cite{yang2018efficient, huang2018data, ren2015faster}. Among these applications, many are required to deploy trained models across different devices or hardware versions~\cite{yu2018slimmable, huang2017multi, liu2017dynamic}. However, a single trained network cannot achieve optimal accuracy-efficiency trade-offs across different devices (\eg, face recognition model running on diverse mobile phones). To address the problem, recently slimmable networks~\cite{yu2018slimmable} were introduced that can switch among different widths at runtime, permitting instant and adaptive accuracy-efficiency trade-offs. The width can be chosen from a predefined widths set, for example {\([0.25, 0.5, 0.75, 1.0]\times\)}, where {\([\cdot]\times\)} denotes available widths, and {\(0.25 \times\)} represents that the width in all layers is scaled by \(0.25\) of the full model. To train a slimmable network, switchable batch normalization~\cite{yu2018slimmable} is proposed that privatizes batch normalization~\cite{ioffe2015batch} layers for each sub-network. A slimmable network has accuracy similar to that of individually trained ones with the same width~\cite{yu2018slimmable}. Driven by slimmable networks, a further question arises: \textit{can a single neural network run at arbitrary width}? The question motivates us to rethink the basic form of feature aggregation. In deep neural networks, the value of a single output neuron is an aggregation of all input neurons weighted by learnable coefficients \(y = \sum_{i=1}^{n} w_{i} x_{i}\), where \(x\) is input neuron, \(y\) is output neuron, \(w\) is learnable coefficient and \(n\) is number of input channels. This formulation indicates that each input channel or group of channels can be viewed as a \textit{residual component}~\cite{he2016deep} to an output neuron. Thus, a wider network should have no worse performance than its slim one (the accuracy of slim one can always be achieved by learning new connections to zeros). In other words, if we consider a single layer, the residual error between full aggregation and partial aggregation decreases and is bounded: \begin{equation} \label{eqs:intro} |y^{n} - y^{k+1}| \leq |y^{n} - y^{k}| \leq |y^{n} - y^{k_0}|, \end{equation} where \(y^{k}\) summarizes the first \(k\) channels \(y^{k} = \sum_{i=1}^{k} w_{i} x_{i}\), \(\forall k \in [k_0, n)\), \(k_0\) is a constant hyper-parameter (for example, \(k_0 = \lceil0.25n\rceil\)). The bounded inequality\footnote{The analysis is based on a single hidden layer. Future research on theoretical analysis of deep neural networks with nonlinear activation may fully reveal why or why not universally slimmable networks exist.} suggests that a slimmable network~\cite{yu2018slimmable} executable at a discrete widths set can potentially run at any width in between (if properly trained), since the residual error decreases by the increase of width and is bounded. Moreover, the inequality conceptually applies to any deep neural network, regardless of what normalization layers~\cite{ioffe2015batch, salimans2016weight} are used. 
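To make the intuition behind Equation~\ref{eqs:intro} concrete, the following minimal NumPy sketch (our own illustration with arbitrary sizes and random values, not part of any released implementation) computes the partial aggregations \(y^{k}\) of a single output neuron and the residual errors \(\delta_{k}=|y^{n}-y^{k}|\). For a trained network each added channel acts as a learned residual component, so these residuals shrink as the width \(k\) grows and vanish at \(k=n\); for the random weights below, only the endpoint \(\delta_{n}=0\) is guaranteed. \begin{verbatim} import numpy as np # One output neuron aggregating n input channels, # evaluated at every width k = k0 .. n. n = 8 rng = np.random.default_rng(0) x = rng.standard_normal(n) # input neurons x_1 .. x_n w = rng.standard_normal(n) # learnable coefficients w_1 .. w_n y_full = np.dot(w, x) # y^n: full aggregation k0 = int(np.ceil(0.25 * n)) # smallest width, e.g. 0.25x for k in range(k0, n + 1): y_k = np.dot(w[:k], x[:k]) # y^k: partial aggregation print(k, abs(y_full - y_k)) # residual error delta_k \end{verbatim}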
However, as suggested in~\cite{yu2018slimmable}, batch normalization (BN)~\cite{ioffe2015batch} requires special treatment because of the inconsistency between training and testing. In this work, we present universally slimmable networks (US-Nets) that can run at any width in a wide range. Three fundamental challenges of training US-Nets are addressed. First, how to deal with neural networks with batch normalization? Second, how to train US-Nets efficiently? Third, compared with training individual networks, what else can we explore in US-Nets to improve overall performance? Batch normalization~\cite{ioffe2015batch} has been one of the most important components in deep learning. During training, it normalizes feature with mean and variance of current mini-batch, while in inference, moving averaged statistics of training are used instead. This inconsistency leads to failure of training slimmable networks, as shown in~\cite{yu2018slimmable}. The switchable batch normalization~\cite{yu2018slimmable} (we address the version of shared scale and bias by default, the version of private scale and bias will be discussed in Section~\ref{secs:discuss}) is then introduced. However, it is not practical for training US-Nets for two reasons. First, accumulating independent BN statistics of all sub-networks in a US-Net during training is computationally intensive and inefficient. Second, if in each iteration we only update some sampled sub-networks, then these BN statistics are insufficiently accumulated thus inaccurate, leading to much worse accuracy in our experiments. To properly address the problem, we adapt the batch normalization with a simple modification. The modification is to calculate BN statistics of all widths after training. The weights of US-Nets are fixed after training, thus all BN statistics can be computed in parallel on cluster servers. More importantly, we find that a \textit{randomly sampled subset} of training images, as few as \num{1} mini-batch (\num{1024} images), already produces accurate estimation. Thus calculating BN post-statistics can be very fast. We note that to be more general, we intentionally avoid modifying the formulation of BN or proposing new normalization. Next we propose an improved training algorithm for US-Nets motivated by the bounded inequality in Equation~\ref{eqs:intro}. To train a US-Net, a natural solution is to accumulate or average losses sampled from different widths. For example, in each training iteration we randomly sample \(n\) widths in the range of { \([0.25, 1.0]\times\)}. Taking a step further, we should notice that in a US-Net, performances at all widths are bounded by performance of the model at smallest width (\eg, {\(0.25\times\)}) and largest width (\eg, { \(1.0\times\)}). In other words, optimizing performance lower bound and upper bound can implicitly optimize the model at all widths. Thus, instead of sampling \(n\) widths randomly, in each training iteration we train the model at smallest width, largest width and (\(n\)-\(2\)) randomly sampled widths. We employ this rule (named \textit{the sandwich rule}) to train US-Nets and show better convergence behavior and overall performance. Further we propose \textit{inplace distillation} that transfers knowledge inside a single US-Net from full-network to sub-networks \textit{inplace} in each training iteration. 
The idea is motivated by two-step knowledge distilling~\cite{hinton2015distilling}, where a large model is trained first and then its learned knowledge is transferred to a small model by training with predicted soft-targets. In US-Nets, by \textit{the sandwich rule} we train the model at largest width, smallest width and other randomly sampled widths all together in each iteration. Remarkably, this training scheme naturally supports inplace knowledge transferring: we can directly use the predicted label of the model at the largest width as the training label for other widths, while for the largest width we use ground truth. It can be implemented inplace in training without additional computation and memory cost. Importantly, the proposed \textit{inplace distillation} is general and we find it works well not only for image classification, but also on tasks of image super-resolution and deep reinforcement learning. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{figs/rethink_neuron} \caption{Illustration of a network executing at different widths. We specifically consider an output neuron \(y_1\) in a layer (right, zoomed).} \label{figs:rethink_neuron} \end{figure*} We apply the proposed methods to train universally slimmable networks on representative tasks with representative networks (both with and without BN, and both residual and non-residual networks). We show that trained US-Nets perform similarly to, or even better than, individually trained models. Extensive ablation studies on \textit{the sandwich rule} and \textit{inplace distillation} demonstrate the effectiveness of our proposed methods. Our contributions are summarized as follows: \begin{enumerate} \setlength\itemsep{0.05em} \item For the first time we are able to train a single neural network executable at arbitrary width, using a simple and general approach. \item We further propose two improved training techniques in the context of US-Nets to enhance the training process and boost testing accuracy. \item We present experiments and ablation studies on image classification, image super-resolution and deep reinforcement learning. \item We further intensively study the US-Nets with regard to (1) width lower bound \(k_0\), (2) width divisor \(d\), (3) number of sampled widths per training iteration \(n\), and (4) size of subset for BN post-statistics \(s\). \item We further show that our method can also be applied to train nonuniform US-Nets where each layer can adjust its own width ratio, instead of a global width ratio uniformly applied on all layers. \item Our discovery opens up possibilities in many related fields, for example, network comparison in terms of FLOPs-Accuracy spectrum (Figure~\ref{figs:teaser}), and one-shot architecture search for number of channels~\cite{yu2019network}. \end{enumerate} \section{Related Work} \textbf{Slimmable Networks.} Yu \etal~\cite{yu2018slimmable} present the initial approach to train a single neural network executable at different widths, permitting instant and adaptive accuracy-efficiency trade-offs at runtime. The width can be chosen from a predefined widths set. The major obstacle of training slimmable networks is addressed: accumulating different numbers of channels results in different feature mean and variance. This discrepancy across different sub-networks leads to inaccurate statistics of shared Batch Normalization layers~\cite{ioffe2015batch}.
Switchable batch normalization is proposed that employs independent batch normalization for different sub-networks in a slimmable network. On tasks of image recognition (\ie, classification, detection and segmentation), slimmable networks achieve accuracy similar to that of individually trained models~\cite{yu2018slimmable}. \textbf{Knowledge Distilling.} The idea of knowledge distilling~\cite{hinton2015distilling} is to transfer the learned knowledge from a pretrained network to a new one by training it with predicted features, soft-targets or both. It has many applications in computer vision, network compression, reinforcement learning and sequence learning problems~\cite{bagherinezhad2018label, chen2015net2net, kim2016sequence, parisotto2015actor, romero2014fitnets}. FitNet~\cite{romero2014fitnets} proposes to train a thinner network using both outputs and intermediate representations learned by the teacher network as hints. Net2Net~\cite{chen2015net2net} proposes to transfer the knowledge from a pretrained network to a new deeper or wider one for accelerating training. Actor-Mimic~\cite{parisotto2015actor} trains a single policy network to behave in multiple tasks with the guidance of many teacher networks. Knowledge distillation is also effectively applied to word-level prediction for neural machine translation~\cite{kim2016sequence}. \section{Universally Slimmable Networks} \subsection{Rethinking Feature Aggregation} \label{secs:rethink} Deep neural networks are composed of layers where each layer is made of neurons. As the fundamental element of deep learning, a neuron performs a weighted sum of all input neurons as its value, propagating layer by layer to make final predictions. An example is shown in Figure~\ref{figs:rethink_neuron}. The output neuron \(y\) is computed as: \begin{equation} y = \sum_{i=1}^{n} w_{i} x_{i}, \end{equation} where \(n\) is the number of input neurons (or channels in convolutional networks), \(x = \{x_1, x_2, ..., x_n\}\) is input neurons, \(w = \{w_1, w_2, ..., w_n\}\) is the learnable coefficient, \(y\) is a single output neuron. This process is also known as \textit{feature aggregation}: each input neuron is responsible for detecting a particular feature, and the output neuron aggregates all input features with learnable transformations. The number of channels in a network is usually a manually picked hyper-parameter (\eg, \num{128}, \num{256}, ..., \num{2048}). It plays a significant role in the accuracy and efficiency of deep models: wider networks normally have better accuracy with sacrifice of runtime efficiency. To provide this flexibility, many architecture engineering works~\cite{howard2017mobilenets, sandler2018inverted, zhang2017shufflenet} individually train their proposed networks with different width multipliers: a global hyper-parameter to slim a network uniformly at each layer. We aim to train a single network that can directly run at arbitrary width. It motivates us to rethink the basic form of feature aggregation in deep neural networks. As shown in Figure~\ref{figs:rethink_neuron}, feature aggregation can be explicitly interpreted in the framework of \textit{channel-wise residual learning}~\cite{he2016deep}, where each input channel or group of channels can be viewed as a \textit{residual component}~\cite{he2016deep} for the output neuron. Thus, a wider network should have no worse performance than its slim one (the accuracy of slim one can always be achieved by learning new connections to zeros).
We aim to train a single network that can directly run at arbitrary width, which motivates us to rethink the basic form of feature aggregation in deep neural networks. As shown in Figure~\ref{figs:rethink_neuron}, feature aggregation can be explicitly interpreted in the framework of \textit{channel-wise residual learning}~\cite{he2016deep}, where each input channel or group of channels can be viewed as a \textit{residual component}~\cite{he2016deep} for the output neuron. Thus, a wider network should have no worse performance than its slimmer counterpart (the accuracy of the slim one can always be recovered by learning the new connections to be zero). In other words, the residual error \(\delta\) between the fully aggregated feature \(y^n\) and the partially aggregated feature \(y^k\) decreases and is bounded: \begin{equation} \label{eqs:rethink} 0 \leq \delta_{k+1} \leq \delta_{k} \leq \delta_{k_0}, \quad \delta_k = |y^{n} - y^{k}|, \end{equation} where \(y^{k}\) summarizes the first \(k\) channels, \(y^{k} = \sum_{i=1}^{k} w_{i} x_{i}\), \(\forall k \in [k_0, n)\), and \(k_0\) is a constant hyper-parameter (for example, \(k_0 = \lceil0.25n\rceil\)). The bounded inequality in Equation~\ref{eqs:rethink} suggests two conjectures: (1) A slimmable network~\cite{yu2018slimmable} executable at a discrete set of widths can potentially run at any width in between (if properly trained). In other words, a single neural network may execute at any width in a wide range for \(k\) from \(k_0\) to \(n\), since the residual error of each feature is bounded by \(\delta_{k_0}\) and decreases as the width \(k\) increases. (2) Conceptually, the bounded inequality applies to any deep neural network, regardless of which normalization layers (\eg, batch normalization~\cite{ioffe2015batch} or weight normalization~\cite{salimans2016weight}) are used. Thus, in the following sections we mainly explore how to train a single neural network executable at arbitrary width. We name these networks \textit{universally slimmable networks}, or simply \textit{US-Nets}.
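As a toy numerical illustration of Equation~\ref{eqs:rethink} (ours; it assumes, purely for illustration, channel coefficients whose magnitudes decay with the channel index), the residual error \(\delta_k\) of a single neuron is controlled by a tail bound that shrinks monotonically as more channels are aggregated:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = rng.normal(size=n)                        # input features
w = rng.normal(size=n) / np.arange(1, n + 1)  # decaying channel importance (assumption)

y_full = w @ x                                # fully aggregated feature y^n
for k in (16, 64, 128, 192, 255):
    delta_k = abs(y_full - w[:k] @ x[:k])     # residual error delta_k
    tail_bound = np.sum(np.abs(w[k:] * x[k:]))  # monotone upper bound on delta_k
    print(f"k={k:3d}  delta={delta_k:.4f}  bound={tail_bound:.4f}")
\end{verbatim}
With arbitrary random coefficients \(\delta_k\) need not decrease monotonically; the inequality concerns what a trained wider network can achieve, since it can always learn the extra connections to be zero.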
\subsection{Post-Statistics of Batch Normalization} \label{secs:post_statistics} However, as suggested in~\cite{yu2018slimmable}, batch normalization~\cite{ioffe2015batch} requires special treatment because of the inconsistency between training and testing. During training, features in each layer are normalized with the mean and variance of the current mini-batch feature values \(x_B\): \begin{equation} \hat x_B = \gamma \frac{x_B - E_B[x_B]}{\sqrt{Var_B[x_B]+\epsilon}} + \beta, \end{equation} where \(\epsilon\) is a small value (e.g.\ \(10^{-5}\)) to avoid division by zero, and \(\gamma\) and \(\beta\) are the learnable scale and bias. The feature mean and variance are then updated into global statistics as moving averages: \begin{equation} \label{eqs:moving_averages} \begin{split} \mu_t &= m \mu_{t-1} + (1-m) E_B[x_B],\\ \sigma^2_{t} &= m \sigma^2_{t-1} + (1-m) Var_B[x_B], \end{split} \end{equation} where \(m\) is the momentum (\eg, \(0.9\)) and \(t\) is the index of the training iteration. We denote \(\mu = \mu_T, \sigma^2 = \sigma^2_T\), assuming the network is trained for \(T\) iterations in total. During inference, these global statistics are used instead: \begin{equation} \label{eqs:bn_test} \hat x_{\mathit{test}} = \gamma^* \frac{x_{\mathit{test}} - \mu}{\sqrt{\sigma^2+\epsilon}} + \beta^*, \end{equation} where \(\gamma^*\) and \(\beta^*\) are the optimized scale and bias. Note that after training, Equation~\ref{eqs:bn_test} can be reformulated as a simple linear transformation: \begin{equation} \label{eqs:merge} \hat x_{\mathit{test}} = \gamma'x_{\mathit{test}} + \beta', \gamma' = \frac{\gamma^*}{\sqrt{\sigma^2 + \epsilon}}, \beta' = \beta^* - \gamma'\mu, \end{equation} and usually \(\gamma'\) and \(\beta'\) can be further fused into the previous convolution layer. In slimmable networks, accumulating different numbers of channels results in different feature means and variances, which further leads to inaccurate statistics in the shared BN~\cite{yu2018slimmable}. Yu \etal introduced switchable batch normalization, which privatizes \(\gamma\), \(\beta\), \(\mu\), \(\sigma^2\) of BN for each sub-network. Although the parameters \(\gamma\) and \(\beta\) can be merged after training (Equation~\ref{eqs:merge}), slimmable networks with shared \(\gamma\) and \(\beta\) achieve close performance~\cite{yu2018slimmable}. For universally slimmable networks, however, switchable batch normalization~\cite{yu2018slimmable} is not practical, for two reasons. First, accumulating independent BN statistics of all sub-networks of a US-Net during training is computationally intensive and inefficient. For example, assuming an \(n\mathit{-channel}\) layer can adjust its width from \(\lceil0.25n\rceil\) to \(n\), there are (\(n - \lceil0.25n\rceil + 1\)) sub-networks to evaluate in total and \(\lceil0.25n\rceil + (\lceil0.25n\rceil+1) + ... + n = \mathcal{O}(n^2)\) variables of BN statistics to update in each training iteration. Second, if in each iteration we only update some sampled sub-networks, then these BN statistics are insufficiently accumulated and thus inaccurate, leading to much worse accuracy in our experiments. To this end, we adapt batch normalization with a simple modification that properly addresses the problem: we calculate the BN statistics of all widths after training. Since the trainable parameters of the US-Net are then fixed, the BN statistics of all widths can be computed in parallel on cluster servers. After training, we can calculate BN statistics over training samples, either as moving averages in Equation~\ref{eqs:moving_averages} or as exact averages as follows: \begin{equation} \label{eqs:exact_averages} \begin{split} m &= (t-1)/t,\\ \mu_t &= m \mu_{t-1} + (1-m) E_B[x_B],\\ \sigma^2_{t} &= m \sigma^2_{t-1} + (1-m) Var_B[x_B]. \end{split} \end{equation} Our experiments show that exact averages have slightly better performance than moving averages. In practice, we find it is not necessary to accumulate BN statistics over all training samples: a randomly sampled subset (\eg, \(1k\) images) already produces accurate estimations. With this option, calculating the post-statistics of BN can be extremely fast (by default we calculate over all training samples). In experiments, we will compare the accuracy for different sample sizes. Moreover, in research or development, it is important to track the validation accuracy of a model as it trains. Although this is not directly supported by post-statistics of BN, we can use a simple engineering trick in training US-Nets: always tracking the BN statistics of the models at the largest and smallest widths during training.
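As an illustration, the sketch below (ours; the helper name \texttt{calibrate\_bn} and the batch budget are our choices) recomputes the BN statistics of one active width as exact cumulative averages over a sampled subset. It relies on PyTorch's documented behaviour that \texttt{momentum=None} switches \texttt{BatchNorm2d} to a cumulative moving average, which matches Equation~\ref{eqs:exact_averages}; the model is assumed to have been switched to the desired width beforehand.
\begin{verbatim}
import torch

@torch.no_grad()
def calibrate_bn(model, loader, num_batches=50):
    """Recompute BN statistics at the current width as exact (cumulative)
    averages over a sampled subset of training data."""
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None      # PyTorch: None => cumulative moving average
    model.train()                  # BN updates running stats in train mode
    for t, (images, _) in enumerate(loader):
        model(images)              # forward pass only; weights stay fixed
        if t + 1 >= num_batches:
            break
\end{verbatim}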
\section{Improved Training Techniques} In this section, we describe our training algorithm for US-Nets from bottom to top. We first introduce the motivations and details of \textit{the sandwich rule} and \textit{inplace distillation}, and then present the overall algorithm for training universally slimmable networks. \subsection{The Sandwich Rule} \label{secs:sandwich} To train a US-Net, a natural solution is to accumulate or average losses sampled from different sub-networks. For example, in each training iteration we randomly sample \(n\) widths in the range of {\([0.25, 1.0]\times\)} and apply gradients back-propagated from the accumulated loss. Taking a step further, the bounded inequality in Equation~\ref{eqs:rethink} tells us that in a US-Net, the performance at every width is bounded by the performance of the model at the smallest width {\(0.25\times\)} and at the largest width {\(1.0\times\)}. In other words, optimizing the performance lower bound and upper bound can implicitly optimize all sub-networks in a US-Net. Thus, we propose \textit{the sandwich rule}: in each iteration we train the model at the smallest width, the largest width and (\(n-2\)) random widths, instead of \(n\) random widths. We employ this rule and show better convergence behavior and overall performance in experiments. \textit{The sandwich rule} brings two additional benefits. First, as mentioned in Section~\ref{secs:post_statistics}, by training the smallest and largest widths we can explicitly track the validation accuracy of a model as it trains, which also indicates the performance lower and upper bounds of a US-Net. Second, training the largest width is also important and necessary for our next training technique: \textit{inplace distillation}. \subsection{Inplace Distillation} \label{secs:distill} The essential idea behind \textit{inplace distillation} is to transfer knowledge inside a single US-Net from the full network to the sub-networks, inplace, in each training iteration. It is motivated by two-step knowledge distillation~\cite{hinton2015distilling}, where a large model is trained first and its learned knowledge is then transferred to a small model by training with predicted class soft-probabilities. In US-Nets, by \textit{the sandwich rule} we train the model at the largest width, the smallest width and other randomly sampled widths all together in each iteration. Remarkably, this training scheme naturally supports inplace knowledge distillation: we can directly use the predicted label of the model at the largest width as the training label for other widths, while for the largest width we use the ground truth. The proposed \textit{inplace distillation} is simple, efficient, and general. In contrast to two-step knowledge distillation~\cite{hinton2015distilling}, \textit{inplace distillation} is single-shot: it can be implemented inplace in training without additional computation or memory cost. It is also generally applicable to all our tasks, including image classification, image super-resolution and deep reinforcement learning. For image classification, we use the soft-probabilities predicted by the largest width with cross entropy as the objective function. In image super-resolution, predicted high-resolution patches are used as labels, with either \(\ell_1\) or \(\ell_2\) as the training objective. For deep reinforcement learning we take the proximal policy optimization algorithm (actor-critic)~\cite{schulman2017proximal} as an example. To distill, we run the policy predicted by the model at the largest width as roll-outs for training the other widths. In practice, it is important to stop the gradients of the label tensor predicted by the largest width, so that the loss of a sub-network never back-propagates through the computation graph of the full network. Also, the predicted label is computed directly in training mode if the network has batch normalization; this works well and saves the additional forward cost of inference mode. We also tried combining the ground-truth label and the predicted label as the training label for sub-networks, using either a constant or a decaying balance of the two losses, but the results were worse.
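For image classification, the sub-network objective can be written in a few lines. This is our sketch (the function name is ours), not released code, but it follows the description above: soft class probabilities from the largest width, with gradients stopped on the teacher side.
\begin{verbatim}
import torch
import torch.nn.functional as F

def inplace_distillation_loss(student_logits, teacher_logits):
    """Cross entropy against the soft class probabilities predicted at the
    largest width; the teacher tensor is detached so no gradient flows
    back through the full-network graph."""
    soft_targets = F.softmax(teacher_logits.detach(), dim=1)
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
\end{verbatim}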
\subsection{Training Universally Slimmable Networks} Equipped with \textit{the sandwich rule} and \textit{inplace distillation}, the overall algorithm for training US-Nets is presented in Algorithm~\ref{algos:algo}. For simplicity, calculating the post-statistics of BN using Equation~\ref{eqs:exact_averages} is not included. It is noteworthy that: (1) The algorithm is general for different tasks and networks. (2) The GPU memory cost is the same as training individual networks, thus we can use the same batch size. (3) In all our experiments, the same hyper-parameters are applied. (4) It is relatively simple to implement, and we show PyTorch-style pseudo code as an example in Algorithm~\ref{algos:algo}. \begin{algorithm}[ht] \caption{Training universally slimmable network \(M\).} \begin{algorithmic}[1] \Require{Define \textit{width range}, for example, {\([0.25, 1.0] \times\)}.} \Require{Define \textit{n} as number of sampled widths per training iteration, for example, \(n=4\).} \State{Initialize training settings of shared network \(M\).} \For {\(t = 1, ..., T_{iters}\)} \State{Get next mini-batch of data \(x\) and label \(y\).} \State{Clear gradients, \(optimizer.zero\_grad()\).} \State{Execute full-network, \(y' = M(x)\).} \State{Compute loss, \(loss = criterion(y', y)\).} \State{Accumulate gradients, \(loss.backward()\).} \State{Stop gradients of \(y'\) as label, \(y' = y'.detach()\).} \State{Randomly sample (\(n-2\)) widths, as \textit{width samples}.} \State{Add smallest width to \textit{width samples}.} \For {\textit{width} in \textit{width samples}} \State{Execute sub-network at \textit{width}, \(\hat{y} = M'(x)\).} \State{Compute loss, \(loss = criterion(\hat{y}, y')\).} \State{Accumulate gradients, \(loss.backward()\).} \EndFor \State{Update weights, \(optimizer.step()\).} \EndFor \end{algorithmic} \label{algos:algo} \end{algorithm}
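A minimal executable rendition of Algorithm~\ref{algos:algo} might look as follows (our sketch; \texttt{model.apply\_width} is a hypothetical hook standing in for however an implementation switches the active width ratio):
\begin{verbatim}
import random

def train_us_net(model, loader, optimizer, criterion, distill_loss,
                 n_widths=4, width_range=(0.25, 1.0)):
    """One pass of the sandwich rule with inplace distillation."""
    lo, hi = width_range
    for images, labels in loader:
        optimizer.zero_grad()
        model.apply_width(hi)                 # largest width: ground truth
        y_full = model(images)
        criterion(y_full, labels).backward()
        soft_labels = y_full.detach()         # stop gradients of the label
        widths = [random.uniform(lo, hi) for _ in range(n_widths - 2)]
        widths.append(lo)                     # always include smallest width
        for r in widths:
            model.apply_width(r)
            distill_loss(model(images), soft_labels).backward()
        optimizer.step()                      # one step on accumulated grads
\end{verbatim}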
\section{Experiments} In this section, we first present experiments on the tasks of ImageNet classification, image super-resolution and deep reinforcement learning. Next we provide extensive ablation studies regarding \textit{the sandwich rule} and \textit{inplace distillation}. We further study US-Nets with regard to the size of the subset for BN post-statistics \(s\), the width lower bound \(k_0\), the width divisor \(d\) and the number of sampled widths per training iteration \(n\). In all tables and figures, we use I-Net to denote individually trained models at different widths, S-Net to denote 4-switch slimmable networks~\cite{yu2018slimmable} and US-Net to denote our proposed universally slimmable networks. \subsection{Main Results} \textbf{ImageNet Classification.} We experiment with the ImageNet~\cite{deng2009imagenet} classification dataset with \num{1000} classes. Two representative mobile network architectures, MobileNet v1~\cite{howard2017mobilenets} and MobileNet v2~\cite{sandler2018inverted}, are evaluated. Note that MobileNet v1 is a non-residual network, while MobileNet v2 is a residual network. \input{tabs/main_results.tex} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/main_results} \caption{FLOPs-Accuracy spectrum of US-MobileNet v1 and US-MobileNet v2, compared with I-Net~\cite{howard2017mobilenets, sandler2018inverted} and S-Net~\cite{yu2018slimmable}.} \label{figs:main_results} \end{figure} We use the default training and testing settings in~\cite{howard2017mobilenets, sandler2018inverted} except that: (1) we only train US-Nets for \num{250} epochs instead of \num{480} epochs for fast experimentation; (2) we use stochastic gradient descent as the optimizer instead of RMSProp; (3) we decrease the learning rate linearly from \(0.5\) to \(0\) with batch size \num{1024} on \num{8} GPUs. We always report results with the model of the final training epoch. To be fair, we use \(n=4\) for training US-Nets following Algorithm~\ref{algos:algo}. We first show numerical results in Table~\ref{tabs:main_results}. Compared with individual models and 4-switch slimmable networks~\cite{yu2018slimmable}, US-Nets have better classification accuracy on average. In Figure~\ref{figs:main_results}, we show the FLOPs-Accuracy spectrum of US-MobileNet v1 at widths of {\([.25:.025:1.0]\times\)} and US-MobileNet v2 at widths of {\([.35:.025:1.0]\times\)}. \textbf{Image Super-Resolution.} We experiment with the DIV2K dataset~\cite{timofte2017ntire}, which contains \num{800} training and \num{100} validation 2K-resolution images, on the task of bicubic \(\times 2\) image super-resolution. The WDSR network~\cite{yu2018wide} is evaluated. Note that the WDSR network has no batch normalization layer~\cite{ioffe2015batch}; instead, weight normalization~\cite{salimans2016weight} is used, which requires no further modification in US-Nets. We first individually train two models at width \(n=32\) and width \(n=64\) with 8 residual blocks. We then train US-Nets that can execute at any width in \([32, 64]\), either with or without the proposed \textit{inplace distillation} of Section~\ref{secs:distill}. The results are shown in Figure~\ref{figs:image_sr}. US-WDSR has slightly worse performance than the individually trained models (but only \(0.01\) lower PSNR), and the US-WDSR trained without \textit{inplace distillation} performs slightly worse still. It is noteworthy that we use default hyper-parameters optimized for individual models, which may not be optimal for our slimmable models (e.g., learning rate, initialization, weight decay, etc.). \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/super_resolution} \caption{FLOPs-PSNR spectrum of US-WDSR and super-resolved high-resolution images under different computations. FLOPs are calculated using input size \(48\times48\).} \label{figs:image_sr} \end{figure} \begin{figure*}[ht] \centering \vspace*{-8mm} \includegraphics[width=\linewidth]{figs/rl} \caption{Mean Episode Reward with US-Net and I-Net based on actor-critic style PPO~\cite{schulman2017proximal}. Curves are not smoothed.} \label{figs:deep_rl} \vspace*{-2mm} \end{figure*} \textbf{Deep Reinforcement Learning.} We experiment with the Atari game \textit{BreakoutNoFrameskip-v4}~\cite{1606.01540} using the actor-critic proximal policy optimization algorithm~\cite{schulman2017proximal}. Following the baseline models~\cite{schulman2017proximal}, we stack three convolutions with base channel numbers \num{32}, \num{64}, \num{32}, kernel sizes \num{8}, \num{4}, \num{3} and strides \num{4}, \num{2}, \num{1}, followed by a fully-connected layer with \num{512} output features. The output is shared by both the actor (with an additional fully-connected layer mapping to the number of actions) and the critic (with an additional fully-connected layer mapping to \num{1}). Note that the network has no batch normalization layer. We first individually train the model at widths of {\([0.25, 0.5, 0.75, 1.0]\times\)}. Then a US-Net is trained with \textit{inplace distillation} following Section~\ref{secs:distill} and Algorithm~\ref{algos:algo}. The results are shown in Figure~\ref{figs:deep_rl}. From left to right, we show the individually trained models, the universally slimmable model (the four corresponding widths are shown for comparison), and a performance comparison between I-Net and US-Net at widths of {\([0.25, 0.5, 0.75, 1.0]\times\)}. The curves show that the US-Net consistently outperforms the four individually trained networks on the task of deep reinforcement learning.
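For concreteness, a fixed-width instance of the actor-critic trunk described above could be sketched as below (ours; we assume the usual 4-frame Atari input stack and \texttt{num\_actions=4} for Breakout, neither of which is stated in the text). A slimmable version would replace \texttt{nn.Conv2d} with width-adjustable layers such as the earlier \texttt{USConv2d}.
\begin{verbatim}
import torch.nn as nn

def make_backbone(width_mult=1.0, num_actions=4):
    """Fixed-width instance of the described actor-critic trunk."""
    c1, c2, c3 = (max(1, int(c * width_mult)) for c in (32, 64, 32))
    fc = int(512 * width_mult)
    trunk = nn.Sequential(
        nn.Conv2d(4, c1, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(c1, c2, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(c2, c3, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(fc), nn.ReLU(),           # shared feature output
    )
    actor = nn.Linear(fc, num_actions)          # policy head
    critic = nn.Linear(fc, 1)                   # value head
    return trunk, actor, critic
\end{verbatim}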
We note that we include the Atari game example mainly to illustrate that our slimmable training is also applicable to CNNs for RL. We believe this is important because in more challenging RL solutions, for example \textit{AlphaGo}~\cite{silver2017mastering} and \textit{AlphaStar}~\cite{alphastar}, inference latency and the ability to adapt computation will be critical. \subsection{Ablation Study} \input{tabs/sandwich_rule.tex} \textbf{The Sandwich Rule.} We study the effectiveness of \textit{the sandwich rule} with ablation experiments. We train four US-MobileNet v1 models with \(n=3\) using different width sampling rules: \(n\) randomly sampled widths; (\(n-1\)) randomly sampled widths plus the smallest width; (\(n-1\)) randomly sampled widths plus the largest width; and (\(n-2\)) randomly sampled widths plus both the smallest and largest widths. Results are shown in Table~\ref{tabs:the_sandwich_rule}. The US-Net trained with \textit{the sandwich rule} has better performance on average, with good accuracy at both the smallest and the largest width. Moreover, training the model at the smallest width is more important than training it at the largest width, as shown in the 2nd and 3rd rows of Table~\ref{tabs:the_sandwich_rule}, which underlines the importance of the width lower bound \(k_0\). \textit{Inplace distillation} is not used in any of these experiments, since it is not applicable to width sampling rules that exclude the largest width. \textbf{Inplace Distillation.} Next we study the effectiveness of the proposed \textit{inplace distillation}, mainly on ImageNet classification. The results for image super-resolution (both with and without \textit{inplace distillation}) and deep reinforcement learning (with \textit{inplace distillation}) were already shown in Figure~\ref{figs:image_sr} and Figure~\ref{figs:deep_rl}. We use the same settings to train two US-MobileNet v1 models, either with or without \textit{inplace distillation}, and show the comparison in Figure~\ref{figs:distillation}. \textit{Inplace distillation} significantly improves overall performance at no cost. We believe it could be an essential component for training slimmable networks. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/distill_classification} \caption{FLOPs-Accuracy spectrum of two US-MobileNet v1 models trained either with or without \textit{inplace distillation}.} \label{figs:distillation} \vspace{-2mm} \end{figure} \input{tabs/batch_norm.tex} \textbf{Post-Statistics of Batch Normalization.} We further study post-statistics for batch normalization in US-Nets. We update the BN statistics after training US-MobileNet v1, when all weights are fixed. We then compute the BN statistics using four methods: moving average over the entire training set, exact average over the entire training set, exact average over a randomly sampled \(1k\) training subset, and exact average over a randomly sampled \(2k\) training subset. Table~\ref{tabs:batch_norm} shows that exact averaging has slightly better performance and that a small subset produces equally accurate BN statistics. This indicates that calculating the post-statistics of BN can be very fast. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/lower_bound} \caption{FLOPs-Accuracy spectrum of three US-MobileNet v1 models with different \textbf{width lower bounds}.} \label{figs:lower_bound} \vspace{-2mm} \end{figure} \textbf{Width Lower Bound \(k_0\).} The width lower bound \(k_0\) is of central importance in the bounded inequality of Equation~\ref{eqs:rethink}.
Although it is usually enough to adjust a model between widths {\(0.25\times\)} and {\(1.0\times\)}, we are interested in how the width lower bound affects overall performance. We train three US-MobileNet v1 models with different width lower bounds \(k_0\) of {\(0.25\times\)}, {\(0.35\times\)} and {\(0.05\times\)}, and show the results in Figure~\ref{figs:lower_bound}. They reveal that the overall performance of a US-Net is grounded on its width lower bound, as suggested by our analysis in Section~\ref{secs:rethink}. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/width_divisor} \caption{FLOPs-Accuracy spectrum of two US-MobileNet v1 models with different \textbf{width divisors}.} \label{figs:width_divisor} \vspace{-2mm} \end{figure} \textbf{Width Divisor \(d\).} The width divisor is introduced in MobileNets~\cite{howard2017mobilenets, sandler2018inverted} to floor the channel number approximately as \(\lfloor nr/d \rfloor \cdot d\), where \(n\) is the base channel number, \(r\) is the width multiplier and \(d\) is the width divisor\footnote{Details are in hyperlink \href{https://github.com/tensorflow/models/blob/0344c5503ee55e24f0de7f37336a6e08f10976fd/research/slim/nets/mobilenet/mobilenet.py\#L62-L69}{TensorFlow Models} (PDF required).}. To exactly match the FLOPs of MobileNets and have a fair comparison, by default we follow MobileNets and set the width divisor \(d=8\). This makes the minimal adjustable channel step \(8\) instead of \(1\), and slightly benefits overall performance, as shown in Figure~\ref{figs:width_divisor}. In practice, with \(d=8\) the US-Nets already provide enough adjustable widths. Also, in many hardware systems, a matrix multiplication whose size is divisible by \(d=8, 16, ...,\) may be as fast as one of smaller size due to the alignment of processing units (\eg, the warp size on GPUs is 32). \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{figs/training_n} \caption{FLOPs-Accuracy spectrum of two US-MobileNet v1 trained with different \textbf{numbers of sampled widths per iteration}.} \label{figs:num} \vspace{-2mm} \end{figure} \textbf{Number of Sampled Widths Per Iteration \(n\).} Finally we study the number of sampled widths per training iteration, which matters because a larger \(n\) leads to more training time. We train three US-MobileNet v1 models with \(n\) equal to \(3\), \(4\) or \(5\). Figure~\ref{figs:num} shows that the model trained with \(n=4\) performs better than the one with \(n=3\), while \(n=4\) and \(n=5\) achieve very similar performance. By default, in all our experiments we use \(n=4\).
\section{Discussion} \label{secs:discuss} In this section, we mainly discuss three topics with some experimental results. \textbf{Nonuniform Universally Slimmable Networks.} For all trained US-Nets so far, the width ratio is uniformly applied to all layers (\eg, MobileNet {\(0.25 \times\)} means the widths of all layers are scaled by \(0.25\)). Can we train a nonuniform US-Net, where each layer can independently adjust its own ratio, using our proposed methods? This requirement is especially important for related tasks like network slimming. Our answer is YES, and we show a simple demonstration of how the nonuniform US-Net can help in network slimming. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs/nonuniform} \caption{FLOPs-Accuracy spectrum of nonuniform US-MobileNet v1 tested with different slimming strategies. Note that each layer can adjust its own width ratio. The result suggests that \textit{slimming stage 5 of MobileNet v1 is not a good choice}.} \label{figs:nonuniform} \end{figure} In this demonstration, we first train a nonuniform US-MobileNet v1. The architecture of MobileNet v1 has 5 resolution stages with base channel numbers 64, 128, 256, 512 and 1024. After training, we apply an additional width ratio of \(0.6\) to one of the five stages and obtain five models. Together with the global width ratio, we can then draw their FLOPs-Accuracy spectra in Figure~\ref{figs:nonuniform}. For simplicity we only show the performance of slimming stages 1, 4 and 5. Slimming stages 2 and 3 gives curves close to that of slimming stage 1, while slimming stage 1 achieves the best results. Figure~\ref{figs:nonuniform} shows that stage 5 of MobileNet v1 may require more channels, because slimming stage 5 yields the worst accuracy at the same FLOPs. The result suggests that slimming stage 5 of MobileNet v1 is not a good choice, and implicitly indicates that stage 5 of the MobileNet v1 architecture needs a larger base channel number. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figs/naturally_slimmable} \caption{FLOPs-Accuracy spectrum of US-MobileNet v1, 4-switch S-MobileNet v1 and individual MobileNet v1 {\(1.0\times\)} tested on different widths after BN calibration. The results suggest that deep neural networks are not naturally slimmable.} \label{figs:naturally_slimmable} \end{figure} \textbf{Naturally Slimmable?} Perhaps the question is naive, but are deep neural networks naturally slimmable? We have proposed training methods and improved techniques for universally slimmable networks, yet we have not presented any result from directly evaluating a trained neural network at arbitrary widths, whether it was trained with the naive algorithm or with the slimmable training algorithm of~\cite{yu2018slimmable}. If we calibrate the post-statistics of BN in these trained models (instead of using our proposed US-Nets training algorithm), do they perform well? The answer is NO: both naively trained models and slimmable models~\cite{yu2018slimmable} have very low accuracy at arbitrary widths, even when their BN statistics are calibrated. In Figure~\ref{figs:naturally_slimmable}, we show results for a US-MobileNet v1, a 4-switch S-MobileNet v1 {\([0.25, 0.5, 0.75, 1.0]\times\)} and an individually trained MobileNet v1 {\(1.0\times\)}.
The individually trained MobileNet v1 {\(1.0\times\)} achieves good accuracy at width {\(1.0\times\)} but fails at other widths, especially when its computation is below 200 MFLOPs. The 4-switch S-MobileNet v1 {\([0.25, 0.5, 0.75, 1.0]\times\)} achieves good accuracy at the widths in {\([0.25, 0.5, 0.75, 1.0]\times\)}, but fails at other widths that are not included in training. Our proposed US-MobileNet v1 achieves good accuracy at any width in the range from 40 MFLOPs to 570 MFLOPs consistently. \textbf{Averaging Output by Input Channel Numbers.} In slimmable networks~\cite{yu2018slimmable}, private scale and bias \(\gamma\), \(\beta\) are used as conditional parameters for each sub-network, which brings a slight performance gain. These parameters come for free because, after training, they can be merged as \(y' = \gamma'y + \beta', \gamma' = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}}, \beta' = \beta - \gamma'\mu\). In US-Nets, by default we share scale and bias. Additionally, we propose an option that mimics conditional parameters: averaging the output by the number of input channels. It also brings a slight performance gain, as shown in Table~\ref{tabs:averaged_output}. In this way, to some extent the \textit{feature aggregation} can be viewed as a \textit{feature ensemble} in each layer. \input{tabs/averaged_output.tex} In practice, it is important not to average depthwise convolutions, because the actual input to each output channel of a depthwise convolution is always a single channel. For networks with batch normalization, the proposed output averaging also comes for free, since these constants can be merged into the BN statistics after training. At runtime, when switching to a different width, a switching cost (\eg, fusing the new BN into its previous convolution layer) is incurred. For networks without batch normalization, however, note that there is no switching cost if output averaging is not used. Thus, the proposed output averaging is optional and is not used by default.
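A minimal sketch of this option (ours, for illustration only) divides each output by the number of active input channels and skips depthwise convolutions:
\begin{verbatim}
import torch.nn as nn

class AveragedConv2d(nn.Conv2d):
    """Output-averaging option: divide each output by the number of active
    input channels, so aggregation acts like a per-layer ensemble.
    Depthwise convolutions (groups == in_channels) must not be averaged."""
    def forward(self, x):
        y = super().forward(x)
        if self.groups == 1:       # skip depthwise: its input is single-channel
            y = y / x.shape[1]     # a constant; can be folded into BN after training
        return y
\end{verbatim}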
\section{Introduction} Although loop quantum gravity~\cite{pulbook,rovbook,thbook} is endowed with a rigorous mathematical structure, it is still difficult to obtain GR as a low-energy limit from it and make contact with experiments. However, progress has recently been made on the computation of the graviton propagator~\cite{recentgravitons,rovrecent}, and in a previous publication~\cite{prl} we have identified graviton states within the Hamiltonian framework for a self-dual (or anti-self-dual) connection (for which the Immirzi parameter is $\gamma=\pm i$). The detailed calculation for general imaginary values of $\gamma$ was provided in~\cite{paper}. To identify the graviton states that correspond to the dynamical, fluctuating part of space-time we compared our approach to cosmological perturbation theory. After taking several subtleties into account (for more details see~\cite{paper}) the Ashtekar Hamiltonian indeed reduces on-shell to the standard tensor perturbation Hamiltonian~\cite{muk}. But novelties come about: we found that only half of the graviton states are physical, retaining only the standard two polarisations for gravitons after reality conditions are imposed, and for the physical states we discovered a $\gamma$-dependent chirality in the vacuum energy as well as in the 2-point function. In this paper, these results will be generalised to complex $\gamma$ in a Lorentzian theory. This is a non-trivial algebraic exercise with significant modifications in the intermediate steps, but the final result is remarkably simple. For details on how to derive the second order Hamiltonian for gravitons, Reference~\cite{paper} should be consulted; here we just summarize the framework and highlight the changes that occur for general $\gamma$. These are most notably the reality conditions and the commutation relations between the canonical variables. It turns out that, in spite of these modifications, the final result is simple to state: the vacuum chirality derived in~\cite{prl,paper} is only present if $\gamma$ has an imaginary part; for real $\gamma$ the two graviton polarisations are symmetric. The plan of the paper is as follows. In Section II we introduce the perturbed metric and connection variables and their classical solution. Section III explains the reality conditions and commutation relations for general $\gamma$. We present a representation of the Hamiltonian in terms of graviton states in Section IV. In Section V we explain how a complex $\gamma$ leads to a chirality in the vacuum fluctuations, but only provided that $\gamma$ has an imaginary part. The special case of real $\gamma$ will be investigated in Section VI. We finish with a concluding section summarising our results. \section{Notation and classical solution} In this Section we lay down the notation, referring the reader to previous publications~\cite{prl,paper} for details. We consider tensor fluctuations around de Sitter space-time described in the flat slicing, $ds^2=a^2[-d\eta^2 +(\delta_{ab}+h_{ab})dx^adx^b]$, where $h_{ab}$ is a symmetric TT tensor, $a=-1/H\eta$, $H^2=\Lambda/3$ and $\eta<0$. Using the convention $\Gamma^i=-\frac{1}{2}\epsilon^{ijk}\Gamma^{jk}$ (where $\Gamma^{ab}$ is the spin connection), the Ashtekar-Immirzi-Barbero connection is given by $A^i=\Gamma^i+\gamma \Gamma^{0i}$, with $\gamma$ the Immirzi parameter.
Making use of the Cartan equations for the zeroth order solution, the canonical variables can be expressed as: \bea\label{pertA} A^i_a&=&\gamma Ha \delta^i_a + \frac{a^i_a}{a}\\ E^a_i&=&a^2\delta^a_i - a\delta e^a_i \label{pertE}\; ,\eea where $E^a_i$ is the densitized inverse triad, canonically conjugate to $A^i_a$. As in~\cite{prl,paper} we define $\delta e^i_a$ via the triad, $e^i_a=a\delta^i_a+\delta e^i_a$; we then raise and lower indices in all tensors with the Kronecker-$\delta$, possibly mixing group and spatial indices. This simplifies the notation and is unambiguous if it is understood that $\delta e$ is originally the perturbation in the triad. It turns out that $\delta e_{ij}$ is proportional to the ``$v$'' variable used by cosmologists~\cite{muk,lid}. The canonical variables have symplectic structure \be\label{PBnonpert}\{A^i_a(\vx),E^b_j(\vy)\}=\gamma l_P^2\delta^b_a\delta^i_j\delta(\vx-\vy)\; \ee which implies~\cite{paper} \be\label{PBpert}\{a^i_a(\vx),\delta e^b_j(\vy)\}=-\gamma l_P^2\delta^b_a\delta^i_j\delta(\vx-\vy)\; . \ee To make contact with cosmological perturbation theory and standard perturbative quantum field theory we use mode expansions (see~\cite{paper} for a full explanation): \bea \delta e_{ij}&=&\int \frac{d^3 k}{(2\pi)^{\frac{3}{2}}} \sum_{r} \epsilon^r_{ij}({\mathbf k}) {\tilde e}_{r+}(\vk,\eta)e^{i\vk\cdot \vx} \nonumber\\ && +\epsilon^{r\star}_{ij}({\mathbf k}) {\tilde e}^{\dagger}_{r-}(\vk,\eta) e^{-i\vk\cdot \vx}\nonumber\\ a_{ij}&=& \int \frac{d^3 k}{(2\pi)^{\frac{3}{2}}} \sum_{r} \epsilon^r_{ij}({\mathbf k}) {\tilde a}_{r+}(\vk,\eta)e^{i\vk\cdot \vx} \nonumber\\ && +\epsilon^{r\star}_{ij}({\mathbf k}) {\tilde a}^{\dagger}_{r-}(\vk,\eta) e^{-i\vk\cdot \vx} \label{fourrier}\eea where ${\tilde e}_{rp}(\vk,\eta)=e_{rp}(\vk) \Psi_e(k, \eta)$ and ${\tilde a}_{rp}(\vk,\eta)=a_{rp}(\vk) \Psi^{rp}_a(k, \eta)$, and $\epsilon^r_{ij}$ are polarization tensors. The amplitudes ${\tilde a}_{rp}(\vk)$ and ${\tilde e}_{rp}(\vk)$ carry two indices (contrasting with previous literature, e.g.~\cite{gravitons,leelaur}): $r=\pm 1$ for right and left helicities, and $p$ for graviton ($p=1$) and anti-graviton ($p=-1$) modes. The $a_{rp}$ and $e_{rp}$ can be chosen so as {\it not} to carry any time-dependence, and for simplicity we will assume that they are equal. After imposing on-shell conditions we will find that the functions $\Psi_a(k,\eta)$ must then carry an $r$ and $p$ dependence. The classical solution in terms of these variables can be read off from cosmological perturbation theory~\cite{paper}. Since $\Psi_e$ is proportional to the ``$v$'' variable used in cosmology~\cite{muk,lid}, it must satisfy the well-known equation $\Psi_e''+{\left(k^2-\frac{2}{\eta^2}\right)}\Psi_e=0$, where $'$ denotes a derivative with respect to conformal time. This has solution: \be\label{psie} \Psi_e=\frac{e^{-ik \eta}}{2 \sqrt { k} } {\left( 1-\frac{i}{k\eta} \right)}\; ,\ee where the normalization ensures that the amplitudes $e_{rp}$ become annihilation operators upon quantization. The connection can then be inferred from Cartan's torsion-free condition $T^I=d e^I + \Gamma^I_J\wedge e^J=0$.
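Although it plays no role in the derivation, the solution (\ref{psie}) is easily verified symbolically; the following check (our addition, using \texttt{sympy}) confirms that it solves the mode equation for any normalization:
\begin{verbatim}
import sympy as sp

# assumptions only aid simplification; the identity is purely algebraic
k, eta = sp.symbols('k eta', positive=True)
Psi = sp.exp(-sp.I*k*eta)/(2*sp.sqrt(k)) * (1 - sp.I/(k*eta))
residual = sp.diff(Psi, eta, 2) + (k**2 - 2/eta**2)*Psi
print(sp.simplify(residual))  # -> 0
\end{verbatim}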
To first order, this condition is solved by \bea \delta\Gamma^{0}_{\;i}&=& \frac{1}{a}{\delta e'_{ij}} \, dx^j \\ \delta \Gamma_{ki}&=&-\frac{2}{a}\partial_{[k}\delta e_{i]j}\, dx^j\label{gammaij}\; . \eea These imply $\delta \Gamma^{i}=\frac{1}{a}\epsilon ^{ijk}\partial_{j}\delta e_{kl}\, dx^l$, so that \be\label{arealsp} a_{ij}=\epsilon_{ikl}\partial_k\delta e_{lj}+\gamma\delta{e}'_{ij} \; .\ee Up until this point the calculation is valid for all complex $\gamma$. The first novelty in this paper appears upon inserting decomposition (\ref{fourrier}) into (\ref{arealsp}), to determine the torsion-free conditions in Fourier space. Using the relation $\epsilon_{nij}\epsilon^{r}_{il}k_j= i r k \epsilon ^{r}_{nl}$ we obtain: \bea\label{psiapm} \Psi_a^{r+}&=&\gamma \Psi'_e + r k\Psi_e\\ \Psi_a^{r-}&=&\gamma^\ast \Psi'_e + r k\Psi_e\; , \eea and clearly $\gamma^\star =-\gamma$, used in~\cite{prl}, is only valid if $\gamma$ is imaginary. By writing a generally complex $\gamma$ as \be \label{gamma} \gamma= \gamma_R+i\gamma_I \ee we find that inside the horizon ($|k\eta|\gg1$) \be \label{Psia} \Psi_a^{rp}=\Psi_ek\left(r-i\gamma_R+p\gamma_I \right)\; , \ee generalizing the expression derived in~\cite{paper}. We note that the $p$ dependence of these functions only occurs if $\gamma$ has an imaginary part. For a real $\gamma$, $\Psi_a$ is the same for both gravitons and anti-gravitons, as expected (a real connection would be expanded in terms of a single particle ${\tilde a}_r$, so an index $p$ would be unnecessary; see Section~\ref{realgamma} for a longer discussion). This is a first hint that the chirality found in~\cite{prl,paper} is specific to non-real $\gamma$. \section{Reality conditions and Commutation relations} To be able to relate graviton and anti-graviton states (and their respective Hermitian conjugates), we need to impose reality conditions. As in~\cite{paper}, this will be done via the choice of inner product, rather than as operator conditions. Nonetheless it is important to see what these conditions look like in terms of operators (or as classical identities). As the metric is real ($\delta e_{ij}=\delta e^\dagger_{ij}$), we have \be \label{realg}e_{r+}(\vk)=e_{r-}(\vk) \; . \ee The definition of the connection implies \bea \label{reality} \Re A^i&=&\Gamma^i+\gamma_R \Gamma^{0i} \\ \Im A^i&=&\gamma_I\Gamma^{0i}\; . \eea Compared to the corresponding expressions for imaginary $\gamma$ (see~\cite{paper}), we note that the real part of the connection now has a contribution from $\Gamma^{0i}$, i.e. the extrinsic curvature. The reality conditions for the connection should embody the non-dynamical torsion-free conditions, i.e. those not involving the extrinsic curvature, which in the Hamiltonian formalism becomes the time derivative of the metric. The full torsion-free conditions representing (\ref{arealsp}) are now: \bea a_{ij}+{\overline a}_{ij}&=&2a \left(\delta \Gamma_{ij}+\gamma_R \delta \Gamma^0_{ij}\right) \nonumber \\ &=& 2 \epsilon _{ikl}\partial_k \delta e_{lj}+2\gamma_R \delta e'_{ij}\\ a_{ij}-{\overline a}_{ij}&=&2a i\gamma_I \delta \Gamma^0_{ij} =2i\gamma_I \delta e'_{ij}\; , \eea or, in terms of Fourier components: \bea\label{modereal} {\tilde a}_{r+} (\vk,\eta)+ {\tilde a}_{r-}(\vk,\eta) &=& 2 r k {\tilde e}_{r+}(\vk,\eta)+2 \gamma_R{\tilde e}'_{r+}(\vk,\eta)\,\\ \label{modereal2} {\tilde a}_{r+} (\vk,\eta)- {\tilde a}_{r-}(\vk,\eta) &=& 2 i \gamma_I {\tilde e}'_{r+}(\vk,\eta)\; .
\eea Combining (\ref{modereal}) and (\ref{modereal2}) so as to eliminate the time derivative in the metric leads to the condition: \be\label{totreal} i\gamma^\ast{\tilde a}_{r+} (\vk,\eta)- i\gamma{\tilde a}_{r-}(\vk,\eta) = 2 r k \gamma_I {\tilde e}_{r+}(\vk,\eta)\; .\ee Its Hermitian conjugate is: \be\label{totrealdagger} -i\gamma {\tilde a}^\dagger_{r+} (\vk,\eta)+ i\gamma^ \ast{\tilde a}^\dagger_{r-}(\vk,\eta) = 2 r k \gamma_I {\tilde e}^\dagger_{r-}(\vk,\eta)\; ,\ee which also invokes (\ref{realg}). These expressions represent the reality conditions that should be imposed quantum mechanically by the choice of inner product. They are very different from their counterparts for a purely imaginary $\gamma$ and represent novelty number two in our calculation. For each $r$ and $\vk$ there are two independent conditions upon the four operators $a_{rp}(\vk)$ and $e_{rp}(\vk)$. In addition to them there is an independent dynamical torsion-free condition. On shell, i.e. using (\ref{Psia}) and invoking (\ref{realg}), the connection can be written in terms of the metric according to the weak identity: \bea\label{modesonshell-} {\tilde a}_{r-}(\vk,\eta)&\approx& rk {\tilde e}_r +\gamma^\ast {\tilde e}_r'\rightarrow {\tilde e}_r(r-i\gamma^\ast)k \nonumber \\ {\tilde a}_{r+}(\vk,\eta)&\approx& rk {\tilde e}_r +\gamma {\tilde e}_r'\rightarrow {\tilde e}_r(r-i\gamma)k\; ,\label{modesonshell+} \eea where the latter expression is valid in the limit $k|\eta|\gg 1$. These will be useful in deriving the graviton operators for this theory. They render one of the graviton modes unphysical, fully relating metric and connection. Before we can set up a quantum theory in terms of graviton operators we need to define the commutation relations in terms of modes. These are obtained, as usual, from the Poisson brackets (\ref{PBnonpert}) and (\ref{PBpert}), leading to: \be\label{unfixedcrs} \left[A^i_a(\vx),E^b_j(\vy)\right] = i\gamma l_P^2\delta^b_a\delta^i_j\delta(\vx-\vy)\; \ee and \be\label{unfixedcrs1} \left[a^i_a(\vx),\delta e^b_j(\vy)\right] = -i\gamma l_P^2\delta^b_a\delta^i_j\delta(\vx-\vy)\; . \ee The commutators for the mode expansions can be derived as in \cite{paper} and are: \be\label{fixedcrs} [{\tilde a}_{rp}(\vk),{\tilde e}_{sq}^\dagger(\vk ')] =-i(\gamma_R+pi\gamma_I) \frac{l_P^2}{2}\delta_{rs}\delta_{p{\bar q}} \delta(\vk-\vk ')\; , \ee where ${\overline q}=-q$. Compared to~\cite{paper}, the factor $\gamma p$ has been replaced by $\gamma_R+pi\gamma_I$. This is algebraic novelty number three, the last one in our calculation. For real $\gamma$ the $p$ dependence is erased from the commutation relations. \section{The Hamiltonian} We now have all the ingredients to find a Hamiltonian in terms of graviton creation and annihilation operators (which will be linear combinations of the perturbations in the metric and connection variables). A surprise is in store at this point: in spite of the three novelties in the ingredients, spelled out above, the final result for the graviton operators and Hamiltonian is formally the same. The gravitational Hamiltonian in terms of Ashtekar variables is given by \bea \label{HamAsh} {\cal H}&=&\frac{1}{2l_P^2}\int d^3x N E^a_i E^b_j \Big[\epsilon_{ijk}(F^k_{ab}+H^2 \epsilon _{abc} E^c_k)\nn &&-2(1+\gamma^2)K^i_{[a} K^j_{b]}\Big] \label{fullH}\eea where \be\label{extK} K^i_a=\frac{A^i_a-\Gamma^i_a(E)}{\gamma} \ee is the extrinsic curvature of the spatial surfaces. 
The total Hamiltonian includes two further constraints, the Gauss and vector constraints, but they are automatically satisfied by the expansions (\ref{fourrier}) and do not contribute at the order in perturbation theory we will consider \cite{paper}. The dynamics of the theory is encoded in the second order Hamiltonian, quadratic in first order perturbations. To derive this Hamiltonian, a number of subtleties need to be taken into account, which are spelled out in detail in \cite{paper}. To write the Hamiltonian as a product of graviton creation and annihilation operators inside the horizon, we need to express the second order Hamiltonian in terms of the mode expansion (\ref{fourrier}) (see Appendix III of \cite{paper}). We can determine the graviton operators inside the horizon ($|k\eta| \gg 1$) following the same procedure as in~\cite{paper}. Before reality conditions are imposed there should be unphysical modes that vanish on-shell (and that will turn out to have negative energy and norm). The physical modes should commute with the non-physical modes and reduce, on-shell, to the correct expressions in terms of metric variables. Using these rules, and recalling (\ref{modesonshell+}) and (\ref{fixedcrs}), we define \be \label{Gr+}G_{r{\cal P_+}} = \frac{-r}{i\gamma}{\left({\tilde a}_{r+}- k(r+i\gamma){\tilde e}_{r+}\right)}\ee\be \label{Gr-} G_{r{\cal P_-}}=\frac{-r}{i\gamma}({\tilde a}_{r+}- k(r-i\gamma){\tilde e}_{r+})\ee \be \label{Gr+dagger}G^{\dagger}_{r{\cal P_+}}=\frac{r}{i\gamma}({\tilde a}^\dagger_{r-}- k(r-i\gamma){\tilde e}^\dagger_{r-})\ee \be \label{Gr-dagger}G^{\dagger}_{r{\cal P_-}}=\frac{r}{i\gamma}({\tilde a}^\dagger_{r-}- k(r+i\gamma){\tilde e}^\dagger_{r-})\; . \ee The index ${\cal P}={\cal P_+},{\cal P_-}$ denotes physical and unphysical modes, respectively. The normalisation ensures the right behaviour on-shell, i.e. $G_{r{\cal P_-}}\approx 0$ and $G_{r{\cal P_+}}\approx 2rk e_r$. Once the reality conditions (\ref{totreal})--(\ref{totrealdagger})--(\ref{realg}) are enforced, one can check that the $G^\dagger$ are indeed the Hermitian conjugate operators of the $G$. The commutation relations are, as required: \bea\label{Galgebra} \left[G_{r{\cal P}}(\vk),G^{\dagger}_{s{\cal P}}(\vk')\right]&=&{\cal P} k l_P^2 \delta_{rs} \delta(\vk-\vk')\\ \left[G_{r{\cal P_+}}(\vk),G^{\dagger}_{s{\cal P_-}}(\vk')\right]&=&0\; . \eea These expressions are precisely the same as found in~\cite{paper} for a purely imaginary $\gamma$, in spite of the three algebraic novelties spelled out above: somehow the modifications conspire to give the same graviton operators and the same commutators between them. This means that the Hamiltonian in terms of graviton states can be written in the same way as equation (105) of~\cite{paper}. Just as before, an inner product enforcing the reality conditions may be found in the representation diagonalizing the $G^\dagger$ operators. The state ${\cal P}={\cal P_+}=1$ has positive energy and norm, and ${\cal P}={\cal P_-}=-1$ has negative energy and norm. On-shell, the Hamiltonian becomes: \be \label{Hamshell}{\cal H}^{ph}_{eff}\approx \frac{1}{2l_P^2}\int d\vk \sum_r [{G}^{ph}_{r}{ G}^{ph \dagger}_{r} (1+ir\gamma)+ {G}_{r}^{ph \dagger}{G}^{ph}_{r} (1-ir\gamma)] \ee where ${G}^{ph}_{r}=G_{r {\cal P_+}}$. The first term in the Hamiltonian we have just derived (which follows from an EEF ordering) needs to be normal ordered, leading to a chiral (i.e. $r$-dependent) vacuum energy $V_r\propto 1+ir\gamma$.
The chiral asymmetry is given by \be\label{chiraleq} \frac{V_R-V_L}{V_R+V_L}=i\gamma\; . \ee In~\cite{paper} it was found that for imaginary $\gamma$ the vacuum energy (VE) is chiral and that for $|\gamma|>1$ one of the modes has a negative VE. This flags a point of interest, since a negative VE is usually associated with fermionic degrees of freedom. We now find that for $\gamma$ with a real part the VE for each mode is complex. The imaginary part, however, is maximally chiral and so cancels out when right and left modes are added together. The real part never sees such a cancellation, except in the limit $|\Im(\gamma)| \rightarrow\infty$, and so the total VE is only zero for the Palatini-Kibble theory. What is the origin of this result? We already pointed out in~\cite{paper} that non-perturbatively the Hamiltonian is generally complex, a fact behind many of the novelties we have exposed. On-shell the Hamiltonian is zero and therefore real. The complexity of the Hamiltonian is not to be confused with its Hermiticity after quantization, and the inner product should enforce the Hermiticity of the quantum Hamiltonian. Perturbatively, however, the situation is more complicated. As explained in~\cite{paper}, even though the second order Hamiltonian must still be zero on-shell, the portion dependent on first order variables (to be seen as the perturbative Hamiltonian ${\cal H}^{eff}$) evades the Hamiltonian constraint. A number of other novelties of this sort appear when going from the full theory to perturbation theory. It turns out that the classical perturbed Hamiltonian is always real on-shell, even if it is no longer zero. This is still true for a generally complex $\gamma$. However, quantum mechanically the perturbative Hamiltonian is only Hermitian, on and off-shell, {\it if $\gamma$ is imaginary}. If $\gamma$ has a real part the normal ordered Hamiltonian is still Hermitian, but the VE is not real. This can easily be seen from (\ref{Hamshell}): obviously $G_r^{ph}G_r^{ph\dagger}$ and $G_r^{ph\dagger}G_r^{ph}$ are still Hermitian under the chosen inner product, but their coefficients spoil Hermiticity before, but not after, ordering. What attitude should we take towards this result? One possibility is that there is nothing wrong with it: obviously the VE couples to Einstein's equations, but the total is always real. Should we decide, however, that this feature is pathological, then there are two possible implications. One is that a purely imaginary $\gamma$ should be favoured. Another is that a symmetric ordering of the Hamiltonian constraint is to be preferred. For more detail on the different ordering prescriptions see \cite{paper}; however, it is obvious that $EFE$ or $\frac{1}{2}\left(EEF+FEE\right)$ ordering would satisfy ${\cal H}={\cal H^\dagger}$ on and off-shell, before and after ordering. In this case there would be no chirality in the VE; however, as the graviton modes are still the same, the vacuum fluctuations, or the 2-point function, would still exhibit a chiral signature, as investigated in the next section. \section{Vacuum fluctuations}\label{fluct} As in \cite{paper}, we now want to compute the 2-point function in terms of connection variables, as it determines the vacuum fluctuation power spectrum. This is given by \be \label{PS1}{\langle 0|A^\dagger_r(\vk)A_r(\vk')|0\rangle} =P_r(k)\delta(\vk-\vk')\; ,\ee where $A_r(\vk)$ represents the Fourier space connection variables with handedness $r$, i.e.
\be\label{Bigak} A_r(\vk)=a_{r+}(\vk) e^{-i k\cdot x} + a_{r-}^\dagger(\vk) e^{i k\cdot x}\; . \ee Note that (\ref{PS1}) depends on a specific ordering of the 2-point function, and in general we have to consider \be A^\dagger A \rightarrow \alpha A^\dagger A + \beta A A^\dagger\; , \ee with $\alpha+\beta=1$ and $\alpha,\beta>0$. As (\ref{PS1}) is a variance, it must always be real and positive (as opposed to the vacuum energy). Any chiral effects will then leave a measurable imprint on this quantity. We need to relate the power spectrum to the physical graviton modes found in Section IV. This can be done by substituting the on-shell conditions (\ref{modesonshell-}) into (\ref{Gr+}) and (\ref{Gr+dagger}): \bea a^{ph}_{r+}&=&\frac{r-i\gamma}{2r}G_{r{\cal P_+}}\\ a^{ph\dagger }_{r+}&=&\frac{r+i\gamma^\ast}{2r}G^\dagger_{r{\cal P_+}}\\ a^{ph}_{r-}&=&\frac{r-i\gamma^\ast}{2r}G_{r{\cal P_+}}\\ a^{ph\dagger}_{r-}&=&\frac{r+i\gamma}{2r}G^\dagger_{r{\cal P_+}}\; . \eea Plugging these expressions into (\ref{Bigak}) we obtain: \bea A^{ph}_r(\vk)&=&\frac{r-i\gamma}{2r}G_{r{\cal P_+}}(\vk) e^{-i k\cdot x} + \frac{r+i\gamma}{2r} G_{r{\cal P_+}}^\dagger(\vk) e^{i k\cdot x} \nonumber \\ A_r^{ph\dagger}(\vk)&=&\frac{r-i\gamma^\ast}{2r}G_{r{\cal P_+}}(\vk) e^{-i k\cdot x} + \frac{r+i\gamma^\ast}{2r} G_{r{\cal P_+}}^\dagger(\vk) e^{i k\cdot x}\nonumber \eea so that \be\label{2point} {\langle 0|A^{ph\dagger}_r(\vk)A^{ph}_r(\vk')|0\rangle}=P_r(\gamma) {\langle 0|G_{r{\cal P_+}}(\vk)G^\dagger_{r{\cal P_+}}(\vk')|0\rangle}\; , \ee where \be P_r(\gamma)=\frac{(r+i\gamma)(r-i\gamma^\ast)}{4} =\frac{1-2\gamma_I r+|\gamma|^2}{4}\; . \ee If $\gamma_Ir<0$, $P_r(\gamma)$ is obviously positive. Otherwise, \be P_r(\gamma)\propto 1-2|\gamma_I| +\gamma_I^2+\gamma_R^2=(1-|\gamma_I|)^2+\gamma_R^2 \ee so this is also positive for any complex $\gamma$. Therefore the 2-point function is indeed always real and positive, as required. The chiral asymmetry in the power spectrum can be expressed as \be\label{chiralP} \frac{P_R-P_L}{P_R+P_L}=-\frac{2\gamma_I}{1+|\gamma|^2} \;,\ee or, for a general ordering, \be\frac{P_R-P_L}{P_R+P_L}=\frac{2(\beta- \alpha)\gamma_I}{1+|\gamma|^2}\;. \ee This implies that for a real $\gamma$ there is no asymmetry between the vacuum fluctuations of right and left gravitons. The chirality clearly traces back to the fact that for an imaginary $\gamma$ there must exist graviton and anti-graviton modes, i.e. the connection is a complex field. Note, however, that the presence of a real part of the Immirzi parameter does affect the {\it absolute} value of the asymmetry, due to the factor $|\gamma|$ in the denominator of (\ref{chiralP}). The power spectrum asymmetry (\ref{chiralP}) is plotted against a range of values of $\gamma$ in Figure~\ref{chirality}. It is obviously antisymmetric in $\gamma_I$, the minimum and maximum being at $\gamma=\pm i$ respectively, which are the values that correspond to an SD/ASD connection. They display the maximum chirality because the Palatini action can naturally be split into an SD and an ASD part \cite{thbook}. The axis $\gamma_I=0$ corresponds to a real $\gamma$ and therefore no asymmetry. \begin{figure} \begin{center} \includegraphics[width=6cm]{chirality2.eps} \end{center} \caption{Power spectrum asymmetry as a function of a generally complex Immirzi parameter $\gamma$.} \label{chirality} \end{figure} The chirality also vanishes in the limit $|\gamma|\rightarrow \infty$, which corresponds to the Palatini-Kibble theory.
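The shape of Figure~\ref{chirality} is straightforward to reproduce numerically; the snippet below (our illustration) evaluates (\ref{chiralP}) on a grid and confirms that the extrema $\mp 1$ occur at $\gamma=\pm i$ and that the asymmetry vanishes on the real axis:
\begin{verbatim}
import numpy as np

# Evaluate the asymmetry -2*gamma_I/(1+|gamma|^2) for gamma = gamma_R + i*gamma_I
gamma_R = np.linspace(-3.0, 3.0, 241)
gamma_I = np.linspace(-3.0, 3.0, 241)
GR, GI = np.meshgrid(gamma_R, gamma_I)
asym = -2.0 * GI / (1.0 + GR**2 + GI**2)

print(asym.min(), asym.max())         # -> -1 and +1, attained at gamma = +i, -i
print(asym[np.abs(GI) < 1e-9].max())  # -> 0 on the real axis (gamma_I = 0)
\end{verbatim}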
\section{A purely real $\gamma$}\label{realgamma} In everything we have derived so far we can take the limit $\Im(\gamma) \rightarrow 0$ and regard the result as the real theory. The question remains as to whether this limit is the same as a purely real theory, in which all the variables are real from the start. In principle the two might be different, since some aspects of the construction are obviously discontinuous. For example, in a purely real theory the expansions (\ref{fourrier}) have modes $a_r$ and $e_r$ without a $p$ index, so that for a fixed $\vk$ and $r$ we start off with two, rather than four modes. It is important to check that this discontinuity does not propagate into our results, leading to expressions different from those obtained by taking the limit $\Im(\gamma) \rightarrow 0$ in the complex theory. In this Section we show that this is not the case: at the very least it is possible to set up the real theory so that no discontinuities arise in any of the expressions in this paper, even though there is a jump in the number of independent degrees of freedom. Note that this is far from obvious, since the statements ${\tilde e}_{r+}={\tilde e}_{r-}$ and ${\tilde a}_{r+}={\tilde a}_{r-}$ are second class constraints in the complex theory, and are not enforced as operator conditions, but as formal conditions on the inner product. The real theory results from imposing them as operator conditions. Firstly, the commutation relations (\ref{fixedcrs}) continuously reduce to \be [{\tilde a}_{r}(\vk),{\tilde e}_{s}^\dagger(\vk ')] =-i\gamma \frac{l_P^2}{2}\delta_{rs} \delta(\vk-\vk ')\; \ee for a real $\gamma$. The reality conditions (\ref{totreal})--(\ref{totrealdagger})--(\ref{realg}) are now trivial (stating $0=0$) and do not constrain the theory. However, for the graviton operators we can still use the definitions (\ref{Gr+})--(\ref{Gr-dagger}) for $G_{r{\cal P}}$, simply dropping the $p$ index from their right-hand side, for example: \be \label{Gr+1}G_{r{\cal P_+}} = \frac{-r}{i\gamma}{\left({\tilde a}_{r}- k(r+i\gamma){\tilde e}_{r}\right)}\; .\ee It may appear that we are introducing too many modes. In the complex theory, for a fixed $\vk$ and $r$ we start with four modes, ${\tilde a}_{rp}$ and ${\tilde e}_{rp}$, from which we build four $G_{r{\cal P}}$ and $G_{r{\cal P}}^\dagger$. Three reality and torsion-free conditions then reduce them to a single physical operator, as explained after Eqn.~(\ref{totrealdagger}). For the real theory we only have two modes, $a_r$ and $e_r$, from which we build four $G_{r{\cal P}}$ and $G_{r{\cal P}}^\dagger$ without having any reality conditions. However, upon closer inspection we see that for fixed $\vk$ and $r$ there are only two independent modes among the $G_{r{\cal P}}$ and $G_{r{\cal P}}^\dagger$: in the complex theory we needed the reality conditions to ensure that the $G_{r{\cal P}}^\dagger$ were in fact the Hermitian conjugates of the $G_{r{\cal P}}$. If we drop the index $p$ from their expressions, as in (\ref{Gr+1}), then this fact follows trivially from their definitions and the linearity of the $\dagger$ operation. Hence, by defining graviton operators in the real theory we do preserve the number of independent degrees of freedom. The question remains of how to eliminate the non-physical mode. This is done by imposing the torsion-free condition, relating the $a_r$ to the $e_r$, which amounts to disqualifying the $G_{r{\cal P_-}}$ mode. A {\it possible} implementation, even in the real theory, is to do this via the inner product.
As in~\cite{paper}, we work in a holomorphic representation which diagonalizes $G^\dagger_{r{\cal P}}$, i.e.: $ G^\dagger_{r{\cal P}}\Phi(z)= z_{r{\cal P}}\Phi(z)$. Then (\ref{Galgebra}) implies: \be \label{Grop} G_{r{\cal P}}\Phi = {\cal P}k l_P^2 \frac{\partial \Phi}{\partial z_{r{\cal P}}} \; .\ee Following the same procedure as in~\cite{paper} we find \be\label{ansatz}{\langle \Phi_1 | \Phi_2\rangle}=\int d z d {\bar {z}} e^{\mu(z,{\bar z})} {\bar \Phi_1}({\bar z}) \Phi_2 (z)\ee with: \be \mu(z,{\bar z})=\int d{\vk}\sum_{r{\cal P}}\frac{-{\cal P}}{k l_P^2}z_{r{\cal P}}(\vk){\bar z} _{r{\cal P}}(\vk)\; , \ee rendering the states built from operators with ${\cal P}={\cal P_-}=-1$ non-normalizable. As long as this procedure is adopted for the real theory the expressions found in this paper are continuous, and the limit $\Im(\gamma)\rightarrow 0$ does indeed represent the real theory. \section{Conclusion} In this paper we have generalized the results of \cite{paper} to cover all values of the Immirzi parameter. Our analysis shows that an imaginary part of $\gamma$ is needed to produce a chiral effect in the vacuum fluctuations, whereas a purely real $\gamma$ would give the same physical Hamiltonian for right- and left-handed gravitons. The greatest asymmetry occurs for the values $\gamma=\pm i$, corresponding to a SD/ASD connection and the subject of \cite{prl}. Here, as in previous work, the chirality also depends on the ordering used for the 2-point function. Although this implies that an observation of this asymmetry cannot be traced back to one single cause, it is still a striking prediction of quantum gravity in the Ashtekar formalism. It was shown in~\cite{TBpapers} that even a small chiral effect in the gravitational wave background would greatly simplify its detection, making us hopeful that a test of our prediction could even be achieved by PLANCK. Note that other mechanisms exist that produce a similar chiral effect~\cite{steph,gianl,merc}, but the one pointed out here is by far the simplest. It would be interesting to make contact with the work of~\cite{recentgravitons}, where a chiral contribution was found for the graviton propagator. However, in this publication a Euclidean signature and a real $\gamma$ were used, basically the opposite of our set-up, making the link between the predictions unclear. {\bf Acknowledgements} We thank Dionigi Benincasa, Gianluca Calcagni and Chris Isham for help regarding this project.
\section{Introduction} \glsresetall \Glspl{abm} are becoming a popular modelling tool in a variety of disciplines, from economics \citep{Baptista2016} to epidemiology \citep{ferguson2020impact}. They offer domain experts a high degree of flexibility in modelling complex systems, for example by naturally incorporating interactions between, and heterogeneity across, agents in the system. Typically, \glspl{abm} are stochastic, dynamic models in which the states $\mathbf{z}^t = (\mathbf{z}^t_{1}, \dots, \mathbf{z}^t_{N})$ of a set of $N$ interacting agents, labelled $i = 1, \dots, N$, are simulated over time $t \in \left[0,T\right]$. We assume here that the \gls{abm} progresses in discrete\footnote{We discuss in Section \ref{sec:discuss} how this assumption may be relaxed.} timesteps $t=0, 1, \dots, T$, and that the agent states may be multidimensional such that $\mathbf{z}^t_{i} \in \mathbb{R}^K$ for some $K \geq 1$. \Glspl{abm} further rely on a potentially time-varying graph structure -- represented as an adjacency matrix $\mathbf{w}^t \in \mathbb{R}^{N\times N}$ -- that reflects, for example, the strength of the relationship between pairs of agents, or the set of pairwise interactions that can take place during the simulation. Once a set of parameters $\boldsymbol{\theta} \in \boldsymbol{\Theta} \subset \mathbb{R}^D$ and the initial states $\mathbf{z}^0$ and $\mathbf{w}^0$ are specified, the agent behaviours and interactions are simulated, and a time-series $\mathbf{x} = (\mathbf{x}^1, \dots, \mathbf{x}^T)$ is generated as output. Typically, $\mathbf{x}^t \in \mathbb{R}^M$ for some $M \geq 1$. The model output $\mathbf{x}$ is often taken to be some aggregate statistics describing the macrostate of the \gls{abm} over time, that is $\mathbf{x}=g(\mathbf{w}, \mathbf{z})$ for some aggregation function $g$. In some cases, this is done out of necessity: it is sometimes the case that only aggregate data is available from the real-world system the \gls{abm} is designed to mirror and, consequently, the \gls{abm} can only be compared to reality through the lens of this aggregate data. Under these circumstances, two natural inference tasks arise: \begin{enumerate} \item parameter inference -- i.e.~inferring the fixed parameters $\boldsymbol{\theta}$; and \item latent state inference (filtering and smoothing) -- i.e.~inferring either or both the latent states $\mathbf{z}^t$ and agent-agent relationships $\mathbf{w}^t$ of and between the agents in the system over time. \end{enumerate} Both tasks are complicated by the fact that the relevant marginal and conditional likelihood functions are in general unavailable to compute, due to the complexity of \glspl{abm}. Intractable likelihoods are encountered widely across model types and application domains and, consequently, significant research effort within the statistics and machine learning communities has been directed towards developing likelihood-free, \gls{sbi} procedures that act as more convenient substitutes for their likelihood-based counterparts.
For parameter inference, approaches such as \gls{abc} \citep{pritchard1999population, beaumont2002approximate, dyer2021approximate} have seen significant success, while more modern neural network approaches to density \citep{Papamakariosepsilon, Lueckmann2017, Greenberg2019} and density ratio \citep{thomas2021lfire, pmlr-v119-hermans20a} estimation show promise as a means to dramatically reduce the simulation burden in \gls{sbi} procedures; see \citet{dyer2022black} for a recent overview of these methods and their application to \glspl{abm} in the social sciences. Similarly, variants of the Kalman filter and sequential Monte Carlo methods have been developed and applied to the problem of \gls{abm} latent state inference \citep[see e.g.][]{ward2016dynamic, LUX2018391}, although this has received less attention within the \gls{abm} community than the problem of parameter inference. In contrast to this formulation, the increasing availability of granular, longitudinal microdata on agent behaviours and interactions in real-world social systems (e.g. on social media) raises the possibility of dispensing with the structure described above, in which it is often necessary to perform two inference tasks to properly ``fit'' the \gls{abm}. Indeed, in some crucial applications, the \gls{abm} is ``fully-observed'', i.e. $g(\cdot)=\mathrm{identity}(\cdot)$ and $\mathbf{x}^t=(\mathbf{w}^t, \mathbf{z}^t)$. Under these circumstances, the filtering and smoothing problems vanish and only the problem of parameter inference remains. In such cases, the process of fully calibrating the \gls{abm} to observed data is greatly simplified as a result of the granularity of the data. Approaches to performing parameter inference for fully-observed \glspl{abm} are currently lacking in the \gls{sbi} literature. Importantly, modellers currently lack approaches to \gls{sbi} that incorporate useful inductive biases reflecting the natural dynamic graph structure of the \gls{abm} and the data. The absence of such methods prevents us from properly capitalising on the availability, and full information content, of such granular data. In this paper, we address this gap by demonstrating how (recurrent) \glspl{gnn} may be combined with neural \gls{sbi} procedures to flexibly and automatically accommodate high-dimensional, high-resolution data describing the evolution of the microstate of a social system. We show that \glspl{gnn} provide useful inductive biases for the use of such microdata as observables against which parameters $\boldsymbol{\theta}$ are calibrated in the case of ``fully observed'' \glspl{abm}, with promising performance on test cases modelling the coevolution of opinions and network structure in a social system. \section{Background \& Motivation} \subsection{Simulation-based Parameter Inference} \glsreset{sbi} \Gls{sbi} is a set of algorithms in which likelihood-based parameter inference procedures -- such as Bayesian inference -- are approximated through training on \emph{iid} data $(\mathbf{x}, \boldsymbol{\theta}) \sim p(\mathbf{x} \mid \boldsymbol{\theta}) \pi(\boldsymbol{\theta})$, where $\pi(\boldsymbol{\theta})$ is a prior density and $p(\mathbf{x} \mid \boldsymbol{\theta})$ is the likelihood function associated with the simulation model. This is done by first sampling $\boldsymbol{\theta} \sim \pi(\boldsymbol{\theta})$ and subsequently forward simulating from the simulator at $\boldsymbol{\theta}$, represented as $\mathbf{x} \sim p(\mathbf{x} \mid \boldsymbol{\theta})$.
Once trained, \gls{sbi} algorithms then, given some observation $\mathbf{y}$, yield estimates of parameter posteriors $\pi(\boldsymbol{\theta} \mid \mathbf{y})$ \citep{Papamakariosepsilon, Lueckmann2017, Greenberg2019}, data likelihood functions $p(\mathbf{y} \mid \boldsymbol{\theta})$ \citep{papamakarios2019sequential}, or likelihood-to-evidence ratios $p(\mathbf{y} \mid \boldsymbol{\theta})/p(\mathbf{y})$ \citep{thomas2021lfire, pmlr-v119-hermans20a, dyer2022amortised}. Of particular interest to the current work is the first of these three alternatives, commonly referred to as \gls{npe}. In \gls{npe} algorithms, a conditional density estimator -- such as a mixture density network \citep{bishop1994mixture} or a normalising flow \citep{tabak2010density, tabak2013family, rezende} -- is trained to approximate the map $\mathbf{x} \mapsto \pi(\cdot \mid \mathbf{x})$ using \emph{iid} training data $(\mathbf{x}, \boldsymbol{\theta}) \sim p(\mathbf{x} \mid \boldsymbol{\theta}) \pi(\boldsymbol{\theta})$. This provides the experimenter with immediate access to the posterior density estimated by the neural network, which can furthermore be constructed to operate directly on raw data $\mathbf{x}$ through the incorporation of appropriate inductive biases into the network architecture. \begin{figure*}[tb] \centering \includegraphics[width=0.99\linewidth]{figures/recurrent_graph_flow_nobold_wider.png} \caption{A schematic of the posterior estimation pipeline we use. The \gls{abm} -- shown as the dynamic graph with evolving node states (node colors) $\mathbf{z}$ and edge weights (line widths) $\mathbf{w}$ -- is embedded into a low-dimensional space with a graph \gls{gru} and a feedforward network applied to the \gls{gru}'s final hidden state, $h_T$. This representation, $g_{\phi}(\mathbf{z}, \mathbf{w})$, is fed to the \gls{maf} to estimate the posterior as $q_{\varphi}(\boldsymbol{\theta} \mid g_{\phi}(\mathbf{z}, \mathbf{w}))$.} \label{fig:setup} \end{figure*} \subsection{Graph Neural Networks} \glsreset{gnn} Recent years have seen considerable progress in the development of \glspl{gnn} in machine learning \citep[e.g.][]{yao2019graph, Zhang2020Efficient, BaekKH21}. In many cases, the design of a \gls{gnn} consists of generalising a convolution operator from regular, Euclidean domains -- as appears in convolutional neural networks -- to graphs. This has predominantly proceeded by constructing a convolution in the spatial domain \citep[see e.g.][]{Masci2015Geo, niepert2016learning} or by exploiting the convolution theorem and performing a multiplication in the graph Fourier domain \citep[see e.g.][]{bruna2014spectral}. A recent review of \glspl{gnn} and their design can be found in \citet{ZHOU202057}. The problem of extending \glspl{gnn} to dynamic graphs has also recently received significant attention. In this vein, \citet{li2017diffusion} introduce Diffusion Convolutional Recurrent Neural Networks, with applications to traffic flow prediction. In addition, \citet{seo2018structured} propose Graph Convolutional Recurrent Networks, an adaptation of standard recurrent networks to operate on sequences of graphs via graph convolutional operators. Further examples of recurrent graph neural network architectures exist; a broader survey of neural networks for dynamic graphs can be found in \citet[][Section 7]{wu2021comp}.
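To make the \gls{npe} recipe above concrete, the following minimal sketch illustrates the sample--simulate--train loop for amortised posterior estimation. This is our illustration only: it assumes the open-source \texttt{sbi} package and a toy stand-in simulator, neither of which is prescribed by the text, and it treats the data as a flat vector rather than the graph-structured microdata introduced in the next section.

\begin{verbatim}
# Illustrative NPE loop (assumes the `sbi` package; the simulator is
# a placeholder, not the ABM considered in this paper).
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulate(theta):
    # Stand-in for x ~ p(x | theta); replace with the forward model.
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=torch.zeros(4), high=torch.ones(4))
theta = prior.sample((1000,))                    # theta ~ pi(theta)
x = torch.stack([simulate(t) for t in theta])    # x ~ p(x | theta)

inference = SNPE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x).train()   # fits q(theta | x)
posterior = inference.build_posterior()

x_obs = simulate(prior.sample((1,))[0])          # mock observation
samples = posterior.sample((5000,), x=x_obs)     # draws from q(. | x_obs)
\end{verbatim}

Once trained in this amortised fashion, the estimator can be conditioned on new observations without retraining.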
\section{Methods} In this paper, we consider the problem of parameter inference for the case of fully observed \glspl{abm}, where the data the experimenter observes is the complete trace of agent states $\mathbf{z}^t$ and their relationships $\mathbf{w}^t$ over all timesteps $t = 0, \dots, T$. Under these circumstances, the experimenter requires approaches to parameter inference that accommodate the dynamic graph structure of this data. To address this, we construct a neural posterior estimator in which a recurrent \gls{gnn} $g_{\phi}$ and a neural conditional density estimator $q_{\varphi}$ are paired to approximate the map $(\mathbf{z}, \mathbf{w}) \mapsto \pi( \cdot \mid \mathbf{z}, \mathbf{w})$, where $\phi$ and $\varphi$ are the parameters of the respective neural networks. In particular, we take $g_{\phi}$ to be a feedforward network applied to the final hidden state of the recurrent \gls{gnn} and $q_{\varphi}$ to be a normalising flow. The posterior may then be learned from this low-dimensional representation of the original high-dimensional sequence of graphs as $q_{\varphi}(\cdot \mid g_{\phi}(\mathbf{z}, \mathbf{w}))$. While many choices of architecture are available, we employ a graph convolutional \gls{gru} \citep{seo2018structured} to construct $g_{\phi}$ and use a \gls{maf} \citep{papamakarios2017masked} for $q_{\varphi}$ in this paper. In Figure \ref{fig:setup}, we show a schematic of the pipeline that results from the particular choice of recurrent \gls{gnn} and conditional density estimator used for our experiments, although the exact modules appearing in this experimental setup may be substituted for others without fundamentally altering the overall pipeline. We train the network parameters $\phi$ and $\varphi$ concurrently and on the same loss function as the \gls{maf}, such that the graph sequence embedding and the posterior are learned simultaneously. Further details on the architecture and training procedure we employ can be found in the supplement. \section{Experimental Results} To test the approach, we consider a task based on the Hopfield model of social dynamics proposed by \citet{macy2003polarization} which describes the coevolution of opinions and the social network structure, and the emergence of polarisation, in a population of $N$ agents. At each time step $t = 1, \dots, T$, each agent is equipped with $N-1$ undirected ties to the remaining agents in the population, and the strength and valence of the tie between agents $i$ and $j$ is characterised by $w^t_{ij} \in \left[-1, 1\right]$. Each agent is also equipped with a state vector $\mathbf{z}^t_i \in \lbrace{-1, 1\rbrace}^K, i = 1, \dots, N$, which may represent the opinion status of agent $i$ on each of a number $K \geq 1$ of topics at time $t$. The \emph{social pressure} that agent $i$ experiences on topic $k$ at time $t$ is then modelled as \begin{equation} P^t_{ik} = \frac{1}{N-1} \sum_{j \neq i} w^t_{ij} z^t_{jk}, \end{equation} and $i$'s corresponding propensity to adopt the positive opinion is taken to be \begin{equation} \pi^t_{ik} = \frac{1}{1 + e^{-\rho \cdot P^t_{ik}}}, \end{equation} where $\rho > 0$ is a free parameter of the model. Agent $i$ then adopts the positive opinion on topic $k$ at the next time step (i.e. $z^{t+1}_{ik} = 1$) if \begin{equation} \pi^t_{ik} > 0.5 + \epsilon U^t_i, \end{equation} where $\epsilon \in \left[0, 1\right]$ is a further free parameter of the model and $U_i^t \sim \mathcal{U}(-0.5,0.5)$.
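For concreteness, a minimal sketch of this opinion-update step is given below (our illustration: it is vectorised with NumPy, assumes a zero diagonal for $\mathbf{w}^t$, and adopts the reading that the negative opinion is taken whenever the threshold condition fails); the accompanying tie-update rule follows next.

\begin{verbatim}
# One opinion-update step of the Hopfield social-dynamics model
# (illustrative sketch only).
import numpy as np

def update_opinions(z, w, rho, eps, rng):
    # z: (N, K) opinions in {-1, +1}; w: (N, N) ties in [-1, 1]
    N = z.shape[0]
    P = w @ z / (N - 1)                      # pressure P_ik (w_ii = 0)
    pi = 1.0 / (1.0 + np.exp(-rho * P))      # adoption propensity pi_ik
    U = rng.uniform(-0.5, 0.5, size=(N, 1))  # one draw U_i^t per agent
    return np.where(pi > 0.5 + eps * U, 1, -1)

rng = np.random.default_rng(0)
z = rng.choice([-1, 1], size=(20, 3))
w = rng.uniform(-1.0, 1.0, size=(20, 20))
w = (w + w.T) / 2.0                          # undirected ties
np.fill_diagonal(w, 0.0)
z = update_opinions(z, w, rho=1.0, eps=0.8, rng=rng)
\end{verbatim}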
Finally, the ties between agents evolve as \begin{equation} w^{t+1}_{ij} = (1 - \lambda)w^t_{ij} + \frac{\lambda}{K} \sum_{k=1}^K z^{t+1}_{ik} z^{t+1}_{jk}, \end{equation} where $\lambda \in \left[0,1\right]$ is a third free parameter of the model. Taking the initial proportion $p \in \left[0,1\right]$ of positive entries among all opinion states as the final free parameter of the model, we assume the goal of approximating a posterior density for $\boldsymbol{\theta} = (\rho, \epsilon, \lambda, p)$. Specifically, we assume prior densities $\rho \sim \mcal{U}(0,5)$, $\epsilon \sim \mcal{U}(0,1)$, $\lambda \sim \mcal{U}(0,1)$, $p \sim \mcal{U}(0,1)$, and further assume that the \gls{abm} is fully observed in the sense that all $w_{ij}^t$ and $z^t_{ik}$ are observed. To generate a pseudo-true dataset, we simulate the model for 25 time steps with $N=20$ at ground-truth parameter values $\boldsymbol{\theta}^{*} = (1, 0.8, 0.5, 0.5)$. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/maf_rgcn_better.png} \vspace{-10mm} \caption{Approximate posterior for the Hopfield model obtained with a masked autoregressive flow and recurrent graph embedding network. Red lines/points show the ground truth parameters for the dataset.} \label{fig:Hopfield_MAF} \end{figure} We show the approximate posterior obtained with a \gls{maf} posterior estimator, recurrent graph convolutional embedding network, and a budget of 1000 simulations in Figure \ref{fig:Hopfield_MAF}. The diagonal subplots show the marginal posterior densities, while the off-diagonal subplots show the 2-dimensional projections of the joint densities. We show the ground truth parameters $\boldsymbol{\theta}^{*}$ with red lines and points. The approximated posterior assigns high posterior density to the ground truth parameters, providing evidence that a reasonable degree of accuracy has been achieved by the posterior estimator. \section{Discussion}\label{sec:discuss} In this paper, we address the problem of how to learn parameter posteriors for ``fully-observed'' \glspl{abm}, that is, when the full trace of the agents' states and interactions are observed. We propose the use of temporal graph neural networks in neural \gls{sbi} algorithms as a way to incorporate useful inductive biases reflecting the natural dynamic graph structure of \glspl{abm}. Through experiments performed on an \gls{abm} modelling the coevolution of agent opinions and relationship strengths in a dynamic social network, we demonstrated that such an approach can generate approximate Bayesian parameter posteriors in which the ground-truth parameter is assigned a high posterior density, suggesting that the approximate posterior is accurate to some degree. In future work, we will conduct a more thorough assessment of the quality of the estimated posteriors following the guidelines discussed in \citet{dyer2022black}, for example through the use of posterior predictive checks to assess the predictive power of the inferences drawn, or simulation-based calibration \citep{talts2020validating} to assess the quality of the overall shape of the posterior. In addition, we will extend this approach to continuous-time settings through the use of architectures that are compatible with continuous-time data \citep[see e.g.][]{rossi2020temporal}. \section*{Acknowledgements} The authors thank the anonymous reviewers for their helpful comments and feedback.
JD is supported by the EPSRC Centre For Doctoral Training in Industrially Focused Mathematical Modelling (EP/L015803/1) in collaboration with Improbable.
\section{Introduction} V404\,Cygni (hereafter \rm V404\,Cyg, also known as GS 2023+338) is a well-known, dynamically confirmed black hole X-ray binary (BHB). The black hole, of mass 9--15\,\hbox{$\rm M_{\odot}$}, is in a 6.5d binary system with a lower-mass K-type stellar companion, from which it accretes via Roche-lobe overflow (\citealt{Casares92nat, Wagner92, Shahbaz94, Sanwal96, Khargharia10}). Located only $2.39 \pm 0.14$ kpc away (\citealt{MillerJones09}), \rm V404\,Cyg\ is one of the closest black hole systems known (\citealt{blackcat, watchdog}). \begin{figure*} \hspace*{-0.5cm} \epsscale{1.1} \plotone{./figs/v404cyg_isgri_lightcurve_outburst_nustar.eps} \caption{Long-term 25--200\,keV X-ray lightcurve for the recent outburst from \rm V404\,Cyg\ observed with {\it INTEGRAL~\/}\ (see \citealt{Kuulkers16v404} for details). The first four of our five {\it NuSTAR}\ observations are indicated with the shaded regions (N1--4; the fifth, N5, spanned MJD $\sim$57226.35--57227.46); the first caught \rm V404\,Cyg\ during the height of its activity, and is the subject of this work, while the following four observations probed various stages of its decline back to quiescence (Rana et al., \textit{in preparation}). } \vspace{0.3cm} \label{fig_longlc} \end{figure*} As a low-mass X-ray binary (LMXB), \rm V404\,Cyg\ spends the majority of its time in quiescence, and has become one of the key targets for studying black holes in this regime ({\it e.g.}\ \citealt{Reynolds14quies, Bernardini14, Rana16}). However, as with other LMXBs, it undergoes intense accretion outbursts, likely related to the hydrogen ionization instability (see \citealt{Lasota01rev} for a review). Although these events are rare, during these outbursts \rm V404\,Cyg\ becomes one of the brightest X-ray sources in the sky. The X-ray band is vital for studying the accretion flow. For BHBs, the thermal emission from the accretion disk, the high-energy powerlaw continuum (likely resulting from Compton up-scattering of the disk emission), and the disk reflection spectrum (resulting from irradiation of the disk) all contribute to the broadband X-ray emission ({\it e.g.}\ \citealt{Zdziarski02, Reis10lhs, Walton12xrbAGN, Tomsick14}; see \citealt{Done07rev} for a review). The disk reflection spectrum is particularly critical, as this carries information regarding both the geometry of the innermost accretion flow ({\it e.g.}\ \citealt{Wilkins12, Dauser13}) and the spin of the central black hole ({\it e.g.}\ \citealt{Miller09, Reis09spin, Brenneman11, Walton13spin}; see \citealt{Reynolds14rev} and \citealt{Middleton15rev} for recent reviews). \rm V404\,Cyg\ is therefore an important source with which to investigate these accretion phenomena. However, in some respects, \rm V404\,Cyg\ is unusual for a black hole LMXB. Throughout a typical outburst, most sources follow a relatively well-defined pattern of accretion states (see \citealt{Fender14rev} and \citealt{Belloni16rev} for recent reviews). Sources rise from quiescence into the hard state, in which the powerlaw dominates the emission and persistent radio jets are seen. As the accretion rate continues to increase, sources transition into the soft state, in which the thermal emission from the disk dominates the observed emission.
The radio jets are believed to be quenched in this state, and outflows are typically seen in the form of winds from the accretion disk instead ({\it e.g.}\ \citealt{Miller06a, Neilsen09, Ponti12}, although recent analyses suggest that jets and disk winds may not necessarily be mutually exclusive, \citealt{Rahoui14, Reynolds15, Homan16}). Then, as the sources fade, they move back through the hard state, before finally returning to quiescence. \rm V404\,Cyg\ instead shows much more complexity. Its major 1989 outburst, which first identified the source as an X-ray binary, was well covered by the {\it Ginga}\ observatory (\citealt{Kitamoto89, Terada94, Oosterbroek97, Zycki99a, Zycki99b}). These observations revealed extreme levels of variability across a wide range of timescales. In part, this was driven by large variations in the line-of-sight absorption column, which was often significantly in excess of that seen during quiescence. Such variations are not typically seen in other black hole LMXBs. This strong and variable absorption resulted in complex X-ray spectra, making identification of standard accretion states extremely challenging. In addition, evidence for X-ray reprocessing from both ionised and neutral material was observed at varying intervals, further complicating spectral decomposition ({\it e.g.}\ \citealt{Zycki99b}). In the summer of 2015, \rm V404\,Cyg\ underwent its first major outburst since 1989, triggering an enormous multi-wavelength observing campaign ({\it e.g.}\ \citealt{Rodriguez15v404, Natalucci15, Roques15, King15v404, Jenke16v404, Gandhi16v404, Kimura16nat, MunozDarias16, Motta16v404}, as well as many other works in preparation). As part of this broadband follow-up effort, we undertook a series of high-energy X-ray observations with the \textit{Nuclear Spectroscopic Telescope Array} ({\it NuSTAR}; \citealt{NUSTAR}). Its unique combination of unprecedented high-energy sensitivity and broad bandpass (3--79\,\hbox{\rm keV}) makes {\it NuSTAR}\ extremely well suited for disentangling the contributions from reflection and absorption (as demonstrated, for example, by the recent broadband work on the active galaxy NGC\,1365; \citealt{Risaliti13nat, Walton14, Kara15, Rivers15}), and allows detailed, broadband spectroscopy to be performed on timescales much shorter than previously accessible. Critically for \rm V404\,Cyg, {\it NuSTAR}'s triggered read-out means it is also well suited to observing sources with extremely high count-rates ({\it e.g.}\ \citealt{Miller13grs, Fuerst15, Parker16, Walton16cyg}), providing clean, high signal-to-noise measurements of their spectra without suffering from instrumental issues like photon pile-up, etc. In this work, we present results from our 2015 {\it NuSTAR}\ campaign on \rm V404\,Cyg, focusing on observations made at the height of the outburst activity. The paper is structured as follows: section \ref{sec_red} describes the {\it NuSTAR}\ observations and our data reduction procedure, sections \ref{sec_time} and \ref{sec_spec} present our analysis of the temporal and spectral variability exhibited by \rm V404\,Cyg, and section \ref{sec_dis} presents a discussion of the results obtained. Finally, we summarize our main conclusions in section \ref{sec_conc}. \begin{figure*} \hspace*{-0.5cm} \epsscale{1.1} \plotone{./figs/v404cyg_lc_HR_paper.eps} \caption{{\it NuSTAR}\ lightcurve for the first observation of \rm V404\,Cyg\ (top panel, 10s bins).
Only the FPMA data are shown for clarity, and the count rates have been corrected for the increasing deadtime that occurs at very high fluxes. For reference, the beginning of the observation corresponds to MJD 57197.935. After the first four {\it NuSTAR}\ orbits, extreme flaring is observed with incident count rates exceeding 10,000\,\hbox{$\rm\thinspace ct~s^{-1}$}\ on several occasions. The strongest six flares, analysed in Section \ref{sec_flares}, are highlighted (red numbers). We also show the evolution of a broadband hardness ratio, computed between 3--10 and 10--79\,keV (bottom panel). Strong spectral variability is observed throughout this latter flaring phase. } \vspace{0.3cm} \label{fig_lcHR} \end{figure*} \section{Observations and Data Reduction} \label{sec_red} Triggered by the summer 2015 outburst, we undertook five observations with {\it NuSTAR}. The timing of these observations is shown in the context of the long-term variability seen by {\it INTEGRAL~\/}\ in Figure \ref{fig_longlc}; the first was undertaken during the height of the activity from the source, and the remaining four were spaced throughout the following few weeks (\citealt{Walton15fade2, Walton15fade1}), during which \rm V404\,Cyg\ declined back to quiescence (\citealt{Sivakoff15quies, Sivakoff15fade}). In this work, we focus on the first observation. Although this is split over two OBSIDs (90102007002, 90102007003), in reality they comprise one continuous observation. The subsequent {\it NuSTAR}\ observations will be presented in Rana et al. (in preparation). The {\it NuSTAR}\ data were reduced largely following standard procedures. Unfiltered event files were cleaned using \rm{\small NUPIPELINE}, part of the {\it NuSTAR}\ Data Analysis Software (v1.5.1; part of the standard HEASOFT distribution), and instrumental responses from {\it NuSTAR}\ CALDB v20150316 are used throughout this work. Due to the high count rate and rapid variability, it was necessary to turn off some of the filtering for hot pixels normally performed by \rm{\small NUPIPELINE}, since source counts were being removed from the peak flares. We did this by setting the `statusexpr' parameter to ``b0000xx00xx0xx000'', which controls the filtering on the STATUS column. In this way we kept the source events that were incorrectly identified as hot/flickering. The {\it NuSTAR}\ calibration database has a list of hot/flickering pixels that have already been identified, which were still removed following standard procedures. Passages of {\it NuSTAR}\ through the South Atlantic Anomaly were also excluded from our analysis. Source products were then extracted from the cleaned events from a circular region centered on the source (radius 160$''$) using \rm{\small NUPRODUCTS}\ for both focal plane modules (FPMA and FPMB). \rm V404\,Cyg\ is easily detected across the whole 3--79\,\hbox{\rm keV}\ {\it NuSTAR}\ bandpass. Owing to its extreme brightness, there were no regions of the detector on which \rm V404\,Cyg\ was located that were free of source counts, so the background was estimated from a blank region on the detector furthest from the source position (each FPM contains four detectors in a $2 \times 2$ array) in order to minimize any contribution from the source to our background estimation. 
Although there are known to be variations in the background between the detectors for each FPM, these differences are typically only at the $\sim$10\% level (in the background rate) at the highest energies of the {\it NuSTAR}\ bandpass (where the internal detector background dominates; \citealt{NUSKYBKG}). \rm V404\,Cyg\ is always a factor of $>$10 above the estimated background at all energies in the spectra extracted here, so such effects are negligible. Finally, when necessary, data from the two OBSIDs were combined using \hbox{\rm{\small ADDASCASPEC}}\ for each FPM (although we do not combine the FPMA and FPMB data), and all spectra were grouped such that each spectral bin contains at least 50 counts, to allow the use of {$\chi^{2}$}\ minimization during spectral fitting. \section{Temporal Variability} \label{sec_time} In Figure \ref{fig_lcHR} (\textit{top panel}), we show the lightcurve observed by {\it NuSTAR}. The count rate shown is the incident count rate inferred rather than that directly recorded, i.e. the rate has been corrected for the deadtime (see \citealt{NUSTAR, Bachetti15}). The most striking aspect is the strong flaring seen throughout the majority of the observation, during which the flux observed from \rm V404\,Cyg\ can rapidly increase by at least an order of magnitude. Many flares comfortably exceed rates of 10,000 \hbox{$\rm\thinspace ct~s^{-1}$}\ (unless stated otherwise, count rates are quoted per FPM), with the most extreme even exceeding 20,000 \hbox{$\rm\thinspace ct~s^{-1}$}. For reference, the incident 3--79\,keV count rate for the Crab nebula is $\sim$500 \hbox{$\rm\thinspace ct~s^{-1}$}\ (\citealt{Madsen15}). Strong X-ray flaring from \rm V404\,Cyg\ has been reported by several authors throughout this outburst ({\it e.g.}\ \citealt{Rodriguez15v404, Natalucci15, Roques15, King15v404, Jenke16v404, SanFan16}). We stress again that even at these count rates the {\it NuSTAR}\ data do not suffer significantly from pile-up; at similar count rates Sco X-1 only had a pile-up fraction of $\sim$0.08\% (see appendix C in \citealt{Gref16solar}). In addition to the extreme flux variability, we also see strong spectral variability throughout the {\it NuSTAR}\ observation. Figure \ref{fig_lcHR} (\textit{bottom panel}) shows the evolution of a simple broadband hardness ratio, computed as the ratio between the count rates in the 3--10 and 10--79\,keV energy bands, which reveals a remarkable transition between the 4th and 5th {\it NuSTAR}\ orbits. During the first four orbits, the hardness ratio is relatively stable, but after this point it becomes strongly variable. This transition is roughly coincident with the onset of the flaring portion of the observation. The data from the first four orbits will be discussed in more detail in a dedicated paper (Walton et al. in preparation); here we focus on the strong flaring seen throughout the majority of the {\it NuSTAR}\ observation. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_hri_obs1_all_paper.eps} \caption{Hardness ratio--intensity diagram constructed from the data shown in Figure \ref{fig_lcHR}. The behaviour seen during this {\it NuSTAR}\ observation is extremely complex. However, the strongest flares all show similar hardness ratios. The dashed blue line marks the count rate limit adopted in extracting the flare spectra discussed in Sections \ref{sec_avflares} and \ref{sec_flares}.
} \vspace{0.3cm} \label{fig_hri} \end{figure} In order to further characterise the observed variability, in Figure \ref{fig_hri} we plot the 10--79/3--10\,\hbox{\rm keV}\ hardness ratio against the full 3--79\,\hbox{\rm keV}\ count rate. The resulting `hardness ratio -- intensity' (HRI) diagram is rather chaotic, with no clear single trend and a lot of complex structure. There are two distinct `clouds' at moderate intensity with softer spectra (lower hardness ratio, $\lesssim$0.4), which primarily correspond to the data from the first four {\it NuSTAR}\ orbits. The more complex behaviour seen in the rest of the data arises from the flaring period. Notably, though, the flares themselves all appear to have similar hardness ratios. Finally, at the very lowest fluxes observed there also appears to be a clear positive correlation between flux and hardness ratio, which breaks down above $\sim$100\,\hbox{$\rm\thinspace ct~s^{-1}$}. \section{Spectral Analysis} \label{sec_spec} The majority of this work focuses on spectral analysis of data extracted from the period of intense flaring observed by {\it NuSTAR}. Our spectral analysis is performed with \hbox{\small XSPEC}\, v12.6.0f\ (\citealt{xspec}), and parameter uncertainties are quoted at 90\% confidence for one parameter of interest throughout this work ({\it i.e.} $\Delta\chi^{2} = 2.71$). Residual cross-calibration flux uncertainties between the FPMA and FPMB detectors are accounted for by allowing multiplicative constants to float between them, fixing FPMA to unity; the FPMB constants are always found to be within 5\% of unity, as expected (\citealt{NUSTARcal}). \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_eeuf_average_paper.eps} \caption{The average X-ray spectrum from our first {\it NuSTAR}\ observation of \rm V404\,Cyg. FPMA data are shown in black, and FPMB data in red; both have been unfolded through a model that is constant with energy, and have been further rebinned for visual purposes. While strong spectral variability is observed throughout the observation, the average spectrum is still useful for highlighting certain features, notably a narrow iron emission component, indicating the presence of reprocessing by distant material, and a strong absorption edge at $\sim$7\,keV, indicating the presence of absorption significantly in excess of the Galactic column throughout much of the observation.} \vspace{0.3cm} \label{fig_spec_av} \end{figure} In Figure \ref{fig_spec_av} we show the average spectrum obtained from the full {\it NuSTAR}\ observation. Given the strong spectral variability discussed previously, a detailed analysis of this average spectrum would not be particularly meaningful. However, a visual inspection is still useful in terms of highlighting some of the features of the observed data. In particular, there is clear structure in the iron K bandpass. There is a strong absorption edge above 7\,\hbox{\rm keV}, indicating there is absorption in excess of the Galactic column ($N_{\rm{H,Gal}} \sim 10^{22}$ cm$^{-2}$; {\it e.g.}\ \citealt{Reynolds14quies, Bernardini14, Rana16}) throughout much of the observation. This is similar to the 1989 outburst ({\it e.g.}\ \citealt{Oosterbroek97, Zycki99b}). In addition, as discussed by \cite{King15v404} and \cite{Motta16v404}, there is a clear, narrow emission line from neutral iron, indicating a contribution from reprocessing by distant, neutral material; evidence for such emission was also seen in the 1989 data (\citealt{Zycki99b}).
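The hardness ratios used in the preceding section, and the narrow-band variant introduced below, reduce to simple operations on binned event counts. The following schematic sketch is ours, not the reduction pipeline described in section 2, and it omits the deadtime (livetime) corrections discussed above:

\begin{verbatim}
# Schematic hardness-intensity calculation from an event list
# (illustrative only; real NuSTAR analysis uses the NUPRODUCTS
# extraction and livetime corrections described in section 2).
import numpy as np

def hardness_intensity(t, e, t_bins, soft=(3.0, 10.0), hard=(10.0, 79.0)):
    # t: event times (s); e: event energies (keV); t_bins: bin edges (s)
    s, _ = np.histogram(t[(e >= soft[0]) & (e < soft[1])], bins=t_bins)
    h, _ = np.histogram(t[(e >= hard[0]) & (e < hard[1])], bins=t_bins)
    rate = (s + h) / np.diff(t_bins)   # total count rate per time bin
    hr = h / np.clip(s, 1, None)       # hard/soft hardness ratio
    return rate, hr
\end{verbatim}

The narrow-band edge ratio used in the data selection below corresponds to the same operation with soft and hard bands of 6.5--7.0 and 7.5--8.0\,keV, respectively.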
\subsection{The Average Flare Spectrum} \label{sec_avflares} In Figure \ref{fig_flares} (\textit{top panel}), we show the average spectrum for the flares, extracted by selecting only periods where the count rate (per FPM) was $>$ 4000\,\hbox{$\rm\thinspace ct~s^{-1}$}. The total good exposure in the resulting spectrum is only $\sim$110--120\,s. In contrast to the average spectrum, there is no visually apparent edge at $\sim$7\,\hbox{\rm keV}, indicating the line-of-sight absorption is much weaker during these periods, and that we therefore have a cleaner view of the intrinsic spectrum. The flare spectrum is very hard, and there is still visible structure in the iron K band. In Figure \ref{fig_flares} (\textit{bottom panel}), we show the data/model residuals to a simple model consisting of a powerlaw continuum with a high-energy exponential cutoff, modified by a neutral absorption column which is free to vary above a lower limit of $10^{22}$\,cm$^{-2}$ (set by prior constraints on the Galactic column; see above). We use the \rm{\small TBABS}\ absorption model, adopting the ISM abundances reported in \cite{tbabs} as our `solar' abundance set, and the cross-sections of \cite{Verner96}, as recommended. This model is fit to the 3--4, 8--10 and 50--79\,keV energy ranges in order to minimize the influence of any reflected emission present in the spectrum. The photon index obtained is very hard, $\Gamma \sim 1.5$, with a cutoff energy of $E_{\rm{cut}} \sim 160$\,keV. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_flares_eeuf_ratio.eps} \caption{The flare spectrum, extracted from periods where the count rate (per FPM) exceeds 4,000\,\hbox{$\rm\thinspace ct~s^{-1}$}\ (top panel, computed in the same manner as Figure \ref{fig_spec_av}). As before, the FPMA and FPMB data are shown in black and red, respectively, and the data have been further rebinned for visual purposes. The inset shows a comparison between the FPMA data for the flare spectrum and the average spectrum (blue) in the iron K bandpass, with the latter scaled up in flux so that the peaks of the narrow iron emission match; the strong edge seen in the average spectrum is not present in the flare spectrum. The bottom panel shows the data/model ratio to a simple powerlaw continuum with a high-energy exponential cutoff, fit to the 3--4, 8--10 and 50--79\,keV bands. The residuals imply the presence of a strong reflection component from the inner accretion disk.} \vspace{0.3cm} \label{fig_flares} \end{figure} A very strong Compton hump is visible around $\sim$20--30\,keV, indicating a significant contribution from X-ray reprocessing by optically-thick material. The iron emission is also rather strong, and although there is a narrow core to the line profile, the majority of the line emission is broadened with a clear red-wing, a hallmark of relativistically broadened reflection from an accretion disk (referred to as a `diskline' profile; {\it e.g.}\ \citealt{Fabian89, kdblur}). 
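For reference, the cutoff powerlaw continuum used here (and throughout the modeling below) has the standard form \begin{equation} A(E) \propto E^{-\Gamma}\,e^{-E/E_{\rm{cut}}}, \end{equation} so that $\Gamma$ sets the slope of the continuum and $E_{\rm{cut}}$ the energy scale of its exponential rollover.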
Modeling the 3--10\,\hbox{\rm keV}\ bandpass with the simple continuum model above (fixing the cutoff energy to its best-fit value, given the limited energy range being considered), and including both an unresolved Gaussian at 6.4\,keV and a \rm{\small RELLINE}\ component (\citealt{relconv}) to account for the narrow core and the iron emission from the accretion disk, respectively, we find that the \rm{\small RELLINE}\ component has an equivalent width of $\mathrm{EW} \sim 400$\,eV, while the narrow core is much weaker, with $\mathrm{EW} \sim 25$\,eV. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_hri_edge.eps} \caption{Hardness ratio--intensity diagram, similar to Figure \ref{fig_hri} but with 100s time bins, for the narrow-band hardness ratio ($R_{\rm{edge}}$; see Section \ref{sec_sel}) constructed to probe the depth of the iron edge at $\sim$7\,keV. The major flares, which do not show the prominent edge seen in the average spectrum (Figures \ref{fig_spec_av}, \ref{fig_flares}), show $R_{\rm{edge}}$\ $>$ 0.7 (indicated with the dashed blue line), which is used as a limit to identify other periods with similarly low levels of absorption. } \vspace{0.3cm} \label{fig_hri_edge} \end{figure} \subsection{Flux-Resolved Spectral Evolution} \label{sec_flux} Isolating and modeling the reprocessed emission from the accretion disk is of significant importance, as this provides information on both the spin of the black hole ({\it e.g.}\ \citealt{Risaliti13nat, Walton13spin, Walton14, Reynolds14rev}), and the geometry/location of the illuminating X-ray source ({\it e.g.}\ \citealt{Wilkins12}). This is of particular interest for the intense flares, since such X-ray flares are often associated with jet ejection ({\it e.g.}\ \citealt{Corbel02}). However, constraining the disk reflection is not necessarily straightforward from the iron band alone. In order to aid in disentangling the contributions from reprocessing by the accretion disk and by more distant material to the spectrum, the main body of this work focuses on modeling the broadband evolution of \rm V404\,Cyg\ as a function of flux during the flaring phase of our {\it NuSTAR}\ observation. \subsubsection{Data Selection} \label{sec_sel} One of the main complications for broadband modeling is the strong and variable absorption that is present throughout this observation. In order to minimize this issue, based on the flare spectrum (Figure \ref{fig_flares}), we select only periods of similarly low absorption for our flux-resolved spectral analysis. In order to identify such periods, we define a narrow-band hardness ratio (hereafter $R_{\rm{edge}}$), with the softer band (6.5--7.0\,\hbox{\rm keV}) just below the sharp edge seen in the average spectrum, and the harder band (7.5--8.0\,\hbox{\rm keV}) just above, such that we can track the strength of the edge throughout the flaring period. We show the behaviour of $R_{\rm{edge}}$\ in Figure \ref{fig_hri_edge}, in the form of a similar HRI diagram to Figure \ref{fig_hri}. Note that we are forced to adopt a coarser temporal binning (100s) in order for $R_{\rm{edge}}$\ to be well constrained owing to the narrow energy bands used; hence the peak 3--79\,keV count rates differ in this Figure.
Nevertheless, it is clear that in terms of $R_{\rm{edge}}$, the highest count rates ({\it i.e.}\ the strongest flares) show the hardest spectra, with $R_{\rm{edge}}$\ $\gtrsim 0.7$, consistent with the lack of absorption seen in Figure \ref{fig_flares}. Furthermore, although the majority of the observation shows a much stronger edge, there are other non-flare periods in which the absorption is similarly weak. These periods are spread randomly throughout the flaring portion of the observation, and span a broad range of flux. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_5lvl_eeuf.eps} \caption{The five X-ray spectra extracted from periods of low absorption (determined based on the strength of the absorption edge at $\sim$7\,keV) for our flux-resolved analysis (F1--5, shown in black, red, green, blue and magenta, respectively; the highest flux state, F5, is the same as the flare spectrum shown in Figure \ref{fig_flares}). Only the FPMA data are shown for clarity, and as with Figures \ref{fig_spec_av} and \ref{fig_flares}, the data have been unfolded through a constant and rebinned for visual purposes. The X-ray spectrum visibly evolves with flux, with the continuum above $\sim$10\,\hbox{\rm keV}\ becoming more peaked (i.e. there is more spectral curvature) at higher fluxes. } \vspace{0.3cm} \label{fig_5lvl} \end{figure} We therefore select only data with $R_{\rm{edge}}$\ $\geq 0.7$ for the lower flux intervals (i.e. $<$4000\,\hbox{$\rm\thinspace ct~s^{-1}$}), and then divide these periods into four flux bins: 100--500, 500--1000, 1000--2000 and 2000--4000\ \hbox{$\rm\thinspace ct~s^{-1}$}\ (per FPM, using the count rates from the finer 10\,s binning). The lower limit to the data considered is set to 100\,\hbox{$\rm\thinspace ct~s^{-1}$}\ in order to avoid the low flux region in which the flux and the broadband hardness ratio are correlated (see Figure \ref{fig_hri}), as the source behaviour is clearly distinct in this regime. We therefore have five flux bins in total (referred to as F1--5, in order of increasing flux), including the flare spectrum extracted from $>$4000\,\hbox{$\rm\thinspace ct~s^{-1}$}\ in section \ref{sec_avflares}. Details of these flux bins are given in Table \ref{tab_flux} and the extracted spectra are shown in Figure \ref{fig_5lvl}; the lack of strong, visible absorption edges in any of these spectra demonstrates the general success of our low-absorption selection procedure. As with the flare spectrum, despite the lack of strong absorption, the spectra from lower fluxes are also very hard. There are a couple of trends that can be seen from a visual inspection of these data. First, the relative contribution from the narrow core of the iron emission is stronger at lower fluxes. Second, the continuum above $\sim$10\,\hbox{\rm keV}\ shows considerably more spectral curvature at higher fluxes. Before proceeding with our more detailed spectral analysis, we repeat our phenomenological modelling of the 3--10\,\hbox{\rm keV}\ bandpass performed above for the flare spectrum, and fit the data for each of these flux bins with a combination of a broad and narrow iron emission component. In order to minimize parameter degeneracies, given the limited bandpass utilized, we make the simplifying assumption that the profile of the broad iron emission is the same for all fluxes.
With this simple modelling, although the individual uncertainties are relatively large, we find that the strength of the broad iron emission increases with increasing flux (see Table \ref{tab_flux}). \subsubsection{Basic Model Setup} \label{sec_mod} Having extracted our low-absorption, flux-resolved {\it NuSTAR}\ data, we construct a spectral model for \rm V404\,Cyg\ that incorporates the primary emission from the black hole, as well as X-ray reprocessing by both the accretion disk and more distant material. Our model also includes neutral absorption, allowing for both the Galactic column and a second absorption column, assumed to be intrinsic to the source, to account for any absorption in excess of the Galactic column. To model the relativistic disk reflection, we use the \rm{\small RELXILL}\ model (\citealt{relxill}). This is a merging of the \rm{\small XILLVER}\ reflection model (v0.4c; \citealt{xillver}) with the \rm{\small RELCONV}\ model for the relativistic effects close to a black hole that smear out the rest-frame reflection spectrum (\citealt{relconv}). In particular, given the potential association between the X-ray flares and jet activity (as mentioned above, and discussed in more detail in section \ref{sec_jet}), we use the \rm{\small RELXILLLP}\ model (part of the broader \rm{\small RELXILL}\ family of models). This includes both the primary continuum --- assumed to be a powerlaw with a high-energy exponential cutoff --- and the reflected emission from the accretion disk, and treats the illuminating X-ray source as a point source located above the accretion disk on the spin-axis of the black hole ({\it i.e.}\ the `lamppost' accretion geometry), an idealized geometrical approximation appropriate for the scenario in which the hard X-ray continuum is associated with the base of a jet ({\it e.g.}\ \citealt{Markoff05, Miller12}). The key parameters for the \rm{\small RELXILLLP}\ model are the photon index and the high-energy cutoff of the illuminating continuum ($\Gamma$, $E_{\rm{cut}}$), the spin of the black hole ($a^*$), the inclination and the inner and outer radii of the accretion disk ($i$, $r_{\rm{in}}$, $r_{\rm{out}}$), the iron abundance and ionization parameter of the accreting material ($A_{\rm{Fe}}$, $\xi = 4\pi F/n$, where $F$ is the ionizing flux incident on the disk, integrated between 1--1000\,Ry, and $n$ is the density of the material), the height of the illuminating source above the disk ($h$) and the strength of the disk reflection ($R_{\rm{disk}}$). Note that here, we use the ``reflection fraction'' definition outlined in \cite{relxill_norm}. This determines the strength of the reflected emission from the relative intensities of the powerlaw continuum as seen by the disk and by the distant observer, which can be computed self-consistently for the lamppost geometry via relativistic ray-tracing. The outer radius of the disk is set to 1000\,$r_{\rm{G}}$\ throughout our analysis (where $r_{\rm{G}}$\ is the gravitational radius), the maximum permitted by the model, and following \cite{Garcia15}, we consider cutoff energies up to 1000\,keV. We also compute $h$ in units of the event horizon ($r_{\rm{H}}$, which varies between 1 and 2\,$r_{\rm{G}}$\ for maximally rotating and non-rotating black holes, respectively) throughout this work, so that we can require that the X-ray source is always outside this radius.
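For reference, the horizon radius for a black hole of spin $a^*$ is \begin{equation} r_{\rm{H}} = r_{\rm{G}}\left(1+\sqrt{1-a^{*2}}\right), \end{equation} which recovers the two limits quoted above: $r_{\rm{H}}=2\,r_{\rm{G}}$ for a non-rotating black hole ($a^*=0$) and $r_{\rm{H}}=1\,r_{\rm{G}}$ for a maximally rotating one ($|a^*|=1$).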
For practical reasons, we actually set a lower limit of 2$r_{\rm{H}}$\ for $h$ in order to prevent the model from implying unphysically small X-ray sources, as the illuminating source obviously must have some physical extent (particularly if it is associated with a jet) despite being approximated in our models as a point source. \begin{table} \caption{Details of the five flux bins used in our flux-resolved spectral analysis of the periods of low absorption (selected to have $R_{\rm{edge}}$\ $\geq$ 0.7).} \vspace{-0.25cm} \begin{center} \begin{tabular}{c c c c} \hline \hline \\[-0.1cm] Flux & Count & Good & Broad \\ \\[-0.25cm] Bin & Rate & Exposure & Fe K EW \\ \\[-0.25cm] & (ct s$^{-1}$ FPM$^{-1}$) & (FPMA/B; s) & (eV) \\ \\[-0.2cm] \hline \hline \\[-0.1cm] F1 & 100--500 & 1074/1105 & $260^{+70}_{-60}$ \\ \\[-0.2cm] F2 & 500--1000 & 1067/1120 & $350^{+60}_{-50}$ \\ \\[-0.2cm] F3 & 1000--2000 & 722/769 & $390^{+40}_{-80}$ \\ \\[-0.2cm] F4 & 2000--4000 & 260/280 & $460^{+60}_{-100}$ \\ \\[-0.2cm] F5 & $>$4000 & 112/121 & $440^{+50}_{-90}$ \\ \\[-0.2cm] \hline \hline \end{tabular} \vspace{-0.2cm} \label{tab_flux} \end{center} \vspace{0.5cm} \end{table} For the distant reprocessor, we use the \rm{\small XILLVER}\ reflection model. As the narrow core is at 6.4\,keV, we assume this to be neutral (i.e. $\log\xi = 0$; throughout this work we quote $\xi$ in units of \hbox{\rm erg~cm~s$^{-1}$}). The key parameters here are the photon index and high-energy cutoff of the illuminating continuum, the inclination of the reflecting slab, the iron abundance, and the strength of the reflected emission. Both the photon index and the high-energy cutoff are assumed to be the same as for the \rm{\small RELXILLLP}\ component, and as \rm{\small RELXILLLP}\ already includes the primary continuum emission, we configure the \rm{\small XILLVER}\ model to only provide the reflected emission ({\it i.e.}\ we set the reflection fraction parameter to $-1$). One complication is that the geometry of the distant reprocessor is not known, and different geometries can result in differences in the reflected spectra ({\it e.g.}\ \citealt{Brightman15}). \rm{\small XILLVER}\ assumes a simple semi-infinite slab, but this is unlikely to be physically realistic. Therefore, in order to allow the \rm{\small XILLVER}\ component representing the distant reprocessor the flexibility to differ from the simple slab approximation, we allow the iron abundance and inclination parameters of this component to vary independently of the other model components. These are effectively `dummy' parameters which allow us to incorporate this flexibility with a simple parameterization. However, we set a lower limit on $A_{\rm{Fe}}$ of 0.9, such that the limit in which the distant reflection dominates the 2--10\,keV bandpass would remain consistent with \cite{King15v404}, who report equivalent widths of up to 1\,keV for the narrow, neutral iron emission based on their analysis of the high-resolution {\it Chandra}\ HETG data taken during this outburst.
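For clarity, the equivalent widths quoted here and throughout follow the usual definition, \begin{equation} \mathrm{EW} = \int \frac{F_{\rm{line}}(E)}{F_{\rm{cont}}(E)}\,dE, \end{equation} i.e. the width of local continuum containing the same flux as the line, so that stronger line emission relative to the continuum corresponds to a larger EW.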
\begin{table*} \caption{A summary of the lamppost reflection models applied during our flux- and flare-resolved analyses, presented in Sections \ref{sec_flux} and \ref{sec_flares}, respectively.} \vspace{-0.25cm} \begin{center} \begin{tabular}{c c c c} \hline \hline \\[-0.1cm] Model & Dataset & Source Emission & Notes \\ \\[-0.2cm] \hline \\[-0.1cm] 1 & Flux-resolved & Lamppost only & $R_{\rm{disk}}$ a free parameter, $r_{\rm{in}}$\ fixed at the ISCO \\ \\[-0.2cm] 2 & Flux-resolved & Lamppost only & $R_{\rm{disk}}$ calculated self-consistently, $r_{\rm{in}}$\ free to vary, $h$ constant \\ \\[-0.2cm] 3 & Flux-resolved & Lamppost only & $R_{\rm{disk}}$ calculated self-consistently, $r_{\rm{in}}$\ fixed at the ISCO, $h$ free to vary \\ \\[-0.2cm] 4 & Flux-resolved & Lamppost $+$ disk & $R_{\rm{disk}}$ calculated self-consistently, $r_{\rm{in}}$\ free to vary, $h$ constant \\ \\[-0.2cm] 5 & Flare-resolved & Lamppost only & $R_{\rm{disk}}$ calculated self-consistently, $r_{\rm{in}}$\ fixed at the ISCO \\ \\[-0.2cm] 6 & Flare-resolved & Lamppost $+$ disk & $R_{\rm{disk}}$ calculated self-consistently, $r_{\rm{in}}$\ fixed at the ISCO \\ \\[-0.2cm] 6i & Flare-resolved & Lamppost $+$ disk & Same as model 6, but $i$ limited to $\geq$50$^{\circ}$\ \\ \\[-0.2cm] \hline \hline \end{tabular} \vspace{-0.2cm} \label{tab_model} \end{center} \end{table*} \begin{table*} \caption{Results for the free parameters in the basic lamppost reflection model (Model 1) constructed for the spectral evolution as a function of flux.} \vspace{-0.25cm} \begin{center} \begin{tabular}{c c c c c c c c c} \hline \hline \\[-0.1cm] Model Component & \multicolumn{2}{c}{Parameter} & Global & \multicolumn{5}{c}{Flux Level} \\ \\[-0.15cm] & & & & F1 & F2 & F3 & F4 & F5 \\ \\[-0.2cm] \hline \hline \\ \rm{\small TBABS}$_{\rm{src}}$ & {$N_{\rm H}$}\ & [$10^{21}$ cm$^{-2}$] & & $9.1^{+2.2}_{-1.7}$ & $9.1^{+1.6}_{-2.1}$ & $<0.7$ & $<1.6$ & $<0.4$ \\ \\[-0.2cm] \rm{\small RELXILLLP}\ & $\Gamma$ & & & $1.43 \pm 0.02$ & $1.49 \pm 0.03$ & $1.42^{+0.02}_{-0.01}$ & $1.41^{+0.02}_{-0.01}$ & $1.40^{+0.01}_{-0.02}$ \\ \\[-0.2cm] & $E_{\rm{cut}}$ & [keV] & & $>840$\tmark[a] & $610^{+340}_{-210}$ & $240^{+10}_{-20}$ & $150^{+20}_{-10}$ & $120\pm10$ \\ \\[-0.2cm] & $a^*$ & & $>-0.1$ \\ \\[-0.2cm] & $i$ & [$^{\circ}$] & $27\pm2$ \\ \\[-0.2cm] & $h$ & $r_{\rm{H}}$\ & & $6.0^{+7.0}_{-2.0}$ & $4.7^{+4.0}_{-1.0}$ & $3.9^{+3.0}_{-1.1}$ & $3.7^{+2.8}_{-0.9}$ & $3.2^{+2.5}_{-1.1}$ \\ \\[-0.2cm] & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & & $3.01^{+0.02}_{-0.01}$ & $3.02 \pm 0.01$ & $3.09^{+0.01}_{-0.02}$ & $3.15^{+0.05}_{-0.02}$ & $3.47^{+0.05}_{-0.04}$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$ & [solar] & $1.9^{+0.3}_{-0.1}$ \\ \\[-0.2cm] & $R_{\rm{disk}}$ & & & $1.1\pm0.2$ & $1.5^{+0.3}_{-0.2}$ & $1.7^{+0.4}_{-0.2}$ & $2.0^{+0.6}_{-0.2}$ & $3.0^{+0.8}_{-0.5}$ \\ \\[-0.2cm] & Norm & & & $0.15^{+0.03}_{-0.02}$ & $0.23^{+0.05}_{-0.03}$ & $0.33^{+0.08}_{-0.09}$ & $0.47^{+0.09}_{-0.06}$ & $0.65^{+0.34}_{-0.15}$ \\ \\[-0.2cm] \hbox{\rm{\small XSTAR}}$_{\rm{abs}}$ & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & $4.6^{+0.8}_{-0.3}$ \\ \\[-0.2cm] & {$N_{\rm H}$}\ & [$10^{21}$ cm$^{-2}$] & & $3.7^{+5.0}_{-2.3}$ & $4.2^{+4.8}_{-1.5}$ & $<3.5$ & $3.4^{+4.0}_{-1.3}$ & $3.0^{+2.7}_{-1.1}$ \\ \\[-0.2cm] \rm{\small XILLVER}\ & $i$\tmark[b] & [$^{\circ}$] & $<11$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$\tmark[b] & [solar] & $<0.91$ \\ \\[-0.2cm] & Norm & [$10^{-2}$] & & $1.2^{+1.1}_{-0.8}$ & $8.6^{+0.7}_{-1.1}$ & $12.7^{+0.8}_{-0.7}$ & $18.7^{+1.6}_{-1.5}$ & $33.0^{+2.7}_{-3.0}$ \\ 
\\[-0.2cm] \hbox{\rm{\small XSTAR}}$_{\rm{emis}}$ & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & & $<1.7$ \\ \\[-0.2cm] & Norm & [$10^{4}$] & & $1.3\pm0.3$ \\ \\[-0.2cm] \hline \\[-0.1cm] {$\chi^{2}$}/DoF & & & 10599/10308 & \\ \\[-0.2cm] \hline \\[-0.15cm] $F_{3-79}$\tmark[c] & \multicolumn{2}{c}{[$10^{-8}$\,\hbox{$\rm\thinspace erg~cm^{-2}~s^{-1}$}]} & & $3.78 \pm 0.02$ & $8.85 \pm 0.03$ & $16.06 \pm 0.06$ & $28.0 \pm 0.1$ & $54.6 \pm 0.3$ \\ \\[-0.2cm] \hline \hline \end{tabular} \vspace{-0.2cm} \label{tab_param} \end{center} \vspace{0.1cm} $^a$ $E_{\rm{cut}}$ is constrained to be $\leq$1000\,keV following \cite{Garcia15}. \\ $^b$ These act as dummy `shape' parameters to allow this component the flexibility to deviate from the simple slab approximation adopted in the XILLVER model (see main text), and in turn allow for our lack of knowledge with regard to the geometry of the distant reprocessor. \\ $^c$ Average flux in the 3--79\,keV bandpass \vspace{0.4cm} \end{table*} \cite{King15v404} also find both emission and absorption lines from \hbox{\rm Fe\,{\small XXV}}\ and \hbox{\rm Fe\,{\small XXVI}}. We therefore also allow for a contribution from photoionized emission and absorption. These are treated with grid models generated with \hbox{\rm{\small XSTAR}}\ (\citealt{xstar}), and are customised specifically for \rm V404\,Cyg. In brief, these grids are calculated assuming the abundance set derived by \cite{GonzHern11} for the stellar companion of \rm V404\,Cyg, and their free parameters are the ionization state of the material, its column density, and its outflow velocity (see King et al., in preparation, for full details). While \cite{King15v404} find the absorption features to be mildly outflowing, the velocity shifts are small in comparison to the spectral resolution of {\it NuSTAR}, and we therefore keep these photoionised components fixed at rest. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_Rdisc.eps} \caption{The evolution of the disk reflection fraction $R_{\rm{disk}}$ with source flux inferred from our basic lamppost model for the spectral evolution seen from \rm V404\,Cyg\ (Model 1). We find that $R_{\rm{disk}}$ increases with increasing flux, implying an evolution in the geometry of the innermost accretion flow. The strong reflection found at the highest fluxes ($R_{\rm{disk}} \sim 3$) would require gravitational lightbending. The data points are color-coded to match the spectra shown in Figure \ref{fig_5lvl}. } \label{fig_Rfrac} \end{figure} Finally, for the neutral absorption, we again use the \rm{\small TBABS}\ model. The Galactic column is set to $N_{\rm{H,Gal}} = 10^{22}$\,cm$^{-2}$, as discussed previously, and is assumed to have the ISM abundances of \cite{tbabs}. For the additional, source intrinsic absorption, we use the version of \rm{\small TBABS}\ with variable elemental abundances, so that we can link the iron abundance of this absorber to that of the disk reflection model ({\it i.e.}\ we assume that \rm V404\,Cyg\ is a chemically homogeneous system). We assume that this absorber is sufficiently distant from the innermost accretion flow that it should act on all of the emission components arising from this region.
This absorber may potentially be associated with the distant reprocessor, and to allow for this possibility we configure the model such that while the Galactic absorption acts on all the emission components, the source-intrinsic absorber acts only on the primary emission and the relativistic disk reflection, but not the distant reflection. However, this choice makes little difference to the results obtained, as the distant reprocessor makes a negligible contribution to the spectrum at the lowest energies covered by {\it NuSTAR}. The form of the basic model applied, in \hbox{\small XSPEC}\ jargon, is therefore as follows: \rm{\small TBABS}$_{\rm{Gal}}$ $\times$ ( \rm{\small XILLVER}\ $+$ \hbox{\rm{\small XSTAR}}$_{\rm{emis}}$ $+$ ( \hbox{\rm{\small XSTAR}}$_{\rm{abs}}$ $\times$ \rm{\small TBABS}$_{\rm{src}}$ $\times$ \rm{\small RELXILLLP}\ ) ) We apply this model to the five flux states shown in Figure \ref{fig_5lvl} simultaneously. In doing so, we require the black hole spin, the inclination of the accretion disk and the iron abundance of the system to be the same across all flux states, as these physical parameters should not vary over the course of this {\it NuSTAR}\ observation. As it is unlikely that the geometry of the distant reprocessor would evolve significantly throughout our observation, we also link the `shape' parameters that would relate to geometry in our simple parameterization (iron abundance, slab inclination) for this component across all flux levels. However, we do allow this component to respond to the changes in the intrinsic emission from \rm V404\,Cyg. During the fitting process we found that the ionization of the \hbox{\rm{\small XSTAR}}\ absorption component was consistent for all the flux states, so for simplicity we also linked this parameter. Additionally, we found that the photoionized emission only makes a significant contribution to the lowest flux state, F1, and so fixed its normalization to zero for F2--5. Furthermore, we found that this photoionized emission only provided an additional contribution to the narrow Fe K emission, and as such the column density and normalization were highly degenerate, so we fixed the former to an arbitrary value of $10^{19}$\,cm$^{-2}$.
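For reference, a setup of this kind can be constructed in PyXspec along the lines of the minimal sketch below. The file names, table-model grids, and parameter indices are placeholders rather than our actual data products; the sketch simply illustrates the model expression and the linking of the global parameters across the five flux states.
\begin{verbatim}
# Minimal PyXspec sketch of the flux-resolved setup (Model 1).
# File names and parameter indices are placeholders.
from xspec import AllData, AllModels, Fit, Model

# Load the five flux-resolved spectra into separate data groups
AllData("1:1 F1.pha 2:2 F2.pha 3:3 F3.pha 4:4 F4.pha 5:5 F5.pha")

AllModels.lmod("relxill")  # relxilllp is an XSPEC local model package

# TBabs_Gal * ( xillver + XSTAR_emis + XSTAR_abs * TBabs_src * relxilllp )
m = Model("TBabs*(xillver + atable{xstar_emis.fits}"
          " + mtable{xstar_abs.fits}*TBabs*relxilllp)")

# In XSPEC, parameters in data groups 2-5 are linked to group 1 by
# default, so global parameters (a*, i, A_Fe, ...) are shared
# automatically; parameters that vary between flux states are untied
# by clearing their links.
SPIN, GAMMA = 6, 4                   # placeholder parameter indices
for grp in range(2, 6):
    AllModels(grp)(GAMMA).link = ""  # free photon index per flux state

AllModels(1)(SPIN).frozen = False    # spin fitted, common to all states
Fit.statMethod = "chi"
Fit.perform()
\end{verbatim}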
\begin{table*} \caption{Results for the high-spin solutions obtained with the lamppost reflection models constructed to investigate potential geometric evolution scenarios as a function of flux (Models 2 and 3).} \vspace{-0.25cm} \begin{center} \begin{tabular}{c c c c c c c c c} \hline \hline \\[-0.1cm] Model Component & \multicolumn{2}{c}{Parameter} & Global & \multicolumn{5}{c}{Flux Level} \\ \\[-0.15cm] & & & & F1 & F2 & F3 & F4 & F5 \\ \\[-0.2cm] \hline \hline \\[-0.1cm] \multicolumn{9}{c}{Model 2: truncating disk, static corona} \\ \\[-0.1cm] \rm{\small RELXILLLP}\ & $\Gamma$ & & & $1.41^{+0.02}_{-0.03}$ & $1.44^{+0.02}_{-0.03}$ & $1.40\pm0.01$ & $1.37\pm0.02$ & $1.37\pm0.01$ \\ \\[-0.2cm] & $E_{\rm{cut}}$ & [keV] & & $>540$\tmark[a] & $330\pm60$ & $190\pm10$ & $125^{+8}_{-6}$ & $91^{+4}_{-3}$ \\ \\[-0.2cm] & $a^*$ & & $>0.95$ \\ \\[-0.2cm] & $i$ & [$^{\circ}$] & $36\pm1$ \\ \\[-0.2cm] & $h$ & $r_{\rm{H}}$\ & $2.3^{+0.4}_{-0.1}$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$ & [solar] & $3.0\pm0.1$ \\ \\[-0.2cm] & $r_{\rm{in}}$\ & $r_{\rm{ISCO}}$\ & & $2.5^{+0.4}_{-0.3}$ & $2.3^{+0.1}_{-0.2}$ & $2.0\pm0.1$ & $1.7^{+0.2}_{-0.1}$ & 1 (fixed) \\ \\[-0.2cm] & $R_{\rm{disk}}$\tmark[b] & & & 1.3 & 1.5 & 1.7 & 1.9 & 3.0 \\ \\[-0.2cm] & Norm & & & $0.61^{+0.11}_{-0.10}$ & $0.80^{+0.18}_{-0.16}$ & $1.07\pm0.15$ & $1.53^{+0.24}_{-0.31}$ & $2.61^{+0.19}_{-0.51}$ \\ \\[-0.2cm] \hline \\[-0.1cm] {$\chi^{2}$}/DoF & & & 10656/10313 & \\ \\[-0.2cm] \hline \hline \\[-0.1cm] \multicolumn{9}{c}{Model 3: stable disk, dynamic corona} \\ \\[-0.1cm] \rm{\small RELXILLLP}\ & $\Gamma$ & & & $1.36^{+0.03}_{-0.01}$ & $1.41 \pm 0.01$ & $1.37^{+0.02}_{-0.01}$ & $1.36 \pm 0.01$ & $1.38 \pm 0.01$ \\ \\[-0.2cm] & $E_{\rm{cut}}$ & [keV] & & $540^{+80}_{-50}$ & $280 \pm 20$ & $180 \pm 10$ & $123^{+5}_{-3}$ & $94 \pm 3$ \\ \\[-0.2cm] & $a^*$ & & $>0.88$ \\ \\[-0.2cm] & $i$ & [$^{\circ}$] & $28^{+1}_{-2}$ \\ \\[-0.2cm] & $h$ & $r_{\rm{H}}$\ & & $5.2^{+1.5}_{-0.4}$ & $5.6 \pm 0.3$ & $4.6^{+0.8}_{-0.2}$ & $4.1^{+0.3}_{-0.2}$ & $4.4 \pm 0.2$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$ & [solar] & $3.03 \pm 0.05$ \\ \\[-0.2cm] & $R_{\rm{disk}}$\tmark[b] & & & 1.7 & 1.6 & 1.8 & 2.0 & 1.9 \\ \\[-0.2cm] & Norm & & & $0.18 \pm 0.01$ & $0.22 \pm 0.01$ & $0.38^{+0.05}_{-0.01}$ & $0.63 ^{+0.09}_{-0.04}$ & $1.02^{+0.12}_{-0.07}$ \\ \\[-0.2cm] \hline \\[-0.1cm] {$\chi^{2}$}/DoF & & & 10678/10313 & \\ \\[-0.2cm] \hline \hline \end{tabular} \vspace{-0.2cm} \label{tab_param_fix} \end{center} $^a$ $E_{\rm{cut}}$ is constrained to be $\leq$1000\,keV following \cite{Garcia15}. \\ $^b$ For these models, $R_{\rm{disk}}$ is calculated self-consistently in the lamppost geometry from $a^*$, $h$ and $r_{\rm{in}}$. As it is not a free parameter, errors are not estimated. \vspace{0.4cm} \end{table*} \subsubsection{Results} \label{sec_res} To begin with, we assume that the disk extends in to the innermost stable circular orbit (ISCO) for all flux states and that the corona is not outflowing, and we allow the reflection fraction to vary as a free parameter (Model 1; a summary of all the models considered in our flux- and flare-resolved analyses is given in Table \ref{tab_model}). This model provides a good fit to the global dataset, with {$\chi^{2}$}\ = 10599 for 10308 degrees of freedom (DoF). We observe several trends in the fits, which are presented in Table \ref{tab_param}. Most notably, we find that the strength of the disk reflection increases with increasing flux (see Figure \ref{fig_Rfrac}). 
This is a strong indicator that the (average) geometry of the innermost accretion flow evolves as a function of source flux. In addition to these variations, the ionization of the disk increases as the observed flux increases, as would broadly be expected for an increasing ionizing flux, and there are changes in the intrinsic continuum, with the high-energy cutoff decreasing in energy as the flux increases. The black hole spin is not well constrained with this model (although the majority of negative spins are excluded: $a^* > -0.1$). However, during the flares the disk reflection is very strong ($R_{\rm{disk}} \sim 3$). Caution over the exact value is necessary here, as the strength of the reflection obtained is dependent to some extent on the form of the high-energy curvature included in the input continuum model (a simple exponential cutoff in this work), and there is also some degeneracy between the $R_{\rm{disk}}$ and $E_{\rm{cut}}$ parameters. However, taking the result at face value, this would imply a scenario in which strong gravitational lightbending enhances the disk reflection ({\it e.g.}\ \citealt{lightbending}). In turn, this would imply that \rm V404\,Cyg\ hosts a rapidly rotating black hole ({\it e.g.}\ \citealt{Parker14mrk, Dauser14}). Although we are using an idealized lamppost geometry in this work, as long as the disk is thin this conclusion holds regardless of the precise geometry of the X-ray source, as the disk must extend close to the black hole in order to subtend a sufficiently large solid angle to produce the high reflection fraction; the validity of the thin disk assumption (which is currently implicit in the \rm{\small RELXILL}\ models) for these flares is discussed further in Section \ref{sec_spin}. Potential evidence for strong reflection during bright flares has also been seen from {\it INTEGRAL~\/}\ observations of this outburst (\citealt{Roques15, Natalucci15}). We stress, though, that despite any degeneracy between these parameters, the variations in both $E_{\rm{cut}}$ and $R_{\rm{disk}}$ are significant; if we try to force one of these two parameters to be the same for each of the flux states and only allow the other to vary, the fits are significantly worse ($\Delta\chi^{2} > 80$ for four fewer free parameters). While the absolute values themselves are somewhat model dependent, the trend of increasing $R_{\rm{disk}}$ with increasing flux appears to be robust to such issues. For the disk reflection fraction to vary in such a manner, the solid angle subtended by the disk as seen by the X-ray source must decrease as the observed flux decreases. A few potential scenarios could produce such behaviour: (1) the disk itself could evolve ({\it e.g.}\ truncate) such that it genuinely covers a smaller solid angle at lower fluxes; (2) the corona could evolve and vary its location/size, such that the degree of gravitational lightbending is reduced; (3) the corona could alternatively vary its velocity, such that the beaming away from the disk is increased. While some combination of these three effects is of course possible, and probably even likely should the flares be related to jet ejection events, from a practical standpoint their individual effects on the observed reflection emission are rather similar (\citealt{Dauser13, Fabian14}).
Therefore, in order to investigate the potential geometric evolution without introducing further parameter degeneracies, we also modify our basic model to consider two limiting scenarios representing the first two of these three possibilities (Models 2 and 3, respectively), now making use of the fact that given a combination of black hole spin, inner disk radius and X-ray source height, \rm{\small RELXILLLP}\ can self-consistently compute the expected $R_{\rm{disk}}$ from the lamppost geometry.\footnote{Models that can also self-consistently compute $R_{\rm{disk}}$ for an X-ray source with a vertical outflow velocity are under development, but are not yet ready for publication, limiting our ability to test the third scenario of a variable source velocity.} First, we assume that the corona remains static and the disk progressively truncates as the flux decreases, and second, we assume that the disk remains static and the corona progressively moves away from the disk (note that this could relate to either a physical movement of the corona, or a vertical expansion of the electron cloud). In the truncation scenario (which we call Model 2), we therefore allow the inner radius of the disk ($r_{\rm{in}}$) to vary, although given the results above we assume that during the flares the disk does reach the ISCO, and we link $h$ across all flux states. For the lower flux states, $r_{\rm{in}}$\ is computed in units of $r_{\rm{ISCO}}$, so that we can ensure that $r_{\rm{in}}$\ $\geq$ $r_{\rm{ISCO}}$, as simulations find that emission from within the ISCO is negligible ({\it e.g.}\ \citealt{Shafee08, Reynolds08}). In the dynamic corona scenario (Model 3), we assume that the disk reaches the ISCO for all fluxes, and instead vary $h$. The results from these two scenarios are presented in Table \ref{tab_param_fix}; we focus only on the key \rm{\small RELXILLLP}\ parameters as the parameters for the other components generally remain similar to Model 1. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_spin_contours.eps} \caption{$\Delta\chi^{2}$ confidence contours for the black hole spin for our relativistic disk reflection models computed with a self-consistent lamppost geometry. The top panel shows the models for our flux-resolved analysis (Models 2--4; see Section \ref{sec_res}), while the bottom panel shows the models for our analysis of the strongest six flares individually (Models 5--6; see Section \ref{sec_flares}). For Model 6, we show both the contour calculated with no constraint on the inclination (dotted) and with the inclination constrained to $i \geq 50$\,$^{\circ}$\ (solid; Model 6i) to match the estimates for the orbital plane of the binary system. The dashed horizontal lines indicate the 90, 95 and 99\% confidence limits for one parameter of interest. } \label{fig_spin} \end{figure} \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_ratio_trunc_diskbb.eps} \caption{Data/model residuals for the truncating disk model with the thermal disk emission included from our flux-resolved analysis (Model 4; see Section \ref{sec_res}). For each of the flux states, FPMA data are shown in black, and FPMB in red. As before, the data have been further rebinned for visual clarity.
} \label{fig_flux_rat} \end{figure} Both of these scenarios provide reasonable fits to the data ({$\chi^{2}$}/DoF = 10656/10313 and 10675/10312 for Models 2 and 3, respectively), although the truncation scenario formally provides the better fit, and both are worse fits than Model 1 (in which $R_{\rm{disk}}$ is a free parameter) owing to the additional physical constraints imposed. With these additional constraints, both scenarios require the spin to be at least moderate ($a^* \gtrsim 0.6$), but above this value the {$\chi^{2}$}\ landscape becomes complex. Both scenarios show distinct minima that provide similarly good fits (different by $\Delta\chi^{2} < 5$ in both cases) at a high spin value and at a more moderate spin value (see Figure \ref{fig_spin}). For the truncating disk scenario (Model 2) the high spin solution ($a^* \sim 0.97$) is marginally preferred over the lower spin solution ($a^* \sim 0.82$), while for the dynamic corona scenario (Model 3) the lower spin solution ($a^* \sim 0.65$) is marginally preferred over the high spin solution ($a^* \sim 0.97$), perhaps indicating that even when allowing the height of the X-ray source to vary the data still prefer an evolution in the inner radius of the disk at some level. In both of these scenarios, we present the results for the high spin solution (which in the latter case gives a fit of {$\chi^{2}$}/DoF = 10678/10312) as our subsequent modeling of the individual flares strongly suggests the black hole in \rm V404\,Cyg\ is rapidly rotating (see Section \ref{sec_flares}). However, for completeness, we also present the parameter constraints for the lower-spin solutions in Appendix \ref{app_lowspin}; where the solutions are not the global best fit, errors are calculated as $\Delta\chi^{2} = 2.71$ around the local $\chi^{2}$ minimum. Separating out the solutions in this manner also allows the evolution required in $r_{\rm{in}}$ and $h$ to be more clearly seen, as both of these parameters are scaled by the spin in our model implementation. As expected, we see that either the inner radius of the accretion disk moves outwards (Model 2), or the source height moves upwards (Model 3), as the flux decreases. In Model 2 the inner disk radius evolves from the ISCO (assumed) out to $\sim$2.5\,$r_{\rm{ISCO}}$, and in Model 3 the source height evolves from $\sim$4 to $\sim$6\,$r_{\rm{H}}$. Finally, although the X-ray emission in the {\it NuSTAR}\ bandpass is clearly dominated by a hard, high-energy continuum and reprocessed emission, we also test for the presence of any thermal emission from an accretion disk. Since the truncating disk scenario formally provides the better fit of the two self-consistent evolutionary scenarios, we focus on this case, and modify our model for the intrinsic emission from \rm V404\,Cyg\ to include a multi-color blackbody accretion disk (Model 4), using the \rm{\small DISKBB}\ model (\citealt{diskbb}). In the \hbox{\small XSPEC}\ model outlined in Section \ref{sec_mod}, we thus update the source term to be \rm{\small TBABS}$_{\rm{src}}$ $\times$ ( \rm{\small DISKBB}\ $+$ \rm{\small RELXILLLP}\ ). $R_{\rm{disk}}$ is still calculated self-consistently in the lamppost geometry from $a^*$, $h$ and $r_{\rm{in}}$. During the fitting process for this model, we found that the \rm{\small DISKBB}\ component only makes a significant contribution during the highest flux state (F5), and so fixed its normalization to zero for F1--4.
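To put the $r_{\rm{in}}$\ values quoted in units of $r_{\rm{ISCO}}$\ into physical context, the short sketch below evaluates the standard Bardeen--Press--Teukolsky expression for the prograde ISCO radius as a function of spin (a simple utility, not part of our fitting pipeline):
\begin{verbatim}
import numpy as np

def r_isco(a):
    """ISCO radius in gravitational radii (r_G) for a prograde orbit
    around a black hole of dimensionless spin a* (Bardeen, Press &
    Teukolsky 1972)."""
    z1 = 1.0 + (1.0 - a**2)**(1/3) * ((1.0 + a)**(1/3)
                                      + (1.0 - a)**(1/3))
    z2 = np.sqrt(3.0 * a**2 + z1**2)
    return 3.0 + z2 - np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

for a in (0.0, 0.82, 0.97, 0.998):
    print(f"a* = {a:5.3f}:  r_ISCO = {r_isco(a):.2f} r_G")

# a* = 0 gives 6 r_G; for the high-spin solutions (a* ~ 0.97),
# r_ISCO ~ 1.7 r_G, so the truncation in Model 2 (1 -> 2.5 r_ISCO)
# spans roughly 1.7-4.3 r_G.
\end{verbatim}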
\begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_flares_eem_trunc_H2_diskbb.eps} \caption{The best-fit disk reflection model obtained for the flare spectrum (F5) from Model 4 in our flux-resolved analysis (the truncating disk model with the thermal disk emission included). The total model is shown in black, and the relative contributions from the accretion disk (blue), the high-energy powerlaw tail (red), the disk reflection (magenta) and the distant reflection (green) are also shown.} \vspace{0.3cm} \label{fig_flaremod} \end{figure} This model provides a good fit to the data, with {$\chi^{2}$}/DoF = 10581/10311 ({\it i.e.}\ an improvement of $\Delta\chi^{2}$ = 75 for two additional free parameters over Model 2). We do not tabulate the parameter values, as the vast majority have not varied significantly from the values presented for Model 2 in Table \ref{tab_param_fix}, but a few key parameters are worth highlighting individually. The best-fit disk temperature for the average flare spectrum is $0.41^{+0.10}_{-0.07}$\,keV, such that this component only contributes close to the lower boundary of the {\it NuSTAR}\ bandpass. However, this temperature is similar to values reported from X-ray observatories with coverage extending to lower energies throughout this outburst ({\it e.g.}\ \citealt{Radhika16}, Rahoui et al.\ 2016, \textit{submitted}). The inclusion of this additional continuum component at the lower end of the {\it NuSTAR}\ bandpass allows the high-energy powerlaw continuum to take on a harder photon index ($\Gamma = 1.32 \pm 0.01$), and correspondingly a lower cutoff energy ($E_{\rm{cut}} = 75 \pm 4$\,keV), such that this primary continuum emission exhibits stronger curvature in the {\it NuSTAR}\ band. In turn, this allows a slightly lower reflection fraction ($R_{\rm{disk}} = 2.5$), with the source height increasing slightly to $h = 2.5^{+0.5}_{-0.1}$\,$r_{\rm{H}}$. The black hole spin remains high, with $a^* > 0.82$ (and notably the {$\chi^{2}$}\ contour only displays a single solution; see Figure \ref{fig_spin}). The data/model ratios for the five flux states are shown in Figure \ref{fig_flux_rat} for this model, and the best-fit model along with the relative contributions of the various emission components are shown in Figure \ref{fig_flaremod} for the highest flux state (F5). It is worth noting that all these flux-resolved models have returned inclinations for the inner disk of $\sim$30$^{\circ}$. This inclination would mark a large difference between the inclination of the inner disk and the orbital plane, which the latest optical studies during quiescence have estimated to be $i_{\rm{orb}} \sim 65$$^{\circ}$\ (\citealt{Khargharia10}), with literature estimates covering a range from 50--75$^{\circ}$\ (\citealt{Shahbaz94, Shahbaz96, Sanwal96}). While evidence of misalignment between the inner and outer regions of the disk has been seen in other sources, {\it e.g.}\ Cygnus X-1 (\citealt{Tomsick14, Walton16cyg}), a difference this large would likely be unphysical ({\it e.g.}\ \citealt{Fragos10, Nealon15}). We will return to this issue in the following section. \subsection{Individual Flares} \label{sec_flares} We also investigate a number of the individual flares, focusing on the six that reach or exceed $\sim$10,000\,\hbox{$\rm\thinspace ct~s^{-1}$}\ (labeled in Figure \ref{fig_lcHR}). Following the reduction procedure used in Section \ref{sec_avflares}, we extracted {\it NuSTAR}\ spectra for each of these six flares individually.
These spectra are shown in Figure \ref{fig_allflares}. While they are all reasonably similar, as suggested by their similar broadband hardness ratios (Figure \ref{fig_hri}), there are also obvious differences between them, so there is still some clear averaging of different states in our flux-resolved analysis. For example, the first flare has a harder spectrum at lower energies than the subsequent flares, and the third flare has a softer spectrum than the rest over the {\it NuSTAR}\ bandpass. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_individual_flares_eeuf.eps} \caption{The X-ray spectra extracted from the six major flares highlighted in Figure \ref{fig_lcHR} (Flares 1--6 shown in black, red, green, blue, magenta and orange, respectively). As with Figure \ref{fig_5lvl}, only the FPMA data are shown for clarity, and the data have been unfolded through a constant and rebinned for visual purposes. While the flares all show similar broadband hardness ratios (Figure \ref{fig_hri}), there are clear differences between them. For example, Flare 1 (black) shows a harder spectrum at lower energies, and Flare 3 (green) shows a softer spectrum than the rest. } \label{fig_allflares} \end{figure} \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_ratio_flares_diskbb.eps} \caption{Data/model residuals for the lamppost reflection model with the thermal disk emission included from our flare-resolved analysis (Model 6; see section \ref{sec_flares}). Again, for each of the flares the FPMA data are shown in black and the FPMB data in red, and the data have been further rebinned for visual clarity.} \label{fig_allflares_rat} \end{figure} \begin{table*} \caption{Results obtained for the free parameters in the lamppost reflection models constructed for the joint fits to the individual flare spectra (Models 5 and 6). 
For Model 6, the high-spin solution is given.} \vspace{-0.25cm} \begin{center} \begin{tabular}{c c c c c c c c c c} \hline \hline \\[-0.1cm] Model Component & \multicolumn{2}{c}{Parameter} & Global & \multicolumn{6}{c}{Flare} \\ \\[-0.15cm] & & & & 1 & 2 & 3 & 4 & 5 & 6 \\ \\[-0.2cm] \hline \hline \\[-0.1cm] \multicolumn{10}{c}{Model 5: lamppost only} \\ \\[-0.1cm] \rm{\small TBABS}$_{\rm{src}}$ & {$N_{\rm H}$}\ & [$10^{22}$ cm$^{-2}$] & & $3.3^{+0.6}_{-0.8}$ & $<0.2$ & $<0.3$ & $<0.2$ & $<0.1$ & $<0.1$ \\ \\[-0.2cm] \rm{\small RELXILLLP}\ & $\Gamma$ & & & $1.22^{+0.06}_{-0.12}$ & $1.44^{+0.01}_{-0.03}$ & $1.63^{+0.08}_{-0.12}$ & $1.52^{+0.03}_{-0.04}$ & $1.28^{+0.02}_{-0.03}$ & $1.58^{+0.01}_{-0.03}$ \\ \\[-0.2cm] & $E_{\rm{cut}}$ & [keV] & & $50^{+6}_{-10}$ & $210 \pm 30$ & $47 \pm 4$ & $94^{+12}_{-15}$ & $60^{+4}_{-6}$ & $140^{+20}_{-10}$ \\ \\[-0.2cm] & $a^*$ & & $>0.99$ \\ \\[-0.2cm] & $i$ & [$^{\circ}$] & $42\pm2$ \\ \\[-0.2cm] & $h$ & $r_{\rm{H}}$\ & & $<2.2$ & $<2.4$ & $4.0^{+3.5}_{-1.0}$ & $2.7^{+1.6}_{-0.3}$ & $<2.1$ & $8.0^{+3.3}_{-0.9}$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$ & [solar] & $2.9^{+0.3}_{-0.6}$ \\ \\[-0.2cm] & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & & $3.2 \pm 0.1$ & $3.5^{+0.2}_{-0.1}$ & $3.1^{+0.2}_{-0.1}$ & $3.4^{+0.2}_{-0.1}$ & $3.36^{+0.07}_{-0.05}$ & $2.2 \pm 0.1$ \\ \\[-0.2cm] & $R_{\rm{disk}}$\tmark[a] & & & $5.3$ & $4.4$ & $2.3$ & $3.4$ & $5.3$ & $1.5$ \\ \\[-0.2cm] & Norm & & & $4.2^{+0.3}_{-0.5}$ & $4.4^{+0.9}_{-0.7}$ & $1.1^{+0.3}_{-0.4}$ & $2.1^{+0.5}_{-1.1}$ & $3.9^{+0.3}_{-1.2}$ & $1.3 \pm 0.2$ \\ \\[-0.2cm] \hbox{\rm{\small XSTAR}}$_{\rm{abs}}$ & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & $5.3^{+0.4}_{-0.3}$ \\ \\[-0.2cm] & {$N_{\rm H}$}\ & [$10^{21}$ cm$^{-2}$] & & $43^{+12}_{-10}$ & $<1$ & $<3$ & $<7$ & $<3$ & $<12$ \\ \\[-0.2cm] \rm{\small XILLVER}\ & Norm & & & $0.22 \pm 0.04$ & $0.42^{+0.09}_{-0.08}$ & $0.25 \pm 0.06$ & $0.34^{+0.04}_{-0.07}$ & $0.45 \pm 0.05$ & $0.40 \pm 0.04$ \\ \\[-0.2cm] \hline \\[-0.1cm] {$\chi^{2}$}/DoF & & & 6039/5859 & \\ \\[-0.2cm] \hline \hline \\[-0.1cm] \multicolumn{10}{c}{Model 6: lamppost with disk emission} \\ \\[-0.1cm] \rm{\small TBABS}$_{\rm{src}}$ & {$N_{\rm H}$}\ & [$10^{22}$ cm$^{-2}$] & & $5.1^{+1.0}_{-1.1}$ & $2.3^{+0.9}_{-0.6}$ & $3.6^{+1.4}_{-1.3}$ & $2.2^{+1.5}_{-1.3}$ & $3.3^{+1.1}_{-1.0}$ & $3.8^{+0.8}_{-0.9}$ \\ \\[-0.2cm] \rm{\small DISKBB}\ & $T_{\rm{in}}$ & [keV] & $0.49 \pm 0.04$ \\ \\[-0.2cm] & Norm & [$10^5$] & & $1.5^{+1.3}_{-0.7}$ & $1.9^{+1.5}_{-1.1}$ & $3.4^{+2.9}_{-1.5}$ & $2.2^{+1.4}_{-1.1}$ & $2.7^{+2.2}_{-1.2}$ & $2.8^{+1.9}_{-1.2}$ \\ \\[-0.2cm] \rm{\small RELXILLLP}\ & $\Gamma$ & & & $<1.04$\tmark[b] & $1.27^{+0.04}_{-0.03}$ & $1.29^{+0.12}_{-0.08}$ & $1.33^{+0.07}_{-0.06}$ & $<1.12$\tmark[b] & $1.22^{+0.06}_{-0.04}$ \\ \\[-0.2cm] & $E_{\rm{cut}}$ & [keV] & & $40^{+3}_{-2}$ & $113^{+16}_{-28}$ & $32^{+5}_{-4}$ & $60^{+12}_{-9}$ & $42^{+5}_{-3}$ & $68^{+8}_{-6}$ \\ \\[-0.2cm] & $a^*$ & & $>0.98$ \\ \\[-0.2cm] & $i$ & [$^{\circ}$] & $52^{+2}_{-3}$ \\ \\[-0.2cm] & $h$ & $r_{\rm{H}}$\ & & $<2.3$ & $<3.8$ & $<3.6$ & $<3.5$ & $<2.4$ & $7.5^{+8.7}_{-1.9}$ \\ \\[-0.2cm] & $\log\xi_{\rm{disk}}$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & & $3.5 \pm 0.1$ & $3.8 \pm 0.1$ & $3.4 \pm 0.1$ & $3.6 \pm 0.1$ & $3.6 \pm 0.1$ & $3.5 \pm 0.1$ \\ \\[-0.2cm] & $A_{\rm{Fe}}$ & [solar] & $5.0^{+0.7}_{-0.4}$ \\ \\[-0.2cm] & $R_{\rm{disk}}$\tmark[a] & & & 5.3 & 3.9 & 4.1 & 4.1 & 5.3 & 1.6 \\ \\[-0.2cm] & Norm & & & $3.5^{+0.2}_{-0.8}$ & $3.1^{+2.0}_{-1.7}$ & $1.9^{+1.0}_{-0.9}$ & $2.7^{+1.4}_{-1.3}$ & 
$3.5^{+0.2}_{-1.2}$ & $1.0^{+0.4}_{-0.2}$ \\ \\[-0.2cm] \hbox{\rm{\small XSTAR}}$_{\rm{abs}}$ & $\log\xi$ & $\log$[\hbox{\rm erg~cm~s$^{-1}$}] & $5.7 \pm 0.2$ \\ \\[-0.2cm] & {$N_{\rm H}$}\ & [$10^{21}$ cm$^{-2}$] & & $88^{+4}_{-3}$ & $<6$ & $<16$ & $<30$ & $<23$ & $19^{+17}_{-11}$ \\ \\[-0.2cm] \rm{\small XILLVER}\ & Norm & & & $0.13 \pm 0.05$ & $0.22^{+0.09}_{-0.08}$ & $0.13 \pm 0.07$ & $0.19 \pm 0.09$ & $0.29 \pm0.06$ & $0.26^{+0.06}_{-0.03}$ \\ \\[-0.2cm] \hline \\[-0.15cm] {$\chi^{2}$}/DoF & & & 5906/5852 & \\ \\[-0.2cm] \hline \\[-0.15cm] $F_{3-79}$\tmark[c] & \multicolumn{2}{c}{[$10^{-8}$\,\hbox{$\rm\thinspace erg~cm^{-2}~s^{-1}$}]} & & $50.7 \pm 0.6$ & $62.4 \pm 0.6$ & $34.4 \pm 0.5$ & $50.6 \pm 0.6$ & $57.0 \pm 0.6$ & $60.0 \pm 0.4$ \\ \\[-0.2cm] \hline \hline \end{tabular} \vspace{-0.2cm} \label{tab_flares} \end{center} $^a$ For these models, $R_{\rm{disk}}$ is calculated self-consistently in the lamppost geometry from $a^*$ and $h$. As it is not a free parameter, errors are not estimated. \\ $^b$ The RELXILLLP model is only calculated for $\Gamma \geq 1$. \\ $^c$ Average flux in the 3--79\,keV bandpass. \vspace{0.5cm} \end{table*} We performed a joint fit to these six flare spectra with the lamppost model discussed in Sections \ref{sec_mod} and \ref{sec_res} (excluding the photoionized emission component, which makes no contribution to the flux-resolved fits at high fluxes). As with our flux-resolved analysis, we link the black hole spin, iron abundance, accretion disk inclination, and ionization state of the photoionized absorption across all the flares. Additionally, for the distant reprocessor, we fix the shape parameters (iron abundance, slab inclination) to the values found in the flux-resolved work. We also assume that the disk extends to the ISCO and again compute the reflection fraction self-consistently assuming a lamppost geometry. Finally, given the results presented in Section \ref{sec_res}, we fit the lamppost model both with and without an accretion disk contribution, again using the \rm{\small DISKBB}\ model. While fitting the model with the \rm{\small DISKBB}\ component, we found the disk temperatures to be consistent among all the flares, and so linked this parameter across the datasets for simplicity. The results obtained with both these models (Models 5 and 6, respectively) are presented in Table \ref{tab_flares}. The pure lamppost model (Model 5) fits the data well, with {$\chi^{2}$}/DoF = 6039/5859. The spin is constrained to be very high, $a^* > 0.99$ (see Figure \ref{fig_spin}), and there is a slight increase in the inclination inferred for the inner disk; while the flux-resolved analysis typically found $i \sim 30$$^{\circ}$, here we find $i \sim 40$$^{\circ}$. We find that the first flare shows stronger absorption than the subsequent flares, both in terms of the neutral and the ionized absorption components. The former results in the harder spectrum seen from this flare at lower energies, and there is a clear absorption line from ionized iron at $\sim$6.7\,keV produced by the latter, similar to that reported by \cite{King15v404}, which is not seen in any of the subsequent flares. As with the flux-resolved analysis, we find that during the flares the height inferred for the X-ray source is very close to the black hole, always within $\sim$10\,$r_{\rm{G}}$.
The model including the disk emission (Model 6) again provides a substantial improvement over the basic lamppost model, resulting in an outstanding fit to the data ({$\chi^{2}$}/DoF = 5906/5852, {\it i.e.}\ an improvement of $\Delta\chi^{2}$ = 133 for seven additional free parameters). We show the data/model ratios for the individual flares with this model in Figure \ref{fig_allflares_rat}. The disk temperature is again similar to that reported by lower energy missions, $T_{\rm{in}} \sim 0.5$\,keV, and as before we see that the inclusion of this emission allows the high-energy continuum to take on a harder form, subsequently resulting in lower-energy cutoffs. The neutral absorption inferred also increases to compensate for this additional low-energy continuum emission. With regard to the black hole spin, we again find a situation in which two solutions exist that provide statistically equivalent fits (separated by $\Delta\chi^{2} < 1$; see Figure \ref{fig_spin}): one at high spin ($a^* > 0.98$) which is marginally preferred, and another broad, local minimum at a more moderate spin ($a^* \sim 0.5$). In this case, the dual solutions are related to a significant degeneracy between the spin and the disk inclination, resulting from the combination of the additional continuum component, and the lower total S/N utilized in these fits (these data represent $\sim$80\% of the exposure from which the F5 spectrum considered in the previous section is extracted). For the best-fit, high spin solution we find that the inclination has further increased to $i \sim 52$$^{\circ}$, which is similar to the estimates for the orbital inclination of the system ($i_{\rm{orb}} \sim 50$--75$^{\circ}$; {\it e.g.}\ \citealt{Shahbaz94, Khargharia10}). In contrast, for the more moderate spin solution we find that the associated inclination is $<20$$^{\circ}$. This would imply an even more extreme disk warp than in the flux-resolved analysis, which we deem unphysical. This degeneracy between the spin and the inclination is distinct from the traditional sense of a parameter degeneracy, in which two parameters are correlated such that any value of one can be made acceptable by adjusting the other; rather, there are two solutions that are acceptable in distinct areas of parameter space. We therefore present the results from the high-spin solution in Table \ref{tab_flares}, although again the parameter constraints for the lower-spin solution are presented in Appendix \ref{app_lowspin}, and re-calculate the confidence contour for the black hole spin with the inclination constrained to be $i \geq 50$$^{\circ}$\ for this model (which we refer to as Model 6i; see Figure \ref{fig_spin}) in order to ensure a reasonable agreement between the inner and outer disk. This constraint strongly requires a rapidly rotating black hole. We also assess the degree to which the assumed geometry is driving the spin constraint in this scenario by relaxing the requirement that $R_{\rm{disk}}$ is set self-consistently and allowing this to vary as a free parameter for each of the six flares (but keeping the $i \geq 50$$^{\circ}$\ constraint). Although the constraint on the spin is naturally looser, we still find that $a^* > 0.7$ and the constraints on $R_{\rm{disk}}$ are all consistent with the values presented in Table \ref{tab_flares}. If we exclude unphysically large disk warps, a rapidly rotating black hole is still required regardless of any additional geometric constraints.
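As an illustration of how the Model 6i constraint can be imposed, the PyXspec sketch below restricts the inclination to $\geq$50$^{\circ}$\ via its hard limits and re-computes the $\Delta\chi^{2}$ contour for the spin; the parameter indices are placeholders for the relevant \rm{\small RELXILLLP}\ parameters, and the limit values are assumptions for this sketch.
\begin{verbatim}
# Sketch of the Model 6i constraint (placeholder parameter indices).
from xspec import AllModels, Fit

INCL, SPIN = 7, 6                 # placeholders for i and a* indices
par_i = AllModels(1)(INCL)
# "value, fit delta, hard min, soft min, soft max, hard max"
par_i.values = "60.0, 0.01, 50.0, 50.0, 75.0, 75.0"

Fit.perform()
# Step the spin over its allowed range; delta-chi^2 is measured
# relative to the best fit at each step
Fit.steppar(f"{SPIN} 0.0 0.998 50")
dchi = Fit.stepparResults("delstat")
print(min(dchi))
\end{verbatim}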
\begin{figure*} \hspace*{-0.5cm} \epsscale{1.1} \plotone{./figs/v404cyg_eeuf_timeres_flare4_all14.eps} \caption{The 14 time-resolved X-ray spectra extracted across the evolution of Flare 4 (labeled T1--14). As before, the data have been unfolded through a constant and rebinned for visual purposes, and the FPMA and FPMB data are shown in black and red, respectively. Significant spectral changes are seen as the source flares and then decays. } \vspace{0.3cm} \label{fig_flare4spec} \end{figure*} The range of heights inferred for the X-ray source remains similar to the pure lamppost case. However, one issue of note with this model is that the iron abundance has increased to $A_{\rm{Fe}}$/solar $\sim$ 5 (for both solutions), in order to compensate for the harder irradiating continuum and reproduce the observed line flux. All our previous models had typically found $A_{\rm{Fe}}$/solar $\sim$ 2--3, which is similar to the iron abundance of $A_{\rm{Fe}}$/solar $\sim$ 2 found for the companion star by \cite{GonzHern11}. While this is certainly not always the case ({\it e.g.}\ \citealt{ElBatal16}), similarly high iron abundances have also been reported for a few other Galactic BHBs observed by {\it NuSTAR}\ when using the \rm{\small XILLVER}-based family of reflection models ({\it e.g.}\ \citealt{Parker16, Walton16cyg, Fuerst16gx}). The abundance inferred may be dependent on the reflection code utilized; \cite{Walton16cyg} note that for the Galactic binary Cygnus X-1, the iron abundances obtained with the \rm{\small XILLVER}\ family of reflection models are generally a factor of $\sim$2 larger than those obtained with the \rm{\small REFLIONX}\ (\citealt{reflion}) family of models (see also \citealt{Miller15}). Should the iron abundance here be systematically overpredicted by a similar factor, this would bring the abundance derived back down to $A_{\rm{Fe}}$/solar $\sim$ 2.5, which would again be similar to that reported by \cite{GonzHern11}. However, we stress that the key results obtained here do not strongly depend on this issue. If we fix the iron abundance to $A_{\rm{Fe}}$/solar = 2 in Model 6, the fit worsens slightly (but is still excellent, {$\chi^{2}$}/DoF = 5933/5853). The spin is strongly constrained to be very high ($a^* > 0.997$), and the requirement for small source heights further tightens. The most notable change is that the best-fit inclination further increases to $i \sim 60$$^{\circ}$, which is still in good agreement with the range estimated for the orbital plane. The 3--79\,keV fluxes observed from these spectra are also given in Table \ref{tab_flares}. However, the average count rates during the periods from which these spectra are extracted are obviously significantly lower than the peaks of the flares. Assuming a similar spectral form, scaling these fluxes up to the peak incident count rates observed during these flares -- as determined from lightcurves with 1\,s time bins -- implies peak 3--79\,keV fluxes ranging from 0.8--2.0 $\times 10^{-6}$\,\hbox{$\rm\thinspace erg~cm^{-2}~s^{-1}$}. For a 10\,\hbox{$\rm M_{\odot}$}\ black hole at a distance of 2.4\,kpc, these fluxes equate to 3--79\,keV luminosities of $\sim$0.4--1.0\,$L_{\rm{E}}$\ (where $L_{\rm{E}}$\ = $1.4 \times 10^{39}$ \hbox{erg~s$^{-1}$}\ is the Eddington luminosity).
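The conversion from the peak fluxes to Eddington ratios is simple to reproduce; a minimal sketch, using the distance, mass, and $L_{\rm{E}}$\ adopted above, is:
\begin{verbatim}
import numpy as np

KPC = 3.086e21                # cm
d = 2.4 * KPC                 # adopted distance to V404 Cyg
L_edd = 1.4e39                # erg/s, Eddington luminosity, 10 M_sun

# Peak 3-79 keV fluxes inferred from the 1 s peak count rates
flux = np.array([0.8e-6, 2.0e-6])     # erg / cm^2 / s
L = 4.0 * np.pi * d**2 * flux         # isotropic luminosity
print(L / L_edd)                      # -> roughly [0.4, 1.0]
\end{verbatim}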
The bolometric fluxes observed from the \rm{\small DISKBB}\ component in these spectra, which assumes a thin disk as described by \cite{Shakura73}, equate to disk luminosities of $\sim$0.1\,$L_{\rm{E}}$\ (assuming the disk is viewed close to $i \sim 60$$^{\circ}$). Temperatures of $T_{\rm{in}} \sim 0.5$ keV are not unreasonable for such luminosities ({\it e.g.}\ \citealt{Gierlinski04, MReynolds13}). Assuming these fluxes also scale up during the peaks of the flares, the peak disk fluxes would equate to luminosities of $\sim$0.3--0.5\,$L_{\rm{E}}$. \subsection{Evolution Across Flare 4} \label{sec_flare4} As the final component of our analysis in this work, we track the evolution of the spectrum across one of the major flares considered in Section \ref{sec_flares}. We focus on Flare 4 (see Figure \ref{fig_lcHR}), as this is followed by a relatively long, uninterrupted period of low absorption (as determined by our analysis in Section \ref{sec_sel}). As such, we should have a relatively clean view of the flare and its subsequent decline. In order to track the evolution of the spectrum, we split the data into bins with 40\,s duration, and extracted spectra from each, again following the method outlined in Section \ref{sec_red}. While significant variability obviously occurs on shorter timescales ({\it e.g.}\ Gandhi et al. in preparation), a 40\,s duration was found to offer a good balance between time resolution and the need for reasonable S/N in the individual spectra. We start immediately prior to the flare, and continue until the point that the observed count rate (as averaged over 40\,s) starts to rise again after the decline of the flare, resulting in 14 time-resolved spectra (per FPM) in total (hereafter T1--14). These spectra are shown in Figure \ref{fig_flare4spec}. There are too many datasets to undertake a joint analysis of all the data, so we fit the data from each of the time bins individually, using the same lamppost-based model utilized in our joint analysis of the major flares observed (Section \ref{sec_flares}). Specifically, we use the model that includes the thermal disk emission (Model 6). However, the average good exposure time per FPM is only $\sim$11\,s per bin (being higher for lower flux bins and vice versa, owing to the instrumental deadtime; \citealt{NUSTAR}), so the S/N per time bin is relatively low. We therefore limit ourselves to considering only a few key free parameters when fitting each of these datasets. As there is no evidence for ionized iron absorption during this flare (only an upper limit is obtained on the column for this component during Flare 4; see Table \ref{tab_flares}), we exclude the \hbox{\rm{\small XSTAR}}\ absorption component from our analysis in this section. Furthermore, we fix all the remaining global parameters (black hole spin, disk inclination, iron abundance and disk temperature) to the best-fit values presented for Model 6 in Table \ref{tab_flares}. We also fix the ionization parameter to the value obtained in our flux-resolved analysis (see Table \ref{tab_param}), based on the average count rate in that time bin, thus ensuring that the ionization increases as the flux increases. Finally, we are not able to simultaneously constrain both the inner radius of the disk and the height of the X-ray source, so we initially fix the latter at the best-fit value obtained for this flare in our flare-resolved analysis ($h = 2.5$\,$r_{\rm{H}}$).
The free parameters allowed to vary for each of the time-resolved datasets are therefore the (source-intrinsic) neutral absorption column, the photon index and high-energy cutoff of the powerlaw continuum, the inner radius of the disk, and the normalizations of the various emission components. As before, the reflection fraction $R_{\rm{disk}}$ is calculated self-consistently from the spin, source height and inner radius of the disk in the lamppost geometry, which helps to constrain $r_{\rm{in}}$\ in these fits. The results for a number of the key parameters, as well as a zoom-in on the lightcurve of this flare, are shown in Figure \ref{fig_flare4res}; the lightcurve shows a characteristic fast-rise, exponential-decay profile. Aside from the first time bin, the absorption stays relatively low and stable throughout, as expected. Prior to the flare, the observed spectrum is relatively soft (in comparison to the spectra shown in Figures \ref{fig_5lvl} and \ref{fig_allflares}). Then, as the source flares the spectrum hardens significantly (reaching $\Gamma = 1.14^{+0.04}_{-0.08}$), and during the decline it softens again before gradually becoming harder as the source fades. We see a significant difference in the average cutoff energy before and after the flare. Finally, we also see a significant difference in the inner radius of the disk, the key geometry parameter in this analysis, across the evolution of the flare, being close to the ISCO prior to and during the rise of the flare, before moving out to $\sim$10\,$r_{\rm{ISCO}}$\ during the subsequent decline. The data are well modeled, with an average {$\chi^{2}$}/DoF of 1.02 (for an average of 302 DoF). As a sanity check, assuming a disk structure of $h_{\rm{D}}/r_{\rm{D}} \sim 0.2$ (where $h_{\rm{D}}$ is the scale height of the disk at a given radius $r_{\rm{D}}$; see Section \ref{sec_spin}), a standard viscosity parameter of $\alpha \sim 0.1$, and a dynamical timescale set by the Keplerian orbital timescale, we estimate the viscous timescale for the disk should be $\sim$0.01\,s at a radius of 10\,$r_{\rm{G}}$\ (for our best-fit spin, $r_{\rm{ISCO}}$\ $\sim$ $r_{\rm{G}}$) for \rm V404\,Cyg. Significant evolution of the inner disk is therefore certainly possible over the timescales probed here. We also consider two additional iterations of this analysis. First, as with our flux-resolved analysis, we also consider the case in which $h$ varies and $r_{\rm{in}}$\ stays constant, fixing the latter to the ISCO throughout. Equivalent results are obtained, with the only difference being that $h$ increases as the flare evolves instead of $r_{\rm{in}}$, starting at $\sim$2\,$r_{\rm{H}}$\ before jumping to $\sim$20\,$r_{\rm{H}}$\ in the decline of the flare. The fit statistics are very similar to the scenario in which $r_{\rm{in}}$\ varies and $h$ is constant. At least one of $h$ or $r_{\rm{in}}$\ must therefore increase across the flare; in reality the two may well evolve together. Second, we relax our assumption regarding the ionization of the disk. While the ionization would be expected to increase with increasing luminosity at constant density, the density itself may also vary as the inner regions of the disk evolve. We therefore re-fit the data with the ionization as a further free parameter. While this increases the uncertainties on the other parameters, the same qualitative evolution is still seen, with the main difference being that the point at which $r_{\rm{in}}$\ moves outwards occurs later in time.
Broadly speaking, the ionization of the disk does still appear to increase with increasing flux. \begin{figure} \hspace*{-0.5cm} \epsscale{1.15} \plotone{./figs/v404cyg_timeres_flare4_results_Rin_ion.eps} \caption{The results for the key lamppost model parameters obtained with our time-resolved spectral analysis of Flare 4, in this case allowing the inner radius of the disk to vary while holding the height of the X-ray source constant (see text). The top panel shows the lightcurve around Flare 4 (10\,s bins), while the lower panels show the evolution of the intrinsic neutral absorption column, the photon index and high-energy cutoff of the powerlaw continuum, and the inner disk radius, respectively (each with 40\,s bins). The high-energy continuum hardens significantly during the peak of the flare. In addition, both the high-energy cutoff and the inner radius of the disk are significantly larger after the peak of the flare than before. The shaded regions indicate periods when the count rate exceeds 4000\,\hbox{$\rm\thinspace ct~s^{-1}$}, which contribute to the Flare 4 spectrum shown in Figure \ref{fig_allflares}. } \vspace{0.3cm} \label{fig_flare4res} \end{figure} Finally, we note that \rm V404\,Cyg\ is known to exhibit a strong dust halo, which can produce emission that potentially mimics an accretion disk component, particularly when the source is faint (\citealt{Vasilopoulos16, Beardmore16, Heinz16, Motta16v404}). However, in this work we largely focus on periods when the source was very bright. Furthermore, in the analysis presented here we find that the normalization of the \rm{\small DISKBB}\ component included in the model varies across Flare 4 along with the overall flux. This is too fast for the response from dusty interstellar clouds, and so we cannot be mistaking a dust contribution for the accretion disk in this work. \section{Discussion} \label{sec_dis} We have undertaken an analysis of the first of a series of {\it NuSTAR}\ observations of \rm V404\,Cyg\ taken across its recent outburst in summer 2015. This observation was taken during the period of extreme activity from the source (see Figure \ref{fig_longlc}). Extreme flux and spectral variability is present throughout (see Figure \ref{fig_lcHR}), driven in part by strong and variable line-of-sight absorption, similar to that seen in the last major outburst from this source in 1989 ({\it e.g.}\ \citealt{Zycki99b}). We also see a period of intense flaring, similar to that reported by other high-energy observatories ({\it e.g.}\ \citealt{Rodriguez15v404, Natalucci15, Roques15, King15v404, Jenke16v404}), with the source reaching observed fluxes that correspond to its Eddington luminosity in the 3--79\,keV band in the most extreme cases covered by {\it NuSTAR}. Given the strength of these flares, the ability of {\it NuSTAR}\ to cleanly observe extreme count rates free of instrumental effects such as pile-up, owing to its triggered read-out (\citealt{NUSTAR}), has been critical to this work. Our analysis focuses primarily on this flaring period. While the line-of-sight absorption is often strong during this observation, as indicated by the strong edge seen at $\sim$7\,keV in the average spectrum from the entire observation (see Figure \ref{fig_spec_av}), the average spectrum extracted from the highest fluxes (the flare peaks) seen during this period shows comparatively little absorption, with no strong edge present, and thus offers us a relatively clean view of the intrinsic spectrum from \rm V404\,Cyg.
These data show clear evidence of relativistic reflection from an accretion disk (Figure \ref{fig_flares}), as well as reprocessing from more distant material (see also \citealt{King15v404, Motta16v404}). We undertake a series of detailed analyses in order to determine the relative contributions of these components, and probe the geometry of the inner accretion flow during these flares. First, we use these flares as a template to identify further periods of low absorption throughout the rest of the {\it NuSTAR}\ observation, and undertake a flux-resolved analysis of these data (Section \ref{sec_flux}), averaging them into five flux bins and fitting these simultaneously with the latest self-consistent disk reflection model, assuming a lamppost geometry (\rm{\small RELXILLLP}; \citealt{relxill}). We find that the relative contribution of the disk reflection decreases with decreasing flux, implying that, on average, the solid angle subtended by the disk, as seen by the illuminating X-ray source, also decreases. In turn, this requires an evolution in the geometry of the innermost accretion flow. To minimize parameter degeneracies we tested two limiting scenarios based on an idealized lamppost approximation for the accretion geometry, first in which the changing solid angle is explained with a truncating disk and a static illuminating source, and second with a stable disk and a changing source height (resulting in a varying degree of gravitational lightbending). The latter scenario could potentially represent either a physical motion or a vertical expansion of the X-ray source. We note, however, that it is possible (if not likely, as discussed below) that both the inner radius of the disk and the height of the X-ray source could be varying simultaneously. Both of the scenarios considered suggest that during the peaks of the flares, the average position of the X-ray source is close to the black hole ($h \lesssim 5$\,$r_{\rm{G}}$). In addition to the high-energy powerlaw continuum and the reprocessed emission components that dominate the majority of the {\it NuSTAR}\ band, we also find evidence for a weak contribution from thermal emission from the disk in the highest flux bin, seen at the lowest energies probed (see Figure \ref{fig_flaremod}). The lower flux data do not show any evidence for such emission in the {\it NuSTAR}\ band. Second, we undertake a joint analysis of the spectra extracted from the peaks of the six strongest flares observed (highlighted in Figure \ref{fig_longlc}; Section \ref{sec_flares}). We again fit the data with our lamppost disk reflection model in order to build on our previous analysis and probe the geometry during these flares individually. While these flares all have broadly similar spectra, there are also differences between them (Figure \ref{fig_allflares}), so it is important to assess what effect the averaging of different spectra inherent to our flux-resolved analysis might have on the results obtained. Our analysis of these data with our lamppost disk reflection model finds further support for the contribution of thermal disk emission at the highest fluxes, and also confirms that the X-ray source is indeed close to the black hole (within $\sim$10\,$r_{\rm{G}}$) during these flares.
With the strong gravitational light bending associated with this regime resulting in an increased fraction of the emitted flux being lost over the black hole horizon and/or bent onto the accretion disk, the intrinsic power emitted during these flares would be even larger than that inferred directly from the observed fluxes. For the high spin solutions, the work of \cite{Dauser14} suggests that only $\sim$20\% of the intrinsically emitted flux should be lost over the event horizon, so the reflection fraction -- defined here to be the ratio of the fluxes seen by the disk and by the observer -- provides a reasonably good scaling factor between the observed and intrinsic fluxes. At the flare peaks, we would therefore infer the hard X-ray continuum to be intrinsically $\sim$4 times brighter (on average) than observed based on our flare-resolved analysis. However, we stress that this correction is geometry dependent, and even within the assumed geometry depends strongly on the source height; increasing $h$ within the formal statistical uncertainties quoted in Table \ref{tab_flares} can reduce this factor quite substantially (by up to $\sim$40\%). Finally, we undertake a time-resolved analysis of the evolution across one of these major flares, focusing on Flare 4 (Section \ref{sec_flare4}). Spectra are extracted every 40\,s, and fit individually with our lamppost disk reflection model. Owing to the short exposures, the S/N in each spectrum is relatively poor. We therefore again focus on the limiting scenarios in which the inner radius of the disk varies while the height of the X-ray source remains constant, and vice versa (although we again stress that this is for pragmatic reasons regarding parameter degeneracies, and that both quantities may in reality vary together, as discussed below). In both cases, we find clear differences before and after the peak of the flare, so at least one of these quantities must evolve across the flare; either the disk truncates, or the height of the source increases (Figure \ref{fig_flare4res}). During the peak of the flare, the primary continuum is extremely hard ($\Gamma \sim 1.1$), and we also see a clear evolution in the high-energy cutoff, which is significantly higher after the peak of the flare than it was before. \subsection{Jet Activity} \label{sec_jet} We suggest that the strong flares observed by {\it NuSTAR}\ mark transient jet ejection events, with the jet becoming the source of the X-rays illuminating the disk (hence our use of the lamppost geometry throughout our reflection modeling). There are a variety of lines of evidence from the X-ray band alone that support this claim. In other accreting black holes, strong X-ray flares are known to be associated with such events. The BHB XTE\,J1550-564 is particularly notable in this respect. During its 1998 outburst, an $\sim$Eddington-level flare was observed by {\it RXTE}, which triggered the onset of superluminal radio ejecta, and some time later ejecta were resolved from the central point source by {\it Chandra}\ (\citealt{Corbel02, Tomsick03, Kaaret03, Steiner12}). Such behavior has also been seen from the BHB H\,1743$-$322 (\citealt{Corbel05, McClintock09, Steiner12h1743}), and the X-ray flux seems to be elevated in the hours prior to many of the radio ejections from the BHB GRS\,1915+105 ({\it e.g.}\ \citealt{Punsly13, Punsly16}).
In addition, some X-ray flares in active galaxies also appear to be associated with jet ejection events ({\it e.g.}\ the recent flares observed from Mrk\,335 and M81; \citealt{Wilkins15, King16m81}). The fact that the intrinsic spectrum observed immediately prior to Flare 4 in our time-resolved analysis is inferred to be quite soft ($\Gamma \sim 2.3$) is of potential importance here, as this is very similar to the `steep powerlaw state' identified by \citet[also referred to as the `Very High State' in other works]{Remillard06rev}. For most LMXBs in outburst, transient jets are launched as they flare up to $\sim$Eddington during the transition from the hard state to the soft state, which occurs via the steep powerlaw state ({\it e.g.}\ \citealt{Fender04, Corbel04, Steiner11, Narayan12}). In addition to this broader precedent, the nature of the X-ray spectrum observed during these flares also supports a jet scenario. Even after accounting for the reprocessed emission, the primary X-ray continuum is found to be extremely hard, despite the high flux; on average we see $\Gamma \sim 1.4$, and from our time-resolved analysis of Flare 4 we see that the continuum even reaches $\Gamma \sim 1.1$. This is not the spectrum that would be expected from an accretion flow radiating at $\sim$Eddington, which should be dominated by emission from a multi-color blackbody accretion disk, modified slightly by the effects of photon advection ({\it e.g.}\ \citealt{Middleton12, Middleton13nat, Straub13}). In addition, spectra this hard (particularly in the $\Gamma \sim 1.1$ case) are difficult to produce via Compton scattering of thermal disk photons in a standard accretion disk corona. Strong illumination of the corona by the disk should cool the electrons and produce a softer spectrum. The hard X-ray source would therefore be required to be extremely photon starved ({\it e.g.}\ \citealt{Fabian88, Haardt93}), in which case only a very small fraction of the disk emission would be scattered into the hard X-ray continuum, or some other process must serve to counteract the cooling of the electrons. This may point to a magnetic origin for the flares, which would also support a transient jet scenario ({\it e.g.}\ \citealt{Dexter14}). Furthermore, if we assume that immediately prior to this flare the high-energy continuum is produced by thermal Comptonization, following a similar calculation to \cite{Merloni01} and taking $h$ to be representative of the size-scale of the corona, we find that the thermal energy stored in the corona falls short of that required to power the flare by many orders of magnitude, which would also support a magnetic origin. The spectrum during the peak is also likely too hard for direct synchrotron emission from a jet, which would be expected to give $\Gamma \sim 1.7$ in the X-ray band, but synchrotron self-Compton emission ({\it e.g.}\ \citealt{Markoff05}) may be able to produce a high-energy continuum this hard. The increase of roughly an order of magnitude in $E_{\rm{cut}}$ observed across Flare 4, from $\sim$50 to $\sim$500\,keV, would also appear to indicate that significant energy is being injected into the X-ray emitting electron population during this event, as $E_{\rm{cut}}$ is a proxy for the electron temperature $T_{\rm{e}}$. {\it INTEGRAL~\/}\ may have seen a similar evolution in the cutoff across one of the bright flares observed during its coverage of this outburst (\citealt{Natalucci15}).
If the height of the source does increase across this flare, the change in gravitational redshift experienced by the primary emission could contribute at least in part to the difference seen in $E_{\rm{cut}}$, since this correction is not yet incorporated into the \rm{\small RELXILLLP}\ model (\citealt{Niedzwiecki16}). However, in the most extreme scenario, where $r_{\rm{in}}$\ remains constant while $h$ varies (evolving from $\sim$2 to $\sim$20\,$r_{\rm{G}}$), the movement of the source height should only result in a factor of $\sim$2 change in the observed cutoff energy (assuming no intrinsic variation). This is clearly insufficient to explain the difference observed, and so we conclude that the intrinsic cutoff energy does indeed increase across the flare. Assuming the powerlaw emission is produced by Compton scattering at least during the times both prior to and after the main flare, this implies either an increase in the characteristic electron temperature if the particle distribution remains thermal, or perhaps a transition to a more powerlaw-like (non-thermal) distribution that extends up to significantly higher energies. With the spectral coverage of {\it NuSTAR}\ stopping at 79\,keV, it can be difficult to distinguish between these two scenarios for sufficiently high electron temperatures in the thermal case, as the high-energy cutoff is shifted out of the {\it NuSTAR}\ bandpass, resulting in the observation of a powerlaw spectrum with little or no curvature. In turn, this results in a run-away effect in terms of the measured $E_{\rm{cut}}$, owing to the fact that the cutoff powerlaw model is constantly curving at all energies, while a thermal Comptonization continuum is more powerlaw-like until it rolls over with a sharper cutoff (see the discussion in \citealt{Fuerst16}), potentially explaining the fact that $E_{\rm{cut}}$ is often consistent with the maximum value currently permitted by the \rm{\small RELXILL}\ models after the flare. Nevertheless, the evolution in $E_{\rm{cut}}$ observed here provides a good match to the jet model described in \cite{Markoff05}, in which electrons are accelerated into a powerlaw distribution within a region $\sim$10--100\,$r_{\rm{G}}$\ above the jet's point of origin. Should the particle distribution instead be thermal both before and after the peak of the flare, assuming that the size of the corona increases across the flare ({\it i.e.}\ it expands as either $r_{\rm{in}}$\ or $h$ increases), then the evolution would be similar to that expected for a corona being kept close to its catastrophic pair production limit (see \citealt{Fabian15}, and references therein). Finally, the geometric results from our reflection modeling are also likely consistent with a jet scenario. We see evidence for either the disk truncating or the height of the X-ray source increasing, both on average as the source flux decreases, and also across one of the major flares individually. Although we cannot constrain the evolution of the inner disk radius and the source height simultaneously, as variations in the two produce similar results for the observed reflection spectrum ({\it e.g.}\ \citealt{Fabian14}, hence our treatment of these two possibilities in isolation), as noted previously it is quite possible that both of these quantities evolve.
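As a quick numerical check of the gravitational-redshift estimate quoted above (a factor of $\sim$2 for a source moving from $h \sim 2$ to $\sim$20\,$r_{\rm{G}}$), the sketch below evaluates the redshift factor for a static on-axis source above a near-maximal Kerr black hole; this simple estimate ignores returning radiation and any source motion, and is not the transfer calculation performed by the reflection models themselves.
\begin{verbatim}
import numpy as np

def g_onaxis(h, a=0.998):
    """Redshift factor E_obs/E_emit for a static source at height h
    (in r_G) on the rotation axis of a Kerr black hole of spin a
    (Boyer-Lindquist coordinates; on axis, Sigma = h^2 + a^2 and
    Delta = h^2 - 2h + a^2)."""
    return np.sqrt((h**2 - 2.0 * h + a**2) / (h**2 + a**2))

print(g_onaxis(2.0))                    # ~0.45
print(g_onaxis(20.0))                   # ~0.95
print(g_onaxis(20.0) / g_onaxis(2.0))   # ~2.1: factor ~2 in E_cut
\end{verbatim}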
Indeed, if we repeat the time-resolved analysis of flare 4 presented in Section \ref{sec_flare4} forcing both $r_{\rm{in}}$\ and $h$ to evolve simultaneously, linking the two with a simple linear relation just for illustration $($[$h$/$r_{\rm{H}}$] = 2[$r_{\rm{in}}$/$r_{\rm{ISCO}}$]$)$, we again find the same qualitative evolution seen in Figure \ref{fig_flare4res}, and the fits are as good as the scenarios in which only one of $r_{\rm{in}}$\ and $h$ is allowed to vary. In this scenario, the magnitudes of the changes in $r_{\rm{in}}$\ and $h$ are both reduced in comparison to the de-coupled scenarios discussed in Section \ref{sec_flare4}, with $r_{\rm{in}}$\ evolving from 1--5\,$r_{\rm{ISCO}}$, and $h$ evolving from 2--10\,$r_{\rm{H}}$. We note that, should the ejecta have reached a significant outflow velocity, the reflection fraction would be reduced for a given combination of $h$ and $r_{\rm{in}}$\ ({\it e.g.}\ \citealt{Beloborodov99}), resulting in these quantities potentially being overestimated during the times after the flare, but again the same qualitative evolution should be seen. Furthermore, acceleration up to significant outflow velocities may not be expected so close to the black hole (see Section \ref{sec_launch}). In a flare associated with transient jet ejection, if the jet is the source of illumination then one naturally expects the height of the source to increase across the flare. However, such ejection events may also be associated with an evacuation of the inner disk, as the same instability that results in the ejection also results in catastrophic accretion of the innermost portion of the disk (\citealt{Szuszkiewicz98, Meier01}). \cite{Chen95} suggest that thin disk solutions should become unstable above luminosities of $\sim$0.3\,$L_{\rm{E}}$, similar to the peak disk luminosities inferred here. Evidence for such behaviour might be seen, for example, in GRS\,1915+105, where radio ejections are also preceded by dips in X-ray intensity in some of the oscillatory states exhibited by this source, during which the inner radius of the accretion disk is inferred to increase ({\it e.g.}\ \citealt{Pooley97, Mirabel98, KleinWolt02}), though \cite{Rodriguez08} suggest that the ejections might actually be associated with the post-dip flares observed in those cycles. Similar behaviour may also have been seen in the radio galaxies 3C\,120 (\citealt{Marscher02, Chatterjee09, Lohfink13}), and 3C\,111 (\citealt{Chatterjee11}), where radio ejections appear to be preceded by X-ray dips. Therefore, both an increasing source height \textit{and} a truncation of the inner accretion disk may be expected for transient ejection events, consistent with the evolution seen in our analysis. \subsubsection{Radio Monitoring} A natural prediction of the jet scenario is that radio emission should be observed. Throughout this recent outburst, \rm V404\,Cyg\ was frequently monitored by the Arcminute Microkelvin Imager - Large Array (hereafter AMI; \citealt{AMI}), a compact array of eight dishes operating in the 13--18\,GHz frequency range. The full AMI campaign on \rm V404\,Cyg\ will be presented in Fender et al. (in preparation; see also \citealt{Mooley15}); here we focus on the coverage that is simultaneous with our {\it NuSTAR}\ observation. Flagging and calibration of the data were performed with the AMI REDUCE software (\citealt{Perrott13}).
The calibrated data were then imported into CASA and flux densities of \rm V404\,Cyg\ were extracted by vector averaging over all baselines; the absolute flux calibration uncertainty is $\sim$5\%. \begin{figure} \hspace*{-0.5cm} \epsscale{1.16} \plotone{./figs/v404cyg_lc_ami_vs_nustar.eps} \caption{A comparison of the {\it NuSTAR}\ lightcurve (top panel) and the radio monitoring during this period from AMI (see text). Although there is a significant gap in the AMI coverage owing to Earth occultation, preventing a detailed analysis of the radio vs X-ray behaviour, the overlapping coverage is sufficient to demonstrate the onset of radio activity coincident with the strong flaring phase seen by {\it NuSTAR}.} \vspace{0.3cm} \label{fig_radio} \end{figure} A comparison of the {\it NuSTAR}\ and AMI lightcurves is shown in Figure \ref{fig_radio}. Unfortunately, owing to occultation by the Earth, the majority of the flaring period observed by {\it NuSTAR}\ does not have simultaneous AMI coverage which, in combination with the frequent Earth occultations experienced by {\it NuSTAR}, prevents any detailed analysis attempting to search for radio responses to specific X-ray flares. However, there is AMI coverage right at the beginning of this period, and towards the end of the {\it NuSTAR}\ observation. These short periods of overlap clearly show that radio activity commences as the flaring phase of the {\it NuSTAR}\ observation begins, and this activity then appears to persist throughout. The coincidence of this radio activity further supports our suggestion that the major flares seen by {\it NuSTAR}\ represent jet ejection events. \subsubsection{Transient Jet Launching} \label{sec_launch} One of the most popular theoretical mechanisms for launching jets is that they are powered by the spin of the central black hole (\citealt{BZ77}). Alternatively, the accreting black hole system may power a Blandford-Payne-type jet (\citealt{BP82}), driven instead by the rotation of the accretion disk. It has been suggested that there is observational evidence for a correlation between black hole spin and jet power (taking the peak radio flux as a proxy for jet power) for the transient jet ejections seen from other BHBs at high luminosities (\citealt{Narayan12, Steiner13}), as expected for the \cite{BZ77} mechanism. However, this is still rather controversial (\citealt{Russell13}). If we are correct and these flares do represent jet ejections in which the jet is the source of illumination for the disk, then our reflection analysis suggests that these jets are launched from very close to the black hole (as close as a few $r_{\rm{G}}$). The size-scales inferred here are broadly comparable to the size-scale inferred for the base of the jet in M87 (\citealt{Doeleman12, Asada12, Nakamura13}), although this is a low-Eddington system that is likely analogous to the persistent jets seen in the low/hard state of BHBs, rather than the high-Eddington transient ejections potentially observed here. One of the other key results from the M87 system is that the acceleration of the outflowing plasma occurs gradually as the distance from the black hole increases; jets do not seem to be immediately launched with relativistic velocities (\citealt{Nakamura13}). This is an important point, as it means that the emission from the regions of the jet close to the black hole is unlikely to be heavily beamed, and can therefore illuminate the disk.
As a further point of interest, \cite{Koljonen15} present evidence for a relation between the photon index of the high-energy X-ray continuum and the frequency at which the low-energy synchrotron spectrum from the jet breaks from optically thick to optically thin emission for a sample of accreting black holes, consisting of both Galactic BHBs and AGN. This has been derived primarily from data obtained in the low-Eddington jet regime (including low-Eddington observations of \rm V404\,Cyg). However, should these high-Eddington ejections adhere to the same relation, the photon indices of $\Gamma \sim 1.1$ seen during the peak of flare 4 would imply a break frequency of $\sim$10$^{16}$\,Hz at this time. Unfortunately, independent observational constraints on the jet break are not available for this epoch, owing to the lack of simultaneous radio--UV coverage. Nevertheless, should this be correct, this would be among the highest break frequencies inferred for the sample utilized by \cite{Koljonen15}, and would provide further, albeit indirect, evidence that the key jet activity in this case occurs very close to the black hole. Indeed, for the jet model discussed in \cite{Markoff05}, a break frequency of $\sim$10$^{16}$\,Hz would imply a height for the initial zone of particle acceleration of only a few $r_{\rm{G}}$\ above the base of the jet, which would be consistent with the geometric evolution across this flare inferred from the reflection fits presented here. While not a proof that these ejections are powered by black hole spin, the size-scales inferred here do at least meet one of the expectations for the \cite{BZ77} mechanism, that the jets should originate from regions very close to the black hole. In addition, our work suggests that \rm V404\,Cyg\ hosts a rapidly rotating black hole (see below), so there is likely significant rotational energy for the jets to tap into. However, we are not able to make any further assessment with regards to the correlations presented by \cite{Narayan12} and \cite{Steiner13} with these data, as it is highly plausible that the available radio coverage missed the peak flux (Figure \ref{fig_radio}). The large gap in coverage also means we are not able to reliably estimate the total energy of the radio flare, suggested by \cite{Fender10} and \cite{Russell13} as an alternative proxy for jet power. The other major possibility, that the jets are primarily powered by the disk rather than the black hole (\citealt{BP82}), is also compatible with our results. In this scenario, the implied size-scales would require that the jets be powered in the very innermost regions of the accretion disk. \subsection{Black Hole Spin} \label{sec_spin} Through our investigation of the inner accretion geometry, we are also able to place constraints on the spin of the black hole in \rm V404\,Cyg. Our initial modeling of the flux-resolved spectra provided some indication that the black hole spin is high, owing to the strong disk reflection inferred ($R_{\rm{disk}} \sim 3$). This requires strong gravitational lightbending, which in turn requires a high black hole spin, such that the disk can extend very close to the black hole and subtend a large solid angle as seen by the illuminating X-ray source (\citealt{lightbending, Dauser13}).
Evidence for strong gravitational lightbending has previously been observed in a wide variety of active galactic nuclei ({\it e.g.}\ \citealt{Zoghbi08, Fabian12, Parker14mrk, Reis14nat, Reynolds14, Chiang15iras, Lanzuisi16}), but also in other Galactic BHBs ({\it e.g.}\ \citealt{Rossi05, Reis13lb}). Furthermore, as noted previously, potential evidence for strong reflection has also been seen during flares in the {\it INTEGRAL~\/}\ coverage of this outburst from \rm V404\,Cyg (\citealt{Roques15, Natalucci15}). A high spin is supported by our flux-resolved analysis with a self-consistent lamppost geometry. There is some complexity in the results obtained for the two scenarios considered with the pure lamppost reflection model (varying the inner radius of the disk while holding the height of the X-ray source constant, and vice versa; Models 2 and 3, respectively), with similarly good fits obtained with high and more moderate spin solutions in both cases. However, we obtain a significant improvement in the global fit with the inclusion of a contribution from thermal disk emission at the highest fluxes in addition to the lamppost component (Model 4); this is our best-fit model for the flux-resolved data. In this case, a high spin is unambiguously preferred: $a^* > 0.82$ (see Figure \ref{fig_spin}, top panel). In addition, a high spin is also supported by our flare-resolved analysis, focusing on the peaks of the six most extreme flares observed. While this analysis utilizes much less total exposure than our flux-resolved analysis, it has the advantage of relying on much less averaging of different spectra (see Figure \ref{fig_allflares}). The pure lamppost model strongly requires a high spin (Model 5), but we again see a significant improvement in the fit with the inclusion of a thermal disk component (Model 6); this is our best-fit model for the flare-resolved data. In this case, we see a strong degeneracy between the black hole spin and the inclination of the inner accretion disk, resulting in high- and moderate-spin solutions again providing similarly good fits. The best-fit inclination, $i_{\rm{disk}} \sim 52^{\circ}$, which corresponds to the high-spin solution, is in good agreement with the range inferred for the orbital plane of the binary system ($i_{\rm{orb}} \sim 50$--75$^{\circ}$; {\it e.g.}\ \citealt{Shahbaz94, Khargharia10}). If we require the inclination to be in this range (Model 6i), then the spin is again strongly required to be high: $a^* > 0.98$ (see Figure \ref{fig_spin}, bottom panel). Taking a more conservative 99\% confidence level, the spin constraint relaxes to $a^* > 0.92$. Given the lower degree of time-averaging of different spectral `states' in the data analysed\footnote{While this does formally still occur to some minor degree, this does not appear to have any significant effect on the results obtained. The periods contributing to the Flare 4 spectrum considered in Section \ref{sec_flares} and shown in Figure \ref{fig_allflares} are shaded blue in Figure \ref{fig_flare4spec}. These are drawn from periods T2--6 shown in Figure \ref{fig_flare4res}, during which \rm V404\,Cyg\ does show some spectral variations (T2 is notably different to T3--6).
However, if we sum the data just from periods T2--5, where the observed spectra are all very similar, the resulting spectra are practically identical to the Flare 4 spectra from Section \ref{sec_flares}.} and the good agreement with the orbital inclination, we consider this to be the most robust spin constraint derived from any of our models. The quantitative constraints on the black hole spin discussed here are the statistical parameter constraints obtained through our spectral modeling. There are additional systematic errors associated with the assumptions inherent to the models used here, which are likely significant but difficult to robustly quantify. One issue common to any attempt to constrain black hole spin is the assumption that the accretion disk truncates quickly at the ISCO, and that no significant emission should be observed from within this radius. Numerical simulations suggest that, for thin disks, this is a reasonable assumption ({\it e.g.}\ \citealt{Shafee08, Reynolds08}), and that any additional uncertainty should be small ($\sim$a few percent), particularly for rapidly rotating black holes. For the particular case of \rm V404\,Cyg\ considered here, given the extreme luminosities reached during the flares it is worth considering whether the assumption of a thin disk is reasonable. Standard accretion theory predicts that as the accretion flow becomes more luminous, its scale-height should start to increase as vertical support from radiation pressure becomes more prominent ({\it e.g.}\ \citealt{Shakura73}). Indeed, some thickness to the disk may be required in order for the disk to be able to anchor the magnetic fields required for jet ejections ({\it e.g.}\ \citealt{Meier01, Tchekhovskoy12}). This is potentially important for both the issue of how quickly the disk truncates at the ISCO, as thicker disks are more able to exhibit emission that `spills over' the ISCO slightly ({\it e.g.}\ \citealt{Reynolds08}), and also for the self-consistent lamppost reflection models, which calculate the expected reflection contribution assuming a thin disk geometry (\citealt{Dauser13, relxill}). In Section \ref{sec_flares}, we estimated the peak disk luminosities to be $L_{\rm{disk}} \sim 0.3$--$0.5$\,$L_{\rm{E}}$. Typically, the high-energy powerlaw emission from Galactic BHBs is assumed to arise from Compton up-scattering of disk photons, and so the intrinsic disk luminosities would have to be further corrected for the flux lost into the powerlaw component ({\it e.g.}\ \citealt{Steiner09}). This may well be the case at times outside of the flare peaks. However, as noted above, during the flares the powerlaw emission is likely too hard to originate via Compton scattering of disk photons. If we are correct about the magnetic/jet ejection nature of these flares, then we should be able to take the peak disk fluxes at roughly face value. Therefore, we take $L_{\rm{disk}}$/$L_{\rm{E}}$\ $\lesssim$ 0.5. For the calculations of the expected disk structure presented in \cite{McClintock06}, this would correspond to a maximum scale height of $h_{\rm{D}}/r_{\rm{D}} \lesssim 0.2$, or equivalently a half-opening angle for the inner disk of $\lesssim$10$^{\circ}$. This is unlikely to be large enough that our assumption of a thin disk would lead to large errors.
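The quoted maximum scale height can be recovered from the standard radiation-pressure-dominated inner-disk solution ({\it e.g.}\ \citealt{Shakura73}), for which $H\simeq(3/2\eta)(L/L_{\rm{E}})\,r_{\rm{G}}\,(1-\sqrt{r_{\rm{in}}/r})$; the following minimal sketch is not the \cite{McClintock06} calculation itself, and assumes an efficiency $\eta=0.1$ and $r_{\rm{in}}=6$\,$r_{\rm{G}}$\ purely for illustration:

\begin{verbatim}
# Maximum H/r for a radiation-pressure-dominated inner disk (SS73):
#   H = (3 / 2 eta) * (L / L_E) * r_g * (1 - sqrt(r_in / r)).
# eta and r_in are illustrative assumptions, not values fitted here.
import numpy as np

eta, L_over_LE, r_in = 0.1, 0.5, 6.0     # efficiency, L/L_E, r_in [r_g]
r = np.linspace(r_in, 200.0, 100000)
H_over_r = (1.5 / eta) * L_over_LE * (1.0 - np.sqrt(r_in / r)) / r

i = H_over_r.argmax()
print("max H/r = %.2f at r = %.1f r_g" % (H_over_r[i], r[i]))
# -> max H/r ~ 0.19 at r ~ 13.5 r_g, a half-opening angle of ~10 deg
\end{verbatim}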
Even if we are incorrect and the high-energy continuum does arise through up-scattering of disk photons, since photon number (rather than flux) is conserved, the peak intrinsic disk fluxes would only have been $\sim$20\% larger, even accounting for the hard X-ray flux bent away from the observer in our strong lightbending scenario. Indeed, \cite{Straub11} find that the X-ray spectrum of LMC X-3 is still fairly well described by a thin disk model up to luminosities of $L_{\rm{disk}} \sim 0.6$\,$L_{\rm{E}}$. Furthermore, while the flare peaks are extreme, the majority of the good exposure obtained naturally covers lower fluxes, during which the thin disk approximation should be even more reliable in terms of the reflection modeling. We therefore expect that, while there may be some mild deviation from the thin disk approximation during the peaks of the flares that could serve to relax the constraints on the spin slightly, this is unlikely to result in major errors, and our conclusion that \rm V404\,Cyg\ hosts a rapidly rotating black hole is likely robust to such issues. \section{Conclusions} \label{sec_conc} The behaviour exhibited by \rm V404\,Cyg\ during its recent 2015 outburst is highly complex. Our {\it NuSTAR}\ observation obtained during the height of this outburst activity revealed extreme variability, both in terms of the observed flux and also the spectral properties of \rm V404\,Cyg. In part, these variations are driven by strong and variable line-of-sight absorption, as seen in previous outbursts from this source. However, strong flares reaching $\sim$Eddington in the {\it NuSTAR}\ bandpass are also observed, during which the central source appears to be relatively unobscured. These flares instead show clear evidence for a strong contribution from relativistic reflection, providing a means to probe the geometry of the innermost accretion flow. We argue these flares represent transient jet ejection events, during which the ejected plasma is the source of illumination for the accretion disk. This is based on the combination of their observed properties, analogy with other Galactic BHBs, and also the simultaneous onset of radio activity with the period of intense X-ray flaring observed. If we are correct, then our modeling of the relativistic reflection with a lamppost approximation implies that these jets are launched in very close proximity to the black hole (within a few $r_{\rm{G}}$), consistent with expectations for jet launching models that tap either the spin of the central black hole, or rotation of the very innermost accretion disk. In addition, our analysis allows us to place constraints on the black hole spin. Although there are some quantitative differences between the different models constructed, we consider our most robust spin constraint to be $a^* > 0.92$ (99\% statistical uncertainty only). To the best of our knowledge, this is the first spin constraint for \rm V404\,Cyg. \section*{ACKNOWLEDGEMENTS} The authors would like to thank the anonymous reviewer for their suggestions which helped to improve the manuscript. DJW, PG and MJM acknowledge support from STFC Ernest Rutherford fellowships (grant ST/J003697/2). KPM acknowledges support from the Hintze Foundation. ALK acknowledges support from NASA through an Einstein Postdoctoral Fellowship (grant number PF4-150125) awarded by the Chandra X-ray Center, operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. ACF acknowledges support from ERC Advanced Grant 340442. 
LN wishes to acknowledge the Italian Space Agency (ASI) for financial support via ASI/INAF grant I/037/12/0-011/13. This research has made use of data obtained with {\it NuSTAR}, a project led by Caltech, funded by NASA and managed by NASA/JPL, and has utilized the \rm {\small NUSTARDAS}\ software package, jointly developed by the ASDC (Italy) and Caltech (USA). This research has also made use of data from AMI, which is supported by the ERC, and we thank the AMI staff for scheduling these radio observations. {\it Facilities:} \facility{NuSTAR}, \facility{AMI}
{ "attr-fineweb-edu": 1.967773, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUchzxK02iP4Wj_66Y
\section{Introduction and summary} Understanding the stringy dynamics of nontrivial spacetime geometries is an interesting question, especially in the absence of spacetime supersymmetry. In this case, there typically are geometric instabilities in the system, often stemming from closed string tachyons in the theory (see {\it e.g.}\ \cite{emilrev, minwalla0405} for reviews), whose time dynamics is hard to unravel in detail. However, understanding the detailed phase structure of these geometries is often tractable based on analyses of renormalization group flows in appropriate 2-dimensional gauged linear sigma models (GLSMs) \cite{wittenphases} describing the system with unbroken $(2,2)$ worldsheet supersymmetry. In this case, such an analysis closely dovetails with the resolution of possible localized singularities present in the space. A simple and prototypical example of such a renormalization group flow description of spacetime dynamics is the shrinking of a 2-sphere (${\mathbb P}^1$) given by\ $|\phi_1|^2+|\phi_2|^2=r//U(1)$. The complex coordinates $\phi_i$ have the $U(1)$ identifications\ $(\phi_1,\phi_2)\rightarrow (e^{i\theta}\phi_1,e^{i\theta}\phi_2)$, which we quotient by, to obtain a 2-sphere (this symplectic quotient construction will be elaborated on abundantly later). The parameter $r=R^2$ sets the size of the sphere. The GLSM description of this system shows a 1-loop renormalization of the parameter $r$ \begin{equation}\label{P1flow} r=r_0+2 \log {\mu\over\Lambda}\ \qquad \equiv\qquad R^2=R_0^2-t\ . \end{equation} In the equation on the right, we have recast the RG flow equation\footnote{This can also be obtained from studying worldsheet RG flow (or Ricci flow) of the 2-sphere\ ${d\over dt}g_{\mu\nu}\sim -R_{\mu\nu}$, giving\ ${{d\over dt}(R^2)}\sim -1$.} as an equation for the time evolution\footnote{Time in this paper means RG time. Although time evolution in spacetime is not in general the same as worldsheet RG flow, it is consistent for the time evolution trajectories to be qualitatively similar to the RG flow trajectories and in many known examples, the endpoints from both approaches are identical. See {\it e.g.}\ \cite{headrick0510, suyama} for recent related discussions: in particular, the worldsheet beta-function equations show that there is no obstruction to either RG flow (from c-theorems) or time-evolution (since the dilaton can be turned off) for noncompact singularities such as those considered here. Furthermore, for the special kinds of complex spaces we deal with here, the worldsheet theory has unbroken worldsheet supersymmetry.} of the radius by identifying the RG scale $2\log{\mu\over\Lambda}\equiv -t$ ($\mu$ decreases along the RG flow) and $r_0$ with the initial size $R_0^2$. Early time ($t\sim 0$ here) corresponds to $\mu\sim\Lambda$ which in this case is $r\sim r_0\gg 0$, {\it i.e.}\ large $R\sim R_0$: more generally the sign of the coefficient of the logarithm dictates the direction of evolution of the geometry. The RG flow shows that the sphere has an instability to shrink, with the shrinking being slow initially since for large $R_0$, we have\ $R\sim R_0-{t\over 2R_0} +\ldots$. This kind of behaviour also arises in the context of singular spaces in 3 complex dimensions where much more complicated and interesting phenomena happen.
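As an aside, the flow (\ref{P1flow}) is simple enough to illustrate numerically, integrating $dR/dt=-1/(2R)$ and comparing with the closed form $R=\sqrt{R_0^2-t}$; a minimal sketch (pure illustration, with an arbitrary initial size):

\begin{verbatim}
# Integrate the P^1 shrinking flow dR/dt = -1/(2R), i.e. R^2 = R0^2 - t,
# with a simple Euler step and compare to the closed-form solution.
R0, dt = 10.0, 1e-3       # arbitrary illustrative initial size and step
R, t = R0, 0.0
while R > 0.5:
    R -= dt / (2.0 * R)   # shrinking is slow at large R, fast near R ~ 0
    t += dt
print("numerical t at R = 0.5  : %.2f" % t)
print("closed form R0^2 - R^2  : %.2f" % (R0**2 - 0.5**2))
\end{verbatim}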
Two types of 3-dimensional nonsupersymmetric unstable singularities, particularly rich both in physical content and mathematical structure, are conifolds \cite{knconiflips} and orbifolds \cite{drmknmrp, drmkn} (see also \cite{sarkar0407}), thought of as local singularities in some compact space, the full spacetime then being of the form ${\mathbb R}^{3,1}\times{\cal M}$. The conifold-like singularities \cite{knconiflips} (reviewed in Sec.~2) are toric (as are orbifolds), labelled by a charge matrix \begin{equation} Q=(\begin{array}{cccc} n_1 & n_2 & -n_3 & -n_4 \end{array})\ , \qquad\ \ \qquad \sum Q_i\neq 0\ , \end{equation} for integers $n_i>0$, which characterizes their toric data ($Q=(\begin{array}{cccc} 1 & 1 & -1 & -1 \end{array})$ corresponding to the supersymmetric conifold). The condition $\sum_iQ_i\neq 0$ implements spacetime supersymmetry breaking. It is possible to show that these are nonsupersymmetric orbifolds of the latter, and thus can be locally described by a hypersurface equation $z_1z_4-z_2z_3=0$, with the $z_i$ having discrete identifications from the quotienting. Generically these spaces are not complete intersections of hypersurfaces. They can be described as \begin{equation} \sum_i Q_i |\phi_i|^2 = n_1|\phi_1|^2 + n_2|\phi_2|^2 - n_3|\phi_3|^2 - n_4|\phi_4|^2 = r\ //U(1)\ , \end{equation} where the $U(1)$ gauge group acts as $\phi_i\rightarrow e^{iQ_i\beta}\phi_i$ on the GLSM fields $\phi_i$, as will be described in detail later. The variations of the Fayet-Iliopoulos parameter $r$ describe the distinct phases of the geometry, with the $r\gg 0$ and $r\ll 0$ resolved phases giving fibrations over two topologically distinct 2-cycles. These {\it small resolutions} --- K\"ahler blowups of the singularity (at $r=0$) by 2-cycles --- have an asymmetry stemming from $\sum Q_i\neq 0$. Indeed the 1-loop renormalization\ $r=(\sum_iQ_i)\log {\mu\over\Lambda}$\ shows that one of these 2-spheres ${\mathbb P}^1_-$ is unstable to shrinking and the other, more stable, ${\mathbb P}^1_+$ grows. This spontaneous blowdown of a 2-cycle accompanied by the spontaneous blowup of a topologically distinct 2-cycle is a flip transition. Say at early times we set up the system in the unstable, approximately classical, (ultraviolet) phase where the shrinking 2-sphere ${\mathbb P}^1_-$ is large: then the geometry will dynamically evolve\footnote{Letting $q=-\sum_iQ_i>0,\ R_0^2=\log{\mu_0\over\Lambda}$ ($\mu_0\gg\Lambda$), we recast $r=q\log {\mu\over\Lambda}$ to obtain\ $R_-=q^{1/2}\sqrt{R_0^2-t}\sim R_0-{t\over R_0} ,\ R_+=q^{1/2}\sqrt{t-t_0}\sim \sqrt{t}-{t_0\over\sqrt{t}}$ for early ($t\sim 0$) and late ($t\gg R_0^2$) times, $t_0=R_0^2$ being when $R=0$: {\it i.e.}\ the shrinking of ${\mathbb P}^1_-$ and growing of ${\mathbb P}^1_+$ are slow for large ${\mathbb P}^1$s. The shrinking of ${\mathbb P}^1_-$ accelerates towards the singular region, while ${\mathbb P}^1_+$ first rapidly grows, then decelerates (within this 1-loop RG flow).} towards the more stable ${\mathbb P}^1_+$, with an inherent directionality in time, the singular region near $r=0$ where quantum (worldsheet instanton) corrections in the GLSM are large being a transient intermediate state\footnote{Although one cannot make reliable statements within this approximation about the singular region, arising as it does in the ``middle'' of the RG flow, it is worth making a comment about the geometry of this region.
It was shown in \cite{knconiflips} (see also Sec.~2) that the structure of these spaces as quotients of the supersymmetric conifold obstructs the only 3-cycle (complex structure) deformation of the latter (although there can exist new abstract deformations that have no interpretation ``upstairs''). This suggests that there are no analogs of ``strong'' topology change and conifold transitions with nonperturbative light wrapped brane states here (see also the discussion on the GLSM before Sec.~3.1)}. An obvious question arising from this analysis of the small resolutions in \cite{knconiflips} is: \emph{are there RG evolution trajectories of a given unstable conifold-like singularity where the endpoints include the supersymmetric conifold, and more general lower order conifold-like singularities?}\\ In this paper, we answer this question in the affirmative. Unlike the simple ${\mathbb P}^1$ example described in (\ref{P1flow}), there typically are orbifold singularities present on the ${\mathbb P}^1_{\pm}$ loci (as described in \cite{knconiflips}), which are themselves unstable, typically resolving themselves by blowups of 4-cycles (divisors) that can be interpreted as twisted sector tachyon states in the corresponding orbifold conformal field theories. For a large 2-sphere ${\mathbb P}^1_-$, the localized orbifold singularities on its locus are widely separated spatially. As this ${\mathbb P}^1_-$ shrinks, these pieces of spacetime potentially containing residual singularities come together, interact and recombine, giving new spaces of distinct topology. The existence of both 2-cycle and various 4-cycle blowup modes of the conifold singularity besides those leading to the small resolutions makes the full phase structure given by the GLSM quite rich. This GLSM (also admitting $(2,2)$ worldsheet supersymmetry) with a $U(1)^{n+1}$ gauge group, for say $n$ additional 4-cycle blowup modes, is described by an enlarged charge matrix $Q_i^a,\ a=1,\ldots,n+1$, with $n+1$ Fayet-Iliopoulos parameters $r_a$ controlling the vacuum structure, their RG flows describing the various phase transitions occurring in these geometries (a heuristic picture of the phase structure of a 2-parameter system is shown in Figure~\ref{phases}). \begin{figure} \begin{center} \epsfig{file=conifPhases.eps, width=6.5cm} \caption{{\small A heuristic picture of the phases of a 2-parameter system. The blue and green circles are ${\mathbb P}^1$s and weighted ${\mathbb P}^2$s respectively. The red triangles are residual orbifold singularities on their loci.}} \label{phases} \end{center} \end{figure} The geometry of the typical GLSM phase consists of combinations of 2-cycles and 4-cycles expanding/contracting in time, separating pieces of spacetime described by appropriate collections of coordinate charts glued together on their overlaps in accordance with the corresponding toric resolution (see Figures~\ref{fig17519},~\ref{fig1746}). Besides flips and blowups of residual orbifold twisted sector tachyons, generic transitions between the various distinct phases include flops (marginal blowdowns/blowups of 2-cycles) -- these arise along infrared moduli spaces. In such a case, the geometry can end up anywhere on this moduli space, including occasionally at (real) codimension-2 singularities on it: these correspond to lower order supersymmetric conifold-like spaces, {\it e.g.}\ the $Y^{pq}$ and $L^{a,b,c}$ spaces (see Sec.~3).
As discussed in \cite{knconiflips}, the GLSM RG flow for a flip transition in fact always drives it in the direction of the (partial) resolution leading to a less singular residual geometry, {\it i.e.}\ a more stable endpoint. This enables a classification of the phases of the enlarged GLSMs discussed here corresponding to these unstable singularities into ``stable'' and ``unstable'' basins of attraction, noting the directionality of the RG trajectories involving potential flips, which always flow towards the more stable phases. The eventual stable phases typically consist of the stable 2-sphere ${\mathbb P}^1_+$ expanding in time, along with the various other expanding 4-cycles corresponding to the condensation of possible tachyons localized on the orbifold singularities on its locus: these phases include the various small resolutions of possible lower order conifold-like singularities. Since the GLSM with $(2,2)$ worldsheet supersymmetry has a smooth RG flow, the various phase transitions occurring in the evolution of the geometry are smooth. A nontrivial GSO projection \begin{equation} \sum_iQ_i=even \end{equation} was obtained in \cite{knconiflips} for the ${\mathbb R}^{3,1}\times {\cal C}^{(flip)}$ spacetime background to admit a Type II string description with no bulk tachyons and admitting spacetime fermions. Here we show that the enlarged $Q_i^a$ charge matrix can be truncated appropriately so as to obtain a phase structure consistent with this Type II GSO projection. The final decay endpoints in Type II string theories are supersymmetric. It is worth comparing these geometries to other simpler ones, {\it e.g.}\ ${\mathbb C}^3/{\mathbb Z}_N$ orbifold singularities \cite{drmknmrp, drmkn}. In the latter, the unstable blowup modes can be mapped explicitly to localized closed string tachyon states arising in the twisted sectors of the conformal field theories describing these orbifolds. A flip transition arises when a more dominant tachyon (more negative spacetime mass) condenses during the condensation of some tachyon, thus corresponding to a more relevant operator in the GLSM turning on during the RG flow induced by some relevant operator. Therefore a careful analysis of the closed string spectrum of the orbifold conformal field theory is in principle sufficient to understand the decay structure of the singularity. Generically such unstable orbifolds decay in a cascade-like fashion to lower order orbifold singularities which might themselves be unstable, and so on. In the present context of the conifold-like spaces, such a conformal field theory description is not easy to obtain in the vicinity of the singular region (which arises in the ``middle'' of the RG flows, unlike the orbifold cases). However, since the conifold transition itself appears to be obstructed \cite{knconiflips} (see footnote 4), it would seem that one could in principle use worldsheet techniques in the early-time semiclassical regions to predict the full evolution structure. In this regard, the geometry/GLSM methods used here, aided by the structure of the residual orbifold singularities\footnote{The structure of nonsupersymmetric 3-dimensional orbifold singularities \cite{drmknmrp, drmkn} is reviewed in Appendix A.} that arise in the small resolutions, are especially powerful in obtaining an explicit analysis. The GLSM description, dovetailing beautifully with the toric geometry description, gives detailed insights into the phase structure of these singularities (see Sec.~3).
We analyze in detail some examples of singularities and exhibit a cascade-like phase structure containing lower order conifold-like singularities, including in particular the supersymmetric conifold and the $Y^{pq}$ spaces. \section{Some preliminaries on tachyons, flips and conifolds} In this section, we present some generalities on the nonsupersymmetric conifold-like singularities in question, largely reviewing results presented earlier in \cite{knconiflips}. Consider a charge matrix \begin{equation}\label{Qgen} Q = \left( \begin{array}{cccc} n_1 & n_2 & -n_3 & -n_4 \end{array} \right) \end{equation} and a ${\mathbb C}^*$ action on the complex coordinates $\Psi_i\equiv a, b, c, d$, with this charge matrix as $\Psi_i\rightarrow \lambda^{Q_i}\Psi_i, \ \lambda\in{\mathbb C}^*$. Using the redefined coordinates $a^{1\over n_1},\ b^{1\over n_2},\ c^{1\over n_3},\ d^{1\over n_4}$, we find the invariant monomials \begin{equation}\label{zin1n2n3n4} z_1=a^{1\over n_1}c^{1\over n_3} ,\ \ \ z_2=a^{1\over n_1}d^{1\over n_4} ,\ \ \ z_3=b^{1\over n_2}c^{1\over n_3} ,\ \ \ z_4=b^{1\over n_2}d^{1\over n_4}\ , \end{equation} satisfying locally \begin{equation}\label{flipconif} z_1 z_4 - z_2 z_3 = 0\ , \end{equation} showing that the space is locally the supersymmetric conifold. Globally, however, the phases $e^{2\pi i/n_k}$ induced on the $z_i$ by the independent rotations on the underlying variables $a, b, c, d$, give rise to a quotient structure on the singularity with a discrete group $\Gamma$, the coordinates $z_i$ having the identifications \begin{eqnarray}\label{Zn1n2n3n4} ( \begin{array}{cccc} z_1 & z_2 & z_3 & z_4 \end{array} )\ &\longrightarrow^{a}& ( \begin{array}{cccc} e^{2\pi i/n_1}z_1 & e^{2\pi i/n_1}z_2 & z_3 & z_4 \end{array} )\ , \nonumber\\ &\longrightarrow^{b}& ( \begin{array}{cccc} z_1 & z_2 & e^{2\pi i/n_2}z_3 & e^{2\pi i/n_2}z_4 \end{array} )\ , \nonumber\\ &\longrightarrow^{c}& ( \begin{array}{cccc} e^{2\pi i/n_3}z_1 & z_2 & e^{2\pi i/n_3}z_3 & z_4 \end{array} )\ , \\ &\longrightarrow^{d}& ( \begin{array}{cccc} z_1 & e^{2\pi i/n_4}z_2 & z_3 & e^{2\pi i/n_4}z_4 \end{array} )\ . \nonumber \end{eqnarray} Thus in general the flip conifold ${\cal C}^{(flip)}$ described by\ $Q=(\begin{array}{cccc} n_1 & n_2 & -n_3 & -n_4 \end{array})$ is the quotient \begin{equation}\label{CFquotient} {\cal C}^{(flip)} = {{\cal C}\over \prod_i {\mathbb Z}_{n_i}} \end{equation} of the supersymmetric conifold ${\cal C}$ with the action given by (\ref{Zn1n2n3n4}). As a toric variety described by this holomorphic quotient construction, this space can be described by relations between monomials of the variables $a,b,c,d$, invariant under the ${\mathbb C}^*$ action. In general, such spaces are not complete intersections of hypersurfaces, {\it i.e.}\ the number of variables minus the number of equations is not equal to the dimension of the space. The quotient structure above can be shown to obstruct the only complex structure deformation (locally given as $z_1z_4-z_2z_3=\epsilon$) of the supersymmetric conifold\footnote{For example, under the symmetry $d\rightarrow e^{2\pi i}d$ of the underlying geometry, the $z_i$ coordinates transform as in (\ref{Zn1n2n3n4}), giving a nontrivial phase $e^{2\pi i/n_4}$ to $z_1z_4-z_2z_3$ which is inconsistent with a nonzero real $\epsilon$ parameter.}: there can of course be new abstract (non-toric) deformations which may not allow any interpretation in terms of the ``upstairs'' (quotient) structure.
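The invariance of the monomials (\ref{zin1n2n3n4}) under the ${\mathbb C}^*$ action and the local equation (\ref{flipconif}) are easy to check numerically; a minimal sketch, for the illustrative choice $Q=(1,3,-2,-4)$ and generic complex values (principal branches of the fractional powers are used, with arguments small enough to avoid branch-cut wrapping):

\begin{verbatim}
# Check that the z_i are invariant under Psi_i -> lambda^{Q_i} Psi_i
# and satisfy z1 z4 - z2 z3 = 0, for illustrative Q = (1, 3, -2, -4).
import random

n1, n2, n3, n4 = 1, 3, 2, 4
a, b, c, d = [complex(random.uniform(1, 2), random.uniform(1, 2))
              for _ in range(4)]
lam = complex(1.3, 0.7)       # a generic lambda in C*

def zs(a, b, c, d):
    return (a**(1/n1) * c**(1/n3), a**(1/n1) * d**(1/n4),
            b**(1/n2) * c**(1/n3), b**(1/n2) * d**(1/n4))

z  = zs(a, b, c, d)
zl = zs(lam**n1 * a, lam**n2 * b, lam**(-n3) * c, lam**(-n4) * d)

print("|z1 z4 - z2 z3|        =", abs(z[0]*z[3] - z[1]*z[2]))   # ~ 0
print("max_i |z_i(lam) - z_i| =", max(abs(u - v) for u, v in zip(zl, z)))
\end{verbatim}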
\begin{figure} \begin{center} \epsfig{file=figflip2.eps, width=8cm} \caption{{\small The toric fan for a nonsupersymmetric conifold-like singularity along with the two small resolutions $\{e_1e_2\}, \{e_3e_4\}$, and an interior lattice point $e_5$.}} \label{figflip} \end{center} \end{figure} A toric singularity corresponding to a charge matrix $Q$ can be described, as in Figure~\ref{figflip}, by a strongly convex rational polyhedral cone\footnote{A review of toric varieties and their GLSM descriptions appears {\it e.g.}\ in \cite{morrisonplesserInstantons} (see also \cite{greeneCY}).} defined by four lattice vectors $e_i$ satisfying the relation \begin{equation}\label{fan} \sum Q_i e_i = n_1 e_1 + n_2 e_2 - n_3 e_3 - n_4 e_4 = 0 \end{equation} in a 3-dimensional $\mbox{\boldmath$N$}$ lattice. Assuming any three, say $e_1,e_2,e_3$, of the four vectors $e_i$ define a non-degenerate volume, we see using elementary 3-dimensional vector analysis that \begin{equation} (e_3-e_1)\cdot (e_2-e_1)\times(e_4-e_1) = {(\sum_i Q_i)\over n_4}\ e_1\cdot e_2\times e_3\ , \end{equation} so that the four lattice points $e_i$ are coplanar iff\ $\sum_i Q_i=0$. In this case these singularities are described as Calabi-Yau cones, corresponding to the $Y^{p,q}$ and $L^{a,b,c}$ spaces \cite{martellisparks, hanany}. By $SL(3,{\mathbb Z})$ transformations on the lattice, one can freely choose two of the $e_i$, and then find the other two consistent with the relation (\ref{fan}). Thus fixing, say, $e_3, e_4$, we find \begin{equation} e_1=(-n_2,n_3k,n_4k),\qquad e_2=(n_1,n_3l,n_4l),\qquad e_3=(0,1,0),\qquad e_4=(0,0,1)\ , \end{equation} where $k,l$ are two integers satisfying $n_1k+n_2l=1$ (assuming $n_1,n_2$ are coprime, $k,l$ always exist by the Euclidean algorithm). For simplicity, we will restrict attention to the case $n_1=1$, which is sufficient for the physics we want to describe. In this case, we choose $k=1, l=0$, so that \begin{equation}\label{coneein11} e_1=(-n_2,n_3,n_4),\qquad e_2=(1,0,0),\qquad e_3=(0,1,0),\qquad e_4=(0,0,1)\ . \end{equation} These singularities are isolated (point-like) if there are no lattice points on the ``walls'' of the toric cone\footnote{This criterion is a generalization of similar conditions for orbifolds \cite{drmknmrp}, reviewed in Appendix A, and for supersymmetric $Y^{p,q}, L^{a,b,c}$ spaces \cite{martellisparks, hanany}.}. This is true if $n_2$ is coprime with both of $n_3,n_4$, which can be seen as follows. If say $n_2,n_3$ had common factors, {\it i.e.}\ say $n_2=m_1m_2, n_3=m_1m_3$ for some factors $m_i$, then one can construct integral lattice points\ $re_1+se_4,\ 0<r,s<1$, on the $\{e_1e_4\}$ wall: for example\footnote{We mention that $\{x\}=x-[x]$ denotes the fractional part of $x$, while $[x]$ is the integer part of $x$ (the greatest integer $\leq x$). By definition, $0\leq \{x\}<1$. Then for $m,n>0$, we have $\ [{-m\over n}]=-[{m\over n}]-1\ $ and therefore $\{ {-m\over n} \}=-{m\over n}-[{-m\over n}]=1-\{ {m\over n} \}$.}, taking $r={1\over m_1}$ and $s=1-\{ {n_4\over m_1}\}$, we have\ $re_1+se_4=(-m_2,m_3,{n_4\over m_1}+s)=(-m_2,m_3,[{n_4\over m_1}]+1) \in\mbox{\boldmath$N$}$, lying on the $\{e_1 e_4\}$ wall. Furthermore, since we can always write $n_4=m_4m_1+\nu$ for some $m_4$ and $\nu=0,1,\ldots,m_1-1$, we have\ $r+s={1\over m_1}+1-\{ {n_4\over m_1}\}<1$\ if\ $n_4\neq m_4m_1$ ($\nu\neq 0$), {\it i.e.}\ the point $re_1+se_4$ lies strictly in the interior of the $\{e_1 e_4\}$ wall\ (if $n_4=m_4m_1$, the interior point $(-m_2,m_3,m_4)={1\over m_1}e_1$ exists).
Similarly, if $n_2,n_4$ have common factors, then there are lattice points in the interior of the $\{e_1 e_3\}$ wall. Note that if $n_3,n_4$ have common factors, there potentially are lattice points on the internal $\{e_1,e_2\}$ wall. There is a nice description of the physics of such a geometry as the Higgs branch of the moduli space of a $U(1)$ gauged linear sigma model admitting $(2,2)$ worldsheet supersymmetry with four scalar superfields $\Psi\equiv \phi_1,\phi_2,\phi_3,\phi_4$, and a Fayet-Iliopoulos (real) parameter $r$. The fields $\Psi$ transform under $U(1)$ gauge transformations with the charge matrix $Q_i$ as \begin{equation}\label{U1gt} \Psi_i \rightarrow e^{i Q_i\beta} \Psi_i, \qquad\qquad Q_i = (n_1, n_2, -n_3, -n_4)\ , \end{equation} $\beta$ being the gauge parameter. The action for the GLSM is (using conventions of \cite{wittenphases, morrisonplesserInstantons}) \begin{equation} S = \int d^2 z\ \biggl[ d^4 \theta\ \biggl( {\bar \Psi_i} e^{2Q_i V} \Psi_i - {1\over 4e^2} {\bar \Sigma} \Sigma \biggr) + \mathop{\rm Re}\biggl( i t\int d^2 {\tilde \theta}\ \Sigma \biggr) \biggr]\ , \end{equation} where $t = ir + {\theta\over 2\pi}$, $\theta$ being the $\theta$-angle in $1+1$-dimensions, and $e$ being the gauge coupling. The twisted chiral superfield $\Sigma$ (whose bosonic components include a complex scalar $\sigma$) represents the field-strength for the gauge field. The classical vacuum structure can be found from the bosonic potential \begin{equation}\label{bospot} U = {D^2\over 2e^2} + 2{\bar\sigma}\sigma\sum_i Q_i^2 |\Psi_i|^2\ . \end{equation} Then $U=0$ requires $D=0$: solving this for $r\neq 0$ gives expectation values for the $\Psi_i$, which Higgs the gauge group down to some discrete subgroup and lead to mass terms for the $\sigma$ whose expectation value thus vanishes. The classical vacuum structure is then described by the D-term equation \begin{equation} -{D\over e^2} = \sum_i Q_i |\Psi_i|^2 = n_1|\phi_1|^2 + n_2|\phi_2|^2 - n_3|\phi_3|^2 - n_4|\phi_4|^2 = r\ //U(1)\ , \end{equation} from which one can realize the two small resolutions (K\"ahler blowups by 2-cycles) as rank-2 bundles over ${\mathbb P}^1_{\pm}$, as manifested by the GLSM moduli space for the single FI parameter ranges $r\gg 0$ and $r\ll 0$. These small resolutions are described in the toric fan by the $\{e_1,e_2\}$ and $\{e_3,e_4\}$ subdivisions: {\it e.g.}\ the $\{e_3,e_4\}$ subdivision giving residual subcones $C(0;e_2,e_3,e_4),\ C(0;e_1,e_3,e_4)$, is described by the coordinate charts $\{(\phi_2,\phi_3,\phi_4),\ (\phi_1,\phi_3,\phi_4)\}$.
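The toric data above can be checked mechanically; a minimal sketch, again for the illustrative choice $Q=(1,3,-2,-4)$, building the $e_i$ of (\ref{coneein11}), testing the coplanarity criterion, and computing the $\mbox{\boldmath$N$}$ lattice volumes of the subcones arising in the two small resolutions:

\begin{verbatim}
# Toric bookkeeping for Q = (n1, n2, -n3, -n4) with n1 = 1: build the
# e_i, check sum_i Q_i e_i = 0 and coplanarity, and compute N-lattice
# volumes of the small-resolution subcones.
import numpy as np

Q = np.array([1, 3, -2, -4])           # illustrative charges
n2, n3, n4 = 3, 2, 4
e = np.array([[-n2, n3, n4],           # e_1
              [1, 0, 0], [0, 1, 0], [0, 0, 1]])   # e_2, e_3, e_4
assert (Q @ e == 0).all()              # the defining relation

def vol(i, j, k):                      # N-lattice volume of C(0; ei,ej,ek)
    return abs(round(np.linalg.det(e[[i, j, k]].astype(float))))

coplanar = round(np.linalg.det((e[[2, 1, 3]] - e[0]).astype(float))) == 0
print("coplanar (susy):", coplanar, "| sum Q_i =", Q.sum())

V_plus  = vol(1, 2, 3) + vol(0, 2, 3)  # {e3 e4} subdivision -> P^1_+
V_minus = vol(0, 1, 2) + vol(0, 1, 3)  # {e1 e2} subdivision -> P^1_-
print("V_+ =", V_plus, "V_- =", V_minus, "V_+ - V_- =", V_plus - V_minus)
\end{verbatim}

For this choice the output reproduces $V_+=n_1+n_2=4$, $V_-=n_3+n_4=6$ and hence $V_+-V_-=\sum_iQ_i=-2$.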
The FI parameter $r$ has a 1-loop renormalization given by \begin{equation}\label{rflow} r = \biggl({\sum_iQ_i\over 2\pi}\biggr) \log {\mu\over \Lambda} = \left({\Delta V\over 2\pi}\right) \log {\mu\over \Lambda}\ , \end{equation} showing that for $\sum_iQ_i\neq 0$, the GLSM RG flow drives the system away from the shrinking 2-sphere ${\mathbb P}^1_-$, towards the phase corresponding to the growing 2-sphere ${\mathbb P}^1_+$.\footnote{This has smaller $\mbox{\boldmath$N$}$ lattice volume: the residual subcone volumes for the two small resolutions are\\ ${\mathbb P}^1_+: V_+=V(0;e_2,e_3,e_4)+V(0;e_1,e_3,e_4)=n_1+n_2 , \ \ {\mathbb P}^1_-: V_-=V(0;e_1,e_2,e_3)+V(0;e_1,e_2,e_4)=n_4+n_3 , $\\ giving the difference $\Delta V = V_+ - V_- = \sum_iQ_i$.} This dynamical evolution process executing a flip transition mediates mild dynamical topology change since the blown-down 2-cycle ${\mathbb P}^1_-$ and blown-up 2-cycle ${\mathbb P}^1_+$ have distinct intersection numbers with various cycles in the geometry.\\ The geometric structure of the residual coordinate charts can be gleaned from the toric fan. From the Smith normal form algorithm of \cite{drmknmrp} (or otherwise), we can see that the various residual subcones correspond to the orbifolds\ $C(0;e_1,e_2,e_3)\equiv {\mathbb Z}_{n_4}(1,n_2,-n_3)$, $C(0;e_1,e_2,e_4)\equiv {\mathbb Z}_{n_3}(1,n_2,-n_4)$, and $C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_{n_2}(1,-n_3,-n_4)$, up to shifts of the orbifold weights by the respective orbifold orders, since these cannot be determined unambiguously by the Smith algorithm. Using this, one can see that a consistent Type II GSO projection \begin{equation}\label{gso} \Delta n=\sum Q_i = n_1 + n_2 - n_3 - n_4 = even \end{equation} can be assigned to the conifold-like singularity in question, from the known Type II GSO projection $\sum k_i=even$ \cite{drmknmrp} on the ${\mathbb C}^3/{\mathbb Z}_M (k_1,k_2,k_3)$ residual orbifolds, if we make the reasonable assumption that a GSO projection defined for the geometry is not broken along the RG flows describing the decay channels. In what follows, we will examine the phase structure of these singularities in greater detail using their description in terms of toric geometry and GLSMs. In particular we exhibit a cascade-like phase structure for a singularity with given charge matrix $Q$, containing lower order singularities $Q'$ with smaller $\sum_i Q_i'$, consistent with the above GSO projection. \section{The phases of unstable conifolds} In this section, we will study the full phase structure of the unstable conifold-like singularities in question using GLSMs and toric geometry techniques. The prime physical observation is that the intermediate endpoint geometries arising in the small resolution decay channels above can contain additional blowup modes (interpreted as twisted sector tachyons if these are residual orbifold singularities), which further continue the evolution of the full geometry. Since these additional blowup modes are present in the original conifold-like singularity, there can in principle exist new decay channels corresponding to first blowing up these modes. Technically this is because the toric fan for such a singularity potentially contains in its interior one or more lattice points, since the residual subcones are potentially singular if their $\mbox{\boldmath$N$}$ lattice volumes are greater than unity\footnote{We recall that the $\mbox{\boldmath$N$}$ lattice volume of an orbifold-like cone gives the order of the orbifold singularity.}. 
Thus in addition to the small resolution subdivisions \cite{knconiflips} reviewed above, the cone $C(0;e_1,e_2,e_3,e_4)$ defining the conifold-like singularity can also be subdivided using these interior lattice points. In the case of orbifold singularities, the spacetime masses of tachyons, corresponding to worldsheet R-charges of the appropriate twisted sector operators in the orbifold conformal field theory, effectively grade the decay channels. Since there is no such tractable conformal field theory description for the conifold-like geometries themselves (in the vicinity of the singularity), it is difficult to a priori identify their most dominant evolution channels. However one can efficiently resort to GLSM renormalization group techniques (developed for unstable 3-dimensional orbifolds in \cite{drmkn}) which essentially describe the full phase structure of these geometries and the possible evolution patterns to the final stable endpoints. We will first discuss the toric geometry description and then describe some generalities of the corresponding GLSM. Consider a singularity with charge matrix $Q$ described by the cone defined by the $e_i,\ i=1,\ldots,4$, with one relation\ $\sum_iQ_ie_i=0$ in the 3-dimensional $\mbox{\boldmath$N$}$ lattice. For simplicity, we restrict attention to singularities with $n_1=1$, {\it i.e.}\ of the form\ $Q=(\begin{array}{cccc} 1 & n_2 & -n_3 & -n_4 \end{array})$, with the $e_i$ given by (\ref{coneein11}). Then as described in the previous section, there always exist two topologically distinct (asymmetric) small resolutions corresponding to the subdivisions $\{e_1e_2\}$ and $\{e_3e_4\}$: the subdivision $\{e_3e_4\}$ gives a less singular residual geometry (smaller $\mbox{\boldmath$N$}$ lattice subcone volumes) if $n_1+n_2<n_3+n_4$. We can obtain detailed insight into the structure of the fan by taking recourse to the structure of the ${\mathbb C}^3/{\mathbb Z}_N$ orbifold singularities arising in these small resolution subdivisions using the techniques and results of \cite{drmknmrp}, reviewed in Appendix A. The basic point is that there exists a precise correspondence between operators in the orbifold conformal field theory and $\mbox{\boldmath$N$}$ lattice points in the interior of ({\it i.e.}\ on or below the affine hyperplane $\Delta$, described in Appendix A; see Figure~\ref{figorb}) the toric cone representing the orbifold. Thus $\mbox{\boldmath$N$}$ lattice points in a given subcone of the toric cone, corresponding to specific blowup modes of the singularity, precisely map to tachyons or moduli arising in twisted sectors of the orbifold conformal field theory corresponding to the subcone. Now by an interior lattice point of the conifold-like cone $C(0;e_1,e_2,e_3,e_4)$ (see Figure~\ref{figflip}), we mean lattice points in the interior of the subcone $C(0;e_1,e_3,e_4)$ arising in the stable small resolution (for $n_1+n_2<n_3+n_4$). Any other point in the interior of say subcones $C(0;e_1,e_2,e_3)$ or $C(0;e_1,e_2,e_4)$ but not $C(0;e_1,e_3,e_4)$ is effectively equivalent to an irrelevant operator from the GLSM point of view. Now if there exists a lattice point $e_5$ in the interior of the cone $C(0;e_1,e_2,e_3,e_4)$, then there are two independent relations between these five vectors $e_i,\ i=1,\ldots,5$ in the 3-dimensional lattice $\mbox{\boldmath$N$}$: these can be chosen as a basis for all possible relations between these vectors. 
These relations \begin{equation} \sum_i Q_i^a e_i = 0 \end{equation} define a charge matrix $Q_i^a$: changing the basis of relations amounts to changing a row of $Q_i^a$ to a rational linear combination of the two rows also having integral charges. Similarly, $n$ extra lattice points in the interior of the cone give $n+1$ relations between the $e_i,\ i=1,\ldots,4+n$, thus defining a $(n+1)\times (4+n)$ charge matrix $Q_i^a$. Specifying the structure of this $Q_i^a$ is equivalent to giving all the information contained in the toric fan of the singularity. For example, if there exists a single extra lattice point $e_5$ in the interior of the subcone $C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_{n_2}$, then there is a relation of the form\ $e_5={1\over n_2} (m_1e_1 + m_3e_3 + m_4e_4),\ m_i>0$, defining a row $Q_i^2=(\begin{array}{ccccc} m_1 & 0 & m_3 & m_4 & -n_2 \end{array})$. This point corresponds to a tachyon if $\sum_im_i<n_2$. Thus the combinatorics of $Q_i^a$ determines the geometry of the toric fan, {\it e.g.}\ whether $e_5$ is contained in the intersection of subcones say $C(0;e_1,e_3,e_4)$ and $C(0;e_1,e_2,e_3)$, and so on. Furthermore in Type II theories, there is a nontrivial GSO projection that acts nontrivially on these lattice points, preserving only some of them physically: this may be thought of as arising from the GSO projections in the orbifold theories corresponding to the subcones arising under the small resolutions. Thus an interior lattice point may not in fact correspond to any blowup mode that actually exists in the physical theory. A simple way to encode the consequences of this GSO projection is to ensure that each row of the charge matrix $Q_i^a$ in the GLSM for the physical Type II theory sums to an even integer \begin{equation} \sum_iQ_i^a=even\ , \qquad \ a=1,\ldots,n+1\ . \end{equation} It is easy to see that this Type II truncation of $Q_i^a$ retaining only rows with even sum is consistent (and we will elaborately describe this in examples later): {\it e.g.}\ in the example above, the point $e_5\in C(0;e_1,e_3,e_4)$ given by $e_5={1\over n_2} (m_1e_1 + m_3e_3 + m_4e_4)$ defines a new conifold-like subcone $C(0;e_5,e_2,e_3,e_4)$, corresponding to a charge matrix $Q'$, which admits a Type II GSO projection iff $\sum_iQ_i'=even$. This constraint effectively arises from the GSO projection on the point $e_5$ thought of as a twisted sector state in the orbifold corresponding to the subcone $C(0;e_1,e_3,e_4)$. The full phase structure of such a geometry is obtained by studying an enlarged GLSM with gauge group $U(1)^{n+1}$ with $4+n$ superfields $\Psi_i$ and $n+1$ Fayet-Iliopoulos parameters $r_a$. Much of the remainder of this section is a direct generalization of the techniques described in \cite{drmkn} to the conifold-like singularities in question here: we present a detailed discussion primarily for completeness. The action of such a GLSM (in conventions of \cite{wittenphases, morrisonplesserInstantons}) is \begin{equation} S = \int d^2 z\ \biggl[ d^4 \theta\ \biggl( {\bar \Psi_i} e^{2Q_i^a V_a} \Psi_i - {1\over 4e_a^2} {\bar \Sigma_a} \Sigma_a \biggr) + \mathop{\rm Re}\biggl( i t_a\int d^2 {\tilde \theta}\ \Sigma_a \biggr) \biggr]\ , \end{equation} where summation on the index $a=1,\ldots, n+1$ is implied. The\ $t_a = ir_a + {\theta_a\over 2\pi}$ \ are Fayet-Iliopoulos parameters and $\theta$-angles for each of the $n+1$ gauge fields ($e_a$ being the gauge couplings). 
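Stepping back to the lattice combinatorics above, the steps of locating interior points, reading off their relation rows, and applying the even-sum truncation are mechanical; a minimal sketch, for a different illustrative charge matrix $Q=(1,5,-2,-2)$ chosen so that the scan finds a sub-marginal ($\sum_im_i<n_2$) point (the scan window is small and not exhaustive):

\begin{verbatim}
# Scan a small window for lattice points interior to C(0; e1, e3, e4),
# for illustrative Q = (1, 5, -2, -2); read off n2 e5 = m1 e1 + m3 e3
# + m4 e4, the tachyon test sum(m) < n2, and the even-sum truncation.
import itertools
import numpy as np

n2, n3, n4 = 5, 2, 2
E = np.array([[-n2, n3, n4], [0, 1, 0], [0, 0, 1]], float).T  # e1,e3,e4

for p in itertools.product(range(1 - n2, 0), range(0, 2), range(0, 2)):
    m = n2 * np.linalg.solve(E, np.array(p, float))   # (m1, m3, m4)
    if np.allclose(m, np.round(m)) and (np.round(m) > 0).all():
        m1, m3, m4 = (int(x) for x in np.round(m))
        row = (m1, 0, m3, m4, -n2)                    # candidate Q_i^a row
        print(p, "-> row", row,
              "| tachyon:", m1 + m3 + m4 < n2,
              "| Type II kept:", sum(row) % 2 == 0)
\end{verbatim}

For this example the scan finds the point $(-1,1,1)$ with row $(1,0,3,3,-5)$, kept by the even-sum truncation, and the tachyonic point $(-2,1,1)$ with row $(2,0,1,1,-5)$, which the Type II truncation removes.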
The twisted chiral superfields $\Sigma_a$ (whose bosonic components are complex scalars $\sigma_a$) represent field-strengths for the gauge fields. The action of the $U(1)^{n+1}$ gauge group on the $\Psi_i$ is given in terms of the $(n+1)\times (4+n)$ charge matrix $Q_i^a$ above as \begin{equation}\label{Qiagen} \Psi_i \rightarrow e^{i Q_i^a\lambda_a}\ \Psi_i\ , \qquad Q_i^a = \left( \begin{array}{cccccc} n_1 & n_2 & -n_3 & -n_4 & 0 & \ldots \\ 0 & q_2^2 & -q_3^2 & -q_4^2 & q_5^2 & \ldots \\ & & \cdot & & & \ldots \\ & & \cdot & & & \ldots \\ \end{array} \right), \qquad \ a=1,\ldots,n+1\ . \end{equation} Such a charge matrix only specifies the $U(1)^{n+1}$ action up to a finite group, due to the possibility of a ${\mathbb Q}$-linear combination of the rows of the matrix also having integral charges. The specific form of $Q_i^a$ is chosen to conveniently illustrate specific geometric substructures: for example, the second row above, with $q_1^2=0$, describes the conifold-like subcone $C(0;e_2,e_3,e_4,e_5)$. The variations of the $n+1$ independent FI parameters control the vacuum structure of the theory. The space of classical ground states of this theory can be found from the bosonic potential \begin{equation}\label{bospotgen} U = \sum_a {(D_a)^2\over 2e_a^2} + 2\sum_{a,b} {\bar \sigma}_a \sigma_b \sum_i Q_i^a Q_i^b |\Psi_i|^2\ . \end{equation} Then $U=0$ requires $D_a=0$: solving these for $r_a\neq 0$ gives expectation values for the $\Psi_i$, which Higgs the gauge group down to some discrete subgroup and lead to mass terms for the $\sigma_a$ whose expectation values thus vanish. The classical vacua of the theory are then given in terms of solutions to the D-term equations \begin{equation} {-D_a\over e^2} = \sum_i Q_i^a |\Psi_i|^2 - r_a = 0\ , \qquad a=1,\ldots,n+1\ . \end{equation} At the generic point in $r$-space, the $U(1)^{n+1}$ gauge group is completely Higgsed, giving collections of coordinate charts that characterize in general distinct toric varieties. In other words, this $(n+1)$-parameter system admits several ``phases'' (convex hulls in $r$-space, defining the secondary fan) depending on the values of the $r_a$. At boundaries between these phases where some (but not all) of the $r_a$ vanish, some of the $U(1)$s survive giving rise to singularities classically. Each phase is an endpoint since if left unperturbed, the geometry can remain in the corresponding resolution indefinitely (within this noncompact approximation): in this sense, each phase is a fixed point of the GLSM RG flow. However some of these phases are unstable while others are stable, in the sense that fluctuations ({\it e.g.}\ blowups/flips of cycles stemming from instabilities) will cause the system to run away from the unstable phases towards the stable ones. This can be gleaned from the 1-loop renormalization of the FI parameters \begin{equation}\label{flow} r_a = \bigg({\sum_iQ_i^a\over 2\pi}\bigg) \log {\mu\over \Lambda}\ , \end{equation} where $\mu$ is the RG scale and $\Lambda$ is a cutoff scale where the $r_a$ are defined to vanish. A generic linear combination of the gauge fields coupling to a linear combination $\sum_a\alpha_ar_a$ of the FI parameters, the $\alpha_a$ being arbitrary real numbers, has a 1-loop running whose coefficient vanishes if \begin{equation}\label{ra1loop} \sum_{a=1}^{n+1} \sum_{i=1}^{n+4} \alpha_a Q_i^a = 0\ , \end{equation} in which case the linear combination is marginal.
This equation defines a codimension-one hyperplane perpendicular to a ray, called the Flow-ray, emanating from the origin and passing through the point $(-\sum_i Q_i^1, -\sum_i Q_i^2, \ldots, -\sum_i Q_i^{n+1})$ in $r$-space, which has real dimension $n+1$. Using the redefinition\ $ {Q_i^a}'\equiv(\sum_iQ_i^1)Q_i^a-(\sum_iQ_i^a)Q_i^1 , \ a\neq 1$, we see that\ $\sum_i{Q_i^a}'=(\sum_iQ_i^1)(\sum_iQ_i^a)-(\sum_iQ_i^a) (\sum_iQ_i^1)=0$, \ for $a\neq 1$, \ so that the FI parameters coupling to these redefined $n$ gauge fields have vanishing 1-loop running. Thus there is a single relevant direction (along the flow-ray) and an $n$-dimensional hyperplane of the $n$ marginal directions in $r$-space. By studying various linear combinations $\sum_a\alpha_ar_a$, we see that the 1-loop RG flows drive the system along the single relevant direction to the phases in the large $r$ regions of $r$-space, {\it i.e.}, $r_a\gg 0$ (if none of the $r_a$ is marginal), that are adjacent to the Flow-ray \ $F\equiv(-\sum_iQ_i^1,-\sum_iQ_i^2,\ldots,-\sum_iQ_i^{n+1})$,\ or contain it in their interior: these are the stable phases. Reversing this logic, we see that the direction precisely opposite to the Flow-ray, {\it i.e.}\ $-F\equiv(\sum_iQ_i^1,\sum_iQ_i^2,\ldots,\sum_iQ_i^{n+1})$, defines the ultraviolet of the theory. This ray will again lie either in the interior of a single convex hull or adjoin multiple convex hulls. The ray $-F$ corresponds to the maximally unstable direction, which is generically the unstable small resolution ${\mathbb P}^1_-$ (see the examples that follow). This is because any of the residual localized orbifold singularities on this ${\mathbb P}^1_-$ locus can be further resolved (if unstable) by turning on the corresponding FI parameter, a process that proceeds along the Flow-ray direction. We restrict attention to the large $r_a$ regions, thus ignoring worldsheet instanton corrections: this is sufficient for understanding the phase structure, and consistent for initial values of $r_a$ whose components in the marginal directions lie far from the center of the marginal $n$-plane. The 1-loop renormalization of the FI parameters can be expressed \cite{wittenphases, wittenIAS, morrisonplesserInstantons} in terms of a perturbatively quantum-corrected twisted chiral superpotential for the $\Sigma_a$ for a general $n+1$-parameter system, obtained by considering the large-$\sigma$ region in field space and integrating out those scalars $\Psi_i$ that are massive here (and their expectation values vanish energetically). This leads to the modified potential \begin{equation}\label{bospotGen} U(\sigma) = {e^2\over 2} \sum_{a=1}^{n+1} \bigg| i{\hat \tau}_a - {1\over 2\pi}\sum_{i=1}^{4+n} Q_i^a\, (\log (\sqrt{2} \sum_{b=1}^{n+1} Q_i^b \sigma_b/\Lambda) + 1) \bigg|^2\ . \end{equation} The singularities predicted classically at the locations of the phase boundaries arise from the existence of low-energy states at large $\sigma$. The physics for the nonsupersymmetric cases here is somewhat different from the cases where $\sum_iQ_i^a=0$ for all $a$, as discussed in general in \cite{wittenphases, wittenIAS, morrisonplesserInstantons} (and for orbifold flips in \cite{drmkn}). Consider the vicinity of such a singularity at a phase boundary but far from the (fully) singular region where all $r_a$ are zero, and focus on the single $U(1)$ (with say charges $Q_i^1$) that is unbroken there ({\it i.e.}\ we integrate out the other $\sigma_a,\ a\neq 1$, by setting them to zero).
Now if $\sum_iQ_i^1=0$ ({\it i.e.}\ unbroken spacetime supersymmetry), then there is a genuine singularity when $U(\sigma)={e^2\over 2}|i{\hat\tau}_a-{1\over 2\pi} \sum_iQ_i^1\log|Q_i^1||^2=0$, and if $\sum_iQ_i^a=0$ for all $a$, this argument can be applied to all of the $U(1)$s. However for the nonsupersymmetric cases here, we have $\sum_iQ_i^a\neq 0$: so if say $\sum_iQ_i^1\neq 0$ (with the other $Q_i^a$ redefined to ${Q_i^a}'$ with $\sum_i{Q_i^a}'=0$), then along the single relevant direction where $\sum_iQ_i^1\neq 0$, the potential energy has a $|\log \sigma_1|^2$ growth. Thus the field space accessible to very low-lying states is effectively compact (for finite worldsheet volume) and there is no singularity for any $r_a,\theta_a$, along the RG flow: in other words, the RG flow is smooth along the relevant direction for all values of $\tau_1$, and the phase boundaries do not indicate singularities. Thus the overall physical picture is the following: the generic system in question begins life at early times in the ultraviolet phase, typically the unstable 2-sphere ${\mathbb P}^1_-$ which has a tendency to shrink. If this 2-sphere size is large, then this is an approximately classical phase of the theory, with the shrinking being very slow initially. This ${\mathbb P}^1_-$ typically has residual localized orbifold singularities which are widely separated for a large ${\mathbb P}^1_-$. As the 2-sphere shrinks, tachyons localized at these orbifolds might condense resolving the latter by 4-cycle blowup modes. As the system evolves, these various cycles interact and recombine potentially via several topology-changing flip transitions until the geometry ultimately settles down into any of the stable phases (which typically have distinct topology). A stable phase typically consists of the stable 2-sphere ${\mathbb P}^1_+$ growing in time, with the various possible orbifold singularities on its locus resolving themselves by tachyon condensation\footnote{Note that these conifold-like singularities always contain the small resolutions which are K\"ahler blowup modes. However since the Type II GSO projection only preserves some of the K\"ahler blowup modes in the geometry, some of the residual endpoint orbifold singularities arising under the small resolutions could be ``string-terminal'' (as described in \cite{drmknmrp}). In other words, these residual orbifolds cannot be completely resolved solely by K\"ahler blowup modes (corresponding to GSO-preserved twisted sector tachyons/moduli in the chiral ring). Indeed since these residual orbifolds can now be described by conformal field theory, we see the existence of non-K\"ahler blowup modes corresponding to twisted sector tachyons arising in any of the various (anti-)chiral rings. Thus since in the Type II theory, there is no (all-ring) terminal ${\mathbb C}^3/{\mathbb Z}_N$ orbifold singularity \cite{drmknmrp}, the final decay endpoints of the conifold-like singularity are smooth.}. The transitions occurring in the course of this evolution between various phases are smooth as discussed above. In what follows, we describe two 2-parameter examples in some detail illustrating the above generalities: one corresponds to a singularity that has a unique late-time endpoint (within this 2-parameter approximation), while the other includes the supersymmetric conifold in its final endpoints, thus exhibiting infrared moduli representing the flop between the two topologically distinct small resolutions of the latter. 
Before doing so, we mention a simple example of a singularity which has no interior lattice point (as defined earlier), and evolves to its stable small resolution. The singularity $Q=(\begin{array}{cccc} 1 & 1 & -1 & -3 \end{array})$ is the simplest unstable Type II conifold-like singularity. The stable small resolution given by the subdivision $\{e_3e_4\}$ completely resolves the singularity, since the subcone $C(0,e_1,e_3,e_4)$, potentially an orbifold singularity, is in fact smooth. The other small resolution gives rise to the orbifold subcone $C(0,e_1,e_2,e_3)\equiv {\mathbb Z}_3 (1,1,2)$ which is effectively supersymmetric since its only GSO-preserved blowup mode is a marginal twisted sector state arising in one of the anti-chiral rings (the subcone $C(0,e_1,e_2,e_4)$ is smooth). \subsection{Decays to a single stable phase} Consider the singularity $Q=(\begin{array}{cccc} 1 & 7 & -5 & -19 \end{array})$\ (see Figure~\ref{fig17519}). The subcones can be identified as the following Type II orbifolds: \begin{equation} C(0;e_1,e_2,e_3)\equiv {\mathbb Z}_{19}(1,7,14)\ , \ C(0;e_1,e_2,e_4)\equiv {\mathbb Z}_5(1,2,1)\ , \ C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_7(1,2,-5)\ , \end{equation} while $C(0;e_2,e_3,e_4)$ is of course smooth. It is straightforward to see that \begin{equation} e_5\equiv (-1,1,3) = {1\over 7} (e_1 + 2e_3 + 2e_4)\ \ \in\ \ C(0;e_1,e_3,e_4) \end{equation} corresponds to the tachyon in the twisted sector $j=1$, having R-charge $R_j=({1\over 7},{2\over 7},{2\over 7})$\ (GSO preserved since $E_j=-1$ using (\ref{gsoEj})). Including this lattice point gives the charge matrix \begin{equation}\label{Qia17519} Q_i^a = \left( \begin{array}{ccccc} 1 & 7 & -5 & -19 & 0 \\ 0 & 1 & -1 & -3 & 1 \\ \end{array} \right)\ , \end{equation} where we have used the conifold-like relation\ $e_2+e_5-e_3-3e_4=0$ to define the second row. Note $\sum_iQ_i^a=even,\ a=1,2$, incorporating the GSO projection. One could equally well have defined the second row in $Q_i^a$ as $(\begin{array}{ccccc} 1 & 0 & 2 & 2 & -7 \end{array})$ noticing as above that $e_5\in C(0;e_1,e_3,e_4)$: this does not change the physics. To understand the phase structure of this theory, let us analyze the D-term equations (suppressing the gauge couplings) \begin{eqnarray}\label{Dterms17519} -D_1 &=& |\phi_1|^2 + 7 |\phi_2|^2 - 5 |\phi_3|^2 - 19 |\phi_4|^2 - r_1 = 0\ , \nonumber\\ -D_2 &=& |\phi_2|^2 + |\phi_5|^2 - |\phi_3|^2 - 3 |\phi_4|^2 - r_2 = 0\ . \end{eqnarray} \begin{figure} \begin{center} \epsfig{file=fig17519.eps, width=11cm} \caption{{\small Phases of\ $Q=(1\ 7\ -5\ -19)$, with the toric subdivisions and corresponding coordinate charts in each phase, as well as the RG flow directions and the physics of each phase boundary.}} \label{fig17519} \end{center} \end{figure} There are three other auxiliary D-terms too: \begin{eqnarray}\label{D'17519} -D_2' &=& -D_1+7D_2 = |\phi_1|^2 + 2 |\phi_3|^2 + 2 |\phi_4|^2 - 7 |\phi_5|^2 - (r_1-7r_2) = 0\ , \nonumber\\ -D_3' &=& -D_1+5D_2 = |\phi_1|^2 + 2 |\phi_2|^2 - 4 |\phi_4|^2 - 5 |\phi_5|^2 - (r_1-5r_2) = 0\ , \\ -D_4' &=& -3D_1+19D_2 = 3 |\phi_1|^2 + 2 |\phi_2|^2 + 4 |\phi_3|^2 - 19 |\phi_5|^2 - (3r_1-19r_2) = 0\ . \nonumber \end{eqnarray} These are obtained by looking at different linear combinations of the two $U(1)$s that do not couple to some subset of the chiral superfields: {\it e.g.}\ the $U(1)$s giving $D_2'$ and $D_3'$ do not couple to $\phi_2$ and $\phi_3$ respectively. 
These D-terms show that the five rays drawn from the origin $(0,0)$ out through the points $\phi_1\equiv (1,0),\ \phi_2\equiv (7,1),\ \phi_3\equiv (-5,-1),\ \phi_4\equiv (-19,-3),\ \phi_5\equiv (0,1)$, are phase boundaries: {\it e.g.}\ at the boundary $(7,1)$, the $U(1)$ coupling to $r_1-7r_2$ is unHiggsed, signalling a classical singularity due to the existence of a new $\sigma$-field direction. Before analyzing the phase structure, let us gain some insight into the geometry of this singularity. In the holomorphic quotient construction, introduce coordinates $x_i,\ i=1,\ldots,5$, corresponding to the lattice points $e_i$ subject to the quotient action $x_i\rightarrow\lambda^{Q_i^a}x_i$ with $Q_i^a$ given in (\ref{Qia17519}). Then the divisors $x_i=0,\ i=1,2,3,4$, are noncompact divisors, while the divisor $x_5=0$ is a compact one, whose structure can be gleaned as follows: the $({\mathbb C}^*)^2$ action is \begin{eqnarray} g_1:\ (x_1,x_2,x_3,x_4,x_5)\ &\sim& (\lambda x_1,\lambda^7 x_2, \lambda^{-5} x_3,\lambda^{-19} x_4,x_5)\ , \nonumber\\ g_2:\ (x_1,x_2,x_3,x_4,x_5)\ &\sim& (x_1,\lambda x_2,\lambda^{-1} x_3, \lambda^{-3} x_4,\lambda x_5)\ , \end{eqnarray} so that on $x_5=0$, the group element $g_1g_2^{-7}(\lambda)$ has action \begin{equation} (x_1,x_2,x_3,x_4,0)\ \sim\ (\lambda x_1, x_2, \lambda^2 x_3,\lambda^2 x_4,0)\ . \end{equation} When the divisor is of finite size, we expect a smooth non-degenerate description of the 3-dimensional space, to obtain which we must exclude the set $(x_1,x_3,x_4)=(0,0,0)$ \footnote{More formally, in the fan $\{\{e_1,e_5,e_3\}, \{e_1,e_5,e_4\}, \{e_3,e_4,e_5\}\}$, corresponding to the complete subdivision by $e_5$, we exclude the intersection of coordinate hyperplanes $x_1=x_3=x_4=0$ since $e_1,e_3,e_4$, are not contained in any cone of the fan.}. This then yields a weighted projective space ${\mathbb C}{\mathbb P}^2_{1,2,2}$ described by the coordinate chart $(x_1,x_3,x_4)$, with $x_2$ being a third coordinate. From the symplectic quotient point of view, we see from the D-term $D_2'$ that the divisor $x_5=0$, obtained by setting $\phi_5=0$, is \begin{equation} \left\{|\phi_1|^2 + 2 |\phi_3|^2 + 2 |\phi_4|^2 = r_1-7r_2\right\} //U(1)\ , \end{equation} which is ${\mathbb C}{\mathbb P}^2_{1,2,2}$, with $(\phi_1,\phi_3,\phi_4)=(0,0,0)$ being an excluded set for nonzero K\"ahler class, {\it i.e.}\ $r_1-7r_2>0$. Now we will illustrate how the classical moduli space of the GLSM obtained from these D-term equations reproduces the phase diagram for this theory, shown in Figure~\ref{fig17519}. In the convex hull $\{\phi_1,\phi_2\}$, {\it i.e.}\ $0<r_2<{1\over 7}r_1$, $D_2,D_2'$ imply that at least one element of each of the sets $\{\phi_2,\phi_5\}$ and $\{\phi_1,\phi_3,\phi_4\}$ must acquire a nonzero vacuum expectation value: the D-term equations do not have solutions with all of these simultaneously zero, which is the excluded set in this phase. Now in the region of moduli space where $\phi_2,\phi_1$ acquire vevs, the light fields at low energies are $\phi_3,\phi_4,\phi_5$, which yield a description of the coordinate chart $(\phi_3,\phi_4,\phi_5)$. If $\phi_2,\phi_3$ acquire vevs, the light fields describe the chart $(\phi_1,\phi_4,\phi_5)$. Similarly we obtain the coordinate charts $(\phi_1,\phi_3,\phi_5)$ and $(\phi_2,\phi_3,\phi_4)$ if $\phi_2,\phi_4$ and $\phi_1,\phi_5$ acquire vevs respectively. Note that each of these collections of nonzero vevs is also consistent with the other D-terms $D_1,D_3',D_4'$.
Now although one might imagine a coordinate chart $(\phi_1,\phi_2,\phi_4)$ from $\phi_5,\phi_3$ alone acquiring nonzero vevs, it is easy to see that this is not possible: for if true, $D_2,D_2'$ imply $|\phi_5|^2>|\phi_3|^2$ and $|\phi_3|^2>{7\over 2}|\phi_5|^2$, which is a contradiction. Similarly one sees that the possible chart $(\phi_1,\phi_2,\phi_3)$ from $\phi_5,\phi_4$ alone acquiring vevs is disallowed in this phase. Thus we obtain the coordinate charts $(\phi_3,\phi_4,\phi_5)$, $(\phi_1,\phi_4,\phi_5)$, $(\phi_1,\phi_3,\phi_5)$ and $(\phi_2,\phi_3,\phi_4)$ in this phase of the GLSM. A similar analysis of the moduli space of the GLSM can be carried out in each of the other four phases to obtain all the possible coordinate charts characterizing the geometry of the toric variety in that phase. A simple operational method \cite{drmkn} to realize the results of the above analysis of the D-terms for the phase boundaries and the phases of the GLSM is the following: read off each column in $Q_i^a$ given in (\ref{Qia17519}) as a ray drawn out from the origin $(0,0)$ in $(r_1,r_2)$-space, representing a phase boundary. Then the various phases are given by the convex hulls\footnote{A 2-dimensional convex hull is the interior of a region bounded by two rays emanating out from the origin such that the angle subtended by them is less than $\pi$.} bounded by any two of the five phase boundaries represented by the rays $\phi_1\equiv (1,0),\ \phi_2\equiv (7,1),\ \phi_3\equiv (-5,-1),\ \phi_4\equiv (-19,-3),\ \phi_5\equiv (0,1)$. These phase boundaries divide $r$-space into five phase regions, each described as a convex hull of two phase boundaries by several possible overlapping coordinate charts obtained by noting all the possible convex hulls that contain it. The coordinate chart describing a particular convex hull, say $\{\phi_1,\phi_2\}$, is read off as the complementary set $\{\phi_3,\phi_4,\phi_5\}$. Then for instance, this convex hull is contained in the convex hulls $\{\phi_1, \phi_5\},\ \{\phi_2, \phi_3\}$ and $\{\phi_2, \phi_4\}$, so that the full set of coordinate charts characterizing the toric variety in the phase given by this convex hull $\{\phi_1, \phi_2\}$ is \ $\{\ (\phi_3,\phi_4,\phi_5),\ (\phi_2,\phi_3,\phi_4),\ (\phi_1,\phi_4,\phi_5),\ (\phi_1,\phi_3,\phi_5) \ \}$. From Figure~\ref{fig17519}, we see that this phase is the complete resolution corresponding to the subdivision of the toric cone by the small resolution $\{ e_3,e_4 \}$, followed by the lattice point $e_5$. Physically, the geometry of this space corresponds to the 2-cycle $\{e_3,e_4\}$ and a 4-cycle $e_5$ blowing up simultaneously and expanding in time, separating the spaces described by the above coordinate patches (which are potentially residual orbifold singularities). The way these pieces of spacetime are glued together on the overlaps of their corresponding coordinate patches is what the corresponding toric subdivision in Figure~\ref{fig17519} shows. Using the toric fan, we can glean the structure of the residual geometry: we see that $C(0;e_2,e_3,e_4)$ and $C(0;e_3,e_4,e_5)$ are both smooth, being subcones of $\mbox{\boldmath$N$}$ lattice volume unity. Also we see that $C(0;e_1,e_5,e_3)\equiv {\mathbb Z}_2 (-1,5,4)={\mathbb Z}_2 (1,1,0),\ C(0;e_1,e_5,e_4)\equiv {\mathbb Z}_2 (-3,19,-4)={\mathbb Z}_2 (1,1,0)$, using the relations\ $e_1-5e_5+2e_2-4e_4=0$ and $3e_1-19e_5+2e_2+4e_3=0$.
Both of these orbifolds are effectively supersymmetric ${\mathbb Z}_2 (1,-1)$ endpoints since their anti-chiral rings contain blowup moduli. Note also that the interior lattice point $(-4,3,11)={e_1+e_5\over 2}$ is not GSO-preserved, and thus absent in the physical Type II theory (we see that adding this lattice point would add a new row $q_i'=(\begin{array}{cccc} 1 & 4 & -3 & -11 \end{array})$ to the charge matrix, disallowed since $\sum_i q_i'=odd$). This is also consistent with the fact that this point, $(-4,3,11)={1\over 7}(4e_1+e_3+e_4)$, can be interpreted as a $j=4$ twisted sector tachyon of R-charge $({4\over 7},{1\over 7},{1\over 7})$ in the orbifold subcone $C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_7 (1,2,-5)$, and is GSO-projected out ($E_j=2$ using (\ref{gsoEj})). Similarly, using Figure~\ref{fig17519}, we recognize the other phases as follows.\\ The convex hull $\{\phi_2, \phi_5\}$, contained in the convex hull $\{\phi_1, \phi_5\}$,\ yields a description of the toric variety in this phase in terms of the coordinate charts\ $\{(\phi_1,\phi_3,\phi_4),\ (\phi_2,\phi_3,\phi_4)\}$, which is the subdivision of the cone by the small resolution $\{e_3,e_4\}$. As we have seen, $C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_7 (1,2,-5)$, with the interior lattice point $e_5$ mapping to the GSO-preserved $j=1$ twisted sector tachyon of R-charge ${5\over 7}$. \\ The convex hull $\{\phi_4, \phi_5\}$, contained in the convex hull $\{\phi_3, \phi_5\}$,\ gives a description of the toric variety in this phase in terms of the charts\ $\{(\phi_1,\phi_2,\phi_3),\ (\phi_1,\phi_2,\phi_4)\}$, which is the subdivision of the cone by the small resolution $\{e_1,e_2\}$. This is related by a flip to the phase $\{\phi_2, \phi_5\}$. We see that $C(0;e_1,e_2,e_4)\equiv {\mathbb Z}_5(1,2,1)$, while the subcone $C(0;e_1,e_2,e_3)\equiv {\mathbb Z}_{19}(1,7,14)$ contains\ $e_5={1\over 19} (3e_1 + 2e_2 + 4e_3)$, corresponding to the GSO-preserved $j=3$ tachyon with R-charge $({3\over 19},{2\over 19},{4\over 19})$. \\ The convex hull $\{\phi_3, \phi_4\}$, contained in the convex hulls $\{\phi_3, \phi_5\},\ \{\phi_1, \phi_4\},\ \{\phi_2, \phi_4\}$,\ yields a description of the toric variety in this phase in terms of the charts\ $\{(\phi_1,\phi_3,\phi_5),\ (\phi_1,\phi_2,\phi_5),\\ (\phi_2,\phi_3,\phi_5),\ (\phi_1,\phi_2,\phi_4)\}$. This is the subdivision of the cone by the small resolution $\{e_1,e_2\}$, followed by the lattice point $e_5$ which corresponds to condensation of the orbifold tachyon mentioned above.\\ Finally the convex hull $\{\phi_1, \phi_3\}$, contained in the convex hulls $\{\phi_1, \phi_4\},\ \{\phi_2, \phi_3\},\ \{\phi_2, \phi_4\}$,\ yields a description of the toric variety in this phase in terms of the charts\ $\{(\phi_1,\phi_3,\phi_5),\\ (\phi_1,\phi_4,\phi_5),\ (\phi_2,\phi_3,\phi_5),\ (\phi_2,\phi_4,\phi_5) \}$, which is a subdivision by the lattice point $e_5$ related by a flip to the subdivisions corresponding to either of phases\ $\{\phi_3, \phi_4\},\ \{\phi_1, \phi_2\}$. The subcone $C(0;e_2,e_5,e_4)$ is smooth, while $C(0;e_2,e_5,e_3)\equiv {\mathbb Z}_3 (1,1,-1)$. The quantum dynamics of these phases is dictated by the renormalization group flows in the GLSM. We remind the reader that the analysis here is valid only for large $r_1,r_2$, (ignoring worldsheet instanton corrections). 
The two FI parameters $r_a$ have 1-loop running given by \begin{equation} r_1(\mu) = -{16\over 2\pi} \cdot\log {\mu\over\Lambda}\ , \qquad \qquad r_2(\mu) = -{2\over 2\pi} \cdot\log {\mu\over\Lambda}\ , \end{equation} so that a generic linear combination has the running \begin{equation}\label{flowray51} \alpha_1 r_1 + \alpha_2 r_2 = -{2(8\alpha_1 + \alpha_2)\over 2\pi}\cdot\log {\mu\over\Lambda}\ . \end{equation} The coefficient shows that this parameter is marginal if \ $8\alpha_1+\alpha_2=0$ : this describes a line perpendicular to the ray $(8,1)$ in $r$-space, which is the Flow-ray. Since the Flow-ray lies in the interior of the convex hull\ $\{\phi_1,\phi_2\}$, this is the $unique$ stable phase, and therefore the unique final endpoint geometry in this theory (within this 2-parameter system): all flow lines must eventually end in this phase after crossing one or more of the phase boundaries. The phase $\{\phi_4,\phi_5\}$, containing $-F\equiv (-8,-1)$, is the ultraviolet of the theory, {\it i.e.}\ the early time phase (corresponding to the unstable small resolution ${\mathbb P}^1_-$ with residual orbifold singularities) where all flow-lines begin. It is straightforward to see what crossing each of the phase boundaries corresponds to physically: {\it e.g.}\ crossing any of $\phi_1,\ \phi_3$ or $\phi_5$ corresponds to topology change via a flip, while a localized orbifold tachyon condenses in the process of crossing either of $\phi_2,\ \phi_4$. This shows how the RG flow in the GLSM gives rise to the phase structure of the conifold-like singularity $Q=(\begin{array}{cccc} 1 & 7 & -5 & -19 \end{array})$. Note that the final stable phase is less singular than all other phases\footnote{Its total $\mbox{\boldmath$N$}$ lattice subcone volume\ $V(0;e_1,e_5,e_3)+V(0;e_1,e_5,e_4)+V(0;e_3,e_4,e_5)+V(0;e_2,e_3,e_4) =2+2+1+1$ is less than that for all other subdivisions, as well as $V_+=1+7$.}. It is interesting to note that some of the partial decays of this singularity exhibit two lower order conifold-like singularities, {\it i.e.}\ $C(0;e_2,e_5,e_3,e_4)\equiv Q'=(\begin{array}{cccc} 1 & 1 & -1 & -3 \end{array})$ and $C(0;e_1,e_2,e_4,e_5)\equiv Q''=(\begin{array}{cccc} 1 & 2 & -4 & -5 \end{array})$. These are both Type II singularities having $\sum_i Q_i=even$, showing that the decay structure is consistent with the GSO projection for these singularities. In other words, the evolution of the geometry as described by the GLSM RG flow does not break the GSO projection. Since both singularities are themselves unstable, the stable phase of the full theory also includes their stable resolutions. More generally the various different phases in fact include distinct sets of small resolutions of these singularities. \subsection{Decays to the supersymmetric conifold} Consider the singularity $Q=(\begin{array}{cccc} 1 & 7 & -4 & -6 \end{array})$ (see Figure~\ref{fig1746}). The various subcones arising in this fan can be identified as the following Type II orbifolds: \begin{eqnarray} && C(0;e_1,e_2,e_3)\equiv {\mathbb Z}_6(1,1,-4)\ , \ C(0;e_1,e_2,e_4)\equiv {\mathbb Z}_4(1,-1,2)\ , \ \nonumber\\ && C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_7(1,-4,1)\ , \ C(0;e_1,e_5,e_4)\equiv {\mathbb Z}_3(1,2,1)\ , \ \end{eqnarray} while $C(0;e_2,e_3,e_4),\ C(0;e_1,e_5,e_3)$ are smooth. 
We can see that the lattice point \begin{eqnarray} e_5\equiv (-1,1,1) &=& {1\over 7} (e_1 + 3e_3 + e_4)\ \ \in\ \ C(0;e_1,e_3,e_4)\ , \nonumber\\ &=& {1\over 6} (e_1 + e_2 + 2e_3)\ \ \in\ \ C(0;e_1,e_2,e_3)\ , \end{eqnarray} corresponds to the $j=1$ twisted sector tachyon in either orbifold, with R-charge $R_j=({1\over 7},{3\over 7},{1\over 7})$ in ${\mathbb Z}_7(1,-4,1)$ and $R_j=({1\over 6},{1\over 6},{1\over 3})$ in ${\mathbb Z}_6(1,1,-4)$\ (GSO preserved since $E_j=-1$ using (\ref{gsoEj})). Including this lattice point gives the charge matrix \begin{equation}\label{Qia1746} Q_i^a = \left( \begin{array}{ccccc} 1 & 7 & -4 & -6 & 0 \\ 0 & 1 & -1 & -1 & 1 \\ \end{array} \right)\ , \end{equation} where we have used the relation\ $e_2+e_5-e_3-e_4=0$ to define the second row. Note $\sum_iQ_i^a=even$ for each row, consistent with the GSO projection. Using other relations to define $Q_i^a$ gives equivalent physics. \begin{figure} \begin{center} \epsfig{file=fig1746.eps, width=11cm} \caption{{\small Phases of\ $Q=(1\ 7\ -4\ -6)$, with the toric subdivisions and corresponding coordinate charts in each phase, as well as the RG flow directions and the physics of each phase boundary.}} \label{fig1746} \end{center} \end{figure} The D-term equations (suppressing the gauge couplings) in this theory are \begin{eqnarray}\label{Dterms1746} -D_1 &=& |\phi_1|^2 + 7 |\phi_2|^2 - 4 |\phi_3|^2 - 6 |\phi_4|^2 - r_1 = 0\ , \nonumber\\ -D_2 &=& |\phi_2|^2 + |\phi_5|^2 - |\phi_3|^2 - |\phi_4|^2 - r_2 = 0\ . \end{eqnarray} The three other D-terms obtained from different linear combinations of the two $U(1)$s are \begin{eqnarray}\label{D'1746} -D_2' &=& -D_1+7D_2 = |\phi_1|^2 + 3 |\phi_3|^2 + |\phi_4|^2 - 7 |\phi_5|^2 - (r_1-7r_2) = 0\ , \nonumber\\ -D_3' &=& -D_1+4D_2 = |\phi_1|^2 + 3 |\phi_2|^2 - 2 |\phi_4|^2 - 4 |\phi_5|^2 - (r_1-4r_2) = 0\ , \\ -D_4' &=& -D_1+6D_2 = |\phi_1|^2 + |\phi_2|^2 + 2 |\phi_3|^2 - 6 |\phi_5|^2 - (r_1-6r_2) = 0\ . \nonumber \end{eqnarray} These D-terms give five phase boundaries in terms of rays drawn from the origin $(0,0)$ out through the points $\phi_1\equiv (1,0),\ \phi_2\equiv (7,1),\ \phi_3\equiv (-4,-1),\ \phi_4\equiv (-6,-1),\ \phi_5\equiv (0,1)$. The phase structure of this theory, encapsulated in Figure~\ref{fig1746}, can be analyzed in the same way as in the previous case, so we will be brief here. The renormalization group flows in the GLSM are given by the 1-loop runnings of the two FI parameters $r_a$ \begin{equation} r_1(\mu) = -{2\over 2\pi} \cdot\log {\mu\over\Lambda}\ , \qquad \qquad r_2(\mu) = (0) \cdot\log {\mu\over\Lambda} + r_2^{(0)}\ . \end{equation} Thus the parameter $r_2$ represents a marginal direction, and we have explicitly shown the value $r_2^{(0)}$ of the modulus. The RG flow of $r_1$ however forces $r_1\rightarrow\infty$ in the infrared. Thus the Flow-ray is the ray $(1,0)\equiv \phi_1$ in $r$-space (perpendicular to the $r_2$ direction). There are two convex hulls $\{\phi_1,\phi_2\},\ \{\phi_1,\phi_3\}$, adjoining the Flow-ray, so that there are two stable phases in this case, with the $r_a$ satisfying\ $0<r_2<{1\over 7}r_1$ and $r_2<\min(0,{1\over 4}r_1)$ respectively. The ultraviolet of the theory, containing the ray $-F=(-1,0)$, is the phase $\{\phi_4,\phi_5\}$ corresponding to the shrinking 2-sphere ${\mathbb P}^1_-$ with residual orbifold singularities.
We can see that the nontrivial RG flows of the parameters $r_1-7r_2$ and $r_1-4r_2$ force all flowlines to cross these phase boundaries, thereby passing $into$ the phases $\{\phi_1,\phi_2\}$ and $\{\phi_1,\phi_3\}$ respectively. Physically, the geometry of, say, phase $\{\phi_1,\phi_2\}$ corresponds to the 2-cycle $\{e_3,e_4\}$ and the 4-cycle $e_5$ blowing up simultaneously and expanding in time, separating the spaces described by the coordinate patches $\{\ (\phi_3,\phi_4,\phi_5),\ (\phi_2,\phi_3,\phi_4),\ (\phi_1,\phi_4,\phi_5),\ (\phi_1,\phi_3,\phi_5) \ \}$, with the corresponding toric subdivision in Figure~\ref{fig1746} showing the way these pieces of spacetime are glued together on the overlaps of their corresponding coordinate patches. Similarly we can describe the geometry of the topologically distinct phase $\{\phi_1,\phi_3\}$. The blowup mode corresponding to the 2-cycle has size given by K\"ahler class $r_2$ which has no renormalization. This marginality of $r_2$ physically means that in the course of the decay, the geometry can end up anywhere on this 1-parameter moduli space. In fact, the modulus $r_2$ corresponds to a topology-changing flop transition interpolating between the two resolutions represented by these phases, as can be seen from the corresponding subdivisions in Figure~\ref{fig1746}. Thus we expect that the geometry will sometimes evolve precisely along the ray\ $r_2^{(0)}=r_2^c,\ r_1\rightarrow\infty$, resulting\footnote{The classical singularity is at $r_2^{(0)}=0$. The constant shift $\tau_2^{eff}=\tau_2^{(0)}+{i\over 2\pi}\sum_iQ_i^2\log|Q_i^2|$ defining the singular point $r_2^{(0)}=r_2^c$, given by $\tau_2^{eff}=0$, arises from the bosonic potential (\ref{bospotGen}), since when $r_1$ is large, $\sigma_1$ is massive and can be integrated out (by setting $\sigma_1=0$) in (\ref{bospotGen}). This gives a real codimension-2 singularity after including the effects of the $\theta$-angle.} in the supersymmetric conifold as a decay product. Indeed the vevs resulting from the nonzero value of $r_1$ Higgs the $U(1)^2$ down to $U(1)$, thus resulting in the singularity (using $D_2$) \begin{equation} \left\{|\phi_2|^2 + |\phi_5|^2 - |\phi_3|^2 - |\phi_4|^2 = 0 \right\} //U(1)\ , \end{equation} which is of course the supersymmetric conifold\ $Q=(\begin{array}{cccc} 1 & 1 & -1 & -1 \end{array})$. Since this is a real codimension-2 singularity in this infrared moduli space, we expect that this is an occasional decay product. Generically the geometry will end up in either of the two stable phases $\{\phi_1,\phi_2\},\ \{\phi_1,\phi_3\}$, corresponding to the small resolutions (related by a flop) of this residual singularity, obtained when $r_2>0$ and $r_2<0$ respectively, as can be seen from the collection of coordinate charts describing the two phases.\\ Here also, the two stable phases are less singular than any of the other phases. Note that the conifold-like singularity $C(0;e_1,e_2,e_4,e_5)\equiv Q'=(\begin{array}{cccc} 1 & 3 & -2 & -4 \end{array})$ also arises among the phases of this theory: this is of course an unstable singularity and the flip leading to its more stable resolution connects the phases $\{\phi_3,\phi_4\}$ and $\{\phi_1,\phi_3\}$. This is also a Type II singularity, consistent with the GSO projection. 
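As a quick cross-check of the combinatorial data above, the Flow-ray and the Type II even-sum condition follow directly from the charge matrices (\ref{Qia17519}) and (\ref{Qia1746}). The short illustrative script below (ours, not part of the original analysis) reproduces the values quoted in the text:
\begin{verbatim}
# Illustrative check: Type II even-sum condition, and the Flow-ray
# F = (-sum_i Q_i^1, ..., -sum_i Q_i^{n+1}), for the two examples.
examples = {
    "(1 7 -5 -19)": [[1, 7, -5, -19, 0], [0, 1, -1, -3, 1]],
    "(1 7 -4 -6)":  [[1, 7, -4, -6, 0],  [0, 1, -1, -1, 1]],
}
for name, Q in examples.items():
    type_ii = all(sum(row) % 2 == 0 for row in Q)  # GSO: even row sums
    F = tuple(-sum(row) for row in Q)
    print(name, "Type II:", type_ii, "Flow-ray:", F)
# (1 7 -5 -19): Flow-ray (16, 2), i.e. along (8, 1);
# (1 7 -4 -6):  Flow-ray (2, 0), the vanishing second entry
# signalling the marginal direction r_2, as found above.
\end{verbatim}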
\subsection{Decays to $Y^{pq}$ spaces} Higher order unstable singularities include, besides the supersymmetric conifold, the supersymmetric $Y^{pq}$ spaces defined by $Q=(\begin{array}{cccc} p-q & p+q & -p & -p \end{array}),\ q<p$, ($p,q$ coprime), and $L^{a,b,c}$ spaces with $(\begin{array}{cccc} a & b & -c & -(a+b-c) \end{array}), c<a+b$, amidst the phases arising in their evolution (see Appendix C for a brief description of the phase structure of the $Y^{pq}$s). A simple subfamily of the $Y^{pq}$s is defined by $Q=(\begin{array}{cccc} 1 & 2p-1 & -p & -p \end{array})$. This has the toric cone defined by $e_5=(-(2p-1),p,p),\ e_2=(1,0,0),\ e_3=(0,1,0),\ e_4=(0,0,1)$. For such a singularity to arise as a decay product in the phases of some higher order unstable singularity, its cone must exist as a subcone in the cone of the latter. If we restrict attention to singularities of the form $Q=(\begin{array}{cccc} 1 & n_2 & -n_3 & -n_4 \end{array})$, then the point $e_5$ must be an interior point of the cone defined by $e_1=(-n_2,n_3,n_4)$ and $e_2,e_3,e_4$, in particular lying in the interior of the orbifold subcone $C(0;e_1,e_3,e_4)$. In other words, we have \begin{eqnarray} && e_5=(-(2p-1),p,p)=a(-n_2,n_3,n_4)+b(0,1,0)+c(0,0,1)\ , \nonumber\\ && \qquad\qquad\qquad\ 0<a,b,c<1\ , \qquad a+b+c<1\ , \end{eqnarray} the last condition expressing $e_5$ to be a tachyon of the orbifold subcone $C(0;e_1,e_3,e_4)$. This then gives conditions on the $n_i$ \begin{equation} (p-1)n_2 < (2p-1)n_3 < pn_2\ , \qquad (p-1)n_2 < (2p-1)n_4 < pn_2\ , \qquad 1+n_2<n_3+n_4\ . \end{equation} Roughly speaking, this means that the affine hyperplane of the subcone $C(0;e_1,e_3,e_4)$ must be appropriately tilted so as to encompass the lattice point $e_5$. This gives lower bound restrictions on the embedding unstable singularity, the order of the embedding singularity rapidly rising with $p$ due to these restrictions. For example, consider the simplest such singularity $Y^{21}\equiv (\begin{array}{cccc} 1 & 3 & -2 & -2 \end{array})$. Then the above conditions give \begin{equation} {n_2\over 3}<n_3,n_4<{2n_2\over 3}\ , \qquad 1+n_2<n_3+n_4\ , \end{equation} the first of which conditions automatically implies that the point $e_6=(-1,1,1)$ is also an interior point as can be checked by a simple calculation. This corresponds to the fact that one of the blowup modes of the $Y^{pq}$ singularities is the supersymmetric conifold (see Appendix C). One of the simplest unstable Type II singularities satisfying these conditions is $Q=(\begin{array}{cccc} 1 & 17 & -9 & -11 \end{array})$. Then we have $C(0;e_1,e_3,e_4)\equiv {\mathbb Z}_{17} (1,8,-11)$, and \begin{equation} e_5=(-3,2,2)={1\over 17}(3e_1+7e_3+e_4)\ , \qquad e_6=(-1,1,1)={1\over 17}(e_1+8e_3+6e_4) \end{equation} corresponding to its GSO-preserved $j=3$ and $j=1$ twisted sector tachyons of R-charge $({3\over 17},{7\over 17},{1\over 17})$ and $({1\over 17},{8\over 17},{6\over 17})$ respectively. Including say $e_5$ alone gives a 2-parameter system defined by \begin{equation} Q_i^a = \left( \begin{array}{ccccc} 1 & 17 & -9 & -11 & 0 \\ 0 & 3 & -2 & -2 & 1 \\ \end{array} \right)\ , \end{equation} which can be analyzed along the same lines as before, resulting in the $Y^{21}$ space as an occasional decay product. Including both $e_5$ and $e_6$ gives a 3-parameter system with charge matrix \begin{equation} Q_i^a = \left( \begin{array}{cccccc} 1 & 17 & -9 & -11 & 0 & 0 \\ 0 & 3 & -2 & -2 & 1 & 0 \\ 0 & 1 & -1 & -1 & 0 & 1 \\ \end{array} \right)\ . 
\end{equation} The Flow-ray for this system is $(1,0,0)\equiv \phi_1$. By analyzing the secondary fan using the general techniques outlined earlier (and described for a 3-tachyon system in unstable orbifolds in \cite{drmkn}), it can be seen that there are four phases adjoining the Flow-ray, which are the stable phases of this theory corresponding to the various resolutions involving $Y^{21}$ and the supersymmetric conifold contained as an interior blowup mode. It is straightforward to work out the details. More generally, these techniques show that higher order unstable conifold singularities contain blowup modes giving rise to $L^{a,b,c}$ spaces amidst their stable phases. \section{Discussion} We have explored the phase structure of the nonsupersymmetric conifold-like singularities discussed initially in \cite{knconiflips}, exhibiting a cascade-like structure containing lower order conifold-like singularities including supersymmetric ones: this supplements the small resolutions studied in \cite{knconiflips}. The structure is consistent with the Type II GSO projection obtained previously. The GLSMs used here, as for unstable orbifolds, all have $(2,2)$ worldsheet supersymmetry, and have close connections with their topologically twisted versions, {\it i.e.}\ the corresponding A-models, so that various physical observables (in particular those preserving worldsheet supersymmetry) are protected along the RG flows here. However we note that the details of the RG evolution (and therefore also of time evolution) of the nonlinear sigma models (NLSMs) corresponding to these conifold-like geometries can be slightly different from the phase structure obtained here in the GLSM. For instance, while twisted sector tachyons (and their corresponding blowup modes) localized at the residual orbifold singularities on the 2-cycle loci have only logarithmic flows in the GLSM, on the same footing as the 2-cycle modes, they are relevant operators in the NLSMs with nontrivial anomalous dimensions. Thus in the NLSM (and in spacetime), the rate of evolution of a localized tachyon mode is expected to be higher than that of a 2-cycle mode, at least in the large volume limit where the 2-cycle evolution is slow. However although these details could be different, it seems reasonable, given worldsheet supersymmetry, to conjecture that the GLSM faithfully captures the phase structure and the evolution endpoints. A related issue is that the marginal directions orthogonal to the flow-ray preserved along the entire GLSM RG flow are only expected to coincide with corresponding flat directions arising at the $final$ IR endpoints in spacetime, which are supersymmetric as for orbifolds \cite{drmknmrp}. However in spacetime (with broken supersymmetry), it is not clear if there would be any corresponding exactly massless scalar fields during the course of time evolution. Presumably this is reconciled by taking into account the radiation effects present in spacetime but invisible in these (dissipative) RG analyses, which may also be related to string loop corrections (since the dilaton might be expected to turn on). It is worth mentioning that the classical geometry analysis in \cite{knconiflips} on obstructions to the 3-cycle (complex structure) deformation of these singularities due to their structure as quotients of the supersymmetric conifold suggests that there are no analogs of ``strong'' topology change and conifold transitions with nonperturbative light wrapped brane states here. 
From the GLSM point of view, the singular region where all $r_a$ vanish arises in the ``middle'' of the RG flow and is a transient intermediate state where the approximations in this paper are not reliable. It might be interesting to understand the structure of instanton corrections with a view to obtaining a deeper understanding of the physics of the singular region encoding the flip. On a somewhat broader note, it might be interesting to understand and develop interconnections between renormalization group flows in generalizations of the GLSMs considered here (and the ``space of physical theories'' they describe) and Ricci flows in corresponding geometric systems. The fact that the GLSM RG trajectories in the conifold-like geometries here as well as those in \cite{drmkn} flow towards less singular geometries (smaller $\mbox{\boldmath$N$}$ lattice volumes) suggests that there is a monotonically decreasing c-function-like geometric quantity here. Physically this seems analogous to the tachyon potential, or a height function on the ``space of geometries''. It would be interesting to understand D-brane dynamics in the context of such singularities. We expect that the quivers for these D-brane theories will be at least as rich as those for the $L^{a,b,c}$ spaces described in \cite{hanany}, and perhaps the knowledge of the phase structure of these theories developed here will be helpful in this regard. It is interesting to ask what these D-brane quivers (or possible duals) see as the manifestation of these instabilities. Finally we make a few comments on compactifications of these (noncompact) conifold-like singularities. We expect that such a nonsupersymmetric conifold singularity can be embedded (classically) in an appropriate nonsupersymmetric orbifold of a Calabi-Yau that develops a localized supersymmetric conifold singularity, such that the quotienting action on the latter results in the nonsupersymmetric one. For quotient actions that are isolated, the Calabi-Yau only acquires discrete identifications so that the resulting quotient space ``downstairs'' is locally Calabi-Yau. While we expect that the low-lying singularities, {\it i.e.}\ small $n_i$, admit such locally supersymmetric compactifications, we note that the higher order ones may not. In fact there may be nontrivial constraints on the $n_i$ for the existence of such compactifications. In the noncompact case, we note that the early time semiclassical phase is a small resolution ${\mathbb P}^1_-$ of topology distinct from that of the late time small resolution ${\mathbb P}^1_+$ phase. We expect that both these phases, being semiclassical, admit descriptions as topologically distinct small resolutions in compact embeddings comprising orbifolds of appropriate Calabi-Yaus as described above. Thus one might think that the (intermediate) flip visible explicitly in the GLSM here persists in the compact context as well, where it would mediate mild time-dependent topology change of the ambient compact space, with changes in the intersection numbers of the various cycles of the geometry. However since in the compact context worldsheet RG techniques are subject to the strong constraints imposed by the c-theorem, it is not clear if our GLSM analysis here is reliable in gaining insight into the dynamics of compact versions of the flip transitions here (see {\it e.g.}\ \cite{eva0502} for related discussions in the context of string compactifications on Riemann surfaces). 
It would be desirable to obtain a deeper understanding of these compactifications \cite{wip} and their dynamics, perhaps implementing the quotient action on the Calabi-Yau directly in a spacetime description. From the latter perspective, the time dependence of the compact internal space would imply interesting time-dependent effects in the remaining 4-dimensional part of spacetime: for instance, in a simple FRW-cosmology-like setup, the 4D scale factor will evolve in accordance with the time dynamics of the internal space. It would be interesting to explore this here perhaps along the lines of \cite{keshav0501}. \vspace{5mm} {\small {\bf Acknowledgments:} I have benefitted from an early discussion with R.~Gopakumar and from comments from S.~Minwalla and D.~Morrison on a draft.}
\section{Introduction} The Eden model \cite{E}, which was originally introduced to model the growth of cell colonies, is one of the simplest and most widely studied \cite{E,WB,PSHL,M,MW,PR,RP,FV,JB,PR2,FSS,KPZ,MJB,HW,ZS1,ZS2,W,WK,KW,M2,DS,BH} growth models and has become a paradigm for studying self-affine fractal geometry and scaling in the growth of rough surfaces \cite{M3,BS,M4}. In the simplest variant of the model (Eden A) aggregate particles are added one at a time to randomly selected sites on the surface of a growing cluster \cite{PSHL,M}. The aggregate sites and surface sites are situated at the vertices of a regular lattice in on-lattice simulations. The initial cluster is usually taken to be a single aggregate site in the plane (radial growth) or a line of aggregate sites in the half plane (substrate growth). Most studies of the Eden model have been concerned with describing the asymptotic properties of the surface of the cluster. One of the schemes that has been introduced to better reveal these properties is noise reduction \cite{M3,BS,M4}. Noise reduction is implemented by associating a counter (initially set to zero) with each of the surface sites and incrementing the counter by one each time the associated surface site is selected for growth \cite{WK,KW,M2,DS,BH}. An aggregate particle is then added at a surface site when its associated counter reaches a prescribed value $m$. Increasing values of $m$ lead to increasingly smooth interfaces. It is widely believed that noise reduction reveals asymptotic surface properties at smaller system sizes without affecting the surface scaling exponents. This tenet is investigated in this paper for the Eden A model with growth from a seed on a square lattice. \section{Surface scaling ansatz} Extensive studies of Eden growth from a substrate have identified scaling exponents $\alpha$ and $z$ relating the surface thickness $w$ to the substrate width $L$ and the mean surface height $\langle h \rangle$. For a cluster with $N$ aggregate particles and $\cal N$ surface sites this relationship has the form \cite{FV} \begin{equation} w\sim L^\alpha f\left(\frac{\langle h \rangle}{L^z}\right)\label{FVscale} \end{equation} where the width is defined by \begin{equation} w^2=\langle h^2 \rangle- \langle h \rangle^2= \frac{1}{\cal N}\sum_{i=1}^{\cal N} h_i^2- \left(\frac{1}{\cal N}\sum_{i=1}^{\cal N} h_i\right)^2\label{Subw} \end{equation} and the scaling function $f(x)$ has the properties \begin{equation} f(x)\propto \left\{\begin{array}{rl} x^{\alpha/z} & \mbox{ for $x\ll 1$ }\\ \mbox{const}\label{scale} & \mbox{ for $x\gg 1$. } \end{array} \right. \end{equation} It follows from the scaling laws, Eqs. (\ref{FVscale}),(\ref{scale}), that for $L$ large the surface of Eden clusters on a two-dimensional substrate is a self-affine fractal with Hurst exponent $\alpha$ and fractal dimension $2-\alpha$. Collective evidence from the numerical simulations and algebraic calculations \cite{KPZ} suggests the values $\alpha\simeq 1/2$ and $z\simeq 3/2$ (or $\beta\equiv\alpha/z\simeq 1/3$) for two-dimensional Eden growth on a substrate \cite{M3,BS,M4}. It should be pointed out, though, that the numerical results are not definitive due to finite-size effects and the algebraic results may not be entirely applicable since they are based on a continuum model that includes surface relaxations.
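Before turning to growth from a seed, we note that the noise-reduced Eden A rule described in the Introduction admits a compact sketch. The following illustrative implementation (ours, not the simulation code used for the results below) makes the role of the counter explicit:
\begin{verbatim}
import random

# Illustrative sketch of noise-reduced Eden A growth on the square
# lattice: a surface site is selected uniformly at random, its counter
# is incremented, and an aggregate particle is added only once the
# counter reaches the noise-reduction parameter m.
def eden_a(n_particles, m=1):
    cluster = {(0, 0)}                    # single-seed initial cluster
    def neighbors(site):
        x, y = site
        return ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    surface = set(neighbors((0, 0)))
    counter = {}
    while len(cluster) < n_particles:
        site = random.choice(tuple(surface))
        counter[site] = counter.get(site, 0) + 1
        if counter[site] < m:
            continue
        cluster.add(site)                 # counter reached m: aggregate
        surface.discard(site)
        counter.pop(site)
        surface.update(nb for nb in neighbors(site) if nb not in cluster)
    return cluster
\end{verbatim}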
For an Eden cluster growing in a circular geometry with $N$ aggregate particles, $\cal N$ surface sites and an average radius $\langle R \rangle$ (which grows linearly with time) the interface width \begin{equation} w^2=\langle R^2 \rangle-\langle R \rangle^2 =\frac{1}{\cal N}\sum_{i=1}^{\cal N} R_i^2- \left(\frac{1}{\cal N}\sum_{i=1}^{\cal N} R_i\right)^2\label{radw} \end{equation} is expected to scale as \begin{equation} w\sim {\langle R \rangle}^\beta \label{radscale}. \end{equation} Numerical simulations of Eden A clusters on the square lattice with up to $N\approx 10^5$ aggregate particles again suggest $\beta\simeq 1/3$. However, very large simulations for $N\approx 10^7$, \cite{FSS} and $N\approx 10^9$ \cite{ZS1} reveal the increasingly dominant effects of lattice anisotropy where eventually it is expected \cite{ZS1} that $w\sim \langle R \rangle$. In this paper we have carried out brute force simulations of the Eden A model on the square lattice starting from a single aggregate site over a range of $m$ from the fully stochastic limit $m=1$ to the zero-noise limit $m\to\infty$ \cite{BH}. The brute force calculations avoid the possibility of numerical bias from e.g., quadrant boundary effects \cite{FSS} or multiply selected surface sites \cite{ZS1}. The results of the simulations are shown to be consistent with a two-exponent scaling ansatz of the form \begin{equation} w(N,m)\sim \left\{\begin{array}{rl} a(m) N^{\frac{1}{6}} & \mbox{ for $N\ll N^*(m)$ }\\ b(m) N^{\frac{1}{2}}\label{ansatz} & \mbox{ for $N\gg N^*(m)$ } \end{array} \right. \end{equation} where $N^*(m)$ denotes an empirical cross-over number of aggregate particles for a given $m$. This is in agreement with the expectation that noise reduction does not affect the values of the scaling exponents; however the cross-over value \begin{equation} N^*=\left(\frac{a(m)}{b(m)}\right)^3 \end{equation} is $m$ dependent. The two-exponent scaling ansatz can also be written in the functional form \begin{equation} w(N,m)\sim b(m) N^{\frac{1}{2}}g\left(\frac{N}{N^*(m)}\right)\label{ansatz2} \end{equation} where \begin{equation} g(x)= \left\{\begin{array}{rl} x^{-1/3} & \mbox{ for $x\ll 1$ }\\ 1\label{ascale} & \mbox{ for $x\gg 1$. } \end{array} \right. \end{equation} \section{Zero-Noise Limit} In the zero-noise limit $m\to\infty$, Eden A clusters on a square lattice grow in layers as a compact diamond with \cite{BH} \begin{equation} N_k=2k^2-2k+1,\qquad k=1,2,\ldots \label{Dnos} \end{equation} aggregate particles. In this limit there is no stochastic growth so that \begin{equation} \lim_{m\to\infty} N^*(m)\rightarrow 0 \end{equation} and \begin{equation} \lim_{m\to\infty} w(N,m)\sim b(\infty)N^{\frac{1}{2}}. \end{equation} A simple approximation to $b(\infty)$ can be found from the continuum expression for a diamond in polar co-ordinates: \begin{equation} R(\theta)=\frac{\sqrt{N}}{\sqrt{2}\left(|\sin \theta |+|\cos \theta |\right)}. \end{equation} The averages over $\theta$ can be calculated exactly yielding \begin{equation} w\sim \sqrt{\frac{1}{\pi}-\frac{4\left(\tanh^{-1} \frac{1}{\sqrt{2}} \right)^2}{\pi^2}}N^{\frac{1}{2}} \label{Dscale} \end{equation} and hence $b(\infty)\approx 0.05896\ldots$. \section{Numerical Results} The results described in this section summarize data from our numerical simulations of ensembles of Eden A clusters starting from a single seed on the square lattice. Each ensemble consists of one hundred Monte-Carlo simulations of the Eden A model for a fixed value of the noise-reduction parameter $m$. 
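Given a grown cluster, the radial width of Eq. (\ref{radw}) is obtained directly from the surface-site distances. The short sketch below (illustrative only, not the analysis code) computes it, and also confirms numerically the zero-noise coefficient of Eq. (\ref{Dscale}):
\begin{verbatim}
import math

# Radial surface width w^2 = <R^2> - <R>^2 over the surface sites,
# and a numeric check of the zero-noise coefficient b(infinity).
def radial_width(surface_sites):
    R = [math.hypot(x, y) for (x, y) in surface_sites]
    mean = sum(R) / len(R)
    return math.sqrt(sum(r * r for r in R) / len(R) - mean ** 2)

b_inf = math.sqrt(1 / math.pi
                  - 4 * math.atanh(1 / math.sqrt(2)) ** 2 / math.pi ** 2)
print(b_inf)   # 0.05896..., as quoted above
\end{verbatim}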
The surface width, Eq. (\ref{radw}), is averaged over the ensemble copies to obtain the surface width as a function of $N$ for a given $m$. Figure 1 shows plots of the ensemble averaged surface width versus the number of aggregate particles (using a log-log scale) for $m$ at one unit intervals in the range $m\in[1,64]$ and for $m\to\infty$ (dashed line). The curves for increasing values of $m$ are from right to left on the right hand side of the plot and upper to lower on the left hand side of the plot. The peaks in the sawtooth pattern for large $m$ and small $N$ occur at exact `diamond numbers', Eq. (\ref{Dnos}). The surface width data was fit to the two-exponent scaling ansatz, Eq. (\ref{ansatz}). In Figure 2 the best-fit estimates for a) $a(m)$ and b) $b(m)$ are plotted against $m$ at one unit intervals in the range $m\in[1,64]$. Figure 2 b) also shows (dashed line) the asymptotic value of $b(\infty)$ obtained from the calculation in the zero-noise limit, Eq. (\ref{Dscale}). The surface profile of very large Eden clusters at $m=1$ is slightly anisotropic and well fit (in the first quadrant) by \begin{equation} R(\theta)=\langle R \rangle+A\cos 4\theta \label{aniso}. \end{equation} where $\langle R\rangle\approx \sqrt{\frac{N}{\pi}}$ is the average radius of the cluster and $A$ is the amplitude of the anisotropy (about one percent of $\langle R \rangle$ \cite{ZS1}). The anisotropic profile, Eq. (\ref{aniso}), has a surface width \begin{equation} w\sim\frac{A}{\sqrt{2\pi}}N^{\frac{1}{2}}. \end{equation} Our value $b(1)\approx 0.005$ is thus consistent with a slight anisotropy of the order of $A\approx \pm 1\% \langle R \rangle$. The functional form of the two-exponent scaling ansatz, Eq. (\ref{ansatz2}), is clearly revealed in Figure 3 where we have collapsed the surface width data for all $N$ and $m$ values onto a single curve by plotting $$ \frac{w(N,m)}{b(m)N^{1/2}} \quad\mbox{versus}\quad \frac{N}{N^*(m)}. $$ \section{Discussion} In this Brief Report we showed that the surface width of noise-reduced Eden A clusters grown from a seed on a square lattice scales with the number of aggregate particles according to a two-exponent relation. The exponents $1/6$ for $N<N^*$ and $1/2$ for $N>N^*$ were found to be independent of the noise reduction parameter $m$ but the crossover value $N^*$ was found to decrease monotonically with $m$. These results support the tenets that: i) noise reduction does not affect the scaling exponents in Eden-like growth models and ii) increasing noise reduction decreases the size of clusters needed for observing the limiting large $N$ scaling behaviour. In particular, provided the scaling coefficients do not vanish in the limit $m\rightarrow\infty$, the large $N$ scaling exponents could be found rather simply from exact (algebraic or numerical) calculations in the zero-noise limit. On the other hand, intermediate scaling results from finite $m$ simulations would have to be interpreted with some caution, particularly in cases where the growth is characterized by multi-exponent scaling laws and multiple $m$ dependent cross-over values. It is anticipated that a similar two-exponent scaling relation to Eq. (\ref{ansatz2}) with the same two scaling exponents but different coefficients $a(m)$ and $b(m)$ could be used to characterize on-lattice (e.g., square, triangular or honeycomb) growth of other variants of the Eden model (e.g., Eden B and Eden C). \acknowledgments This work was supported by the Australian Research Council.
\section{Introduction}
\label{sec:introduction}
\input{sections/introduction}
\section{Preliminaries}
\label{sec:prelim}
\input{sections/preliminaries}
\section{ProGNNosis: Approach and Evaluation Methodology}
\label{sec:methodology}
\input{sections/methodology}
\section{Performance Evaluation}
\label{sec:results}
\input{sections/results}
\section{Conclusion}
\label{sec:conc}
\input{sections/conclusions}
\ifCLASSOPTIONcaptionsoff \newpage \fi
\bibliographystyle{IEEEtran}
\subsection{Synthetic Graph Dataset Generation} \label{sec:gen} \begin{figure} \includegraphics[width=\columnwidth]{img/param_clustering.eps} \vspace{-0.5cm} \caption{Impact of the RMAT parameters on the clustering of the generated graphs. The x-axis corresponds to parameters $N$ and $E$. The way that the dataset was generated makes the distribution of this variable close to uniform ($N$ and $E$ are not the final number of edges and nodes, but the model parameters). The y-axis corresponds to the maximum difference between the components of the vector $r$ ($a-d$). The colors indicate the mean clustering coefficient of each graph.} \label{fig:param_clustering_original} \end{figure} One of the most important aspects of the proposed approach is the creation of a synthetic dataset with diverse characteristics. To create this synthetic dataset, we started by creating a dataset using a naive approach. Then, an optimization problem was formulated over this dataset to find a less biased one. To generate the synthetic graphs, the popular RMAT graph generator was used \cite{chakrabarti2004r}. We selected this tool because it can generate graphs with a wide variety of metrics that can be controlled through its parameters, and also because it is quite performant, allowing us to generate large datasets in a short time. The RMAT generator uses six parameters. The first two parameters are the number of nodes $N$ and the number of edges $E$. Then, we have a vector $r = [a, b, c, d]$ of parameters that defines the fitness distribution for the edge attachment. The $r$ vector is a probability vector, and as such the sum of its elements must be 1. The RMAT generator works by dividing the adjacency matrix into four quadrants recursively and assigning the probability of an edge falling in each of the quadrants following the $r$ vector. As already stated in the introduction, one aspect that we found to be especially important in the synthetic dataset used as the training set is that graphs with different characteristics are represented. This means that the dataset should have a wide range and balanced representation of values on the different characterization metrics, hence avoiding selection bias (i.e., bias towards a certain combination of metrics). However, we found that datasets generated in a naive way using RMAT had a selection bias towards graphs with low clustering coefficient, among other characteristics, as can be seen in Figure \ref{fig:param_clustering_original}. To overcome this bias, we propose a method where an optimization problem is defined to find an RMAT parameter distribution that generates a less biased dataset \cite{Wassington2022BiasRV}. Also, a tool to train and generate the dataset was developed, called Graphlaxy, which was used in this work to generate the synthetic training set. Graphlaxy is open source and available at \url{https://github.com/BNN-UPC/graphlaxy}.
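For intuition, the recursive quadrant selection at the core of RMAT can be sketched in a few lines. The following is an illustrative Python sketch with made-up example parameters, not the generator actually used in this work:
\begin{verbatim}
import random

# Minimal sketch of R-MAT edge sampling: the adjacency matrix of a
# 2**scale-node graph is divided into four quadrants recursively, and
# each recursion level picks a quadrant with probabilities r = [a,b,c,d].
def rmat_edge(scale, r):
    src = dst = 0
    for _ in range(scale):
        x, q = random.random(), 0
        while x > r[q]:           # choose quadrant a, b, c or d
            x -= r[q]
            q += 1
        src = 2 * src + (q >> 1)  # row bit of the chosen quadrant
        dst = 2 * dst + (q & 1)   # column bit of the chosen quadrant
    return src, dst

# Example: 1024-node graph with hypothetical fitness vector r.
edges = {rmat_edge(10, [0.45, 0.25, 0.15, 0.15]) for _ in range(5000)}
\end{verbatim}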
\subsection{Use Case Description} The scenario that we use to demonstrate the use and results of this method consists of comparing two designs that do not affect the GNN accuracy but that do affect the training time. The designs consist of two different ways of processing the GNN, based on two graph representations: SPARSE and EDGE\_LIST. The SPARSE representation uses sparse matrix multiplication to compute the aggregation function, whereas the EDGE\_LIST representation uses \textit{gather} and \textit{scatter} CUDA instructions to compute the aggregation function. \begin{figure*} \includegraphics[width=\columnwidth]{img/validation_baseline.eps} \includegraphics[width=\columnwidth]{img/validation_result.eps} \vspace{-0.2cm} \caption{Assessed metrics of the synthetic graph datasets (gray) and the validation set of real graphs (blue). To the left, the naive approach; to the right, the dataset optimized to reduce selection bias. Both are random samples of 1000 graphs each from the entire dataset, shown to better expose the bias.} \label{fig:rmat} \end{figure*} The experiment was repeated for the four different GNN models, described in Section \ref{sec:preGNN}, to show how the results vary depending on the model. To make a fair comparison, all graphs (training and testing set) were populated with a randomly generated set of features and a random class. The feature vectors are assumed to be of size 32, the hidden layer is of size 32, the number of layers is 3, and the number of classes is 2. All the experiments are run using the same \textbf{software}, the framework (PyG) with CUDA version 10.1 and torch version 1.10.2, as well as the same \textbf{hardware}, a machine with CPU Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz, GPU GeForce GTX 980 Ti and 15 GB of RAM. To reason about the results obtained, executions were profiled using NVIDIA Nsight. \subsection{Graph Metrics} Graphs are complex structures that can be measured from multiple angles. Depending on the objective of the measurement, different characteristics of the graph can take a central role or have no impact at all. Also, some characterization metrics are correlated and others are not. Another important aspect is the time it takes to calculate the metrics: some metrics can be assessed via a quick inspection of the graph, and others need a great number of calculations, which in some cases can be reduced by obtaining approximate estimates. In this study, we will use undirected, unweighted, and connected graphs for simplicity. The definitions given in this section will only consider this kind of graph, but the results of the study can be generalized to other kinds of graphs. \vspace{0.2cm} \noindent \textbf{Degree:} The degree of a node is the number of edges incident to that node or, in other words, the number of connections a node has. Hence, the degree $k_v$ of a node $v$ is generally given by the size of its neighborhood, \begin{equation} k_v = |N(v)|, \end{equation} where the neighborhood of a node can be defined as $N(v) = \{u: \{u,v\} \in E\}.$ Then, the degree distribution can be defined as the fraction of nodes with a given degree. Different characterizations can be extracted from the degree distribution, but some of the most useful ones are the maximum degree, the minimum degree, and the mean degree of a graph. The calculation of the degree distribution can be done in time linear in the number of edges in most of the graph representations.
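As a concrete illustration, all of these degree statistics can be accumulated in a single pass over the edge list; the following is an illustrative sketch under the undirected, unweighted assumptions stated above:
\begin{verbatim}
from collections import Counter

# One pass over the edge list yields the degree of every node, and from
# it the maximum, minimum and mean degree and the degree distribution.
def degree_stats(edges, num_nodes):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    degrees = [deg[v] for v in range(num_nodes)]
    distribution = Counter(degrees)   # degree -> number of nodes
    return (max(degrees), min(degrees),
            sum(degrees) / num_nodes, distribution)
\end{verbatim}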
\vspace{0.2cm} \noindent \textbf{Density:} The density $D$ of a graph is the ratio between the edges that are present in the graph and the maximum number of edges that the graph may have, given its number of nodes. Hence, the density of an undirected graph can be defined as: \begin{equation} D(G) = \frac{2\left| E \right|}{\left| V \right|(\left| V \right| -1)}. \end{equation} The density can be calculated in linear order in most of the graph representations, and most of the time the number of nodes and edges is already calculated and stored with the graph. \vspace{0.2cm} \noindent \textbf{Clustering coefficient:} The clustering coefficient $C(v)$ of a node $v$ can be defined as: \begin{equation} C(v) = \frac{2\,|\{\{u,w\} \in E : u,w \in N(v)\}|}{k_v (k_v - 1)}. \end{equation} This indicates how close the neighborhood of that node is to forming a complete subgraph (or clique). The mean clustering coefficient is, as its name indicates, the mean of the clustering coefficients of all the nodes in a graph. The computational complexity of calculating the clustering coefficient is $O(n^3)$. Because of this, we approximate the clustering coefficient by sampling trials instead of using all the nodes, based on the ideas proposed in \cite{schank2005approximating}. \subsection{Regression and Classification} In this work, we use regression and/or classification to extract the relationship between the computation time of a GNN model and the characteristics of an input graph, defined by some of the metrics described above. On the one hand, regression is the statistical process used to find the relationship between a dependent variable and one or more independent variables (or features). Linear regression finds a linear combination of the independent variables that minimizes the sum of squared differences with the dependent variable. To measure the performance of regression we will use three metrics: \begin{itemize} \item The \textbf{coefficient of determination ($R^2$)} compares the sum of squared residuals to the total variance of the data: it is 1 if all predictions are correct (best case), 0 for the baseline model (always using the mean as prediction), and may be negative if the prediction is worse than the baseline model. \item The \textbf{mean squared error (MSE)} is the mean of the squares of the differences between the predicted and the real values. It is 0 if all predictions are correct and its value increases with the errors. Since it grows with the square of the error, it is a good indicator of outliers. \item The \textbf{mean absolute percentage error (MAPE)} is the mean of the absolute errors divided by the real values. MAPE is useful to detect whether the error grows disproportionately with the real value. \end{itemize} On the other hand, classification is also a statistical process, but in this case the dependent variable is discrete and each value it can take is called a class. One popular option for classification is the use of the Support Vector Machine (SVM), which defines a hyperplane that separates the data into categories. This hyperplane is such that it maximizes the margin between the classes. To measure the performance of classification we will use the \textbf{accuracy}, which is simply the number of correctly classified samples divided by the total number of samples. \subsection{GNN Fundamentals} \label{sec:preGNN} Although GNNs are a wide family of algorithms, in this work we focus on the problem of node classification.
The main idea is to use semi-supervised learning, where only some nodes are labeled with their class and the GNN should be able to generalize the labels to the rest of the nodes. To do so, each node has a vector of input features $x_v$. With this input vector, the GNN performs a series of transformations to obtain $z_v$, the output vector. In multiclass node classification, the output vector is composed of one slot for each class, and the class assigned to the node is the one with the highest value. Each node goes through a series of intermediate states $h_v^i$. The final state is $h_v^N$, with $N$ being the number of layers of the GNN. In general, the $i$-th layer of a GNN can formally be described as: \begin{equation} h_v^{i+1} = U^i\left(h_v^i, A^i\left(\{h_u^i: u \in N(v)\}\right)\right) \end{equation} where $h_v^i$ is the hidden state of node $v$ on generation $i$, $A^i(\cdot)$ is the \textit{aggregate} function, whereas $U^i(\cdot)$ is the \textit{update} function that combines the aggregated result with the previous state of the node. The training of the model is done through backpropagation to minimize a loss function. In each epoch, the weights of the update function and, in some models, of the aggregate function are updated after the model is used to predict the labels of the labeled nodes. See \cite{abadal2021computing} for more details on the computations of the training process. \subsection{Sample GNN Models} We will now present some of the most popular GNN models, which we employ in this work to evaluate the generalization potential of \textsc{ProGNNosis}. The implementation used in this study for all the models is based on Pytorch Geometric (PyG) \cite{fey2019fast}, which is one of the most used frameworks. \vspace{0.2cm} \noindent \textbf{Graph Convolutional Network (GCN):} GCN is based on convolutional neural networks, but applied to graphs \cite{kipf2016semi}. It uses a localized first-order approximation of spectral graph convolutions to compute a convolution in the non-euclidean space defined by the graph. The GCN step for a node $v$ can be defined as \begin{equation} h_v^{i+1} = \theta \left( W^i \sum_{u \in N(v) \cup \{v\}} \frac{h_u^i}{\sqrt{k_u k_v}} \right), \end{equation} where $W$ are the trainable weights and $\theta$ is a function that introduces a non-linearity, e.g., \emph{ReLU}. Note that the aggregated features are normalized by the degrees of the connected vertices, i.e., $k_u$ and $k_v$. \vspace{0.2cm} \noindent \textbf{Graph Isomorphism Network (GIN):} GIN is a simple model built to demonstrate that even simple GNN models may be powerful \cite{xu2018powerful}, and aims to classify graphs based on their similarity. The GIN step can be defined as \begin{equation} h_v^{i+1} = MLP^i \left( (1 + \epsilon) h_v^i + \sum_{u \in N(v)} h_u^i \right), \end{equation} where $MLP^i$ is a multi-layer perceptron, and $\epsilon$ is a parameter of the model that indicates the importance of the node's own value relative to the ones of its neighborhood. \vspace{0.2cm} \noindent \textbf{Graph Attention Networks (GAT):} GAT is based on the concept of attention, where the edges have a learnable weight that changes over the generations depending on the feature vectors of the nodes \cite{velivckovic2017graph}. The GAT step can be defined as \begin{equation} h_v^{i+1} = \theta \left( \sum_{u \in N(v) \cup \{v\}} a_{u,v} W^i h_u^i \right), \end{equation} where $a_{u,v}$ is the attention coefficient for nodes $u$ and $v$.
The attention coefficient can be calculated as \begin{equation} a_{u,v} = softmax_{N(v)}\left(a(W^i h_u, W^i h_v)\right), \end{equation} where $a$ is the attention function, softmax is the normalization between neighbors, and $W$ are the trainable weights. \vspace{0.2cm} \noindent \textbf{GraphSAGE (SAGE):} GraphSAGE (SAmple and aggreGatE) owes its name to the fact that it samples the neighbors of a node for the aggregation stage \cite{hamilton2017inductive}. SAGE can be used with different aggregation functions; here we present the one we used, the \textit{sum} aggregation, which takes the form: \begin{equation} h_v^{i+1} = \theta \left(W^i_1 h_v^i + \sum_{u \in N(v)} W^i_2 h_u^i \right), \end{equation} where $W_1$ and $W_2$ are the trainable weights. \subsection{Dataset Generation} Using the method described in Section \ref{sec:gen}, we were able to generate a dataset that represents most of the real graphs used as validation. It can be seen in Figure \ref{fig:rmat} that, in the naive dataset created with RMAT only, most of the graphs have low clustering coefficients while some of the real graphs from the validation set have high clustering coefficients, rendering these graphs underrepresented. We can also see that the final dataset generated with Graphlaxy \cite{Wassington2022BiasRV} solves this problem and that most graphs from the validation set fall inside the limits of the point cloud corresponding to the graphs in this dataset. The dataset is composed of graphs with a number of edges ranging from one thousand to one million. The number of nodes and the rest of the parameters of RMAT are controlled by a distribution resulting from the optimization problem. Such a broad and balanced dataset will allow us to train a regression that takes into account the relationship between the different variables considered. \subsection{Analysis} Once we have an unbiased dataset, an analysis of the impact of the graph metrics on the computation time can be made. Since the SPARSE representation of the graph is based on sparse matrix multiplication, we drew ideas about the most impactful metrics from studies on matrix multiplication \cite{castano2008performance, yeom2016data}. In those studies, the most impactful metric was the number of non-zero elements, which translates to graphs as the number of edges. Additionally, through the experiments, we found that the most relevant metrics were the number of nodes, the number of edges, the maximum degree, and the clustering of the graph. \begin{figure}[b] \centering \includegraphics[width=\columnwidth]{img/data_visualization.eps} \vspace{-0.4cm} \caption{Correlation between number of edges and computation time for the GCN model using the edge list graph representation, for the synthetic training dataset (gray) and the validation dataset (blue).} \label{fig:edge_time} \end{figure} In Figure \ref{fig:edge_time}, we can see how the number of edges impacts the computation time on the training set and the validation set. We can also see that the relationship is not linear when the graphs are small. Also, we see that the variation increases the more edges the graph has, indicating that other metrics could explain that variation. A similar figure can be plotted for all the analyzed configurations, resulting in a similar shape. The impact of these metrics on each of the models and graph representations varies, effectively generating a breakpoint determining when each of the designs is preferred, as we show later.
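To make the difference between the two designs concrete, the sketch below computes the same unweighted sum aggregation with a sparse matrix product (SPARSE) and with gather/scatter operations (EDGE\_LIST). It is a simplified PyTorch illustration of our own, not the PyG kernels used in the experiments.

\begin{verbatim}
import torch

def aggregate_sparse(h, edge_index, n_nodes):
    # SPARSE design: one sparse-dense matrix product adj @ h,
    # with adj[dst, src] = 1 for every directed edge (src, dst).
    vals = torch.ones(edge_index.shape[1])
    adj = torch.sparse_coo_tensor(edge_index.flip(0), vals,
                                  (n_nodes, n_nodes))
    return torch.sparse.mm(adj, h)

def aggregate_edge_list(h, edge_index, n_nodes):
    # EDGE_LIST design: gather source features, scatter-add to targets.
    src, dst = edge_index
    out = torch.zeros(n_nodes, h.shape[1])
    return out.index_add_(0, dst, h[src])

h = torch.randn(4, 32)
edge_index = torch.tensor([[0, 1, 2, 2], [1, 2, 0, 3]])
assert torch.allclose(aggregate_sparse(h, edge_index, 4),
                      aggregate_edge_list(h, edge_index, 4), atol=1e-5)
\end{verbatim}

Both functions produce the same aggregation result; the performance difference between the two comes only from how the underlying kernels traverse the graph.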
The impact of the number of nodes and the number of edges can be explained by the fact that they determine the number of operations in the aggregation, though their influence is different for the SPARSE and EDGE\_LIST representations. The impact of the maximum degree lies in the fact that highly connected nodes become a bottleneck, since all computations of the aggregation on the neighbors must be completed before the calculation of the update. Also, the clustering may be an indicator of the complex dependency between the calculations of the nodes. The non-linearity shown at the left of the graph may be explained by the fact that small graphs use a small portion of the GPU memory and pipeline, so that slightly bigger graphs use more GPU resources over the same amount of time instead of the same amount of resources during more time. This non-linearity will vary with the hardware used. \subsection{Regression} \input{tables/regression} \begin{figure*} \centering \includegraphics[width=\columnwidth]{img/regression_edge.eps} \includegraphics[width=\columnwidth]{img/regression_sparse.eps} \vspace{-0.2cm} \caption{Predicted against real time for both graph representations (to the left, the edge list and, to the right, the sparse representation) in the GIN model case, for the training dataset (gray) and the testing dataset (blue).} \label{fig:reg} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{img/impact.eps} \vspace{-0.5cm} \caption{Mean impact factor of the different metrics on the regression.} \label{fig:impf} \end{figure} Based on the analysis we made, we built a model using four metrics (number of nodes, number of edges, maximum degree, and mean degree) able to predict the training time of the different designs with high accuracy (mean $R^2$ of 0.98 in both the training and testing sets). The model used is a mix of linear regression with ridge regularization and SVM regression with a radial basis function (RBF) kernel. The SVM is used to predict the residuals of the linear regression to account for the non-linearity. This compound approach was taken because the SVM by itself is unable to generalize to bigger graphs, and the linear regression is unable to fit the non-linear parts of the data. Table \ref{tab:reg} shows that the results for the SPARSE graph representation are in general better than the ones for EDGE\_LIST. We see that the MSE is the metric that increases the most on the validation set for EDGE\_LIST, indicating that some samples have higher errors. In general, we can conclude that the models work well for both representations, though slightly better for the SPARSE representation. This can be explained by Figure \ref{fig:reg}, where we see that the EDGE\_LIST representation has a higher variation for the same set of metrics, possibly indicating that one more metric could be used, or that implementation details, such as the order in which the nodes are processed, may impact the performance. To corroborate what we have observed in the analysis, Figure \ref{fig:impf} shows the impact factor, defined as the standard deviation of the variable times the coefficient of the linear regression for each metric. We can see that the number of edges is the most impactful metric, followed by the maximum degree, and that the impact of each metric varies from one representation to the other. \subsection{Classification} \input{tables/classification} Once the regression models are built for each design, the results of the models can be used to classify the graphs by which design will perform better.
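A minimal sketch of this compound model and of the resulting design classification, assuming scikit-learn and purely synthetic placeholder data (the array and class names are ours), could read:

\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

class CompoundRegressor:
    # Ridge linear fit plus an RBF-kernel SVR fitted on its residuals.
    def __init__(self):
        self.linear = Ridge(alpha=1.0)
        self.residual = SVR(kernel="rbf")

    def fit(self, X, y):
        self.linear.fit(X, y)
        self.residual.fit(X, y - self.linear.predict(X))
        return self

    def predict(self, X):
        return self.linear.predict(X) + self.residual.predict(X)

# X: per-graph metrics (nodes, edges, max degree, mean degree);
# y_*: measured training times per epoch for each design (placeholders).
X = np.random.rand(200, 4)
y_sparse, y_edge = np.random.rand(200), np.random.rand(200)
reg_sparse = CompoundRegressor().fit(X, y_sparse)
reg_edge = CompoundRegressor().fit(X, y_edge)
# Classify each graph by the design with the smaller predicted time:
use_sparse = reg_sparse.predict(X) < reg_edge.predict(X)
\end{verbatim}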
From Table \ref{tab:class}, we can obtain the accuracy of the classification between graphs that will work better with the SPARSE representation and the ones that will work better with the EDGE\_LIST representation. These results are good, with accuracies above 0.9 in all cases. From Figure \ref{fig:class}, we can see that the graphs that are not correctly classified are close to the diagonal, meaning that the time for both graph representations is similar. Therefore, classifying them wrongly has a low impact on the GNN acceleration results. \begin{figure} \centering \includegraphics[width=\columnwidth]{img/classification.eps} \vspace{-0.5cm} \caption{Scatter plot with the training time, per epoch, for each of the graph representations (sparse and edge list) for the GraphSAGE model. The diagonal line in blue indicates the frontier where both representations lead to the same processing time, whereas the color of the dots represents the prediction made by \textsc{ProGNNosis}.} \label{fig:class} \end{figure} \subsection{Use Case Summary} \begin{figure} \centering \includegraphics[width=\columnwidth]{img/time.eps} \vspace{-0.5cm} \caption{Mean training time for multiple strategies. \emph{time\_edge} refers to always using the edge list representation, \emph{time\_sparse} refers to always using the sparse matrix representation, \emph{time\_regression} plots the time obtained with the regressions of \textsc{ProGNNosis} and, finally, \emph{time\_best} represents the ideal case that always selects the fastest option.} \label{fig:time_bar} \end{figure} Using the designed method, we demonstrate through Figure~\ref{fig:time_bar} that GNN computation can be accelerated through informed decisions. By using \textsc{ProGNNosis}, we can achieve a speedup close to the one we would obtain by always selecting the best option between the two designs. A summary of the obtained speedups is also shown in Table \ref{tab:class}. Models such as GAT or SAGE showed potential speedups over 1.30$\times$. We have also seen that, when sweeping other variables out of the scope of this study, such as the number of input features, the tradeoffs vary but, still, \textsc{ProGNNosis} allows us to identify the best representation strategy. However, these results are not shown in the paper for the sake of brevity.
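The comparison of Figure~\ref{fig:time_bar} amounts to averaging per-graph times under each strategy; a schematic computation (with placeholder arrays of ours standing in for the measured times) is shown below.

\begin{verbatim}
import numpy as np

t_sparse = 0.5 + np.random.rand(100)   # measured epoch times, SPARSE
t_edge = 0.5 + np.random.rand(100)     # measured epoch times, EDGE_LIST
pred_sparse = t_sparse < t_edge        # in practice: regression output
t_prognnosis = np.where(pred_sparse, t_sparse, t_edge)
t_best = np.minimum(t_sparse, t_edge)  # ideal (oracle) selection
speedup = t_edge.mean() / t_prognnosis.mean()
\end{verbatim}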
{ "attr-fineweb-edu": 1.901367, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUct_xK4sA-5fmwwIx
\section{Introduction} Based on the use of a single {\it product} state, Hartree-Fock-Bogoliubov (HFB) theory~\cite{RiSc80} provides a variational mean-field approximation method to many-fermion systems capable of tackling pairing correlations responsible for (nuclear) superfluidity. It does so at the price of breaking $U(1)$ global gauge symmetry associated with particle-number conservation. As a result, the HFB solution can only be constrained to carry the correct particle number A on {\it average} and displays a non-zero particle-number dispersion reflecting, i.e., varying with, the amount of pairing correlations in the system. While the HFB state captures the essence of static (strong) correlations associated with superfluidity in open-shell nuclei, additional correlations must be incorporated in order to reach a fully quantitative description of exact eigenstates of $H$, e.g., its ground state, i.e., \begin{enumerate} \item dynamical (weak) correlations can be efficiently captured by expanding the exact ground-state around the HFB reference state in a perturbative fashion, i.e., via Bogoliubov many-body perturbation theory (BMBPT)~\cite{Tichai:2018mll,Arthuis:2018yoo,Tichai:2020dna} or in a non-perturbative way via, e.g., Bogoliubov coupled cluster (BCC)~\cite{Si15} or Gorkov self-consistent Green's function (GSCGF)~\cite{So11,Soma:2013xha} theories. \item Given that the HFB state is not an eigenstate of the particle-number operator $A$, it contains, together with the (truncated) expansions built on it, a symmetry contamination. Removing the contaminants by restoring $U(1)$ global gauge symmetry leads to capturing additional static correlations. This can be achieved via projected BMBPT (PBMBPT)~\cite{Duguet:2015yle} or projected BCC (PBCC) theory~\cite{Duguet:2015yle,Qiu:2018edx}. At lowest order, PBMBPT and PBCC reduce to projected HFB (PHFB) theory~\cite{RiSc80}. \end{enumerate} In conclusion, the HFB state typically acts as a reference state for more advanced, potentially exact, many-body expansion methods. It is often casually stated that the HFB state, or any Bogoliubov state for that matter, reduces to a Slater determinant (i) when taking a zero-pairing limit or (ii) when targeting a closed-shell system, typically implying that both statements are essentially equivalent knowing that (iii) the zero-pairing limit is ill-defined for an open-shell system, i.e., it can only be safely considered for a closed-(sub)shell nucleus. Because these statements are only partially correct, the goal of the present contribution is to investigate analytically and illustrate numerically \begin{enumerate} \item the zero-pairing limit of an HFB state constrained to carry an arbitrary particle-number A on average, \item the behavior of BMBPT in such a limit. \end{enumerate} In the present work, the HFB state is shown to reach a mathematically well-defined zero-pairing limit, even for open-shell nuclei. However, the nature and characteristics of that limit state depend strongly on the closed- or open-shell character of the system, i.e., on the nature of the underlying shell structure and of the associated naive filling reached in the zero-pairing limit. Furthermore, these features may themselves depend on the self-consistent spatial symmetry assumed in the calculation. To the best of our knowledge, these basic properties of HFB theory and of the underlying Bogoliubov algebra have never been fully uncovered. Last but not least, the impact of taking the zero-pairing limit on PHFB and BMBPT is further discussed.
The present paper is organized as follows. While Sec.~\ref{basics} introduces the necessary ingredients for the remainder of the paper, Sec.~\ref{zeropairingSec} proceeds to the analytical investigation of the zero-pairing limit. Next, Sec.~\ref{Numresults} displays the results of the numerical calculations illustrating the analytical conclusions reached in the previous section. Eventually, Sec.~\ref{conclusions} provides the conclusions of the present work. A short appendix complements the paper. \section{Basic ingredients} \label{basics} The present section briefly introduces constrained and unconstrained HFB, PHFB and BMBPT formalisms in order to be in position to discuss their zero-pairing limit in Sec.~\ref{zeropairingSec}. \subsection{Hartree-Fock-Bogoliubov formalism} \subsubsection{Unconstrained calculations} The Bogoliubov state $| \Phi \rangle$ is a vacuum for the set of quasi-particle operators obtained via a unitary linear transformation of the form~\cite{RiSc80} \begin{subequations} \begin{align} \beta_{\nu} &\equiv \sum_p U^*_{p\nu} c_p + V^*_{p\nu} c^\dagger_p\, , \\ \beta_{\nu}^\dagger &\equiv \sum_p U_{p\nu} c^\dagger_p + V_{p\nu} c_p \, , \end{align} \end{subequations} where $\{c^{\dagger}_{p}\}$ ($\{c_{p}\}$) defines the set of creation (annihilation) operators associated with the working basis of the one-body Hilbert space ${\cal H}_1$. While $| \Phi \rangle$ is not an eigenstate of the particle-number operator $A$, its expectation value must be constrained to match the number of particles A of the targeted system. This is enforced by adding a Lagrange term to the Hamiltonian, thus introducing the so-called grand potential $\Omega = H - \lambda (A-\text{A})$. The Lagrange multiplier $\lambda$ plays the role of the chemical potential and is to be adjusted so that the particle number is indeed correct on average\footnote{In actual applications, one Lagrange multiplier relates to constraining the neutron number N and one Lagrange multiplier is used to constrain the proton number Z. In our discussion A stands for either one of them.}. In this context, the HFB formalism corresponds to minimizing the expectation value of $\Omega$, i.e., the {\it Routhian}, \begin{align} \Omega_{| \Phi \rangle} & \equiv \langle \Phi | \Omega | \Phi \rangle \label{HFBrouthian} \\ &= \sum_{ij} t_{ij} \, \rho_{ij} + \frac{1}{2} \sum_{ijkl} \overline{v}_{ijkl} \, \rho_{ki} \, \rho_{lj} \nonumber \\ & \hspace{0.2cm} + \frac{1}{4} \sum_{ijkl} \overline{v}_{ijkl} \, \kappa^{\ast}_{ij} \, \kappa_{kl} -\lambda \big(\sum_{ij} \delta_{ij} \, \rho_{ij} - \, \text{A}\big) \nonumber \, , \end{align} within the manifold of Bogoliubov states. This procedure delivers the HFB eigenvalue equation~\cite{RiSc80} \begin{align} \label{eq:hfb_equationunconstrained} \begin{pmatrix} h - \lambda & \Delta \\ - \Delta^\ast & -(h -\lambda)^\ast \end{pmatrix} \begin{pmatrix} U \\ V \end{pmatrix}_{\mu} &= E_{\mu} \begin{pmatrix} U \\ V \end{pmatrix}_{\mu} \, , \end{align} providing the set of quasi-particle eigenstates making up the columns of the transformation matrices $(U,V)$ as well as the set of quasi-particle energies $\{E_{\mu}\}$ as eigenvalues. In Eq.~\eqref{eq:hfb_equationunconstrained}, the Hartree-Fock and Bogoliubov fields depend on matrix elements of the one-body kinetic energy operator $\{t_{ij}\}$ and of the two-body interaction\footnote{In the present investigation, original and induced three-nucleon forces are omitted for simplicity given that none of the conclusions depend on their inclusion.
When taking them into consideration, the particle-number conserving normal-ordered two-body (PNO2B) approximation introduced in Ref.~\cite{Ripoche:2019nmy} can be used to take the dominant part of three-body forces into account via an effective two-body-like interaction as was done in, e.g., Ref.~\cite{Tichai:2018mll}.} operator $\{\overline{v}_{ikjl}\}$ according to\footnote{One subtracts the center-of-mass kinetic energy from the Hamiltonian $H$ in actual calculations of finite nuclei. As far as the present work is concerned, this simply leads to a redefinition of the one- and two-body matrix elements $t_{ij}$ and $\overline{v}_{ikjl}$ of the Hamiltonian without changing any aspect of the analysis.} \begin{subequations} \label{fields} \begin{align} h_{ij} &\equiv t_{ij} + \sum_{kl} \overline{v}_{ikjl} \, \rho_{lk} \, , \label{field1} \\ \Delta_{ij} &\equiv \frac{1}{2} \sum_{kl} \overline{v}_{ijkl} \, \kappa_{kl} \, , \label{field2} \end{align} \end{subequations} where \begin{subequations} \label{densities} \begin{align} \rho_{ij} &\equiv \langle \Phi | c_{j}^{\dagger} c_{i} | \Phi \rangle = \sum_{\nu} V^{\ast}_{i\nu} V_{j\nu} , \\ \kappa_{ij} &\equiv \langle \Phi | c_{j} c_{i} | \Phi \rangle = \sum_{\nu} V^{\ast}_{i\nu} U_{j\nu} \, , \end{align} \end{subequations} respectively denote the normal and anomalous density matrices associated with $| \Phi \rangle$. When building the density matrices in Eq.~\eqref{densities} from the solutions of Eq.~\eqref{eq:hfb_equationunconstrained}, the sum is actually restricted to quasi-particle states associated with positive quasi-particle energies $\{E_{\mu} \geq 0\}$, i.e., the fully paired vacuum carrying even-number parity is considered throughout the present paper. Once Eq.~\eqref{eq:hfb_equationunconstrained} is solved, the HFB state can be most conveniently written in its canonical, i.e., BCS-like, form~\cite{RiSc80} \begin{equation} | \Phi \rangle \equiv \prod_{k>0} \left[u_k + v_k a^{\dagger}_k a^{\dagger}_{\bar{k}}\right] | 0 \rangle \, . \label{HFBstate} \end{equation} In Eq.~\eqref{HFBstate}, operators $\{ a^{\dagger}_k, a_k\}$ characterize the so-called canonical one-body basis in which pairs of conjugated states $(k,\bar{k})$ are singled out by the Bogoliubov transformation. Conventionally, the two members of the conjugated pair are distinguished as $k>0$ and $\bar{k}<0$, thus effectively splitting the basis into two halves. The coefficients $u_k=+u_{\bar{k}}$ and $v_k=-v_{\bar{k}}$ are BCS-like occupation numbers. They make up the simplified Bogoliubov transformation obtained through the Bloch-Messiah-Zumino decomposition~\cite{RiSc80} of the full Bogoliubov transformation extracted from Eq.~\eqref{eq:hfb_equationunconstrained}. The BCS-like occupation numbers can be chosen real and satisfy the identity $u^2_k + v^2_k = 1$. Employing Eq.~\eqref{HFBstate}, the canonical form of the norm and density matrices of the HFB state are obtained as \begin{equation} \langle \Phi | \Phi \rangle = \prod_{k>0} (u^2_k + v^2_k) = 1 \, , \label{norm} \end{equation} and \begin{subequations} \label{densitiescano} \begin{align} \rho_{kk'} &= v^2_k \, \delta_{kk'}, \\ \kappa_{kk'} &= u_kv_k \, \delta_{\bar{k}k'}, \end{align} \end{subequations} respectively.
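As a simple numerical check of Eqs.~\eqref{densities} and~\eqref{densitiescano}, the following toy script (plain NumPy, a single conjugated pair $(k,\bar{k})$ of BCS type; the sign conventions for $V$ are our own choice) verifies the unitarity of the transformation and the canonical form of the density matrices.

\begin{verbatim}
import numpy as np

u, v = 0.8, 0.6                       # u**2 + v**2 = 1, one pair (k, kbar)
U = np.diag([u, u])
V = np.array([[0.0, v], [-v, 0.0]])   # couples the conjugated partners

# Unitarity of the Bogoliubov transformation:
assert np.allclose(U.T @ U + V.T @ V, np.eye(2))
assert np.allclose(U.T @ V + V.T @ U, np.zeros((2, 2)))

rho = V.conj() @ V.T      # rho_ij   = sum_nu V*_{i nu} V_{j nu}
kappa = V.conj() @ U.T    # kappa_ij = sum_nu V*_{i nu} U_{j nu}

assert np.allclose(rho, np.diag([v**2, v**2]))  # rho_kk' = v_k^2 delta_kk'
assert np.isclose(kappa[0, 1], u * v)           # kappa_{k kbar} = u_k v_k
\end{verbatim}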
Thanks to an appropriate adjustment of the chemical potential $\lambda$ in Eq.~\eqref{eq:hfb_equationunconstrained}, the average particle number carried by the HFB state is constrained to the integer value A, which in the canonical basis reads as \begin{align} \langle \Phi | A | \Phi \rangle & = \sum_{k} v_k^2 \nonumber \\ &\equiv \sum_{k} \frac{1}{2} \left(1 - \frac{\epsilon_k - \lambda}{\sqrt{(\epsilon_k - \lambda)^2 + \Delta_k^2}}\right) \nonumber \\ &= \text{A} \, , \label{partnumbconstr} \end{align} where $\epsilon_k \equiv h_{kk} = h_{\bar{k}\bar{k}}$ and $\Delta_k \equiv \Delta_{k\bar{k}} = - \Delta_{\bar{k}k}$. Eventually, the total HFB energy is obtained as \begin{align} E_{| \Phi \rangle} & \equiv \langle \Phi | H | \Phi \rangle \nonumber \\ & \equiv E^{\text{kin}}_{| \Phi \rangle} + E^{\text{HF}}_{| \Phi \rangle} + E^{\text{B}}_{| \Phi \rangle} \nonumber \\ &= \sum_{ij} t_{ij} \, \rho_{ij} + \frac{1}{2} \sum_{ijkl} \overline{v}_{ijkl} \, \rho_{ki} \, \rho_{lj} \nonumber \\ & \hspace{2cm} + \frac{1}{4} \sum_{ijkl} \overline{v}_{ijkl} \, \kappa^{\ast}_{ij} \, \kappa_{kl} \nonumber \\ &= \sum_{k} t_{kk} \, v_k^2 + \frac{1}{2} \sum_{kk'} \overline{v}_{kk'kk'} \, v_k^2 \, v_{k'}^2 \nonumber \\ & \hspace{2cm} + \frac{1}{4} \sum_{kk'} \overline{v}_{k\bar{k}k'\bar{k}'} \, u_{k}v_{k} \, u_{k'}v_{k'} \, , \label{HFBenergy} \end{align} and is equal to the Routhian $\Omega_{| \Phi \rangle}$ (Eq.~\eqref{HFBrouthian}) as long as the constraint on the average particle number is indeed satisfied. \subsubsection{Constrained calculations} The zero-pairing limit is to be achieved by constraining the {\it variational determination} of the HFB state, i.e., by subtracting from the grand potential $\Omega$ a Lagrange term proportional to an appropriate operator $O$ in such a way that {\it the pairing field is entirely driven to zero in the resulting HFB Hamiltonian matrix}. Once the constrained HFB state is obtained, associated many-body observables, e.g., the binding energy and particle-number variance, can be computed. Any quantity $O$ varying monotonically with the amount of pairing correlations carried by the HFB state, i.e., acting as an order parameter of the breaking of $U(1)$ global-gauge symmetry, can be employed as a constraint. A typical example is the two-body operator associated with the particle-number variance $(A- \langle \Phi | A | \Phi \rangle)^2$~\cite{siegal72a,faessler73a,faessler75a,meyer91a,Fernandez:2005ux,Bender:2006tb,Vaquero:2011hq,Vaquero:2013paa}. In the present paper, the Hermitian one-body particle-number non-conserving operator \begin{align} \label{constraint} \Delta_{\text{C}} &\equiv \frac{1}{2} \sum_{ij} \Delta_{ij} \, c^{\dagger}_i c^{\dagger}_j + \frac{1}{2} \sum_{ij} \Delta^{\ast}_{ij} \, c_j c_i \, , \end{align} with $\Delta_{ij}$ defined in Eq.~\eqref{field2}, is employed. With this operator at hand, $\Omega$ is replaced by the {\it constrained} grand potential \begin{equation} \Omega(\delta)\equiv \Omega - \frac{1}{2}(1-\delta) \Delta_{\text{C}} \end{equation} to perform the minimization within the manifold of even-number parity Bogoliubov states. 
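Before turning to the constrained problem, note that the adjustment of $\lambda$ in Eq.~\eqref{partnumbconstr} amounts to a one-dimensional root search since $\langle \Phi | A | \Phi \rangle$ grows monotonically with $\lambda$. A minimal sketch (NumPy, with a schematic single-particle spectrum and constant canonical gaps chosen by us purely for illustration) reads:

\begin{verbatim}
import numpy as np

def particle_number(lam, eps, gaps):
    # <A> = sum_k v_k**2 with the occupations of Eq. (partnumbconstr)
    v2 = 0.5 * (1.0 - (eps - lam) / np.sqrt((eps - lam)**2 + gaps**2))
    return v2.sum()

def solve_lambda(eps, gaps, a, lo=-60.0, hi=20.0, tol=1e-10):
    # Bisection: <A>(lambda) increases monotonically with lambda.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if particle_number(mid, eps, gaps) < a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Schematic canonical spectrum (MeV) with shell degeneracies:
eps = np.repeat([-32.0, -20.0, -12.0, -8.0, -4.0, 2.0], [2, 4, 6, 2, 4, 8])
gaps = np.full_like(eps, 1.5)           # constant pairing gap
lam = solve_lambda(eps, gaps, a=16)     # constrain <A> = 16 on average
\end{verbatim}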
This leads to minimizing the modified, i.e., constrained, Routhian \begin{align} \Omega(\delta)_{| \Phi \rangle} & \equiv \langle \Phi | \Omega(\delta) | \Phi \rangle \label{HFBrouthianconstrained} \\ &= \sum_{ij} t_{ij} \, \rho_{ij} + \frac{1}{2} \sum_{ijkl} \overline{v}_{ijkl} \, \rho_{ki} \, \rho_{lj} \nonumber \\ & \hspace{0.2cm} + \frac{\delta}{4} \sum_{ijkl} \overline{v}_{ijkl} \, \kappa^{\ast}_{ij} \, \kappa_{kl} -\lambda \big(\sum_{ij} \delta_{ij} \, \rho_{ij} - \, \text{A}\big) \nonumber \, , \end{align} which takes the same form as the unconstrained Routhian, except that the pairing, i.e., Bogoliubov, term is now rescaled by the parameter $\delta$. Of course, constrained and unconstrained Routhians match for $\delta = 1$. The minimization of $\Omega(\delta)_{| \Phi \rangle}$ leads to solving a constrained HFB eigenvalue equation taking the form\footnote{While providing the same end results as the particle-number variance, the constraining one-body operator $\Delta_{\text{C}}$ is gentler numerically and allows one to reach the zero-pairing limit in a controlled fashion. The numerical easiness is also largely due to the constraining method employed here. Instead of using an actual Lagrange method in which the driving parameter $\delta$ is self-consistently adjusted to make the constraint $\langle \Phi(\delta)| \Delta_{\text{C}}| \Phi(\delta) \rangle$ equate a set of predefined values, the calculation is performed for a fixed value of $\delta$ and is repeated such that $\delta$ scans a chosen interval $[0,\delta_{\text{max}}]$. This approach is appropriate because (i) the specific value of the quantity $\langle \Phi(\delta)| \Delta_{\text{C}}| \Phi(\delta) \rangle$ is of no particular interest and because (ii) the particle-number variance varies monotonically with $\delta$ such that the end results can anyway be displayed as a function of it. In this context, $\delta_{\text{max}}$ can always be taken large enough to cover any desired range of particle-number variance values.} \begin{align} \label{eq:hfb_equation} \begin{pmatrix} h - \lambda & \delta \times \Delta \\ -\delta \times \Delta^\ast & -(h -\lambda)^\ast \end{pmatrix}_{(\delta)} \begin{pmatrix} U(\delta) \\ V(\delta) \end{pmatrix}_{\mu} &= E_{\mu}(\delta) \begin{pmatrix} U(\delta) \\ V(\delta) \end{pmatrix}_{\mu} \, , \end{align} under the additional constraint that the solution, denoted as $| \Phi(\delta) \rangle$, carries the average nucleon number $\text{A}$. This procedure delivers $\delta$-dependent quasi-particle states and energies $\{E_{\mu}(\delta)\}$, and thus a $\delta$-dependent many-body state $| \Phi(\delta) \rangle$. In Eq.~\eqref{eq:hfb_equation}, the Hartree-Fock and Bogoliubov fields themselves depend on $\delta$ through their functional dependence on the normal and anomalous density matrices associated with $| \Phi(\delta) \rangle$. As visible in Eq.~\eqref{eq:hfb_equation}, the use of $\Omega(\delta)$ eventually boils down to the fact that the pairing field at play in the HFB matrix is obtained by multiplying the unconstrained one by the parameter $\delta$, which is itself effectively equivalent to rescaling all two-body matrix elements entering the pairing field (Eq.~\eqref{field2}) by that same factor. Whereas $\delta =1$ corresponds to the unconstrained calculation, taking $\delta \rightarrow 0$ characterizes the zero-pairing limit of present interest. 
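The effect of the rescaling can be mimicked with a schematic BCS-like toy model (our own sketch, not an actual self-consistent HFB solver): the gaps are multiplied by $\delta$ and $\lambda$ is readjusted at each step, here for two particles in a four-fold-degenerate valence shell so that the particle-number variance can be monitored on the way to the limit.

\begin{verbatim}
import numpy as np

def occupations(lam, eps, gaps):
    return 0.5 * (1 - (eps - lam) / np.sqrt((eps - lam)**2 + gaps**2))

def solve_lambda(eps, gaps, a, lo=-60.0, hi=20.0):
    for _ in range(200):                 # bisection on <A> = a
        mid = 0.5 * (lo + hi)
        if occupations(mid, eps, gaps).sum() < a:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = np.repeat([-32.0, -20.0, -12.0, -8.0, -4.0, 2.0], [2, 4, 6, 2, 4, 8])
for delta in [1.0, 0.5, 0.1, 1e-3, 1e-6]:   # delta -> 0: zero-pairing limit
    gaps = delta * np.full_like(eps, 1.5)   # rescaled pairing field
    lam = solve_lambda(eps, gaps, a=16)     # 2 nucleons in the 4-fold shell
    v2 = occupations(lam, eps, gaps)
    var = np.sum(2 * v2 * (1 - v2))         # particle-number variance
    print(delta, lam, var)                  # var -> 2, not 0 (open shell)
\end{verbatim}

In this toy, the variance of the hole and particle shells dies out with $\delta$, whereas the half-filled valence shell keeps contributing, anticipating the analytical findings of Sec.~\ref{zeropairingSec}.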
Eventually, all quantities introduced in the context of unconstrained calculations can be similarly defined here at the price of providing them with a $\delta$ dependence. The expression of the constrained HFB energy $E_{| \Phi (\delta) \rangle}$ is formally identical to the unconstrained one given in Eq.~\eqref{HFBenergy} except that it acquires an implicit dependence on $\delta$ through the density matrices of the constrained HFB state $| \Phi (\delta) \rangle$. This is at variance with the constrained Routhian $\Omega(\delta)_{| \Phi (\delta) \rangle}$ and the pairing field matrix elements $\Delta_{ij}(\delta)$ that additionally carry an {\it explicit} dependence on $\delta$ in their very definition. In any case, the $\delta$ dependence of the various quantities at play is omitted for simplicity in the remainder of the paper, except if specified otherwise. \subsection{Bogoliubov many-body perturbation theory} \label{BMBPTformalism} One focus of the present study is to investigate the consequence of driving the HFB state to the zero-pairing limit when performing BMBPT calculations on top of it. This investigation happens to raise non-trivial questions regarding the way a perturbative expansion is best formulated when performed on top of a reference state delivered via a {\it constrained} minimization. \subsubsection{Unconstrained HFB reference state} \label{BMBPTunconstrained} To be in position to address these questions, let us first briefly recall the main ingredients of BMBPT based on an {\it unconstrained} HFB reference state. For a detailed account of the BMBPT formalism, the reader is referred to Ref.~\cite{Arthuis:2018yoo}. Because the HFB reference state is not an eigenstate of $A$, the operator meaningfully driving the BMBPT expansion\footnote{The ``driving'' operator is the one at play in the (imaginary) time-evolution operator transforming the HFB reference state into the fully correlated ground state, which is Taylor expanded to build the perturbative series~\cite{Duguet:2015yle}.} is not $H$ but $\Omega$~\cite{Duguet:2015yle} and is thus the same as the one at play in the HFB minimization. To set up BMBPT, $\Omega$ must be first normal ordered with respect to $| \Phi \rangle$ \begin{align} \Omega &= \Omega^{[0]}_{| \Phi \rangle} + \Omega^{[2]}_{| \Phi \rangle} + \Omega^{[4]}_{| \Phi \rangle} \notag \\ &= \Omega^{00}_{| \Phi \rangle} \notag \\ &\phantom{=} + \Omega^{20}_{| \Phi \rangle} + \Omega^{11}_{| \Phi \rangle} +\Omega^{02}_{| \Phi \rangle} \notag \\ &\phantom{=} + \Omega^{40}_{| \Phi \rangle} + \Omega^{31}_{| \Phi \rangle} +\Omega^{22}_{| \Phi \rangle} +\Omega^{13}_{| \Phi \rangle} +\Omega^{04}_{| \Phi \rangle} \, , \label{eq:NO} \end{align} where $\Omega^{ij}_{| \Phi \rangle}$ denotes the normal-ordered component involving $i$ ($j$) quasi-particle creation (annihilation) operators associated with $| \Phi \rangle$, e.g., \begin{align} \Omega^{31}_{| \Phi \rangle} &\equiv \frac{1}{3!}\sum_{\mu_1 \mu_2 \mu_3 \mu_4} \Omega^{31}_{\mu_1 \mu_2 \mu_3 \mu_4} \beta^{\dagger}_{\mu_1}\beta^{\dagger}_{\mu_2}\beta^{\dagger}_{\mu_3}\beta_{\mu_4} \, . \end{align} In Eq.~\eqref{eq:NO}, $\Omega^{00}_{| \Phi \rangle}$ is nothing but the Routhian $\Omega_{| \Phi \rangle}$ introduced in Eq.~\eqref{HFBrouthian}, $\Omega^{[2]}_{| \Phi \rangle}$ is an effective, i.e., normal-ordered, one-body operator and $\Omega^{[4]}_{| \Phi \rangle}$ is an effective two-body one.
Details on the normal-ordering procedure as well as expressions of the matrix elements of each operator $\Omega^{ij}_{| \Phi \rangle}$ in terms of the original matrix elements of the Hamiltonian and of the $(U,V)$ matrices can be found in Ref.~\cite{Si15}. To actually set up the perturbation theory, the grand potential is split into an unperturbed part $\Omega_{0}$ and a residual part $\Omega_1$ \begin{equation} \label{split1} \Omega = \Omega_{0} + \Omega_{1} \ , \end{equation} such that \begin{subequations} \label{split2} \begin{align} \Omega_{0} &\equiv \Omega^{00}_{| \Phi \rangle}+\tilde{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}} \ , \\ \Omega_{1} &\equiv \Omega^{20}_{| \Phi \rangle} + \breve{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}} + \Omega^{02}_{| \Phi \rangle} \notag \\ &\phantom{=} + \Omega^{40}_{| \Phi \rangle} + \Omega^{31}_{| \Phi \rangle} +\Omega^{22}_{| \Phi \rangle} +\Omega^{13}_{| \Phi \rangle} +\Omega^{04}_{| \Phi \rangle} \ , \label{e:perturbation} \end{align} \end{subequations} with $\breve{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}}\equiv\Omega^{11}_{| \Phi \rangle}- \tilde{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}}$. The one-body part of $\Omega_{0}$ is diagonal, i.e., \begin{equation} \tilde{\Omega}^{11}_{| \Phi \rangle; \{ \tilde{E}_{\mu}\}} \equiv \sum_{\mu} \tilde{E}_{\mu} \, \beta^{\dagger}_{\mu} \beta_{\mu} \, , \label{onebodypiece} \end{equation} with $\{\tilde{E}_{\mu}\}$ denoting an arbitrary set of positive energies. As $|\Phi \rangle$ solves the unconstrained HFB variational problem, i.e., Eq.~\eqref{eq:hfb_equationunconstrained}, one has that $\Omega^{20}_{| \Phi \rangle}=\Omega^{02}_{| \Phi \rangle}=0$. Furthermore, while the choice of $\tilde{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}}$, i.e., of the set of energies $\{\tilde{E}_{\mu}\}$, is arbitrary, a natural choice is to pick the eigenvalues of the HFB eigenvalue equation, i.e., to choose $\tilde{E}_{\mu}\equiv E_{\mu} > 0$ for all $\mu$ in Eq.~\eqref{onebodypiece}. This choice additionally leads to $\breve{\Omega}^{11}_{| \Phi \rangle; \{E_{\mu}\}}=0$ such that the residual interaction $\Omega_1$ in Eq.~\eqref{e:perturbation} reduces to its effective two-body part $\Omega^{[4]}_{| \Phi \rangle}$. This particular setting defines the \emph{canonical} version of BMBPT and reduces significantly the number of non-zero diagrams to be considered. Contrarily, not making such a choice leads to the appearance of \emph{non-canonical} diagrams involving $\Omega^{20}_{|\Phi \rangle}$, $\breve{\Omega}^{11}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}}$ and $\Omega^{02}_{|\Phi \rangle}$ vertices. The power of BMBPT relies on the fact that the superfluid character of open-shell nuclei ensures that the HFB reference state is non-degenerate, i.e., elementary quasi-particle excitations of the HFB vacuum display non-zero energies. This key property relates to the fact that the quasi-particle energies $\{E_{\mu}\}$ are bound from below by the superfluid \emph{pairing gap} at the Fermi energy \begin{align} \text{Min}_{\mu} \{E_{\mu}\} \geq \Delta_{\text{F}} > 0 \, , \label{fermigap} \end{align} when the system is indeed superfluid, i.e., exhibits pairing correlations.
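The role of the bound~\eqref{fermigap} can be visualized with BCS-like quasi-particle energies, used here as a stand-in for the HFB eigenvalues (a sketch with schematic numbers of our own choosing): as long as the gap is finite, energy denominators of the type appearing in the perturbative corrections discussed next remain bounded away from zero.

\begin{verbatim}
import numpy as np

eps_minus_lam = np.array([-10.0, -4.0, 0.0, 3.0, 9.0])  # schematic (MeV)
delta_f = 1.5                                 # pairing gap at Fermi energy
E = np.sqrt(eps_minus_lam**2 + delta_f**2)    # quasi-particle energies

# Eq. (fermigap): even the state with eps_k = lambda has E_k = delta_f
assert E.min() >= delta_f

# Four-quasiparticle denominators are then bounded below by 4*delta_f,
# so second-order-like corrections cannot become singular:
denom = E[:4].sum()
assert denom >= 4 * delta_f
\end{verbatim}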
The benefit of this feature can be best appreciated by considering as an example the first BMBPT correction that, added to the reference HFB energy~\cite{Arthuis:2018yoo}, defines second-order BMBPT calculations, i.e., BMBPT(2), \begin{align} E^{(2)}_{| \Phi \rangle; \{\tilde{E}_{\mu}\}} =& - \frac{1}{2} \sum_{\mu_1 \mu_2} \frac{H^{20}_{\mu_1 \mu_2} \Omega^{02}_{\mu_1 \mu_2} }{\tilde{E}_{\mu_1}+\tilde{E}_{\mu_2}} \label{BMBPTcorrection2} \\ &-\frac{1}{4!} \sum_{\mu_1 \mu_2 \mu_3 \mu_4} \frac{H^{40}_{\mu_1\mu_2\mu_3\mu_4} \Omega^{04}_{\mu_3 \mu_4 \mu_1\mu_2} }{\tilde{E}_{\mu_1}+\tilde{E}_{\mu_2}+\tilde{E}_{\mu_3}+\tilde{E}_{\mu_4}} \, . \nonumber \end{align} While the first term in Eq.~\eqref{BMBPTcorrection2} is non-canonical and thus cancels in the present context, using $\tilde{E}_{\mu}\equiv E_{\mu} > 0$ leads to strictly positive energy denominators. As $E^{(2)}$ is representative of all correction terms, the expansion is in this case, if not necessarily convergent~\cite{Demol:2020mzd,Tichai:2020dna}, at least ensured to be non-singular in open-shell nuclei. This would not be the case in standard MBPT due to the degenerate character of the Hartree-Fock (HF) Slater determinant reference state in open-shell nuclei, i.e., elementary particle-hole excitations of the Slater determinant within the valence shell are zero in such a situation. \subsubsection{Constrained HFB reference state} \label{BMBPTconstrained} Because the reference state $| \Phi(\delta) \rangle$ is now obtained by solving the constrained HFB eigenvalue equation, the operator $\Omega$ driving the BMBPT expansion differs from the one, i.e., $\Omega(\delta)$, at play in the HFB minimization. This results in the fact that \begin{itemize} \item the normal-ordered form of $\Omega$ with respect to $| \Phi(\delta) \rangle$ is not canonical\footnote{The operator $\Omega(\delta)$ is in canonical form when normal ordered with respect to $| \Phi(\delta) \rangle$ but $\Omega$ is not, except for $\delta =1$ of course.}, i.e., $\Omega^{20}_{| \Phi(\delta) \rangle}$ and $\Omega^{02}_{| \Phi(\delta) \rangle}$ are not zero, \item the partitioning of $\Omega$, i.e., Eqs.~\eqref{split1}-\eqref{split2}, associated with the choice of $\{\tilde{E}_{\mu}(\delta)\}$ in the definition of $\tilde{\Omega}^{11}_{| \Phi(\delta) \rangle; \{\tilde{E}_{\mu}(\delta)\}}$ is neither natural nor obvious. \end{itemize} Thus, the application of BMBPT on top of a constrained HFB state necessarily requires evaluating non-canonical diagrams. As for the partitioning of $\Omega$, two choices of quasi-particle energies are presently tested at any given value of $\delta$: \begin{enumerate} \item $\tilde{E}_{\mu}(\delta) = E_{\mu}(\delta)$, \item $\tilde{E}_{\mu}(\delta) = E_{\mu} = E_{\mu}(1)$, \end{enumerate} where the second choice is thus independent of $\delta$. These two options are respectively denoted as {\it Option 1} (BMBPT-1) and {\it Option 2} (BMBPT-2) in the remainder of the paper. Of course, both options coincide for unconstrained calculations, i.e., for $\delta=1$. \subsection{Projected Hartree-Fock-Bogoliubov formalism} The PHFB formalism invokes gauge-rotated HFB states obtained as \begin{align} | \Phi(\varphi) \rangle &\equiv R(\varphi)| \Phi \rangle \notag \\ &= \prod_{k>0} (u_k + e^{2i\varphi}v_k a^{\dagger}_k a^{\dagger}_{\bar{k}}) | 0 \rangle \, , \end{align} where the rotation operator spanning the $U(1)$ group is given by $R(\varphi)\equiv e^{iA\varphi}$, with $\varphi \in [0,2\pi]$.
The off-diagonal norm kernel between the HFB state and any gauge-rotated partner generalizes the norm overlap of Eq.~\eqref{norm} according to \begin{equation} \langle \Phi | \Phi(\varphi) \rangle = \prod_{k>0} (u^2_k + e^{2i\varphi}v^2_k) \, . \label{normoverlap} \end{equation} One notices that $\langle \Phi| \Phi(\pi/2) \rangle = 0$ whenever a specific shell is such that $u^2_{k}=v^2_{k}=1/2$. Whenever the corresponding shell is characterized by $p_k$ conjugated pairs, the gauge-dependent integrand at play in the PHFB calculation of an observable associated with a $(p_k\!+\!1)$-body (or higher-body) operator displays an apparent pole~\cite{tajima92a,donau98,almehed01a,anguiano01b}. While this pole is in fact a mere intermediate artefact and disappears when combining the various terms contributing to the observable, it can generate numerical difficulties\footnote{If the operator at play is only of $p_k$-body character, there is no apparent pole but a zero-over-zero when multiplying the off-diagonal norm kernel with the connected off-diagonal operator kernel that can still lead to numerical difficulties. When the operator is of even lower rank, no difficulty arises.} in applications if not accurately resolved~\cite{almehed01a,anguiano01b,doba05a,Bender:2008rn}. \section{Zero-pairing limit} \label{zeropairingSec} With the ingredients of Sec.~\ref{basics} at hand, the goal is now to actually investigate the zero-pairing limit of the even-number-parity solution of the HFB equations. \subsection{Naive filling} The discussion below crucially relies on the {\it naive filling} characterizing a given system of interest {\it in the zero-pairing limit}. The naive filling corresponds to occupying single-particle canonical states characterized by the A lowest energies $\epsilon_k$. Doing so, one exhausts the A nucleons in such a way that $0\leq a_v \leq d_v$ nucleons sit in the so-called {\it valence}, i.e., last occupied, shell characterized by energy $\epsilon_v$ and degeneracy $d_v$ (and thus $p_v\equiv d_v/2$ pairs of conjugated states). The naive occupation of each canonical state belonging to the valence shell, denoted as \begin{equation} o_v \equiv \frac{a_v}{d_v} \, , \end{equation} ranges between 0 and 1, i.e., $0<o_v \leq 1$. Two crucially different categories of nuclei emerge in this context, i.e., a nucleus is either of closed-(sub)shell character when $o_v=1$ or of open-shell character whenever $0<o_v < 1$. In the present context, the definition of these two categories must be understood in a broad sense, i.e., independently of the symmetries, and thus of the degeneracies, characterizing the spectrum $\{\epsilon_k\}$. While the fact that a given nucleus belongs to one category or the other can only be inferred \textit{a posteriori} and will depend on the symmetries characterizing the employed numerical code\footnote{A system whose HFB solution is of open-shell character in the zero-pairing limit whenever spherical symmetry is enforced can relax to a closed-shell system if $SU(2)$ rotational symmetry is allowed to break.}, the general features of the zero-pairing limit will not depend on whether the closed-shell/open-shell system is spherical or deformed. Whenever the canonical energy shells $\{\epsilon_k\}$ obtained in the zero-pairing limit are characterized by spherical symmetry, they each display a degeneracy $d_k \equiv 2j_k+1$ where $j_k$ denotes the one-body total angular momentum shared by all degenerate single-particle states.
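The naive filling is easily made algorithmic; the helper below (our own bookkeeping, with a schematic spherical shell list whose energies are placeholders) returns the valence-shell characteristics $(\epsilon_v, d_v, a_v, o_v)$ used throughout the following discussion.

\begin{verbatim}
def naive_filling(shells, a):
    # shells: (eps_k, j_k) pairs; fill the a nucleons from the bottom.
    left = a
    for eps, j in sorted(shells):
        d = int(2 * j + 1)          # spherical degeneracy d_k = 2 j_k + 1
        if left <= d:
            return eps, d, left, left / d   # (eps_v, d_v, a_v, o_v)
        left -= d
    raise ValueError("not enough shells to place all nucleons")

# Schematic neutron shells, the last one playing the role of a d3/2:
shells = [(-32.0, 0.5), (-20.0, 1.5), (-16.0, 0.5),
          (-12.0, 2.5), (-8.0, 0.5), (-4.0, 1.5)]
eps_v, d_v, a_v, o_v = naive_filling(shells, a=18)   # o_v = 2/4 = 1/2
\end{verbatim}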
For example, the text-book expectation is that $^{26}$O respects spherical symmetry and is characterized by a $\text{d}_{3/2}$ valence shell carrying degeneracy $d_v=4$ and fitting 2 neutrons such that $o_v =1/2$. Whenever $SU(2)$ rotational symmetry is allowed to break, the canonical spectrum $\{\epsilon_k\}$ associated with the ground-state of any even-even nucleus qualifying as a {\it spherical} doubly open-shell is only left with the two-fold Kramers degeneracy such that these nuclei all eventually qualify as {\it deformed} closed-shell systems\footnote{As will be exemplified in the numerical applications, such a situation also occurs for semi-magic nuclei.}. \subsection{Definition of the limit} Except when the naive filling reached in the zero-pairing limit corresponds to a closed-(sub)shell system, i.e., whenever $o_v=1$, $| \Phi \rangle$ cannot reduce to a Slater determinant. Thus, open-shell nuclei characterized by $0<o_v<1$ constitute the non-trivial focus of the present study. Of course, the higher the degree of symmetry, i.e., the degenerate character of single-particle energy shells, the larger the occurrence rate of open-shell systems. For such nuclei, the zero-pairing limit must be formally defined and performed with care. How the zero-pairing limit is taken is important and non-trivial because the HFB state \begin{enumerate} \item must be constrained to fulfilling Eq.~\eqref{partnumbconstr}, \item is a solution of the iterative HFB eigenvalue problem (Eq.~\eqref{eq:hfb_equation}) involving an interference between the Hartree-Fock and the Bogoliubov fields\footnote{When explicitly performing the constraint on the particle-number variance, the constraint can eventually be shared at will between both fields entering the HFB matrix via the use of identities deriving from the unitarity of the Bogoliubov transformation~\cite{Ripochethesis2019}. If $100\%$ of the constraint is injected into the Hartree-Fock field, the constraint induces increasingly spread-out nuclear shells that make the pair scattering less and less efficient. Contrarily, when $100\%$ of the constraint is injected into the Bogoliubov field, the effective strength of the pairing field is reduced. While observables, e.g., the HFB total energy, and the eigenstates of the HFB matrix are independent of this partitioning, the quasi-particle energies are not. The present way of directly scaling the Bogoliubov field is close in spirit to the second case.}. \end{enumerate} As the zero-pairing limit ($\delta \rightarrow 0$) is taken by scaling the Bogoliubov field down to zero in Eq.~\eqref{eq:hfb_equation}, the search for the limit reached by $| \Phi(\delta) \rangle$ expressed in its canonical basis can be analytically materialized by \begin{equation} \Delta_k \longrightarrow 0 \,\, \forall k \,\,\, \text{subject to} \,\,\, \langle \Phi | A | \Phi \rangle = \text{A} \label{limit} \, . \end{equation} Applying Eq.~\eqref{limit} in a meaningful fashion leads to distinguishing three categories of canonical single-particle states, i.e., states characterized by \begin{enumerate} \item $\epsilon_k - \lambda <0$, casually denoted as ``hole states'', \item $\epsilon_k - \lambda =0$, casually denoted as ``valence states'', \item $\epsilon_k - \lambda >0$, casually denoted as ``particle states'', \end{enumerate} when reaching the limit. Valence states, which can only concern one shell, must be explicitly considered in order to satisfy Eq.~\eqref{limit} under the assumption that $0<o_v\leq 1$.
When driving the system towards the limit, the canonical basis changes with $\delta$ such that not only the chemical potential $\lambda$ but also the location of the shells evolve. Under the hypothesis that the spatial symmetry and the associated degeneracies remain unchanged along the way, one cannot exclude (i) the occurrence of shell crossings or (ii) the occurrence/lifting of an accidental degeneracy. Furthermore, if the numerical code allows the system to break spherical symmetry, one also cannot exclude a change of spatial symmetry along the constraining path and, thus, in the limit. As a result, one cannot exclude that the effective degeneracy of the valence shell, and thus the closed- or open-shell character of the associated nucleus, may be different in the zero-pairing limit and in, e.g., the unconstrained calculation. In any case, the hole, valence or particle character of the shells, as well as the associated naive filling, relevant to the present analysis are the ones reached {\it in the zero-pairing limit}. \subsection{Closed-(sub)shell system} \label{closedshell} For reference, let us first study closed-(sub)shell systems. In this case, one can arbitrarily define the valence shell to be the last fully occupied ($o_v=1$) or the first fully empty ($o_v=0$) shell when proceeding to the naive filling. While the first choice is presently made here, general formulae derived later can be used with both conventions. Whenever $SU(2)$ symmetry is self-consistently satisfied, closed-(sub)shell systems are obtained each time a spherical shell is fully occupied when proceeding to the naive filling, e.g., in the semi-magic $^{22}$O for which the neutron 1d$_{5/2}$ shell is fully occupied. Consequently, only a small subset of semi-magic nuclei does (potentially) belong to this category. When relaxing $SU(2)$ rotational symmetry, all even-even nuclei that are not already of spherical doubly-closed-(sub)shell nature tend to deform in the zero-pairing limit to acquire such a character as a result of the residual two-fold Kramers degeneracy and thus display $o_v=1$. In this situation, no specific surprise occurs. Equation~\eqref{partnumbconstr} can be trivially fulfilled in the zero-pairing limit by fully occupying (emptying) the A lowest (remaining) canonical single-particle states such that \begin{equation} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_k \to 0}} v^{2}_k = 1 (0) \, . \label{limit2} \end{equation} Equation~\eqref{limit2} stipulates that no genuine valence shell emerges through the zero-pairing limit and that all single-particle states converge towards either a hole or a particle state. Consequently, the HFB state itself converges trivially to the closed-(sub)shell Slater determinant \begin{align} | \bar{\Phi} \rangle & \equiv \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_k \to 0 \,\, \forall k}} | \Phi \rangle \nonumber \\ &= \prod_{h=1}^{A/2} a^{\dagger}_h a^{\dagger}_{\bar{h}}| 0 \rangle \, , \label{HFBstatelimit0} \end{align} which is an eigenstate of $A$ with eigenvalue A and zero particle-number variance \begin{align} \text{VAR}_{| \bar{ \Phi} \rangle} &\equiv \langle \bar{ \Phi} | (A - \langle \bar{\Phi} |A | \bar{\Phi} \rangle)^2| \bar{ \Phi} \rangle \nonumber \\ &= \langle \bar{ \Phi} | A^2 | \bar{\Phi} \rangle - \langle \bar{ \Phi} | A | \bar{ \Phi} \rangle^2 \nonumber \\ &= 0 \, .
\end{align} Correspondingly, the HFB energy (Eq.~\eqref{HFBenergy}) takes the standard mean-field, i.e., HF, form associated with a Slater determinant \begin{align} \bar{E} & = \sum_{k=1}^{\text{A}} t_{kk} + \frac{1}{2} \sum_{kl=1}^{\text{A}} \overline{v}_{klkl} \, , \label{HFBenergylimit1} \end{align} where the Bogoliubov term, i.e., the pairing energy, is strictly zero. \subsection{Open-shell system ($o_v=1/2$)} \label{specific} Let us continue with the simplest non-trivial case where the naive valence fractional occupation is $o_v=1/2$. The next section will address the general case. Dealing with even particle numbers and spherical symmetry, $o_v=1/2$ corresponds to a half-filled valence shell, e.g., to 2 nucleons sitting in a $\text{p}_{3/2}$ or $\text{d}_{3/2}$ valence shell, 4 nucleons sitting in an $\text{f}_{7/2}$ or $\text{g}_{7/2}$ valence shell, etc. This situation is also encountered when one nucleon sits in a doubly-degenerate valence shell. This occurs for odd-even nuclei whenever enforcing spherical symmetry and whenever an s shell lies at the Fermi energy as well as for any odd-even deformed system whose even-number parity vacuum is characterized by canonical single-particle states displaying two-fold Kramers degeneracy\footnote{These Bogoliubov states correspond to the zero-pairing limit of the fully-paired even number-parity vacua appropriate to odd systems discussed at length in Ref.~\cite{Duguet:2001gr}.}. To conduct the present discussion, let us consider the simplest situation of two nucleons eventually sitting in a $\text{d}_{3/2}$ valence shell. Based on a text-book spherical single-particle spectrum, this situation is expected to be encountered in, e.g., $^{26}$O. It corresponds to having $a_v=2$ and $d_v=4$. The $p_v=2$ pairs of conjugate valence states, generically denoted as $(v,\bar{v})$, are presently specified as $(v_1,v_{\bar{1}})$ and $(v_2,v_{\bar{2}})$. The zero-pairing limit of hole and particle states works as in Eq.~\eqref{limit2} with $\text{A}-2$, i.e., 24, particles eventually occupying hole states. For valence states, the situation is more subtle. To reach the average occupation $o_v=1/2$ in the limit in a controlled fashion, one must assume\footnote{This property is validated numerically in Sec.~\ref{Numresults}.} that \begin{equation} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_{v} \to 0}} \left|\frac{\epsilon_{v} - \lambda}{\Delta_{v}}\right| = 0 \, , \label{limit3} \end{equation} i.e., that $(\epsilon_{v} - \lambda)$ goes to 0 faster than $\Delta_{v}$. With Eq.~\eqref{limit3} at hand, one indeed obtains the required limit for the average single-particle valence state occupation \begin{equation} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_{v} \to 0}} v^{2}_{v} = \frac{1}{2} = o_v \, . \label{limit4} \end{equation} Together with Eq.~\eqref{limit2} for 24 particles, Eq.~\eqref{limit4} allows one to fulfill Eq.~\eqref{partnumbconstr} for $\text{A}=26$ in the zero-pairing limit.
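The hypothesis of Eq.~\eqref{limit3} is easily probed numerically: taking, for illustration, $\epsilon_v - \lambda \propto \Delta_v^2$ (the power 2 is our arbitrary choice of a faster-vanishing behavior) indeed drives $v^2_v$ to $1/2$ as the gap vanishes, in agreement with Eq.~\eqref{limit4}.

\begin{verbatim}
import numpy as np

for gap in [1.0, 1e-2, 1e-4, 1e-8]:
    eps_minus_lam = gap**2     # (eps_v - lambda) -> 0 faster than Delta_v
    v2 = 0.5 * (1 - eps_minus_lam / np.sqrt(eps_minus_lam**2 + gap**2))
    print(gap, v2)             # v2 -> 1/2 = o_v, cf. Eq. (limit4)
\end{verbatim}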
With this carefully performed limit, one obtains \begin{align} | \bar{\Phi} \rangle & \equiv \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_k \to 0 \,\, \forall k}} | \Phi \rangle \label{HFBstatelimit} \\ &= \frac{1}{2}(1 + a^{\dagger}_{v_1} a^{\dagger}_{v_{\bar{1}}}) (1 + a^{\dagger}_{v_2} a^{\dagger}_{v_{\bar{2}}}) \prod_{h=1}^{(A-2)/2} a^{\dagger}_h a^{\dagger}_{\bar{h}}| 0 \rangle \, , \nonumber \end{align} which is a linear combination of four Slater determinants, one of which has $\text{A}-2=24$ particles (0 particles in the valence shell), two of which have $\text{A}=26$ particles (2 particles in the valence shell) and one that has $\text{A}+2=28$ particles (4 particles in the valence shell). The fact that the limit state $| \bar{ \Phi} \rangle$ can be written as the sum of a {\it finite} number (different from 1) of Slater determinants is remarkable given that $| \Phi \rangle$ can only be expanded over an {\it infinite} sum of Slater determinants as soon as one moves away from the zero-pairing limit. Given the form of $| \bar{ \Phi} \rangle$ in Eq.~\eqref{HFBstatelimit}, it can easily be checked that the constraint defined through Eq.~\eqref{partnumbconstr} is indeed satisfied \begin{align} \langle \bar{ \Phi} | A | \bar{\Phi} \rangle &= \frac{1}{4}\left[(\text{A}-2) + 2\text{A} + (\text{A}+2) \right] \nonumber \\ &= \text{A} \, , \end{align} even though $| \bar{ \Phi} \rangle$ is {\it not} an eigenstate of $A$ in spite of being obtained through the zero-pairing limit. Accordingly, the particle-number variance of $| \bar{ \Phi} \rangle$ is given by \begin{align} \text{VAR}_{| \bar{ \Phi} \rangle} &= \frac{1}{4}\left[(\text{A}-2)^2 + 2\text{A}^2 + (\text{A}+2)^2\right] -\text{A}^2 \nonumber \\ &= 2 \, , \end{align} and is thus different from zero. It is worth noting that $| \bar{ \Phi} \rangle$ does {\it not} correspond to the so-called equal-filling approximation (EFA) that is rigorously formulated on the basis of a mixed-state density matrix operator~\cite{PerezMartin:2008yv} in the sense of statistical quantum mechanics. The limit state $| \bar{ \Phi} \rangle$ obtained in Eq.~\eqref{HFBstatelimit} is a {\it pure} state obtained through the straight resolution of the HFB eigenvalue problem. Its normal density matrix is the same as in the EFA, i.e., $v^2_v = 1/2$, but its anomalous density matrix is also non-zero in the valence shell, i.e., $\kappa_{v\bar{v}}=u_v v_v = 1/2$. As a result, the pairing (i.e., Bogoliubov) contribution to the total energy (Eq.~\eqref{HFBenergy}) \begin{align} \bar{E}^{\text{B}}_{| \bar{\Phi} \rangle} & = \frac{1}{4}(\overline{v}_{v_{1}v_{\bar{1}}v_{1}v_{\bar{1}}} +\overline{v}_{v_{1}v_{\bar{1}}v_{2}v_{\bar{2}}} \nonumber \\ &\hspace{0.7cm} +\overline{v}_{v_{2}v_{\bar{2}}v_{1}v_{\bar{1}}} +\overline{v}_{v_{2}v_{\bar{2}}v_{2}v_{\bar{2}}}) \label{HFBenergylimit2} \end{align} is {\it not} zero in the limit state. While $| \bar{ \Phi} \rangle$ is indeed obtained through a zero-pairing procedure in the sense that the pairing field is strictly driven to zero in the HFB eigenvalue problem (Eq.~\eqref{eq:hfb_equation}), the pairing energy of the limit state is not zero due to the remaining non-zero anomalous density matrix within the valence shell. Eventually, the above analysis provides several key insights.
When driving the spherical HFB state associated with $^{26}$O towards its zero-pairing limit $| \bar{ \Phi} \rangle$, one observes that \begin{itemize} \item the limit state $| \bar{ \Phi} \rangle$ carrying A particles on average is mathematically well-defined and takes the form of a linear combination of a {\it finite number}, i.e., 4, of Slater determinants that do not all carry the physical number of particles. As a result, the limit state is not an eigenstate of the particle-number operator. \item There exists a non-zero lower bound\footnote{The fact that it is indeed a lower bound is proven in App.~\ref{APPvariance}.} to the particle-number variance that can actually be achieved within the manifold of spherical HFB states constrained to carry 26 nucleons (18 neutrons), i.e., \begin{equation} \text{VAR}_{| \Phi \rangle} \geq \text{VAR}_{| \bar{ \Phi} \rangle}=2 \ . \end{equation} \item Consistently with this non-zero particle-number variance, the limit state carries a non-zero pairing energy even though it is obtained by diagonalizing an HFB matrix in which the pairing field is vanishing. \item As a consequence of Eq.~\eqref{limit4}, the off-diagonal norm overlap associated with $| \bar{ \Phi} \rangle$ (Eq.~\eqref{normoverlap}) is 0 at $\varphi=\pi/2$. While the particle-number projection (PNP) on the value $\text{A}$ is well-defined given that a component with the targeted number of particles does enter $| \bar{ \Phi} \rangle$ according to Eq.~\eqref{HFBstatelimit}, it requires numerical care given that for $d_v=4$ the computation of two-body observables requires a fine-tuned treatment of a zero-over-zero ratio. \item By virtue of Eqs.~\eqref{limit} and~\eqref{limit3}, the lowest quasi-particle energy solution of Eq.~\eqref{eq:hfb_equation} fulfills\footnote{The analytical proof of Eq.~\eqref{limit6} relies on using the BCS expression $E_k=\sqrt{(\epsilon_k - \lambda)^2 + \Delta_k^2}$ that is known to be a good approximation of HFB quasi-particle energies, except for s states near the threshold~\cite{doba96a}.} \begin{align} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_k \to 0 \,\, \forall k}} \text{Min}_\mu E_{\mu} &= 0 \, , \label{limit6} \end{align} such that Eq.~\eqref{fermigap} does not apply anymore in the zero-pairing limit. Thus, the BMBPT expansion based on Option 1 becomes ill-defined in the limit given that zero energy denominators enter; even if the reference energy $\bar{E}_{| \Phi(0) \rangle}$ is well-defined and contains a non-zero Bogoliubov contribution, the second-order correction $E^{(2)}_{| \Phi(0) ; \{E_{\mu}(0)\}\rangle}$ (Eq.~\eqref{BMBPTcorrection2}) diverges. Obviously, no such problem arises in closed-shell nuclei ($o_v=1$) given that BMBPT safely reduces to standard HF-MBPT~\cite{Tichai:2018mll,Arthuis:2018yoo} in this case, with the lowest two-quasi-particle excitation converging towards the non-zero particle-hole gap at the Fermi energy. \end{itemize} \subsection{General case} \label{general} Let us now consider the general open-shell case characterized by $0<o_v<1$ and $o_v \neq 1/2$\footnote{As will be shown below, the case $o_v = 1/2$ must be treated separately such that it was not only a question of convenience to cover it first in Sec.~\ref{specific}.}. The valence shell gathers $p_v=d_v/2$ pairs of conjugated states generically denoted as $(v,\bar{v})$ and presently specified as $(v_1,v_{\bar{1}}), \ldots, (v_{p_v},v_{{\bar{p}_v}})$.
The zero-pairing limit of hole and particle states works as before such that $\text{A}-a_v$ particles eventually occupy hole states. As for the valence shell, one needs to fit $a_v$ particles in $d_v$ degenerate states characterized by identical occupations $v^{2}_{v_k}\equiv v^{2}_{v}$. To do so, the identity \begin{align} a_v &= 2\sum_{k=1}^{p_v} v_{v_k}^2 = d_v \, v^{2}_{v} \, , \label{occuplimit} \end{align} must be fulfilled in the zero-pairing limit in order to ensure that \begin{align} v^{2}_{v} &= o_v \, . \label{occuplimit2} \end{align} Satisfying Eq.~\eqref{occuplimit2} now requires that $\epsilon_{v} - \lambda$ and $\Delta_{v}$ go to 0 in a strictly proportional fashion, i.e., that \begin{equation} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_{v} \to 0}} \left|\frac{\Delta_{v}}{\epsilon_{v} - \lambda}\right| = \gamma \, , \label{limit7} \end{equation} with $\gamma$ a non-zero real number. In fact, this property is indeed consistent with Eq.~\eqref{occuplimit2} under the condition that \begin{equation} \gamma = \frac{2\sqrt{o_v(1-o_v)}}{|1- 2 o_v|} \, . \label{propconstant} \end{equation} One observes that $\gamma$ is ill-defined for $o_v=1/2$, which reflects the fact that Eq.~\eqref{limit7} is inappropriate in that case and must be replaced by Eq.~\eqref{limit3}, i.e., when $o_v=1/2$ one must rather make the hypothesis that \begin{equation} \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_{v} \to 0}} \left|\frac{\Delta^n_{v}}{\epsilon_{v} - \lambda}\right| = \gamma' \, , \label{limit8} \end{equation} for some real number $n>1$. With this at hand, one eventually obtains \begin{align} | \bar{\Phi} \rangle & \equiv \lim_{\underset{\langle A \rangle = \text{A}}{\Delta_k \to 0 \,\, \forall k}} | \Phi \rangle \label{HFBstatelimit2} \\ &= \prod_{k=1}^{p_v} (\sqrt{1-o_v} + \sqrt{o_v} \, a^{\dagger}_{v_{k}} a^{\dagger}_{v_{\bar{k}}}) \prod_{h=1}^{(A-a_v)/2} a^{\dagger}_h a^{\dagger}_{\bar{h}}| 0 \rangle \, . \nonumber \end{align} Thus, the HFB state carrying A particles on average is well-defined in the zero-pairing limit and takes the form of a linear combination of a {\it finite} number, i.e., $2^{p_v}$, of Slater determinants. Again, the fact that $| \bar{ \Phi} \rangle$ is a finite sum of Slater determinants is remarkable. Among the $2^{p_v}$ Slater determinants, $\binom{b}{p_v}$ carry $B(b)=\text{A}-a_v+2b$ particles\footnote{The number of particles carried by the Slater determinants thus ranges from $\text{A}-a_v$ to $\text{A}+ (d_v-a_v)$. The total number of summed Slater determinants is indeed $\sum_{b=0}^{p_v} \binom{b}{p_v} = 2^{p_v}$.}, with the integer $b$ ranging from $0$ to $p_v$. It is easy to see from Eq.~\eqref{HFBstatelimit2} that the weight of each Slater determinant carrying $B(b)$ particles is equal to $o^{b}_v(1-o_v)^{p_v-b}$. Given the form of $| \bar{ \Phi} \rangle$, it can first be checked\footnote{Identities~\eqref{binomial1} and~\eqref{binomial2} provided in App.~\ref{formulae} are employed to derive Eq.~\eqref{averGEN} while the additional identity~\eqref{binomial3} is necessary to derive Eq.~\eqref{varGEN}.
Similar analytical results could be derived for higher moments of $A$ at the price of considering higher derivatives of Newton's binomial formula.} that the constraint defined through Eq.~\eqref{partnumbconstr} is indeed satisfied in the zero-pairing limit \begin{align} \langle \bar{ \Phi} | A | \bar{\Phi} \rangle &= \sum_{b=0}^{p_v} \binom{b}{p_v} o^{b}_v(1-o_v)^{p_v-b} (\text{A}-a_v+2b) \nonumber \\ &= (\text{A}-a_v)\sum_{b=0}^{p_v} \binom{b}{p_v} o^{b}_v(1-o_v)^{p_v-b} \nonumber \\ &\phantom{=} + 2 o_v \sum_{b=1}^{p_v} \binom{b}{p_v} b \, o^{b-1}_v(1-o_v)^{p_v-b}\nonumber \\ &= \text{A}-a_v + 2 o_v p_v \nonumber \\ &= \text{A} \, , \label{averGEN} \end{align} where $o_v=a_v/d_v$ and $p_v=d_v/2$ were eventually used. Similarly, the particle-number variance is obtained after a long but straightforward calculation as \begin{align} \text{VAR}_{| \bar{ \Phi} \rangle} &= \sum_{b=0}^{p_v} \binom{b}{p_v} o^{b}_v(1-o_v)^{p_v-b} (\text{A}-a_v+2b)^2 \nonumber \\ &\phantom{=} - \text{A}^2 \nonumber \\ &= 2 a_v (1-o_v)\, , \label{varGEN} \end{align} and constitutes a lower bound as proven in App.~\ref{APPvariance}. Last but not least, a non-zero pairing contribution \begin{align} \bar{E}^{\text{B}}_{| \bar{ \Phi} \rangle} & = o_v(1-o_v)\sum_{kl=1}^{p_v} \overline{v}_{v_{k}v_{\bar{k}}v_{l}v_{\bar{l}}} \, , \label{HFBenergylimit3} \end{align} to the total HFB energy is once again obtained in the limit given that the anomalous density matrix is non-zero within the valence shell and equal, for each canonical pair $(v,\bar{v})$, to $\kappa_{v\bar{v}}= u_v v_v = \sqrt{o_v(1-o_v)}$. From a general perspective, the present analysis demonstrates that HFB theory does {\it not} reduce to HF even when the pairing field is driven to zero in the HFB Hamiltonian matrix. \subsection{Illustrative examples} \label{examples} Given the above analysis, typical examples can be discussed on the basis that the expected, i.e., {\it text-book}, sequence of shells is indeed obtained in each case in the zero-pairing limit. This assumption can only be checked \textit{a posteriori} from an actual numerical calculation associated with a code characterized by a certain set of constrained/relaxed symmetries. This important aspect will be scrutinized in Sec.~\ref{Numresults}. Relying for now on the text-book sequence of shells, the following results can reasonably be anticipated. \begin{itemize} \item The first typical example, already discussed in Sec.~\ref{specific}, is $^{26}$O. Based on a text-book spherical canonical spectrum, this semi-magic nucleus corresponds to having $a_v=2$ and $d_v=4$, and thus $o_v=1/2$. Inserting these numbers into Eq.~\eqref{varGEN}, the minimal particle-number variance reached in the zero-pairing limit is indeed $\text{VAR}_{| \bar{ \Phi} \rangle} = 2$. \item A similar but slightly different case relates to the semi-magic $^{44}$Ca nucleus whose naive filling based on a text-book spherical canonical spectrum corresponds to putting 4 particles in the $\text{f}_{7/2}$ shell, i.e., $a_v=4$ and $d_v=8$, thus also leading to $o_v=1/2$. As a result, the minimal particle-number variance obtained in the zero-pairing limit is $\text{VAR}_{| \bar{ \Phi} \rangle} = 4$. \item The text-book naive filling of the semi-magic $^{18}$O nucleus corresponds to putting 2 particles in the spherical $\text{d}_{5/2}$ shell, i.e., $a_v=2$ and $d_v=6$, thus leading to $o_v=1/3$. This eventually provides the zero-pairing particle-number variance $\text{VAR}_{| \bar{ \Phi} \rangle} = 8/3$.
\item One can focus next on the semi-magic $^{22}$O nucleus whose naive filling corresponds to putting 6 particles in the same $\text{d}_{5/2}$ shell, i.e., $a_v=6$ and $d_v=6$, thus leading to $o_v=1$ and a zero minimal variance $\text{VAR}_{| \bar{ \Phi} \rangle} = 0$ as expected for a spherical closed-subshell system. \item Considering an even-even doubly open-shell nucleus, e.g., $^{240}$Pu, the unconstrained HFB minimum is typically obtained for a deformed configuration, which is all the more true when the pairing is decreased. Given that the associated canonical spectrum only retains Kramers two-fold degeneracy, the zero-pairing limit state necessarily takes the form of a deformed closed-shell Slater determinant with a zero particle-number variance $\text{VAR}_{| \bar{ \Phi} \rangle} = 0$. \item Considering the odd-even neighbor, i.e., $^{241}$Pu, the naive filling corresponds to putting 1 particle in a doubly-degenerate valence shell, i.e., $a_v=1$ and $d_v=2$, thus leading to $o_v=1/2$ and to a non-trivial HFB state with $\text{VAR}_{| \bar{ \Phi} \rangle} = 1$. \end{itemize} \begin{figure*} \scalebox{0.86}{\includegraphics{VS_exp_loglog.pdf}} \caption{(Color online) Results of constrained HFB calculations of $^{18}$O (left column), $^{26}$O (center column) and $^{44}$Ca (right column). Top row: log-log plot of the valence shell canonical pairing gap $|\Delta_{v}|$ against $|\epsilon_{v} - \lambda|$. The slope $1/n$ of the curve in the limit $\Delta_{v} \rightarrow 0$ (see Eqs.~\eqref{limit7}-\eqref{limit8}) is extracted through a numerical fit. Bottom row: $|\Delta^n_{k}/(\epsilon_{k} - \lambda)|$ as a function of $|\Delta_{k}|$ for the valence shell and for the particle (hole) shell just above (below). The power $n$ employed corresponds to the value extracted in the associated top panel. \label{fig:limitcharacterization}} \end{figure*} \begin{figure*} \includegraphics[width=1.0\textwidth]{WeightDelta.pdf} \caption{Weights of the Slater determinants associated with a given particle number (solid lines) making up the constrained HFB state as a function of $\delta$. The weights are obtained via PNP after variation calculations. The numerical results are compared to the predicted weights in the zero-pairing limit (dashed curves). \label{weights}} \end{figure*} \section{Applications} \label{Numresults} In this section, results obtained from constrained HFB calculations, and from BMBPT calculations built on top of them, are presented. \subsection{Numerical set up} \label{sec:num} The computations are performed using a realistic nuclear Hamiltonian $H$ derived from chiral effective field theory ($\chi$EFT). The Hamiltonian contains a two-nucleon (2N) interaction derived at next-to-next-to-next-to-leading order (N3LO) in the chiral expansion~\cite{Hamil} and evolved down to a lower resolution scale ($\alpha = 0.08$ fm$^4$) via a similarity renormalization group (SRG) transformation~\cite{SRGsoft}. \begin{figure*} \includegraphics[width=0.8\textwidth]{Decomp.pdf} \caption{Distribution of weights of the good particle-number components of the constrained HFB state in $^{18}$O (left panel) and $^{22}$O (right panel) for $\delta=0,2,4$. \label{weights2}} \end{figure*} Two HFB solvers dedicated to \textit{ab initio} calculations, i.e., capable of handling 2N and 3N interactions (either in full or within the normal-ordered two-body approximation \cite{Roth:2011vt}), are presently employed. The first code is restricted to spherical symmetry and is based on the actual diagonalization of the HFB matrix~\cite{Hergert:2009nu}.
The second code, named TAURUS$_{\text{vap}}$, solves HFB or variation after particle-number projection (VAPNP) equations for symmetry-unrestricted (real) Bogoliubov quasi-particle states~\cite{bally20a}, thus allowing for spatially deformed solutions. Employing a gradient method, the code can actually solve the variational equations under a large variety of constraints and was recently used to perform first practical calculations~\cite{Bally:2019miu}. \begin{figure*} \includegraphics[width=1.0\textwidth]{Delta_Variance.pdf} \caption{Neutron-number variance of the constrained HFB solution for $^{18}$O (left column), $^{22}$O (center-left column), $^{26}$O (center-right column) and $^{44}$Ca (right column) as a function of the constraining parameter $\delta$. In each case, the grey zone materializes the interval of neutron-number variance values that cannot be reached within the manifold of appropriate HFB solutions. The upper limit of the grey zone denotes the predicted value in the zero-pairing limit (Eq.~\eqref{varGEN}) that is provided on each panel as $\text{VAR}_{\text{min}}$. \label{fig:variancevsdelta} } \end{figure*} In both codes, one-, two- and three-body operators are represented in the eigenbasis of the spherical harmonic oscillator (SHO) Hamiltonian. In the present calculations, the one-body basis is characterized by an SHO frequency $\hbar \omega = 20$ MeV and includes single-particle states up to $e_{\text{max}} \equiv (2 n + l)_{\text{max}} = 4$ whereas the two-body basis is built from its tensor product. While realistic \emph{ab initio} calculations typically require $e_{\text{max}}=12$ or 14 in mid-mass nuclei to reach convergence with respect to the basis set, calculations performed in a reduced model space are sufficient to investigate the points of present interest. \subsection{Constrained Hartree-Fock-Bogoliubov} \label{resultsHFB} \subsubsection{Characterization of the limit} For the zero-pairing limit to be analytically meaningful, canonical matrix elements of the pairing field have been predicted in Sec.~\ref{zeropairingSec} to be driven to zero in a specific way when the constraining parameter $\delta$ goes itself to zero. In this context, the half-filled valence-shell case ($o_v=1/2$) had to be explicitly distinguished, i.e., see Eq.~\eqref{limit8} versus Eq.~\eqref{limit7}. In Fig.~\ref{fig:limitcharacterization}, these predictions are tested via numerical calculations of three representative semi-magic nuclei, i.e., $^{18,26}$O and $^{44}$Ca. Under the assumption that they remain spherical all the way down to the zero-pairing limit, the expected text-book shell structures stipulate that these systems all qualify as open-shell nuclei as discussed in Sec.~\ref{examples}. The top panels of Fig.~\ref{fig:limitcharacterization} display the valence-shell canonical pairing gap $|\Delta_{v}|$ against $|\epsilon_{v} - \lambda|$ in log-log scale. The slope $1/n$ of the curve in the limit $\Delta_{v} \rightarrow 0$ is extracted through a numerical fit. In the general case, i.e., $o_v\neq 1/2$, Eq.~\eqref{limit7} stipulates that both quantities must go to zero in a strictly proportional fashion, i.e., $n=1$. This is indeed what is obtained for $^{18}$O ($o_v=1/3$), thus validating the theoretical prediction. Moving to $^{26}$O and $^{44}$Ca characterized by a half-filled valence shell, the extracted slope is such that $n>1$, also corroborating the prediction.
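For orientation, the slope extraction employed in the top panels amounts to a straight-line fit in log-log space. A minimal Python sketch with synthetic data obeying the expected scaling (not the actual HFB output) reads:
\begin{verbatim}
import numpy as np

# Synthetic valence-shell data mimicking the half-filled case,
# |eps_v - lambda| ~ |Delta_v|^n with an assumed n = 2
n_true = 2.0
delta_v = np.logspace(-6, -1, 50)
eps_minus_lambda = 0.3 * delta_v**n_true

# Slope of log|Delta_v| against log|eps_v - lambda| estimates 1/n
slope, _ = np.polyfit(np.log(eps_minus_lambda), np.log(delta_v), 1)
print(1.0 / slope)  # recovers n = 2
\end{verbatim}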
Based on the numerical extraction of the parameter $n$ from the top panels, the bottom panels of Fig.~\ref{fig:limitcharacterization} display the ratio $|\Delta^n_{k}/(\epsilon_{k} - \lambda)|$ as a function of $|\Delta_{k}|$ for the valence shell and for the particle (hole) shell just above (below). The power $n$ employed corresponds to the value extracted in the associated top panel. As predicted theoretically, this ratio behaves characteristically in the zero-pairing limit, i.e., it goes to zero for all shells except for the valence shell of open-shell nuclei where it goes to a non-zero value. This behavior is indeed numerically obtained in the three cases. Moreover, the non-zero limit $\gamma$ was predicted analytically for $o_v\neq 1/2$ (Eq.~\eqref{propconstant}) and is indeed accurately obtained numerically for $^{18}$O. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{spec_HFB.pdf} \caption{(Color online) Results of constrained HFB calculations of $^{18}$O (left column), $^{22}$O (center-left column), $^{26}$O (center-right column) and $^{44}$Ca (right column) as a function of the neutron-number dispersion. Top row: pairing gaps of neutron canonical states around the Fermi energy. Middle row: average occupation of neutron canonical states around the Fermi energy. Full (red) lines relate to the valence shell whereas the (blue) dotted/dashed-dotted lines relate to the highest hole/lowest particle shells. Bottom row: lowest neutron eigenvalues (i.e., quasi-particle energies) of the HFB matrix. The right-hand limit of the grey zone stipulates the theoretical lower bound of the neutron-number variance accessible within the manifold of appropriate Bogoliubov states that is reached in the zero-pairing limit (Eq.~\eqref{varGEN}). Horizontal full lines in the center row denote the theoretical value of the valence shell average occupation $o_v$ reached in the zero-pairing limit. Vertical dashed lines characterize the neutron-number dispersion of the unconstrained calculation. \label{fig:limitdetails}} \end{figure*} \subsubsection{Characterization of the limit state} Now that the analytical premises of the zero-pairing limit have been validated numerically, its consequences on the structure of the HFB solution can be investigated. Based on Eq.~\eqref{HFBstatelimit2}, the limit state $| \bar{\Phi} \rangle$ is predicted to display a specific structure, i.e., to be a linear combination of $2^{p_v}$ Slater determinants. Among them, $\binom{b}{p_v}$ carry $B(b)=\text{A}-a_v+2b$ particles, with $b$ ranging from $0$ to $p_v$, each entering the sum with the weight $o^{b}_v(1-o_v)^{p_v-b}$. This prediction is put to the test in Fig.~\ref{weights} where the weights of each component obtained via particle-number projection after variation (PNPAV) calculations are displayed as a function of $\delta$ for $^{18,22,26}$O and $^{44}$Ca. The numerical weights are compared to the zero-pairing limit prediction, i.e., $\binom{b}{p_v} \, o^{b}_v(1-o_v)^{p_v-b}$. As visible from the four panels, the HFB state is a linear combination of an infinite number\footnote{In practice, this infinity is of course made finite by the truncation of the one-body Hilbert space to a finite dimension $n_{\text{dim}}$. Under this truncation, the HFB state mixes Slater determinants spanning the full range of possible (even) particle numbers, i.e., from $0$ to $n_{\text{dim}}$.} of Slater determinants as long as $\delta \neq 0$, albeit with weights quickly decreasing for components moving away from A.
In the zero-pairing limit, a qualitatively different structure is obtained, i.e., the linear combination does collapse to $2^{p_v}$ states carrying the predicted neutron numbers, i.e., from 8 to 14 in $^{18}$O, only 14 in $^{22}$O, from 16 to 20 in $^{26}$O and from 20 to 28 in $^{44}$Ca. Furthermore, the predicted weights are indeed exactly recovered in the limit. It must be noted that the single Slater determinant in $^{22}$O is actually reached for a non-zero value of $\delta$. This feature relates to the well-known {\it BCS collapse} and reflects the point at which the pairing strength is too weak to sustain a non-zero pairing field against the finite single-particle gap at the Fermi energy. As visible from Fig.~\ref{weights}, no such pairing collapse occurs in open-shell nuclei. Eventually, the numerical results fully validate the non-trivial structure of the HFB state predicted to be obtained in the zero-pairing limit. To complement the picture given in Fig.~\ref{weights}, the weights obtained from the PNPAV calculation are displayed differently in Fig.~\ref{weights2} for $^{18,22,26}$O, i.e., the distribution of weights is shown as a function of the particle number for $\delta = 0, 2, 4$. For strong enough pairing, i.e., $\delta = 4$, one recovers the text-book distribution following quite closely a Gaussian distribution in all three cases~\cite{RiSc80}. For a moderate pairing regime, i.e., $\delta = 2$, the distribution may be distorted\footnote{Notice that unconstrained calculations ($\delta=1$) based on the presently used 2N interaction and the omission of any 3N interaction provide too little pairing compared to empirical data.}, e.g., in $^{18}$O. Eventually, the distribution obtained in the zero-pairing limit takes the usual/unusual form for closed-(sub)shell/open-shell isotopes predicted in Sec.~\ref{zeropairingSec}. While the HFB state contains a single non-zero weight associated with the Slater determinant limit in $^{22}$O, the distribution extends over a {\it finite} number of isotopes for open-shell $^{18,26}$O. The distribution over this finite interval is symmetric (asymmetric) in $^{26}$O ($^{18}$O) as a testimony to the naive valence-shell occupation $o_v=1/2$ ($o_v=1/3$). \begin{figure} \includegraphics{PES_HFB_scal.pdf} \caption{(Color online) Results of constrained HFB calculations of $^{18}$O (blue curves) and $^{22}$O (red curves) as a function of the constraining parameter $\delta$ in the HFB matrix. Top panel: total binding energy rescaled to the zero-pairing limit. Bottom panel: contribution of the Bogoliubov term, i.e., pairing energy, to the total binding energy. \label{fig:collapseVSnocollapse} } \end{figure} \subsubsection{Particle-number variance} With the aim of further characterizing the zero-pairing limit HFB state, the neutron-number variance is displayed in Fig.~\ref{fig:variancevsdelta} as a function of the constraining parameter $\delta$ for $^{18,22,26}$O and $^{44}$Ca. As expected, the closed-subshell Slater determinant describing $^{22}$O in this limit exhibits a zero neutron-number variance. Contrarily, a non-trivial HFB state displaying a non-zero neutron-number variance $\text{VAR}_{\text{min}}\equiv \text{VAR}_{| \bar{\Phi} \rangle}$ characterizes the three open-shell nuclei. As anticipated, $\text{VAR}_{\text{min}}$ acts as a minimum along the constraining path whose numerical value corresponds in all cases to the one predicted through Eq.~\eqref{varGEN}.
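These collapsed distributions and limiting variances can be cross-checked against the analytical weights of Eq.~\eqref{HFBstatelimit2}. The following minimal Python sketch (a hypothetical illustration, independent of the PNPAV codes) evaluates the predicted weights together with Eqs.~\eqref{averGEN} and~\eqref{varGEN} for the four nuclei:
\begin{verbatim}
from math import comb

def zero_pairing_stats(A, a_v, d_v):
    # Limit state of Eq. (HFBstatelimit2): a component with
    # A - a_v + 2b particles has weight comb(p_v, b) o^b (1-o)^(p_v-b)
    o_v, p_v = a_v / d_v, d_v // 2
    numbers = [A - a_v + 2 * b for b in range(p_v + 1)]
    weights = [comb(p_v, b) * o_v**b * (1 - o_v)**(p_v - b)
               for b in range(p_v + 1)]
    mean = sum(w * n for w, n in zip(weights, numbers))
    var = sum(w * n**2 for w, n in zip(weights, numbers)) - mean**2
    return mean, var

print(zero_pairing_stats(18, 2, 6))  # (18.0, 2.666...) -> VAR = 8/3
print(zero_pairing_stats(22, 6, 6))  # (22.0, 0.0)
print(zero_pairing_stats(26, 2, 4))  # (26.0, 2.0)
print(zero_pairing_stats(44, 4, 8))  # (44.0, 4.0)
\end{verbatim}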
One must once again note that the zero particle-number variance is reached for a non-zero value of $\delta$ in $^{22}$O, whereas no such pairing collapse occurs in open-shell nuclei. \begin{figure} \includegraphics{CaShell.pdf} \caption{(Color online) Rescaled pairing energy (see text) in the zero-pairing limit for $^{40-48}$Ca compared to the analytical prediction $o_v(1-o_v)$. To compute the latter, $o_v$ is taken to be the value obtained for each nucleus on the basis that the neutron f$_{7/2}$ shell indeed acts as the valence shell in the zero-pairing limit. \label{fig:pairingenergy} } \end{figure} \subsubsection{Spectroscopic quantities} The HFB state $| \bar{\Phi} \rangle$ reached in the zero-pairing limit is further scrutinized in Fig.~\ref{fig:limitdetails} where several key quantities are displayed as a function of the neutron-number variance for $^{18,22,26}$O and $^{44}$Ca. Whereas canonical pairing gaps near the Fermi energy are visible in the top panels, associated canonical single-particle occupations are shown in the middle panels. While pairing gaps are driven to zero when the neutron-number variance reaches $\text{VAR}_{\text{min}}$, single-particle occupations converge to the expected values for all four nuclei, e.g., the valence-shell occupation smoothly attains the naive-filling value associated with a text-book spherical canonical spectrum, i.e., $v^2_v = o_v = 1/3$ in $^{18}$O. In the bottom panel, quasi-particle energies, i.e., eigenvalues of the constrained HFB equation, are displayed below $6$\,MeV. The lowest quasi-particle energy reaches a non-zero value for the closed-subshell nucleus $^{22}$O in the zero-pairing limit. Contrarily, the lowest quasi-particle energy does go to zero in $^{18,26}$O and $^{44}$Ca. While the former characteristic reflects the presence of the finite particle-hole gap at the Fermi energy in $^{22}$O, the latter is a fingerprint of the open character of the valence shell in $^{18,26}$O and $^{44}$Ca. The two different behaviors have decisive consequences for the application of BMBPT in the zero-pairing limit as discussed in Sec.~\ref{resultsBMBPT} below. \begin{figure} \scalebox{0.72}{\includegraphics{Deformation.pdf}} \caption{(Color online) Results of constrained HFB calculations of $^{18}$O in the zero-pairing limit ($\delta =0$) as a function of the number of iterations. Top panel: axial quadrupole ($\beta_{20}$) and hexadecapole ($\beta_{40}$) deformations. Second-to-top panel: total constrained grand potential and Hamiltonian expectation values. Middle panel: canonical single-particle spectrum. Second-to-last panel: canonical single-particle occupations. Bottom panel: particle-number variance. Horizontal dashed lines stipulate the theoretical limits for a spherical solution with a text-book single-particle spectrum, i.e., a neutron d$_{5/2}$ valence shell. The vertical full line denotes the point at which the converged spherical solution is constrained to deformation $\beta_{20} = 0.001$ during one iteration. \label{fig:symmetrychange}} \end{figure} \subsubsection{Binding energy} To complement the numerical analysis, Fig.~\ref{fig:collapseVSnocollapse} provides the total HFB energy, as well as its pairing (i.e., Bogoliubov) contribution, in $^{18,22}$O as a function of the constraining parameter $\delta$. The behavior is again qualitatively distinct for closed-(sub)shell and open-shell nuclei.
In $^{22}$O, the pairing energy goes to zero for a non-zero value of $\delta$, a point at which the total energy is non-differentiable, thus signaling the sharp transition associated with the BCS collapse\footnote{The sharp transition to the non-superfluid phase is non-physical in finite systems such as atomic nuclei and is known to be an artefact of the HFB/BCS theory. This deficiency is resolved at the VAPNP level.} to the non-superfluid phase. Contrarily, the constrained HFB energy evolves smoothly all the way down to $\delta =0$ in $^{18}$O, a point at which its pairing component is still different from zero. According to Eq.~\eqref{HFBenergylimit3}, the pairing contribution to the energy is predicted to evolve characteristically when filling a given valence shell in the zero-pairing limit. Under the assumption that the sum of valence-shell interaction matrix elements making up the second factor at play in Eq.~\eqref{HFBenergylimit3} is constant while filling the shell, the pairing energy should be strictly proportional to $o_v(1-o_v)$. This prediction is put to the test in Fig.~\ref{fig:pairingenergy} from $^{40}$Ca to $^{48}$Ca, i.e., when filling up the neutron f$_{7/2}$ shell. Dividing $\bar{E}^{\text{B}}_{| \bar{ \Phi} \rangle}$ by the sum of matrix elements evaluated in $^{44}$Ca, the rescaled pairing energy is compared to $o_v(1-o_v)$, where the latter is evaluated on the basis that the neutron f$_{7/2}$ shell indeed acts as the valence shell in the zero-pairing limit. In spite of the fact that the canonical basis, and thus the sum of matrix elements at play in Eq.~\eqref{HFBenergylimit3}, differs in each nucleus in principle, the rescaled pairing energy closely follows $o_v(1-o_v)$, thus confirming the theoretical prediction. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{PES_HFB_BMBPT.pdf} \caption{(Color online) Potential energy surface of $^{18}$O (left column), $^{22}$O (center-left column), $^{26}$O (center-right column) and $^{44}$Ca (right column) as a function of the constraining parameter $\delta$. Results are shown for HFB and for the two variants of BMBPT(2) (see text). Vertical dashed lines localize the unconstrained HFB calculation. \label{fig:HFB_BMBPT2} } \end{figure*} \subsubsection{Impact of spatial symmetries} So far, the zero-pairing limit has been investigated in semi-magic nuclei under the hypothesis that the associated solution of the constrained HFB problem remains spherical all the way down to the zero-pairing limit. In the present section, the possibility that the system breaks spherical symmetry, i.e., lowers the constrained Routhian by relaxing spherical symmetry, is investigated. Because it is clear that all doubly open-shell systems do deform in the zero-pairing limit, the situation is more subtle, and thus more informative, in semi-magic nuclei. As a result, $^{18}$O is first used as a first example. Figure~\ref{fig:symmetrychange} displays the results of an HFB calculation of $^{18}$O in the zero-pairing limit, i.e., constrained to $\delta =0$, as a function of the number of iterations in the minimization process.
Inputting a spherical ansatz at iteration 0, the system evolves freely until iteration 499, at which point it is subjected to an infinitesimal constraint on the axial quadrupole deformation ($\beta_{20} = 0.001$)\footnote{The multipole moments are computed as $\beta_{lm} \equiv 4\pi/[3A(1.2 A^{1/3})^l] \langle \Phi | Q_{lm} | \Phi \rangle $, where $Q_{lm} \equiv r^l Y_{lm}(\hat{\boldsymbol{r}})$ denotes the multipole operator~\cite{Ryssens:2015kga}.} during one iteration, before continuing the unconstrained iteration process until convergence. The top panel of Fig.~\ref{fig:symmetrychange} shows that the solution remains strictly spherical for the first 499 iterations given that rotational invariance is a {\it self-consistent} symmetry, i.e., the input solution at iteration 0 being spherically symmetric, the invariance cannot be spontaneously broken during the minimization process. Because the solution is provided with an infinitesimal quadrupole deformation at iteration 500, the system can take advantage of deformation for the remaining iteration process. Indeed, $^{18}$O constrained to the zero-pairing limit does deform and converges to a state with non-zero axial quadrupole and hexadecapole deformations. The second panel of Fig.~\ref{fig:symmetrychange} confirms that it is indeed advantageous for the system to exploit spatial deformation, i.e., while a fully converged spherical solution is obtained by the time iteration 499 is reached, the system does further lower the constrained Routhian once allowed to deform such that a newly converged solution is obtained by the time iteration 1500 is reached\footnote{While the present calculation is performed in the zero-pairing limit, it is not always advantageous for $^{18}$O to deform when $\delta \neq 0 $. With the present Hamiltonian and for parity-conserving axially-deformed calculations (which is the framework to consider after the perturbation at iteration 500), the minimum of the constrained Routhian is a spherical Bogoliubov state for $0.70 \lesssim \delta \leq 1$, a non-trivial deformed Bogoliubov state for $0.30 \lesssim \delta \lesssim 0.70$ and a deformed Slater determinant for $0 \leq \delta \lesssim 0.3$.}. However, while the constrained Routhian is indeed lower for the deformed configuration, the constrained HFB energy $E_{| \Phi(\delta) \rangle}$ is not. This means that the deformed solution corresponds to an {\it excited} configuration higher in energy than the converged spherical configuration reached before iteration 499 in the zero-pairing limit. The fact that the spherical configuration has lower energy than the deformed one is a consequence of their distinct nature as can be understood from the last three panels displaying canonical single-particle energies and occupations as well as the neutron-number variance. Before iteration 499, the spherical solution corresponds to the zero-pairing limit discussed at length for $^{18}$O in previous sections, i.e., a non-trivial HFB state corresponding to a partially filled d$_{5/2}$ valence shell and a neutron-number variance equal to $8/3$. Contrarily, the converged solution obtained after nearly 1500 iterations is a deformed Slater determinant with zero neutron-number variance. From iteration 500 till convergence, the spherical degeneracy of canonical single-particle energies is progressively lifted to give rise to the two-fold Kramers degeneracy. 
In particular, the d$_{5/2}$ valence shell is split into three pairs of doubly degenerate shells among which the lowest pair becomes gradually fully filled whereas the other two become fully empty. Consequently, the constrained solution transitions in the zero-pairing limit from a spherical open-shell HFB state characterized by $o_v=1/3$ to a deformed closed-shell Slater determinant characterized by $o_v=1$. This change of structure indeed has a marked impact on the energetics of the system. While the spherical HFB solution benefits from a non-zero pairing contribution responsible for the lowering of the constrained energy compared to the constrained Routhian, this is not the case for the deformed Slater determinant for which both quantities are equal. The net result is that the constrained HFB energy is eventually lower for the spherical configuration than for the deformed one. However, it happens that this behavior is also a consequence of the specificities of the perturbation imposed at iteration 500, in particular of its remaining symmetries. Indeed, while applying a constraint on $\beta_{20}$ for one iteration opens up the possibility for the system to deform later on, it does so only for parity-conserving axial deformations. When considering a fully symmetry-unrestricted ansatz\footnote{Still preserving the separation between neutrons and protons.}, the HFB reference state obtained at $\delta=0$ is a deformed Slater determinant with a total energy below that of the spherical solution, even if only by a few keV. Moreover, systematic calculations over the full set of even-even sd-shell nuclei performed in the zero-pairing limit show that, although a non-trivial HFB solution exists for semi-magic open-shell nuclei when enforcing spherical symmetry, the actual symmetry-unrestricted ground state is systematically provided by a Slater determinant with zero particle-number variance\footnote{For odd-even nuclei, the deformed even-number parity solution remains a non-trivial HFB state in the zero-pairing limit. Running $^{19}$O as an example, its spherically-symmetric solution is a non-trivial HFB state associated with the spherical d$_{5/2}$ valence shell ($a_v=3$, $d_v=6$, $o_v=1/2$) and a neutron-number variance equal to 3, whereas the symmetry-unrestricted solution is a deformed HFB state with neutron-number variance equal to 1. If one were to search for an odd-number-parity solution associated with one quasi-particle excitation, $^{19}$O would converge in the zero-pairing limit to a deformed Slater determinant breaking time-reversal symmetry, and thus Kramers degeneracy, carrying zero neutron-number variance.}, either spherical or deformed depending on the nucleus considered. \subsection{Bogoliubov many-body perturbation theory} \label{resultsBMBPT} Based on spherical HFB reference states, BMBPT(2) calculations have been performed as a function of the pairing constraint using the numerical code whose first results were reported in Ref.~\cite{Tichai:2018mll}. The two options regarding the definition of the unperturbed Hamiltonian $\Omega_{0}$ discussed in Sec.~\ref{BMBPTconstrained} have been tested. The associated potential energy surfaces (PES) are displayed in Fig.~\ref{fig:HFB_BMBPT2} for four nuclei along with the first order, i.e., HFB, one.
Away from the unconstrained HFB minimum, the average particle number receives a non-zero contribution at second order such that the reference value must be iteratively readjusted in order for the sum of both contributions to match the physical value A. In Ref.~\cite{Demol:2020mzd}, a so-called {\it a posteriori} correction was shown to provide an excellent approximation to this costly readjustment method. This {\it a posteriori} correction is presently utilized. Focusing on Option 1, i.e., BMBPT(2)-1 results, one observes that the PES essentially retains the memory of the HFB one, albeit with the several tens of MeV of added correlation energy. Looking closer, one however remarks that the PES becomes markedly different in $^{18}$O, $^{26}$O and $^{44}$Ca as the zero-pairing limit is approached. Because the lowest quasi-particle energy of the constrained HFB spectrum $\{E_{\mu}(\delta)\}$ goes to zero in open-shell nuclei as the limit is reached, the second-order correction $E^{(2)}_{| \Phi(\delta) \rangle; \{E_{\mu}(\delta)\}}$ (Eq.~\eqref{BMBPTcorrection2}) diverges as $\delta \rightarrow 0$. Obviously, no such problem occurs in $^{22}$O given that BMBPT safely reduces to HF-MBPT in this case, with the lowest two-quasi-particle excitation converging towards the non-zero particle-hole gap at the Fermi energy. Moving to BMBPT-2, the issue arising in the zero-pairing limit is regularized by construction, i.e., defining $\Omega_0$ from the unconstrained HFB spectrum $\{E_{\mu}(1)\}$ for all values of $\delta$, the energy denominators at play in the second-order energy correction can never be singular. As a result, none of the PES diverges as $\delta \rightarrow 0$. However, the PES behave at odds with HFB and BMBPT-1 when increasing $\delta$. This behavior relates to the non-canonical term $\breve{\Omega}^{11}_{| \Phi(\delta) \rangle; \{E_{\mu}(1)\}}$ becoming very large, probably leading to a highly diverging BMBPT expansion. Thus, except for the benefit brought by construction in the zero-pairing limit, the (non-standard) partitioning associated with BMBPT-2 is not to be trusted in general and shall probably only remain as an academic exercise performed for the sake of the present study. \section{Conclusions} \label{conclusions} The zero-pairing limit of an even-number parity Bogoliubov state solution of the Hartree-Fock-Bogoliubov equation under the constraint to carry a fixed number of particles A on average has been investigated in detail, i.e., both analytically and numerically. This investigation is both of academic interest and of relevance to calculations involving a constraint on a collective variable that directly or indirectly impacts the amount of pairing correlations in the system. It was demonstrated that the HFB state reaches a mathematically well-defined limit, independently of the closed- or open-shell character of the system. While the HFB state trivially goes to a Slater determinant carrying A particles in closed-(sub)shell systems, it converges in open-shell systems to a specific linear combination of a finite number of Slater determinants, among which only a subset carries the physical particle number A. Consequently, and in spite of being obtained through a zero-pairing limit, the corresponding state carries a non-zero pairing energy and a non-zero particle-number variance acting as the lower bound accessible within the manifold of appropriate HFB states.
From a general perspective, the present analysis demonstrates that HFB theory does {\it not} reduce to HF theory even when the pairing field is driven to zero in the HFB Hamiltonian matrix. All the characteristics of the HFB state predicted analytically in the zero-pairing limit have been validated numerically for a selected set of representative nuclei. Calculations were performed on the basis of a realistic two-nucleon interaction derived within the framework of chiral effective field theory but are actually generically valid. Eventually, the consequences of taking the zero-pairing limit of the HFB state on expansion many-body methods built on top of it, e.g., Bogoliubov many-body perturbation theory, have been further illustrated. While BMBPT smoothly goes to standard, i.e., Slater-determinant-based, many-body perturbation theory for closed-(sub)shell systems, it becomes ill-defined for open-shell systems when taking the zero-pairing limit. It will be interesting to extend the investigation of the zero-pairing limit to the finite-temperature Hartree-Fock-Bogoliubov formalism in the future. \section{Acknowledgements} The authors thank J. Ripoche for having numerically identified the lower bound on the particle-number variance for open-shell systems and for pointing it out to them, and W. Ryssens for useful discussions. This project is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.~839847, the Max Planck Society and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 279384907 -- SFB 1245. The authors thank Heiko Hergert for sharing his spherical HFB code and Robert Roth for providing them with the interaction matrix elements. \begin{appendix} \section{Useful formulae} \label{formulae} Newton's binomial formula along with its first and second derivatives with respect to $x$ provide three useful identities \begin{align} (x+y)^n &= \sum_{k=0}^{n} \binom{k}{n} \, x^{k} y^{n-k} \, , \label{binomial1} \\ n(x+y)^{n-1} &= \sum_{k=1}^{n} \binom{k}{n} \, k \, x^{k-1} y^{n-k} \, , \label{binomial2} \\ n(n-1)(x+y)^{n-2} &= \sum_{k=2}^{n} \binom{k}{n} \, k(k-1) x^{k-2} y^{n-k} \, . \label{binomial3} \end{align} \section{Minimal particle-number variance} \label{APPvariance} Given the second-quantized form of $A$, the average particle-number variance associated with a Bogoliubov state is easily obtained via Wick's theorem under the form \begin{align} \text{VAR}_{| \Phi \rangle} &= \sum_{\alpha\beta} \left( \kappa_{\alpha\beta}^* \kappa_{\alpha\beta} - \rho_{\beta\alpha} \rho_{\alpha\beta} \right) + \sum_{\alpha} \rho_{\alpha\alpha} \notag \\* &= - \text{Tr}[\kappa\kappa^\ast] - \text{Tr}[\rho^2] + \text{Tr}[\rho] \, , \label{eq:pnvar_rho_kappa} \end{align} which is a positive or null quantity. Resorting to the unitarity of the Bogoliubov transformation~\cite{RiSc80}, the identity \begin{align} \label{eq:rho_kappa_trace_relation} -\text{Tr}[\kappa \kappa^\ast] + \text{Tr}[\rho^2] - \text{Tr}[\rho] &= 0 \, , \end{align} can be proven and added $(2\alpha-1)$ times, with $\alpha$ an arbitrary real number, to Eq.~\eqref{eq:pnvar_rho_kappa} to generate the alternative expression \begin{align} \text{VAR}_{| \Phi \rangle} &\equiv -2\alpha \text{Tr}[\kappa\kappa^\ast] + 2(1-\alpha) (\text{Tr}[\rho] - \text{Tr}[\rho^2]) \, .
\label{eq:pnvar_rho_kappa2} \end{align} This procedure allows one to vary the proportion with which the terms depending on normal or anomalous density matrices contribute to the particle-number variance. Choosing $\alpha = 0$, an expression depending solely on the normal density matrix is conveniently obtained \begin{align} \text{VAR}_{| \Phi \rangle} &= 2 (\text{Tr}[\rho] - \text{Tr}[\rho^2]) \nonumber \\ &= 2 \big( \sum_{k} v^2_k - \sum_{k} v^4_k \big)\, , \label{eq:pnvar_rho} \end{align} where the latter expression results from using the canonical basis. Next, the differential form of the particle-number variance under an infinitesimal variation of the canonical pairing field matrix elements is computed \begin{align} \delta \text{VAR}_{| \Phi \rangle} &= \sum_{kk'>0} \frac{\delta \text{VAR}_{| \Phi \rangle}}{\delta \Delta_{k\bar{k}'}} \delta \Delta_{k\bar{k}'} \nonumber \\ &= \sum_{kk'>0} \sum_{l>0} \frac{\delta \text{VAR}_{| \Phi \rangle}}{\delta v^2_{l} } \frac{\delta v^2_{l} }{\delta \Delta_{k\bar{k}'}} \delta \Delta_{k\bar{k}'} \nonumber \\ &= \sum_{k>0} \frac{(\epsilon_k - \lambda)^2|\Delta_k|}{[(\epsilon_k - \lambda)^2 + \Delta_k^2]^{2}} \, \delta |\Delta_k|. \label{diff} \end{align} where the partial derivatives at play have been obtained using both Eqs.~\eqref{eq:pnvar_rho} and~\eqref{partnumbconstr} \begin{subequations} \label{partialderivatives} \begin{align} \frac{\delta \text{VAR}_{| \Phi \rangle}}{\delta v^2_{l} } &= 2(1-2v^2_{l}) \nonumber \\ &= \frac{2(\epsilon_l - \lambda)}{\sqrt{(\epsilon_l - \lambda)^2 + \Delta_l^2}}\, , \label{partialderivatives1} \\ \frac{\delta v^2_{l} }{\delta \Delta_{k\bar{k}'}} &= \frac{(\epsilon_k - \lambda)|\Delta_k|}{2[(\epsilon_k - \lambda)^2 + \Delta_k^2]^{3/2}} \delta_{kk'} \delta_{kl} \, . \label{partialderivatives2} \end{align} \end{subequations} Consequently, the particle-number variance is an increasing function of each of the canonical pairing gap matrix elements and takes its minimum value when all these pairing gap matrix elements go to zero, i.e., in the zero-pairing limit. While the actual value of the particle-number variance reached in the zero-pairing limit does depend on the situation, i.e., on the particle number A and the (symmetry of the) canonical spectrum, as discussed at length in the body of the paper, this value acts as a lower bound within the manifold of HFB states characterized by a given symmetry and a given average particle number A.
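As an elementary illustration of this monotonicity, the following minimal Python sketch (a schematic toy spectrum, assuming the BCS-like canonical occupations underlying Eq.~\eqref{partialderivatives1}) evaluates Eq.~\eqref{eq:pnvar_rho} as a function of a common canonical gap:
\begin{verbatim}
import numpy as np

def variance(eps, lam, gap):
    # VAR = 2 (Tr[rho] - Tr[rho^2]) with canonical occupations
    # v_k^2 = (1 - (eps_k - lam)/sqrt((eps_k - lam)^2 + gap^2)) / 2
    v2 = 0.5 * (1.0 - (eps - lam) / np.sqrt((eps - lam)**2 + gap**2))
    return 2.0 * (v2.sum() - (v2**2).sum())

# Two hole states, a doubly-degenerate shell at the Fermi energy,
# and two particle states (energies in arbitrary units)
eps, lam = np.array([-2.0, -2.0, 0.0, 0.0, 1.0, 1.0]), 0.0
for gap in (2.0, 1.0, 0.5, 0.1, 0.001):
    print(gap, variance(eps, lam, gap))
# The variance decreases monotonically with the gap and saturates
# at 1 = 2 a_v (1 - o_v) for a_v = 1, o_v = 1/2 instead of vanishing.
\end{verbatim}
\end{appendix} \bibliographystyle{apsrev4-1}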
\section{Introduction} Coulomb gauge plays a prominent role in the Hamiltonian formulation of non-Abelian gauge theories \cite{Gribov:1977wm,Jackiw:1977ng,Christ:1980ku,Schutte:1985sd,Zwanziger:2002sh,Reinhardt:2011fq}; within this framework, variational \emph{Ans\"atze} offer a promising approach to determine the vacuum state \cite{Schutte:1985sd,Szczepaniak:2001rg,Feuchter:2004mk,Reinhardt:2004mm}. In recent years much effort has been invested in this direction, achieving a large number of interesting analytical results which combine into a rather concise picture of the low-energy sector in gauge theories, see e.g. Refs.~\cite{Zwanziger:2002sh,Schleifenbaum:2006bq,Reinhardt:2008ek}. The picture of the vacuum conveyed by this approach is the Gribov-Zwanziger (GZ) confinement scenario, which in turn is based on a restriction of the functional integral to the first Gribov region. Applied to Coulomb gauge, this scenario leads to a number of general predictions which are not tied to the variational approach, and which can be accessed directly in lattice simulations: \begin{enumerate} \item the Coulomb potential should be bound from below \cite{Zwanziger:2002sh} by the physical Wilson potential \cite{Wilson:1974sk}, i.e.~the presence of Coulomb confinement should be a {\it necessary} condition for the physical confinement mechanism to take place; \item the gluon dispersion relation should be infra-red (IR) divergent, naturally providing a confining scale \cite{Gribov:1977wm}; \item the Coulomb gauge ghost form factor should be IR-divergent. \end{enumerate} The variational approach of Refs.~\cite{Feuchter:2004mk,Reinhardt:2004mm,Schleifenbaum:2006bq,Epple:2006hv} realizes this scenario, provided that the third condition (often called \emph{horizon condition} in this context) is implemented as a boundary condition.\footnote{The horizon condition selects among several possible solutions in the variational approach, while it comes out self-consistently in the renormalization group approach \cite{Leder:2010ji}. Physically, this can be interpreted as a vanishing dielectric constant of the vacuum, i.e.~a manifestation of the dual Meissner effect \cite{Reinhardt:2008ek}.} Therefore, a lattice investigation of the above-listed Coulomb gauge correlators represents a powerful tool to gain insight into the mechanism of quark confinement while offering a direct bridge to continuum setups; this program has been thoroughly carried out in Refs.~\cite{Burgio:2008jr,Burgio:2009xp,Quandt:2010yq,Burgio:2012ph,Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}. While the gluon sector has been found to agree with the continuum predictions, confirming the dynamical generation of a Gribov mass $M \approx 0.9\,\mathrm{GeV}$ and the validity of Gribov's formula for the gluon propagator \cite{Burgio:2008jr,Burgio:2009xp}, the ghost sector was shown to agree only qualitatively with the continuum predictions. In particular, the IR divergence of the ghost form factor determined in lattice simulations \cite{Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa} is much weaker than the one predicted by continuum calculations \cite{Feuchter:2004mk,Schleifenbaum:2006bq,Epple:2006hv}, and a Coulomb string tension could be extracted from the IR behavior of the Coulomb potential only under very optimistic assumptions \cite{Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}.
Furthermore the lattice results are in conflict with the sum rule for the infrared exponents \cite{Schleifenbaum:2006bq}, which merely assumes that the ghost-gluon vertex in Coulomb gauge is bare, or at least non-singular, in the deep infra-red. In a recent work Cooper and Zwanziger \cite{Cooper:2015sza} have proposed to implement Coulomb gauge by picking the Gribov copy with the lowest eigenvalue of the Faddeev--Popov\ operator, instead of the ``best copy'' (\emph{bc}) with the maximal value of the Coulomb gauge functional. They argue that a lattice simulation based on such a setup would lead to a better agreement with continuum predictions. The aim of this paper is to directly implement this proposal on the lattice and analyze its consequences on the correlators which should bear the signature of the Gribov-Zwanziger confinement mechanism. As a by-product, we will also be able to re-analyze the \emph{bc} strategy with very high statistics, as finding a small eigenvalue of the Faddeev--Popov\ operator requires the analysis of a very large number of gauge copies. \section{The Gribov problem} As Gribov has shown long ago \cite{Gribov:1977wm}, the Coulomb gauge condition $\partial_i A_i = 0$, among others, is not sufficient to select a single configuration from the gauge orbit uniquely. On the lattice, gauge fixing amounts to selecting, for each given configuration $\{ U_\mu(x) \}$ of links, a gauge rotation $g(x) \in SU(N_c)$ such that some (unique) condition is met. In particular, Coulomb gauge fixing is achieved by maximizing, for each time slice $t$, the functional \begin{equation} \label{eq:gribov:gff} F_t^U[g] = \frac{1}{N_c N_d V}\sum_{\vec x,i} \mathsf{Re}\, \mathrm{tr} \left[ U^g_i(t,\vec{x}) \right]\,, \end{equation} where $V$ is the spatial volume of the lattice and the sum extends over all spatial links only. A \emph{local} maximum of \eqref{eq:gribov:gff} picks out -- more or less randomly -- one copy in the first Gribov region (where the Faddeev--Popov\ operator is positive definite), out of many others that all satisfy the same condition. A unique prescription, which would solve the Gribov problem completely, would amount to finding the \emph{global} maximum, i.e.~the representative of the gauge orbit in the so-called fundamental modular region (FMR). Finding such a global maximum of a function with many degrees of freedom is, however, analogous to finding the ground state of an \SU{N} spin glass \cite{Marinari:1991zv}, a problem which is known to be NP-hard even for the much simpler case of the $\mathbb Z_2$ gauge theory \cite{Barahona:1982}. In the past, two approaches have been widely used to tackle the problem of Gribov copies in lattice gauge theory. The first one is to simply neglect that there is a problem at all, essentially stating that Gribov copies have no physical significance. In this case, the first (local) maximum found by the algorithm is selected and one proceeds to calculate all relevant (gauge-dependent) quantities. In the literature, this process goes under the name of ``minimal gauge'' \cite{Maas:2008ri}.\footnote{In the literature the term {\it minimal gauge} had originally been applied in Landau gauge to the representative of the fundamental modular region along the gauge orbit \cite{Cucchieri:1997dx,Cucchieri:1997ns}.
Later the term {\it absolute gauge} stuck for this case, while minimal gauge was ``downgraded'' to its present use \cite{Maas:2008ri}.} The second approach is to choose the copy with the highest value of the gauge functional as the ``best representative'' of the global maximum, based on the conjecture that results for gauge-dependent quantities will be strongly correlated with the value of the gauge functional. In order to clarify this statement, let $\{U_\text{FMR}\}$ be the ensemble of gauge configurations which are in the FMR, i.e.\ $F[U_\text{FMR}] = \text{max}$, and let $\{U_\text{bc}\}$ be the ensemble with gauge configurations close to such a maximum \begin{equation} F[U_\text{FMR}] \gtrsim F[U_\text{bc}], \end{equation} i.e.\ the set of configurations which correspond to the best maximum one could find numerically. The assumption is that the $U_\text{bc}$ are, in some sense, ``close'' to the $U_\text{FMR}$, and that this carries over to the expectation value of any gauge-variant quantity $\Omega$, i.e. \begin{equation} \left< \Omega(U_\text{bc}) \right> \approx \left< \Omega(U_\text{FMR}) \right> \equiv \langle \Omega \rangle_\text{phys}\,. \label{assump-1} \end{equation} No mathematical proof of this assumption exists, and a direct numerical test is only feasible for toy models on very small lattices. One such test, a \groupU{1} lattice theory on a 2-dimensional sphere, actually provides numerical evidence \emph{against} the hypothesis in Eq.~\eqref{assump-1} \cite{deForcrand:1994mz}. For historical reasons, we will call the ensemble $\{U_\text{bc}\}$ the \emph{best copy} (bc) ensemble. A third approach for resolving the Gribov problem has been discussed for Landau gauge in Refs.~\cite{Sternbeck:2012mf,Sternbeck:2013zja}: instead of choosing the copy with the best value of the gauge functional, one picks the copy for which the first non-trivial eigenvalue of the Faddeev--Popov\ operator is smallest, the so-called \emph{lowest copy} (lc). We will borrow this notation from the aforementioned papers. The idea behind the lc-approach is that this should choose configurations that are close to the Gribov horizon where the Faddeev--Popov\ operator becomes singular. According to Gribov's and Zwanziger's entropic reasoning, such configurations should be the relevant ones in the thermodynamic limit. The authors of Refs.~\cite{Sternbeck:2012mf,Sternbeck:2013zja} found that both the ghost dressing function and -- to a much smaller extent -- the gluon propagator are enhanced in the IR for the lowest-eigenvalue copy when compared to the bc-approach, while they become flatter if one chooses a copy with a large eigenvalue of the Faddeev--Popov\ operator instead. Similar attempts to tweak the Landau gauge fixing procedure in order to make the IR-behaviour of the ghost propagator match the decoupling solutions found in the continuum (e.g.~by Dyson--Schwinger or Functional Renormalization Group techniques) had previously been put forward with mixed results \cite{Maas:2009se,Maas:2015nva}. As discussed in the introduction, a quantitative discrepancy exists in Coulomb gauge between the IR exponent of the ghost dressing function in the Hamiltonian variational approach \cite{Feuchter:2004mk,Reinhardt:2004mm,Schleifenbaum:2006bq,Epple:2006hv} and the corresponding lattice results \cite{Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}. On the other hand the behavior of the gluon propagator agrees very well between the two approaches \cite{Burgio:2008jr,Burgio:2009xp}.
Since the IR exponents of the ghost form factor and the gluon propagator should be related by a sum rule which is based on the sole assumption that the ghost-gluon vertex should be bare, or at least non-singular, in the deep infra-red \cite{Schleifenbaum:2006bq} (a fact that is known to hold in Landau gauge and expected to carry over to Coulomb gauge\footnote{In Landau gauge, the vertex is expected to be unrenormalized based on Slavnov-Taylor identities \cite{Taylor:1971ff}; this is confirmed by lattice simulations which find only mild deviations from a bare vertex over the entire momentum range \cite{Cucchieri:2008qm}. A similar conclusion can also be made in Coulomb gauge within the variational approach (using the continuum propagators as input) \cite{Huber:2014isa}.}), this poses an unresolved puzzle. One possible explanation for such a disagreement is that the variational approach would have to be improved in order to better reproduce the lattice results. This goes beyond a mere improvement of the variational \emph{ansatz}, since the sum rule must hold for any ansatz (assuming a non-singular ghost-gluon vertex in the IR). One possible idea is that the proper implementation of the GZ idea would go beyond the standard Coulomb Hamiltonian combined with the horizon condition, and additional terms in the action or Hamiltonian would be required, which could eventually reconcile the sum rule with lattice propagators. There are some indications that such a refinement is necessary in Landau gauge, where additional condensates can be introduced in the GZ action in order to make the GZ scenario agree with lattice data \cite{Cucchieri:2011ig}. In the Hamiltonian approach, however, we see no compelling evidence for such a modification, in particular since the present investigation will show that there is no such thing as ``the lattice propagators'' in Coulomb gauge, at least with current computational power. It would then be very hard to identify the proper extension of the Coulomb gauge GZ scenario required to match the inconsistent lattice data. This leaves us with the second logical explanation for the sum-rule puzzle, namely that the current lattice simulations in Coulomb gauge do not describe continuum physics and hence need refinement. More precisely, the bc-strategy on the lattice could be biased by artifacts related to the Gribov problem, being unable to come close enough to the Gribov horizon, and the lc-strategy might provide a better description of continuum physics \cite{Cooper:2015sza}. To check this conjecture, we will in the following adapt the lc-strategy to Coulomb gauge. \section{Lattice Setup} For our study we use the colour group $G=SU(2)$ for simplicity and employ the isotropic and the anisotropic Wilson gauge action \cite{Burgio:2003in} \begin{align} S = \sum_x \biggl\{&\beta_s \sum_{j> i=1}^3\left(1- \frac{1}{2} \mathsf{Re}\,\mathrm{tr}{U_{ij}(x)}\right)\nonumber\\ +&\beta_t \sum_{i=1}^3 \left(1-\frac{1}{2}\mathsf{Re}\,\mathrm{tr}{U_{i4}(x)}\right) \biggr\}\,, \label{s_anis} \end{align} where we parameterize $\beta_s = \beta \gamma$ and $\beta_t = \beta/\gamma$, with $\gamma$ the bare anisotropy, while $\xi = a_s/a_t$ denotes the renormalized anisotropy in the following. We have used isotropic lattices of three different sizes and discretizations in our analysis. Since the ghost propagator is known to suffer from strong scaling violations on isotropic lattices we include two anisotropic lattices of fixed size. Our setup is summarized in Tab.~\ref{tab:gribov:configs}.
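All gauge-dependent quantities discussed below are measured after maximizing the time-slice functional of Eq.~\eqref{eq:gribov:gff} on these lattices. For orientation, a minimal NumPy sketch of its evaluation for $SU(2)$ (with a hypothetical array layout, independent of the cuLGT implementation used below, and assuming $N_d=3$ counts the summed spatial directions) reads:
\begin{verbatim}
import numpy as np

def coulomb_functional(U, g, t):
    # F_t^U[g] of Eq. (1) for SU(2): U has shape (T, L, L, L, 4, 2, 2)
    # with spatial directions i = 0, 1, 2; g lives on time slice t
    # with shape (L, L, L, 2, 2). Both hold complex SU(2) matrices.
    Nc, Nd = 2, 3
    L = g.shape[0]
    F = 0.0
    for i in range(3):
        Ui = U[t, ..., i, :, :]
        g_fwd = np.roll(g, -1, axis=i)  # g(x + i_hat)
        # gauge-transformed link g(x) U_i(x) g(x + i_hat)^dagger
        Ug = np.einsum('...ab,...bc,...dc->...ad', g, Ui, g_fwd.conj())
        F += np.einsum('...aa->...', Ug).real.sum()
    return F / (Nc * Nd * L**3)
\end{verbatim}
Gauge fixing then amounts to iteratively updating $g$ such that this functional increases monotonically, as described in the next section.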
To fix the lattice spacing we used the {$SU(2)$} results known from the literature, as summarized in the tables given in Ref.~\cite{Burgio:2008jr}. We have also fixed $\sqrt{\sigma} = 0.44\, \giga \electronvolt$ to set the physical scale. \renewcommand{\arraystretch}{1.4} \begin{table}[htb] \centering \begin{tabular}{cccccc} \toprule Label & \qquad Size\qquad\qquad & \quad$\xi$\qquad & \quad$\beta$ \quad\quad & $a_s$ [$\giga \electronvolt^{-1}$] & $L$ [$\femto\meter$]\\ \colrule A1 & $16^4$ & 1 & 2.2 & 1.07 & 3.4 \\ A2 & $16^4$ & 1 & 2.3 & 0.84 & 2.6 \\ A3 & $16^4$ & 1 & 2.4 & 0.61 & 1.9 \\ B1 & $24^4$ & 1 & 2.2 & 1.07 & 5.0 \\ B2 & $24^4$ & 1 & 2.3 & 0.84 & 4.0 \\ B3 & $24^4$ & 1 & 2.4 & 0.61 & 2.9 \\ C1 & $32^4$ & 1 & 2.2 & 1.07 & 6.7 \\ C2 & $32^4$ & 1 & 2.3 & 0.84 & 5.3 \\ C3 & $32^4$ & 1 & 2.4 & 0.61 & 3.8 \\ D1 & $128\times 32^3$ & 4 & 2.25 & 1.11 & 7.0 \\ \botrule \end{tabular} \caption{Lattice setup.} \label{tab:gribov:configs} \end{table} \renewcommand{\arraystretch}{1.0} \section{Gauge fixing and Gribov copies} \label{sec:gribov:gaugefixing} Both for the lc and the bc strategy we use the over-relaxation technique \cite{Mandula:1990vs} in the CUDA implementation cuLGT \cite{Schroeck2013a}. In Ref.~\cite{Schroeck2013a}, simulated annealing \cite{Kirkpatrick:1983zz,Kirkpatrick1984} is also discussed as a technique to increase the probability of finding the absolute maximum of the gauge fixing functional, i.e.\ of finding a better best-functional copy. By now, the de-facto standard technique to find the (best approximation of the) global maximum is a combination of repeated gauge fixing and pre-conditioning with simulated annealing \cite{Bogolubsky:2005wf,Bogolubsky:2007bw}. In this context, repeated gauge fixing means to start the gauge fixing multiple times from a random gauge transformation and select the copy which best satisfies the bc (or lc) condition. In Fig.~\ref{fig:gribov:oriter_example} we show an illustrative plot of the evolution of the gauge fixing precision \begin{align} \label{eq:landau:gaugeprec:max} \theta \equiv \frac{1}{N_c} \max_{\vec{x}} \mathrm{tr}\left[ \Delta(x)\Delta^\dag(x)\right] \end{align} with \begin{align} \Delta(x) \equiv \left[\partial_i A_i\right]^\text{lat} = \sum_i\left[A^\text{lat}_i(x)-A^\text{lat}_i(x-\hat{\imath})\right] \nonumber \end{align} over the number of gauge fixing steps. \begin{figure}[phtb] \center \includegraphics[width=.494\columnwidth]{fig1a} \includegraphics[width=.494\columnwidth]{fig1b} \caption{Gauge precision $\theta$ over the number of over-relaxation steps (color online). \emph{Left panel:} 5 gauge copies of the same configuration. The light blue (top) curve is with simple relaxation, the other lines correspond to over-relaxation ($\omega=1.7$). The pink line with the smallest slope corresponds to a significantly smaller value (compared to the other copies) of the first non-trivial eigenvalue $\lambda_1$ of the FP operator. \emph{Right panel:} the red (top) and green (bottom) line correspond to over-relaxation ($\omega=1.7$) without (red) and with (green) simulated annealing preconditioning. As can be seen, the preconditioning removes the first phase where the algorithm tries to locate a maximum, while the slope of the second phase (the eventual convergence speed) is unchanged. The blue curve in the middle employs preconditioning with simple relaxation ($\omega = 1$) and shows no fluctuating first phase but a much smaller convergence speed.
All three lines converge towards the same Gribov copy, as was confirmed by identical functional values and an identical first non-trivial FP eigenvalue.} \label{fig:gribov:oriter_example} \end{figure} In the figure on the left-hand side, four runs with over-relaxation parameter $\omega = 1.7$ and one run with $\omega = 1$ (pure relaxation) are shown. The gauge fixing has two characteristic stages: in the first stage the precision fluctuates strongly at a rather high value until a maximum is located with a precision of about $\theta \approx 10^{-4}$. Then, in the second stage, the precision monotonically approaches zero. As shown on the right-hand side, if simulated annealing pre-conditioning is used, the first stage is already overcome in the simulated annealing phase (which is not shown in the plot). As we focus on the lc-approach in this study, our goal is not to bias our algorithm towards copies with a high value of the g.f.~functional, and we thus have to forgo simulated annealing preconditioning. Since the bc results in this paper are mostly obtained as a byproduct of the main search for the lowest-eigenvalue copy, they are also not preconditioned with simulated annealing, unless explicitly stated otherwise. Unfortunately, no algorithm is known that would precondition the gauge fixing towards a low eigenvalue of the Faddeev--Popov\ operator, and we have to rely on pure over-relaxation with a high number of gauge copies $N_r$. In a first run we calculated the lowest eigenvalue $\lambda_1$ on $N_r = \mathcal O (10^3)$ copies of the small lattices. In Ref.~\cite{Greensite:2004ur} it was noticed that the size of the smallest eigenvalue is correlated with the number of gauge fixing iterations~$N_\text{it}$ that are necessary to achieve a given accuracy $\theta$, as indicated in Fig.~\ref{fig:gribov:oriter_example}. The reason for this behavior is that a low eigenvalue means an almost flat direction in the g.f.~functional and an ill-conditioned Faddeev--Popov\ operator, leading to a slow convergence of the iteration process. In Fig.~\ref{fig:gribov:iter_lambda_corr} we investigate this behavior in more detail. We find a perfect correlation of $\lambda_1$ and $N_\text{it}$, independently of the coupling $\beta$, with the slope depending only very weakly on the over-relaxation parameter $\omega$. In fact, we find that all data can be perfectly described by the simple power law \begin{equation} \lambda_1\left( N_\text{it} \right) = \frac{c}{N_\text{it}^{\,\gamma}}\,, \label{eq:eigen_copy} \end{equation} with $\gamma \approx 1.1$ and the proportionality factor $c$ depending strongly on $\omega$. \begin{figure}[phtb] \center \includegraphics[width=0.78\columnwidth]{fig2} \caption{Smallest eigenvalue $\lambda_1$ of the Faddeev--Popov\ operator as a function of the number of gauge-fixing iterations. From each set A1, A2 and A3, we used 10 configurations and calculated 1000 gauge copies. The data points which correspond to fewer iterations (left) are from runs with $\omega = 1.9$; for the points with more iterations (right) we used $\omega=1$.} \label{fig:gribov:iter_lambda_corr} \end{figure} To rule out that the over-relaxation parameter $\omega$ conditions the algorithm to find a gauge copy with a specific value of $\lambda_1$, we verified that $\omega$ has no influence, on average, on how often a configuration with a small eigenvalue is found. This is also indicated in Fig.~\ref{fig:gribov:iter_lambda_corr}, though in the plot it is obfuscated by the bulk of points around the minimal number of iterations.
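The power law of Eq.~\eqref{eq:eigen_copy} is most conveniently checked by a straight-line fit in log-log space; a minimal sketch (with \texttt{n\_it} and \texttt{lam1} hypothetical arrays holding the iteration counts and smallest eigenvalues per gauge copy) could read:
\begin{verbatim}
import numpy as np

# linear fit of log(lambda_1) vs log(N_it); numerically more stable
# than fitting c / N_it**gamma directly
slope, intercept = np.polyfit(np.log(n_it), np.log(lam1), 1)
gamma = -slope          # exponent of Eq. (eq:eigen_copy), here ~1.1
c = np.exp(intercept)   # normalization, depends strongly on omega
\end{verbatim}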
The correlation of the number of iterations and the smallness of the Faddeev--Popov\ eigenvalue allows us to tweak our algorithm: since the calculation of eigenvalues is computationally the most demanding part in our gauge fixing program, we implemented a (self-adjusting) threshold, where the eigenvalues are calculated only for ``promising'' gauge copies for which the number of iterations exceeds a certain value. Since the smallest eigenvalue (which we are able to find) differs from configuration to configuration, we usually re-start with a small threshold for each configuration. If we do find a small eigenvalue, the threshold is updated to a factor $\alpha$ of the number of iterations that were needed to find this particular (small) eigenvalue. We find that $\alpha = 0.8$ provides a suitable update strategy: with this setting, eigenvalues are calculated in a typical run for many gauge copies, up to the point where a small eigenvalue is found and the threshold is changed. Since the Gribov copy with the smallest eigenvalue is usually well separated from the one with the next-to-smallest eigenvalue, this procedure constrains the program to evaluate the eigenvalues only for promising configurations with the smallest $\lambda_1$. \begin{figure}[phtb] \includegraphics[width=0.98\columnwidth]{fig3a} \includegraphics[width=0.98\columnwidth]{fig3b} \includegraphics[width=0.98\columnwidth]{fig3c} \caption{Smallest eigenvalue vs.\ functional value for 1000 copies from 4 arbitrary configurations of the $16^4$ lattices A1, A2, A3 (from top to bottom). The number of distinct Gribov copies decreases with finer lattices.} \label{fig:gribov:lambda_vs_gff} \end{figure} Since the first Gribov region and the FMR have a common boundary, one may wonder whether the bc-approach, which can be seen as an approximate search for configurations in the FMR, and the lc-approach, as an approximate search for configurations close to the Gribov horizon, eventually converge to the same configuration. However, from the Landau gauge data \cite{Sternbeck:2012mf} there is no such indication. Also for our Coulomb gauge data there is no evidence that the bc- and lc-procedure may coincide. In Fig.~\ref{fig:gribov:lambda_vs_gff} we show scatterplots for four arbitrarily selected configurations of each of the small ($16^4$) lattices A1, A2, A3, from top to bottom. The data points are from 1,000 different gauge copies, but there are far fewer points, as the same Gribov copy is often found multiple times. In fact, the number of distinct Gribov copies varies strongly between configurations, compare for instance the third and fourth configuration of the A1 lattice (top right). As expected, the number of distinct Gribov copies decreases with finer lattice spacing. \begin{figure}[phtb] \includegraphics[width=0.49\columnwidth]{fig4a} \includegraphics[width=0.49\columnwidth]{fig4b} \caption{Best approximation of the FMR, i.e.\ the bc-approach, and the Gribov horizon, i.e.\ the lc-approach. Full light-colored symbols denote 1,000 trials; the empty, dark-colored symbols denote 10,000 gauge copies. Lattices: B1~(left), B3~(right). There is no configuration where bc = lc.} \label{fig:gribov:fmr_vs_GH} \end{figure} Another indication that bc and lc are different gauges is the result of Fig.~\ref{fig:gribov:fmr_vs_GH}. There we compare the best approximation of the FMR and the Gribov horizon for all 100 configurations of the $24^4$ sets B1 and B3 after 1,000 and 10,000 gauge copies, respectively.
Neither on the coarse lattice (B1) nor on the fine lattice (B3) could we find any configuration where the best-functional and the lowest-eigenvalue copy coincide. For the coarse lattice with 10,000 copies, the smallest eigenvalues are well separated, by at least an order of magnitude, into a first region containing all the bc copies and a second region with the lc copies. While only very few (B1) or no (B3) configurations see a decrease of the lowest eigenvalue $\lambda_1$ in the bc case as we go from 1,000 to 10,000 copies, the lc data still see a considerable reduction of $\lambda_1$. A similar comparison was made for Landau gauge in Ref.~\cite{Maas:2015nva}. There, the authors used the value of the ghost propagator at the smallest non-zero momentum as an estimate of the smallness of the lowest FP eigenvalue. While they used a much larger ensemble of $\approx \mathcal{O}(10^3)$ configurations, they used far fewer gauge fixing repetitions, $\approx \mathcal{O}(10)$. With this setup they found configurations that are close to \emph{both} the FMR and the Gribov horizon. However, it is clear that their setup (many configurations, small $N_r$) is specifically biased towards finding such configurations, while our setup is biased in the opposite direction (fewer configurations, large $N_r$). For a detailed study of this effect, which is not our focus, we would have to significantly increase the statistics. \begin{figure}[phtb] \includegraphics[width=0.49\columnwidth]{fig5a} \includegraphics[width=0.49\columnwidth]{fig5b} \caption{Number of distinct Gribov copies vs.\ the number of gauge copies for the $16^4$ (left) and the $24^4$ (right) lattices. Only for the finest $16^4$ lattice is a saturation observed. The Gribov copy is identified only by the value of the gauge fixing functional.} \label{fig:gribov:number_gribovcopies} \end{figure} Finally, we try to estimate the number of Gribov copies in Fig.~\ref{fig:gribov:number_gribovcopies} by counting how many distinct Gribov copies we are able to find for a given number of g.f.~attempts. For this study we use only the functional value to identify a Gribov copy, since we do not have the smallest eigenvalue available for all copies (due to the threshold strategy described above). In general, an unambiguous identification of a Gribov copy would require identical values for \emph{all} gauge-dependent quantities; the use of only a single quantity (the g.f.~functional) may therefore erroneously take distinct copies as identical, i.e.~the procedure is biased towards finding too many identical and too few distinct copies.\footnote{Additionally, the authors of Ref.~\cite{Maas:2015nva} found that there are gauge copies with the same functional value but a different value for the ghost propagator at the smallest non-zero momentum.} An unambiguous estimate would furthermore require that each Gribov copy is found with equal probability; however, very likely there are local maxima that are easier for the algorithm to locate. This effect will lead to an underestimation of the number of distinct Gribov copies. Thus, the result in Fig.~\ref{fig:gribov:number_gribovcopies} has to be treated very carefully. More comprehensive studies of the number of Gribov copies in lattice gauge theory can be found for example in Refs.~\cite{Hughes:2012hg,Mehta:2014jla}. Since the number of Gribov copies varies considerably between different configurations, the error bars are rather large.
Only on the smallest and finest lattice is a saturation of the number of Gribov copies observed within 10,000 g.f.~attempts. The main conclusion we can draw from Fig.~\ref{fig:gribov:number_gribovcopies} is that we are far from having explored the whole Gribov region, which would be essential if the absolute lowest-eigenvalue copy still differs substantially from the lowest-eigenvalue copy in our limited search space. \section{Results} \label{sec:gribov:results} While there is no compelling reason for the lc-approach to have a large effect on the gluon propagator, we expect a clear impact on the ghost propagator, given its spectral representation \begin{equation} G(\vec p) = \sum_{n} \frac{\phi_n(\vec{p})\phi_n(-\vec{p})}{\lambda_n}\,, \label{eq_spectral} \end{equation} where $\lambda_n$ are the eigenvalues and $\phi_n(\vec p )$ the momentum space eigenfunctions of the Faddeev--Popov\ operator. As for the Coulomb potential, one also expects a large effect from the lc-strategy. Let us discuss these quantities case by case. \subsection{Gluon propagator} In Landau gauge, a small Gribov copy dependence was observed for the gluon propagator in the deep IR on a large $54^4$ lattice \cite{Sternbeck:2012mf}. With our lattice setup we are not able to reach that far into the IR and do not see a significant effect on the Coulomb gauge gluon propagator $D(\vec{p})$ as defined in Refs.~\cite{Burgio:2008jr,Burgio:2009xp}; see Fig.~\ref{fig:gribov:gluonprop}, where we plot $D(\vec{p})/|\vec{p}|$ to underline its IR-behavior.\footnote{The expert reader will notice a strong similarity between the IR-behavior of the gluon (and, to a lesser extent, ghost) propagators in Coulomb and Landau gauge. This has been extensively discussed in Refs.~\cite{Burgio:2009xp,Burgio:2012bk} and can simply be ascribed to the presence of a common IR (Gribov) mass scale in both cases.} \begin{figure}[hptb] \center \includegraphics[width=0.8\columnwidth]{fig6} \caption{The gluon propagator for the B1 lattice with the bc- and the lc-approach from 1000 trials. The solid line is a fit to the Gribov formula \cite{Burgio:2008jr,Burgio:2009xp}. The choice of Gribov copy makes no visible difference.} \label{fig:gribov:gluonprop} \end{figure} Since the accurate calculation of $D(\vec{p})$ requires the technique illustrated in Refs.~\cite{Burgio:2008jr,Burgio:2009xp}, Coulomb gauge needs to be fixed on \emph{all} timeslices. This limits the number of g.f.~repetitions as compared to the study of the Faddeev--Popov-operator dependent quantities in the following sections, which can all be evaluated on a single time slice. \subsection{Ghost propagator} As expected from Eq.~\eqref{eq_spectral}, the Gribov copy effect (i.e.~the choice among the different g.f.~prescriptions for picking Gribov copies) has a huge impact on the ghost propagator as defined in Ref.~\cite{Burgio:2012bk}. In Fig.~\ref{fig:gribov:ghost:lc:n24t24_allcopies} we see that for the $24^4$ lattices the ghost form factor is drastically enhanced in the IR as the number of repetitions of the lc-strategy increases. \begin{figure}[phtb] \includegraphics[width=0.49\columnwidth]{fig7a} \includegraphics[width=0.49\columnwidth]{fig7b} \caption{The ghost form factor after gauge fixing to the lowest-eigenvalue copy with increasing number of trials from 10 to 10,000 on $24^4$ lattices at $\beta = 2.2$ (B1, l.h.s.) and $\beta = 2.4$ (B3, r.h.s.).} \label{fig:gribov:ghost:lc:n24t24_allcopies} \end{figure} For both the coarse and the fine lattice, the effect first becomes visible upon reaching about 100 repetitions.
From this point on, the form factor is clearly increased when going from 100 to 1,000 copies, while the further increase between 1,000 and 10,000 copies is less pronounced. This may be taken as a hint towards an eventual convergence, although no saturation of the effect can really be observed within our available data. The huge difference in the IR is mainly due to a sharper bending in the region between 1 and 3 GeV. It should be noted that the data for different $\beta$ have been presented in different plots on purpose: the ghost form factor is known to suffer from scaling violations on isotropic lattices \cite{Burgio:2012bk}, so that data points for different $\beta$ do not fall on top of each other over the whole momentum range (after multiplicative renormalization). Moreover, since the curves in the lc-approach have not yet converged, the data from different $\beta$ cannot be compared, as the quality of the lc-gauge fixing for given $N_r$ most likely depends on the coupling $\beta$. In Fig.~\ref{fig:gribov:ghost:bc:n24t24_allcopies} we compare the ghost form factor within the bc-approach for the same lattices. \begin{figure}[phtb] \includegraphics[width=0.49\columnwidth]{fig8a} \includegraphics[width=0.49\columnwidth]{fig8b} \caption{The ghost form factor after gauge fixing to the best-functional copy with increasing number of trials from 10 to 1,000 on $24^4$ lattices at $\beta = 2.2$ (B1, l.h.s.) and $\beta = 2.4$ (B3, r.h.s.). The data points for 10,000 copies are omitted since no better copy is found, compare Fig.~\ref{fig:gribov:fmr_vs_GH}.} \label{fig:gribov:ghost:bc:n24t24_allcopies} \end{figure} First of all, the effect of taking more g.f.~repetitions is much less pronounced compared to the lc-approach results in Fig.~\ref{fig:gribov:ghost:lc:n24t24_allcopies}. Secondly, the effect goes in the opposite direction: while the ghost form factor for the lc-approach was enhanced in the IR, the IR form factor in the bc-approach becomes slightly suppressed as the number of g.f.~attempts is increased. The (small) effect is negligible between 10 and 100 repetitions, but becomes somewhat more pronounced in the region between 100 and 1,000 repetitions. In Fig.~\ref{fig:gribov:ghost:n24t24lcbc} we compare the results for the lc- and bc-approach with 10,000 copies, our best values at this lattice size, \begin{figure}[phtb] \center \includegraphics[width=0.8\columnwidth]{fig9} \caption{The ghost form factor from the lattices B1 and B3 after 10,000 copies of bc- and lc-strategy.} \label{fig:gribov:ghost:n24t24lcbc} \end{figure} where we have renormalized the form factor to \begin{equation} d(p=3 \,\giga \electronvolt) = 1\,. \end{equation} The bc-approach data at different $\beta$ fall quite well on top of each other, especially when considering the scaling violations~\cite{Burgio:2012bk}. Compared to the lc-strategy, the error bars for the bc-strategy are much smaller. {To extract the IR-exponent of $d(p)$ we have fitted the data at different $N_r$ for the D1 ensembles in Tab.~\ref{tab:gribov:configs}. Since their UV-tail is not extended enough to reliably extract any UV-logarithmic exponent, we used the simplest function interpolating between a power-law in the IR and a constant in the UV: \begin{equation} d(p) = p^{-\kappa} \frac{P_{n-1}(p)+a\, p^{n+\kappa}}{R_{n-1}(p)+p^n}\,, \label{eq:fit} \end{equation} where $P_{n-1}$ and $R_{n-1}$ are polynomials of degree $n-1$ and the denominator is constrained not to have any real poles (see e.g.\ Ref.~\cite{Burgio:2012bk}).
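For concreteness, a minimal sketch of such a fit for $n=2$ could look as follows; \texttt{p}, \texttt{d} and \texttt{d\_err} are hypothetical arrays of momenta, form factor values and uncertainties, and the no-real-pole constraint $r_1^2 < 4 r_0$ is not enforced by the fitter, so it would have to be checked by hand:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def d_model(p, kappa, a, p0, p1, r0, r1):
    # Eq. (eq:fit) for n = 2: power law p^-kappa in the IR,
    # constant a in the UV
    return (p**(-kappa) * (p0 + p1 * p + a * p**(2 + kappa))
            / (r0 + r1 * p + p**2))

popt, pcov = curve_fit(d_model, p, d, sigma=d_err,
                       p0=[0.5, 1.0, 1.0, 1.0, 1.0, 0.0])
kappa = popt[0]
\end{verbatim}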
Already $n=2$ gives a good enough fit, and the results are shown in Fig.~\ref{fig:gribov:ghost:ani_22509} (continuous lines). \begin{figure}[hptb] \center \includegraphics[width=0.8\columnwidth]{fig10} \caption{Fits of $d(p)$ to an IR-power law for the D1 lattices. The continuous lines are the fits to Eq.~\eqref{eq:fit}, the dotted lines simply fit the last three points to a power law. The respective $\kappa$ values are given in the legend.} \label{fig:gribov:ghost:ani_22509} \end{figure} While in the bc-strategy we consistently found $\kappa \simeq 0.5$ (see Refs.~\cite{Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}), for the lc-strategy the exponent reaches $\kappa \simeq 0.9$ already for $N_r = 50$ and keeps on growing as $N_r$ increases, reaching $\kappa \simeq 1.6$ for $N_r = 5,000$, with no apparent saturation; the values of $\chi^2$/d.o.f.\ range between 0.9 and 1.4. The fits seem, however, to miss the underlying behavior in the lowest IR region: indeed, the good $\chi^2$/d.o.f.\ values come from the $p>1$ GeV data, while below this scale the curves clearly overestimate the IR behavior; changing $n$ does not improve the situation. We have also tried to directly fit the last points to a power law, neglecting any sub-leading behavior: $d(p) = A \, p^{-\kappa}$. The results are also shown in Fig.~\ref{fig:gribov:ghost:ani_22509}. Although for $N_r = 5000$ we obtain a value of $\kappa$ close to the continuum prediction, the evident lack of saturation still means that by increasing $N_r$ we would probably overshoot $\kappa = 1$ as well. Moreover, the low momentum data are known to be affected by large IR-cut-off effects: only simulations at larger volumes could eventually deliver reliable results. All in all, the lack of saturation in the data will pose a challenge to any fitting strategy, even if a theoretically sound Ansatz for $d(p)$ over the whole momentum range, going beyond a simple power law, could be found. We will see in the next section that such a lack of saturation is an even bigger problem for the calculation of the Coulomb potential.} \subsection{Coulomb potential} The most important quantity for Coulomb gauge confinement is the Coulomb potential, since it provides direct access to the Coulomb string tension; this quantity can be computed from the momentum space Coulomb kernel \cite{Burgio:2012bk}: \begin{equation} \label{eq:coulombpotential} V_C(\vec{p}) = g^2\, \mathrm{tr}\left\langle \left( -\hat{\vec{D}} \cdot \nabla \right)^{-1} \left(-\nabla^2\right)\left( -\hat{\vec{D}} \cdot \nabla \right)^{-1} \right\rangle\,. \end{equation} A linearly rising potential at large distances corresponds to a momentum space potential diverging like $1/p^4$ in the IR. Thus, it is convenient to plot the potential such that its intercept with the $y$-axis yields the Coulomb string tension $\sigma_C$ in units of the physical (Wilson) string tension $\sigma$, \begin{equation} \frac{p^4V_C(p)}{8\pi\sigma} \xrightarrow{p \rightarrow 0} \frac{\sigma_C}{\sigma}\,. \label{eq_CP} \end{equation} In Fig.~\ref{fig:gribov:coulpot:n24t24_allcopies} the ratio of Eq.~\eqref{eq_CP} is shown, within the bc-approach, for the same configurations used in Fig.~\ref{fig:gribov:ghost:bc:n24t24_allcopies} for the ghost propagator.\footnote{In the $\beta=2.2$ plot on the left-hand side, the data for $N_r = 10$ are omitted since they contained a configuration with a small eigenvalue, leading to very large error bars.
We will discuss the issue in more detail below.} \begin{figure}[phtb] \includegraphics[width=0.49\columnwidth]{fig11a} \includegraphics[width=0.49\columnwidth]{fig11b} \caption{Coulomb potential in the bc-approach as a function of $N_r$. The data with $N_r=10,000$ are omitted, since they show no difference compared to $N_r=1,000$, see Fig.~\ref{fig:gribov:fmr_vs_GH}. } \label{fig:gribov:coulpot:n24t24_allcopies} \end{figure} In earlier studies of the Coulomb potential, a bump in $p^4V_C(p)$ was observed at around 0.5 to 1 $\giga \electronvolt$, affecting direct estimates of the intercept on the vertical axis with large uncertainties \cite{Nakagawa:2008zza,Voigt:2008rr,Greensite:2009eb,Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}. As Fig.~\ref{fig:gribov:coulpot:n24t24_allcopies} shows, this bump does indeed vanish as the number of gauge copies is increased; at the same time, the statistical precision of the MC-data strongly improves. One might be tempted to assume that one is actually approaching the absolute maximum of the gauge fixing functional as the number of gauge copies is increased, and that the ensemble eventually samples an FMR free of Gribov copies.\footnote{We had put forward such a hypothesis in Ref.~\cite{Vogt:2013jha}, although in a slightly different context.} If this were the case, however, one should expect the Coulomb potential from the alternative lc-approach to yield the same (or a similar) result, as the Gribov-Zwanziger entropic argument in the thermodynamic limit states that the partition function is dominated by configurations lying on the common boundary of the FMR and the first Gribov region. For such configurations, the bc and lc approach -- once converged -- would give identical results. Unfortunately, the lc-result for the Coulomb potential does not corroborate such a conjecture. In Fig.~\ref{fig:gribov:coulpot:n24t24lcbc} the data for the bc- and the lc-approach are compared for the B1 and B3 lattices. \begin{figure}[phtb] \center \includegraphics[width=0.8\columnwidth]{fig12} \caption{The Coulomb potential from the lattices B1 and B3 after 10,000 copies.} \label{fig:gribov:coulpot:n24t24lcbc} \end{figure} While for the ghost propagator the different gauge fixing strategies provided a nice overlap in the UV regime (see Fig.~\ref{fig:gribov:ghost:n24t24lcbc}), the Coulomb potential, over the whole momentum range, is increased by \emph{several orders of magnitude!} The same happens for all lattices that we investigated. Since this result is quite surprising, we have repeated the calculation with a different solver. We usually adopt a conjugate gradient algorithm with Laplace pre-conditioning. To ensure the validity of our solver for exceptional configurations\footnote{The lc-strategy generally attempts to make the Faddeev--Popov\ operator ill-conditioned, but for some configurations with a very small eigenvalue $\lambda_1$ it becomes nearly singular, and its precise inversion in the Coulomb potential is numerically challenging.} we compared the results of our conjugate gradient to a publicly available C++ implementation \cite{Villa2012} of the MINRES algorithm \cite{Paige:1975:SSI}. Both algorithms yield the same solution up to numerical precision.
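Schematically, such a solver cross-check amounts to the following Python sketch (with \texttt{fp\_matvec} a hypothetical routine applying the lattice Faddeev--Popov\ operator to a vector, and \texttt{n}, \texttt{b} the corresponding dimension and source); it is an illustrative analogue of the actual C++/CUDA comparison, not the production code:
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg, minres

A = LinearOperator((n, n), matvec=fp_matvec, dtype=np.float64)
x_cg, info_cg = cg(A, b)        # conjugate gradient
x_mr, info_mr = minres(A, b)    # MINRES, robust for ill-conditioned A
assert np.allclose(x_cg, x_mr)  # both solvers must agree
\end{verbatim}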
Interestingly, while the Coulomb potential in the lc-approach computed from the kernel of Eq.~\eqref{eq:coulombpotential} apparently yields physically nonsensical results, the alternative definition proposed in Refs.~\cite{Marinari:1992kh,Greensite:2003xf}, which is based on short Polyakov lines $P_t$ of length $t$ and the correlator of temporal links $U_0$, \begin{align} \label{eq:correlator} a V_C(|\vec{x}-\vec{y}|) &= -\lim_{t\rightarrow 0} \frac{d}{dt} \log \left\langle \mathrm{tr}\,{P_t(\vec{x})P_t^\dagger(\vec{y})}\right\rangle\nonumber\\[2mm] &= -\log \left\langle \mathrm{tr}\,{U_0(\vec{x})U_0^\dag(\vec{y})}\right\rangle\,, \end{align} seems to work in all cases, cf.~Fig.~\ref{fig:gribov:u0u0_b22509}. As in the case of the gluon propagator, the effect of choosing different g.f.~strategies and Gribov copy selections is quite modest. To extract the Coulomb string tension we fitted $V_C$ from Eq.~\eqref{eq:correlator} to \begin{equation} V_C(r) = \frac{\alpha}{r}+\sigma_C\, r + \text{const.}\,, \end{equation} where the L\"uscher-term $\alpha = -\frac{\pi}{12}$ is kept fixed. In the range $r/a \in [6,14]$ we find a Coulomb string tension varying between $(0.66 \,\giga \electronvolt)^2$ (bc 5) and $(0.77 \,\giga \electronvolt)^2$ (lc 500), with $\chi^2$/d.o.f.\ between $0.58$ (lc 500) and $0.95$ (bc 5). \begin{figure}[phtb] \center \includegraphics[width=0.8\columnwidth]{fig13} \caption{Coulomb potential in position space from the $\left<U_0\,U_0^\dag\right>$ correlator in Eq.~(\ref{eq:correlator}) (D1 lattice). The bc configurations are preconditioned with simulated annealing.} \label{fig:gribov:u0u0_b22509} \end{figure} \section{Conclusions} In this paper we have studied the effect of fixing, on the lattice, the Coulomb gauge to the copy closest to the Gribov horizon, i.e.~the copy with the smallest non-vanishing eigenvalue of the Faddeev--Popov\ operator (\emph{lc-approach}). This prescription {\it de facto} implements the proposal of Ref.~\cite{Cooper:2015sza}. The main observation we made is that the size of the smallest eigenvalue saturates very slowly, if at all, with the number of gauge-fixing attempts, see e.g.~Fig.~\ref{fig:gribov:fmr_vs_GH}. Of course we are still far from exploring the whole Gribov region, as Fig.~\ref{fig:gribov:number_gribovcopies} suggests; still, our result is somewhat surprising: in light of the entropic argument usually made within the Gribov-Zwanziger scenario, one would have intuitively expected the smallest eigenvalue of the Faddeev--Popov\ operator to be bounded from below by some effective IR cutoff induced by the finite lattice volume; we, however, see no such saturation even after $N_r = 10,000$ gauge fixing repetitions. The small eigenvalues heavily affect the IR behavior of the ghost propagator and, more importantly, the Coulomb potential extracted from the kernel in Eq.~\eqref{eq:coulombpotential}. {The first effect can be regarded as positive to some extent, as it moves the infrared behaviour of the ghost propagator towards the continuum prediction and thus reduces the violation of the sum rule. As with the size of the smallest eigenvalue, we do not see a saturation with the number of gauge copies, and the ghost exponent eventually tends to overshoot the continuum prediction.
However, given the arbitrariness in the fits used to extract the exponent, it is at least conceivable that the \emph{lc approach} could indeed be made to agree with the continuum.} {Much more severe is the second effect, on the Coulomb potential, which yields results that are at odds with physical expectations.} The dramatic increase of the potential extends over the entire momentum range and also affects the Coulomb string tension, to the point that the results are physically nonsensical. As the effect on the eigenvalues has not yet saturated with $N_r = 10,000$ gauge fixing repetitions, exploring the Gribov region further by increasing $N_r$ should make things even worse. We can think of {several} possible interpretations of our result. First, it could be that merely constraining the lowest eigenvalue is insufficient to detect the physically relevant configurations. From the entropic argument one expects the partition function to be peaked on the common boundaries of the first Gribov region and the fundamental modular region, i.e.~on configurations where the absolute maxima of Eq.~\eqref{eq:gribov:gff} become degenerate. The multiple flat directions allow for further refinements of the lc-prescription; for instance, a restriction to configurations where {\it at least} the two lowest eigenvalues are small {\it and} (nearly) degenerate could lead to the correct physics. Such an investigation is, although numerically demanding, in principle feasible, and its implementation is currently under study. A second possibility is that the Coulomb potential as calculated from Eq.~\eqref{eq:coulombpotential} involves the inverse of the ill-conditioned lattice Faddeev--Popov\ operator, whose kernel may be sensitive to the exact lattice definition and to (yet to be determined) discretization artifacts. The lc-procedure would then bring this defect to the fore and amplify it, ultimately making the lattice definition impractical. The fact that the alternative definition given in Eq.~\eqref{eq:correlator}, which requires no such operator inversion, always works well might indeed point in this direction. The fact that no saturation for the smallest eigenvalue could be found also hints towards spurious artifacts in the low-lying spectrum of the lattice Faddeev--Popov\ operator. If this issue could be resolved and a saturation could be found, it is also conceivable that a theoretically motivated \emph{Ansatz} for the fit to the data in Fig.~\ref{fig:gribov:ghost:ani_22509} might still bring the results into agreement with the continuum predictions, e.g.~the ghost exponent $\kappa=1$. Alternatively one could argue that, since the fundamental discretization of Yang-Mills theory is known to possess lattice artifacts which affect gauge invariant, topological observables \cite{Barresi:2004qa,Burgio:2006dc,Burgio:2006xj,Burgio:2014yna}, it {is} conceivable that they also influence gauge dependent quantities. In this case, it is the discretization of the model itself which would introduce spurious quasi-zero modes in the FP operator, which subsequently affect all quantities that require its inversion (such as the ghost propagator or the Coulomb potential). By contrast, ordinary correlators that require no FP inversion are benign, cf.~Eq.~\eqref{eq:correlator}. To test such a hypothesis one would, however, need to explore Coulomb gauge in algorithmically demanding alternative discretizations of the Yang-Mills action \cite{Barresi:2003jq,Barresi:2006gq,Burgio:2006dc,Burgio:2006xj}.
{Finally, it is also conceivable that the GZ scenario realised in the Hamiltonian approach does not describe the lattice results at all, and that a refinement of the Coulomb Hamiltonian would be necessary, similarly to what was conjectured in Landau gauge \cite{Cucchieri:2011ig}. If such a refinement is to remain renormalizable, however, the additional terms would dominantly affect the infrared regime, and hence the sum rule, but they could not account for the dramatic increase of the Coulomb potential observed in the \emph{lc} approach over the \emph{entire} momentum range. More generally, the numerical investigations in this paper show that there is not one single consistent version of Coulomb gauge on the lattice, at least not within current computational limits, and it is hence unclear in which way to extend the continuum GZ scenario. At the moment, we have no strong evidence for an extension of the present continuum formulation, i.e.\ the standard Hamiltonian approach realising the GZ horizon condition remains our best continuum description so far.} A by-product of our investigation was the systematic improvement of the search for the best gauge functional value (bc-approach) with a high number of g.f.~repetitions. While in this case gluon and ghost propagators (Figs.~\ref{fig:gribov:gluonprop} and \ref{fig:gribov:ghost:bc:n24t24_allcopies}) do not change as compared to previous investigations \cite{Burgio:2008jr,Burgio:2009xp,Burgio:2012ph,Burgio:2012bk,Burgio:2013naa,Burgio:2015hsa}, the Coulomb potential loses the ``bump'' in the low momentum region found in previous works, which allows for a much more reliable estimate of the Coulomb string tension in this setup. For a true high-precision determination of $\sigma_C$, however, larger volumes and a systematic finite-size analysis would be required. \begin{acknowledgments} This work was partially supported by the Deutsche Forschungsgemeinschaft under the contract DFG-Re 856/10-1. H.V. wishes to thank the Evangelisches Studienwerk Villigst for financial support. \end{acknowledgments} \bibliographystyle{apsrev4-1}
{ "attr-fineweb-edu": 1.944336, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd-Y4dbghX5eqgGDo
\section{Public implementation} \label{sec:code} The \phik correlation analyzer code is publicly available as a Python library through the PyPI server, and from GitHub at \url{https://github.com/KaveIO/PhiK}. Install it with the command: \texttt{pip install phik} The web page \url{https://phik.readthedocs.io} contains a description of the source code, a tutorial on how to set up an analysis, and working examples of how to use and run the code. \section{Conclusion} \label{sec:conclusion} We have presented a new correlation coefficient, \phik, based on the $\chi^2$ contingency test, with Pearson-like behavior and the practical feature that it applies to all variable types alike. Compared to Cram\'er's $\phi$, the calculation of \phik is stable against the binning per interval variable, making it easy to interpret, and contains a noise correction against statistical fluctuations. The asymptotic approximation breaks down for sparse and low-statistics data sets. To evaluate the statistical significance of the hypothesis test of variable independence, a hybrid approach is proposed where, using the $G$-test statistic, a number of Monte Carlo simulations is used to determine the effective number of degrees of freedom and to fit an analytical, empirical description of the $\chi^2$ distribution. We have evaluated the statistical significance of outlier frequencies with respect to the factorization assumption, which is a helpful technique for interpreting any dependency found, \textit{e.g.} between categorical variables. Three practical use-cases are discussed, studying the numbers of insurance claims, survey responses, and clustering compatibility, but plenty of other applications exist. The methods described are easy to apply through a Python analysis library that is publicly available. \section*{Acknowledgments} \label{sec:Acknowledgments} We thank our colleagues of KPMG's Advanced Analytics \& Big Data team for fruitful discussions and in particular Ralph de Wit for carefully reading the manuscript. This work is supported by KPMG Advisory N.V. \addcontentsline{toc}{section}{References} \section{Introduction} \label{sec:intro} The calculation of correlation coefficients between paired data variables is a standard tool of analysis for every data analyst. Pearson's correlation coefficient~\cite{pearson_1895} is a \textit{de facto} standard in most fields, but by construction only works for interval variables. While many coefficients of association exist, each with different strengths, we have not been able to identify a correlation coefficient\footnote{The convention adopted here is that a correlation coefficient is bound, \textit{e.g.} in the range $[0,1]$ or $[-1,1]$, and that a coefficient of association is not.} with Pearson-like characteristics and a sound statistical interpretation that works for interval, ordinal and categorical variable types alike. This paper describes a novel correlation coefficient, \phik, with properties that -- taken together -- form an advantage over existing methods. Broadly, it covers three related topics typically encountered in data analysis: \begin{enumerate} \item Calculation of the correlation coefficient, \phik, for each variable-pair of interest. The correlation \phik follows a uniform treatment for interval, ordinal and categorical variables. This is particularly useful in modern-day analysis when studying the dependencies between a set of variables with mixed types, where some variables are categorical.
The values for the levels of correlation are bound in the range $[0,1]$, with $0$ for no association and $+1$ for complete association. By construction, the interpretation is similar to Pearson's correlation coefficient, and it is equivalent in the case of a bi-variate normal input distribution. Unlike Pearson, which describes the average linear dependency between two variables, \phik also captures non-linear relations. Finally, \phik is extendable to more than two variables. \item Evaluation of the statistical significance of each correlation. The correlation \phik is derived from Pearson's $\chi^2$ contingency test~\cite{barnard_1992}, \textit{i.e.} the hypothesis test of independence between two (or more) variables in a contingency table, henceforth called the factorization assumption. In a contingency table each row is a category of one variable and each column a category of a second variable. Each cell describes the number of records occurring in both categories at the same time. The asymptotic approximation commonly advertised to evaluate the statistical significance of the hypothesis test, \textit{e.g.} by statistics libraries such as \texttt{R}~\cite{r.chisq.test} and \texttt{scipy}~\cite{scipy.stats.chi2contingency}, makes particular assumptions on the number of degrees of freedom and the shape of the $\chi^2$ distribution. This approach is unusable for sparse data samples, as may occur for two variables with a strong correlation and for low- to medium-statistics data samples, and leads to incorrect $p$-values. (Examples follow in Section~\ref{sec:significance}.) Presented here is a robust and practical statistical prescription for the significance evaluation of the level of variable association, based on an adjustment of the $\chi^2$ distribution when using the $G$-test statistic~\cite{sokal_rohlf_2012}. \item Insights in the correlation of each variable-pair, by studying outliers and their significances. To help interpret any relationship found, we provide a method for the detection of significant excesses or deficits of records with respect to the expected values in a contingency table, so-called outliers, using a statistically independent evaluation for the expected frequency of records. We evaluate the significance of each outlier frequency, putting particular emphasis on the statistical uncertainty of the expected number of records and on the scenario of low statistics data samples. \end{enumerate} The methods presented in this work can be applied to many analysis problems. Insights in variable dependencies serve as useful input to all forms of model building, be it classification or regression based, such as the identification of customer groups, outlier detection for predictive maintenance or fraud analytics, and decision making engines. More generally, they can be used to find correlations across (big) data sets, and correlations over time (in correlograms). Three use-cases are discussed: the study of numbers of insurance claims, survey responses, and clustering compatibility. This document is organized as follows. A brief overview of existing correlation coefficients is provided in Section~\ref{sec:overview}. Section~\ref{sec:pearson} describes the contingency test, which serves as input for Section~\ref{sec:phik}, detailing the derivation of the correlation coefficient \phik. The statistical significance evaluation of the contingency test is discussed in Section~\ref{sec:significance}.
In Section~\ref{sec:outliers} we zoom in on the interpretation of the dependency between a specific pair of variables, where the significance evaluation of outlier frequencies in a contingency table is presented. Three practical applications of this can be found in Section~\ref{sec:applications}. Section~\ref{sec:code} describes the implementation of the presented algorithms in publicly available Python code, before concluding in Section~\ref{sec:conclusion}. \section{Interpretation of relation between two variables} \label{sec:outliers} After the evaluation of \phik and its significance, the specific relationship between two variables is typically inspected. To facilitate the interpretation of any dependency found, the significance of observed excesses or deficits of records with respect to expected values in the contingency table is discussed here. The statistical significance for each cell in the table is obtained from a hypothesis test between a background-only and signal-plus-background hypothesis for a Poisson process. Such hypothesis tests, \textit{i.e.} for the presence of new sources of (Poisson) counts on top of known ``background'' processes, are frequently performed in many branches of science, for example gamma ray astronomy and high energy physics, and have been discussed extensively in the literature~\cite{Cousins:2007bmb}. We employ a measure of statistical significance commonly used in both fields, one that accounts for the mean background rate having a non-negligible uncertainty. The background estimate and its uncertainty have been derived from an auxiliary or side-band measurement, typically assumed to be a Poisson counting setup, as in the case of the ABCD estimate of Section~\ref{sec:indepfreq}. Here we use as background estimate the statistically independent frequency estimate (and related uncertainty) of Eqn.~\ref{eq:abcd} (\ref{eq:abcderror}). The hybrid Bayesian-Frequentist method from Linnemann~\cite{Linnemann:2003vw} is used to evaluate the probability of the hypothesis test ($p$-value). Per cell, Linnemann's probability calculation requires the observed count $n_o$, the expected count $n_e$, and the uncertainty on the expectation $\sigma_e$: \begin{equation} \label{eq:linneman} p_{B} = B \big( 1/(1+\tau),\, n_o,\, n_e\, \tau + 1 \big) \,, \end{equation} where $B$ is the incomplete Beta function, and $\tau = n_e / \sigma_e^2$. We apply four corrections on top of this calculation: \begin{enumerate} \item The incomplete Beta function does not return a number for $n_o=0$, when by construction the $p$-value should be $1$. \item The incomplete Beta function is undefined when $\sigma_e=0$, in which case we simply revert to the standard Poisson distribution. \item The incomplete Beta function always returns $1$ when $n_e=0$, irrespective of $n_o$ and $\sigma_e$. The scenarios $n_o=0$ and $\sigma_e=0$ are captured by the previous two fixes. In all other cases we set $n_e=\sigma_e$ before evaluating Eqn.~\ref{eq:linneman}. In particular, this procedure prevents (minus) infinite significances for low statistics cells where, uncertainty-wise, these are not expected\footnote{When $n_e=0$, $B$ or $C$ is zero in Eqn.~\ref{eq:abcd}, so Eqn.~\ref{eq:abcderror} typically gives $\sigma_e<1$. For example, for $n_e = 0$, $\sigma_e =0.14$, and $n_o=0$ ($1$), correction three to $p_B$ results in $Z = -0.29$ $(1.10)$. Varying $n_e$ between $\sigma_e/2$ and $3\sigma_e/2$ gives a maximum absolute shift in $Z$ of $0.05$ ($0.12$).
For the purpose of outlier detection we deem this level of systematic error acceptable.}. \item As we combine an integer-valued measurement (namely the observed frequency) with a continuous expectation frequency and uncertainty, resulting in a continuous (combined) test statistic, we correct $p_B$ to Lancaster's mid-$P$ value~\cite{lancaster_1961}, which is the null probability of more extreme results plus only half the probability of the observed result\footnote{The standard $p$-value definition is: $p = P(s \geq {\rm observed}\, |\, {\rm background} )$.}: \begin{equation} p = P(s = {\rm observed}\, |\, {\rm background} ) / 2 + P(s > {\rm observed}\, |\, {\rm background} )\,, \end{equation} with $s$ the number of cell counts integrated over. This $p$-value is then translated into a $Z$-value using Eqn.~\ref{eq:zscore}. When observing exactly the expected frequency, Lancaster's mid-$P$ value ($Z$-value) is by construction close to $0.5$ ($0$), even at low statistics. Likewise, for background-only samples the Lancaster mid-$P$ correction centers the $Z$ distribution around zero. \end{enumerate} \begin{figure}[htp] \centering \begin{subfigure}[t]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{zdistribution_200.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{zdistribution_500.pdf} \caption{} \end{subfigure}% \caption{The distribution of outlier significances measured in 1000 randomly generated data sets of two variables obeying a uniform probability mass distribution for a data set containing a) 200 and b) 500 records, collected in a $10\times 10$ contingency table. Normal distributions have been overlaid. In plot a) the $Z$ distributions from $0$, $1$, $2$, and $3$ observed entries per cell are shown as well.} \label{fig:outlierz} \end{figure} Fig.~\ref{fig:outlierz}a shows the $Z$ distribution from 1000 randomly generated samples of two variables obeying a uniform probability mass distribution, \textit{i.e.} the samples have no variable dependency. Each sample contains only 200 records collected in a $10\times 10$ contingency table, so on average each cell contains $2.0$ records. As can be seen from the Gaussian curve, even for such low statistics samples the $Z$ distribution is fairly consistent with a normal distribution, albeit slightly shifted towards negative values. Fig.~\ref{fig:outlierz}b shows a similar distribution, built from samples with on average $5.0$ records per contingency table cell. Clearly, with more statistics the distribution converges to the normal distribution relatively quickly\footnote{With an average of less than $1.0$ record per bin, the $Z$ distribution gets more distorted and breaks up into individual peaks of $0$, $1$, $2$, etc.\ observed entries per cell. The distribution peaks at negative $Z$ values, corresponding to no observations, and the tail at negative $Z$ gets truncated. Relevant here: the mean of the distribution remains close to zero, its width is (slightly less than) one, and the positive tail is similar to that of a normal distribution.}. To filter out significant excesses or deficits of records over the expected values in the contingency table, one simply demands $|Z|$ to be greater than a specified value, \textit{e.g.} 5 standard deviations. Note that for two variables with a dependency, excesses and deficits always show up together, since the frequency estimates of Section~\ref{sec:indepfreq} smooth the input distribution.
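For illustration, the core of the per-cell evaluation, i.e.\ Eqn.~\ref{eq:linneman} with the first three corrections, translates into a few lines of Python; the sketch below is a simplified version of the full procedure (the Lancaster mid-$P$ correction is omitted, and the final $p$-to-$Z$ conversion assumes the standard inverse-normal relation of Eqn.~\ref{eq:zscore}):
\begin{verbatim}
from scipy import special, stats

def outlier_z(n_o, n_e, sigma_e):
    """Simplified significance of observed cell frequency n_o, given
    expectation n_e with uncertainty sigma_e (mid-P omitted; the mid-P
    correction would regularize the n_o = 0 case)."""
    if n_o == 0:
        p = 1.0                                 # correction 1
    elif sigma_e == 0:
        p = stats.poisson.sf(n_o - 1, n_e)      # correction 2: plain Poisson
    else:
        if n_e == 0:
            n_e = sigma_e                       # correction 3
        tau = n_e / sigma_e**2
        # Eq. (eq:linneman): regularized incomplete Beta function
        p = special.betainc(n_o, n_e * tau + 1.0, 1.0 / (1.0 + tau))
    return stats.norm.isf(p)                    # Z = Phi^{-1}(1 - p)
\end{verbatim}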
\begin{figure}[htp] \centering \begin{subfigure}[t]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{outlier_sifinifcance_fake_data2_area_car_color.pdf} \caption{} \end{subfigure} \begin{subfigure}[t]{0.49\linewidth} \centering \includegraphics[width=\textwidth]{outlier_sifinifcance_fake_data2_mileage_car_size.pdf} \caption{} \end{subfigure}% \caption{Significances of excesses or deficits of records over the expected values in a contingency table for a) the categorical variables ``car color'' and ``area'' and b) the ordinal variable ``car size'' and the interval variable ``mileage'', measured on the synthetic data of Table~\ref{tab:data}.} \label{fig:contingency} \end{figure} Two example contingency tables are shown in Fig.~\ref{fig:contingency}, one for a combination of categorical variables, and one for the combination of an interval and ordinal variable, both based on the synthetic car insurance data of Table~\ref{tab:data}. Per cell each figure shows the evaluated $Z$-value. For example, black-colored cars occur significantly more often in suburbs and significantly less often downtown, and XXL-sized cars have significantly higher mileage. In practice these turn out to be valuable plots for helping to interpret correlations, in particular between categorical variables. In essence, for a data sample with a dependency, the contingency table cells with large $|Z|$ values show the variable dependency. \section{Three practical applications} \label{sec:applications} Given a set of mixed-type variables and using the methods described in this work, one can: \begin{itemize} \item Find variable pairs that have (un)expected correlations; \item Evaluate the statistical significance of each correlation; \item Interpret the dependency between each pair of variables. \end{itemize} The methods presented in this work can be applied to many analysis problems, and in particular they are useful for model building purposes. Three interesting applications using the methods presented in this paper are briefly discussed below. \subsection{Modeling the frequency of insurance claims} One interesting application is the modeling of numbers of expected insurance claims, \textit{e.g.} car damage claims as a function of car type, type of residential area, mileage, age of driver, etc. -- a set of variables with a mixture of types. The aggregate loss incurred by an insurer $S$ is the total amount paid out in claims over a fixed time period: $S = \sum_{n=1}^N s_n$, where $s_n$ is an individual claim amount, known as the severity, and $N$ is the total number of claims paid out in the time period. Traditionally it is assumed that the individual claim amounts are mutually independent, and that $N$ does not depend on the values of the claims. The total expected severity is then expressed as a product of the expected number of claims times the average claim amount: $E(S) = E(N)\cdot E(s)$, where each term can be estimated separately. When a vector of variables $\vec{x}$ is available at the individual claim level, this information is incorporated through two independent generalized linear models (GLMs): one for the claim frequency $N$, and the other for the severity $s$. See Ref.~\cite{garrido_genest_schulz_2016} for more information. Here we focus on the GLM for modeling the frequency of insurance claims. Suppose that claims data are available for $m$ different classes of policy holders, and that class $i$ has $N_i$ claims.
Assume the claim frequency for each class is Poisson distributed, $N_i \sim P (\nu_i)$, where $\nu_i$ is the expectation for $N_i$. Let $\vec{x}_i = (x_{i0}, ... , x_{ik})$ be the vector of variables for class $i$ at claim level, with the baseline convention that $x_{i0} \equiv 1$. One writes: \begin{equation} \label{eq:expnu} \nu_i = E(N_i|\vec{x}_i) = g^{-1} (\vec{\alpha}\cdot \vec{x}_i)\,, \end{equation} where $\vec{\alpha} = (\alpha_{0}, ... , \alpha_{k})$ is a vector of regression constants\footnote{Sometimes the ratio of claims to no claims per class of policy holders is modeled instead.}. In GLM terminology $g$ is the link function. When the frequency GLM uses a logarithmic link function, Eqn.~\ref{eq:expnu} simplifies to: \begin{equation} \label{eq:expnu2} \nu_i = e^{\vec{\alpha}\cdot \vec{x}_{i}}\,, \end{equation} yielding a simple rating structure which ensures that $\nu_i > 0$. The logarithmic function reflects the common practice that each variable alters the baseline claim rate by a multiplicative factor. In initial GLM risk models, no relations are typically assumed between the input variables, and each variable category $j$ (or interval bin) is one-hot-encoded, $x_{ij} \in \{0,1\}$, and assigned one model parameter. The number of regression parameters per variable equals its number of categories or interval bins. Note that it is common practice to merge low-statistics categories until they contain sufficient records. Take the example variables of residential area and car type, each with multiple categories. Three classes of policy holders could be: ``city, small car'', ``city, SUV'', and ``countryside, SUV'', where the first two share the regression parameter $\alpha_{\rm city}$, and the last two the regression parameter $\alpha_{\rm SUV}$. The predicted, factorized number of claims for class ``city, SUV'' simply reads: $N_0\, e^{\alpha_{\rm city} + \alpha_{\rm SUV}}$, where $N_0\equiv e^{\alpha_0}$ is the nominal number of claims shared between all classes, and $x_{\rm city} = x_{\rm SUV} = 1$. In a refinement modeling step, to improve the factorized estimates, cross-terms between categories of variable pairs can be added to the linear sum in the exponent of Eqn.~\ref{eq:expnu2}. However, there is an exponentially large number of cross-terms to choose from. Practical modeling questions are: which are the most relevant terms to add? And can they be picked in an effective way that limits their number? To help answer these, realize that the shape of Eqn.~\ref{eq:expnu2} and the assumption of variable independence are identical to the factorization assumption of Eqn.~\ref{eq:dep_est}. A practical approach can then be: \begin{enumerate} \item Using the \phik values and their significances, select the variable pairs with the strongest correlations. \item The most relevant model cross-terms for each variable pair $pq$, having the largest impact on the model's likelihood, can be identified by studying the outliers in the correlation plots of Section~\ref{sec:outliers}. \item Cross-terms can also be included in a manner that limits the number of extra regression parameters. For example, for a given variable pair $pq$, introduce one cross-term parameter $\beta_{pq}$ that affects only the contingency table cells with a $Z$ value greater than a predefined value (and one for those smaller).
To model those outlier cells, use Eqn.~\ref{eq:abcderror}: the cross-term for each selected cell $i\!j$ should scale with the uncertainty on the statistically independent estimate for that cell, $\sigma_{E_{ij}} \beta_{pq} x_{p,i} x_{q,j}$. \end{enumerate} \subsection{Finding unexpected answers in questionnaires} When interpreting questionnaires one is often interested in finding all ``unexpected'' correlations between ordinal or categorical answers given to a set of survey questions (the definition of what constitutes an unexpected correlation is typically survey specific). The methods presented in this paper can help to do so: \begin{enumerate} \item By selecting question-pairs that have an interesting (``unexpected'') $\phi_K$ correlation and significance on the one hand; \item And selecting those with relatively high $|Z|$ values in the contingency tables of their respective answers on the other hand. \end{enumerate} This allows one to compile a list of all answer-pairs significantly deviating from the norm of no correlation, of which the unexpected pairs are a subset. \subsection{Comparison of clustering algorithms} When looking for groups of similar data records, a typical approach is to run multiple unsupervised clustering algorithms to cluster the data, and study the results. In trying to understand the compatibility of the clusters created by the various algorithms, the methods presented in this work come in useful. For each data record, store the cluster-ID assigned by each clustering algorithm. Using this information, one can now: \begin{enumerate} \item Calculate the correlation matrix between the various clustering algorithms; \item For two specific algorithms, study where the two sets of predicted clusters overlap and deviate. \end{enumerate} \section{Measures of variable association} \label{sec:overview} A correlation coefficient quantifies the level of mutual, statistical dependence between two variables. Multiple types of correlation coefficients exist in probability theory, each with its own definition and features. Some focus on linear relationships whereas others are sensitive to any dependency, some are robust against outliers, etc. Typically their values range from $-1$ to $+1$ or $0$ to $+1$, where $0$ means no statistical association, $+1$ means the strongest possible association, and $-1$ means the strongest negative relation. In general, different correlation coefficients are used to describe dependencies between interval, ordinal, and categorical variables. This section briefly discusses existing correlation coefficients and other measures of variable association. This is done separately for interval, ordinal, and categorical variables. In addition, several related concepts used throughout this work are presented. An \textbf{\textit{interval variable}}, sometimes called a continuous or real-valued variable, has well-defined intervals between the values of the variable. Examples are distance or temperature measurements. The Pearson correlation coefficient is the \textit{de facto} standard to quantify the level of association between two interval variables. For a sample of size $N$ with variables $x$ and $y$, it is defined as the covariance of the two variables divided by the product of their standard deviations: \begin{equation} \label{eq:pearsonrho} \rho = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2 }\sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2 }}\,, \end{equation} where $\bar{x}$ and $\bar{y}$ are the sample means.
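As an illustration, Eqn.~\ref{eq:pearsonrho} translates into a few lines of Python; the sketch below is a minimal implementation using numpy (the function name is ours, and \texttt{np.corrcoef} gives the same result):
\begin{verbatim}
import numpy as np

def pearson_rho(x, y):
    # Pearson's rho: covariance divided by the product
    # of the two standard deviations.
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))
\end{verbatim}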
Notably, $\rho$ is symmetric in $x$ and $y$, and $\rho \in [-1,1]$. Extending this to a set of input variables, Pearson's correlation matrix $C$, containing the $\rho$ values of all variable pairs, is obtained from the covariance matrix $V$ as: \begin{equation} C_{ij} = \frac{V_{ij}}{\sqrt{V_{ii}V_{jj}}}\,, \end{equation} where $ij$ are the indices of a variable pair. The Pearson correlation coefficient measures the strength and direction of the linear relationship between two interval variables; a well-known limitation is therefore that non-linear dependencies are not (well) captured. In addition, $\rho$ is known to be sensitive to outlier records. Pearson's correlation coefficient, like many statistical formulas, requires interval variables as input, which can be unbinned or binned. It cannot be evaluated for categorical variables, and ordinal variables can only be used when ranked (see below). A direct relationship exists between $\rho$ and a bi-variate normal distribution: \begin{align} \label{eq:bivar} f_{\mathrm{b.n.}} (x,y\, |\, \bar{x}, \bar{y}, \sigma_{x},\sigma_{y}, \rho) = \frac{1}{2\pi\sigma_{x}\sigma_{y}\sqrt{1-\rho^2}} \exp \Bigg( & -\frac{1}{2(1-\rho^2)}\bigg[\frac{(x-\bar{x})^2}{\sigma_{x}^2} + \frac{(y-\bar{y})^2}{\sigma_{y}^2} \nonumber \\ & - \frac{2\rho(x-\bar{x})(y-\bar{y})}{\sigma_{x}\sigma_{y}}\bigg] \Bigg)\,, \end{align} where $\sigma_{x}$ ($\sigma_{y}$) is the width of the probability distribution in $x$ ($y$), and the correlation parameter $\rho$ signifies the linear tilt between $x$ and $y$. We use this relation in Section~\ref{sec:phik} to derive the correlation coefficient \phik. Another measure is the global correlation coefficient~\cite{james_roos_1975}, which is a number between zero and one obtained from the covariance matrix $V$ that gives the highest possible correlation between variable $k$ and the linear combination of all other variables: \begin{equation} \label{eq:globalcorr} g_k = \sqrt{ 1 - \big[ V_{kk} \cdot (V^{-1})_{kk} \big]^{-1} }\,. \end{equation} An \textbf{\textit{ordinal variable}} has two or more categories with a clear ordering of these categories. For example, take the variable ``level of education'' with five categories: no education, elementary school graduate, high school graduate, college and university graduate, and PhD. A rank correlation measures the statistical relationship between two variables that can be ordered. The rank of a variable is its index in the ordered sequence of values. For ordinal variables a numbering is assigned to the categories, \textit{e.g.} 0, 1, 2, 3. Note the equidistant spacing between the categorical values. Examples of rank correlation coefficients are Spearman's $\rho$~\cite{Spearman:1904}, Kendall's $\tau$~\cite{kendall_1938}, Goodman-Kruskal's $\gamma$~\cite{Goodman:1954,Goodman:1959,Goodman:1963,Goodman:1972}, and the polychoric correlation~\cite{drasgow_2006}. The definition of Spearman's $\rho$ is simply Eqn.~\ref{eq:pearsonrho}, using the ranks of $x_i$ and $y_i$ as inputs, essentially treating the ranks as interval variables. This makes Spearman's $\rho$ very robust against outliers. Notably, Goodman-Kruskal's $\gamma$ depends on the order of the two input variables, resulting in an asymmetric correlation matrix. Although ranking is common practice, the assumption of equidistant intervals -- often made implicitly -- can sometimes be difficult to justify.
Adding the category of ``MBA'' to the above example increases the distance between ``PhD'' and ``no education'', where one could argue that this distance should be independent of the number of educational categories. A \textbf{\textit{categorical variable}}, sometimes called a nominal or class variable, has two or more categories which have no intrinsic ordering. An example is the variable gender, with two categories: male and female. Multiple measures of association exist that quantify the mutual dependence between two (or more) categorical variables, including Pearson's $\chi^2$ contingency test~\cite{barnard_1992}, the $G$-test statistic~\cite{sokal_rohlf_2012}, mutual information~\cite{cover_thomas_2006}, Fisher's exact test~\cite{fisher_1922,fisher_1970}, and Barnard's test~\cite{Bernard:1945,barnard_1947}. For an overview see Ref.~\cite{agresti_1992}. These measures determine how similar the joint distribution $p(x,y)$ is to the product of the factorized marginal distributions $p(x)p(y)$. Each measure of association consists of a sum of contributions, one from each cell of the contingency table, and therefore does not depend on the intrinsic ordering of the cells. Though typically limited to categorical variables, these test statistics can also be applied to interval and ordinal type variables. However, their values are not bounded in the range $[0,1]$, and can become large. Moreover, their interpretation is often complex, as their values not only depend on the level of association, but also on the number of categories or intervals and the number of records. Most comparable to this work is Cram\'er's $\phi$~\cite{cramer_harald_1999}, a correlation coefficient meant for two categorical variables, denoted as $\phi_C$, based on Pearson's $\chi^2$ test statistic, and with values between $0$ (no association) and $+1$ (complete association): \begin{equation} \label{eq:phic} \phi_{C} = \sqrt{\frac{\chi^2}{N\min(r-1,k-1)}}\,, \end{equation} where $r$ ($k$) is the number of rows (columns) in a contingency table. Notably, with a relatively small number of records, comparable with the number of cells, statistical fluctuations can result in large values of $\phi_C$ without strong evidence of a meaningful correlation. (An example of this follows in Fig.~\ref{fig:rho0}a.) Cram\'er's $\phi$ can also be used for ordinal and binned interval variables. Fig.~\ref{fig:phic} shows $\phi_C$ for a binned bi-variate normal input distribution with correlation parameter $\rho$. Compared to Pearson's $\rho$, $\phi_C$ shows relatively low values for most values of $\rho$, and only shoots up to one for values of $\rho$ close to one. Moreover, the value found for $\phi_C$ is dependent on the binning chosen per variable, as also seen in the figure. These effects make $\phi_C$ difficult to interpret, and essentially unsuitable for interval variables. \begin{figure}[htp] \centering \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[width=\textwidth]{pearson_vs_cramer500.pdf} \end{minipage}% \caption{Cram\'er's $\phi$ versus Pearson's $\rho$. The two curves for Cram\'er's $\phi$ have been evaluated with different numbers of rows $r$ and columns $k$: $10\times10$ and $5\times20$ bins. The value found for $\phi_C$ is dependent on the number of rows and columns.
Smaller values for $\phi_C$ are found for the case of an equal number of rows and columns (red) compared with different numbers of rows and columns (green).} \label{fig:phic} \end{figure} One more alternative is the contingency coefficient $C_p$, which suffers from the disadvantage that its maximum value depends on the numbers of categories $r$ and $k$, and in general does not reach one. The recommendation~\cite{smith_2009} is not to use $C_p$ to compare correlations in tables with variables that have different numbers of categories (\textit{i.e.} when $r \neq k$). To address the aforementioned issues, in this paper we define the coefficient of correlation \phik, derived from Pearson's $\chi^2$ contingency test in Section~\ref{sec:phik}, and its statistical significance, derived using the $G$-test in Section~\ref{sec:significance}. \section{Test of variable independence} \label{sec:pearson} The contingency test, also called the test of variable independence, determines if a significant relationship exists between two (or more) categorical variables. Though usually performed on two categorical variables, the test can equally be applied to ordinal and binned interval variables, and can be extended to an arbitrary number of variables. Specifically, the contingency test indicates how well the joint data distribution $p(x,y)$ of variables $x$ and $y$ is described by the product of its factorized marginal distributions $p(x)p(y)$. Throughout this paper we employ two contingency tests, where each compares the observed frequency of each category for one variable with the expectation across the categories of the second variable: \begin{enumerate} \item Pearson's $\chi^2$ test: \begin{equation} \label{eq:chi2} \chi^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}\,, \end{equation} which is used to define the correlation coefficient \phik in Section~\ref{sec:phik}. Pearson's $\chi^2$ test is the standard test for variable independence. \item The $G$-test, sometimes called the log-likelihood ratio test: \begin{equation} \label{eq:gtest} G = 2 \sum_{i,j} O_{ij} \log (O_{ij} / E_{ij}) \,, \end{equation} which is used to evaluate the significance of the contingency test in Section~\ref{sec:significance}. The sum is taken over all non-empty cells. \end{enumerate} In both formulas, $O_{ij}$ ($E_{ij}$) is the observed (expected) frequency of records for row $i$ and column $j$ of the contingency table. The stronger the dependency between $x$ and $y$, the less well their joint distribution is modeled by the factorized distribution $p(x)p(y)$, and the larger the value of each test statistic. Under the factorization assumption, the expected frequencies can be obtained in two ways: statistically dependent and independent. \subsection{Dependent frequency estimates} \label{sec:depfreq} The default method of frequency estimation for row $i$ and column $j$ includes $O_{ij}$, so $E_{ij}$ is statistically dependent on the observed frequency of its bin. The expected frequency for the two variables is calculated as: \begin{equation} \label{eq:dep_est} E_{ij} = N\, p_{r}(i)\, p_{k}(j) = \frac{(\sum_{n=1}^{k}O_{in}) ( \sum_{m=1}^{r}O_{mj} )} {N}\,, \end{equation} where $p_{r}(i)$ ($p_{k}(j)$) is the $i^{\rm th}$ ($j^{\rm th}$) bin of the row-projected (column-projected) marginal probability mass function (p.m.f.) and $N$ is the number of records.
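For concreteness, Eqn.~\ref{eq:dep_est} amounts to an outer product of the row and column totals; a minimal Python sketch (numpy assumed, function name ours) for a contingency table \texttt{O} of observed frequencies:
\begin{verbatim}
import numpy as np

def dependent_estimates(O):
    # Outer product of the row totals and column totals, divided by N.
    N = O.sum()
    return np.outer(O.sum(axis=1), O.sum(axis=0)) / N
\end{verbatim}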
The statistical dependency between $E_{ij}$ and $O_{ij}$ arises as the expectation $E_{ij}$ for cell $ij$ includes the cell's observation $O_{ij}$ in both the sum over columns and rows, and as part of $N$. The formula can be easily extended to an arbitrary number of variables. We use Eqn.~\ref{eq:dep_est} for the definition of \phik in Section~\ref{sec:phik} and for the calculation of its significance in Section~\ref{sec:significance}, as this distribution matches the observed frequencies most closely. \subsection{Independent frequency estimates} \label{sec:indepfreq} The second method of estimation of $E_{ij}$ excludes $O_{ij}$, \textit{i.e.} is statistically independent of the observed frequency of records for row $i$ and column $j$. This estimate, known in high energy physics as the ABCD formula~\cite{Aaboud:2017nhr}, is given by: \begin{equation} \label{eq:abcd} E_{ij} = \frac{B_{ij}\,C_{ij}}{D_{ij}} = \frac{(\sum_{n\neq j} O_{in}) ( \sum_{m\neq i} O_{mj} )} { \sum_{m\neq i} \sum_{n\neq j} O_{mn} }\,, \end{equation} where by construction $O_{ij}$ is not part of $E_{ij}$, which allows for an objective comparison between observed and expected frequencies per bin. This formula can also be extended to more variables, except that the denominator of Eqn.~\ref{eq:abcd}, which is different for each pair of indices, can easily become zero for low statistics samples. Note that $B_{ij}$, $C_{ij}$, and $D_{ij}$ are sums of frequencies, each obeying Poisson statistics, and are statistically independent. Consequently, the statistical uncertainty on $E_{ij}$ is evaluated with straightforward error propagation~\cite{ku_1965} as: \begin{equation} \label{eq:abcderror} \sigma_{E_{ij}}^{2} = \frac{\sigma_{B_{ij}}^2 C_{ij}^2}{D_{ij}^{2}} + \frac{\sigma_{C_{ij}}^2 B_{ij}^2}{D_{ij}^{2}} + \frac{\sigma_{D_{ij}}^2 E_{ij}^2}{D_{ij}^{2}}\,. \end{equation} For an observed frequency of $Q$ records, $\sigma_{Q} = \sqrt{Q}$, except when $Q=0$, in which case we set $\sigma_{Q} = 1$. By doing so, when $B_{ij}$ or $C_{ij}$ is zero, and thus $E_{ij}=0$, this approach results in a non-zero error on $E_{ij}$. The statistical uncertainty on the expected frequency, $\sigma_{E_{ij}}$, is only zero when both $B_{ij}$ and $C_{ij}$ are zero. The expectation from Eqn.~\ref{eq:abcd} is built from fewer records than that of Eqn.~\ref{eq:dep_est} and is thus slightly less accurate. Another difference is that the ABCD formula is not a true product of two (or more) factorized marginal distributions, \textit{i.e.} the relative predictions for one row are not identical to those for another row, as is the case for dependent frequency estimates. We use the independent frequency estimates of Eqn.~\ref{eq:abcd} for the detection of significant excesses or deficits of records over expected values in a contingency table in Section~\ref{sec:outliers}, for reasons described there. \section{Definition of \phik} \label{sec:phik} The correlation coefficient \phik is obtained by inverting the $\chi^2$ contingency test statistic through the steps outlined below. Although the procedure can be extended to more variables, the method is described with two variables for simplicity. We define the bi-variate normal distribution of Eqn.~\ref{eq:bivar} with correlation parameter $\rho$ and unit widths, centered around the origin, and in the range $[-5,5]$ for both variables. Using uniform binning for the two interval variables, with $r$ rows and $k$ columns, results in a corresponding bi-variate p.m.f.
With $N$ records, the observed frequencies, $O_{ij}$, are set equal to the probability per bin multiplied by $N$. The expected frequencies $E_{ij}$ are set to the predictions from the bi-variate normal distribution with $\rho\!=\!0$, with $N$ records and the same binning. We then evaluate the $\chi^2$ value of Eqn.~\ref{eq:chi2}. Let us define this function explicitly. First, we perform the integral of the bi-variate normal distribution over the area of bin $ij$ \begin{equation} \label{eq:bnintegral} F_{ij}(\rho) = \int_{\mathrm{area}_{\,ij}} f_{\mathrm{b.n.}} (x,y \,| \,\rho) \,{\rm d}x{\rm d}y \,, \end{equation} leading to the sum over bins: \begin{equation} \label{eq:chi2bn} \chi^2_{\rm b.n.}(\rho,N,r,k) = N\, \sum_{i,\,j}^{k,\,r} \frac{\left(F_{ij}\left(\rho\right) - F_{ij}\left(0\right)\right)^2} {F_{ij}\left(0\right)} \,. \end{equation} This $\chi^2$ value explicitly ignores statistical fluctuations in the observed frequencies, and is a function of the numbers of rows and columns, the number of records $N$, and the value of $\rho$. To account for statistical noise, we introduce a sample-specific pedestal related to a simple estimate of the effective number of degrees of freedom of the bi-variate sample, $n_{\rm sdof}$: \begin{equation} \label{eq:simpleendof} n_{\rm sdof} = (r-1)(k-1) - n_{\rm empty}({\rm expected})\,, \end{equation} with number of rows $r$ and columns $k$, and where $n_{\rm empty}({\rm expected})$ is the number of empty bins of the dependent frequency estimates of the sample. The pedestal is defined as: \begin{equation} \label{eq:chi2min} \chi^2_{\rm ped} = n_{\rm sdof} + c \cdot \sqrt{2 n_{\rm sdof}}\,. \end{equation} The noise pedestal is configurable through the parameter $c$, and by default $c=0$. See Section~\ref{sec:noise} for the impact of the noise pedestal on \phik and Section~\ref{sec:significance} for a discussion of the effective number of degrees of freedom. The maximum possible $\chi^2$ value~\cite{cramer_harald_1999} of the contingency test is: \begin{equation} \label{eq:chi2_max} \chi^2_{\rm max}(N,r,k) = N\min(r-1,k-1)\,, \end{equation} which depends only on the number of records $N$, rows $r$, and columns $k$, and is reached when there is a one-to-one dependency between the two variables. Specifically note that $\chi^2_{\rm max}$ is independent of the shape of the distribution\footnote{Note that the $G$-test does not have this useful feature, making the $G$-test unsuitable for the calculation of \phik.} $p(x,y)$. We scale Eqn.~\ref{eq:chi2bn} to ensure it equals $\chi^2_{\rm ped}$ for $\rho=0$ and $\chi^2_{\rm max}$ for $\rho=1$: \begin{equation} \label{eq:chi2bnscaled} X^2_{\rm b.n.}(\rho,N,r,k) = \chi^2_{\rm ped} + \bigg\{ \frac{\chi^2_{\rm max}(N,r,k) - \chi^2_{\rm ped}}{\chi^2_{\rm b.n.}(1,N,r,k)} \bigg\} \cdot \chi^2_{\rm b.n.}(\rho,N,r,k) \,. \end{equation} This function is symmetric in $\rho$, and increases monotonically from $\chi^2_{\rm ped}$ to $\chi^2_{\rm max}$ as $\rho$ goes from zero to one. We can now perform the necessary steps to obtain the correlation coefficient \phik: \begin{enumerate} \item In case of unbinned interval variables, apply a binning to each one. A reasonable binning is generally use-case specific. As a default setting we take $10$ uniform bins per variable. \item Fill the contingency table for a chosen variable pair, which contains $N$ records, has $r$ rows and $k$ columns.
\item Evaluate the $\chi^2$ contingency test using Pearson's $\chi^2$ test statistic (Eqn.~\ref{eq:chi2}) and the statistically dependent frequency estimates, as detailed in Section~\ref{sec:depfreq}. \item Interpret the $\chi^2$ value as coming from a bi-variate normal distribution without statistical fluctuations, using Eqn.~\ref{eq:chi2bnscaled}. \begin{itemize} \item If $\chi^2 < \chi^2_{\rm ped}$, set $\rho$ to zero. \item Else, with fixed $N$, $r$, $k$, invert the $X^2_{\rm b.n.}$ function, \textit{e.g.} using Brent's method~\cite{brent_1973}, and numerically solve for $\rho$ in the range $[0,1]$. \item The solution for $\rho$ defines the correlation coefficient \phik. \end{itemize} \end{enumerate} The procedure can be extended to more variables by using a multi-variate Gaussian instead of a bi-variate one. In summary, we interpret the $\chi^2$ value found in data as coming from a bi-variate normal distribution with a fixed amount of statistical noise and with correlation parameter \phik. Non-linear relations are captured by \phik through the $\chi^2$ test of variable independence. The correlation \phik reverts to the Pearson correlation coefficient in case of a bi-variate normal input distribution, with uniformly binned interval variables. Unlike Cram\'er's $\phi$, the value of \phik is stable against the number of bins chosen per interval variable, making it unambiguous to interpret. (In Fig.~\ref{fig:phic}, overlaying the \phik values evaluated with (a)symmetric binning gives a line indistinguishable from Pearson's $\rho$.) Like Cram\'er's $\phi$, \phik is affected by statistical fluctuations, which is relevant when the number of records is comparable with the number of cells (or lower); however, unlike Cram\'er's $\phi$, \phik has a correction for the statistical noise. Note that \phik is independent of the order of the two input variables, and that the procedure can be extended to more than two variables\footnote{For more than two variables, follow the same procedure and assume a common correlation for each variable pair of the multivariate normal input distribution.}. All coefficients presented in Section~\ref{sec:overview} are computationally inexpensive to evaluate. The calculation of \phik is computationally expensive because of the integrals of correlated bi-variate normal distributions evaluated in Eqn.~\ref{eq:chi2bnscaled}, but remains perfectly feasible on any modern laptop, typically taking only a fraction of a second per \phik calculation. \subsection{Performance on benchmark samples} A comparison with alternative correlation coefficients based on benchmark samples is given in Fig.~\ref{fig:benchmark}. By construction, the interpretation of \phik is similar to that of Pearson's correlation coefficient, in particular for the bi-variate normal input distributions and the linear shapes, shown in the left and middle columns. Unlike Pearson, however, \phik also captures non-linear relations, as shown in the right column. Moreover, \phik can be determined for categorical, ordinal, and interval variables alike. Note that Cram\'er's $\phi$ gives relatively low values for all samples. \begin{figure}[htp] \centering \begin{minipage}[b]{0.75\linewidth} \centering \includegraphics[width=\textwidth]{benchmark_correlation_2000.pdf} \end{minipage}% \caption{Benchmark sample results for \phik. Each synthetic data set contains 2000 data points.
For the left column, from top to bottom, the bi-variate normal distributions have been generated with true correlations: $\{0.9,0.7,0.4,0,-0.4,-0.7,-0.9\}$. For the middle column a linear data set is generated and rotated around the origin. In the right column various data sets are generated with non-linear correlations. Note that these non-linear correlations are well-captured by \phik, while Pearson's $\rho$ is close to zero for all cases.} \label{fig:benchmark} \end{figure} \subsection{Example correlation matrix} When studying the dependencies of a set of variables with a mixture of types, one can now calculate the correlation matrix for all variable pairs, filled with \phik values, which gives the data analyst a useful overview. For illustration purposes a synthetic data set with car insurance data has been created. The data set consists of 2000 records. Each record contains 5 (correlated) variables of mixed types, see Table~\ref{tab:data}. These data are used throughout the paper to provide insights into the practical application of the methods introduced in this work. The \phik correlation matrix measured on the car insurance data set is shown in Fig.~\ref{fig:phik_example}. \begin{table} \centering \begin{minipage}[t]{0.55\textwidth} \small \begin{tabular}{rrrrr} \hline car color & driver age & area & mileage & car size \\ \hline blue & 60.4 & suburbs & 3339 & XS \\ blue & 30.9 & suburbs & 53370 & XL \\ blue & 18.5 & suburbs & 112557 & XL \\ green & 40.9 & downtown & 29605 & L \\ gray & 23.7 & downtown & 15506 & M \\ multicolor & 60.3 & downtown & 33148 & L \\ white & 66.7 & suburbs & 91132 & XL \\ red & 69.2 & downtown & 152445 & XXL \\ metallic & 43.5 & hills & 147275 & S \\ ... & ... & ... & ... & ... \\ \hline \end{tabular} \end{minipage} \caption{A synthetic data set with car insurance data. The data set consists of 2000 records and is used to illustrate the calculations of \phik, statistical significance (in Section~\ref{sec:significance}) and outlier significance (in Section~\ref{sec:outliers}).} \label{tab:data} \end{table} \begin{figure}[htp] \captionsetup[subfigure]{position=b} \centering \begin{subfigure}[t]{0.55\textwidth} \vspace{0pt} \includegraphics[width=\linewidth]{phik_matrix_fake_data2.pdf} \caption{ } \end{subfigure} \hspace{0pt} \begin{subfigure}[t]{0.32\textwidth} \centering \vspace{0pt} \includegraphics[width=\linewidth]{global_correlation_fake_data2.pdf} \vspace{18pt} \caption{} \end{subfigure} \caption{Correlation coefficients calculated on the synthetic car insurance data set (Table~\ref{tab:data}) containing mixed variable types. a) The \phik correlation matrix. b) The global correlations $g_k$.} \label{fig:phik_example} \end{figure} \subsection{Global correlation coefficients} Besides the variable-pair information available from the correlation matrix $C$ in Fig.~\ref{fig:phik_example}a, it is also interesting to evaluate per variable the global correlation coefficient, $g_k$, of Eqn.~\ref{eq:globalcorr}. Strictly speaking, $g_k$ is only defined for interval variables, as it requires a covariance matrix $V$. Here, we set the variances of all variable types to one (they are anyhow undefined for categorical variables\footnote{Interval variables can always be re-scaled to have unit variance.}) and use $V = C$. Example global correlations measured in the car insurance data are shown in Fig.~\ref{fig:phik_example}b. They give a tenable estimate of how well each variable can be modeled in terms of all other variables, irrespective of variable type.
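As an aside, Eqn.~\ref{eq:globalcorr} is straightforward to evaluate on the \phik correlation matrix; a minimal Python sketch (numpy assumed, function name ours), with \texttt{C} playing the role of $V$ as described above:
\begin{verbatim}
import numpy as np

def global_correlations(C):
    # g_k = sqrt(1 - 1 / (V_kk * inv(V)_kk)), here with V = C
    # since all variances are set to one.
    Vinv = np.linalg.inv(C)
    return np.sqrt(1.0 - 1.0 / (np.diag(C) * np.diag(Vinv)))
\end{verbatim}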
\subsection{Statistical noise correction} \label{sec:noise} The calculation of \phik contains a correction for statistical fluctuations: for any $\chi^2$ value below the sample-specific noise threshold $\chi^2_{\rm ped}$ of Eqn.~\ref{eq:chi2min}, indicating that no meaningful correlation can be determined, \phik is set to $0$ by construction. \begin{figure}[htp] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{rho0_correlation_coefficient.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{pearson_vs_phik_pedestal500.pdf} \caption{} \end{subfigure} \caption{a) The correlation coefficients Pearson's $\rho$, Cram\'er's $\phi$ and \phik, measured for 1000 synthetic data sets with 500 data points each, which are simulated using a bi-variate normal with parameter $\rho=0$. The absolute value of Pearson's $\rho$ is taken, as the measured $\rho$ can also take on negative values. b) The median \phik value measured in 1000 synthetic data sets containing 500 data points each, simulated using a bi-variate normal distribution, as a function of the true correlation $\rho$. The value of \phik is evaluated using three different configurations of the noise pedestal parameter $c$ (see Eqn.~\ref{eq:chi2min}).} \label{fig:rho0} \end{figure} The impact of the noise correction is seen in Fig.~\ref{fig:rho0}a, showing the absolute value of Pearson's $\rho$, Cram\'er's $\phi$, and \phik measured for 1000 synthetic data sets with only 500 records each, simulated from a bi-variate normal distribution with no correlation, and each binned in a $10\times10$ grid. Without the absolute value applied, the distribution of Pearson's $\rho$ values would be centered around zero, as expected. The calculation of Cram\'er's $\phi$ results in a seemingly significant bump at $0.2$. This cannot be interpreted as a meaningful correlation, but results from the statistical noise contributing to each sample's $\chi^2$ value. For \phik, the calculation only kicks into action when $\chi^2 > \chi^2_{\rm ped}$. The noise threshold is set such that about 50\% of the simulated samples get assigned $\phik=0$. The remaining samples result in a wide distribution of \phik values\footnote{Without the noise correction, the \phik distribution shows a peak similar to that of Cram\'er's $\phi$, at a value of 0.5.}. Fig.~\ref{fig:rho0}b shows \phik as a function of the true correlation, where \phik is obtained from the median $\chi^2$ value of 1000 synthetic data sets with 500 data points each. The median gives the most representative, single synthetic data sample. In the calculation of \phik three configurations for the noise pedestal of Eqn.~\ref{eq:chi2min} are tested: no pedestal, and $c\in\{0,1\}$. No pedestal gives \phik values that significantly overshoot the true correlation at low values. Configuration $c=1$ undershoots: the calculation of \phik turns on too late. Configuration $c=0$ follows the true correlation most closely. The residual differences disappear for larger data samples, and we deem this acceptable for this level of statistics (with on average only 5 records per bin). The sample-specific noise threshold $\chi^2_{\rm ped}$ depends mostly on the number of filled cells, and stabilizes for larger sample sizes. Consequently, its impact is rather limited for large samples with a meaningful, non-zero correlation, typically having $\chi^2 \gg \chi^2_{\rm ped}$.
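In code, the noise pedestal of Eqns.~\ref{eq:simpleendof} and \ref{eq:chi2min} is a two-liner; a minimal sketch (numpy assumed, function name ours), where \texttt{n\_empty} denotes $n_{\rm empty}({\rm expected})$:
\begin{verbatim}
import numpy as np

def chi2_pedestal(r, k, n_empty, c=0.0):
    # Simple estimate of the effective number of degrees of freedom,
    # followed by the configurable noise pedestal (default c=0).
    n_sdof = (r - 1) * (k - 1) - n_empty
    return n_sdof + c * np.sqrt(2.0 * n_sdof)
\end{verbatim}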
For small sample sizes, as is also obvious from Fig.~\ref{fig:rho0}b, any correlation coefficient value should first be held up against the significance of the hypothesis test of variable independence -- the topic of Section~\ref{sec:significance} -- before further interpretation. \section{Statistical significance} \label{sec:significance} In practice, when exploring a data set for variable dependencies, the studies of correlations and their significances are equally relevant: a large correlation may be statistically insignificant, and vice versa a small correlation may be very significant. Both Pearson's $\chi^2$ test and the $G$-test asymptotically approach the $\chi^2$ distribution~\cite{barnard_1992}. For samples of a reasonable size (Cochran's rule on what defines a ``reasonable size'' follows below), the default approach to obtain the $p$-value for the hypothesis test of variable independence is to integrate the $\chi^2$ probability density function $g(x|k)$ over all values\footnote{The integral runs up to infinity, even though the contingency test has a maximum test statistic value. In practice the difference is negligible.} equal to or greater than the observed test statistic value $t_{\rm obs}$: \begin{equation} \label{eq:pvalue} p = \int_{t_{\rm obs}}^{\infty} g(x| k) \,{\rm d}x\,, \end{equation} with the p.d.f. of the $\chi^2$ distribution: \begin{equation} \label{eq:chi2dist} g(x | k) = \frac{1}{2^{\mu}\Gamma(\mu)}\cdot x^{\mu-1} \cdot e^{-x/2} \,, \end{equation} where $\mu = k/2$, $\Gamma(\mu)$ is the gamma function, and $k$ is set to the number of degrees of freedom $n_{\rm dof}$. The solution of this integral is expressed in terms of the regularized gamma function. This approach holds for samples of a reasonable size, and when using the $\chi^2$ test statistic or $G$-test. For the independence test of $n_{\rm dim}$ variables, the number of degrees of freedom is normally presented~\cite{bock_velleman_d._2007} as the difference between the number of bins $n_{\rm bins}$ and model parameters $n_{\rm pars}$: \begin{eqnarray} \label{eq:ndof} n_{\rm dof} &=& n_{\rm bins} - n_{\rm pars} \nonumber \\ &=& \Bigg[ \prod_{i=1}^{n_{\rm dim}} n_i \Bigg] - \Bigg[ \sum_{i=1}^{n_{\rm dim}} (n_i \!-\! 1)\ +\ 1 \Bigg] \,, \end{eqnarray} where $n_i$ is the number of categories of variable $i$. Explained using Eqn.~\ref{eq:dep_est}, each dimension requires $(n_i - 1)$ parameters to model its p.m.f., which is normalized to one, and the p.m.f. product is scaled to the total number of records, which requires one more parameter. For just two variables this reduces to: \begin{equation} \label{eq:ndof2d} n_{\rm dof} = (r-1)(k-1)\,. \end{equation} In practice Eqn.~\ref{eq:ndof} does not hold for many data sets, in particular for distributions with unevenly filled or unfilled bins, for example in the case of two (binned) interval variables with a strong dependency. The \textit{effective} number of degrees of freedom, $n_{\rm edof}$, is often smaller than the advocated value, $n_{\rm dof}$, and can even take on floating point values, because the number of available bins is effectively reduced. The asymptotic approximation, Eqns.~\ref{eq:pvalue}-\ref{eq:chi2dist}, breaks down for sparse data sets, for example for two (interval) variables with a strong correlation, and for low-statistics data sets. The literature on evaluating the quality of this approximation is extensive; for an overview see Refs.~\cite{agresti_2001,kroonenberg_verbeek_2018}.
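For reference, when the asymptotic approximation does hold, Eqn.~\ref{eq:pvalue} is a one-liner; a minimal sketch using scipy's $\chi^2$ survival function (function name ours):
\begin{verbatim}
from scipy.stats import chi2

def asymptotic_pvalue(t_obs, ndof):
    # Integral of g(x|k) from the observed test statistic to infinity.
    return chi2.sf(t_obs, ndof)
\end{verbatim}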
Cochran's rule of thumb is that at least $80\%$ of the expected cell frequencies are $5$ counts or more, and that no expected cell frequency is less than $1$ count. For a $2\times2$ contingency table, Cochran recommends~\cite{Cochran:1952,cochran_1954} that the test should be used only if the expected frequency in each cell is at least $5$ counts. How, then, to properly evaluate the $p$-value if the test statistic does not follow the $\chi^2$ distribution, and hence Eqn.~\ref{eq:pvalue} cannot be safely applied? A reasonable approach is to evaluate Eqn.~\ref{eq:pvalue} directly with Monte Carlo data sets, sampled randomly from the distribution of expected frequencies. However, this approach quickly becomes cumbersome for $p$-values smaller than $0.1\%$, \textit{i.e.} once more than 1000 simulations are needed for a decent $p$-value estimate, and becomes practically impossible when at least a million simulations are needed. Given that variable dependencies can be very significant, we prefer a common approach that works for both strong and weak dependencies and for both low- and high-statistics samples. In this section we propose another option: a hybrid approach where a limited number of Monte Carlo simulations is used to fit an analytical, empirical description of the $\chi^2$ distribution. Specifically, we describe two corrections to Eqn.~\ref{eq:pvalue}: \begin{enumerate} \item A procedure to evaluate the effective number of degrees of freedom for a contingency test; \item A correction to Eqn.~\ref{eq:chi2dist} for low statistics samples, when using the $G$-test statistic. \end{enumerate} We conclude the section with a prescription to evaluate the statistical significance of the hypothesis test of variable independence, and a brief overview of sampling methods to help evaluate the $p$-value. \subsection{Effective number of degrees of freedom} \label{sec:nedof} To obtain the effective number of degrees of freedom of any sample, we use the property of Eqn.~\ref{eq:chi2dist} that the average value of a test statistic distribution obeying $g(x|k)$ equals $k$. The effective number of degrees of freedom for any sample is obtained as follows: \begin{enumerate} \item For the two variables under study, the dependent frequency estimates form the factorized distribution most accurately describing the observed data. Using Monte Carlo sampling techniques, this distribution is used to randomly generate $500$ independent synthetic data sets with the same number of records as in the observed data set. Optionally, sampling with fixed row and/or column totals may be chosen. A short discussion of sampling methods is given in Section~\ref{sec:simapp}. \item For each synthetic data set, evaluate the $G$-test statistic using the statistically dependent frequency estimates, as detailed in Section~\ref{sec:pearson}. \item The effective number of degrees of freedom, $n_{\rm edof}$, is taken as the average value of the $G$-test distribution of all generated Monte Carlo samples.
\end{enumerate} \begin{figure}[htp] \centering \begin{minipage}[b]{0.7\linewidth} \centering \includegraphics[width=\textwidth]{smiley_2000.pdf} \end{minipage}% \caption{Example ``smiley'' data set of two interval variables, binned in 20 bins in the $x$ and $y$ directions.} \label{fig:smiley} \end{figure} \begin{figure}[htp] \centering \begin{minipage}[b]{0.7\linewidth} \centering \includegraphics[width=\textwidth]{smiley_ndatapoints_vs_ndof_nchi2sim_5000_different_test_statistics.pdf} \end{minipage}% \caption{The effective number of degrees of freedom as a function of the number of data points in the input data set (Figure~\ref{fig:smiley}). The theoretical number of degrees of freedom, $n_{\rm dof} = 361$, is indicated with the dashed line.} \label{fig:nedof} \end{figure} Fig.~\ref{fig:smiley} shows a ``smiley'' data set of two interval variables, consisting of two blobs and a wide parabola, which are binned into a $20\times20$ histogram, for which we can generate an arbitrary number of records. The bottom two curves in Fig.~\ref{fig:nedof} show $n_{\rm edof}$ obtained for this sample, as a function of the number of records in the data set, $N$, and evaluated using the $G$-test and $\chi^2$ test statistic. Using Eqn.~\ref{eq:ndof2d}, the advocated number of degrees of freedom of this sample equals $361$. For both test statistics this number is only reached for very large sample sizes ($\ge 10^6$), and drops significantly for smaller values of $N$, where the drop is slightly steeper for the $G$-test statistic. The top two curves show the same data set on top of a uniform background of 1 record per cell, ensuring that each cell is always filled, again evaluated using the $G$-test or $\chi^2$ test statistic. Now the $G$-test overshoots, and the $\chi^2$ test statistic happens to level out at the expected value. To understand the behavior of under- and overshooting, realize that $n_{\rm edof}$ relates directly to the distribution of dependent frequency estimates. By construction, the dependent frequency estimates of Eqn.~\ref{eq:dep_est} make non-zero predictions for each bin in the distribution, as long as the input data set contains at least one record per row and column. Under the assumption of variable independence, each bin in the distribution is expected to be filled. First consider the bottom two curves of Fig.~\ref{fig:nedof}. For an uneven input distribution, for example two strongly correlated interval variables, one may expect many bins with low frequency estimates. A data set sampled randomly from a distribution with very low frequency estimates, such as the data set in Fig.~\ref{fig:smiley}, is likely to contain empty bins. On average, high-statistics bins contribute $n_{\rm dof} / n_{\rm bins}\ (\lesssim 1)$ to the $G$-test or $\chi^2$ test statistic, but the low-statistics bins do not follow this regime. As an example, let us focus on the empty bins. By construction their contribution to the $G$-test is zero. The contribution to the $\chi^2$ test statistic is non-zero: $\sum_i E_i$, where the sum runs over all empty bins. It is clear, however, that when $E_{i} \ll 1$ this sum is relatively small and contributes only marginally. Taken over many randomly sampled data sets, this effect reduces the average value of the $G$-test or $\chi^2$ test statistic distribution to lower values, and likewise decreases $n_{\rm edof}$ compared with $n_{\rm dof}$.
For the top two curves, by construction $E_{i} > 1$ for each bin, bringing them closer to the nominal regime and increasing the $G$-test and $\chi^2$ test statistics. For a discussion of the contribution of low-statistics contingency table cells to the $\chi^2$ test statistic, see Ref.~\cite{yates_1934}. In summary, depending on the shape and statistics of the input data set, the effective number of degrees of freedom of a contingency table can differ from the advocated value of $n_{\rm dof}$ (Eqn.~\ref{eq:ndof}). To be certain of the effective value to use, it is best derived as the average value of the test statistic distribution, which is obtained with Monte Carlo simulations of the expected frequency distribution. \subsection{Modified $\chi^2$ distribution} \label{sec:chi2mod} Given a large enough data sample, and given the hypothesis that the observed frequencies result from a random sampling from the distribution of expected frequencies, the $G$-test statistic can be approximated\footnote{The approximation is obtained with a second-order Taylor expansion of the logarithm around 1.} by Pearson's $\chi^2$. In this scenario both the $G$-test and $\chi^2$ value are described by the $\chi^2$ distribution of Eqn.~\ref{eq:chi2dist}, with the same number of degrees of freedom, and applying any one test leads to the same conclusions. \begin{figure}[htp] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{test_statistics_chi2_ndata_50_nchi2_5000_nbins_20.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{test_statistics_gtest_ndata_50_nchi2_5000_nbins_20.pdf} \caption{} \end{subfigure}\\% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{pvalues_chi2test_ndata_50_nchi2_5000_nbins_20.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{pvalues_Gtest_ndata_50_nchi2_5000_nbins_20.pdf} \caption{} \end{subfigure}% \caption{The simulated distribution of $\chi^2$ and $G$-test statistics using the smiley data set as input (Figure~\ref{fig:smiley}). a) The $\chi^2$ distribution (green) is wider than the expected distribution (dashed lines), and b) the $G$-test distribution (green) is narrower. The corresponding $p$-value distributions are shown in panels c) and d), both when using $n_{\rm dof}$ and $n_{\rm edof}$.} \label{fig:divergence} \end{figure} For low statistics samples -- to be more specific, samples with many bins of low expected and observed frequencies -- the distributions of $G$ and $\chi^2$ start to differ, and both distributions diverge from the nominal $\chi^2$ distribution. This can be seen in Fig.~\ref{fig:divergence}, which uses the smiley data set of Fig.~\ref{fig:smiley} as input. The simulated distribution of test statistics is wider than the $\chi^2$-distribution in case of the Pearson $\chi^2$-test statistic (Fig.~\ref{fig:divergence}a) and narrower than the $\chi^2$-distribution in case of the $G$-test statistic (Fig.~\ref{fig:divergence}b). This results in $p$-value distributions with elevated frequencies around zero and one for the Pearson $\chi^2$-test statistic (Fig.~\ref{fig:divergence}c) and lower frequencies near zero and one for the $G$-test statistic (Fig.~\ref{fig:divergence}d).
Note that the effective number of degrees of freedom is much lower than the theoretical value; using $n_{\rm dof}$ in the $p$-value calculation results in uneven distributions peaked towards one. This section addresses the question of whether the test statistic distribution for the contingency test can be modeled for all sample sizes, knowing that Eqn.~\ref{eq:chi2dist} cannot be safely used for low statistics data sets. In particular we are interested in assessing the $p$-values of large test statistic values, coming from possibly strong variable dependencies. To evaluate these correctly, it is important to properly model the high-end tail of the test statistic distribution. We observe empirically that for low-statistics samples the $G$-test statistic distribution converges towards a Gaussian distribution $G(x|\mu,\sigma)$, with mean $\mu=n_{\rm edof}$ and width $\sigma=\sqrt{n_{\rm edof}}$. For high-statistics samples the distribution is modeled by $g(x|k)$, with $k=n_{\rm edof}$ degrees of freedom. Experimentally we find that, for any sample size, the $G$-test statistic distribution can be well described by the combined probability density function $h(x|f)$: \begin{equation} \label{eq:chi2mod} h(x|f) = f \cdot g(x|n_{\rm edof}) + (1-f)\cdot G(x|n_{\rm edof},\sqrt{n_{\rm edof}})\,, \end{equation} where the parameters of $g(x|k)$ and $G(x|\mu,\sigma)$ are fixed as above, and $f$ is a floating fraction parameter in the range $[0,1]$. Below we use $h(x|f)$ as the modified $\chi^2$ p.d.f. to model the $G$-test statistic distribution for any data set. \begin{figure}[htp] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{fit_h_50_nbins_20.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{fit_h_400_nbins_20.pdf} \caption{} \end{subfigure}% \caption{The $G$-test statistic distribution for two smiley data sets containing a) 50 and b) 400 data points. The distribution is modeled with the $h(x|f)$ distribution. } \label{fig:gtestdistfit} \end{figure} Fig.~\ref{fig:gtestdistfit} shows the results of binned log-likelihood fits of $h(x|f)$ to two $G$-test statistic distributions, each with 10k entries generated with the procedure of Section~\ref{sec:nedof}, using the smiley data set with $20\times20$ bins, with a) $N=50$ and b) $N=400$ records per simulated data set. Clearly, these distributions are not well modeled using $g(x|n_{\rm edof})$ or $G(x|n_{\rm edof},\sqrt{n_{\rm edof}})$ alone. The fit of $h(x|f)$ can separate the two component p.d.f.'s given that the RMS-value of $g(x|n_{\rm edof})$ is $\sqrt{2n_{\rm edof}}$ and the width of the Gaussian is fixed to $\sqrt{n_{\rm edof}}$. For $N=50$, the distribution is dominated by the Gaussian, and for $N=400$ by the theoretical $\chi^2$ distribution. Note that $G(x|n_{\rm edof},\sqrt{n_{\rm edof}})$, when present, contributes to the core of the distribution while $g(x|n_{\rm edof})$ dominates in the tails. \begin{figure}[htp] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{ndata_fraction.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{ndataperbin_fraction.pdf} \caption{} \end{subfigure}% \caption{a) The fit fraction $f$ as a function of the number of records per simulated data set, $N$.
b) The same data points, but here $f$ is shown as a function of the average number of records per bin.} \label{fig:ffitrize} \end{figure} Fig.~\ref{fig:ffitrize} uses a similar setup, with $20\times20$ or $50\times50$ bins, where the fit fraction $f$ is shown as a function of a) the number of records per simulated data set, $N$, and b) the average number of records per cell, $\bar{n}$. The fraction $f$ rises as a function of sample size, such that $h(x|f)$ turns into $g(x|n_{\rm edof})$ for large enough data sets. With $20\times20$ bins, for a fraction of $0.50$ ($0.99$) the approximate sample size equals $175$ ($700$), and the average number of entries per cell equals $0.4$ ($1.8$). Note that the fraction reaches $1$ well before $n_{\rm edof}$ reaches the advocated value of $n_{\rm dof}$ in Fig.~\ref{fig:nedof}. In summary, to assess the $p$-value for the hypothesis test of variable independence, in this work we choose to work with the $G$-test statistic, and not Pearson's $\chi^2$, for two reasons: \begin{enumerate} \item We manage to describe the $G$-test statistic distribution most successfully for any sample size. \item As seen from Fig.~\ref{fig:divergence}b, for a large observed test statistic value, corresponding to a large significance of variable dependency, applying the naive formula of Eqn.~\ref{eq:pvalue} over-covers, \textit{i.e.} gives a conservative $p$-value (the green distribution is narrower than expected). \end{enumerate} We use the distribution $h(x|f)$ of Eqn.~\ref{eq:chi2mod} as modified $\chi^2$ distribution in Eqn.~\ref{eq:pvalue} to assess the $p$-value for the hypothesis test. \subsection{Evaluation of significance} \label{sec:signfeval} The statistical significance of the hypothesis test of any variable independence is obtained with the following procedure: \begin{enumerate} \item Calculate the average number of entries per cell, $\bar{n}$. If $\bar{n} < 4$, set $n_{\rm sim} = 2000$, else $n_{\rm sim} = 500$ samples. \item Follow the procedure of Section~\ref{sec:nedof} to generate $n_{\rm sim}$ synthetic data sets based on the dependent frequency estimates of the input data set. For each synthetic data set evaluate its $G$-test value. Take the average of the $G$-test distribution to obtain $n_{\rm edof}$. \item If $\bar{n} < 4$, obtain $f$ by fitting the probability density function $h(x|f)$ to the $G$-test distribution, with $n_{\rm edof}$ fixed. Else, skip the fit and set $f=1$. \item With this fraction, use Eqn.~\ref{eq:pvalue} with $h(x|f)$ as modified $\chi^2$ distribution to obtain the $p$-value for the hypothesis test, using the $G$-test value from data as input. \item The $p$-value is converted to a normal $Z$-score: \begin{equation} \label{eq:zscore} Z = \Phi^{-1}(1-p)\ ;\quad \Phi(z)=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-t^{2}/2}\,{\rm d}t\,, \end{equation} where $\Phi^{-1}$ is the quantile (inverse of the cumulative distribution) of the standard Gaussian, \textit{i.e.} $Z$ is the significance in 1-sided Gaussian standard deviations. For example, the threshold $p$-value of $0.05$ (95\% confidence level) corresponds to $Z=1.64$. When the $p$-value is too small to evaluate Eqn.~\ref{eq:zscore} numerically, at $p\lesssim 10^{-310}$ (anyhow a very strong variable dependency), $Z$ is estimated using Chernoff's bound~\cite{lin_genest_banks_molenberghs_scott_wang_2014} to ensure a finite value.
Let $z\equiv G/n_{\rm edof}$; Chernoff's bound states that for $z>1$: \begin{equation} p \leq f \cdot (z e^{1-z})^{n_{\rm edof} / 2}\,, \end{equation} where we safely ignore the contribution from the narrow Gaussian in $h(x|f)$. This is converted to $Z$ with the approximation (valid for large $Z>1.5$): \begin{equation} Z = \sqrt{u- \log{u}} ;\quad u = -2\log{(p \sqrt{2\pi})}\,. \end{equation} \end{enumerate} \begin{figure}[htp] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{pvalues_50.pdf} \caption{} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=\textwidth]{pvalues_400.pdf} \caption{} \end{subfigure}% \caption{The $p$-value distributions corresponding to the two $G$-test distributions of Fig.~\ref{fig:gtestdistfit}, with a) $N=50$ and b) $N=400$ records per sample. See the text for a description of the two $p$-value calculations performed.} \label{fig:significance} \end{figure} The significance procedure is illustrated in Fig.~\ref{fig:significance}, which shows the $p$-value distributions of the two $G$-test distributions of Fig.~\ref{fig:gtestdistfit}, with $N=50$ and $N=400$ records per sample. The two $p$-value distributions in each figure have been calculated in two ways. \begin{enumerate} \item Using the original $\chi^2$ distribution $g(x|k)$ of Eqn.~\ref{eq:pvalue}, with the effective number of degrees of freedom, $n_{\rm edof}$. This results in the blue distributions. \item Fitting each test statistic distribution with $h(x|f)$ of Eqn.~\ref{eq:chi2mod}, and using that to calculate the $p$-values, resulting in the red distributions. \end{enumerate} The blue distributions drop around zero and one, in particular for the low statistics sample ($N=50$). This is because the $G$-test distribution is narrower than the $\chi^2$ distribution, as shown in Fig.~\ref{fig:divergence}. The red $p$-value distributions, evaluated with $h(x|f)$, are uniform in both setups, as desired. Let us apply the statistical procedure to a low-statistics data sample. A smiley data set with 100 entries, in a histogram with $20\times20$ bins, has correlation value $\phik=0.73$ and test statistic value $G = 227.4$. The $Z$ calculation is done in three successively more refined ways: \begin{enumerate} \item \textit{The asymptotic approximation}: using $n_{\rm dof} = 361$ and the asymptotic $\chi^2$ distribution $g(x|k)$ gives: $Z = -5.7$; \item \textit{Effective number of degrees of freedom}: using $n_{\rm edof} = 189.3$ and the asymptotic $\chi^2$ distribution $g(x|k)$ results in: $Z = 1.9$; \item \textit{Modified $\chi^2$ distribution}: with $n_{\rm edof} = 189.3$, the modified $\chi^2$ distribution $h(x|f)$, and fit fraction $f = 0.10$ one finds: $Z = 2.5$. \end{enumerate} In this example, between the three approaches the $Z$-value increases by more than 8 units! Typically, using the effective number of degrees of freedom gives the largest correction to $Z$, and the modified $\chi^2$ distribution only gives a small correction on top of that. The choice of 2000 synthetic data sets for the fit of $h(x|f)$ is a compromise between accuracy and speed. With this number, $Z$ typically varies at the level of $0.04$, and is calculated in just a fraction of a second. Based on our findings, for any sample size we recommend the $p$-value to be calculated with the modified $\chi^2$ distribution $h(x|f)$, using $n_{\rm edof}$ degrees of freedom.
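A minimal sketch of this recommended calculation, combining Eqns.~\ref{eq:chi2mod}, \ref{eq:pvalue} and \ref{eq:zscore} (scipy assumed, function name ours; the Chernoff fallback of step 5 is omitted for brevity):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2, norm

def significance(g_obs, nedof, f):
    # p-value from the modified chi2 distribution h(x|f): a fraction f
    # of g(x|nedof) plus (1-f) of a Gaussian with mean nedof and
    # width sqrt(nedof).
    p = f * chi2.sf(g_obs, nedof) \
        + (1.0 - f) * norm.sf(g_obs, loc=nedof, scale=np.sqrt(nedof))
    # Convert to a 1-sided Gaussian Z-score, Z = Phi^{-1}(1 - p).
    return norm.isf(p)
\end{verbatim}
For the low-statistics example above ($G = 227.4$, $n_{\rm edof} = 189.3$, $f = 0.10$) this sketch indeed reproduces $Z \approx 2.5$.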
Otherwise, the $p$-value may over-cover for strong variable dependencies and at low statistics, resulting in a $Z$-value that is too small, possibly by multiple units. This is important to know, as it can lead to rather incorrect conclusions regarding the studied variable dependency. \subsection{Example significance matrix} In practice a correlation value may be small but its statistical significance can still be large, and vice versa. For this reason, when exploring a data set, the levels of correlation and significance should always be studied together. Fig.~\ref{fig:significance_matrix} shows the significance matrix determined for the car insurance data set of Table~\ref{tab:data}. Compared with the correlation matrix of Fig.~\ref{fig:phik_example}, the low \phik values happen to be statistically insignificant, but the higher values are very significant. \begin{figure}[htp] \centering \begin{minipage}[b]{0.6\linewidth} \centering \includegraphics[width=\textwidth]{significance_matrix_fake_data2.pdf} \end{minipage}% \caption{The significance matrix, showing the statistical significances of correlated and uncorrelated variable pairs. The color scale indicates the level of significance, and saturates at $\pm5$ standard deviations.} \label{fig:significance_matrix} \end{figure} \subsection{Sampling approaches} \label{sec:simapp} Based on the statistically dependent frequency estimates, three sampling approaches are offered to generate synthetic data sets for testing the hypothesis of no variable association: \begin{itemize} \item \textit{Multinomial sampling}: with only the total number of records fixed. The hypothesis of no association is independent of the row and column variables. \item \textit{Product-multinomial sampling}: with the row or column totals fixed in the sampling. The hypothesis of no association is also called homogeneity of proportions. This approach is commonly used in cohort and case-control studies. \item \textit{Hypergeometric sampling}: both the row and column totals are fixed in the sampling. This approach is also known as Fisher's exact test. We use Patefield's algorithm~\cite{patefield_1981} to generate the samples. \end{itemize} There is an ongoing debate about the sampling design for tests of variable independence, although in practice most people are not too worried about the sampling approach, at least not in the high-statistics regime, because asymptotically the different approaches lead to the same result. The default approach used in this paper is multinomial sampling. For a discussion and further references see Ref.~\cite{kim_agresti_1997}.
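As an illustration of the default approach, multinomial sampling of a synthetic contingency table from the dependent frequency estimates \texttt{E} is a one-liner with numpy (a minimal sketch, function name ours):
\begin{verbatim}
import numpy as np

def sample_multinomial(E):
    # Only the total number of records N is fixed; the dependent
    # frequency estimates serve as the cell probabilities.
    N = int(round(E.sum()))
    p = (E / E.sum()).ravel()
    return np.random.multinomial(N, p).reshape(E.shape)
\end{verbatim}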
{ "attr-fineweb-edu": 1.732422, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{NLO EbyE EKRT model and its tests} \label{Sec:intro} The EKRT model \cite{Eskola:1999fc,Niemi:2015qia} rests on the idea that primary particle production in high energy heavy-ion collisions is dominated by few-GeV gluons, minijets \cite{Eskola:1988yh}, whose production rates are computable from collinear factorization of perturbative QCD (pQCD) but controlled by the phenomenon of saturation locally in the transverse plane \cite{Eskola:2000xq,Paatelainen:2012at,Paatelainen:2013eea}. The produced minijet densities can then be converted into initial conditions for relativistic fluid dynamics simulations. In NLO pQCD, the infrared- and collinear-safe quantity computed here is the transverse energy $E_T$ carried by the minijets into a mid-rapidity window $\Delta y$ \cite{Eskola:2000ji_Eskola:2000my,Paatelainen:2012at} per transverse area $d^2\mathbf{r}$ in $A$+$A$ collisions at cms-energy $\sqrt{s_{NN}}$ and impact parameter $\mathbf{b}$, \begin{equation} \frac{dE_T}{d^2{\bf r}}(p_0, \sqrt{s_{NN}}, A, \Delta y, \mathbf{r}, \mathbf{b}; \beta) \stackrel{\rm pQCD}{=} T_A(\mathbf{r}+ \mathbf{b}/2)T_A(\mathbf{r}- \mathbf{b}/2)\sigma\langle E_T \rangle_{p_0,\Delta y,\beta} \stackrel{\rm saturation}{=} \frac{K_{\rm sat}}{\pi}p_0^3\Delta y, \label{eq:dET} \end{equation} where the transverse momentum cut-off $p_0\sim$ few GeV, and $T_A$ is the nuclear thickness function. The NLO quantity $\sigma\langle E_T \rangle_{p_0,\Delta y,\beta}$ is computed using collinear factorization and the subtraction method \cite{Kunszt:1992tn}. It contains the CTEQ6M parton distributions \cite{Pumplin:2002vw} with EPS09s nuclear effects \cite{Helenius:2012wd}, $2\rightarrow 3$ and UV-renormalized $2\rightarrow 2$ parton scattering matrix elements \cite{Ellis:1985er_Paatelainen:2014fsa}, and the measurement functions to define the $E_T$. The minimum $E_T$ in $\Delta y$ is controlled by the parameter $\beta \in [0,1]$, fixed to 0.8 here \cite{Paatelainen:2012at}. Saturation here is the limit where $E_T$ production from $(n>2)\rightarrow 2$ parton processes starts to dominate over the usual $2\rightarrow 2$ ones. This can be cast into the form of the saturation condition appearing on the r.h.s. of Eq.~(\ref{eq:dET}), where $K_{\rm sat}$ is a free parameter \cite{Paatelainen:2012at}. Equation (\ref{eq:dET}) gives the saturation momentum $p_0 = p_{\rm sat}(\sqrt{s_{NN}},A,\mathbf{r},\mathbf{b};\beta,K_{\rm sat})$ locally in the transverse plane. With a formation time $\tau_s(\mathbf{r}) = p_{\rm sat}(\mathbf{r})^{-1}$ the initial local energy density is then \begin{equation} e(\mathbf{r},\tau_{\mathrm{s}}(\mathbf{r})) = \frac{\mathrm{d}E_T}{\mathrm{d}^2\mathbf{r}}\frac{1}{\tau_{\mathrm{s}} (\mathbf{r}) \Delta y } = \frac{K_{\rm sat}}{\pi}[p_{\rm sat}(\mathbf{r})]^4. \end{equation} The key observation \cite{Paatelainen:2013eea,Eskola:2001rx} enabling the recently developed NLO EbyE EKRT model framework of Ref.~\cite{Niemi:2015qia} is that $p_{\rm sat}(\mathbf{r},\mathbf{b})\approx p_{\rm sat}(T_AT_A)$, which can be parametrized. Then the $T_A$s can be made to fluctuate EbyE: we sample the nucleon positions from the standard Woods-Saxon density, set a Gaussian gluon thickness function of width $\sigma = 0.43$ fm \cite{Chekanov:2004mw} around each nucleon, and then compute $T_A$ as the sum of these gluon clouds (see the sketch below). Thus, the fluctuations of $T_A$ determine the EbyE fluctuations of $e(\mathbf{r},\tau_{\mathrm{s}}(\mathbf{r}))$.
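To make the construction of the fluctuating thickness functions concrete, the following minimal Python sketch samples nucleon positions from a Woods-Saxon distribution by rejection and builds $T_A$ as a sum of 2D Gaussians. The Woods-Saxon radius and surface thickness are illustrative textbook values for a heavy nucleus, not parameters of our fits.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_nucleons(A=208, R=6.62, d=0.546, rmax=15.0):
    # rejection-sample A nucleon positions from a Woods-Saxon density
    pos = []
    while len(pos) < A:
        x = rng.uniform(-rmax, rmax, 3)
        r = np.linalg.norm(x)
        if rng.uniform() < 1.0 / (1.0 + np.exp((r - R) / d)):
            pos.append(x)
    return np.array(pos)

def thickness(x, y, nucleons, sigma=0.43):
    # T_A(x, y): sum of 2D Gaussian gluon clouds of width sigma [fm]
    dx = x - nucleons[:, 0]
    dy = y - nucleons[:, 1]
    g = np.exp(-(dx**2 + dy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g.sum()

TA = thickness(0.0, 0.0, sample_nucleons())  # T_A at the origin [fm^-2]
\end{verbatim}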
Finally, to start our hydro simulations at a constant time, we evolve the $e$-profile from $\tau_s(\mathbf{r})$ to $\tau_0 =1/p_{\rm sat}^{\rm min}=0.2$~fm using 0+1 D Bjorken hydrodynamics. At the edges of the system, we assume a binary-collision $e$-profile. With such initial conditions, we then describe the spacetime evolution of the produced QCD matter EbyE, using 2nd-order dissipative relativistic 2+1 D hydrodynamics with the transient fluid-dynamics equation of motion for the shear-stress tensor $\pi^{\mu\nu}$ from Refs.~\cite{Denicol:2012cn, Molnar:2013lta}. The transverse flow and $\pi^{\mu\nu}$ are initially zero. Our equation of state is $s95p$-PCE-v1 \cite{Huovinen:2009yb}, with chemical decoupling at $T_{\rm chem} = 175$ MeV. Kinetic freeze-out is at $T_{\rm dec}=100$ MeV, and on this surface we assume, as usual, that the viscous $\delta f$-corrections are $\propto p_\mu p_\nu \pi^{\mu\nu}$. We neglect the bulk viscosity and heat conductivity. We study the $T$ dependence of $\eta/s(T)$ with the parametrizations of Fig.~\ref{fig:etapers}a, all of which are designed to reproduce the flow coefficients $v_n\{2\}$ measured in 2.76 TeV Pb+Pb collisions at the LHC, as shown in Fig.~\ref{fig:etapers}b. The parameter $K_{\rm sat}$ is fixed separately for each $\eta/s(T)$ parametrization, using the $dN_{\rm ch}/d\eta(0-5 \%)$ measured by ALICE in 2.76 TeV Pb+Pb collisions (Fig.~\ref{fig:predictions}a). \begin{figure}[hbt] \begin{center} \includegraphics[width=5.1cm]{etapers_bestfit.pdf} \includegraphics[width=4.9cm]{vnintegrated_RG043.pdf} \includegraphics[width=4.9cm]{vnintegrated_RHIC_all_RG043.pdf} \end{center} \vspace{-0.5cm} \caption{(a) The tested $\eta/s(T)$ parametrizations. Flow coefficients $v_n\{2\}$ vs. ALICE data \cite{ALICE:2011ab} in 2.76 TeV Pb+Pb collisions at the LHC (b), and $v_{2}\{2\}$, $v_{3}\{2\}$ and $v_{4}\{3\}$ vs. STAR data \cite{Adams:2004bi, Adamczyk:2013waa, Adams:2003zg} in 200 GeV Au+Au collisions (c). From \cite{Niemi:2015voa,Niemi:2015qia}.} \vspace{-0.1cm} \label{fig:etapers} \end{figure} We have extensively tested the NLO EbyE EKRT model in \cite{Niemi:2015qia}, arriving at a very good simultaneous description of the centrality dependences of charged hadron multiplicities, $p_T$ spectra, and flow coefficients in 2.76 TeV Pb+Pb collisions at the LHC and 200 GeV Au+Au collisions at RHIC. As seen in Fig.~\ref{fig:etapers}c, the RHIC $v_n$ coefficients favor the constant $\eta/s = 0.2$ (blue) and \textit{param1} (black) parametrizations of $\eta/s(T)$. Also the correlations of two and three event-plane angles measured by ATLAS systematically favor these two $\eta/s(T)$ parametrizations, see Fig.~\ref{fig:symm_cum}a \cite{Niemi:2015qia}. Furthermore, these constraints are obtained in the centrality region where the $\delta f$ effects remain small in these observables \cite{Niemi:2015qia}. Relative EbyE fluctuations of $v_2$ measured by ATLAS provide a stringent $\eta/s$-independent test of the computed initial states. The EKRT model also passes this test remarkably well, demonstrating the necessity of a hydro evolution in understanding the centrality systematics of this observable \cite{Niemi:2015qia}.
As a measure of the validity of our hydrodynamic description, we plot in Fig.~\ref{fig:symm_cum}f also \textit{(i)} the average Knudsen numbers $\langle {\rm Kn}\rangle$, the expansion rate ($\theta = \partial_\mu u^\mu$) multiplied by the shear relaxation (thermalization) time ($\tau_\pi=5\eta/(e+p)$), averaged over the entropy density throughout the evolution ($T>100$~MeV), and \textit{(ii)} the shear stress over pressure $\langle\sqrt{\pi_{\mu\nu}\pi^{\mu\nu}}/p\rangle$ averaged over the entropy flux through the freeze-out surface. The latter reflects the average $\delta f$ corrections at the end of the evolution. The facts that these indicators increase only gradually towards peripheral collisions and that $\langle {\rm Kn}\rangle={\cal O}(1)$ support the validity of hydrodynamics at least up to the 50\% centrality class. Towards peripheral collisions, $\langle {\rm Kn}\rangle$ increases due to the increasing relative weight of the early stages where $\langle {\rm Kn}\rangle$ is large (see the $T>180$~MeV curve). \section{Further predictions from the EbyE NLO EKRT model} \label{Sec:latest} We have made a series of predictions from the EbyE NLO EKRT model without any further tuning. For ALICE, we have computed the symmetric 2-harmonic 4-particle cumulants, ${\rm SC}(m,n)=\langle\langle \cos(m\phi_1+n\phi_2-m\phi_3-n\phi_4)\rangle\rangle = \langle v_m^2v_n^2\rangle - \langle v_m^2\rangle \langle v_n^2\rangle$, normalized by $\langle v_m^2\rangle \langle v_n^2\rangle$, shown in Fig.~\ref{fig:symm_cum}b,c. Our best-fit $\eta/s$ parametrizations predict rather well the positive correlation seen by ALICE \cite{ALICE:2016kpq} in ${\rm SC}(4,2)$ and also the trend of the negative correlation in ${\rm SC}(3,2)$. We emphasize, however, the importance of a 1-to-1 comparison: we expect that once we include the multiplicity weighting assumed in the ALICE analysis, our prediction will be systematically closer to the data. In Fig.~\ref{fig:symm_cum}d we show a prediction of the $p_T$ dependence of ${\rm SC}(4,2)/\langle v_4^2\rangle \langle v_2^2\rangle$. Fig.~\ref{fig:symm_cum}e in turn suggests that the low-to-high-$p_T$ ratios of these normalized correlators might be able to distinguish between our best-fit $\eta/s$ parametrizations. Similarly, we have provided the STAR collaboration with our predictions for the centrality dependence of the mixed harmonic correlators $C_{m,n,m+n}=\langle\langle\cos(m\phi_1+n\phi_2-(m+n)\phi_3)\rangle\rangle$. As shown in \cite{Adamczyk:2017byf}, our best-fit parametrizations reproduce $C_{2,2,4}$ rather well. However, we underestimate the measured $C_{2,3,5}$, which we believe is due to large $\delta f$ effects in this observable, possibly combined with non-flow and rapidity effects which we cannot yet account for. Further studies on this are ongoing. \begin{figure} \begin{center} \includegraphics[width=4.6cm]{EP2_4.pdf} \includegraphics[width=10.2cm]{scnm_RG043.pdf} \includegraphics[width=4.9cm]{scnm_ratios_RG043_a.pdf} \includegraphics[width=4.9cm]{scnm_ratios_RG043_b.pdf} \includegraphics[width=5.1cm]{piperp_average.pdf} \end{center} \vspace{-0.5cm} \caption{Centrality dependence of various correlators and Knudsen number in 2.76 TeV Pb+Pb collisions. (a) Correlation of the event-plane angles $\Psi_2$ and $\Psi_4$ vs. ATLAS data \cite{Aad:2014fla}. From \cite{Niemi:2015qia}. (b) Normalized cumulants ${\rm SC}(4,2)/\langle v_4^2\rangle \langle v_2^2\rangle$ vs. ALICE data \cite{ALICE:2016kpq}. (c) Same for ${\rm SC}(3,2)/\langle v_3^2\rangle \langle v_2^2\rangle$.
(d) ${\rm SC}(4,2)/\langle v_4^2\rangle \langle v_2^2\rangle$ in one low-$p_T$ and one high-$p_T$ interval, computed with our two best-fit $\eta/s$ parametrizations. (e) Low-to-high-$p_T$ ratio of ${\rm SC}(4,2)/\langle v_4^2\rangle \langle v_2^2\rangle$. (f) Average Knudsen numbers $\langle {\rm Kn} \rangle$ in our hydro evolution (red, green), and average shear stress over pressure $\langle \pi/p \rangle$ on the freeze-out surface (black), computed with the \textit{param1} $\eta/s$ parametrization. } \vspace{-0.3cm} \label{fig:symm_cum} \end{figure} Thanks to the predictive power of the EKRT model, we have also made predictions for the 5.02 TeV Pb+Pb run at the LHC \cite{Niemi:2015voa}. Figure \ref{fig:predictions} shows our predictions for the multiplicity and flow-coefficient ratios. In the latter, notice the slight increase with increasing $n$. Again, as seen in the figure, the EbyE NLO EKRT model fares very well in the data comparison. \begin{figure} \begin{center} \includegraphics[width=6.1cm]{charged_multiplicity_LHC5023_RG043.pdf} \includegraphics[width=6.1cm]{charged_multiplicity_sqrts_RG043.pdf} \vspace{-0.2cm} \includegraphics[width=10.5cm]{vnintegrated_ratio_RG043.pdf} \end{center} \vspace{-0.5cm} \caption{EbyE NLO EKRT model predictions for 5.023 TeV Pb+Pb collisions \cite{Niemi:2015voa}. (a) Centrality dependence of charged particle multiplicity, vs. ALICE data \cite{Aamodt:2010cz,Adam:2015ptt}. (b) Predicted $\sqrt{s_{NN}}$ dependence of charged particle multiplicity from RHIC Au+Au to LHC Pb+Pb collisions vs. data from ALICE \cite{Aamodt:2010cz,Adam:2015ptt}, CMS \cite{Chatrchyan:2011pb}, STAR \cite{Abelev:2008ab} and PHENIX \cite{Adler:2004zn}. (c-e) Ratio of the flow coefficients $v_n\{2\}$ in 5.023 TeV and 2.76 TeV Pb+Pb collisions, vs. ALICE data \cite{Adam:2016izf}. } \vspace{-0.5cm} \label{fig:predictions} \end{figure} To conclude, the EbyE NLO EKRT model \cite{Niemi:2015qia} consistently explains the bulk observables and various correlators at mid-rapidity in LHC and RHIC heavy-ion collisions. Its predictive power in cms-energy, centrality and nuclear mass number has been demonstrated with various observables. Via a multi-energy and multi-observable analysis we have managed to constrain the $\eta/s(T)$ ratio, for which two best-fit parametrizations have been identified. Similar results have also been found in Ref.~\cite{Bernhard:2016tnd}. Further systematic tests of the validity of the hydro results are, however, still needed, especially for the more complicated correlators, as well as more work to include further dissipative phenomena. \vspace{0.1cm} \noindent{\small\textbf{Acknowledgments}.\ K.J.E.\ is supported by the Academy of Finland, Project 297058, and H.N.\ by the EU's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no.\ 655285.}
{ "attr-fineweb-edu": 1.953125, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} \label{sec_intro} Low-resolution transmission spectroscopic observations of transiting gas giant exoplanets have been extensively used to probe their atmospheric compositions. The multi-object spectroscopy (MOS) technique (\citealt{Bean2010}, \citeyear{Bean2011}) has produced spectrophotometric measurements of exoplanet atmospheres at low resolution (R $\sim$ 10-100) with various ground-based observatories from the optical to the near infrared (Gemini/GMOS: see e.g. \cite{Crossfield2013}, \cite{Gibson2013}, \cite{Stevenson2014}, \cite{Huitson2017}, \cite{Todorov2019}, \cite{Wilson2021}; VLT/FORS2: see e.g. \cite{Bean2010}, \cite{Sedaghati2017}, \citeauthor{Nikolov2016} (\citeyear{Nikolov2016}, \citeyear{Nikolov2018}), \cite{Carter2019}, \cite{Wilson2020}; Magellan/MMIRS and IMACS: see e.g. \citeauthor{Bean2011} (\citeyear{Bean2011}, \citeyear{Bean2013}), \cite{Rackham2017GJ1214}, \cite{Espinoza2019}, \cite{Weaver2020}, \cite{McGruder2020}; LBT/MODS: see e.g. \cite{Mallonn2016}, \cite{Yan2020}) and with long-slit spectrographs at low resolution (GTC/OSIRIS: see e.g. \cite{Sing2012}, \citeauthor{Murgas2014} (\citeyear{Murgas2014}, \citeyear{Murgas2019}), \cite{Nortmann2016}, \citeauthor{Chen2017} (\citeyear{Chen2017}, \citeyear{Chen2018}, \citeyear{Chen2020}, \citeyear{Chen2021})), which have resulted in the detection of spectral features due to Rayleigh scattering, atomic and molecular absorption, and/or grey opacity clouds (in the form of flat or featureless spectra). The weak detection of the pressure-broadened profile of the Na I doublet in the atmosphere of WASP-4b (e.g. \citealt{Huitson2017}), its significant detection in the atmosphere of the hot Saturn WASP-96b (\citealt{Nikolov2018}), consistent with a cloud-free atmosphere, and the detection of Na, Li, and K absorption along with signatures of scattering due to haze in the atmosphere of the hot Neptune WASP-127b (\citealt{Chen2019}) are some examples demonstrating that ground-based MOS observations are capable of estimating absolute abundances in some of the gas giants with clear atmospheres. Notably, transit observations using large ground-based telescopes like Gemini and VLT have yielded transit depth precisions in their white light curves comparable to those of space-based observations from HST (e.g. \cite{Bean2010}, \cite{Todorov2019}). Observations of the same planet repeated over multiple epochs and with different instruments have helped in ascertaining the robustness of results (e.g. WASP-4b \cite{May2018}, \cite{Bixel2019}; WASP-19b \cite{Sedaghati2017}, \cite{Espinoza2019}) and in interpreting and mitigating the transit light source effect due to stellar photospheric heterogeneity, which has the strongest observable effect on a transmission spectrum in the optical wavelength range (\cite{Rackham2018}). Furthermore, ground-based MOS observations have pushed the limits of atmospheric characterisation down to terrestrial planets (\citealt{Diamond-Lowe2018}, \citeyear{Diamond-Lowe2020a}, \citeyear{Diamond-Lowe2020b}), for which the optical transmission spectra have been able to rule out the presence of clear, low mean molecular weight atmospheres. Spectrophotometric observations obtained using ground-based multi-object spectrographs are affected by telluric and instrumental systematics at levels comparable to, or even greater than, the amplitude of the variations due to the planetary atmosphere that we aim to measure in the transmission spectrum.
The conventional technique to compensate for systematics in ground-based low-resolution spectra has been to simultaneously observe one or more reference or comparison stars in the instrument's field of view (\citealt{Bean2010}) and use them to correct for the systematics similarly affecting the target star light curve through differential photometry. Observations based on the Rossiter-McLaughlin effect measure changes in line shape to detect transits (\citealt{Sluijs2019}) and to derive low-resolution transmission spectra (\citealt{Oshagh2020}, \citealt{DiGloria2015}). Such observations follow a parallel approach to measuring the transmission spectrum that does not need a comparison star, but they typically yield low-resolution transmission spectra at sub-optimal precision. All the aforementioned ground-based MOS studies, however, have always used comparison stars to deal with systematics in the transit light curves and to extract the signals of the planetary atmosphere. The leftover systematics after Target/Comparison star light curve normalisation, arising from differences in brightness or spectral type between the target and comparison stars, are then conventionally modelled by parametric models constructed using polynomials based on a set of decorrelation parameters (e.g. \cite{Gibson2014}, \cite{Stevenson2014}, \cite{Todorov2019}), or by a non-parametric approach using Gaussian Processes (GP) regression (e.g. \citeauthor{Gibson2012} (\citeyear{Gibson2012}, \citeyear{Gibson2013})). Some previous works, instead of dividing the target star light curve by the comparison star light curve, fit the target star light curves directly by using comparison star light curves or PCA components of multiple comparison stars as linear regressors (see e.g. \citealt{Jordan2013}, \citealt{Espinoza2019}). The \texttt{Divide-White} method introduced by \cite{Stevenson2014} extracts the transmission spectrum from target star light curves by using non-analytic models of wavelength-dependent systematics derived from comparison star light curves. Note that all the aforementioned approaches (differential spectrophotometry, and the use of comparison star light curves as linear regressors or as non-analytic models of systematics) assume a linear relationship between the systematics in the target and comparison star flux variations. Differential photometric corrections in particular perform best when the comparison stars are similar to the target stars in brightness and spectral type (\citealt{Broeg2005}, \citealt{Croll2015}). In most cases, it is likely that light from the target and comparison stars has not travelled through the same column of atmosphere, especially in scenarios where the separation between the target and comparison stars on the sky is comparable to, or larger than, the typical spatial scale of variations in atmospheric turbulence. Systematics at the instrumental level and stellar variability in the comparison star can further cause complex non-linear variations between the target and comparison star fluxes. This implies that the linear functional forms of the mapping between the two assumed by conventional methods are sub-optimal and may even be a source of additional systematics. Moreover, the conventional strategy of MOS observations has relied on the availability of suitable close-by comparison stars, which presents some issues.
In situations where the comparison stars are fainter than the target star or of a different spectral type, the precision of the Target/Comparison normalisation is photon-limited by the brightness of the comparison stars (in the whole bandpass or in a range of wavelengths where the spectral shape and relative brightness of the comparison and target star differ the most, see e.g. \citeauthor{Diamond-Lowe2020a} (\citeyear{Diamond-Lowe2020a}, \citeyear{Diamond-Lowe2020b})). On the other hand, if the comparison stars are brighter than the target stars (as happens to be the case for the comparison star in the GMOS observations of HAT-P-26b presented in this paper), the duty cycle of the observations is limited. Moreover, if the target star is in a sparse field, which is often the case for bright host stars, then there is little choice of an optimal comparison star given the instrument's limited field of view, which has been a limiting factor in ground-based high-precision spectrophotometric follow-up of exoplanets orbiting bright stars. In view of these several limitations, there is a need for a more generalised and robust approach to marginalise systematics in ground-based spectrophotometric light curves, one which accounts for the non-linear relationship between target and comparison star fluxes and does not explicitly rely on the availability of comparison stars. We present a novel alternative method in this paper which takes a more generalised approach when using a set of auxiliary time series (e.g., comparison star light curves, target star PSF width, airmass, etc.) to model systematics in the target star transit light curves. Our new method in essence lets a Gaussian Process model explore the underlying, unknown, and likely non-linear functional form between the regressors and the systematics in the target star transit light curves. This can be achieved for both the integrated white light curve and the spectroscopic light curves. Through our method, we also demonstrate that remarkably precise wavelength-dependent transit depth measurements of exoplanet spectra can be achieved without using the comparison star light curves at all. We describe the method and its application in detail to our observations of the warm Neptune HAT-P-26b observed by Gemini/GMOS in Section \ref{sec_analysis}. The paper is organised as follows: in Section \ref{sec_obs} we describe in detail our observational setup for the 6 transits of HAT-P-26b observed by GMOS, to which we apply the new method introduced in this paper. In Section \ref{sec_data_reduc}, we describe the data reduction steps to extract stellar spectra from the raw data, and in Section \ref{sec_analysis} we discuss the analysis to model the GMOS transit light curves. Specifically, in Section \ref{noise_model} we introduce our new method to model the telluric and instrumental systematics directly in the target star light curves. In Section \ref{sec:method_comparison} we compare our new analysis method with the conventional approach, and discuss its caveats and implications for future ground-based observations of exoplanet atmospheres. In Section \ref{sec:interpretation} we interpret the optical to infrared transmission spectrum of HAT-P-26b from the combined GMOS, HST, and Spitzer measurements using atmospheric models. We discuss the indications of transit timing variations for the planet in Section \ref{sec:ttv}, and in Section \ref{sec:conclusions} we present our conclusions.
\section{Observations} \label{sec_obs} \subsection{The warm Neptune HAT-P-26b} HAT-P-26b is a low-density warm (T$_{\rm eq}$ = 990 K) Neptune discovered by \cite{Hartman2011}, orbiting its chromospherically quiet K1 host star in a close orbit of period $\sim$ 4.23 days. Given its large scale height, the planet has been the subject of multiple atmospheric characterisation studies, including those constraining its atmospheric metallicity. Constraining the atmospheric metallicity of exo-Neptunes is crucial for tracing the dominant scenarios governing the formation of these planets, and for distinguishing between core accretion (\citealt{Pollack1996}) and in situ formation (\citealt{Bodenheimer2000}). These two scenarios can lead to significantly different metal enrichment of the atmosphere of a Neptune-mass planet like HAT-P-26b. Initial studies of HAT-P-26b using Magellan/LDSS-3C and Spitzer by \cite{Stevenson2016} indicated tentative evidence of water vapour features in the red optical. \cite{Wakeford2017} reported a strong detection of the 1.4 $\mu$m water vapour feature, muted by a grey opacity cloud, from near-infrared and visible observations with HST/WFC3 and STIS respectively. From these observations \cite{Wakeford2017} retrieve a near-solar metallicity atmosphere with a high-altitude cloud deck suppressing the transit spectral features. \cite{MacDonald2019} further combined the observations from \cite{Stevenson2016} and \cite{Wakeford2017} to perform a comprehensive retrieval analysis, reporting the presence of several species of metal hydrides with absorption features ranging from the optical to the near infrared, and a tentative hint of Rayleigh scattering. In this paper, we present 6 Gemini/GMOS transit observations to measure the transmission spectrum of HAT-P-26b in the visible from 490 to 900 nm, extending the wavelength coverage of the transmission spectrum published by \cite{Wakeford2017} further towards the blue optical. The primary motivations of our study are to investigate the exoplanet spectrum in the optical, expanding the wavelength coverage blueward, and to independently test for the presence of clouds and Rayleigh scattering. Additionally, from the precise mid-transit times obtained from our high-SNR GMOS transit light curves, we also investigate the transit timing variations (TTVs) for the planet previously indicated by \cite{Stevenson2016} and \cite{vonEssen2019}. \subsection{GMOS Transmission spectroscopy} \label{gmos_obs} We observed a total of six transits of HAT-P-26b using the Gemini North telescope located at Mauna Kea, Hawaii, and the Gemini South telescope located at Cerro Pachon, Chile. Three transits were observed using Gemini North and three transits were observed using Gemini South. The observations used the same technique and setup as described in \cite{Huitson2017} (hereafter referred to as \citetalias{Huitson2017}), which is similar to that of previous observations using GMOS (e.g. \citeauthor{Bean2010} (\citeyear{Bean2010}, \citeyear{Bean2011}, \citeyear{Bean2013}), \cite{Gibson2013}, \cite{Stevenson2014}). All transits were observed as part of two survey programs of hot Jupiter atmospheres from GMOS North and South (P.I. J-.M. D\'esert) described in \citetalias{Huitson2017} (see Table \ref{obsstats} for program numbers). For each observation, we used the MOS mode of GMOS to observe the time series spectrophotometry of HAT-P-26b and a comparison star, TYC 320-426-1, simultaneously.
HAT-P-26 and the comparison star are separated by $\sim 3.8$ arcmin. HAT-P-26 has a V magnitude of 11.76, and TYC 320-426-1 has a V magnitude of 11.08 and a similar spectral type, judging from visual inspection of prominent stellar spectral features. Each observation lasted approximately 5 to 5.5 hours. To avoid slit losses, our MOS mask had wide slits of 10 arcsec width for each star. The slits were 30 arcsec long to ensure adequate background sampling for each star. In order to provide similar wavelength coverage between HAT-P-26 and the comparison star, the PA of the MOS mask needs to be as close as possible to the PA between the two stars. The PA between HAT-P-26 and the comparison star is 23 deg. E of N. However, at this PA, no suitable guide stars fell into the patrol field of Gemini's guider, the On Instrument Wave Front Sensor (OIWFS). We therefore used the Peripheral Wavefront Sensor (PWFS) instead for three of our observations from Gemini South (see Table \ref{obsstats} for details). The PWFS has a larger patrol field but a lower guiding precision, and so is used as a backup option if there are no suitable guide stars available for the OIWFS. This setup enabled us to orient the instrument so that the instrument PA matched the PA between the two stars. However, from the initial analysis, we found that the photometric precision was lower when using the PWFS than for our previous survey observations obtained using the OIWFS, due to the larger dispersion-direction drift in the case of the PWFS: the drift over a night is $\sim$15 pixels for the PWFS, as compared to $\sim$1 pixel for the OIWFS. We therefore modified the setup for three of our observations at Gemini North (see Table \ref{obsstats}) to be able to continue using the OIWFS. In this new setup, we selected the PA of the MOS mask to be 7 deg. E of N. While this meant that the wavelength coverage was different for the two stars, it allowed us to orient the GMOS field of view such that a suitable guide star fell within the range of the OIWFS. We therefore achieved improved guiding in exchange for the loss of approximately one third of the wavelength coverage. Four transits were observed in the red optical with the R150 grating, covering a wavelength range of 530-900 nm with ideal resolving power $R=631$. Two transits were observed in the blue optical with the B600 grating, covering a wavelength range of 490-680 nm with ideal resolving power $R=1688$. The ideal resolving powers assume a slit width of 0.5 arcsec. In our case, due to using a wide slit, our resolution was seeing-limited. Given the range of seeing reported in Table \ref{obsstats}, our resolution is up to $4\times$ lower than the ideal value, depending on the observation. For each observation, we used the gratings in first order. For the R150 observations, the requested central wavelength was 620 nm and we used the OG515\_G0330 filter to block light below 515 nm. The blocking filter was used to avoid contamination by light from higher orders. For the B600 observations, the requested central wavelength was 520 nm and no blocking filter was needed. For all observations, we windowed regions of interest (ROIs) on the detector to reduce readout time. We used one ROI for each slit, with each ROI covering the whole detector in the dispersion direction and approximately 40 arcsec in the cross-dispersion direction. We binned the output $1\times2$, binning in the cross-dispersion direction, to further reduce readout time.
For the observations at Gemini South, the detector was read out with 3 amplifiers. For the observations at Gemini North, the detector was read out with 6 amplifiers. All amplifiers had gains of approximately 2 $e^-$/ADU. Exposure times were chosen to keep count levels between 10,000 and 30,000 peak ADU, well within the linear regime of the CCDs. Table \ref{obsstats} shows the observation log for each transit, as well as which observations were obtained at Gemini South and which at Gemini North. The numbers given under `No.' in the table are the numbers by which we will refer to each transit observation in this paper. \begin{table*} \centering \caption{Observing Conditions for GMOS Runs. The numbers in the first column are the numbers by which we will refer to each transit observation throughout the rest of the paper. Observation IDs starting with ``GS" were observed at Gemini South using the ideal PA and the PWFS, while those starting with ``GN" were observed at Gemini North using the non-ideal PA and OIWFS (see Section \ref{sec_obs} for more details).} \begin{tabular}{cccccccccc} \hline \hline No. & Program ID & UT Date & Grating & Guider and PA & Exposure & No. of & Duty & Seeing & Airmass \\ & & & & & Time (s) & Exposures & Cycle (\%) & (arcsec) & Range \\ \hline 1 & GS-2013A-Q-27 & 2013 Mar 20 & R150 & PWFS, ideal PA & 50 & 226 & 63 & 0.6 - 1.8 & 1.21 - 2.06 \\ 2 & GN-2013A-Q-38 & 2013 Apr 10 & R150 & OIWFS, non-ideal PA & 15 & 574 & 45 & 0.3 - 0.9 & 1.04 - 1.97 \\ 3 & GS-2014A-Q-59 & 2014 May 09 & R150 & PWFS, ideal PA & 20 & 318 & 40 & 0.3 - 1.0 & 1.21 - 2.00 \\ 6 & GS-2014A-Q-59 & 2014 Jun 29 & R150 & PWFS, ideal PA & 25-40 & 299 & 47 & 0.6 - 1.0 & 1.21 - 1.96 \\ 4 & GN-2016A-LP-6 & 2016 Mar 12 & B600 & OIWFS, non-ideal PA & 90-150 & 161 & 85 & 0.4 - 1.6 & 1.04 - 1.74 \\ 5 & GN-2016A-LP-6 & 2016 Apr 15 & B600 & OIWFS, non-ideal PA & 110-150 & 131 & 88 & 0.6 - 1.3 & 1.04 - 1.97 \\ \hline \end{tabular} \label{obsstats} \end{table*} \section{Data Reduction} \label{sec_data_reduc} \subsection{GMOS data} \label{gmos_data_reduc} We used our custom pipeline designed for reducing GMOS data, the steps of which are described in more detail in \citetalias{Huitson2017}. We extract the 1D spectra and apply corrections for additional time- and wavelength-dependent shifts of the spectral traces of the target and comparison stars on the detector due to atmospheric dispersion and airmass. In this section, we describe the main points of the pipeline and the additional corrections we apply to the data before extracting and analysing the transit light curves. For the R150 grating, we only use 2/3 of the detector in the dispersion direction. For all observations, we use a moving boxcar median of 20 frames in time to compare each pixel's value with its values in the frames immediately before and after. We flag pixels deviating by more than 5 times the boxcar median value as cosmic rays and replace them with the boxcar median value (a minimal sketch of this step is given below). The cosmic-ray removal flagged a few percent of pixels per observation. Our pipeline flagged 1.8-3.8 \% of columns as bad, depending on the observation. The majority (80 \%) of flagged columns are consistent between the transits for each detector. For observations 1 and 3, these include columns of shifted charge occurring mostly in the transition regions between amplifiers, as discussed in \citetalias{Huitson2017}. These columns are not present on the GMOS-North detector (observations 2, 4, and 5) and are also not present in observation 6, which was taken after a detector upgrade at GMOS-South.
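The moving-median rejection can be sketched as follows, assuming the exposures are stacked in a NumPy array of shape (n\_frames, ny, nx); the window length and threshold follow the values quoted above, and the function name is illustrative, not part of our pipeline.
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter

def clean_cosmics(cube, window=20, thresh=5.0):
    # moving boxcar median over `window` frames in time for every pixel
    med = median_filter(cube, size=(window, 1, 1), mode="nearest")
    # flag pixels deviating by more than `thresh` times the boxcar median
    bad = np.abs(cube - med) > thresh * med
    # replace flagged pixels with the boxcar median value
    return np.where(bad, med, cube), bad

frames = np.random.default_rng(0).poisson(1000.0, (100, 40, 512)).astype(float)
cleaned, mask = clean_cosmics(frames)
\end{verbatim}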
We tested our extraction with and without flat-fielding and found that flat-fielding does not significantly affect the scatter of the resulting transit light curves. For this reason, and since flat-fielding did not improve the scatter blueward of 700 nm, we chose not to perform flat-fielding for any of the transit observations. We notice no slit tilt in the spectra of HAT-P-26 and the comparison star, unlike what was seen in \citetalias{Huitson2017} and \cite{Todorov2019}. The sky lines in the frames for all transits are parallel to the pixel columns. Thus, we choose not to perform any tilt correction. We subtracted the background while performing optimal extraction (\citealt{Horne1986}), and found that taking the median background value in each cross-dispersion column provided a better fit to the background fluxes than using fits to the flux profile in each cross-dispersion column. The background fluxes were 1-10 \% of the stellar flux for the R150 observations and 2-20 \% of the stellar flux for the B600 observations, depending on the wavelength range and exposure number. After spectral extraction, we performed wavelength calibration using CuAr lamp spectra taken on the same day as each science observation. To obtain the CuAr spectra at high resolution, we used a MOS mask separate from that used for science, with the same slit positions and slit lengths as the science mask but a slit width of only 1 arcsec. We used the same grating and filter setup as in the corresponding science observation. We used the \textsc{identify} task in the Gemini \textsc{iraf} package to identify spectral features in the CuAr spectra. A wavelength solution was then constructed by a linear fit to the pairs of wavelength vs. pixel number in each ROI, and then refined by comparison with known stellar and telluric feature locations. The final uncertainties in the wavelength solution are approximately 1 nm for all observations, which is $\sim$5 \% of the bin widths used to construct the final transmission spectrum. Before generating the transit light curves, we performed further reduction of the extracted 1D spectra. This is because, as in \citetalias{Huitson2017}, we found that there is a dispersion-direction shift of the spectra on the detector during each observation that is a function of time and of wavelength, such that the spectra `stretch' over time. The result is that wavelength bins identified in fixed pixel space will not sample the same wavelength in each exposure. The effect therefore needs to be accounted for in order to build transit light curves that sample a constant wavelength region over time. Failure to account for this effect can introduce spurious slopes in the transmission spectra, as discussed in \citetalias{Huitson2017}. In \citetalias{Huitson2017}, we found that a model for differential atmospheric refraction explained the shifts well for our previous observations. This is consistent with the fact that GMOS has no atmospheric dispersion compensator, and so we expect an effect from differential atmospheric refraction. However, the differential atmospheric refraction model does not adequately fit the shifts observed in the HAT-P-26 data studied here. We therefore use the alternative method developed in \citetalias{Huitson2017}, in which we use multiple spectral features for cross-correlation as a function of time to account for the wavelength-dependent shifting empirically.
However, instead of simply using the shifted spectra corresponding to the shift value measured from cross-correlation with respect to the nearest feature when constructing light curves for each spectral bin (as done in \citetalias{Huitson2017}), we proceed further and use the information from the cross-correlation with the spectral features to apply corrective shifts to each pixel in the 1D spectrum. From the measured shifts of the spectral features in an exposure, we estimate the shifts for pixels in between and away from the features by linear interpolation between the shift values of the three features used for the cross-correlation. The interpolated shift values thus obtained for each pixel in an exposure are then applied to the whole 1D spectrum. We then repeat this step for every exposure, so that in the end we have the same wavelength solution for the spectra across all exposures (see the sketch below). We repeat this step for the comparison star as well, using the same set of spectral features for the cross-correlation as those used for the target star spectrum. Finally, we interpolate the comparison star's spectrum from all exposures onto the wavelength solution of the target star, omitting detector gaps and bad columns, which ensures that both the target and comparison star spectra for every exposure have the same wavelength solution within the uncertainty of our estimates of the shifts derived from cross-correlation. As a final step in our reduction process, we also correct for the dispersion-direction offset between the target and comparison star spectra. This offset occurs because the PA of the instrument is not exactly the same as the PA between the target and comparison star for observations taken using the OIWFS guider. We used cross-correlation to measure the offset, which was between -18.2 and -16.0 pixels for the southern observations. For the northern observations, the offset was between -830 and -600 pixels due to the non-ideal PA. We then interpolated the comparison star's spectrum onto the target star's wavelength grid, while omitting bad columns for both spectra (which are the same columns on the detector but are at different wavelengths for each star). We show the final wavelength-calibrated 1D spectra for an arbitrarily chosen exposure for the target and comparison star in Figure \ref{fig:1Dspec} for all the observations. Note that some residual shifts (of the order of a few pixels) still remain between the target and comparison star spectra, especially towards the redder end of the R150 observations (beyond 710 nm, where fringing also becomes strong), as seen in Figure \ref{fig:1Dspec}. This is because the shifts between the target and comparison star spectra vary both in time and in wavelength, and constant offsets followed by interpolation to a common wavelength grid do not entirely correct for them. We refrain from further empirical corrections at this stage, and to minimize the effects of any residual shifts we choose to use broad 20 nm wide bins for our spectroscopic light curves. This wavelength width is significantly larger than the spatial scale of the shifts between the target and comparison star spectra. We nevertheless do not include the spectra beyond 730 nm for Observations 1 and 3 when computing the transmission spectrum, due to excessive fringing in that region.
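The per-pixel stretch correction described above amounts to interpolating the feature shifts across the detector and resampling each exposure. A minimal sketch is given below; the variable names and numerical values are illustrative, and the sign convention depends on how the cross-correlation shifts are defined.
\begin{verbatim}
import numpy as np

def correct_stretch(spec, feature_pix, feature_shifts):
    # per-pixel shift from linear interpolation between the shifts
    # measured by cross-correlation at the reference spectral features
    pix = np.arange(spec.size, dtype=float)
    shift = np.interp(pix, feature_pix, feature_shifts)
    # resample onto the reference grid: flux observed at pixel p is
    # taken to originate from reference pixel p - shift(p)
    return np.interp(pix, pix - shift, spec)

spec = 1.0 + 0.1 * np.sin(np.linspace(0.0, 50.0, 2048))   # placeholder
corrected = correct_stretch(spec, [300.0, 1000.0, 1800.0], [0.4, 0.9, 1.6])
# applied to every exposure so all spectra share one wavelength solution
\end{verbatim}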
We also emphasize that the residual shifts between the comparison star and HAT-P-26 spectra are not an issue for the new method we introduce in this paper, which uses only the target star to extract the transmission spectrum (see Section \ref{noise_model}). \begin{figure*} \centering \includegraphics[scale=0.45]{figures/Stellar_Spectra_1D_all_revised.png} \caption{Optimally extracted spectra for HAT-P-26 and the comparison star from an arbitrarily chosen exposure, corrected for dispersion-direction shifts and normalised by the exposure time, for the 6 GMOS observations of HAT-P-26. Each panel shows one exposure for each observation, and the observation numbers correspond to the programs described in Table \ref{obsstats}. For all observations, especially the GMOS-N observations taken using the non-ideal PA, the comparison star spectrum has been shifted to the same wavelength grid as the target star spectrum using prominent common stellar features in the spectra, which are marked by the black dashed vertical lines. The green vertical lines show the wavelength range considered for obtaining the transmission spectrum for each observation. The gaps in the spectra correspond to physical gaps in the CCD and bad columns. } \label{fig:1Dspec} \end{figure*} \section{Transit Light Curve Analysis} \label{sec_analysis} We now describe the light curve analysis methods that we apply to the 6 transit observations of HAT-P-26b. We first briefly discuss the noise models that we use to correct for the systematics in the light curves in Section \ref{noise_model}. In this section, we also introduce and motivate a new method to directly model the systematics in the target star light curves. We have summarised the conventional method used to date and the new method introduced in this paper, together with their various types of applications to white and spectroscopic light curves, in Table \ref{tab:method_summary}. The novel aspect of the new method, in the context of both the white light curves and the spectroscopic light curves, is that instead of assuming a linear functional form, we use a distribution of functions (described by a GP) to capture the likely non-linear mapping between the target transit light curves and one or more decorrelation time series (e.g. comparison star light curves). In all its applications, the new method is an alternative to the corresponding applications of the conventional linear method for fitting systematics in MOS transit light curves, as described in Table \ref{tab:method_summary}. We describe the shortcomings of the conventional method and motivate the need for the new method in Sections \ref{sec:conv_method} and \ref{sec:new_method} respectively. The conventional method to fit the white light curves specifically has two different types of applications: 1) \texttt{Conv1:WLC}: a two-step method of first performing differential spectrophotometry (normalising the target star light curve by the comparison star light curve) and then fitting the resultant light curve with a GP; and 2) \texttt{Conv2:WLC}: a one-step method of fitting the target transit light curves with a linear model that uses one or more comparison star light curves or their PCA components as regressors (a minimal sketch is given below). \texttt{Conv2:WLC} is especially suited to cases where more than one comparison star is available, which is not the case in this paper.
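For concreteness, the following Python sketch illustrates a \texttt{Conv2:WLC}-style linear systematics model, here as a simple least-squares fit at fixed transit parameters. All data and regressors are synthetic placeholders, and an actual implementation would fit the transit and systematics jointly, with model averaging as in \cite{Gibson2014}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(-0.1, 0.1, n)
transit = np.where(np.abs(t) < 0.04, 0.995, 1.0)     # toy transit model
comp = 1.0 + 5e-4 * rng.standard_normal(n)           # comparison star LC
airmass = 1.2 + 2.0 * t**2                           # toy airmass trend
f = transit * (1.0 + 0.01 * (airmass - 1.2)) \
    + 1e-4 * rng.standard_normal(n)                  # toy target LC

# linear model for the systematics: offset + comparison star + airmass
A = np.column_stack([np.ones(n), comp, airmass])
coeffs, *_ = np.linalg.lstsq(A, f / transit, rcond=None)
systematics = A @ coeffs
detrended = f / systematics
\end{verbatim}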
In Section \ref{sec:WLC} we apply the conventional method \texttt{Conv1:WLC} and the new methods \texttt{New:WLC} and \texttt{New:WLC;No\_Comp} to fit the white light curves for each observation. In the context of fitting spectroscopic light curves, a frequently used method to correct for wavelength-independent systematics in particular, in addition to using the comparison star light curves, is to perform a `common-mode correction'. This is the approach of the \texttt{Conv1:$\lambda$LC} method, which subtracts a common-mode trend derived from the white light curve from each wavelength-binned light curve. However, this approach also assumes a linear relationship between the common-mode trend and the spectroscopic light curves. Our new method \texttt{New:$\lambda$LC} explores the likely non-linear relationship in this context (e.g. arising from wavelength-dependent effects with changing airmass) by using the common-mode trend as a GP regressor. We fit the spectroscopic light curves using \texttt{Conv1:$\lambda$LC} and \texttt{New:$\lambda$LC} in Sections \ref{sec:binned_LC_old_method} and \ref{sec:binned_LC_new_method} respectively. \begin{table*} \caption{Summary of the conventional and new methods used to model the systematics in white light curves (WLC) and spectroscopic light curves ($\lambda$LC) in this paper in Section \ref{noise_model}. The `Application' column specifies the different ways of applying the methods, with a more detailed description in the `Description' column. `Abbreviation' specifies how we refer to each of these applications in this paper.} \label{tab:method_summary} \begin{tabular}{lllll} \hline \hline Method & Application & Description & Abbreviation & Example \\ & & & & References \\ \hline \multirow{12}{*}{Conventional} &\multirow{2}{*}{Differential spectrophotometry}& Target/Comparison WLC: fit with GP & \texttt{Conv1:WLC} & \multirow{4}{*}{\cite{Gibson2013}} \\ \multirow{12}{*}{method} & \multirow{2}{*}{using comparison star LCs} & & & \\ & & Target/Comparison $\lambda$LC: & \texttt{Conv1:$\lambda$LC} & \\ & & common-mode subtracted, fit with GP & & \\ \\ \cline{2-5} \\ & \multirow{6}{*}{Comparison star(s) LC} & Target WLC: & \texttt{Conv2:WLC} & \multirow{6}{*}{\cite{Espinoza2019}} \\ & \multirow{6}{*}{as linear regressor} & fit with linear model & & \\ & & including comparison star(s) white LC & & \\ & & or their PCA as regressors & & \\ \\ & & Target $\lambda$LC: & \texttt{Conv2:$\lambda$LC} & \\ & & fit with linear model & & \\ & & including comparison star(s) $\lambda$LC& & \\ & & or their PCA as regressors & & \\ & & & & \\ \hline \multirow{10}{*}{New method} & Comparison star LC & Target WLC: & \texttt{New:WLC} & \multirow{10}{*}{This work} \\ & as GP regressor & fit with comparison star & & \\ & & as a GP regressor & & \\ \\ \cline{2-4} \\ & No comparison stars & Target WLC: & \texttt{New:WLC;No\_Comp} & \\ & & fit with GP regressors & & \\ & & excluding comparison star LC & & \\ \\ \cline{2-4} \\ & common-mode trend as & Target $\lambda$LC: & \texttt{New:$\lambda$LC} & \\ & GP regressor & fit with common-mode trend & & \\ & & as a GP regressor & & \\ \hline \end{tabular} \end{table*} \subsection{Modelling systematics in transit light curves} \label{noise_model} In the following sections, we model the instrumental and telluric time-dependent systematics in the HAT-P-26 transit light curves by following both the conventional method and the new method we introduce in this paper.
We describe them both in the next two subsections in order to compare them and motivate the need for the new method. \subsubsection{Conventional method: using the comparison star or common-mode trend as a linear regressor} \label{sec:conv_method} The conventional method involves first dividing the target star light curve by the comparison star light curve, and then fitting the transit signal and systematics in the Target/Comparison light curve simultaneously using a transit model and a GP respectively. In the case of spectroscopic light curves, there is an additional step of removing the common-mode trend before fitting with a GP. The GP model takes as regressors, or inputs, a set of decorrelation time series which include, e.g., time (the time stamps of individual exposures), and the width (FWHM) and spatial shifts of the traces of the target and comparison stars on the detector (e.g. \cite{Nikolov2018}, \cite{Diamond-Lowe2020b}). However, the step of performing differential spectrophotometry in this approach raises concerns about the relevance of decorrelation parameters derived from the individual target and comparison star spectral traces when modelling the differential Target/Comparison light curve. In general, the step of doing differential spectrophotometry assumes that the target and comparison star fluxes are affected by the same, or linearly related, time- and wavelength-dependent systematics. Subtracting the common-mode trend likewise assumes a linear relationship between the white light curve and the spectroscopic light curves. Given the complex nature of both instrumental and telluric systematics, this is likely not the case. Considering the transit depth precisions ($\sim$ 100-500 ppm per $\sim$ 20 nm bin) we are aiming for, dividing the target star light curve by the comparison star light curve or subtracting the common-mode trend can propagate unwanted systematics and degrade the light curve SNR in ways that can be difficult to correct for when fitting the Target/Comparison light curve. An example of this is provided by the B600 observations of HAT-P-26b presented in this paper, where the target and comparison star light curves show systematics significantly different from each other. In this context, simply normalising the target star light curve by that of the comparison star contaminates the transit signal originally present in the target star light curve (see the white light curves for observations 4 and 5 in Figure \ref{fig:WLC_all}). In cases where the instrument's field of view is large, of the order of $\sim$ 10 arcminutes, recent works (\citealt{Jordan2013}, \citealt{Espinoza2019}) have used Principal Component Analysis (PCA) to optimally use the information from multiple comparison stars in the field of view. This approach (\texttt{Conv2:WLC} and \texttt{Conv2:$\lambda$LC}) relies on the availability of multiple comparison stars, and involves using the PCA components of more than one comparison star in log-space as regressors in a linear regression model to fit the systematics in the target star light curve. Specifically, \cite{Espinoza2019}, and other studies analysing IMACS/Magellan observations, e.g., \cite{Weaver2020}, \cite{McGruder2020}, \cite{Kirk2021}, use the model averaging scheme for linear regression models outlined by \cite{Gibson2014} to incorporate the number of relevant PCA components as an additional uncertainty in their model. Since we only observed one comparison star, we do not test the PCA-based approach in this work.
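To make the distinction with our new method concrete before describing it, the short sketch below contrasts the \texttt{Conv1:$\lambda$LC} common-mode division with the use of the common-mode trend as a GP regressor in \texttt{New:$\lambda$LC}; all quantities are synthetic placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(-0.1, 0.1, 200)
f_white = 1.0 + 5e-4 * rng.standard_normal(t.size)  # white LC (placeholder)
transit_white = np.ones_like(t)                     # its best-fit transit model
f_lam = 1.0 + 1e-3 * rng.standard_normal(t.size)    # one binned LC (placeholder)

cm = f_white / transit_white      # common-mode trend from the white LC
f_lam_conv1 = f_lam / cm          # Conv1: assumes a linear (here direct)
                                  # relation between cm and the binned LC
X_new = np.column_stack([t, cm])  # New: cm enters as a GP regressor column,
                                  # leaving the binned LC itself untouched
\end{verbatim}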
\subsubsection{New method: using the comparison star or common-mode trend as a GP regressor} \label{sec:new_method} The intrinsic limitations of differential spectrophotometry using one or more comparison stars to correct for systematics in the light curves (also described in more detail in Section \ref{sec_intro}) narrow down the set of exoplanets around bright host stars that can be followed up for atmospheric characterisation with ground-based multi-object spectrographs. Hence, there is a need for a new method that does not explicitly rely on comparison stars and can model the transit light curve systematics and extract the transmission spectrum solely from the target star. With the new method we introduce in this paper, we present a way to directly fit the target star light curves using, as GP regressors, a set of time series recorded simultaneously with the target light curve. This includes the comparison star light curve when fitting the white light curves (described in more detail in Section \ref{sec:WLC}), and the common-mode trend when fitting the spectroscopic light curves (see Section \ref{sec:binned_LC}). This is essentially the novel aspect of our method: for both white and spectroscopic light curves, we take the set of time series which have traditionally been used linearly to correct them, either as simple normalising factors or as linear regressors, and instead use them directly as regressors in a GP model. This lets the GP itself explore an exhaustive set of non-linear mappings between the target transit light curve and regressors such as the comparison star light curve or the common-mode trend. This approach is better able to incorporate the complex differences in how the target and comparison stars are affected by systematics during an observation. Our method also provides more accurate uncertainties by propagating them through the Bayesian framework of GPs. The underlying GP framework we use for our new method is the same as that introduced by \cite{Gibson2012} to model the transit signal and systematics simultaneously in the wavelength-integrated white light curves and the wavelength-binned light curves for each transit. In our new method, instead of fitting the Target/Comparison light curves with a GP model, we model the target star light curves directly as a numerical transit model combined with an additive GP model accounting for the systematics affecting the light curve. This means that we skip the step of dividing the target star light curve by one or multiple comparison star light curves, and instead use the comparison star light curve as one of the GP regressors. In the case of fitting spectroscopic target light curves, we use the common-mode trend derived from the white light curve as a GP regressor (see Section \ref{sec:binned_LC}). We describe the GP formalism used for both the conventional and new methods in more detail in the next section.
\subsubsection{Gaussian Process regression model} A Gaussian Process model to account for the systematics in a transit light curve means that we model the observed transit light curve (which for the conventional method is Target/Comparison, and for the new method just the target star light curve) as a multivariate Gaussian Process distribution with the mean function given by the numerical transit model, and a covariance matrix $\mathbf{\Sigma}$: \begin{equation} f = \mathcal{GP}(\mathbf T(t,\phi), \mathbf{\Sigma} (\mathbf X,\theta)) \label{gp_model} \end{equation} where $f$ is the flux time series representing a transit light curve, $t$ is time, $\phi$ is the set of planet transit parameters, $\mathbf T(t,\phi)$ is the astrophysical transit light curve model, and $\mathbf{\Sigma}(\mathbf X,\theta)$ is the covariance matrix described by a kernel function for a set of regressors or input parameter vectors $\mathbf X$ and hyperparameters $\theta$: \begin{equation} \mathbf\Sigma_{ij} = k({x}_i,{x}_j | \theta) \end{equation} Note that here we assume that the systematics we are attempting to model using the GP are additive. We could modify Equation \ref{gp_model} to instead have the GP model multiplicative systematics; as we tested, and as was also reported by \cite{Gibson2013}, this gives identical results within the precision of our data. The kernel function takes a set of input parameter vectors $\mathbf X$ ($\mathbf x_{1}$, $\mathbf x_{2}$, $\mathbf x_{3}$, ... $\mathbf x_{P}$), each vector $\mathbf x_{p}$ of the same length as the number of points ($N$) in the light curve, which could be time, the Cassegrain Rotator Position Angle (CRPA), the airmass, the FWHM of the PSF of the spectral trace of the target star, and the measured position of the spectral trace corresponding to each exposure (averaged across the dispersion direction). This is analogous to using these time series quantities as decorrelation parameters to construct parametric models. In particular, in our new method we additionally test the use of the comparison star light curve as one of the GP regressors. We choose to use the Mat\'ern 3/2 kernel function, as it is known to provide a good prescription for time-correlated noise at the time scales typically observed in GMOS transit light curves (\citealt{Gibson2012}): \begin{equation} k({x}_i, {x}_j | \theta) = A \left( 1+{\sqrt{3}R_{ij}} \right) \exp \left( -{\sqrt{3}R_{ij}}\right) + \delta_{ij}\sigma_\text{w}^2 \label{kernel} \end{equation} where $A$ is the hyperparameter specifying the amplitude of the covariance, $\sigma_\text{w}$ is the white noise term (which we fit for), and $\delta$ is the Kronecker delta. We emphasize that keeping the white noise term $\sigma_\text{w}$ free when fitting the light curves is an important aspect of the method we propose in this paper. The best-fit value of $\sigma_\text{w}$ represents the combined noise variances in the target star light curve and in the individual decorrelation parameters used as GP regressors, assuming no heteroscedasticity in our observed light curves (see Equation 6 in \cite{Mchutchon2011}). When we use the comparison star light curve as one of the GP regressors, the best-fit value of $\sigma_\text{w}$ represents the combined noise variance from the comparison star light curve and the target star light curve.
We highlight that this is a way to propagate the relevant uncertainties from the comparison stars within the Bayesian framework of GPs (which we use to fit for $\sigma_\text{w}$ as described below), in contrast to simply adding them in quadrature as done in differential photometry. This is analogous to fitting for a jitter term in methods that use comparison stars as inputs to linear regression models (e.g. \citealt{Espinoza2019}). The term $R_{ij}$ in Equation \ref{kernel} is the quadrature sum of the pairwise differences between regressor points, with $\eta_{p}$ the length scale hyperparameter corresponding to each input vector. For $P$ input vectors, $R_{ij}$ can be written as: \begin{equation} {R}_{ij} = \sqrt{ \sum_{p=1}^{P} \left( \frac{{x}_{p,i}-{x}_{p,j} }{\eta_{p}} \right)^{\!\!2} } \label{eq:Rij} \end{equation} This is one of the few ways in which information from multiple input parameters or regressors can be combined to describe the covariance matrix of the GP, and it involves a single amplitude hyperparameter ($A$) and one length scale hyperparameter ($\eta_{p}$) for each input parameter. We also considered and tested another type of combination, in which we take the kernel in Equation \ref{kernel} for each regressor and construct the final kernel as the sum of the kernels for all regressors (similar to the approach followed by \citealt{Aigrain2016} in \texttt{k2sc}). This combination uses more hyperparameters, as each GP regressor then has its own amplitude hyperparameter in addition to a length scale hyperparameter. For all the observations we analyse in this paper, we find that the first type of kernel combination (described in Equation \ref{eq:Rij}) performs consistently better than the other combinations, in terms of the root mean square (RMS) of the residuals and the consistency of the best fit transit parameters with literature values. The joint posterior probability distribution we marginalise over to estimate the transit parameters and hyperparameters corresponding to the best fit to the observed light curves is: \begin{equation} p(\phi,\theta | f, \mathbf X) \propto \pi(\phi,\theta) \times \mathcal{L} [ \mathcal{GP} \left (\mathbf T(t,\phi) , \mathbf{\Sigma} (\mathbf X,\theta) \right) ] \end{equation} where $\pi(\phi,\theta)$ encodes the prior probability on the transit model parameters ($\phi$) and hyperparameters ($\theta$), and $\mathcal{L} [ \mathcal{GP} \left (\mathbf T(t,\phi) , \mathbf{\Sigma} (\mathbf X,\theta) \right) ]$ is the GP likelihood, written in the form of a log-likelihood as: \begin{equation} \label{eq:likelihood} \log \mathcal{L}(r | \mathbf X,\phi,\theta) = -\frac{1}{2} r^T\, \mathbf{\Sigma}^{-1} \, r -\frac{1}{2}\log | \mathbf{\Sigma}| -\frac{N}{2} \log\left(2\pi\right) \end{equation} \\ where $r$ is the vector of residuals of the observed light curve about the mean function ($r = f - \mathbf T(t,\phi)$) and $N$ is the number of data points in the light curve. We used the transit modelling package {\texttt{batman}} (\citealt{Kreidberg2015}, an implementation of the formalism of \citealt{Mandel2002}) to calculate the numerical transit model $\mathbf T(t,\phi)$, and the package {\texttt{george}} (\citealt{Ambikasaran2015}) to construct and compute the GP kernels and likelihoods.
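For concreteness, the following minimal Python sketch assembles this model with {\texttt{batman}} and {\texttt{george}}. It is an illustration under assumed, synthetic inputs rather than our analysis code: the cadence, noise levels, kernel amplitude and length scales are all hypothetical stand-ins.
\begin{verbatim}
import numpy as np
import batman
import george
from george import kernels

# Synthetic stand-ins for one observation (all values hypothetical).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.2, 300)                        # time [d]
comp_flux = 1.0 + 5e-4 * rng.standard_normal(t.size)  # comparison LC

# Mean function T(t, phi): a batman transit model.
params = batman.TransitParams()
params.t0, params.per, params.rp = 0.1, 4.2345, 0.0701
params.a, params.inc, params.ecc, params.w = 11.73, 87.45, 0.124, 90.0
params.u, params.limb_dark = [0.6], "linear"
transit = batman.TransitModel(params, t)
f = transit.light_curve(params) + 5e-4 * rng.standard_normal(t.size)

# P = 2 GP regressors stacked column-wise: time and the comparison LC.
X = np.column_stack([t, comp_flux])

# Matern 3/2 kernel over the P regressors: george's `metric` holds the
# squared length scales eta_p**2, and the prefactor is the amplitude A.
A, eta = 1e-7, np.array([0.05, 0.01])
kernel = A * kernels.Matern32Kernel(metric=eta**2, ndim=X.shape[1])

# The white noise term sigma_w stays free via fit_white_noise=True
# (george parametrises it internally as a log-variance).
gp = george.GP(kernel, white_noise=np.log(3e-4**2), fit_white_noise=True)
gp.compute(X)

# GP log-likelihood of the residuals r = f - T(t, phi).
r = f - transit.light_curve(params)
print(gp.log_likelihood(r))
\end{verbatim}
In the actual fits described below, the transit parameters and hyperparameters are of course not fixed at these values but sampled from the posterior.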
\subsection{Analysis of White Transit Light Curves} \label{sec:WLC} \subsubsection{Constructing white light curves} For each observation, we constructed the target and comparison star white light curves by summing the measured flux over 530 to 700 nm for observations 1 and 3, over 530 to 900 nm for observations 2 and 6 (these R150 observations do not show fringing redward of 700 nm), and over 490 to 680 nm for observations 4 and 5. We then normalise the total flux in each exposure by the corresponding exposure time for both the target and comparison stars. The white transit light curves thus obtained are shown in Figure \ref{fig:WLC_all}. The white light curves for each observation contain information on the dominant time-dependent systematics affecting all the wavelength channels, and analysing them prior to fitting the wavelength dependent light curves is an important step towards constraining the transit parameters and understanding the sources of systematics that can affect the final transmission spectrum. \subsubsection{Fitting the white transit light curves} We obtain the best fits for each transit observation independently using both the conventional method \texttt{Conv1:WLC} and the two applications of the new method, \texttt{New:WLC} and \texttt{New:WLC;No\_Comp}, as described in Section \ref{noise_model}. For both methods and for each transit white light curve, we fix the orbital period ($P$) and eccentricity ($e$) to literature values, and fit for the orbital inclination ($i$), orbital separation ($a/R_\star$), mid transit time ($T_0$), and planet to star radius ratio ($R_{\rm P}/R_\star$). For $i$ and $a/R_\star$, we place Gaussian priors with means equal to the values measured by \cite{Stevenson2016} and standard deviations equal to 3 times their 1$\sigma$ uncertainties. We use truncated wide uniform priors for $R_{\rm P}/R_\star$ and for the mid transit time ($T_0$) around the values predicted by a linear ephemeris. We adopt a linear stellar limb darkening law and calculate the limb darkening coefficients, and the uncertainties on them stemming from the uncertainties in the stellar parameters, for the wavelength range integrated to obtain the white light curve and for the wavelength bins we adopt for the spectroscopic light curves (see Section \ref{sec:binned_LC}), using \texttt{PyLDTk} (\citealt{Parviainen2015}), which uses the spectral library of \cite{Husser2013} based on the PHOENIX stellar models. We place a Gaussian prior on the linear limb darkening coefficient, with mean equal to the value calculated with \texttt{PyLDTk} and standard deviation equal to 3 times its 1$\sigma$ uncertainty. We summarise all the priors we use in this paper in Table \ref{tab:priors}. \begin{table} \centering \caption{Summary of priors and fixed values for the parameters (transit model and GP hyperparameters) used to fit the transit light curves of HAT-P-26b. We fixed the planet orbital period ($P$) and eccentricity ($e$) for all fits. $\mathcal{U}$ represents a uniform prior applied within the specified range, and $\mathcal{N}$ represents a Gaussian prior with the specified mean and standard deviation. T$_{c}$ is the predicted mid transit time for each epoch using the ephemeris from \citealt{Hartman2011}.
For the linear limb darkening coefficient, the mean of the Gaussian prior is taken as the theoretically calculated value from \texttt{PyLDTk} (\citealt{Parviainen2015}) for the B600 and R150 wavelength ranges.} \begin{tabular}{ccc} \hline \texttt{batman} model \\ \hline \hline Parameter & Prior/Fixed value & Reference \\ \hline P [d] & 4.2345 & \cite{Hartman2011} \\ $e$ & 0.124 & \cite{Hartman2011} \\ $i$ [$^\circ$] & $\mathcal{N}$ (88.09, 1.5) & \cite{Wakeford2017}\\ $R_{\rm P}/R_\star$ & $\mathcal{U}$ (0, 1) & -- \\ $a/R_\star$ & $\mathcal{N}$ (11.89, 1.2) & \cite{Wakeford2017} \\ $T_{0}$[d] & $\mathcal{U}$ (T$_{c}$-0.001, T$_{c}$+0.001) & \cite{Hartman2011} \\ u$_{1}$[B600] & $\mathcal{N}$ (0.603, 0.03) & \texttt{PyLDTk} \\ u$_{1}$[R150] & $\mathcal{N}$ (0.73, 0.03) & \texttt{PyLDTk} \\ \\ \\ \hline GP model \\ \hline \hline ln (A) & $\mathcal{U}$ (-100, 100) & -- \\ ln ($\eta_{p}$) & $\mathcal{U}$ (-100, 100) & -- \\ $\sigma_{w}$ & $\mathcal{U}$ (0.00001, 0.005) & -- \\ \hline \end{tabular} \label{tab:priors} \end{table} We also fit for the white noise hyperparameter $\sigma_{w}$ as described in Section \ref{noise_model}, which lets the GP model capture the white noise variance in the target star light curves along with the contributions from the variance in the GP regressors (e.g. the comparison star light curve). This is a key advantage of our method: instead of propagating the variance from the comparison star light curve by simply adding it in quadrature (as happens when the target star light curve is normalised by the comparison star light curve), our method propagates the uncertainties from the comparison star light curve to our fit of the target star light curve within the Bayesian framework of GPs described in Section \ref{noise_model}. Fitting for $\sigma_{w}$ is thus crucial for allowing the GP model to capture the white noise in the target star light curves. \begin{figure*} \centering \includegraphics[scale=0.45]{figures/WTLC_all_revised.png} \caption{Raw wavelength integrated white target and comparison star light curves of the GMOS observations of HAT-P-26b, first normalised by the exposure times of the individual exposures and then by their out of transit median flux. Note the low frequency trend present in both the target and comparison star light curves due to the changing airmass (shown in grey) through the night. Observations 2, 4, and 5 were taken using a non-ideal PA, which is reflected here in the deviating trends between the target and comparison star light curves of the corresponding observations. These deviating trends subsequently contaminate the transit signal in the Target/Comparison light curves and are examples of sub-optimal results from Target/Comparison star light curve normalisation. Observation 6, which was taken with the newly installed Hamamatsu detector on Gemini South, also shows a similar deviating trend between the target and comparison star light curves. } \label{fig:WLC_all} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/R150_WTLC_revised.png} \caption{White transit light curves for HAT-P-26b obtained using the GMOS-R150 grism integrated in the range of 530 to 700 nm for observations 1 and 3, and in the range of 530 to 900 nm for observations 2 and 6.
Purple points show the comparison star light curve, black points show the target star (HAT-P-26) light curve overplotted with the best fit \texttt{New:WLC} model in red, with the corresponding residuals plotted in the bottom panel of each observation, and green points show the detrended target star light curve overplotted with the {\texttt{batman}} transit model corresponding to the best fit transit parameters in blue. Note that the target and comparison star light curves are affected by the known odd-even pattern in GMOS datasets, caused by the unequal travel times of the GMOS shutter blades, which are known to differ slightly with the direction of motion (\citealt{Stevenson2014}, \citealt{Jorgensen2009}); this is seen most significantly in observations 1 and 3. } \label{fig:r150_WLC} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/B600_WTLC_revised.png} \caption{White transit light curves for HAT-P-26b obtained using the GMOS-B600 grism integrated in the range of 490 to 680 nm for observations 4 and 5. Purple points show the comparison star light curve, black points show the target star (HAT-P-26) light curve overplotted with the best fit \texttt{New:WLC} model in red, with the corresponding residuals plotted in the bottom panel of each observation, and green points show the detrended target star light curve overplotted with the {\texttt{batman}} transit model corresponding to the best fit transit parameters in blue. For observation 4, the PSF width time series for the spectral trace of the target star is overplotted in pink. } \label{fig:b600_WLC} \end{figure*} For the conventional method application \texttt{Conv1:WLC}, we perform fits for both the R150 and B600 observations using all combinations of the following GP regressors common to both the target and comparison star light curves: time, CRPA, and airmass. For the new method application \texttt{New:WLC} we use all combinations of the following GP regressors: the comparison star light curve, time, CRPA, airmass, and the PSF full width at half maximum (FWHM) of the spectral trace in every exposure (averaged across the dispersion direction) for the target star. For \texttt{New:WLC;No\_Comp} we use the same GP regressors as for \texttt{New:WLC} except the comparison star light curve, to demonstrate the performance of fits without using the comparison star at all. We determine the GP regressor combination that best describes the systematics for all the methods in Section \ref{model_selection} and Appendix \ref{app:model_selection}. For all applications of the conventional and new methods, we first find the Maximum a-Posteriori (MAP) solution by optimising the GP posterior using the Powell optimiser in the {\texttt{SciPy}} python package. We put wide uniform priors on the GP hyperparameters and sample them logarithmically. The logarithmic sampling of the hyperparameters effectively puts a shrinkage prior on them, which pushes them to smaller values if the corresponding GP regressor truly does not represent the correlated systematics in the data (\citealt{Gibson2012}, \citealt{Gibson2014}). Using the MAP solution as the starting point, we marginalise the GP posterior over all the hyperparameters and transit model parameters through an MCMC using the package {\texttt{emcee}}, a pure-Python implementation of the affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler (\citealt{Goodman2010}, \citealt{ForemanMackey2013}); a minimal sketch of this two-stage procedure is shown below.
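As a hedged illustration (not our analysis code), the following sketch reuses the \texttt{george} and \texttt{batman} objects from the example in Section \ref{noise_model}, leaves only $R_{\rm P}/R_\star$ free among the transit parameters for brevity, and simplifies all priors to wide box constraints:
\begin{verbatim}
import numpy as np
import emcee
from scipy.optimize import minimize

# Assumes `gp`, `f`, `params` and `transit` from the earlier sketch.
# p packs Rp/R* and george's (log-)hyperparameter vector.
def log_posterior(p):
    rp, log_hp = p[0], p[1:]
    if not 0.0 < rp < 1.0:              # simplified uniform prior
        return -np.inf
    if np.any(np.abs(log_hp) > 100.0):  # wide log-uniform hyperpriors
        return -np.inf
    params.rp = rp
    gp.set_parameter_vector(log_hp)
    r = f - transit.light_curve(params)
    return gp.log_likelihood(r, quiet=True)

# Stage 1: MAP solution with SciPy's Powell optimiser.
p0 = np.concatenate([[0.07], gp.get_parameter_vector()])
map_soln = minimize(lambda p: -log_posterior(p), p0, method="Powell")

# Stage 2: marginalise with emcee, 50 walkers started near the MAP.
ndim, nwalkers = map_soln.x.size, 50
start = map_soln.x + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(start, 10000)
\end{verbatim}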
We use 50 walkers for 10000 steps and check for the convergence of the chains by estimating the integrated auto-correlation times for each walker, following the method described by \cite{Goodman2010}. We ensure that the total length of our chains is at least 50 times the auto-correlation time, so that our samples are effectively independent and have converged. We discard the first 1000 samples as burn-in. We judge the final goodness of fit based on the consistency of the best fit transit parameters with literature values, and on the model selection criteria described in Section \ref{model_selection}, for each combination of GP regressors and for the various forms of kernel combination (described in Section \ref{noise_model}). We also tested the robustness of our fits using the nested sampler {\texttt{dynesty}} (\citealt{Speagle2020}) and obtain posteriors consistent with those from {\texttt{emcee}}. We measure the best fit transit parameters as the medians of the corresponding posteriors, with the 34th percentiles on either side of the median as their 1$\sigma$ uncertainties. \subsubsection{Selecting the best GP regressor combinations for white light curve fits} \label{model_selection} We select the best combination of GP regressors for both the new and conventional methods of fitting the white light curves separately, by comparing the Bayesian Information Criterion (BIC, \citealt{Schwarz1978}) and the Bayesian evidence estimated using \texttt{dynesty}. For each GP regressor combination we calculate the BIC corresponding to the GP likelihood computed for the best fit parameters. The BIC computed using the GP likelihood takes into account the covariance structure in the data through the covariance matrix (see Equation \ref{eq:likelihood}). We discuss the model selection threshold in more detail in Appendix \ref{app:model_selection}. We have highlighted the best GP regressor combinations in Tables \ref{tab:obs1_bicevi} to \ref{tab:obs5_bicevi} for the following applications of the new and conventional methods, which we compare further in Section \ref{sec:method_comparison}: 1) \texttt{New:WLC} - target LC fit with time and the comparison LC as regressors, plus additional regressors where favoured by a higher log$_{e}$Z; 2) \texttt{New:WLC;No\_Comp} - target LC fit without the comparison LC as a regressor (time and/or an additional regressor); 3) \texttt{Conv1:WLC} - Target/Comparison LC fit using the best regressor combination. For each of the three cases above, we perform the model selection by separately comparing the log$_{e}$Z for the set of GP regressor combinations applicable to each case. Also note that since the new and conventional methods do not fit exactly the same light curves, we do not use log$_{e}$Z or BIC to compare the methods themselves, but only to choose the best GP regressor combinations for each of them. \subsubsection{Odd-even effect in GMOS light curves} \label{odd_even} Consecutive exposures in GMOS light curves are known to suffer from an odd-even effect due to the unequal travel times of the GMOS shutters (\citealt{Jorgensen2009}) with respect to the direction of motion. This has also been previously observed by \cite{Stevenson2014}.
We observe the level of this effect for our HAT-P-26b observations to be as high as 700 ppm for the target star light curves alone, and as high as 200 ppm for the Target/Comparison light curves, varying with the observation and the corresponding exposure time, and observed most significantly in the R150 observations 1, 2, and 3 (Figures \ref{fig:WLC_all} and \ref{fig:r150_WLC}). The comparison star light curve suffers from the same odd-even effect at time scales similar to the target star, as confirmed from the Lomb-Scargle periodograms of both light curves. Normalising the target light curve by the comparison light curve does not correct for this effect entirely, as can be seen for observation 2 (which has the shortest exposure time among all observations) in Figure \ref{fig:WLC_all}, where the odd-even effect is still visible in the Target/Comparison light curve. This shows that the odd-even effect prevalent in GMOS observations does not affect the target and comparison light curves in the same manner from one exposure to the next, and hence cannot be corrected for completely through a linear method like differential spectrophotometry. This is especially true for observations with shorter exposure times. Instead, the odd-even effect is superimposed on the high frequency noise already present in the Target/Comparison light curves due to other differences between the systematics affecting the target and comparison light curves individually. This further motivates the need for alternatives to differential spectrophotometry that correct for the effect in the target star light curves directly, which is what our new method does. In particular, when considering the residual RMS for observation 2, which has the shortest exposure time of all observations (and hence the largest amplitude of the odd-even effect difference between the target and comparison stars), the new method \texttt{New:WLC} performs much better at modelling the odd-even effect in the target star light curves than the conventional method \texttt{Conv1:WLC}. In \texttt{New:WLC} the odd-even effect is accounted for by using the comparison star as one of the GP regressors. \cite{Stevenson2014} use different flux offsets on odd and even frames to correct for this effect for another target in their survey, HAT-P-7, but the method they ultimately use for WASP-12 is \texttt{Divide-White}, which corrects for this effect automatically. Essentially, \cite{Stevenson2014} use a linear mapping between the Target/Comparison light curves and either an analytical functional form (different offsets for alternating exposures) or a non-analytical form (the \texttt{Divide-White} method) to correct for this effect. In this paper we correct for the effect in the target star light curve directly, by letting the GP model perform the non-linear mapping between the target star light curves and the odd-even effect information in the comparison star light curve. For the spectroscopic light curves, the common-mode trend derived from the white light curve, when used as a GP regressor, accounts for this effect in \texttt{New:$\lambda$LC}, as described in Section \ref{sec:binned_LC_new_method}. It should be noted that it is not just the presence of the differential odd-even effect in the data that makes our new method more effective than the conventional method.
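As a hedged, self-contained illustration of the periodogram check mentioned above (synthetic data, hypothetical amplitudes and cadence), \texttt{astropy}'s Lomb-Scargle implementation recovers the two-exposure period of an injected alternating offset:
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic illustration: an alternating +/-350 ppm shutter offset on
# a uniform 30 s cadence, plus white noise (all values hypothetical).
rng = np.random.default_rng(2)
t = np.arange(300) * 30.0 / 86400.0                   # time [d]
flux = (1.0 + 3.5e-4 * (-1.0) ** np.arange(t.size)
        + 2e-4 * rng.standard_normal(t.size))

# The odd-even signal peaks at a period of two exposures, i.e. at the
# Nyquist frequency of the cadence; search slightly beyond it.
dt = np.median(np.diff(t))
freq, power = LombScargle(t, flux).autopower(maximum_frequency=0.6 / dt)
print("peak period [d]:", 1.0 / freq[np.argmax(power)])
print("two-exposure period [d]:", 2.0 * dt)
\end{verbatim}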
We performed a simple transit injection and retrieval test by applying both methods to a pair of synthetic target and comparison star light curves sharing the same correlated systematics but with different levels of white noise. We find that when the comparison star light curve has a higher level of white noise than the target star light curve, our new method performs much better than the conventional method in terms of both the accuracy and the precision of the retrieved injected transit parameters. We highlight that besides the odd-even effect, there are additional possible sources of instrumental and atmospheric systematics that can affect the comparison and target star fluxes differently, and these would potentially be present in data from other multi-object spectrographs as well. These effects range from low-frequency trends, e.g. due to the changing CRPA through the night, to high and low frequency telluric absorption variations. The latter effect could be even more significant in near-infrared bands due to second-order colour-dependent extinction effects (e.g. \citealt{Blake2011}, \citealt{Young1991}). After performing fits to the white transit light curves and gleaning information about the dominant time-dependent systematics affecting each of our observations, we fit the spectroscopic light curves to obtain the transmission spectrum, as described in more detail in the following section. \begin{landscape} \begin{table} \centering \caption{Best fit transit parameters obtained from the fits to the white transit light curves of the 6 GMOS observations of HAT-P-26b presented in this work. Three sub-rows for each observation number (specified in the first column) show the best fit transit parameters and residual RMS from the applications of our new method (two sub-rows marked \texttt{New:WLC} and \texttt{New:WLC;No\_Comp}) and from the conventional method (third sub-row, \texttt{Conv1:WLC}). The third column shows the light curve to which the method is applied (`Targ' referring to the Target and `Targ/Comp' to the Target/Comparison light curve). The fourth column shows the combination of regressors for the GP noise model in Section \ref{noise_model}, where `Time' is the time of each exposure in the light curves, `Comp' is the comparison star light curve, and `PSF' is the full width at half maximum of the spectral trace PSF.
The bottom section of the table shows the weighted average of transit parameters measured for R150 and B600 observations from the applications of both new and conventional methods, and the transit parameters (weighted average of \texttt{New:WLC} from B600 and R150) eventually used to derive the common-mode trend used to fit the spectroscopic light curves ($\lambda$LC) in Section \ref{sec:binned_LC}.} \label{tab:WLC_bestfitparams} \begin{tabular}{ccccccccccc} \hline \hline \\ No.& Method & Light Curve & GP & $R_{\rm P}/R_\star$ & $T_{0}$[BJD$_{\rm TDB}]$ & $a/R_\star$ & $i$ [$^\circ$] & u$_{1}$ &$\sigma_{w}$ & RMS \\ & & type & regressors & & & & & & & [ppm] \\ \\ \hline \\ 1 (R150) & \texttt{New:WLC} & Targ & Time, Comp, PSF & 0.0725 $\pm$ 0.0023 & 2456371.74717 $\pm$ 0.00024 & 11.35 $\pm$ 0.48 & 87.18 $\pm$ 0.49 & 0.611 $\pm$ 0.025 & 0.000268 & 249 \\ & \texttt{New:WLC;No\_Comp} & Targ & Time & 0.0685 $\pm$ 0.053 & 2456371.74731 $\pm$ 0.006 & 11.23 $\pm$ 0.44 & 87.03 $\pm$ 0.43 & 0.605 $\pm$ 0.029 & 0.0015 & 1254 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time & 0.0730 $\pm$ 0.0034 & 2456371.74728 $\pm$ 0.00029 & 11.21 $\pm$ 0.49 & 86.99 $\pm$ 0.47 & 0.605 $\pm$ 0.023 & 0.000299 & 281 \\ \\ 2 (R150) & \texttt{New:WLC} & Targ & Time, Comp & 0.0703 $\pm$ 0.0034 & 2456392.92016 $\pm$ 0.00024 & 12.18 $\pm$ 0.45 & 88.09 $\pm$ 0.58 & 0.602 $\pm$ 0.026 & 0.00038 & 366 \\ & \texttt{New:WLC;No\_Comp} & Targ & Time, PSF & 0.0713 $\pm$ 0.0042 & 2456392.91988 $\pm$ 0.00027 & 11.98 $\pm$ 0.47 & 87.82 $\pm$ 0.54 & 0.602 $\pm$ 0.026 & 0.000347 & 332 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time & 0.0703 $\pm$ 0.0045 & 2456392.91946 $\pm$ 0.00038 & 11.28 $\pm$ 0.61 & 87.18 $\pm$ 0.66 & 0.603 $\pm$ 0.026 & 0.000569 & 557 \\ \\ 3 (R150) & \texttt{New:WLC} & Targ & Time, Comp & 0.0694 $\pm$ 0.0032 & 2456786.72782 $\pm$ 0.00032 & 10.88 $\pm$ 0.56 & 86.59 $\pm$ 0.52 & 0.595 $\pm$ 0.024 & 0.000412 & 398 \\ & \texttt{New:WLC;No\_Comp} & Targ & Airmass & 0.0625 $\pm$ 0.0031 & 2456786.72849 $\pm$ 0.00038 & 11.67 $\pm$ 0.75 & 87.67 $\pm$ 0.86 & 0.603 $\pm$ 0.026 & 0.001567 & 1536 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time & 0.0696 $\pm$ 0.0029 & 2456786.72790 $\pm$ 0.0003 & 11.07 $\pm$ 0.72 & 86.81 $\pm$ 0.68 & 0.611 $\pm$ 0.029 & 0.000431 & 420 \\ \\ 6 (R150) & \texttt{New:WLC} & Targ & Time, Comp & 0.0683 $\pm$ 0.0048 & 2456837.54218 $\pm$ 0.00038 & 12.12 $\pm$ 0.58 & 87.85 $\pm$ 0.70 & 0.599 $\pm$ 0.025 & 0.000580 & 560 \\ & \texttt{New:WLC;No\_Comp} & Targ & Time & 0.0706 $\pm$ 0.0055 & 2456837.54214 $\pm$ 0.00046 & 12.05 $\pm$ 0.59 & 87.77 $\pm$ 0.67 & 0.598 $\pm$ 0.026 & 0.000589 & 562 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time & 0.0671 $\pm$ 0.0044 & 2456837.54211 $\pm$ 0.00043 & 11.92 $\pm$ 0.62 & 87.54 $\pm$ 0.69 & 0.602 $\pm$ 0.026 & 0.000609 & 590 \\ \\ 4 (B600) & \texttt{New:WLC} & Targ & Time, Comp, PSF & 0.0654 $\pm$ 0.0045 & 2457460.01335 $\pm$ 0.00036 & 11.82 $\pm$ 0.55 & 87.71 $\pm$ 0.58 & 0.73 $\pm$ 0.026 & 0.000125 & 101 \\ & \texttt{New:WLC;No\_Comp} & Targ & Time, PSF & 0.0673 $\pm$ 0.0067 & 2457460.01315 $\pm$ 0.00062 & 11.50 $\pm$ 0.72 & 87.44 $\pm$ 0.74 & 0.733 $\pm$ 0.027 & 0.000193 & 166 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time, Airmass & 0.0735 $\pm$ 0.0057 & 2457460.01402 $\pm$ 0.00046 & 11.29 $\pm$ 0.54 & 87.26 $\pm$ 0.54 & 0.732 $\pm$ 0.025 & 0.000194 & 165 \\ \\ 5 (B600) & \texttt{New:WLC} & Targ & Time, Comp & 0.0685 $\pm$ 0.0053 & 2457493.89006 $\pm$ 0.00031 & 11.97 $\pm$ 0.55 & 87.81 $\pm$ 0.64 & 0.733 $\pm$ 0.026 & 0.000164 & 133 \\ & \texttt{New:WLC;No\_Comp} & Targ & Time, Airmass, 
PSF & 0.0726 $\pm$ 0.0021 & 2457493.89000 $\pm$ 0.00024 & 12.50 $\pm$ 0.44 & 88.46 $\pm$ 0.69 & 0.721 $\pm$ 0.024 & 0.000283 & 264 \\ & \texttt{Conv1:WLC} & Targ/Comp & Time, Airmass & 0.0764 $\pm$ 0.0089 & 2457493.88987 $\pm$ 0.00048 & 11.64 $\pm$ 0.68 & 87.57 $\pm$ 0.73 & 0.730 $\pm$ 0.029 & 0.000218 & 181 \\ \\ \hline \\ R150 & \texttt{New:WLC} & Targ & & 0.0703 $\pm$ 0.0014 & & 11.67 $\pm$ 0.25 & 87.33 $\pm$ 0.28 & 0.602 $\pm$ 0.012 & & \\ R150 & \texttt{New:WLC;No\_Comp} & Targ & & 0.067 $\pm$ 0.0022 & & 11.68 $\pm$ 0.25 & 87.45 $\pm$ 0.28 & 0.602 $\pm$ 0.013 & & \\ R150 & \texttt{Conv1:WLC} & Targ/Comp & & 0.0702 $\pm$ 0.0018 & & 11.36 $\pm$ 0.29 & 87.09 $\pm$ 0.3 & 0.605 $\pm$ 0.013 & & \\ \\ B600 & \texttt{New:WLC} & Targ & & 0.067 $\pm$ 0.0034 & & 11.89 $\pm$ 0.39 & 87.75 $\pm$ 0.42 & 0.732 $\pm$ 0.018 & & \\ B600 & \texttt{New:WLC;No\_Comp} & Targ & & 0.0721 $\pm$ 0.002 & & 12.22 $\pm$ 0.38 & 87.98 $\pm$ 0.5 & 0.726 $\pm$ 0.017 & & \\ B600 & \texttt{Conv1:WLC} & Targ/Comp & & 0.0743 $\pm$ 0.0048 & & 11.42 $\pm$ 0.42 & 87.37 $\pm$ 0.43 & 0.731 $\pm$ 0.019 & & \\ \\ For $\lambda$LC fits & & & & 0.0701 $\pm$ 0.0013 & & 11.73 $\pm$ 0.21 & 87.45 $\pm$ 0.23 & & & \\ \hline \hline \end{tabular} \end{table} \end{landscape} \subsection{Analysis of Spectroscopic Light Curves} \label{sec:binned_LC} \subsubsection{Construction of spectroscopic light curves} We constructed the spectroscopic transit light curves ($\rm \lambda$LC) for both the target and comparison stars by summing the flux in $\sim$ 20 nm wide bins within the same wavelength range as the respective white light curves. We normalise each exposure in the individual target and comparison $\rm \lambda$LCs by the corresponding exposure times. Similar to our white light curve analyses, we fit the $\rm \lambda$LCs for each observation using the conventional method \texttt{Conv1:$\lambda$LC} and the new method \texttt{New:$\lambda$LC} as described in Section \ref{noise_model}. \subsubsection{Fitting the spectroscopic light curves using the conventional method} \label{sec:binned_LC_old_method} We first describe the application \texttt{Conv1:$\lambda$LC} of the conventional method of fitting $\rm \lambda$LCs. We divide the target $\rm \lambda$LCs by the corresponding comparison star $\rm \lambda$LCs. GMOS observations, like many other ground-based MOS observations, have conventionally been corrected for wavelength-independent systematics through common-mode correction (e.g. \citealt{Stevenson2014}, \citetalias{Huitson2017}, \citealt{Todorov2019}, \citealt{Wilson2021}), which leverages the information about the time dependent systematics contained in the white light curve to correct the individual wavelength bins. However, while common-mode correction using the white light curve provides an effective way to remove the dominant time-dependent systematics, it also implies that we effectively lose information on the absolute value of the transit depths and obtain relative transit depths, which are nevertheless useful to search for dominant features in the transmission spectrum. In \texttt{Conv1:$\lambda$LC}, we follow the conventional common-mode correction approach and derive the common-mode trend as the residuals obtained by subtracting the transit model, computed using the weighted averaged transit parameters for the white light curves (last row of Table \ref{tab:WLC_bestfitparams}), from the white light curves for all observations.
Except for the limb darkening coefficient, we use the same transit parameters for both the B600 and R150 observations to construct the transit model. We perform the common-mode correction by subtracting the white light curve derived common-mode trend from the corresponding Target/Comparison $\rm \lambda$LC. Note that using the weighted averaged transit parameters to derive the common-mode trend across all observations is valid here, as HAT-P-26 is known to be inactive and any potential contamination from stellar activity at the individual epochs that could affect the transit depth is below the precision of our measurements (discussed in more detail in Section \ref{sec:interpretation}). We then fit the common-mode corrected $\rm \lambda$LCs with the model described in Section \ref{noise_model}, using only time as a GP regressor. This is mainly to account for wavelength-dependent trends not removed by the common-mode correction, likely arising from wavelength-dependent differential atmospheric extinction between the target and comparison stars with changing airmass through the night (discussed in more detail in Section \ref{sec:method_comparison}). Since our main goal is to measure the wavelength-dependent transit depths, we fix the orbital inclination ($i$), orbital separation ($a/R_\star$), and mid transit time ($T_0$) to the best fit values for the corresponding white light curve from Section \ref{sec:WLC} (see Table \ref{tab:WLC_bestfitparams}), and the orbital period and eccentricity to literature values. We use a linear limb darkening law for each wavelength bin and fix the limb darkening coefficients to values pre-calculated with \texttt{PyLDTk} (approximating a top hat transmission function for each wavelength bin). We find that performing common-mode correction prior to fitting the Target/Comparison $\rm \lambda$LCs improved the precision of the measured transit depths in the R150 observations by $\sim15$\% on average per wavelength bin, compared to not performing common-mode correction. The Target/Comparison R150 $\rm \lambda$LCs along with their best fit models, detrended light curves, and residuals are shown in the top three panels of Figures \ref{fig:sptlcs_1}, \ref{fig:sptlcs_2}, \ref{fig:sptlcs_3}, and \ref{fig:sptlcs_6}. For the B600 observations, the target and comparison star light curves suffer from significantly different trends through the night, as already discussed in Section \ref{sec:WLC}. Hence, Target/Comparison normalisation contaminates the transit signal. This can be noticed by visually inspecting the B600 white light curves in Figure \ref{fig:WLC_all} and also the B600 $\rm \lambda$LCs. Nevertheless, as for the R150 $\rm \lambda$LCs, we apply \texttt{Conv1:$\lambda$LC} to the B600 $\rm \lambda$LCs by performing common-mode correction and then fitting the common-mode corrected Target/Comparison light curves. The resultant fits, detrended light curves, and residuals are shown in Figures \ref{fig:sptlcs_4} and \ref{fig:sptlcs_5}. \subsubsection{Fitting the spectroscopic light curves using the new method} \label{sec:binned_LC_new_method} We now describe the application \texttt{New:$\lambda$LC} of our new method to fitting the $\rm \lambda$LCs.
One of the motivations behind our new method is that, as we observe the planetary transit through a range of airmasses during a night, differential atmospheric extinction between the target and the comparison star across the optical wavelength range, due to the difference in brightness and/or spectral type between the stars, implies that simple normalisation of the target $\rm \lambda$LCs by the comparison $\rm \lambda$LCs introduces wavelength dependent systematics into the light curves. This is also evident as residual trends in the $\rm \lambda$LCs after the conventional common-mode correction performed in Section \ref{sec:binned_LC_old_method}. When inspecting the individual target and comparison star spectra, we find that for our GMOS observations the bluest end of the stellar spectrum suffers from $\sim$ 5 to 10 \% more extinction at high airmasses compared to the reddest end. For the B600 observations specifically, the differential atmospheric extinction between the target (fainter than the comparison star) and the comparison star is 5 \% larger at the bluest end of the spectrum than at the reddest end. The conventional way to mitigate this residual wavelength-dependent noise remaining after common-mode correction, as mentioned in Section \ref{sec:binned_LC_old_method}, is to fit the common-mode corrected Target/Comparison $\rm \lambda$LCs using a linear or quadratic function of airmass or time as the baseline, or a GP model with time as a regressor. In this conventional method, however, there is no straightforward way to quantify the additional systematics propagated to the light curves by the division by the comparison $\rm \lambda$LCs and the subsequent common-mode correction. The linear approach of the conventional method is also sub-optimal because of the non-linear wavelength dependent differences between the target and comparison $\rm \lambda$LCs, the lack of wavelength dependent information in the common-mode trend, and other potential non-linear differences between the target $\rm \lambda$LCs and the common-mode trend. With our new method of fitting the $\rm \lambda$LCs, we propose to neither normalise by the comparison star $\rm \lambda$LCs nor apply common-mode correction to the $\rm \lambda$LCs. We instead use the information on the time dependent systematics contained in the white light curves as one of the regressors in the GP noise model described in Section \ref{noise_model} when fitting the corresponding $\rm \lambda$LCs. This is possible through two different combinations of GP regressors: 1) time and the GP noise model of the white light curve (from Section \ref{sec:WLC}); 2) time and the normalised residuals between the white light curve and its best fit transit model (these residuals are the same as the conventional common-mode trend). The first combination effectively still uses information from the comparison star (which was used to fit the white target light curve and obtain the GP noise model in Section \ref{sec:WLC}). The second combination, however, for both the R150 and B600 observations, does not rely on the comparison star directly and simply leverages the information contained in the common-mode trend to inform the GP systematics model for each $\rm \lambda$LC. This combination is in part analogous to that employed by the \texttt{Divide-White} method (\citealt{Stevenson2014}), which uses the white target light curve residuals as a common-mode correction factor in combination with non-analytic models of wavelength dependent systematics derived from the comparison star $\rm \lambda$LCs.
In contrast to the conventional \texttt{Divide-White} method, we do not use any information from the comparison star $\rm \lambda$LCs: we simply subtract the transit model from the white target light curve and use the resulting residuals, i.e. the common-mode trend, as a regressor in the GP model for the individual $\rm \lambda$LCs. We eventually use the second combination (time and the common-mode trend) to fit the target $\rm \lambda$LCs. It should be noted that the white light curve transit parameters we obtain when not using the comparison stars at all (\texttt{New:WLC;No\_Comp} sub-rows in Table \ref{tab:WLC_bestfitparams}) are consistent with those obtained when using the comparison star as one of the GP regressors (\texttt{New:WLC} sub-rows in Table \ref{tab:WLC_bestfitparams}; see the detailed comparison in Section \ref{sec:method_comparison_WLC}). Hence the derived common-mode trend is consistent whether or not we use the comparison stars to fit the white light curves, and the common-mode trend is not a function of the comparison star light curve in our new method. As in the conventional method described in Section \ref{sec:binned_LC_old_method}, we use the same weighted averaged transit parameters (except for the limb darkening coefficient) for both the B600 and R150 observations to construct the transit models used to obtain the common-mode trend from the white light curves. The common-mode trend is then used as a GP regressor to fit the target $\rm \lambda$LCs. As in the conventional method, when fitting the $\rm \lambda$LCs we keep all the transit model parameters except the transit depth fixed to the weighted average best fit values derived from the white light curves, and also fix the linear limb darkening coefficients to the values pre-computed with \texttt{PyLDTk} for each spectral bin. Using the common-mode trend as a GP regressor to fit the target $\rm \lambda$LCs, as an alternative to subtracting it from the $\rm \lambda$LCs, is a novel approach, and we test its robustness using a transit injection and recovery test described in detail in Appendix \ref{cmode_gp_test}. We find from this test that using the common-mode trend as a GP regressor yields transmission spectra consistent with, and on average 25 \% more precise than, those obtained from the conventional common-mode correction. Through our transit injection test in Appendix \ref{cmode_gp_test} (see the right panel of Figure \ref{fig:cmode_gp_test}) we also justify the choice of using time as a GP regressor in addition to the common-mode trend when fitting the target $\rm \lambda$LCs. The common-mode trend by itself models the high frequency systematics in the target $\rm \lambda$LCs, which include the odd-even effect described in Section \ref{odd_even}. Time as an additional GP regressor models the wavelength dependent low frequency trend across the $\rm \lambda$LCs. It is possible to use additional GP regressors to fit the $\rm \lambda$LCs, but since we fit each $\rm \lambda$LC independently in this paper, it is not possible to perform model selection for all the $\rm \lambda$LCs together as done for the white light curves in Section \ref{model_selection}. Hence, we stick to the simplest choice of using only time as the additional GP regressor to model the wavelength-dependent trend. A future study jointly modelling the systematics of all the $\rm \lambda$LCs in both the time and wavelength dimensions could comprehensively explore the use of additional regressors.
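To make the procedure concrete, the following hedged sketch (reusing the synthetic objects from the white light curve example in Section \ref{noise_model}; the spectroscopic bin here is also synthetic) derives the common-mode trend and uses it, together with time, as a GP regressor for a single bin:
\begin{verbatim}
import numpy as np
import george
from george import kernels

# Hedged sketch reusing `t`, `f`, `params` and `transit` from the
# white light curve example; `lam_flux` stands in for one ~20 nm bin.
rng = np.random.default_rng(3)
white_model = transit.light_curve(params)
lam_flux = white_model + 4e-4 * rng.standard_normal(t.size)

# Common-mode trend: white-light residuals about the best fit transit
# model -- the same quantity Conv1 would subtract from each lambda-LC.
cmode = f - white_model

# New method: regress the target lambda-LC on [time, common-mode
# trend] instead of subtracting the trend; only the depth stays free.
X_lam = np.column_stack([t, cmode])
kernel = 1e-7 * kernels.Matern32Kernel(metric=[0.05**2, 1e-3**2],
                                       ndim=2)
gp_lam = george.GP(kernel, white_noise=np.log(3e-4**2),
                   fit_white_noise=True)
gp_lam.compute(X_lam)
r = lam_flux - transit.light_curve(params)
print(gp_lam.log_likelihood(r))
\end{verbatim}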
The target $\rm \lambda$LCs for both the B600 and R150 observations, along with their best fit models from the new method, detrended light curves, and residuals, are shown in the bottom three panels of Figures \ref{fig:sptlcs_1}, \ref{fig:sptlcs_2}, \ref{fig:sptlcs_3}, \ref{fig:sptlcs_6}, \ref{fig:sptlcs_4}, and \ref{fig:sptlcs_5}. The resulting transmission spectra are tabulated in Tables \ref{tab:r150_ts_targ} to \ref{tab:b600_ts_targ}, and shown in Figures \ref{fig:r150_ts} and \ref{fig:b600_ts}. In the following sections, we compare the transmission spectra of HAT-P-26b constructed from the best fit wavelength-dependent transit depths of each observation obtained with the conventional method and with the new method introduced in this paper, and we interpret and discuss them in the context of previous transmission spectroscopy measurements of HAT-P-26b. \begin{table*} \begin{center} \caption{\texttt{New:$\lambda$LC}, R150 : Wavelength dependent transit depths (in ppm) for the individual GMOS-R150 observations (marked by the columns) and combined from all observations obtained using the new method described in Section \ref{sec:binned_LC}. } \label{tab:r150_ts_targ} \begin{tabular}{cccccc} \hline Wavelength [\AA] & \multicolumn{5}{c}{Transit Depth [ppm]} \\ & 1 & 2 & 3 & 6 & Combined \\ \hline 5301 - 5501 & 4530 $\pm$ 473 & 4813 $\pm$ 850 & 4897 $\pm$ 526 & 4404 $\pm$ 995 & 4681 $\pm$ 301 \\ 5501 - 5701 & 4530 $\pm$ 406 & 5038 $\pm$ 809 & 4684 $\pm$ 385 & 5034 $\pm$ 922 & 4682 $\pm$ 252 \\ 5701 - 5999 & 4958 $\pm$ 339 & 5073 $\pm$ 578 & 5199 $\pm$ 397 & 4440 $\pm$ 593 & 4971 $\pm$ 216 \\ 5999 - 6199 & 5099 $\pm$ 277 & 4773 $\pm$ 464 & 4797 $\pm$ 99 & 4608 $\pm$ 576 & 4830 $\pm$ 90 \\ 6199 - 6600 & 4867 $\pm$ 183 & 4655 $\pm$ 332 & 5177 $\pm$ 176 & 4663 $\pm$ 414 & 4982 $\pm$ 106 \\ 6600 - 6800 & 5308 $\pm$ 364 & 5047 $\pm$ 191 & 4962 $\pm$ 503 & 5356 $\pm$ 316 & 5148 $\pm$ 138 \\ 6800 - 7000 & 5037 $\pm$ 609 & - & 4642 $\pm$ 369 & - & 4752 $\pm$ 310 \\ 6799 - 7399 & - & 4896 $\pm$ 91 & - & 4937 $\pm$ 183 & 4903 $\pm$ 87 \\ 7799 - 7999 & - & 4968 $\pm$ 211 & - & 5018 $\pm$ 411 & 4978 $\pm$ 184 \\ 7999 - 8201 & - & 4831 $\pm$ 273 & - & 5205 $\pm$ 364 & 4954 $\pm$ 211 \\ 8201 - 8801 & - & 4836 $\pm$ 240 & - & 4740 $\pm$ 391 & 4809 $\pm$ 204 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{\texttt{Conv1:$\lambda$LC}, R150 : Wavelength dependent transit depths (in ppm) for the individual GMOS-R150 observations (marked by the columns) and combined from all observations obtained using the conventional method described in Section \ref{sec:binned_LC}.
} \label{tab:r150_ts_targ_by_comp} \begin{tabular}{cccccc} \hline Wavelength [\AA] & \multicolumn{5}{c}{Transit Depth [ppm]} \\ & 1 & 2 & 3 & 6 & Combined \\ \hline 5301 - 5501 & 3947 $\pm$ 1112 & 4637 $\pm$ 673 & 4886 $\pm$ 766 & 4853 $\pm$ 331 & 4762 $\pm$ 272 \\ 5501 - 5701 & 5145 $\pm$ 413 & 4951 $\pm$ 474 & 4850 $\pm$ 81 & 4420 $\pm$ 287 & 4837 $\pm$ 77 \\ 5701 - 5999 & 5396 $\pm$ 311 & 4921 $\pm$ 290 & 4875 $\pm$ 109 & 4729 $\pm$ 228 & 4892 $\pm$ 100 \\ 5999 - 6199 & 5457 $\pm$ 220 & 4546 $\pm$ 202 & 4776 $\pm$ 110 & 4907 $\pm$ 334 & 4867 $\pm$ 82 \\ 6199 - 6600 & 5222 $\pm$ 183 & 4601 $\pm$ 159 & 4716 $\pm$ 65 & 4678 $\pm$ 203 & 4760 $\pm$ 54 \\ 6600 - 6800 & 5178 $\pm$ 171 & 4936 $\pm$ 243 & 4491 $\pm$ 177 & 4722 $\pm$ 199 & 4841 $\pm$ 98 \\ 6800 - 7000 & 4943 $\pm$ 1096 & - & 4400 $\pm$ 353 & - & 4441 $\pm$ 321 \\ 6799 - 7399 & - & 4569 $\pm$ 111 & - & 4989 $\pm$ 93 & 4844 $\pm$ 74 \\ 7799 - 7999 & - & 5074 $\pm$ 193 & - & 4709 $\pm$ 275 & 4976 $\pm$ 140 \\ 7999 - 8201 & - & 5115 $\pm$ 223 & - & 4667 $\pm$ 293 & 4975 $\pm$ 161 \\ 8201 - 8801 & - & 4654 $\pm$ 214 & - & 4271 $\pm$ 289 & 4525 $\pm$ 164 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{\texttt{New:$\lambda$LC}, B600 : Wavelength dependent transit depths (in ppm) for the individual GMOS-B600 observations (marked by the columns) and combined from all observations obtained using the new method described in Section \ref{sec:binned_LC}. } \label{tab:b600_ts_targ} \begin{tabular}{cccc} \hline Wavelength [\AA] & \multicolumn{3}{c}{Transit Depth [ppm]} \\ & 4 & 5 & Combined \\ \hline 4900 - 5100 & 4925 $\pm$ 75 & 4266 $\pm$ 550 & 4913 $\pm$ 75 \\ 5100 - 5300 & 5197 $\pm$ 418 & 4755 $\pm$ 448 & 4992 $\pm$ 306 \\ 5300 - 5500 & 4879 $\pm$ 324 & 4713 $\pm$ 289 & 4787 $\pm$ 216 \\ 5500 - 5700 & 4925 $\pm$ 255 & 4819 $\pm$ 393 & 4894 $\pm$ 214 \\ 5700 - 6000 & 4735 $\pm$ 197 & 4784 $\pm$ 212 & 4757 $\pm$ 145 \\ 6000 - 6200 & 5026 $\pm$ 41 & 4401 $\pm$ 223 & 5006 $\pm$ 40 \\ 6200 - 6400 & 4830 $\pm$ 187 & 4759 $\pm$ 381 & 4816 $\pm$ 168 \\ 6400 - 6600 & 4583 $\pm$ 226 & 5091 $\pm$ 307 & 4761 $\pm$ 182 \\ 6600 - 6800 & 4760 $\pm$ 410 & 5196 $\pm$ 398 & 4984 $\pm$ 286 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{\texttt{Conv1:$\lambda$LC}, B600 : Wavelength dependent transit depths (in ppm) for the individual GMOS-B600 observations (marked by the columns) and combined from all observations obtained using the conventional method described in Section \ref{sec:binned_LC}.
} \label{tab:b600_ts_corr} \begin{tabular}{cccc} \hline Wavelength [\AA] & \multicolumn{3}{c}{Transit Depth [ppm]} \\ & 4 & 5 & Combined \\ \hline 4900 - 5100 & 2577 $\pm$ 1280 & 4572 $\pm$ 1373 & 3504 $\pm$ 937 \\ 5100 - 5300 & 4151 $\pm$ 791 & 5174 $\pm$ 774 & 4674 $\pm$ 553 \\ 5300 - 5500 & 4199 $\pm$ 553 & 4510 $\pm$ 429 & 4393 $\pm$ 339 \\ 5500 - 5700 & 3776 $\pm$ 1130 & 4974 $\pm$ 294 & 4898 $\pm$ 284 \\ 5700 - 6000 & 4825 $\pm$ 229 & 4868 $\pm$ 246 & 4845 $\pm$ 168 \\ 6000 - 6200 & 4976 $\pm$ 166 & 5041 $\pm$ 127 & 5017 $\pm$ 101 \\ 6200 - 6400 & 4945 $\pm$ 283 & 4968 $\pm$ 212 & 4960 $\pm$ 170 \\ 6400 - 6600 & 4819 $\pm$ 238 & 4906 $\pm$ 237 & 4862 $\pm$ 168 \\ 6600 - 6800 & 4860 $\pm$ 383 & 4903 $\pm$ 260 & 4889 $\pm$ 215 \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs1_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs1_t_sptlcs_fit_residuals.png} \caption{Spectroscopic light curves for observation 1 (R150) fit using the conventional method (\texttt{Conv1:$\lambda$LC}, top three panels) and the new method introduced in this paper (\texttt{New:$\lambda$LC}, bottom three panels). The leftmost panel for each method shows the best fit to the light curves for each wavelength bin, the middle panel shows the detrended light curves, and the rightmost panel shows the corresponding residuals, their histograms, and the RMS of their scatter. The target $\lambda$LCs show a wavelength dependent low frequency trend due to changing airmass through the night. } \label{fig:sptlcs_1} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs2_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs2_t_sptlcs_fit_residuals.png} \caption{Same as Figure \ref{fig:sptlcs_1} for observation 2 (R150). } \label{fig:sptlcs_2} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs3_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs3_t_sptlcs_fit_residuals.png} \caption{Same as Figure \ref{fig:sptlcs_1} for observation 3 (R150). } \label{fig:sptlcs_3} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs6_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs6_t_sptlcs_fit_residuals.png} \caption{Same as Figure \ref{fig:sptlcs_1} for observation 6 (R150). } \label{fig:sptlcs_6} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs4_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs4_t_sptlcs_fit_residuals.png} \caption{Spectroscopic light curves for observation 4 (B600) fit using the conventional method (\texttt{Conv1:$\lambda$LC}, top three panels) and the new method introduced in this paper (\texttt{New:$\lambda$LC}, bottom three panels). The leftmost panel for each method shows the best fit to the light curves for each wavelength bin, the middle panel shows the detrended light curves, and the rightmost panel shows the corresponding residuals, their histograms, and the RMS of their scatter. The target $\lambda$LCs show a wavelength dependent low frequency trend due to changing airmass through the night.
} \label{fig:sptlcs_4} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs5_c_sptlcs_fit_residuals.png} \includegraphics[scale=0.4]{figures/wavelength_dependent_lightcurves/Obs5_t_sptlcs_fit_residuals.png} \caption{Same as Figure \ref{fig:sptlcs_4} for observation 5 (B600). } \label{fig:sptlcs_5} \end{figure*} \section{Results and Discussion} \label{sec:discuss} \subsection{Comparison of the two Methods and Implications} \label{sec:method_comparison} \subsubsection{Comparing the white light curve fits} \label{sec:method_comparison_WLC} We first compare the performance of the conventional and new methods applied to fitting the white light curves. We compare the three cases \texttt{Conv1:WLC}, \texttt{New:WLC}, and \texttt{New:WLC;No\_Comp} used to fit the white transit light curves for each observation, highlighted in Table \ref{tab:WLC_bestfitparams}. From Table \ref{tab:WLC_bestfitparams} we find that the new method (\texttt{New:WLC} and \texttt{New:WLC;No\_Comp}) yields transit parameters consistent with the conventional method \texttt{Conv1:WLC} to within 2$\sigma$. \texttt{New:WLC} yields on average lower residual RMS than \texttt{Conv1:WLC} for all observations. For observations 1, 2, 4, and 5, \texttt{New:WLC} also yields marginally smaller (by $\sim$20\% on average) uncertainties on $R_{\rm P}/R_\star$ than \texttt{Conv1:WLC}. With \texttt{New:WLC;No\_Comp}, when not using the comparison star at all, we obtain marginally larger (by $\sim$10-20\% on average) $R_{\rm P}/R_\star$ uncertainties for all observations except observations 1, 3, and 5. For observation 5, \texttt{New:WLC;No\_Comp} gives a $\sim$80\% smaller uncertainty on $R_{\rm P}/R_\star$. For observations 1 and 3, \texttt{New:WLC;No\_Comp} leads to an order of magnitude larger uncertainty on $R_{\rm P}/R_\star$. This is because for these two R150 observations the odd-even effect is particularly strong, and using the comparison star light curve, either as a GP regressor or linearly (as in \texttt{Conv1:WLC}), is crucial to account for the odd-even effect in the target light curve. For the B600 observations (4 and 5) specifically, the comparison star light curves have time dependent trends significantly different from the target star light curve, due to the non-ideal PA of the observational setup, which significantly contaminates the transit signal in the resulting Target/Comparison light curves (as seen in Figure \ref{fig:WLC_all}). From a visual inspection of the B600 light curves in Figure \ref{fig:WLC_all}, Target/Comparison normalisation corrects for the odd-even effect in the transit light curve but adds an additional low frequency trend not present in the original target transit light curve. This effect is especially strong before and during the transit. Nevertheless, the conventional method \texttt{Conv1:WLC}, fitting the B600 Target/Comparison light curves using a GP with time and airmass as regressors, retrieves transit parameters consistent with the R150 observations. Notably, \texttt{New:WLC} achieves lower uncertainties on $R_{\rm P}/R_\star$ than \texttt{Conv1:WLC}. Not using the comparison star with \texttt{New:WLC;No\_Comp} yields lower (observation 5) or marginally larger but comparable (observation 4) uncertainties on $R_{\rm P}/R_\star$.
We conclude from the comparison across the aforementioned three cases of fitting the white light curves that both applications of our proposed new method perform consistently with, and in some instances better than, the conventional method. We also conclude that in most instances it is possible to detrend the target light curves and achieve good precision on the transit parameters even without using the comparison stars at all. \subsubsection{Comparing the spectroscopic light curve fits} \label{sec:method_comparison_sptlc} We now compare the transmission spectra obtained from the conventional method \texttt{Conv1:$\lambda$LC} and the new method \texttt{New:$\lambda$LC} for fitting the $\lambda$LCs. We emphasize here that when not using the comparison star at all to fit the white target light curves, we obtain transit parameters, and hence a common-mode trend, consistent with those obtained when using the comparison star (also see Section \ref{sec:binned_LC_old_method}). Hence all conclusions below about the new method are valid whether the common-mode trend is derived using the comparison star indirectly, as in \texttt{New:WLC}, or without it, as in \texttt{New:WLC;No\_Comp}. For both the B600 and R150 observations, the transmission spectra shown in Figures \ref{fig:r150_ts} and \ref{fig:b600_ts} and the corresponding wavelength-dependent transit depth values in Tables \ref{tab:r150_ts_targ} and \ref{tab:r150_ts_targ_by_comp} reveal that the individual and combined transmission spectra from \texttt{Conv1:$\lambda$LC} and \texttt{New:$\lambda$LC} are on average consistent within their 2$\sigma$ uncertainties. \texttt{New:$\lambda$LC} on average yields $\sim 40\%$ smaller RMS of the residuals per wavelength bin (see Figures \ref{fig:sptlcs_1} to \ref{fig:sptlcs_5}). The per wavelength bin uncertainties on the transmission spectra are on average $\sim 30\%$ larger from \texttt{New:$\lambda$LC} than from \texttt{Conv1:$\lambda$LC} for the R150 observations. For the B600 observation 4, \texttt{New:$\lambda$LC} yields $\sim 50\%$ smaller uncertainties, especially for the bluest wavelength bins. \texttt{New:$\lambda$LC} also performs similarly well in terms of precision for the three bluest bins of the B600 observation 5, but yields $\sim 30\%$ larger uncertainties for the redder bins. This difference in uncertainties on the transmission spectra points towards fundamental differences between the two methods in their approach to dealing with systematics, which we elaborate on below. One clear difference is the number of free hyperparameters used for the GP models in the two methods. For \texttt{Conv1:$\lambda$LC}, the GP model uses only one regressor (time) and hence two hyperparameters (amplitude and length scale; see Equations \ref{kernel} and \ref{eq:Rij} in Section \ref{noise_model}). \texttt{New:$\lambda$LC} in comparison uses two regressors (time and the common-mode trend) and hence three hyperparameters (a common amplitude, and one length scale hyperparameter for each of the regressors). Using a more flexible model with more hyperparameters is one of the reasons behind the larger uncertainties in the transmission spectra from \texttt{New:$\lambda$LC}. Note that \texttt{Conv1:$\lambda$LC}, before fitting the GP model, also involves two additional steps: dividing by the comparison $\lambda$LCs and subtracting the common-mode trend.
Both of these steps are linear corrections which do not explicitly propagate the uncertainties arising from non-linear differences between the target $\lambda$LCs and the comparison $\lambda$LCs or the common-mode trend. It can be seen from the target $\lambda$LCs in Figures \ref{fig:sptlcs_1} to \ref{fig:sptlcs_5} that the target star light curves suffer from a low frequency trend in time that varies with wavelength, due to wavelength-dependent extinction changing with airmass. These low frequency trends still remain after division by the comparison $\lambda$LCs and subtraction of the common-mode trend, as seen in the Target/Comparison $\lambda$LCs in Figures \ref{fig:sptlcs_1} to \ref{fig:sptlcs_5}. There is also a high frequency trend (e.g. the odd-even effect described in Section \ref{odd_even}) which affects every wavelength bin in a similar manner. Our new method \texttt{New:$\lambda$LC} for fitting the target $\lambda$LCs does not use the comparison $\lambda$LCs and accounts for the trends at both frequencies, as well as their wavelength dependence, in a single step. The Bayesian framework of GPs propagates the uncertainties in the information from the common-mode trend as relevant to each target $\lambda$LC. Specifically, the common-mode trend accounts for the high frequency trends, while time accounts for the low frequency trend varying with wavelength (as demonstrated in Appendix \ref{cmode_gp_test}). Moreover, when not using the comparison star $\lambda$LCs, using the common-mode trend as a GP regressor can potentially provide better precision than conventional common-mode correction, as we demonstrate in Appendix \ref{cmode_gp_test}. By not using the comparison star $\lambda$LCs, we prevent the possible introduction of additional systematics due to differing instrumental systematics or differential atmospheric extinction between the target and comparison stars with changing airmass. This is supported by the superior performance of \texttt{New:$\lambda$LC}, in terms of the precision and accuracy of the transit depths, for the bluest bins of observation 4 as compared to \texttt{Conv1:$\lambda$LC}. \subsubsection{Implications of measuring transmission spectra without using comparison stars} In the previous subsection, we showed that good precision on fits to both the white transit light curves and the $\lambda$LCs can be achieved even when not using the comparison star at all. We highlight the usefulness of this aspect of our method for cases when the comparison star is not a suitable reference for the systematics in the target star light curve, either due to large differences in brightness or spectral type, or due to issues with the observational setup, as is the case for our GMOS-B600 observations. In fact, our new method essentially removes the transit signal from the white target light curves and uses the information in the residuals (the common-mode trend) to fit the target $\lambda$LCs. In this context, a further step could be to skip fitting the white target light curves altogether: previously measured planet transit parameters from other observatories (e.g. TESS or HST/STIS, in bandpasses significantly overlapping with GMOS) and theoretical priors on the stellar limb darkening could be used to compute the transit signal, from which the common-mode trend used to fit the $\lambda$LCs can be obtained.
A caveat of this approach of bypassing the white light curve fit arises for observations of planets with variable broadband transit depths due to, e.g., stellar host variability over multiple epochs. In such cases, it would be essential to first fit the white light curve transit depth for each individual epoch, so as to obtain a common-mode trend normalised to the transit depth of that epoch, leading to accurate absolute transmission spectra. In particular, for active host stars, instead of using the same transit depth to derive the common-mode trend across all epochs (as we do for HAT-P-26 in this paper), we advise using the individual best fit white light curve transit depths for each epoch. Our new method of extracting transmission spectra solely from target star light curves has further implications for ground-based follow-up atmospheric observations of exoplanets orbiting bright host stars, especially those discovered by TESS. In particular, the majority of TESS host stars are bright in the optical, with median V$_{mag}$ $\sim$ 11 as indicated by simulations from \cite{Barclay2018}, and may not have a choice of comparison stars of similar brightness and spectral type within the limited field of view of up to 10 arcminutes of most ground-based multi-object spectrographs. We recommend using \texttt{New:WLC;No\_Comp} followed by \texttt{New:$\lambda$LC} for obtaining ground-based transmission spectra of such exoplanets orbiting bright stars. Furthermore, another strength of our new method is that it can potentially mitigate significant second order colour dependent extinction effects arising due to differences in target and comparison star spectral types (\citealt{Young1991}, \citealt{Blake2011}). \begin{figure*} \centering \includegraphics[scale=0.45]{figures/R150_TS_fixed_to_GMOS_revised.png} \caption[1]{Transmission spectra for the GMOS-R150 observations obtained using \texttt{Conv1:$\lambda$LC} and \texttt{New:$\lambda$LC} (slightly shifted in wavelength for clarity) described in Section \ref{sec:binned_LC}. The average GMOS optical transit depth (corresponding to the weighted average white light curve $R_{\rm P}/R_\star$, $0.0701^{2} = 4914$ ppm), which is consistent with the median HST STIS/G750L transit depth from \cite{Wakeford2017}, is marked by the dashed line. For each observation, in black are shown the spectra obtained through the conventional method \texttt{Conv1:$\lambda$LC} of fitting the Target/Comparison $\rm \lambda$LCs using a GP model with time as a regressor. In red are shown the transmission spectra obtained using the new method \texttt{New:$\lambda$LC}, which extracts the transmission spectra from the Target $\rm \lambda$LCs alone using a GP model with time and the common-mode trend as regressors. Overplotted in green is the observed stellar spectrum of the target star (HAT-P-26).} \label{fig:r150_ts} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.45]{figures/B600_TS_fixed_to_GMOS_revised.png} \caption[1]{Transmission spectra for the GMOS-B600 observations obtained using \texttt{Conv1:$\lambda$LC} and \texttt{New:$\lambda$LC} (slightly shifted in wavelength for clarity) as described in Section \ref{sec:binned_LC}. The average GMOS optical transit depth (corresponding to the weighted average white light curve $R_{\rm P}/R_\star$, $0.0701^{2} = 4914$ ppm), which is consistent with the median HST STIS/G750L transit depth from \cite{Wakeford2017}, is marked by the dashed line.
For each observation, in black are shown the spectra obtained through the conventional method \texttt{Conv1:$\lambda$LC} of fitting the Target/Comparison $\rm \lambda$LCs using a GP model with time as a regressor. In red are shown the transmission spectra obtained using the new method \texttt{New:$\lambda$LC}, which extracts the transmission spectra from the Target $\rm \lambda$LCs alone using a GP model with time and the common-mode trend as regressors. Both B600 observations were obtained using a non-ideal PA, which manifests as widely different time dependent trends in the target and comparison $\rm \lambda$LCs. This leads to contamination of the transit signal in \texttt{Conv1:$\lambda$LC} (black points), especially for the bluest wavelength bins, as seen here for both B600 observations. Overplotted in green is the observed stellar spectrum of the target star (HAT-P-26). } \label{fig:b600_ts} \end{figure*} \subsection{Interpretation of the Optical to NIR Transmission Spectrum} \label{sec:interpretation} We generate the combined transmission spectrum by weighted averaging of the wavelength-dependent transit depths across common wavelength bins covered by the individual R150 and B600 observations, taking the inverse squared uncertainties of the transit depths ($w_i = 1/\sigma_i^2$) as weights for the respective observations. The combined transmission spectrum values from both methods are shown in Tables \ref{tab:r150_ts_targ} and \ref{tab:r150_ts_targ_by_comp} for the R150 observations, and in Tables \ref{tab:b600_ts_targ} and \ref{tab:b600_ts_corr} for the B600 observations. Since for the B600 observations \texttt{New:$\lambda$LC} performs much better than \texttt{Conv1:$\lambda$LC}, we only consider the combined transmission spectra obtained from \texttt{New:$\lambda$LC} for further comparison with atmospheric models. We use the open source atmospheric modelling code \texttt{platon} (\citealt{Zhang2019}, \citeyear{Zhang2020}), based on \texttt{ExoTransmit} (\citealt{Kempton2016}), to conduct a simple retrieval analysis for the atmosphere of HAT-P-26b, interpreting our combined GMOS observations in conjunction with the near infrared transmission spectra measurements from HST and Spitzer reported by \cite{Wakeford2017}. For the self-consistent retrieval framework of \texttt{platon} we consider equilibrium chemistry models for three cases: 1) both metallicity and C/O fixed to solar values; 2) both metallicity and C/O free to fit; and 3) metallicity free and C/O fixed to the solar value. For all three cases we also leave free the pressure level of a grey opacity cloud deck. Since early measurements of the chromospheric activity indicator S$_{HK}$ index (\citealt{Hartman2011}) and subsequent photometric follow-up observations by \cite{vonEssen2019} show no signs of activity or significant spot-modulated variability of the stellar photospheric brightness, we do not include contributions from the transit light source effect (\citealt{Rackham2018}) in our retrieval analysis. In the stellar photometry reported by \cite{vonEssen2019}, no signatures of spot modulation of the stellar flux are observed, and the upper limit on the V band photometric variability of HAT-P-26 (a K1 dwarf) is 2.3 parts per thousand, or 0.23 \% (the maximum scatter in the light curves). Referring to the empirical relationship between the peak-to-peak optical variability amplitude and the spot covering fraction for K dwarfs from \cite{Rackham2019}, we note that $\sim$0.2 \% variability would correspond to less than a 1 \% spot covering fraction.
Considering the upper limit of 1 \% on the spot covering fraction, we use Equation 3 from \cite{Rackham2019} to estimate the upper limit on the amplitude of the wavelength dependent stellar contamination factor on the transmission spectrum, and find it to be 0.9901. Taking the average transit depth of HAT-P-26b to be around 5000 ppm, this $\sim$1\% fractional change corresponds to a maximum offset of $\sim$50 ppm on the transmission spectrum, which is about a factor of 5 to 10 smaller than the average precision of the transmission spectrum in the individual epochs. We hence conclude that, given the precision of our observations, we would not be able to detect offsets due to stellar contamination at the level of the available upper limits from stellar photometry. Note that our GMOS-B600 observation 4 was taken at the same time (on 12/03/2016 UT) as one of the HST/WFC3 observations of \cite{Wakeford2017}, and the consistency of the median wavelength-dependent transit depths between these two observations, taken simultaneously from two different instruments, further underscores the suitability of combining them. Hence, we do not introduce any vertical offset between the measurements from GMOS, HST, and Spitzer in the further analysis. We find that the transmission spectrum of HAT-P-26b from the combined GMOS, HST, and Spitzer measurements is best explained by a model corresponding to a solar metallicity and solar C/O atmosphere with a grey opacity cloud deck at log$_{10}$P (bar) = -2.5$^{+0.53}_{-0.28}$, which is consistent with the pressure level of the cloud deck constrained by \cite{Wakeford2017} (log$_{10}$P (bar) $\sim$ -2) using STIS/G750L observations. The $\chi^{2}_{red}$ for the best fit model with a grey opacity cloud is 1.68, compared to 17.4 for a cloud-free model. The resulting best fit model, along with the cloud-free model for comparison, is shown in Figure \ref{fig:platon_model_TS}. Given the lack of coverage at the bluest optical end of the transmission spectrum, due to the drop in throughput of the GMOS observations blueward of 490 nm, our observations cannot constrain the signatures of the tentative Rayleigh scattering predicted by \cite{MacDonald2019}. We can neither confirm nor rule out the $\sim$ 400 ppm TiH feature at 0.54 $\mu$m predicted by \cite{MacDonald2019}, since our precision around this region (see Tables \ref{tab:r150_ts_targ} to \ref{tab:b600_ts_targ}) is comparable to the amplitude of the feature, and our seeing limited resolution restricts us to 20 nm wide wavelength bins. \begin{figure*} \centering \includegraphics[scale=0.45]{figures/combined_TS_with_platon_model_fit_Z_CO_revised.png} \caption[1]{Combined optical transmission spectrum from the 4 GMOS-R150 (red points) and 2 GMOS-B600 (blue points) observations obtained from \texttt{New:$\lambda$LC}, presented in Section \ref{sec:binned_LC_new_method}, along with the previous measurements in the optical and near infrared from HST/STIS-G750L, HST/WFC3-G102 and WFC3-G141, and in the infrared from Spitzer, as presented by \cite{Wakeford2017}. Overplotted is the best fit transmission spectroscopy model obtained using \texttt{platon}, which has a cloud deck at 10$^{-2.5}$ bar ($\approx$3.2 millibar), in solid green, and a cloud-free model in dotted green for comparison.
} \label{fig:platon_model_TS} \end{figure*} \subsection{Transit Timing Variations} \label{sec:ttv} Ground based transit observations from multi object spectrographs like GMOS can provide high precision (of the order of tens of seconds) on the mid-transit time, as a result of the high signal-to-noise nature of the observations and the continuous sampling of the transit, including the ingress and egress, without gaps. An example is the mid-transit times from the Gemini/GMOS observations of WASP-4b (\citetalias{Huitson2017}), which, when combined with other timing measurements including those from TESS by \citeauthor{Bouma2019} (\citeyear{Bouma2019}, \citeyear{Bouma2020}), have been used to study the transit timing variations of the planet at high precision. For HAT-P-26b, we obtain an average precision of $\sim 25$ seconds on the mid-transit times across the 6 GMOS transit observations, as shown in Table \ref{tab:WLC_bestfitparams} (mid-transit times from \texttt{New:WLC}). We combine our mid-transit times from \texttt{New:WLC} with those compiled by \cite{vonEssen2019}. The mid-transit time measured from GMOS for observation 4 is consistent within the 1 $\sigma$ uncertainty with that measured from the simultaneous HST/WFC3 transit observation of \cite{Wakeford2017}. Taking the zeroth epoch to be the same as that considered by \cite{vonEssen2019}, we compute the observed minus calculated (O -- C) values for the GMOS mid-transit times, assuming a linear ephemeris for the calculated (predicted) mid-transit times. To these O -- C values, combined with the measurements from \cite{vonEssen2019}, we then fit a sinusoidal model (of the form $\rm A_{TTV}\,\sin(2\pi E/P + \phi_{TTV})$ in our notation, with $E$ the epoch number) with three free parameters: the amplitude of the TTVs $A_{TTV}$, the period (P, in number of epochs), and a phase value ($\phi_{TTV}$), using \texttt{emcee}. The resulting best fit, together with fits from random samples from the posteriors computed by \texttt{emcee}, is shown in Figure \ref{fig:ttv}. Our best fit sinusoid has an amplitude of $\rm A_{\rm TTV}$ $= 1.21^{+0.040}_{-0.039}$ minutes, with period P $ = 366.016^{+14.76}_{-14.19}$ epochs and $\phi_{TTV}$ $= -2.74^{+0.38}_{-0.37}$. The reduced chi-squared value (with 22 degrees of freedom) for the sinusoidal fit to the O $-$ C values, including the GMOS and \cite{vonEssen2019} measurements, is $\sim$5, as compared to $\sim$288 for O $-$ C = 0, i.e. for the case in which the measured O $-$ C values would be consistent with a linear ephemeris. This is consistent with the indication of TTVs for HAT-P-26b previously reported by \cite{vonEssen2019} and also indicated by \cite{Stevenson2016}, and motivates future follow-up using both transit and secondary eclipse measurements to determine the physical explanation behind the TTVs. \begin{figure*} \centering \includegraphics[scale=0.5]{figures/O_C_TTV_2_revised.png} \caption[1]{Observed minus calculated mid-transit times (O -- C, from a linear ephemeris) from the mid-transit times presented by \cite{vonEssen2019} (black points, including a compilation of all the previously published mid-transit times and those measured by them) and those presented in this paper (red and blue points, with numbers corresponding to the observation number in Table \ref{obsstats}).
Overplotted in dashed black is the best fit sinusoidal model to only the O -- C values from \cite{vonEssen2019}, in solid black the best fit sinusoidal model to the O -- C values from \cite{vonEssen2019} and the GMOS observations, and in orange the randomly sampled fits from the MCMC posteriors.} \label{fig:ttv} \end{figure*} \section{Conclusions} \label{sec:conclusions} We have introduced a new method to model systematics in ground-based spectrophotometric observations that allows for a generalised non-linear mapping between the target star transit light curves and the time series used as regressors to detrend them. We test and demonstrate the performance of the new method, in comparison to the conventional method, by applying both methods to the ground-based optical transmission spectra of the warm Neptune HAT-P-26b obtained from 6 transits observed by Gemini/GMOS as part of our ground-based survey of exoplanet atmospheres in the optical. We summarise the key aspects and conclusions for the new method we introduce in this paper: 1) With the new method, we fit the systematics and transit signal in the target star white light curves directly, using a GP regression model conditioned on various combinations of regressors, which include the simultaneously observed comparison star white light curve. This is a generalisation of conventional linear methods, which have used comparison star white light curves as a linear regressor. The new method, when using comparison star white light curves as a GP regressor, lets the GP determine the underlying non-linear mapping between the comparison and target star light curves. This approach utilises the information about systematics from the comparison star light curves without introducing additional uncertainties, as is often the case when doing differential photometry. It also propagates uncertainties appropriately within the Bayesian framework of GPs when using the comparison star light curve as a GP regressor. \\ 2) The application of the new method \texttt{New:WLC;No\_Comp} to fit the target white light curves without using the comparison light curves emulates a scenario in which suitable comparison stars may not be available. We show that even in the absence of suitable comparison stars, accurate transit parameters with comparable precision can be obtained from the white target transit light curve fit using our new method.\\ 3) The new method, when applied to $\lambda$LCs, lets the GP determine the non-linear mapping between the common-mode trend derived from the white target light curve and the individual target $\lambda$LCs. We show, by application to observed and transit-injected $\lambda$LCs, that this approach is robust and achieves accurate transmission spectra without needing to perform normalization by the comparison $\lambda$LCs. From the transit injection test, we conclude that using the common-mode trend as a GP regressor achieves $\sim$ 20\% better precision on the transmission spectra compared to the conventional common-mode correction. \\ 4) Except for the bluest bins in the B600 observations, the new method yields marginally higher uncertainties on the transmission spectra. We interpret this increase in uncertainties as an outcome of fitting for both low and high frequency systematics in the $\lambda$LCs in one step and propagating the uncertainties in the process.
In contrast, the conventional linear method, with its multiple steps of dividing by the comparison $\lambda$LCs and subtracting the common-mode trend, does not explicitly propagate uncertainties at each step.\\ 5) In the context of the bluest bins in the B600 observations, where in addition to effects due to non-ideal PA we also expect the largest differential atmospheric extinction between the target and comparison star spectra due to changing airmass, we show that our new method is able to extract the transmission spectra in scenarios where the conventional Target/Comparison normalisation strongly contaminates the transit signal. \\ 6) We demonstrate that the target white light curve alone can be used to model the time and wavelength-dependent systematics in the spectroscopic target light curves, albeit at the cost of $\sim$30 \% larger uncertainties on the transmission spectra. This approach can ultimately be used for future optical and near infrared ground-based atmospheric characterisation of exoplanets orbiting bright host stars with little or no choice of comparison stars of similar brightness and spectral type in the instrument field of view. \\ 7) The current prescription of the new method as applied to $\lambda$LCs in this paper fits each $\lambda$LC independently and hence does not explicitly model potential covariance in the wavelength dimension. A possible future extension of our method, especially when applied to medium resolution spectrophotometric observations, is to jointly model the $\lambda$LCs, accounting for potential covariance due to systematics in the wavelength dimension. \\ Based on our analyses, we reach the following conclusions about the atmosphere of HAT-P-26b: 1) Through an equilibrium chemistry retrieval analysis of the combined GMOS optical observations with the near infrared HST and Spitzer observations, we conclude that the terminator of HAT-P-26b is consistent with a solar metallicity and solar C/O atmosphere with a grey opacity cloud layer at log$_{10}$P (bar) = -2.5$^{+0.53}_{-0.28}$, obscuring the alkali absorption features in the optical and suppressing the water absorption features in the near infrared, consistent with the findings of \cite{Wakeford2017}. The low resolution nature of our observations and the comparatively low precision on the transit depths preclude confirmation of the presence of the metal hydride features predicted by \cite{MacDonald2019}.\\ 2) Based on the mid-transit times constrained by the GMOS transits, we find further indications of TTVs for HAT-P-26b, in agreement with previous studies. This warrants future follow-up primary and secondary eclipse observations of the planet to investigate the physical origin of the TTVs. \\ Finally, our results add to the growing library of optical transmission spectra of exoplanets obtained using ground-based low-resolution spectrographs. The precision and accuracy of our measurements, combined with the repeatability of the observations over multiple epochs, emphasize the importance of optical ground-based observations in complementing the upcoming observations of transiting exoplanets in the infrared using JWST. \section*{Acknowledgements} Based on observations obtained at the Gemini Observatory (acquired through the Gemini Observatory Archive and Gemini Science Archive), which is operated by the Association of Universities for Research in Astronomy, Inc.
(AURA), under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). Based in part on Gemini observations obtained from the National Optical Astronomy Observatory (NOAO) Prop. ID: 2012B-0398; PI: J.-M. D\'{e}sert. We thank the anonymous reviewer for their thorough comments and feedback, which helped in improving the paper. V.P. acknowledges stimulating discussions with Lorenzo Pino, Claire Baxter, Jacob Arcangeli, Niloofar Khorshid, Bob Jacobs, Saugata Barat, Ernst de Mooij, Neale Gibson, and Dan Foreman-Mackey which helped in shaping the paper. V.P. also acknowledges inspiring discussions on Gaussian Processes with Mehmet Deniz Aksulu. J.M.D acknowledges support from the Amsterdam Academic Alliance (AAA) Program, and the European Research Council (ERC) European Union's Horizon 2020 research and innovation programme (grant agreement no. 679633; Exo-Atmos). This work is part of the research programme VIDI New Frontiers in Exoplanetary Climatology with project number 614.001.601, which is (partly) financed by the Dutch Research Council (NWO). This material is based upon work supported by the NWO TOP Grant Module 2 (Project Number 614.001.601). This material is based upon work supported by the National Science Foundation (NSF) under Grant No. AST-1413663. This research has made use of NASA's Astrophysics Data System. The authors also acknowledge the significant cultural role and reverence the summit of Mauna Kea has within the indigenous Hawaiian community. This research has made use of \texttt{Astropy},\footnote{http://www.astropy.org} a community-developed core Python package for Astronomy \cite{astropy:2013, astropy:2018}, \texttt{NumPy} \cite{harris2020array}, \texttt{matplotlib} \cite{Hunter2007}, \texttt{SciPy} \cite{Virtanen2020} and \texttt{IRAF} \cite{Tody1986} distributed by the NOAO, which is operated by AURA under a cooperative agreement with the NSF. \section*{Data Availability} The data underlying this article and Python notebooks from which the results and figures of this paper can be obtained will be made available upon publication.
{ "attr-fineweb-edu": 1.974609, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd1TxK6wB9jjDgo6_
\section{Introduction} More than twenty years ago, in \cite{Ik} the author obtained extraction formulae for the support function of an unknown polygonal source domain in an inverse source problem governed by the Helmholtz equation, and of an unknown polygonal penetrable obstacle in an inverse obstacle problem governed by an inhomogeneous Helmholtz equation. All the problems considered therein are in two dimensions and employ only a single set of Cauchy data of a solution of the governing equation at a fixed wave number in a bounded domain. Those results can be considered as the first application of a single measurement version of the {\it enclosure method} introduced in \cite{Ik0}. Succeeding \cite{Ik}, in \cite{IkC} the author found another unexpected application of the enclosure method, to the Cauchy problem for the stationary Schr\"odinger equation $$\displaystyle -\Delta u+V(x)u=0 \tag {1.1} $$ in a bounded domain $\Omega$ of $\Bbb R^n$, $n=2,3$. Here $V\in L^{\infty}(\Omega)$ and both $u$ and $V$ can be complex valued functions. We established an explicit representation or computation formula for an arbitrary solution $u\in H^2(\Omega)$ to the equation (1.1) in $\Omega$ in terms of its Cauchy data on a part of $\partial\Omega$. See also \cite{IS} for its numerical implementation. Note also that the idea in \cite{IkC} has been applied to an inverse source problem governed by the heat equation together with an inverse heat conduction problem in \cite{IkH}, \cite{IkHC}, respectively. The idea introduced therein is to make use of the complex geometrical optics solutions (CGO) with a large parameter $\tau$ for the modified equation instead of (1.1): $$\begin{array}{ll} \displaystyle -\Delta v+V(x)v=\chi_{D_y}(x)v, & x\in\Omega, \end{array} $$ where $y$ is a given point in $\Omega$, $D_y\subset\subset\Omega$ is the inside of a triangle or a tetrahedron for $n=2,3$, respectively, with a vertex at $y$, and $\chi_{D_y}(x)$ is the characteristic function of $D_y$. The solution is of the same type as the one constructed in \cite{SU1} for $n=2$ and \cite{SU2} for $n=3$, and has the following form as $\tau\rightarrow\infty$ $$\displaystyle v\sim e^{x\cdot z}, $$ where $z=\tau(\omega+i\vartheta)$ and both $\omega$ and $\vartheta$ are unit vectors perpendicular to each other. This right-hand side is just the complex plane wave used in the Calder\'on method \cite{C}. Note that, in \cite{IkS} another, simpler idea, making use of the CGO solutions of the modified equation described below, is presented: $$\begin{array}{ll} \displaystyle -\Delta v+V(x)v=\chi_D(x)e^{x\cdot z}, & x\in\Omega. \end{array} $$ Using integration by parts we reduced the problem of computing the value of $u$ at a given point $y$, essentially, to clarifying the leading profile of the following oscillatory integral as $\tau\rightarrow\infty$: $$\displaystyle \int_{D_y} e^{x\cdot z}\,\rho(x)dx, $$ where $\rho(x)$ is uniformly H\"older continuous on $\overline{D_y}$\footnote{In this case $\rho=u$.}. Note that the asymptotic behaviour of this type of oscillatory integral in {\it two dimensions} is the key point of the enclosure method developed in \cite{Ik}. In \cite{IkC} we clarified the leading profile in a more general setting as follows. Given a pair $(p,\omega)\in\Bbb R^n\times S^{n-1}$ and $\delta>0$ let $Q$ be an arbitrary non empty bounded open subset of the plane $x\cdot\omega=p\cdot\omega-\delta$ with respect to the relative topology from $\Bbb R^n$.
Define the bounded open subset of $\Bbb R^n$ by the formula $$\displaystyle D_{(p,\omega)}(\delta,Q) =\cup_{0<s<\delta}\, \left\{p+\frac{s}{\delta}(z-p)\,\left\vert\right.\,z\in Q\,\right\}. \tag {1.2} $$ This is a cone with the base $Q$ and apex $p$, and lying in the slab $\{x\in\Bbb R^n\,\vert\,p\cdot\omega-\delta<x\cdot\omega<p\cdot\omega\,\}$. Note that $\delta=\text{dist}\,(\{p\},Q)$ is called the height. If $Q$ is given by the inside of a polygon, the cone (1.2) is called a {\it solid pyramid}. In particular, if $Q$ is given by the inside of a triangle, the cone (1.2) becomes a tetrahedron. In (2.2) of \cite{IkC} we introduced a special complex constant associated with the domain (1.2) which is given by $$\displaystyle C_{(p,\omega)}(\delta, Q,\vartheta)=2s\int_{Q_s}\frac{dS_z}{\{s-i(z-p)\cdot\vartheta\}^n}, \tag {1.3} $$ where $i=\sqrt{-1}$, $0<s<\delta$ and $Q_s=D_{(p,\omega)}(\delta,Q)\cap\{x\in\Bbb R^n\,\vert x\cdot\omega=p\cdot\omega-s\,\}$ and the direction $\vartheta\in S^{n-1}$ is perpendicular to $\omega$. Note that in \cite{IkC} the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is simply written as $C_D(\omega,\omega^{\perp})$ with $\omega^{\perp}=\vartheta$. As pointed out therein, this quantity is independent of the choice $s\in\,]0,\,\delta[$ because of the one-to-one correspondence between $z\in Q_s$ and $z'\in Q_{s'}$ by the formula $$ \left\{ \begin{array}{l} \displaystyle z'=p+\frac{s'}{s}\,(z-p), \\ \\ \displaystyle dS_{z'}=(\frac{s'}{s})^{n-1}\,dS_z. \end{array} \right. $$ The following lemma describes the relationship between the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and an integral over (1.2). \proclaim{\noindent Proposition 1.1 (Lemma 2 in \cite{IkC}).} Let $n=2, 3$. Let $D=D_{(p,\omega)}(\delta,Q)$ and $\rho\in C^{0,\alpha}(\overline D)$ with $0<\alpha\le 1$. It holds that, for all $\tau>0$ $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \left\vert e^{-\tau p\cdot(\omega+i\vartheta)} \int_D\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx -\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)\right\vert \\ \\ \displaystyle \le\vert\rho(p)\vert\frac{\vert Q\vert}{\delta^{n-1}} \{(\tau\delta+1)^{n-1}+n-2\} \frac{e^{-\tau\delta}}{\tau^n} +\Vert\rho\Vert_{C^{0,\alpha}(\overline D)} \frac{\vert Q\vert}{\delta^{n-1}} (\frac{\text{diam}\,D}{\delta})^{\alpha}\frac{C_{n,\alpha}}{\tau^{n+\alpha}}, \end{array} $$ where $\Vert\rho\Vert_{C^{0,\alpha}(\overline D)}= \sup_{x,y\in\overline D, x\not=y}\frac{\vert\rho(x)-\rho(y)\vert}{\vert x-y\vert^{\alpha}}$ and $$\displaystyle C_{n,\alpha} =\int_0^{\infty}s^{n-1+\alpha} e^{-s}ds. $$ \endproclaim Thus we have, as $\tau\rightarrow\infty$ $$\displaystyle e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\rho(x)e^{\tau x\cdot(\omega+i\vartheta)}\,dx =\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)}). $$ This is the meaning of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$. Note that the remainder estimate $O(\tau^{-(n+\alpha)})$ is uniform with respect to $\vartheta$. And also as a direct corollary, instead of (1.3) we have another representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$: $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) =\frac{2}{n-1} \lim_{\tau\longrightarrow\infty}\tau^ne^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)} e^{\tau x\cdot(\omega+i\vartheta)}dx. \tag {1.4} $$ The convergence is uniform with respect to $\vartheta$.
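As a quick illustration of (1.3), and of its independence of $s$, in the simplest case (a computation of our own, added for the reader's orientation), take $n=2$ and let $Q$ be the segment $\{p-\delta\omega+t\vartheta\,\vert\,\delta a<t<\delta b\}$ with real numbers $a<b$. Then $Q_s=\{p-s\omega+t\vartheta\,\vert\,sa<t<sb\}$, $(z-p)\cdot\vartheta=t$, and (1.3) gives
$$\displaystyle
C_{(p,\omega)}(\delta,Q,\vartheta)=2s\int_{sa}^{sb}\frac{dt}{(s-it)^2}
=2s\left[\frac{-i}{s-it}\right]_{t=sa}^{t=sb}
=\frac{2(b-a)}{(1-ia)(1-ib)},
$$
which is indeed independent of $s$ and never vanishes; this is the simplest instance of the first bullet of Proposition 1.2 below.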
Proposition 1.1 is one of the two key points in \cite{IkC} and gives the role of the H\"older continuity of $\rho$. The other one is the {\it non-vanishing} of $C_{(p,\omega)}(\delta,Q,\vartheta)$ as a part of the leading coefficient of the integral in Proposition 1.1 as $\tau\rightarrow\infty$. This is not trivial, in particular in the three dimensional case. For this we have shown therein the following fact. \proclaim{\noindent Proposition 1.2(Theorem 2 in \cite{IkC}).} $\bullet$ If $n=2$ and $Q$ is given by the inside of an {\it arbitrary line segment}, then for all $\vartheta$ perpendicular to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$. $\bullet$ If $n=3$ and $Q$ is given by the inside of an {\it arbitrary triangle}, then for all $\vartheta$ perpendicular to $\omega$ we have $C_{(p,\omega)}(\delta,Q,\vartheta)\not=0$. \endproclaim The nonvanishing of the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ in the case $n=2$ has been shown in the proof of Lemma 2.1 in \cite{Ik}. The proof therein employs a local expression of the corner around the apex as the graph of a function on the line $x\cdot\omega=p\cdot\omega$, and so the viewpoint of \cite{IkC}, which regards $D_{(p,\omega)}(\delta,Q)$ as a cone, is not developed there. Note that, in the survey paper \cite{IkS} on the enclosure method it is pointed out that ``the Helmholtz version'' of Proposition 1.1 is also valid. That is, roughly speaking, we have $$\displaystyle e^{-p\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\,\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,\rho(x)e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}\,dx =\frac{n-1}{2\tau^n} \rho(p)\,C_{(p,\omega)}(\delta,Q,\vartheta)+O(\tau^{-(n+\alpha)}) \tag {1.5} $$ with the {\it same constant} $C_{(p,\omega)}(\delta,Q,\vartheta)$, where $k\ge 0$. See Lemma 3.2 therein. The proof can be done by using the same argument as that of Proposition 1.1. Note that the function $v=e^{x\cdot(\tau \omega+i\sqrt{\tau^2+k^2}\,\vartheta)}$ satisfies the Helmholtz equation $\Delta v+k^2 v=0$ in $\Bbb R^n$. \subsection{Role of nonvanishing in an inverse source problem} As an application of the nonvanishing of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$, we present here a direct application to the inverse source problem considered in \cite{Ik}, however now in {\it three dimensions}. Let $\Omega$ be a bounded domain of $\Bbb R^3$ with $\partial\Omega\in C^2$. We denote by $\nu$ the normal unit outward vector field on $\partial\Omega$. Let $k\ge 0$. Let $u\in H^1(\Omega)$ be an arbitrary weak solution of the Helmholtz equation in $\Omega$ at the wave number $k$: $$\begin{array}{ll} \displaystyle \Delta u+k^2 u=F(x), & x\in\Omega, \end{array} \tag {1.6} $$ where $F(x)$ is an unknown source term such that $\text{supp}\,F\subset\Omega$. Both $u$ and $F$ can be complex-valued functions. See \cite{Ik} for the meaning of the solution and the formulation of the Cauchy data on $\partial\Omega$ in the weak sense. It is well known that, in general, one cannot obtain the uniqueness of the source term $F$ itself from the Cauchy data of $u$ on $\partial\Omega$. In fact, given $\varphi\in C^{\infty}_0(\Omega)$ let $G=F+\Delta\varphi+k^2\varphi$. We have $\text{supp}\,G\subset\Omega$ and the function $\tilde{u}=u+\varphi$ satisfies $$\begin{array}{ll} \displaystyle \Delta\tilde{u}+k^2\tilde{u}=G(x), & x\in\Omega. \end{array} $$ Both $u$ and $\tilde{u}$ have the same Cauchy data on $\partial\Omega$. It should be pointed out, however, that $F$ and $G$ coincide with each other modulo a $C^{\infty}$ function.
This means that the singularities of $F$ and $G$ coincide. This suggests a possibility of extracting some information about a singularity of $F$ or its support from the Cauchy data of $u$ on $\partial\Omega$. As done in \cite{Ik} in two dimensions, we introduce the special form of the unknown source $F$: $$F(x)=F_{\rho,D}(x)= \left\{\begin{array}{lr} \displaystyle 0, & \quad\text{if $x\in\Omega\setminus D$,}\\ \\ \displaystyle \rho(x), & \quad\text{if $x\in\,D$.} \end{array} \right. \tag {1.7} $$ Here $D$ is an unknown non empty open subset of $\Omega$ satisfying $\overline D\subset\Omega$, and $\rho\in L^{2}(D)$ is also unknown. We call $D$ the {\it source domain}; however, we assume the connectedness of neither $D$ nor $\Omega\setminus\overline D$. The $\rho$ is called the strength of the source. We are interested in the following problem. $\quad$ {\bf\noindent Problem 1.} Extract information about a singularity of the source domain $D$ of $F$ having form (1.7) from the Cauchy data $(u(x), \frac{\partial u}{\partial\nu}(x))$ for all $x\in\partial\Omega$. $\quad$ \noindent Note that we are seeking a {\it concrete procedure} of the extraction. Here we recall the notion of the regularity of a direction introduced in the enclosure method \cite{Ik}. The function $h_D(\omega)=\sup_{x\in D}\,x\cdot\omega$, $\omega\in S^{2}$ is called the {\it support function} of $D$. It belongs to $C(S^2,\Bbb R)$ because of the trivial estimate $\vert h_D(\omega_1)-h_D(\omega_2)\vert\le \sup_{x\in D}\,\vert x\vert\cdot\vert\omega_1-\omega_2\vert$ for all $\omega_1,\omega_2\in S^2$. Given $\omega\in S^{2}$, it is easy to see that the set $$\displaystyle H_{\omega}(D)\equiv\left\{x\in \overline D\,\left\vert\right. x\cdot\omega=h_D(\omega)\,\right\} $$ is non empty and contained in $\partial D$. We say that $\omega$ is {\it regular} with respect to $D$ if the set $H_{\omega}(D)$ consists of only a single point. We denote the point by $p(\omega)$. For example, if $D$ is an open ball $B_r(c)$, then $h_D(\omega)=c\cdot\omega+r$, every direction $\omega$ is regular with respect to $D$, and $p(\omega)=c+r\omega$. We introduce a concept of a singularity of $D$ in (1.7). {\bf\noindent Definition 1.1.} Let $\omega\in S^{2}$ be regular with respect to $D$. We say that $D$ has a {\it conical singularity} from direction $\omega$ if there exist a positive number $\delta$ and an open set $Q$ of the plane $x\cdot\omega=h_D(\omega)-\delta$ with respect to the relative topology from $\Bbb R^3$ such that $$\displaystyle D\cap\left\{x\in\Bbb R^3\,\vert\,h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\}=D_{(p(\omega),\omega)}(\delta,Q). $$ Second we introduce a concept of an {\it activity} of the source term. {\bf\noindent Definition 1.2.} Given a point $p\in\partial D$ we say that the source $F=F_{\rho,D}$ given by (1.7) is {\it active} at $p$ if there exist an open ball $B_{\eta}(p)$ centered at $p$ with radius $\eta$, $0<\alpha\le 1$ and a function $\tilde{\rho}\in C^{0,\alpha}(\overline{B_{\eta}(p)})$ such that $\rho(x)=\tilde{\rho}(x)$ for almost all $x\in B_{\eta}(p)\cap D$ and $\tilde{\rho}(p)\not=0$. Note that $\rho$ together with $\tilde{\rho}$ can be a complex-valued function. Now let $u\in H^1(\Omega)$ satisfy the equation (1.6) in the weak sense with $F=F_{\rho, D}$ given by (1.7). Given a unit vector $\omega\in S^2$ define $S(\omega)=\{\vartheta\in S^2\,\vert \omega\cdot\vartheta=0\}$.
Using the Cauchy data of $u$ on $\partial\Omega$, we define the indicator function as \cite{Ik} $$\displaystyle I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega} \left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS, $$ where $\vartheta\in S(\omega)$ and $$\displaystyle v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0. $$ And also its derivative with respect to $\tau$ $$\displaystyle I_{\omega,\vartheta}'(\tau) =\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS, $$ where $$\displaystyle v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v. $$ The following theorem clarifies the role of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ in the asymptotic behaviour of the indicator function together with its derivative as $\tau\rightarrow\infty$. \proclaim{\noindent Theorem 1.1.} Let $\omega$ be regular with respect to $D$ and assume that $D$ has a conical singularity from direction $\omega$. Then, we have $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)= \tilde{\rho}(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}) \tag {1.8} $$ and $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)= \tilde{\rho}(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}). \tag {1.9} $$ The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$. \endproclaim {\it\noindent Proof.} Integration by parts (Green's formula, using (1.6) together with $\Delta v+k^2v=0$) yields $$\displaystyle I_{\omega,\vartheta}(\tau)=\int_D\rho(x)\,v\,dx $$ and thus $$\displaystyle I_{\omega,\vartheta}'(\tau)=\int_D\rho(x)\,v_{\tau}\,dx. $$ Recalling Definition 1.1, one has the decomposition $$\displaystyle D=D_{(p(\omega),\omega)}(\delta,Q)\cup D', \tag {1.10} $$ where $$ D'=D\setminus D_{(p(\omega),\omega)}(\delta,Q)\subset\left\{x\in\Bbb R^3\,\vert\,x\cdot\omega\le h_D(\omega)-\delta\,\right\}. \tag {1.11} $$ Besides, choosing $\delta$ smaller if necessary, one may assume that $D_{(p,\omega)}(\delta, Q)\subset B_{\eta}(p(\omega))$, where $\eta$ and $B_{\eta}(p(\omega))$ are the same as those in Definition 1.2. Hereafter we set $p=p(\omega)$ for simplicity of description.
Thus from (1.5) and (1.12) we obtain (1.8) with the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$. For (1.13) we write $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, v_{\tau}dx \\ \\ \displaystyle =\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, \left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v\,dx \\ \\ \displaystyle =\int_{D_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\omega\,v\,dx +i\frac{\tau}{\sqrt{\tau^2+k^2}}\int_{C_{(p,\omega)}(\delta, Q)}\tilde{\rho}(x)\, x\cdot\vartheta\,v\,dx. \end{array} $$ Thus applying (1.5) to each of the last terms and using (1.13), we obtain (1.9) with the remainder $O(\tau^{-\alpha})$ which is uniform with respect to $\vartheta\in S(\omega)$. \noindent $\Box$ Thus under the same assumptions as Theorem 1.1, for each $\vartheta\in S(\omega)$ one can calculate $$\displaystyle I(\omega,\vartheta)\equiv \tilde{\rho}(p(\omega))\,\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) $$ via the formula $$\displaystyle I(\omega,\vartheta) =\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta} I_{\omega,\vartheta}(\tau) \tag {1.14} $$ by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known. As a direct corollary of formulae (1.8) and (1.9), we obtain a partial answer to Problem 1 and the starting point of the main purpose in this paper. \proclaim{\noindent Theorem 1.2.} Let $\omega$ be regular with respect to $D$. Assume that $D$ has a conical singularity from direction $\omega$, $F_{\rho,D}$ is active at $p=p(\omega)$ and that direction $\vartheta\in S(\omega)$ satisfies the condition $$\displaystyle C_{(p(\omega),\,\omega)}(\delta,Q,\vartheta)\not=0. \tag {1.15} $$ Then, there exists a positive number $\tau_0$ such that, for all $\tau\ge\tau_0$ $\vert I_{\omega,\vartheta}(\tau)\vert>0$ and we have the following three asymptotic formulae. The first formula is $$\displaystyle \lim_{\tau\longrightarrow\infty}\frac{\log\vert I_{\omega, \vartheta}(\tau)\vert}{\tau}=h_D(\omega) \tag {1.16} $$ and second one $$\displaystyle \lim_{\tau\rightarrow\infty} \frac{I_{\omega,\vartheta}'(\tau)}{I_{\omega,\vartheta}(\tau)} =h_D(\omega)+i\,p(\omega)\cdot\vartheta. \tag {1.17} $$ The third one is the so-called $0$-$\infty$ criterion: $$\displaystyle \lim_{\tau\longrightarrow\infty}e^{-\tau t}\vert I_{\omega, \vartheta}(\tau) \vert = \left\{ \begin{array}{ll} 0, & \text{if $t\ge h_D(\omega)$,}\\ \\ \displaystyle \infty, & \text{if $t<h_D(\omega)$.} \end{array} \right. \tag {1.18} $$ \endproclaim This provides us the framework of the approach using the enclosure method for the source domain with a conical singularity from a direction. Some remarks are in order. $\bullet$ In two dimensions, by Proposition 1.2 the condition (1.15) is redundant and we have the same conclusion as Theorem 1.2. $\bullet$ The formula (1.17) is an application of the idea ``taking the logarithmic derivative of the indicator function'' introduced in \cite{IkL}. Therein inverse obstacle scattering problems at a fixed frequency in two dimensions are considered. Needless to say, formula (1.17) is not derived in \cite{Ik}. 
The condition (1.15) is {\it stable} with respect to the parturbation of $\vartheta\in S(\omega)$ since from the expression (1.3) we see that the function $S(\omega)\ni\vartheta\longmapsto C_{(p(\omega),\,\omega)}(\delta,Q,\,\vartheta)$ is continuous, where the topology of $S(\omega)$ is the relative one from $\Bbb R^3$. This fact yields a corollary as follows. \proclaim{\noindent Corollary 1.1.} Let $\omega$ be regular with respect to $D$. Under the same assumptions as those in Theorem 1.2 the point $p(\omega)$ is uniquely determined by the Cauchy data of $u$ on $\partial\Omega$. \endproclaim {\it\noindent Proof.} From (1.16) one has $h_D(\omega)=p(\omega)\cdot\omega$. Choose $\vartheta'\in S(\omega)$ sufficiently near $\vartheta$ in such a way that $C_{(p(\omega),\omega)}(\delta,Q,\vartheta')\not=0$. Then from the formula (1.17) for two linearly independent directions $\vartheta$ and $\vartheta'$ one gets $p(\omega)\cdot\vartheta$ and $p(\omega)\cdot\vartheta'$. \noindent $\Box$ As another direct corollary of Theorem 1.2 and Proposition 1.2 in the case $n=3$ we have the following result. \proclaim{\noindent Corollary 1.2.} Assume that $D$ is given by the inside of a convex polyhedron and in a neighbourhood of each vertex $p$ of $D$, the $D$ coincides with the inside of a tetrahedron with apex $p$ and that the source $F=F_{\rho, D}$ given by (1.7) is active at $p$. Then, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. \endproclaim {\it\noindent Proof.} We have: $D$ has a conical singularity from the direction $\omega$ that is regular with respect to $D$ with a triangle $Q$ at each $p(\omega)$. Thus (1.15) is valid for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. Therefore, we have all the formulae (1.16), (1.17) and (1.18) for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. \noindent $\Box$ {\bf\noindent Remark 1.1.} Under the same assumptions as Corollary 1.2 one gets a uniqueness theorem: the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$. The proof is as follows. From (1.16) one gets $h_D(\omega)$ for all $\omega$ regular with respect to $D$. The set of all $\omega$ that are not regular with respect to $D$ consists of a set of finite points and arcs on $S^2$. This yields the set of all $\omega$ that are regular with respect to $D$ is dense and thus one gets $h_D(\omega)$ for all $\omega\in S^2$ because of the continuity of $h_D$. Therefore one obtains the convex hull of $D$ and thus $D$ itself by the convexity assumption. This proof is remarkable and unique since we never make use of the {\it traditional contradiction argument}`` Suppose we have two different source domains $D_1$ and $D_2$ which yields the same Cauchy data,...''; any {\it unique continuation argument} of the solution of the governing equation. One can see such two arguments in \cite{N} in the case when $k=0$ for an inverse problem for detecting a source of {\it gravity anomaly}. Some of typical examples of $D$ covered by Corollary 1.2 are tetrahedron, regular hexahedron (cube), regular dodecahedron. So now the central problem in applying Theorem 1.2 to Problem 1 for the source with various source domain under our framework is to clarify the condition (1.15) for general $Q$. In contrast to Proposition 1.2, when $Q$ is general, we do not know whether there exists a unit vector $\vartheta\in S(\omega)$ such that (1.15) is valid or not. 
Going back to (1.3), we have an explicit vector equation for the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$, if $Q$ is given by the inside of a polygon. See Proposition 4 in \cite{IkC}. However, comparing with the case when $Q$ is given by the inside of a triangle, it seems difficult to deduce the non-vanishing $C_{(p,\omega)}(\delta,Q,\vartheta)$ for all $\vartheta\in S(\omega)$ from the equation directly. This is an open problem. \subsection{Explicit formula and its implication} In this paper, instead of considering general $Q$, we consider another special $Q$. It is the case when $Q$ is given by the section of the inside of a {\it circulr cone} by a plane. Given $p\in\Bbb R^3$, $\mbox{\boldmath $n$}\in S^2$ and $\theta\in\,]0,\,\frac{\pi}{2}[$ let $V_p(-\mbox{\boldmath $n$},\theta)$ denote the inside of the {\it circular cone} with {\it apex} at $p$ and the opening angle $\theta$ around the direction $-\mbox{\boldmath $n$}$, that is $$\displaystyle V_p(-\mbox{\boldmath $n$},\theta)=\left\{x\in\Bbb R^3\,\left\vert\right. \,(x-p)\cdot(-\mbox{\boldmath $n$})>\cos\theta\,\right\}. $$ Given $\omega\in S^2$ set $$\displaystyle Q=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta) \cap\left\{x\in\Bbb R^3\,\left\vert\right.\,x\cdot\omega=p\cdot\omega-\delta\,\right\}. \tag {1.19} $$ To ensure that $Q$ is non empty and bounded, we impose the restriction between $\omega$ and $\mbox{\boldmath $n$}$ as follows: $$ \omega\cdot\mbox{\boldmath $n$}>\cos(\pi/2-\theta)=\sin\theta(>0). \tag{1.20} $$ This means that the angle between $\omega$ and $\mbox{\boldmath $n$}$ has to be less than $\frac{\pi}{2}-\theta$. Then it is known that $Q$ is an ellipse and we have $$\displaystyle D_{(p,\omega)}(\delta, Q)=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta) \cap\left\{x\in\Bbb R^3\,\left\vert\right.\,x\cdot\omega>p\cdot\omega-\delta\,\right\}. \tag {1.21} $$ The problem here is to compute the complex constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ with all $\vartheta\in S(\omega)$ for this domain $D_{(p,\omega)}(\delta,Q)$ with $Q$ given by (1.19). Instead of (1.3) we employ the formula (1.4) with $D=D_{(p,\omega)}(\delta,Q)$ with $n=3$: $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) = \lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx. \tag {1.22} $$ Here we rewrite this formula. Choosing sufficiently small positive numbers $\delta'$ and $\delta''$ with $\delta''<\delta'$, we see that the set $$\displaystyle D_{(p,\omega)}(\delta, Q)\cap\left\{x\in\Bbb R^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}<p\cdot\mbox{\boldmath $n$}-\delta'\,\right\} $$ is containted in the half-space $x\cdot\omega<p\cdot\omega-\delta''$. This yields $$ \displaystyle e^{-\tau p\cdot(\omega+i\vartheta)} \int_{D_{(p,\omega)}(\delta,Q)}\,e^{\tau x\cdot(\omega+i\vartheta)}dx =e^{-\tau p\cdot(\omega+i\vartheta)} \int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx+O(e^{-\tau\delta''}), $$ where $$\displaystyle V=\mbox{\boldmath $V$}_p(-\mbox{\boldmath $n$},\theta) \cap\left\{x\in\Bbb R^3\,\left\vert\right.\,x\cdot\mbox{\boldmath $n$}>p\cdot\mbox{\boldmath $n$}-\delta'\,\right\}. $$ Thus from (1.22) we obtain a more convenient expression $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) = \lim_{\tau\longrightarrow\infty}\tau^3e^{-\tau p\cdot(\omega+i\vartheta)} \int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx. 
\tag {1.23} $$ Using this expression we have the following explicit formula of $C_{(p,\omega)}(\delta,Q,\vartheta)$ for $D_{(p,\omega)}(\delta,Q)$ given by (1.21). \proclaim{\noindent Proposition 1.3.} We have $$\displaystyle C_{(p,\omega)}(\delta, Q,\vartheta) =6\,V(\theta)\, (\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3}, \tag {1.24} $$ where $$\displaystyle V(\theta)=\frac{\pi}{3}\cos\,\theta\sin^2\,\theta. $$ \endproclaim Note that the value $V(\theta)$ coincides with the volume of the circular cone with the height $\cos\theta$ and the opening angle $\theta$. This function of $\theta\in\,]0,\,\frac{\pi}{2}\,[$ is monotone increasing in $]0,\,\tan^{-1}\sqrt{2}[$ and decreasing in $]\tan^{-1}\sqrt{2},\,\frac{\pi}{2}[$; takes the maximum value $\frac{2\pi}{9\sqrt{3}}$ at $\theta=\tan^{-1}\sqrt{2}$. Now we describe an application to Problem 1. First we introduce a singularity of a circular cone type for the source domain. {\bf\noindent Definition 1.3.} Let $D$ be a non empty bounded open set of $\Bbb R^3$. Let $p\in\partial D$. We say that $D$ has a {\it circular cone singularity} at $p$ if there exist a positive number $\epsilon$, unit vector $\mbox{\boldmath $n$}$ and number $\theta\in\,]0,\,\frac{\pi}{2}[$ such that $$\displaystyle D\cap B_{\epsilon}(p)=V_{p}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p). $$ It is easy to see that notion of the circular cone singularity is a special case of that of the conical one in the following sense. \proclaim{\noindent Lemma 1.1.} Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$. Then, $D$ has a conical singularity from direction $\omega$ at $p(\omega)$. More precisely, for a sufficiently small $\delta$ we have the expression $$\displaystyle D\cap\left\{x\in\Bbb R^3\,\vert\, h_D(\omega)-\delta<x\cdot\omega<h_D(\omega)\,\right\} =D_{(p(\omega),\omega)}(\delta, Q), $$ where $Q$ is given by (1.19) with $V_{p}(-\mbox{\boldmath $n$},\theta)$ at $p=p(\omega)$ in the definition 1.3 satisfying (1.20). \endproclaim As a diect corollary of Theorems 1.1-1.2, Proposition 1.3 and Lemma 1.1 we immediately obtain all the results in Theorem 1.2 without the condition (1.15). We suumarize one of the result as Corollary 1.3 as follows. \proclaim{\noindent Corollary 1.3(Detecting the point $p(\omega)$).} Let $u\in H^1(\Omega)$ be an arbitrary solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity at $p=p(\omega)$; the source $F$ is active at $p(\omega)$. Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$. Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$ by using the formula $$\displaystyle p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j =\lim_{\tau\rightarrow\infty} \frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2. \tag {1.25} $$ \endproclaim By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression $$\displaystyle I(\omega,\vartheta)=6\,\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}. \tag {1.26} $$ Formula (1.26) yields the following results. \proclaim{\noindent Corollary 1.4.} Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. 
Assume that: $D$ has a circular cone singularity at $p(\omega)$ such as $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with a $\epsilon>0$. \noindent (i) Assume that $F$ is active at $p(\omega)$. The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function $I(\omega,\,\cdot\,)$ is a constant function. \noindent (ii) The vector $\mbox{\boldmath $n$}$ and $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$ and the source strength $\tilde{\rho}(p(\omega))$ satisfies the following two equations: $$\displaystyle 6\,\vert\tilde{\rho}(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3 \max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert; \tag {1.27} $$ $$\displaystyle 6\,\tilde{\rho}(p(\omega)) \,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1) =\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta) \,ds(\vartheta). \tag {1.28} $$ \endproclaim Using the equations (1.26), (1.27) and (1.28) one gets the following corollary. \proclaim{\noindent Corollary 1.5.} Let $u\in H^1(\Omega)$ be a solution of (1.6) with the source $F=F_{\rho,D}$ given by (1.7). Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity at $p(\omega)$ such as $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ with a $\epsilon>0$. Assume that $F$ is active at $p(\omega)$ and that $\omega\approx\mbox{\boldmath $n$}$ in the sense that $$\displaystyle \mbox{\boldmath $n$}\cdot\omega>\frac{1}{\sqrt{3}}. \tag {1.29} $$ Then, the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ is the unique solution of the following quintic equation in $]\,\frac{1}{\sqrt{3}},\,1]$: $$\displaystyle \gamma^3(3\gamma^2-1)= \frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta) \,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert}. \tag {1.30} $$ Besides, for an arbitrary $\vartheta\in S(\omega)$ the value $\mu=\mbox{\boldmath $n$}\cdot\vartheta$ is given by the formulae $$ \displaystyle \mu^2=\frac{\displaystyle\gamma^3-\text{Re}\,T(\omega,\vartheta)} {3\gamma} \tag {1.31} $$ and $$\displaystyle \mu=\frac{\displaystyle\text{Im}\,T(\omega,\vartheta)}{3\gamma^2-\mu^2}, \tag {1.32} $$ where $$\displaystyle T(\omega,\vartheta) =\frac{\displaystyle \int_{S(\omega)}\,I(\omega,\vartheta)\,ds(\vartheta)} {\pi(3\gamma^2-1)I(\omega,\vartheta)}. \tag{1.33} $$ \endproclaim The condition (1.29) is equivalent to the statement\footnote{We have $$\displaystyle \frac{3\pi}{10}+\frac{\pi}{100}>\tan^{-1}\sqrt{2}>\frac{3\pi}{10}. $$ }: the angle between $\omega$ and $\mbox{\boldmath $n$}$ is less than $\tan^{-1}\sqrt{2}$. Thus it is not so strict condition. The denominator of (1.32) is not zero because of $3\gamma^2-\mu^2\ge 3\gamma^2-1$ and (1.29). Under the same assumptions as Corollary 1.5, one can finally calculate the quantity $$\displaystyle \tilde{\rho}(p(\omega))\,V(\theta) \tag {1.34} $$ and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$. This is the final conclusion. The procedure is as follows. \noindent {\bf Step 1.} Calculate $p(\omega)$ via the formula (1.25). \noindent {\bf Step 2.} Calculate $I(\omega,\vartheta)$ via the formula (1.14) and the computed $p(\omega)$ in Step 1. \noindent {\bf Step 3.} If $I(\omega,\vartheta)$ looks like a constant function, decide $\omega\approx\mbox{\boldmath $n$}$ in the sense (1.29). 
If not, search for another $\omega$ around the original one by trial and error until $\omega\approx\mbox{\boldmath $n$}$ in the above sense, and finally fix it. \noindent {\bf Step 4.} Find the value $\gamma=\mbox{\boldmath $n$}\cdot\omega$ by solving the quintic equation (1.30). \noindent {\bf Step 5.} Find the value (1.34) via the formula (1.28) with the computed $\mbox{\boldmath $n$}\cdot\omega$ in Step 4. \noindent {\bf Step 6.} Choose linearly independent vectors $\vartheta_1, \vartheta_2\in S(\omega)$ and calculate $T(\omega,\vartheta_j)$, $j=1,2$ via the formula (1.33) using the computed value $\gamma$ in Step 4. \noindent {\bf Step 7.} Find $\mu=\mu_j=\mbox{\boldmath $n$}\cdot\vartheta_j$ by solving (1.31) and (1.32) using the computed $T(\omega,\vartheta_j)$ in Step 6. \noindent {\bf Step 8.} Find $\mbox{\boldmath $n$}$ by solving $\mbox{\boldmath $n$}\cdot\omega=\gamma$, $\mbox{\boldmath $n$}\cdot\vartheta_j=\mu_j$, $j=1,2$. Note that, in addition, if the opening angle $\theta$/the source strength $\tilde{\rho}(p(\omega))$ is known, then one obtains the value of $\tilde{\rho}(p(\omega))$/the volume $V(\theta)$ via the computed value (1.34) in Step 5. This paper is organized as follows. In the next section we give a proof of Proposition 1.3. It is based on the integral representation (2.8) of the complex constant $C_{(p,\omega)}(\delta, Q,\vartheta)$ and the residue calculus. Proofs of Corollaries 1.4 and 1.5 are given in Section 3. In Section 4, an inverse obstacle problem for a penetrable obstacle in three dimensions is considered. The corresponding results in this case are given, and in Section 5 a possible direction for extending all the results in this paper is commented on. The Appendix is devoted to an example covered by the results in Section 4. \section{Proof of Proposition 1.3} In order to compute the right-hand side of (1.23), we choose two unit vectors $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$, perpendicular to each other, in such a way that $\mbox{\boldmath $n$}=\mbox{\boldmath $l$}\times\mbox{\boldmath $m$}$. We see that the intersection of $\partial V_p(-\mbox{\boldmath $n$},\theta)$ with the plane $(x-p)\cdot\mbox{\boldmath $n$}=-(1/\tan\,\theta)$ coincides with the circle with radius $1$ centered at the point $p-(1/\tan\,\theta)\mbox{\boldmath $n$}$ on the plane. The pointing vector of an arbitrary point on the circle with respect to the point $p$ has the expression $$\displaystyle \vartheta(w)=\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$} -\frac{1}{\tan\,\theta}\,\mbox{\boldmath $n$} \tag {2.1} $$ with a parameter $w\in\,[0,2\pi]$. Besides, from the geometrical meaning of $\vartheta(w)$, we have $$ \displaystyle\max_{w\in[0,\,2\pi]}\,\vartheta(w)\cdot\omega<0. \tag {2.2} $$ \proclaim{\noindent Lemma 2.1.} We have the expression $$\displaystyle (\omega+i\vartheta)\,C_{(p,\omega)}(\delta,Q,\vartheta) =\frac{1}{\tan\,\theta} \int_0^{2\pi} \frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}} {\{\vartheta (w)\cdot(\omega+i\vartheta)\}^2}dw. \tag {2.3} $$ \endproclaim {\it\noindent Proof.} Let $\mbox{\boldmath $a$}$ be an arbitrary three-dimensional complex vector. We have $$\displaystyle \int_{V}\,\nabla\cdot(e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$})\,dx =\tau(\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx.
$$ The divergence theorem yields $$\displaystyle (\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\,\int_{V}\,e^{\tau x\cdot(\omega+i\vartheta)}dx =\tau^{-1}\int_{\partial V}\,e^{\tau x\cdot(\omega+i\vartheta)}\mbox{\boldmath $a$}\cdot\mbox{\boldmath $\nu$}\,dS(x), \tag {2.4} $$ where $\mbox{\boldmath $\nu$}$ denotes the outer unit normal vector to $\partial V$. Decompose $\partial V=V_1\cup V_2$ with $V_1\cap V_2=\emptyset$, where $$\begin{array}{l} \displaystyle V_1=\{x\,\vert\,-(x-p)\cdot\mbox{\boldmath $n$}=\vert x-p\vert\cos\,\theta,\, -\delta'<(x-p)\cdot\mbox{\boldmath $n$}<0\},\\ \\ \displaystyle V_2=\{x\,\vert\,\vert x-(p-\delta'\,\mbox{\boldmath $n$})\vert\le\delta'\,\tan\,\theta,\, (x-p)\cdot\mbox{\boldmath $n$}=-\delta'\}. \end{array} $$ To compute the surface integral over $V_1$, we make use of the following change of variables: $$\begin{array}{ll} \displaystyle x & \displaystyle =(p-\delta'\,\mbox{\boldmath $n$})+r(\cos\,w\,\mbox{\boldmath $l$} +\sin\,w\,\mbox{\boldmath $m$})+\left(\delta'-\frac{r}{\tan\,\theta}\right)\,\mbox{\boldmath $n$} \\ \\ \displaystyle & \displaystyle =p+r\vartheta(w), \end{array} \tag {2.5} $$ where $(r,w)\in\,[0,\,\delta'\tan\,\theta]\times[0,\,2\pi[$ and $\vartheta(w)$ is given by (2.1). Then the surface element has the expression $$\displaystyle dS(x)=\frac{r}{\sin\,\theta}\,drdw $$ and the outer unit normal $\mbox{\boldmath $\nu$}$ to $V_1$ takes the form $$\displaystyle \mbox{\boldmath $\nu$}=\sin\,\theta\left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$}+\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta} \right). $$ Now from (2.4) and the decomposition $\partial V=V_1\cup V_2$, we have $$\begin{array}{l} \displaystyle \,\,\,\,\,\, e^{-\tau p\cdot(\omega+i\vartheta)} (\omega+i\vartheta)\cdot\mbox{\boldmath $a$}\int_{V} v\,dx\\ \\ \displaystyle =e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1} \int_{V_1} v\,\mbox{\boldmath $a$}\cdot\mbox{\boldmath $\nu$}\,dS(x) -e^{-\tau p\cdot(\omega+i\vartheta)}\tau^{-1} \int_{V_2} v\,\mbox{\boldmath $a$}\cdot\mbox{\boldmath $n$}\,dS(x)\\ \\ \displaystyle \equiv I+II, \end{array} \tag {2.6} $$ where $v=e^{\tau x\cdot(\omega+i\vartheta)}$. Since the set $V_2$ is contained in the half-space $x\cdot\omega\le p\cdot\omega-\delta''$, one gets $$ \displaystyle II=O(\tau^{-1}e^{-\tau\delta''}). \tag {2.7} $$ As for $I$, using the change of variables given by (2.5), one has $$\begin{array}{c} \displaystyle x\cdot\omega=p\cdot\omega+r\,\vartheta(w)\cdot\omega,\\ \\ \displaystyle x\cdot\vartheta=p\cdot\vartheta+r\,\vartheta(w)\cdot\vartheta. \end{array} $$ Noting also (2.2), one gets $$\begin{array}{ll} \displaystyle \tau\,I & \displaystyle =\int_0^{2\pi}dw \int_0^{\delta'\tan\,\theta} rdr e^{\tau r\vartheta(w)\cdot\omega+i\tau\,r\vartheta(w)\cdot\vartheta} \left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$} +\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\ \\ \displaystyle & \displaystyle =\frac{1}{\tau^2} \int_0^{2\pi}dw \int_0^{\tau\delta'\tan\,\theta} sds e^{s\vartheta(w)\cdot\omega+i\,s\vartheta(w)\cdot\vartheta} \left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$} +\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}\\ \\ \displaystyle & \displaystyle =\frac{1}{\tau^2} \int_0^{2\pi}dw \int_0^{\infty} sds e^{s\vartheta(w)\cdot\omega+is\vartheta(w)\cdot\vartheta} \left(\mbox{\boldmath $n$}+\frac{\cos\,w\,\mbox{\boldmath $l$} +\sin\,w\,\mbox{\boldmath $m$}}{\tan\,\theta}\right)\cdot\mbox{\boldmath $a$}+O(\tau^{-4}).
\end{array} $$ Here one can apply the following formula to the right-hand side: $$\displaystyle \int_0^{\infty}se^{as}e^{ibs}ds=\frac{1}{(a+ib)^2},\,\,a<0. $$ Then one gets $$\displaystyle I=\frac{1}{\tau^3\tan\,\theta} \int_0^{2\pi} \frac{\cos\,w\,\mbox{\boldmath $l$} +\sin\,w\,\mbox{\boldmath $m$}+\tan\,\theta\,\mbox{\boldmath $n$}} {\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}\,dw+O(\tau^{-5}). $$ Now this together with (1.23), (2.6) and (2.7) yields the desired formula. \noindent $\Box$ Now from (1.20) and (2.3) we have the integral representation of $C_{(p,\omega)}(\delta,Q,\vartheta)$: $$\displaystyle C_{(p,\omega)}(\delta,Q,\vartheta) =\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} \int_0^{2\pi}\frac{dw}{\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2}. \tag {2.8} $$ This formula shows that the constant $C_{(p,\omega)}(\delta,Q,\vartheta)$ is independent of $p$ and $\delta$ when $Q$ is given by (1.19). By computing the integral on the right-hand side of (2.8) we obtain the explicit value of $C_{(p,\omega)}(\delta, Q,\vartheta)$. \proclaim{\noindent Lemma 2.2.} We have: $C_{(p,\omega)}(\delta, Q,\vartheta)\not=0$ if and only if $$\displaystyle \frac{\sin\,\theta} {1+\cos\,\theta}<\left\vert \frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)} \right\vert<\frac{1+\cos\,\theta}{\sin\,\theta} \tag {2.9} $$ and then $$\displaystyle C_{(p,\omega)}(\delta, Q,\vartheta) =2\pi\cos\theta\,\sin^2\,\theta\, (\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)\,)^{-3}. \tag {2.10} $$ \endproclaim {\it\noindent Proof.} Set $$\displaystyle A=\mbox{\boldmath $l$}\cdot(\omega+i\vartheta), \,\,B=\mbox{\boldmath $m$}\cdot(\omega+i\vartheta),\,\, C=-\frac{1}{\tan\,\theta}\mbox{\boldmath $n$}\cdot(\omega+i\vartheta) $$ and $z=e^{iw}$. One can write $$\begin{array}{ll} \displaystyle \vartheta(w)\cdot(\omega+i\vartheta) & \displaystyle =A\cos\,w+B\sin\,w+C\\ \\ \displaystyle & \displaystyle =\frac{A}{2}(z+z^{-1})-i\frac{B}{2}(z-z^{-1})+C\\ \\ \displaystyle & \displaystyle =\frac{1}{2z} \{(A-iB)z^2+2Cz+(A+iB)\}. \end{array} $$ Here we claim $$ \displaystyle A-iB\equiv(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\not=0. \tag {2.11} $$ Assume to the contrary that $A-iB=0$. Since we have $$\displaystyle A-iB =\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta +i(\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega), $$ it must hold that $$\displaystyle \mbox{\boldmath $l$}\cdot\omega=-\mbox{\boldmath $m$}\cdot\vartheta,\,\, \mbox{\boldmath $m$}\cdot\omega=\mbox{\boldmath $l$}\cdot\vartheta. \tag {2.12} $$ Then we have $$\begin{array}{ll} \displaystyle (\mbox{\boldmath $n$}\cdot\vartheta)^2 & \displaystyle =\vert\vartheta\vert^2-(\mbox{\boldmath $l$}\cdot\vartheta)^2-(\mbox{\boldmath $m$}\cdot\vartheta)^2 \\ \\ \displaystyle & \displaystyle =\vert\omega\vert^2-(\mbox{\boldmath $l$}\cdot\omega)^2-(\mbox{\boldmath $m$}\cdot\omega)^2 \\ \\ \displaystyle & \displaystyle =(\mbox{\boldmath $n$}\cdot\omega)^2. \end{array} \tag {2.13} $$ On the other hand, we have $$\displaystyle 0=\omega\cdot\vartheta =(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta) +(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta). $$ Here by (2.12) one has $(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)=0$.
Thus one obtains $$ \displaystyle 0=(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta). $$ Now a combination of this and (2.13) yields $\mbox{\boldmath $n$}\cdot\omega=0$. However, by (1.20) this is impossible. Therefore we obtain the expression $$\displaystyle \vartheta(w)\cdot(\omega+i\vartheta) =\frac{A-iB}{2z}f(z)\vert_{z=e^{iw}}, \tag {2.14} $$ where $$\displaystyle f(z)= \left(z+\frac{C}{A-iB}\right)^2 -\frac{C^2-(A^2+B^2)}{(A-iB)^2}. $$ Here we write $$\begin{array}{ll} \displaystyle C^2-(A^2+B^2) & \displaystyle =\frac{1}{\tan^2\,\theta} (\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^2 -\{(\mbox{\boldmath $l$}\cdot(\omega+i\vartheta))^2 +(\mbox{\boldmath $m$}\cdot(\omega+i\vartheta))^2\}\\ \\ \displaystyle & \displaystyle = \frac{1}{\tan^2\,\theta} \{(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\}\\ \\ \displaystyle & \displaystyle \,\,\, -\{(\mbox{\boldmath $l$}\cdot\omega)^2 +(\mbox{\boldmath $m$}\cdot\omega)^2 -(\mbox{\boldmath $l$}\cdot\vartheta)^2 -(\mbox{\boldmath $m$}\cdot\vartheta)^2 +2i(\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +2i(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\ \\ \displaystyle & \displaystyle =\left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\omega)^2 - \left(\frac{1}{\tan^2\,\theta}+1\right)(\mbox{\boldmath $n$}\cdot\vartheta)^2 +\frac{1}{\tan^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\\ \\ \displaystyle & \displaystyle \,\,\, -2i\{ (\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta)\}\\ \\ \displaystyle & \displaystyle =\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2 -(\mbox{\boldmath $n$}\cdot\vartheta)^2\} +\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta) \\ \\ \displaystyle & \displaystyle \,\,\, -2i\{ (\mbox{\boldmath $l$}\cdot\omega)(\mbox{\boldmath $l$}\cdot\vartheta) +(\mbox{\boldmath $m$}\cdot\omega)(\mbox{\boldmath $m$}\cdot\vartheta) +(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta)\} \\ \\ \displaystyle & \displaystyle =\frac{1}{\sin^2\,\theta}\{(\mbox{\boldmath $n$}\cdot\omega)^2 -(\mbox{\boldmath $n$}\cdot\vartheta)^2\} +\frac{1}{\sin^2\,\theta}\,2i(\mbox{\boldmath $n$}\cdot\omega)(\mbox{\boldmath $n$}\cdot\vartheta) \\ \\ \displaystyle & \displaystyle \,\,\, -2i\omega\cdot\vartheta. \end{array} $$ Since $\omega\cdot\vartheta=0$, we finally obtain $$\displaystyle C^2-(A^2+B^2) =\left(\frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {\sin\,\theta}\right)^2. $$ Now set $$\displaystyle z_{\pm}=\frac{(\cos\,\theta\pm 1)}{\sin\,\theta} \frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}. \tag {2.15} $$ Then one gets the factorization $$\displaystyle f(z)=(z-z_+)(z-z_{-}). $$ By (2.15) we have $\vert z_+\vert>\vert z_{-}\vert$. Besides, from (2.2), (2.11) and (2.14) we have $f(e^{iw})\not=0$ for all $w\in\,[0,\,2\pi]$. This ensures that the complex numbers $z_{+}$ and $z_{-}$ are not on the circle $\vert z\vert=1$. Thus from (2.14) one gets $$ \displaystyle \int_0^{2\pi} \frac{dw} {\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2} =\frac{4}{i(A-iB)^2} \int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2}.
\tag {2.16} $$ The residue calculus yields $$\displaystyle \int_{\vert z\vert=1}\frac{zdz}{(z-z_{+})^2(z-z_{-})^2} = \left\{ \begin{array}{ll} \displaystyle 0 & \text{if $\vert z_{-}\vert>1$,} \\ \\ \displaystyle 0 & \text{if $\vert z_{-}\vert<1$ and $\vert z_{+}\vert<1$,} \\ \\ \displaystyle 2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3}\not=0 & \text{if $\vert z_{-}\vert<1<\vert z_{+}\vert$.} \end{array} \right. $$ Also, (2.15) gives $$\begin{array}{ll} \displaystyle 2\pi i\frac{z_{+}+z_{-}}{(z_{+}-z_{-})^3} & \displaystyle =2\pi i\cdot2\frac{\cos\theta}{\sin\theta} \frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)} \cdot \left(\frac{\sin\theta}{2}\right)^3 \left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} \right\}^3 \\ \\ \displaystyle & \displaystyle =\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta \left\{\frac{(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2 \\ \\ \displaystyle & \displaystyle =\frac{\pi i}{2}\cos\,\theta\sin^2\,\theta \left\{\frac{A-iB} {\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2. \end{array} $$ Thus (2.16) yields $$ \displaystyle \int_0^{2\pi} \frac{dw} {\{\vartheta(w)\cdot(\omega+i\vartheta)\}^2} =2\pi\cos\,\theta\sin^2\,\theta \left\{\frac{1}{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)}\right\}^2 $$ provided $\vert z_{-}\vert<1<\vert z_{+}\vert$. From these together with (2.8) we obtain the desired conclusion. \noindent $\Box$ Note that (2.10) is nothing but (1.24). Since (2.9) looks like a condition depending on the choice of $\mbox{\boldmath $l$}$ and $\mbox{\boldmath $m$}$, we further rewrite the quantity $$\displaystyle K(\vartheta;\omega,\mbox{\boldmath $n$}) =\left\vert \frac{\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)} {(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)} \right\vert. $$ We have $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2 \\ \\ \displaystyle =(\mbox{\boldmath $l$}\cdot\omega+\mbox{\boldmath $m$}\cdot\vartheta)^2+ (\mbox{\boldmath $l$}\cdot\vartheta-\mbox{\boldmath $m$}\cdot\omega)^2 \\ \\ \displaystyle =2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2(\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega). \end{array} $$ Here we see that $$\displaystyle \mbox{\boldmath $n$}\cdot(\omega\times\vartheta) =\mbox{\boldmath $l$}\cdot\omega\,\mbox{\boldmath $m$}\cdot\vartheta-\mbox{\boldmath $l$}\cdot\vartheta\,\mbox{\boldmath $m$}\cdot\omega. $$ Thus one has $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \vert(\mbox{\boldmath $l$}-i\mbox{\boldmath $m$})\cdot(\omega+i\vartheta)\vert^2 \\ \\ \displaystyle =2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta). \end{array} $$ Therefore we obtain $$\displaystyle K(\vartheta;\omega,\mbox{\boldmath $n$})=\frac{\displaystyle \sqrt{(\mbox{\boldmath $n$}\cdot\omega)^2+(\mbox{\boldmath $n$}\cdot\vartheta)^2 }} {\displaystyle \sqrt{ 2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta)} }.
$$ Besides, we have $$\displaystyle \frac{1-\cos\,\theta}{\sin\,\theta}=\tan\,\frac{\theta}{2} $$ and $$\displaystyle \frac{1+\cos\,\theta}{\sin\,\theta}=\frac{1}{\tan\,\frac{\theta}{2}}. $$ Thus (2.9) is equivalent to the condition $$\displaystyle \tan\,\frac{\theta}{2}< K(\vartheta;\omega,\mbox{\boldmath $n$}) <\frac{1}{\tan\,\frac{\theta}{2}}. \tag {2.17} $$ Here consider the case $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$. Choose $$\displaystyle \vartheta=\frac{\omega\times\mbox{\boldmath $n$}} {\vert\omega\times\mbox{\boldmath $n$}\vert}. $$ We have $\vartheta\cdot\omega=\vartheta\cdot\mbox{\boldmath $n$}=0$ and $\vartheta\in S^2$. Since we have $$\displaystyle \mbox{\boldmath $n$}\cdot(\omega\times\vartheta)=-\vert\omega\times\mbox{\boldmath $n$}\vert $$ and $$\displaystyle 1=(\mbox{\boldmath $n$}\cdot\omega)^2+\vert\omega\times\mbox{\boldmath $n$}\vert^2, $$ one gets $$\begin{array}{l} \displaystyle \,\,\,\,\,\, 2-(\mbox{\boldmath $n$}\cdot\omega)^2-(\mbox{\boldmath $n$}\cdot\vartheta)^2 +2\mbox{\boldmath $n$}\cdot(\omega\times\vartheta) \\ \\ \displaystyle =1+\vert\omega\times\mbox{\boldmath $n$}\vert^2-2\vert\omega\times\mbox{\boldmath $n$}\vert\\ \\ \displaystyle =(1-\vert\omega\times\mbox{\boldmath $n$}\vert)^2. \end{array} $$ Therefore, we obtain $$\displaystyle K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$}) =\frac{\omega\cdot\mbox{\boldmath $n$}}{1-\vert\omega\times\mbox{\boldmath $n$}\vert}. $$ Note that we are considering $\omega$ satisfying (1.20). Let $\varphi$ denote the angle between $\omega$ and $\mbox{\boldmath $n$}$. Under the condition $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$, we see that (1.20) is equivalent to the condition $$\displaystyle 0<\varphi<\frac{\pi}{2}-\theta. \tag {2.18} $$ Then one can write $$\begin{array}{ll} \displaystyle K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$}) & \displaystyle =\frac{\cos\,\varphi}{1-\sin\,\varphi} \\ \\ \displaystyle & \displaystyle =\frac{1+\sin\,\varphi}{\cos\,\varphi} \\ \\ \displaystyle & \displaystyle =\frac{\displaystyle 1+\cos\,(\frac{\pi}{2}-\varphi)}{\displaystyle \sin\,(\frac{\pi}{2}-\varphi)} \\ \\ \displaystyle & \displaystyle =\frac{1}{\displaystyle\tan\,\frac{1}{2}\,\left(\frac{\pi}{2}-\varphi\right)}. \end{array} $$ Thus (2.18) gives $$\displaystyle 1<K(\omega\times\mbox{\boldmath $n$};\omega,\mbox{\boldmath $n$})<\frac{1}{\displaystyle \tan\,\frac{\theta}{2}}. \tag {2.19} $$ Since we have $\tan\,\frac{\theta}{2}<1$ for all $\theta\in\,]0,\,\frac{\pi}{2}[$, (2.19) yields the validity of (2.17). Next consider the case $\omega\times\mbox{\boldmath $n$}=\mbox{\boldmath $0$}$. By (1.20) we have $\omega=\mbox{\boldmath $n$}$. Then every $\vartheta$ perpendicular to $\mbox{\boldmath $n$}$ satisfies $$\displaystyle K(\vartheta;\mbox{\boldmath $n$}, \mbox{\boldmath $n$}) =1. $$ This yields that (2.17) is valid for all $\theta\in\,]0,\,\frac{\pi}{2}[$. The results above are summarized as follows. Given $\omega\in S^2$ satisfying (1.20), define the subset of $S^2$ $$\displaystyle {\cal K}(\omega;\mbox{\boldmath $n$},\theta) = \left\{ \vartheta\in S^2\,\left\vert\right.\,\vartheta\cdot\omega=0, \,\,\text{$K(\vartheta;\omega,\mbox{\boldmath $n$})$ satisfies (2.17)\,}\,\right\}. $$ Then, we have $\bullet$ If $\omega\not=\mbox{\boldmath $n$}$, then $\omega\times\mbox{\boldmath $n$}\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$.
$\bullet$ If $\omega=\mbox{\boldmath $n$}$, then ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)= \{\vartheta\in S^2\,\vert\,\vartheta\cdot\omega=0\}\equiv S(\omega)$. Thus, in any case, the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty and clearly open with respect to the topology of the set $S(\omega)$, that is, the relative topology of $S^2$. Besides, we can say more about ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$. We claim that the set ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is closed. For this, it suffices to show that if a sequence $\{\vartheta_n\}$ of ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ converges to a point $\vartheta\in S(\omega)$, then $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$. This is proved as follows. By assumption, each $\vartheta_n$ satisfies $$\displaystyle \tan\,\frac{\theta}{2}< K(\vartheta_n;\omega,\mbox{\boldmath $n$}) <\frac{1}{\tan\,\frac{\theta}{2}}. $$ Taking the limit, we have $$\displaystyle \tan\,\frac{\theta}{2}\le K(\vartheta;\omega,\mbox{\boldmath $n$}) \le\frac{1}{\tan\,\frac{\theta}{2}}. $$ By (2.15) this is equivalent to $\vert z_{+}\vert\ge 1$ and $\vert z_{-}\vert\le 1$. However, as seen in the proof of Lemma 2.2, $\vert z_{+}\vert\not=1$ and $\vert z_{-}\vert\not=1$. Thus we have $\vert z_{+}\vert>1$ and $\vert z_{-}\vert<1$. This is equivalent to $\vartheta\in{\cal K}(\omega;\mbox{\boldmath $n$},\theta)$. Since $S(\omega)$ is connected and ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)$ is non-empty, open and closed, we conclude that ${\cal K}(\omega;\mbox{\boldmath $n$},\theta)=S(\omega)$. This completes the proof of Proposition 1.3. \section{Proof of Corollaries 1.4 and 1.5} Note that $\omega$ satisfies (1.20). \subsection{On Corollary 1.4} From (1.26) we have, if $\omega=\mbox{\boldmath $n$}$, then for all $\vartheta\in S(\omega)$ $$\displaystyle I(\omega,\vartheta)= 6\tilde{\rho}(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot\omega)^{-3}. $$ On the other hand, if $\omega\not=\mbox{\boldmath $n$}$, then we have $\omega\times\mbox{\boldmath $n$}\not=\mbox{\boldmath $0$}$ (under the condition (1.20)) and $$\displaystyle S(\omega)\cap S(\mbox{\boldmath $n$})=\left\{\pm\frac{\omega\times\mbox{\boldmath $n$}}{\vert \omega\times\mbox{\boldmath $n$}\vert}\right\}. $$ Thus one gets $$\displaystyle I(\omega,\vartheta) = \begin{array}{ll} \displaystyle 6\tilde{\rho}(p(\omega))\,V(\theta) \left(\mbox{\boldmath $n$}\cdot\omega\mp\,i\frac{\vert\omega\times\mbox{\boldmath $n$}\vert^2}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}\right)^{-3} & \text{for $\displaystyle\vartheta=\pm\frac{\omega\times(\omega\times\mbox{\boldmath $n$})}{\vert\omega\times(\omega\times\mbox{\boldmath $n$})\vert}$.} \end{array} $$ Thus one gets the assertion (i) and (1.27) in (ii). For (1.28) it suffices to prove the following fact. \proclaim{\noindent Lemma 3.1.} Let the unit vectors $\omega$ and $\mbox{\boldmath $n$}$ satisfy $\omega\cdot\mbox{\boldmath $n$}>0$. We have $$\displaystyle \int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3} =\pi(3(\mbox{\boldmath $n$}\cdot\omega)^2-1). \tag {3.1} $$ \endproclaim {\it\noindent Proof.} Note that under the change $\omega\rightarrow-\omega$ the left-hand side of (3.1) changes sign while the right-hand side is invariant; hence the case $\omega\cdot\mbox{\boldmath $n$}<0$ yields (3.1) with the opposite sign, and we restrict ourselves to $\omega\cdot\mbox{\boldmath $n$}>0$, which is the case needed in the sequel.
If $\mbox{\boldmath $n$}\cdot\omega=1$, then $\omega=\mbox{\boldmath $n$}$. Thus $S(\omega)=S(\mbox{\boldmath $n$})$. Then for all $\vartheta\in S(\omega)$ we have $\mbox{\boldmath $n$}\cdot(\omega+i\vartheta)=1$. This yields $$\displaystyle \int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3} =2\pi. $$ Thus it remains to consider the case $\mbox{\boldmath $n$}\cdot\omega\not=1$. Choose an orthogonal $3\times 3$ matrix $A$ such that $A^T\omega=\mbox{\boldmath $e$}_3$. Introduce the change of variables $\vartheta=A\vartheta'$. We have $\vartheta\in S(\omega)$ if and only if $\vartheta'\in S(\mbox{\boldmath $e$}_3)$ and $$\begin{array}{ll} \displaystyle \mbox{\boldmath $n$}\cdot(\omega+iA\vartheta') & \displaystyle =\mbox{\boldmath $n$}'\cdot(\mbox{\boldmath $e$}_3+i\vartheta'), \end{array} $$ where $\mbox{\boldmath $n$}'=A^T\mbox{\boldmath $n$}\in S^{2}$. Here we introduce the polar coordinates for $\vartheta'\in S(\mbox{\boldmath $e$}_3)$: $$\begin{array}{ll} \displaystyle \vartheta'=(\cos\,\varphi,\sin\,\varphi, 0)^T, & \varphi\in\,[0,\,2\pi[. \end{array} $$ Then, we have $$\begin{array}{ll} \displaystyle I\equiv\int_{S(\omega)}\frac{ds(\vartheta)}{(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^3} & \displaystyle =\int_0^{2\pi}\frac{d\varphi} {(\mbox{\boldmath $n$}'\cdot(i\cos\,\varphi,i\sin\,\varphi,1)^T)^3} \\ \\ \displaystyle & \displaystyle =-\frac{1}{i}\int_0^{2\pi}\frac{d\varphi} {(\mbox{\boldmath $n$}'\cdot(\cos\,\varphi,\sin\,\varphi,-i)^T)^3} \\ \\ \displaystyle & \displaystyle =i\int_0^{2\pi}\frac{d\varphi} {(a\cos\,\varphi+b\sin\,\varphi-ic)^3}, \end{array} \tag {3.2} $$ where $\mbox{\boldmath $n$}'=(a,b,c)^T$. The numbers $a, b, c$ satisfy $a^2+b^2+c^2=1$ and $0<c<1$ since we have $c=\mbox{\boldmath $n$}'\cdot\mbox{\boldmath $e$}_3=\mbox{\boldmath $n$}\cdot\omega$. Thus $a^2+b^2\not=0$. To compute the integral on the right-hand side of (3.2) we make use of the residue calculus. The change of variables $z=e^{i\varphi}$ gives $$\begin{array}{l} \displaystyle \,\,\,\,\,\, a\cos\,\varphi+b\sin\,\varphi-ic \\ \\ \displaystyle =\frac{1}{2} \left\{a\left(z+\frac{1}{z}\right)+\frac{b}{i}\left(z-\frac{1}{z}\right)-2ic\right\} \\ \\ \displaystyle =\frac{1}{2z} \left\{(a-ib)z^2-2icz+(a+ib)\right\} \\ \\ \displaystyle =\frac{a-ib}{2z} \left\{\left(z-\frac{ic}{a-ib}\right)^2-\left(\frac{i}{a-ib}\right)^2\right\} \\ \\ \displaystyle =\frac{a-ib}{2z}(z-\alpha)(z-\beta), \end{array} \tag {3.3} $$ where $$\begin{array}{ll}\displaystyle \alpha=\frac{i(c+1)}{a-ib}, & \displaystyle \beta=\frac{i(c-1)}{a-ib}. \end{array} $$ Since $1-c<1+c$ and $a\cos\,\varphi+b\sin\,\varphi-ic\not=0$ for $z=e^{i\varphi}$, we have $\vert\beta\vert<1<\vert\alpha\vert$. Substituting (3.3) into (3.2) and using $d\varphi=\frac{dz}{iz}$, we have $$\begin{array}{ll} \displaystyle I & \displaystyle =i\int_{\vert z\vert=1}\frac{2^3}{(a-ib)^3}\cdot\frac{z^3}{(z-\alpha)^3(z-\beta)^3}\cdot\frac{dz}{iz} \\ \\ \displaystyle & \displaystyle =\left(\frac{2}{a-ib}\right)^3\int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3}.
\end{array} \tag {3.4} $$ The residue calculus yields $$\begin{array}{ll} \displaystyle \int_{\vert z\vert=1}\,\frac{z^2 dz}{(z-\alpha)^3(z-\beta)^3} & \displaystyle =2\pi i\,\text{Res}_{z=\beta}\,\left(\frac{z^2}{(z-\alpha)^3(z-\beta)^3}\right) \\ \\ \displaystyle & \displaystyle =2\pi i\cdot\frac{1}{2}\frac{d^2}{dz^2}\left(\frac{z^2}{(z-\alpha)^3}\right)\vert_{z=\beta} \\ \\ \displaystyle & \displaystyle =2\pi i\cdot\frac{\alpha^2+4\alpha\beta+\beta^2}{(\beta-\alpha)^5}. \end{array} \tag {3.5} $$ Here we have the expression $$\displaystyle \alpha-\beta=\frac{2i}{a-ib} $$ and $$\begin{array}{ll} \displaystyle \alpha^2+4\alpha\beta+\beta^2 & \displaystyle =-\frac{(c+1)^2+4(c^2-1)+(c-1)^2}{(a-ib)^2} \\ \\ \displaystyle & \displaystyle =-\frac{2(3c^2-1)}{(a-ib)^2}. \end{array} $$ Thus from (3.4) and (3.5) we obtain $$\begin{array}{ll} \displaystyle I & \displaystyle =-2\pi\left(\frac{a-ib}{2}\right)^2(\alpha^2+4\alpha\beta+\beta^2) \\ \\ \displaystyle & \displaystyle =\pi(3c^2-1). \end{array} $$ This completes the proof of (3.1). \noindent $\Box$ \subsection{On Corollary 1.5} Let us explain the uniqueness of the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$. From (1.27), (1.28) and (1.29) we have $$\displaystyle \frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta) \,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert} =(\mbox{\boldmath $n$}\cdot\omega)^3(3(\mbox{\boldmath $n$}\cdot\omega)^2-1) $$ and thus $$\displaystyle 0< \frac{\displaystyle\left\vert\int_{S(\omega)}\,I(\omega,\vartheta) \,ds(\vartheta)\right\vert}{\pi\,\max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert} \le 2. $$ Since $]\,\frac{1}{\sqrt{3}},\,\,1]\ni\gamma\longmapsto\gamma^3(3\gamma^2-1)\in\,]0,\,2]$ is bijective, the solution of the quintic equation (1.30) in $]\frac{1}{\sqrt{3}},\,1]$ is unique and coincides with $\gamma=\mbox{\boldmath $n$}\cdot\omega$. The formulae (1.31) and (1.32) are derived as follows. A combination of (1.26) and (1.28) yields $$\displaystyle (\mbox{\boldmath $n$}\cdot\omega+i\mbox{\boldmath $n$}\cdot\vartheta)^3 =T(\omega,\vartheta). $$ Expanding the left-hand side and comparing real and imaginary parts immediately yields the desired formulae.
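The recovery procedure of Corollary 1.5 (Steps 4--8 above) is easy to test numerically. The following minimal Python sketch generates synthetic values of $I(\omega,\vartheta)$ from (1.26), solves the quintic (1.30), and recovers $\mbox{\boldmath $n$}$ via (1.31)--(1.33); the axis, angle and strength below are purely illustrative choices, and a simple midpoint rule replaces the exact integral over $S(\omega)$.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Synthetic ground truth (illustrative): axis n, opening angle theta,
# strength rho = \tilde{rho}(p(omega)); omega chosen with n.omega > 1/sqrt(3).
n = np.array([0.2, 0.1, 1.0]); n /= np.linalg.norm(n)
theta, rho = 0.4, 1.5 - 0.7j
omega = np.array([0.0, 0.0, 1.0])
V = np.pi / 3 * np.cos(theta) * np.sin(theta) ** 2       # V(theta)

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])  # basis of S(omega)
def I(v):                                                # indicator values, cf. (1.26)
    return 6 * rho * V * (n @ omega + 1j * (n @ v)) ** (-3)

phi = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
vals = np.array([I(np.cos(t) * e1 + np.sin(t) * e2) for t in phi])
integral = vals.mean() * 2 * np.pi                       # int_{S(omega)} I ds
rhs = abs(integral) / (np.pi * np.abs(vals).max())

# Step 4: the unique root of (1.30) in ]1/sqrt(3), 1]
gamma = brentq(lambda g: g**3 * (3 * g**2 - 1) - rhs, 1 / np.sqrt(3) + 1e-9, 1.0)

def mu(v):                                               # Steps 6-7, via (1.31)-(1.33)
    T = integral / (np.pi * (3 * gamma**2 - 1) * I(v))
    mu2 = (gamma**3 - T.real) / (3 * gamma)
    return T.imag / (3 * gamma**2 - mu2)

# Step 8: solve n.omega = gamma, n.e_j = mu_j for n
n_rec = np.linalg.solve(np.vstack([omega, e1, e2]), [gamma, mu(e1), mu(e2)])
print(n_rec, "vs", n)                                    # agree up to quadrature error
\end{verbatim}

In exact arithmetic the recovered vector coincides with $\mbox{\boldmath $n$}$; with the midpoint rule the discrepancy is at the level of the quadrature error.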
\section{Application to an inverse obstacle problem} As pointed out in \cite{Ik} the enclosure method developed here can be applied also to an inverse obstacle problem in three dimensions governed by the equation $$\begin{array}{ll} \displaystyle \Delta u+k^2 n(x)u=0, & x\in\Omega, \end{array} \tag {4.1} $$ where $k$ is a fixed positive number. We assume that $\partial\Omega\in C^{\infty}$, for simplicity. Both $u$ and $n$ can be complex-valued functions. In this section we assume that $n(x)$ takes the form $n(x)=1+F(x)$, $x\in\Omega$, where $F=F_{\rho,D}(x)$ is given by (1.7). We assume that $\rho\in L^{\infty}(D)$ instead of $\rho\in L^2(D)$ and that $u\in H^2(\Omega)$ is an arbitrary nontrivial solution of (4.1) at this stage. We do not specify the boundary condition of $u$ on $\partial\Omega$. By the Sobolev imbedding theorem \cite{G} one may assume that $u\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$. In this section we consider {\bf\noindent Problem 2.} Extract information about the singularity of $D$ from the Cauchy data of $u$ on $\partial\Omega$. We encounter this type of problem when, for example, $u$ is given by the restriction to $\Omega$ of the total wave defined in the whole space and generated by a point source located outside $\Omega$, or by a single plane wave coming from infinity. The surface where the measurements are taken is given by $\partial\Omega$, which encloses the penetrable obstacle $D$ with a different refraction index $1+\rho$, $\rho\not\equiv 0$. See \cite{CK} for detailed information about the direct problem itself. In any case, we start with the Cauchy data of an arbitrary (nontrivial) $H^2(\Omega)$ solution of (4.1). Using the Cauchy data of $u$ on $\partial\Omega$, we introduce the indicator function $$\displaystyle I_{\omega,\vartheta}(\tau)=\int_{\partial\Omega} \left(\frac{\partial u}{\partial\nu}v-\frac{\partial v}{\partial\nu} u\right)\,dS, \tag {4.2} $$ where the function $v=v(x), x\in\Bbb R^3$ is given by $$\displaystyle v=e^{x\cdot(\tau\omega+i\sqrt{\tau^2+k^2}\vartheta)},\,\,\tau>0 $$ and $\vartheta\in S(\omega)$. Its derivative with respect to $\tau$ is given by the formula $$\displaystyle I_{\omega,\vartheta}'(\tau) =\int_{\partial\Omega}\left(\frac{\partial u}{\partial\nu}\,v_{\tau}-\frac{\partial\,v_{\tau}}{\partial\nu} u\right)\,dS, \tag {4.3} $$ where $$\displaystyle v_{\tau}=\partial_{\tau}v=\left\{x\cdot\left(\omega+i\frac{\tau}{\sqrt{\tau^2+k^2}}\,\vartheta\,\right)\,\right\}\,v. $$ As in the proof of Theorem 1.1, integration by parts yields $$\displaystyle I_{\omega,\vartheta}(\tau)=-k^2\int_D\rho(x)u(x)v\,dx $$ and $$\displaystyle I_{\omega,\vartheta}'(\tau)=-k^2\int_D\rho(x)u(x)v_{\tau}\,dx. $$ Thus this can be viewed as the case where $\rho(x)$ in Problem 1 is given by $-k^2\rho(x)u(x)$ and $\tilde{\rho}(x)$ in Definition 1.2 by $-k^2\tilde{\rho}(x)u(x)$. Thus we obtain \proclaim{\noindent Theorem 4.1.} Let $\omega$ be regular with respect to $D$ and assume that $D$ has a conical singularity from direction $\omega$. Then, we have $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}(\tau)= -k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}) $$ and $$\displaystyle \tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta}I_{\omega,\vartheta}'(\tau)= -k^2\tilde{\rho}(p(\omega))\,u(p(\omega))(h_D(\omega)+ip(\omega)\cdot\vartheta)\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) +O(\tau^{-\alpha}). $$ The remainder $O(\tau^{-\alpha})$ is uniform with respect to $\vartheta\in S(\omega)$. \endproclaim Thus under the same assumptions as Theorem 4.1, for each $\vartheta\in S(\omega)$ one can calculate $$\displaystyle I(\omega,\vartheta)\equiv -k^2\tilde{\rho}(p(\omega))\,u(p(\omega))\,C_{(p(\omega),\omega)}(\delta,Q,\vartheta) $$ via the formula $$\displaystyle I(\omega,\vartheta) =\lim_{\tau\rightarrow\infty}\tau^3e^{-\tau h_D(\omega)}e^{-i\sqrt{\tau^2+k^2}p(\omega)\cdot\vartheta} I_{\omega,\vartheta}(\tau) \tag {4.4} $$ by using the Cauchy data of $u$ on $\partial\Omega$ if $p(\omega)$ is known. We also have \proclaim{\noindent Theorem 4.2.} Let $\omega$ be regular with respect to $D$. Assume that $D$ has a conical singularity from direction $\omega$; that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2; and that the value of $u$ at $p(\omega)$ satisfies $$\displaystyle u(p(\omega))\not=0. \tag {4.5} $$ If the direction $\vartheta\in S(\omega)$ satisfies the condition (1.15), then all the formulae (1.16), (1.17) and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3) are valid. \endproclaim Note that the assumption (4.5) ensures $u\not\equiv 0$. See the Appendix for an example of $u$ satisfying (4.5). The following corollaries correspond to Corollaries 1.1 and 1.2.
\proclaim{\noindent Corollary 4.1.} Let $\omega$ be regular with respect to $D$. Under the same assumptions as those in Theorem 4.2 the point $p(\omega)$ is uniquely determined by the Cauchy data of $u$ on $\partial\Omega$. \endproclaim \proclaim{\noindent Corollary 4.2.} Let $u\in H^2(\Omega)$ be a solution of (4.1). Assume that $D$ is given by the inside of a convex polyhedron; that in a neighbourhood of each vertex $p$ of $D$, $D$ coincides with the inside of a tetrahedron with apex $p$; that $n-1=F_{\rho, D}$ given by (1.7) is active at $p$; and that the value of $u$ at $p$ satisfies (4.5). Then, all the formulae (1.16), (1.17) and (1.18) for the indicator function defined by (4.2) together with its derivative (4.3) are valid for all $\omega$ regular with respect to $D$ and $\vartheta\in S(\omega)$. Besides, the Cauchy data of $u$ on $\partial\Omega$ uniquely determines $D$. \endproclaim The following result is an extension of Theorem 4.1 in \cite{Ik} to the three-dimensional case. \proclaim{\noindent Corollary 4.3.} Let $u\in H^2(\Omega)$ be a solution of (4.1). Let $\omega\in S^2$ be regular with respect to $D$. Assume that: $D$ has a circular cone singularity at $p=p(\omega)$; $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2; and the value of $u$ at $p(\omega)$ satisfies (4.5). Choose two linearly independent vectors $\vartheta=\vartheta_1$ and $\vartheta_2$ in $S(\omega)$. Then, the point $p(\omega)$ itself and thus $h_D(\omega)=p(\omega)\cdot\omega$ can be extracted from the Cauchy data of $u$ on $\partial\Omega$ by using the formula $$\displaystyle p(\omega)\cdot\omega+i\,p(\omega)\cdot\vartheta_j =\lim_{\tau\rightarrow\infty} \frac{I_{\omega,\vartheta_j}'(\tau)}{I_{\omega,\vartheta_j}(\tau)},\,\,\,j=1,2. \tag {4.6} $$ \endproclaim By virtue of the formula (1.24), the function $I(\omega,\,\cdot\,)$ has the expression $$\displaystyle I(\omega,\vartheta)=-6k^2\,\tilde{\rho}(p(\omega))u(p(\omega))\,V(\theta)(\mbox{\boldmath $n$}\cdot(\omega+i\vartheta))^{-3}. \tag {4.7} $$ Similarly to Corollary 1.4, formula (4.7) immediately yields the following results. \proclaim{\noindent Corollary 4.4.} Let $u\in H^2(\Omega)$ be a solution of (4.1). Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$. \noindent (i) Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5). The vector $\omega$ coincides with $\mbox{\boldmath $n$}$ if and only if the function $I(\omega,\,\cdot\,)$ is a constant function. \noindent (ii) The vector $\mbox{\boldmath $n$}$ and the angle $\theta$ of $V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)$ and the quantity $\tilde{\rho}(p(\omega))\,u(p(\omega))$ satisfy the following two equations: $$\displaystyle 6k^2\,\vert\tilde{\rho}(p(\omega))\,u(p(\omega))\vert\,V(\theta)=(\mbox{\boldmath $n$}\cdot\omega)^3 \max_{\vartheta\in S(\omega)}\vert I(\omega,\vartheta)\vert; \tag {4.8} $$ $$\displaystyle -6k^2\,\tilde{\rho}(p(\omega))u(p(\omega)) \,V(\theta)\,(3(\mbox{\boldmath $n$}\cdot\omega)^2-1) =\frac{1}{\pi}\,\int_{S(\omega)}\,I(\omega,\vartheta) \,ds(\vartheta). \tag {4.9} $$ \endproclaim Using the equations (4.7), (4.8) and (4.9) one gets the following corollary. \proclaim{\noindent Corollary 4.5.} Let $u\in H^2(\Omega)$ be a solution of (4.1).
Let $\omega\in S^2$ be regular with respect to $D$. Assume that $D$ has a circular cone singularity at $p(\omega)$ such that $D\cap B_{\epsilon}(p(\omega))=V_{p(\omega)}(-\mbox{\boldmath $n$},\theta)\cap B_{\epsilon}(p(\omega))$ for some $\epsilon>0$. Assume that $n(x)-1=F_{\rho,D}(x)$ is active at $p(\omega)$ in the sense of Definition 1.2 and the value of $u$ at $p(\omega)$ satisfies (4.5). Assume also that $\omega\approx\mbox{\boldmath $n$}$ in the sense that (1.29) holds. Then, exactly the same statement and formulae as those of Corollary 1.5 hold. \endproclaim Note that under the same assumptions as Corollary 4.5, one can finally calculate the quantity $$\displaystyle \tilde{\rho}(p(\omega))\,u(p(\omega))\,V(\theta) \tag {4.10} $$ and $\mbox{\boldmath $n$}$ from the Cauchy data of $u$ on $\partial\Omega$. Since the steps of the calculation are similar to those presented in Subsection 1.2 for the inverse source problem, we omit the description. However, it should be noted that, in addition, if $\tilde{\rho}(p(\omega))$ is known to be a {\it real number}, then one can recover the phase of the complex number $u(p(\omega))$ modulo $2\pi n$, $n=0,\pm 1,\pm 2,\cdots$ from the computed value (4.10). {\bf\noindent Remark 4.1.} One can apply the result in \cite{IkC} to the computation of the value $u(p(\omega))$ itself. For simplicity we assume that $\Omega$ is convex, as in the case $\Omega=B_R(x_0)$, a ball centered at a point $x_0$ with a large radius $R$. From formula (4.6) we know the position of $p(\omega)$ and thus the domain $\Omega\cap\{x\in\Bbb R^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$. Because of the continuity of $u$ on $\overline\Omega$, one has, for a sufficiently small $\epsilon>0$ $$\displaystyle u(p(\omega))\approx u(p(\omega)+\epsilon\,\omega). $$ Since the point $p(\omega)+\epsilon\,\omega\in\Omega\cap\{x\in\Bbb R^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$ and therein $u\in H^2$ satisfies the Helmholtz equation $\Delta u+k^2u=0$, one can calculate the value $u(p(\omega)+\epsilon\,\omega)$ itself from the Cauchy data of $u$ on $\partial\Omega\cap\{x\in\Bbb R^3\,\vert\,x\cdot\omega>p(\omega)\cdot\omega\}$ by using Theorem 1 in \cite{IkC}. \section{Final remark} All the results in this paper can be extended also to the case when the governing equation of the background medium is given by a Helmholtz equation with a known coefficient $n_0(x)$. This means that if one considers, instead of (1.6) and (4.1), the equations $$\begin{array}{ll} \displaystyle \Delta u+k^2n_0(x)u=F_{\rho,D}(x), & x\in\Omega \end{array} $$ and $$\begin{array}{ll} \displaystyle \Delta u+k^2(n_0(x)+F_{\rho,D}(x))u=0, & x\in\Omega, \end{array} $$ respectively, then one obtains all the corresponding results. \section{Appendix. On condition (4.5)} As suggested in \cite{Ik}, the condition (4.5) can be satisfied if $k$ is sufficiently small in the situation where $u$ is given by the restriction onto $\Omega$ of the total field $U$ of the whole-space scattering problem generated by, for example, a point source located at a point $z$ in $\Bbb R^3\setminus\overline\Omega$.
The field $U$ has the expression $U=\Phi(x,z)+w_z(x)$, where $$\begin{array}{ll} \displaystyle \Phi(x,z)=\frac{1}{4\pi}\frac{e^{ik\vert x-z\vert}}{\vert x-z\vert}, & x\in\Bbb R^3\setminus\{z\} \end{array} $$ and $w_z\in H^2_{\text{local}}(\Bbb R^3)$ is the unique solution of the inhomogeneous Helmholtz equation $$\begin{array}{ll} \displaystyle \Delta w_z+k^2w_z+k^2F(x)(w_z+\Phi(x,z))=0, & x\in\Bbb R^3 \end{array} $$ with the outgoing Sommerfeld radiation condition $$\displaystyle \lim_{r\rightarrow\infty}r\left(\frac{\partial}{\partial r}w_z(x)-ik w_z(x)\right)=0, $$ where $r=\vert x\vert$ and $F=F_{\rho,D}$ is given by (1.7). See \cite{CK} for the solvability. Here we claim \proclaim{\noindent Proposition A.} Let $0<R_1<R_2$ satisfy $D\subset B_{R_2}(z)\setminus\overline B_{R_1}(z)$. Let $M>0$ and $R>0$ satisfy $\vert D\vert\le M$ and $\Vert\rho\Vert_{L^{\infty}(D)}\le R$, respectively. If $k$ satisfies the system of inequalities $$\displaystyle C\equiv \frac{3k^2R}{2}\left(\frac{M}{4\pi}\right)^{2/3}<1 \tag {A.1} $$ and $$\displaystyle \frac{C}{1-C}<\frac{R_1}{R_2}, \tag {A.2} $$ then, for all $x\in\overline D$ we have $$\displaystyle \vert U(x)\vert\ge \frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right). \tag {A.3} $$ \endproclaim {\it\noindent Proof.} Note that $w_z\in C^{0,\alpha}(\overline\Omega)$ with $0<\alpha<1$ by the Sobolev imbedding theorem. It is well known that the function $w_z$ satisfies the Lippmann--Schwinger equation $$\begin{array}{ll} \displaystyle w_z(x) & \displaystyle =k^2\int_{D}\Phi(x,y)\rho(y)w_z(y)\,dy+k^2\int_{D}\Phi(x,y)\Phi(y,z)\rho(y)\,dy \end{array} $$ and thus, for all $x\in\overline{D}$ we have $$\displaystyle \vert w_z(x)\vert \le \frac{k^2 R}{4\pi} \left(\Vert w_z\Vert_{L^{\infty}(D)} +\frac{1}{4\pi\,R_1}\right)\, \int_D\frac{dy}{\vert x-y\vert}. \tag {A.4} $$ Let $\epsilon>0$. We have $$\begin{array}{ll} \displaystyle \int_D\frac{dy}{\vert x-y\vert} & \displaystyle =\int_{D\cap B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\int_{D\setminus B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert} \\ \\ \displaystyle & \displaystyle \le \int_{B_{\epsilon}(x)}\,\frac{dy}{\vert x-y\vert}+\frac{\vert D\vert}{\epsilon} \\ \\ \displaystyle & \displaystyle \le 2\pi\epsilon^2+\frac{M}{\epsilon}. \end{array} $$ Choose $\epsilon$ so that the right-hand side becomes minimal, that is, $$\displaystyle \epsilon=\left(\frac{M}{4\pi}\right)^{1/3}. $$ Then one gets $$\begin{array}{ll} \displaystyle \int_D\frac{dy}{\vert x-y\vert} & \displaystyle \le 6\pi \left(\frac{M}{4\pi}\right)^{2/3}. \end{array} $$ Thus this together with (A.4) yields $$\displaystyle \left(1-C\,\right)\Vert w_z\Vert_{L^{\infty}(D)} \le \frac{C}{4\pi\,R_1}. $$ This together with the estimate $$\displaystyle \vert U(x)\vert\ge \frac{1}{4\pi\,R_2}-\Vert w_z\Vert_{L^{\infty}(D)} $$ yields the desired estimate (A.3) under the assumptions (A.1) and (A.2). \noindent $\Box$ Note that since $R_2>R_1$, the pair of inequalities (A.1) and (A.2) is equivalent to the single inequality $$\displaystyle C<\frac{R_1}{R_1+R_2}. \tag {A.5} $$ Thus, choosing $k^2$ sufficiently small in the sense of (A.5), we have, for all $x\in\overline D$ $$\displaystyle \vert u(x)\vert \ge\frac{1}{4\pi}\left(\frac{1}{R_2}-\frac{C}{1-C}\frac{1}{R_1}\,\right)>0. $$ Thus the condition (4.5) for $u=U\vert_{\Omega}$ is satisfied. The choice of $k$ depends only on the a-priori information about $D$ and $\rho$ described by $R_1$, $R_2$, $M$ and $R$.
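The admissible range of $k$ in Proposition A is completely explicit, so it can be evaluated directly. The following minimal Python sketch (with purely illustrative values of $R_1$, $R_2$, $M$ and $R$) computes the threshold implied by (A.5) and the resulting lower bound (A.3) on $\vert U\vert$ over $\overline D$.

\begin{verbatim}
import numpy as np

# Illustrative a-priori data: D lies in the shell B_{R2}(z) \ B_{R1}(z),
# |D| <= M and ||rho||_inf <= R.  All values below are assumptions.
R1, R2, M, R = 1.0, 3.0, 0.5, 2.0

def C(k):                       # the constant C of (A.1)
    return 1.5 * k**2 * R * (M / (4 * np.pi)) ** (2 / 3)

# (A.5): C(k) < R1/(R1+R2) determines the admissible wave numbers
k_max = np.sqrt((R1 / (R1 + R2)) / (1.5 * R * (M / (4 * np.pi)) ** (2 / 3)))
k = 0.9 * k_max                 # any k strictly below the threshold works
c = C(k)
lower = (1 / (4 * np.pi)) * (1 / R2 - c / (1 - c) / R1)   # bound (A.3)
print(f"k_max = {k_max:.3f}, lower bound on |U| over D = {lower:.4f}")
\end{verbatim}

Since the printed lower bound is positive, (4.5) holds at every point of $\overline D$, in particular at $p(\omega)$.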
$$\quad$$ \centerline{{\bf Acknowledgments}} This research was partially supported by Grant-in-Aid for Scientific Research (C) (No. 17K05331) and (B) (No. 18H01126) of the Japan Society for the Promotion of Science. $$\quad$$
{ "attr-fineweb-edu": 1.794922, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd2A5qU2Ap6XtZDi8
\section{Introduction} Bitcoin is the first and the largest decentralized electronic cryptocurrency system that uses blockchain technology \cite{Nakamoto}. It adopts a cryptographic proof of work (PoW) mechanism that allows anonymous peers to create and validate transactions through the underlying peer-to-peer (P2P) network. The peers that maintain and update the chain of blocks are called miners \cite{miningPool, miningview}. In addition to transaction generation by user nodes, transaction handling in Bitcoin is done by the full nodes, among which the miners play a central role: they solve the mathematical puzzle required to generate a valid block confirming the related transactions. Due to the design and structure of proof of work (PoW) in Bitcoin, the difficulty of solving the mathematical puzzle increases exponentially, being adjusted every 2016 blocks. As a consequence, independent miners struggle to solve the puzzle. This has forced miners to collaborate and form a team that solves the puzzle through a combined computational effort, a {\em mining pool} \cite{miningEvolu}, and earn a reward, depending on their overall mining power share and on the reward mechanism and policy of the mining pool \cite{strategy} \cite{Profit}. The mining pools' behavior significantly affects the Bitcoin end users since the mining pools process most of the users' transactions: the throughput of Bitcoin depends partially on those major miners \cite{miningview}. Additionally, as the number of users increases, the system's internal traffic of transaction handling escalates faster than expected, and at the same time, the throughput requirement increases proportionally with the number of users. This paper investigates how transactions are handled by the Bitcoin system. The aim is to, through analyzing transaction handling, provide valuable insights to both users and miners: \begin{itemize} \item A user may estimate when his/her transaction will be confirmed and hence choose an appropriate time to request a transaction to reduce the waiting time. \item A miner may define block generation strategies that utilize the current state of the system. \item A miner may also explore which mining pools are more recognizable in the block generation and use this knowledge to join or leave a mining pool. \end{itemize} Specifically, through an exploratory data analysis, we reveal key transaction handling characteristics and provide answers to several fundamental transaction handling questions, such as: what is the current throughput, how frequently are blocks generated, how long does it take for a transaction to be approved, and who has created a block. Besides, through a predictability analysis on throughput-related features and classification of mining pools, we provide additional insights on these fundamental questions. The investigation is based on a dataset collected at a Bitcoin full node which contains transaction handling information over a period of 543 days from 7th March 2019 to 31st August 2020. As a highlight, the dataset includes locally available information that cannot be found on the public ledger blockchain. The results indicate that with a proper prediction model taking into account both internal and external factors, the prediction performance can be appealing for block size and number of transactions in a block, as well as for block generation intensity. However, in terms of predicting when the next block will be generated and a transaction will be approved, the effort does not lead to conclusive observations.
In addition, surprisingly, in predicting/classifying the mining pool, a clear distinction is only found for one specific mining pool, F2Pool. These findings, including the surprising ones, are discussed with the help of findings from the exploratory analysis. The rest of the paper is organized as follows. Section \ref{work} illustrates the workflow of transaction handling in Bitcoin, and introduces the dataset used in the analysis. Then, Section~\ref{sec-met} introduces our analysis approach, highlighting the adopted statistical and artificial intelligence techniques. Following that, an exploratory analysis on the dataset is conducted and results are reported and discussed in Section~\ref{sec-sa}. Next, Section~\ref{sec-pa} reports results and findings from the predictability study. The current state of the art is covered in Section~\ref{sec-stateArt}. Finally, Section~\ref{sec-con} concludes the paper. \section{Bitcoin Transaction Handling: Workflow and Dataset} \label{work} \subsection{Workflow} Bitcoin is a distributed ledger platform that enables information about transactions to be kept distributed rather than centralized; the ledger is the Bitcoin blockchain that records the transactions. In Bitcoin, all full nodes, also called miners, take part in creating and validating/invalidating transaction blocks and propagating such information, independently~\cite{Nakamoto}. Specifically, the users generate transactions to be processed, and the distributed ledger components, i.e. the full nodes or miners, work together to generate and validate transaction blocks and add them to the blockchain. Fig.~\ref{intro} illustrates the workflow of transaction handling in Bitcoin, which includes transaction arrival, block formation, propagation and validation. Briefly, after transactions are generated by the users, they are sent to all full nodes for validation. At a full node, upon the arrival of a transaction, the node stores the transaction in its memory pool, called mempool in Bitcoin, where it waits for confirmation. \begin{figure}[htb!] \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{intro.pdf} \caption{An illustration of the workflow of Bitcoin} \label{intro} \end{figure} In addition, a full node may choose unconfirmed transactions in the backlog to pack into a new transaction block, and perform mining to solve the mathematical puzzle given by the Bitcoin protocol, thereby gaining the right to add the block to the ledger. If the puzzle is solved successfully, this newly generated block is added to the blockchain, and this information is sent to all the nodes. At each node, the validity of the newly generated block is checked. If the validity is confirmed with consensus, the updated blockchain is accepted and the transactions in the new block are validated. Such validated transactions are removed from the mempool at each full node, which then repeats the above process. Note that, while the above description is brief, the essence of the workflow is kept. For more details about how Bitcoin works, the original introduction~\cite{Nakamoto} is the best source.
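To make the cycle concrete, the following toy Python sketch mimics the per-node loop in Fig.~\ref{intro}: transactions accumulate in the mempool, a miner packs the highest-fee ones into a candidate block, and confirmed transactions leave the backlog. This is a deliberately simplified illustration with made-up numbers, not a faithful protocol implementation.

\begin{verbatim}
import heapq, random

mempool = []                              # fee-priority queue (max-heap via negation)
for tx_id in range(10000):                # transactions arriving from users
    heapq.heappush(mempool, (-random.random(), tx_id))

BLOCK_TX_LIMIT = 2500                     # rough per-block capacity (illustrative)
block = [heapq.heappop(mempool)[1] for _ in range(BLOCK_TX_LIMIT)]
# once the block is mined and accepted by consensus, its transactions are
# confirmed and removed from every node's mempool
print(f"confirmed {len(block)} transactions, {len(mempool)} remain backlogged")
\end{verbatim}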
\subsection{Dataset} \label{sec-dat} To analyze transaction handling in Bitcoin, we implemented a server installation of a full Bitcoin node to collect related information. The information has two parts. One part records information from the ledger that is globally available, called the {\em global information part}. Another part records locally available information about the backlog status of the mempool. This part is called the {\em local information part}. More specifically, the global information part includes, for each block $i$ on the blockchain, the number of transactions ($n_i$) in the block, its miner ($m_i$), the size of the block in bytes ($s_i$), the timestamp or generation time of the block ($T_i$), and the average per-transaction fee of the block ($f_i$). The local information part records the mempool's status ($ms_i$) in terms of size and fee of backlogged transactions in the mempool when each block $i$ is received at our full node. In total, the dataset consists of information related to 80,408 Bitcoin blocks with more than two hundred million (203,432,240) transactions for a period of 543 days from 7th March 2019 to 31st August 2020. \section{The Analysis Approach}\label{sec-met} The dataset is essentially a composition of time series. We hence employ time series analysis on the dataset to provide insights and/or gain findings about transaction handling in Bitcoin. In the rest of the paper, the following time series are used: $y=[y_1,y_2, \ldots, y_M]$, $x=[x_1,x_2, \ldots, x_M]$, $c=[c_1,c_2, \ldots, c_M]$, and $D= \{\{y_1, x_1, c_1\}, \{y_2, x_2, c_2\}, \ldots, \{y_M, x_M, c_M\}\}$, defined with: \begin{equation} \begin{split} y_i= \{ s_i, n_i\} \\ x_i=\{Td_i, f_i, ms_i\}\\ c_i= \{ m_i\} \end{split} \end{equation} where $Td_i \equiv T_i - T_{i-1}$ denotes the inter-block time, $s_i, n_i, f_i, ms_i$ and $m_i$ are defined in the previous section, and $M=80408$ is the total number of blocks in the dataset. Our analysis consists of two parts. In the first part, i.e., Section~\ref{sec-sa}, the focus is on revealing fundamental characteristics and/or basic statistical properties of transaction-handling-related time series, using exploratory data analysis techniques such as histograms, scatter plots and curve fitting. In the second part of the analysis, i.e. Section~\ref{sec-pa}, the focus is on investigating if/how Bitcoin transaction handling may be predicted. To this aim, both classical and modern time series forecasting approaches are considered for prediction of various transaction related attributes. In addition, a decision tree based classification approach is adopted for miner inference. The following subsections give an introduction to these approaches. \subsection{Autoregressive models for forecasting} For time series forecasting, a large number of approaches are available, including both classical ones and modern artificial intelligence (AI) based approaches \cite{faloutsos2019classical}. For the former, we tested various autoregressive (AR) models. Due to their generally better performance, this paper focuses on ARIMA (AutoRegressive Integrated Moving Average) and ARIMAX (AutoRegressive Integrated Moving Average with Exogenous input).
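As a concrete illustration, the sketch below fits both models to a per-block target series using Python's statsmodels; the file name, column names and model orders are hypothetical placeholders, and the MAE/RMSE metrics introduced below are computed on a held-out tail of the series.

\begin{verbatim}
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("blocks.csv")                  # one row per block i (hypothetical file)
y = df["n_tx"]                                  # target, e.g. n_i ("size" for s_i)
X = df[["inter_block_time", "avg_fee", "mempool_size"]]  # exogenous inputs x_i

split = int(0.8 * len(df))                      # train on the first 80% of blocks
arima = SARIMAX(y[:split], order=(2, 1, 2)).fit(disp=False)               # ARIMA
arimax = SARIMAX(y[:split], exog=X[:split], order=(2, 1, 2)).fit(disp=False)  # ARIMAX

for name, m, ex in [("ARIMA", arima, None), ("ARIMAX", arimax, X[split:])]:
    pred = m.forecast(steps=len(y) - split, exog=ex)
    e = y[split:].to_numpy() - pred.to_numpy()
    print(name, "MAE =", np.abs(e).mean(), "RMSE =", np.sqrt((e**2).mean()))
\end{verbatim}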
Equations (\ref{ARIMA}) and (\ref{ARIMAx}) define these models respectively, where $B$ is the backshift operator and $\nabla$ the difference operator. \begin{align} \label{ARIMA} \begin{split} {y}^+_{i} = \phi_{1}{y}_{i-1} + \cdots + \phi_{p}{y}_{i-p} + \theta_{1}\varepsilon_{i-1} + \cdots + \\ \theta_{q}\varepsilon_{i-q} + \varepsilon_{i}, \\ \Phi(B)\nabla^{d}{y}^+_{i}=\Theta(B)\varepsilon_{i}, \end{split} \end{align} \begin{align} \label{ARIMAx} \begin{split} ({y}^+_{i}|T_i=t)= \phi_{1}\{{x}_{i-1}, {y}_{i-1}\} + \cdots + \phi_{p}\{{x}_{i-p}, {y}_{i-p}\} \\ + \theta_{1}\varepsilon_{i-1, t_{i-1}} + \cdots + \theta_{q}\varepsilon_{i-q, t_{i-q}} + \varepsilon_{i, t_i}, \\ \Phi(B)\nabla^{d}({y}^+_{i}|T_i=t)=\beta x_i + \Theta(B)\varepsilon_{i,t_i}, \end{split} \end{align} where $({y}^+_{i}|T_i=t)$ (or ${y}^+_{i}$) is the predicted block, $E(\varepsilon_{i,t_i})=0$, Var($\varepsilon_{i,t_i})$ = $\sigma^2,$ $\nabla^d=(1-B)^d$ is the difference factor, $\nabla ^{d}({y}^+_{i}|T_i=t)$ is the sequence $y_i$ differenced $d$ times, $\Phi(B)=1-\phi_1B-\cdots-\phi_pB^p$ is the autoregressive coefficient polynomial, and $\Theta(B)=1-\theta_1B-\cdots-\theta_qB^q$ is the moving average coefficient polynomial of the stationary and invertible autoregressive moving average model ARMA$(p, q)$. To assess the forecasting performance, we use mean absolute error (MAE) and root mean square error (RMSE), which are respectively defined as: with $e_i = y_i-y_i^+$, \begin{equation} \begin{split} MAE=\frac{\sum_{i=1}^{N} |e_i|}{N} \\ RMSE=\sqrt{\frac{\sum_{i=1}^{N} e_i^2}{N}} \end{split} \end{equation} where $N$ denotes the number of predicted data points. \subsection{AI-based forecasting models} For AI-based models, NAR (nonlinear autoregressive neural network) and NARX (nonlinear autoregressive network with exogenous inputs) are chosen because they have feedback connections enclosing several layers of the network, which provide memory of the time series's past values and help achieve better performance~\cite{Nonlinear}~\cite{NarxModel}. Additionally, the models have nonlinear filtering that helps to capture the dynamic fluctuations of the input values. Equations (\ref{nareq}) and (\ref{narxeq}) describe the NAR and NARX networks' functions for predicting a particular value $y^+_i$ of the data series using $p$ previous values of $y$ and $x$. \begin{align} \label{nareq} y^+_{i}=f_{\rm{NAR}}(y_{i-1},y_{i-2}, \ldots,y_{i-p}) \end{align} \begin{equation}\label{narxeq} \begin{split} (y^+_{i}|T_i=t)=f_{\rm{NARX}}(\{x_{i-1},y_{i-1}\},\{x_{i-2},y_{i-2}\}, \\ \ldots,\{x_{i-p},y_{i-p}\}) \end{split} \end{equation} The functions $f_{\rm{NAR}}$ and $f_{\rm{NARX}}$ in (\ref{nareq}) and (\ref{narxeq}) are unknown, and the neural network training approximates them by optimizing the network weights and neuron biases. The NAR and NARX models can be trained with the Levenberg--Marquardt, Bayesian regularization, and scaled conjugate gradient training algorithms \cite{neuralNStateofart}. Specifically, Bayesian regularization (BR) is used to conduct the analysis. BR minimizes a combination of squared errors and weights, and then determines the correct combination to produce a network that generalizes well. It uses the Levenberg--Marquardt training function to optimize the network weights and neuron biases; Levenberg--Marquardt is a popular numerical method for minimizing a nonlinear function over the parameter space.
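The experiments in this paper use MATLAB's neural network toolbox; as a rough open-source counterpart, the sketch below trains a NAR-style one-step-ahead predictor with scikit-learn on lagged block values. The column name and hyperparameters are illustrative, and the L2 penalty merely stands in for Bayesian regularization, which scikit-learn does not provide.

\begin{verbatim}
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

y = pd.read_csv("blocks.csv")["n_tx"].to_numpy()   # target series (hypothetical column)

def lagged(series, p):
    # stack p past values as the feature vector for one-step-ahead prediction
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

p = 8                                              # input delay, read off the (P)ACF plots
X, t = lagged(y, p)
split = int(0.8 * len(t))

# ten hidden neurons as in the pre-analysis; alpha is an L2 stand-in for BR
nar = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=2000,
                   random_state=0).fit(X[:split], t[:split])
print(nar.predict(X[split:])[:5])                  # NAR-style forecasts
# a NARX variant appends the exogenous columns x_i to each lagged feature row
\end{verbatim}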
The following explains the input and output of the neural network model we use. \begin{itemize} \item Input: block values in the form of a lag vector, whose length indicates the number of previous values of the block time series that are used. The models without external input take a vector of past values of $y_i = \{n_i, s_i\}$ when predicting the next block's content, either $n_i$ or $s_i$. Similarly, the models with external input additionally take $\{x_i\}$ as an input when the model is used to predict the subsequent blocks. \item Hidden layer: for NAR and NARX, the number of hidden neurons is determined by performing a pre-analysis using the collected dataset. Based on this analysis, the models reach a satisfactory mean square error (MSE) when the number of neurons equals ten. \item Input delays: the delays $p$ and $q$ are approximated by using the autocorrelation ($p$) and partial-autocorrelation ($q$) plots. \item Output: the predicted blocks $({y}^+_{i}|T_i=t)$ (or ${y}^+_{i}$) contain the predicted values $\{n_{i}, s_{i}\}$ of the blocks for the weekend, working-day, and combined datasets. \end{itemize} \subsection{Decision tree based classification} Since 2010, more than 23 mining pools have been operating worldwide, as reported in Fig.~\ref{minersList}. It has been illustrated that mining pools compete to solve the mathematical puzzle and that the mining behavior is a game \cite{bitcoinGame}\cite{bitcoinGameCorr}. In this paper, we investigate if the mining pools are detectable using a machine learning, decision tree based approach \cite{trees}\cite{Dtree}\cite{DctreeAnal}. A decision tree has a tree structure: each internal node tests a feature, each branch represents an outcome of the test, and each leaf node represents a class label. In some cases, it is essential to combine several decision trees to produce a better classification performance. Such a combination produces an ensemble. In the present work, we considered two ensemble methods, Boosted and RUSBoosted trees \cite{matlab}. The accuracy, area under the curve (AUC), sensitivity, and miss rate are used to assess the classification performance, in addition to the false-negative rate (FNR), the true-positive rate (TPR), and the receiver operating characteristic (ROC) curve of the true-positive rate versus the false-positive rate, as commonly used for machine learning based classification \cite{perfMetrics}. \begin{figure*}[th!] \centering \subfigure[Empirical CDF of $s_i$] { \includegraphics[width=0.3\linewidth,height=0.27\linewidth]{BsizeCdf.pdf} \label{bsize} } \subfigure[Empirical CDF of $n_i$] { \includegraphics[width=0.3\linewidth, height=0.27\linewidth]{NutCDF.pdf} \label{nut} } \subfigure[Empirical CDF of $f_i$] { \includegraphics[width=0.3\linewidth, height=0.27\linewidth]{AveragefeeCdf.pdf} \label{averageFee} } \caption{CDFs of basic block attributes} \end{figure*} \section{Results: Exploratory Analysis}\label{sec-sa} This section reports results and observations from an exploratory analysis of the collected data. \subsection{Basic block attributes} The size $s_i$, the number of transactions $n_i$ and the fee $f_i$ are fundamental attributes of a block. Since transaction activity is a time-varying process~\cite{TransBitcoin}, these attributes may have distributions that vary over time; e.g., the weekend and working-day demands may follow different distributions. Fig.~\ref{bsize} reports that in most cases (80\%), $s_i$ is below 1.4 MB on working days, whereas it is below 1.2 MB during weekends. In both cases, $s_i$ exceeds 1.5 MB in about 1\% of the cases. About 30\% of the blocks have a size less than the default legacy size of 1 MB on weekend days; on working days, however, less than 20\% of the blocks have a size less than 1 MB.
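As an illustration, the following minimal pandas/NumPy sketch shows how such weekend-versus-working-day statistics can be computed from the per-block records; the column names and placeholder data are our own assumptions.
\begin{verbatim}
import numpy as np
import pandas as pd

# blocks: one row per block; 'timestamp' and 'size_mb' are assumed columns
rng = np.random.default_rng(1)
blocks = pd.DataFrame({
    "timestamp": pd.date_range("2019-03-07", periods=1000, freq="100min"),
    "size_mb": rng.gamma(5.0, 0.22, size=1000),  # placeholder data
})
blocks["weekend"] = blocks["timestamp"].dt.dayofweek >= 5

def ecdf(sample):
    # Empirical CDF: sorted values and cumulative probabilities.
    v = np.sort(sample)
    return v, np.arange(1, len(v) + 1) / len(v)

for is_weekend, grp in blocks.groupby("weekend"):
    v, p = ecdf(grp["size_mb"].to_numpy())
    q80 = np.interp(0.8, p, v)  # size below which 80% of the blocks fall
    print("weekend" if is_weekend else "working", f"80th pct: {q80:.2f} MB")
\end{verbatim}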
Similarly, Fig.~\ref{nut} illustrates that $n_i$ varies as $s_i$ does: 50\% of the blocks have less than 2200 transactions per block on weekend days, versus 2500 transactions per block on working days. On working days, only 20\% of the blocks have $n_i$ less than 2100 transactions, whereas on weekends, 40\% of the generated blocks have $n_i$ less than 2200. In addition, the miner's economic incentives affect which transactions are included in a block, and this financial interest may also show some differences over time. Fig.~\ref{averageFee} reports that for 50\% of the blocks, $f_i$ is smaller than $1.3\times10^{-4}$ BTC on weekend days, while on working days, $f_i$ is less than $1.43\times10^{-4}$ BTC. In both cases, 80\% of the $f_i$ values are smaller than 0.00033 BTC, and in less than one percent of the cases, $f_i$ grows beyond 0.0004 BTC. \begin{table*}[ht] \centering \caption{Block-related attribute statistics of the major mining pools} \begin{tabular}{|l|l|l|l|l|} \hline Mining pool & $\mu(s_i, n_i, f_i)$& $\sigma(s_i, n_i, f_i)$& min($s_i, n_i, f_i$) & max($s_i, n_i, f_i$)\\ \hline ?&(1.1252, $2.14*10^3$, $1.83*10^{-4}$) &(0.3657, 844.2627, $2.18*10^{-4}$) & ($2*10^{-4}$, 1, 0.00) & (2.4229, 4402, 0.0065)\\ \hline AntPool&(1.1141, $2.18*10^3$, $1.8*10^{-4}$) &(0.3622, 844.2076, $1.9*10^{-4}$) & ($3.34*10^{-4}$, 1, 0.00) & (2.2151, 4063, 0.0050) \\ \hline BTC.com& (1.0960, $2.15*10^3$, $1.86*10^{-4}$) &(0.3782, 868.4394, $2.487*10^{-4}$) & ($2.38*10^{-4}$, 1, 0.00) & (2.3056, 4243, 0.0121)\\ \hline F2Pool&(1.1099, $2.14*10^3$, $1.76*10^{-4}$) &(0.3680, 845.6503, $2.16*10^{-4}$) & ($2.66*10^{-4}$, 1, 0.00) & (2.3316, 4377, 0.0086)\\ \hline Poolin&(1.1091, $2.17*10^3$, $1.67*10^{-4}$) &(0.3635, 842.1800, $1.87*10^{-4}$) & ($2.17*10^{-4}$, 1, 0.00) & (2.3165, 3988, 0.0038)\\ \hline \end{tabular} \label{majordistribution} \end{table*} \subsection{Miners} \begin{figure}[th!] \centering \includegraphics[width=0.9\linewidth,height=0.53\linewidth]{MinersList.pdf} \vspace{-5pt} \caption{Miners} \label{minersList} \end{figure} \begin{figure}[th!] \centering \includegraphics[width=0.9\linewidth,height=0.5\linewidth]{FeatureRelation.pdf} \vspace{-5pt} \caption{$f_i$ vs $s_i$} \label{fsize} \end{figure} \begin{figure}[th!] \centering \includegraphics[width=0.9\linewidth,height=0.6\linewidth]{RelationNut.pdf} \vspace{-5pt} \caption{$n_i$ vs $s_i$} \label{nsize} \end{figure} Fig.~\ref{minersList} reports the miners' contributions in terms of the number of valid blocks in the main chain. As we can observe from the figure, unknown(?), F2Pool, BTC.com, Poolin, and AntPool contribute the highest numbers of blocks. Combined, these five major mining pools generate around 50\% of the valid blocks. Driven by financial interest, a mining pool might use a strategy to increase its gain \cite{majority}. To explore this, we analyze the blocks generated by the major mining pools. Figs.~\ref{fsize} and \ref{nsize} show that when the size $s_i$ is greater than 1.5 MB, some of the major mining pools become more recognizable. However, when $s_i$ is less than 1 MB, it is challenging to see any difference between the pools. When $s_i$ is between 1 and 1.5 MB, we can see a high concentration of all the mining pools. The figures also report that as $s_i$ increases, $n_i$ and $f_i$ rise together. \begin{figure}[ht!]
\centering \includegraphics[width=0.8\linewidth,height=0.4\linewidth]{MinersNew.pdf} \caption{Miners contribution} \vspace{-5pt} \label{MinerContr} \end{figure} To further investigate the number of block contributions on working and weekend days, we focus on the five major miners. Fig.~\ref{MinerContr} illustrates that these miners contribute similar numbers of blocks on working days, except for the unknown(?) pool. The same observation also holds for weekend days. The unknown(?) pool generates a higher number of blocks in all cases. To gain deeper insight into the block contents beyond the number of blocks, Table~\ref{majordistribution} is presented, where the mean $\mu$, standard deviation $\sigma$, minimum and maximum values of the basic block attributes ($s_i, n_i, f_i$) are shown. Note that these major mining pools became operational starting in 2016, except for Poolin, which started in 2018 \cite{miningview}. Even though there is a gap in years between Poolin and the rest, Table~\ref{majordistribution} shows that Poolin, F2Pool and BTC.com generate blocks with similar average sizes, standard deviations and maximum values. However, the unknown (?) pool and AntPool generate blocks with sizes greater than those of the other three. The unknown (?) pool has a mean block size close to 1.125 MB, and the maximum block size is also found in this mining pool. Additionally, comparing the maximum values of $f_i$ and $n_i$, the public mining pool Poolin has the smallest values among the five mining pools. \subsection{Block generation} \subsubsection{Distribution of inter-block generation time} Based on the Bitcoin design \cite{Nakamoto}, the inter-block generation time has been expected to follow an exponential distribution, and this validity has also been checked~\cite{TransBitcoin}. Along the same line, Fig.~\ref{inter} reports the fitting of the inter-block generation time to an exponential distribution. Additionally, to check the independence of block generation times, the autocorrelation plot is illustrated in Fig.~\ref{auto}. As can be seen from Fig.~\ref{inter} and Fig.~\ref{auto}, the inter-block generation time fits an exponential distribution well, with an increasing mismatch at the tail, partly due to the limited number of blocks in the dataset. Moreover, the autocorrelation is close to zero at all lags in the figure, with the most significant deviation only around 1\%, indicating that block generation is little correlated. \begin{figure}[t!] \centering \subfigure[Inter-block generation] { \includegraphics[width=0.45\linewidth,height=0.23\linewidth]{interGeneration.pdf} \label{inter} } \subfigure[Autocorrelation plot] { \includegraphics[width=0.45\linewidth, height=0.23\linewidth]{autoInter.pdf} \label{auto} } \caption{Fitting of inter-block generation time to a negative exponential distribution (n.e.d.)} \end{figure} \begin{figure}[th!] \centering \subfigure[100 minutes; $\lambda=9.44707$] { \includegraphics[width=0.43\linewidth,height=0.3\linewidth]{Intensity100.pdf} \label{inte100} } \subfigure[1000 minutes; $\lambda=103.184$] { \includegraphics[width=0.47\linewidth, height=0.3\linewidth]{Intensity1000.pdf} \label{inter1000} } \caption{Block generation histogram fitting to a Poisson distribution with intensity $\lambda$ under different time slot lengths} \end{figure} \subsubsection{Fitting to a Poisson process} Since the block generation process has exponentially distributed inter-generation times, we investigate if it can also be further treated as a Poisson process.
For this, we make histograms of the number of blocks generated in time slots of different lengths and fit them with Poisson distributions. If the process is Poisson, these Poisson distributions must have the same intensity after rescaling by the slot length. For this investigation, Fig.~\ref{inte100} and Fig.~\ref{inter1000} are presented, where the best fitting intensity of the Poisson distribution is shown under two time slot lengths, 100 and 1000 minutes. Clearly, the two obtained intensities differ noticeably, even after taking into consideration the 10x scaling difference. This observation, which surprised us, implies that block generation can at best be approximated by, but cannot strictly be treated as, a Poisson process. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth,height=0.5\linewidth]{InterBlockMiner.pdf} \caption{Major mining pools' inter-block generation time} \label{InterBlock} \end{figure} \subsubsection{Relation with miners} To have a closer look at block generation, we made a further investigation of the five major mining pools. Fig.~\ref{InterBlock} reports the inter-block generation times of the major mining pools. As the figure shows, the average inter-block generation time is almost the same among the major mining pools. However, for the median, there is a visible difference: while for Unknown(?) and F2Pool the median time is close to 52 minutes, for BTC.com and Poolin it is near 45 minutes, and for AntPool it is close to half an hour. The minimum inter-block generation time is the same for all major mining pools, close to zero. However, for the maximum inter-block generation time, while AntPool and Unknown(?) need 14 hours and 30 minutes, BTC.com demands 16 hours. In addition, the public mining pool Poolin requires 12 hours, and, shorter than all the others, F2Pool needs only 10 hours. As a highlight of Fig.~\ref{InterBlock}, F2Pool clearly stands out from the others with the shortest tail. \subsubsection{Relation with basic block attributes} We further explored the relationship between block generation and the three basic block attributes, shown in Figs.~\ref{Interbsize}, \ref{Internut}, and \ref{Interfee}. Specifically, Fig.~\ref{Interbsize} illustrates that when the block size $s_i$ is greater than 1.5 MB, the inter-block generation time seen by the blocks is less than two hours. However, when the block size is concentrated between 1 and 1.5 MB, blocks from Unknown(?), AntPool, and BTC.com can have inter-block generation times greater than 13 hours. On the other hand, the blocks from Poolin and F2Pool seem to be generated at shorter intervals than those of the other three pools, which is also indicated by Fig.~\ref{InterBlock}. \begin{figure}[th!] \centering \subfigure[Inter-block generations vs $s_i$] { \includegraphics[width=0.45\linewidth,height=0.45\linewidth]{InterBlocksize.pdf} \label{Interbsize} } \subfigure[Inter-block generations vs $n_i$] { \includegraphics[width=0.45\linewidth, height=0.45\linewidth]{InterNut.pdf} \label{Internut} } \subfigure[Inter-block generations vs $f_i$] { \includegraphics[width=0.45\linewidth, height=0.45\linewidth]{InterFee.pdf} \label{Interfee} } \caption{Inter-block generation time vs.\ block size, transaction number and fee} \end{figure} In addition, Fig.~\ref{Internut} demonstrates that the number of transactions $n_i$ in a block of Poolin is on average smaller than in the other mining pools. Most of the blocks from F2Pool seem to have a shorter inter-block generation time. However, it is hard to say for the Unknown(?)
and AntPool pools, because the plot shows that most of their blocks seem to have larger inter-block generation times. These effects may arise from the state of the mempool: when the mempool contains more transactions, the miners can pick a larger number of transactions to include in a block. Furthermore, it is natural that the miners prioritize financial incentives, which encourages them to pick up transactions with a higher fee. Fig.~\ref{Interfee} illustrates this fact. Specifically, when the fee $f_i$ is higher, the inter-block generation time of the block is lower, maybe even shorter than an hour. The figure also shows that the blocks with a smaller average fee from Unknown(?), AntPool, and BTC.com may experience inter-block generation times greater than 14 hours. On the other hand, the blocks from Poolin seem to have a lower average fee and seemingly smaller inter-block generation times. \subsection{Transaction arrival and confirmation time} Users generate transactions for validation. New arrivals stay in the backlog (memory pool) until the nonce finding is successful and they are picked up by the miner. \begin{figure}[tb!] \centering \subfigure[Transaction inter-arrival time] { \includegraphics[width=0.45\linewidth,height=0.23\linewidth]{exportArrival.pdf} \label{transconinterarrival} } \subfigure[Autocorrelation plot] { \includegraphics[width=0.45\linewidth, height=0.23\linewidth]{ArrivalAuto.pdf} \label{autoarrival} } \caption{Transaction inter-arrival time fitting to an n.e.d.} \end{figure} \begin{figure}[th!] \centering \subfigure[Transaction confirmation time] { \includegraphics[width=0.45\linewidth,height=0.23\linewidth]{transactionWaiting.pdf} \label{transconfirmfit} } \subfigure[Autocorrelation plot] { \includegraphics[width=0.45\linewidth, height=0.23\linewidth]{autoWait.pdf} \label{autocorr} } \caption{Transaction confirmation time fitting to an n.e.d.} \end{figure} \subsubsection{Transaction inter-arrival time} Fig.~\ref{transconinterarrival} shows that the fitting of transaction inter-arrival times to a negative exponential distribution is only reasonably good, with visible deviations. Additionally, Fig.~\ref{autoarrival} reports that the inter-arrival times between transactions are correlated. These observations reflect that there exists some level of dependence between transaction arrivals. \subsubsection{Transaction confirmation time} Fig.~\ref{transconfirmfit} reports the fitting of the transaction confirmation time to a negative exponential distribution, with a sharp drop at the tail. Additionally, Fig.~\ref{autocorr} illustrates that the transaction confirmation times are uncorrelated, reflecting that they are largely independent. Since a miner tends to choose transactions with a higher fee, Fig.~\ref{waiting} is presented to demonstrate this effect on the confirmation time. Specifically, it demonstrates the relationship between confirmation time and fee for Q1 (25\%), Q2 (50\%), Q3 (75\%), and greater than Q3, i.e., Q4, of $f_b$. Their intervals are respectively (0,Q1), (Q1,Q2), (Q2,Q3), and (Q3,$\infty$). As Fig.~\ref{waiting} shows, low fee transactions exhibit a higher confirmation time. On average, the low fee transactions (Q1) wait 22 minutes for validation. However, for higher fee (Q4) transactions, the average confirmation time is less than half that of the low fee transactions. For Q2 and Q3, the transactions exhibit close to a ten-minute average confirmation time. Still, transactions from Q2, on average, wait one minute more than those from Q3.
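This quartile analysis can be reproduced with a few lines of pandas (a sketch of ours; the per-transaction column names and the placeholder data are assumptions):
\begin{verbatim}
import numpy as np
import pandas as pd

# tx: one row per transaction; 'fee_btc' and 'confirm_min' are assumed columns
rng = np.random.default_rng(2)
tx = pd.DataFrame({
    "fee_btc": rng.gamma(2.0, 1e-4, size=100_000),       # placeholder data
    "confirm_min": rng.exponential(13.0, size=100_000),
})

# Split fees into quartile bins Q1..Q4 and compare confirmation times
tx["fee_bin"] = pd.qcut(tx["fee_btc"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(tx.groupby("fee_bin", observed=True)["confirm_min"]
        .agg(["mean", "median"]))
\end{verbatim}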
Overall, transactions wait 13 minutes on average, and we also observed a few transactions waiting for more than 24 hours in the backlog. At the same time, these few transactions also tend to have relatively very small associated fees. \begin{figure}[t!] \subfigure[Transaction fee effect] { \includegraphics[width=0.45\linewidth, height=0.3\linewidth]{waiting.pdf} \label{waiting} } \subfigure[log plot] { \includegraphics[width=0.45\linewidth, height=0.3\linewidth]{FeeEffect.pdf} \label{feeeffec} } \caption{Transaction fee effect on transaction confirmation time} \end{figure} \section{Results: Predictability Analysis}\label{sec-pa} Having explored the various characteristics of transaction handling in the previous section, this section is devoted to investigating whether and to what extent such characteristics can be predicted. For this predictability analysis, the prediction approaches introduced in Section~\ref{sec-met} are used. The results are reported and discussed in the rest of this section, where the dataset is divided into three parts, i.e., training, test and validation; the details of this division are reported in Table~\ref{database}. \begin{table}[t!] \centering \caption{Division of the dataset} \begin{tabular}{|p{16mm}|p{10mm}|p{10mm}|p{10mm}|p{16mm}|} \hline Dataset & Training & Test & Validation & No. of blocks \\ \hline Working\_day & 40095 & 8591 & 8591 & 57277\\ \hline Weekend\_day & 16190 & 3469 & 3469 & 23128\\ \hline All\_db & 56286 & 12061 & 12061 & 80408\\ \hline \end{tabular} \label{database} \end{table} \subsection{Basic block attributes} Table~\ref{sourcecomparison} compares the performance of the various models in predicting the target block attributes, the size $s_i$ and the number $n_i$, where, as a benchmark, the basic autoregressive (AR) model is also included. For these models, the symbol $p$ is the order of the autoregressive part, $d$ is the number of nonseasonal differences needed for stationarity, and $q$ is the order of the moving average part. In this investigation, the values $p=2$ and $q=2$ are calculated from the autocorrelation and partial-autocorrelation plots, and we set $d=0$. MAE and RMSE are used to compare the models' performance. In addition, to give a more direct impression, we illustrate the prediction results of the models for ten randomly chosen consecutive weekend blocks, as an example, in Fig.~\ref{weekendbsize} and Fig.~\ref{weekendnut}. Table~\ref{sourcecomparison}, Fig.~\ref{weekendbsize} and Fig.~\ref{weekendnut} indicate that the prediction results of the considered forecasting approaches all follow the actual trend well. However, the models that additionally make use of the locally available information $x$, which are ARIMAX and NARX, generally produce better results than their counterpart models ARIMA and NAR that do not have exogenous input. In addition, the AI-based models perform better than the classical autoregressive models under the same conditions. Overall, NARX's performance is the best, which is an encouraging finding for applying AI-based approaches to predicting the basic block attributes' values. {\bf Remark:} The alert reader may have noticed that among the three basic block attributes investigated in the exploratory study, we have left the fee $f_i$ out of the predictability study. This is simply because a large related literature exists, which will be discussed in the related work section, and the results therein show that the price can be excellently predicted.
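For reference, the following is a minimal Python sketch of the ARIMA/ARIMAX forecasting setup and the error metrics defined above, based on \texttt{statsmodels} (a stand-in for our actual implementation; the placeholder series is ours). The same \texttt{SARIMAX} class fits ARIMA and, when an exogenous matrix is supplied, ARIMAX.
\begin{verbatim}
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# y: target series (e.g. n_i); X: exogenous inputs (e.g. Td_i, f_i, ms_i)
rng = np.random.default_rng(3)
y = rng.poisson(2200, size=500).astype(float)   # placeholder series
X = rng.normal(size=(500, 3))

split = 400
arima = SARIMAX(y[:split], order=(2, 0, 2)).fit(disp=False)
arimax = SARIMAX(y[:split], exog=X[:split], order=(2, 0, 2)).fit(disp=False)

pred = arimax.forecast(steps=len(y) - split, exog=X[split:])
e = y[split:] - pred
print("MAE :", np.mean(np.abs(e)))
print("RMSE:", np.sqrt(np.mean(e ** 2)))
\end{verbatim}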
\begin{table*}[ht] \caption{Forecasting performance of basic block attributes} \centering \begin{tabular}{|l||l|l|l|l|l|l|} \hline \multirow{2}{*}{Models} & \multicolumn{3}{c|}{MAE} & \multicolumn{3}{c|}{RMSE}\\ & Weekend($s_i$,$n_i$) & Working($s_i$,$n_i$) & All($s_i$,$n_i$) & Weekend($s_i$,$n_i$) & Working($s_i$,$n_i$) & All($s_i$,$n_i$) \\ \hline AR(p) & {0.53, 264} & {0.6, 117.35} & {0.5, 127.12} & {0.5, 122.14}& {0.5, 141.91} & {0.3, 264} \\ \hline ARIMA(p,d,q) & {0.15, 15.373} & {0.077, 12.840} & {0.13, 12.969} & {0.04, 12.461} & {0.01, 10.833} & {0.025, 10.942}\\ \hline ARIMAX(p,d,q) & {0.12, 13.364} & {0.07, 12.092} & {0.06, 11.735} & {0.02, 11.052} & {0.006, 10.408} & {0.006, 10.408} \\ \hline NAR(p) & {0.01, 14.770} & {0.06, 12.969} & {0.06, 12.840} & {0.03, 12.214} & {0.008, 11.275} & {0.008, 10.942} \\ \hline NARX(p) & {0.011, 10.942}& {0.06, 10.471} & {0.013, 10.460} & {0.01, 10.121} & {0.006, 10.035} & {0.0003, 10.030} \\ \hline \end{tabular} \label{sourcecomparison} \end{table*} \begin{figure}[ht] \centering \subfigure[measured vs predicted $s_i$ ]{ \includegraphics[width=0.9\linewidth,height=0.5\linewidth]{BsizeWeekend.pdf} \label{weekendbsize} } \subfigure[measured vs predicted $n_i$ ]{ \includegraphics[width=0.9\linewidth,height=0.5\linewidth]{NutWeekend.pdf} \label{weekendnut} } \caption{Sample prediction results} \label{weekend} \end{figure} \subsection{Block generation and transaction confirmation time} Encouraged by the prediction results for the basic block attributes, we used the NARX model to test if block generation and transaction confirmation time can also be predicted. For predicting block generation, we used $T_i$ as the input and $x=\{f_i, n_i, s_i, ms_i\}$ as the external input. Fig.~\ref{performanceTime} reports the model's performance. For predicting transaction confirmation, we used the transaction confirmation times as the input, while the sizes of the transactions and the associated fees are used as external input. Fig.~\ref{conf} exemplifies the model's performance at a number of random points. As indicated by Fig.~\ref{performanceTime} and Fig.~\ref{conf}, the prediction does not work. While this observation seems contradictory to the observations made when predicting $s_i$ and $n_i$, a closer look at the characteristics of the block generation time and the transaction confirmation time provides an explanation. As reported in the exploratory analysis in Section~\ref{sec-sa}, both the inter-block generation time and the transaction confirmation time follow, or can be closely approximated by, an exponential distribution. Then, because of the memoryless property of the exponential distribution, the likelihood of something happening in the future has little relation to whether it has happened in the past. Implied by this, and as also confirmed by Fig.~\ref{performanceTime} and Fig.~\ref{conf}, any effort at predicting these two transaction handling aspects may, ``surprisingly'', lead to no solid conclusion. \begin{figure}[th!] \centering \subfigure[Block generation time] { \includegraphics[width=0.45\linewidth, height=0.35\linewidth]{timePred.pdf} \label{performanceTime} } \subfigure[Transaction confirmation time] { \includegraphics[width=0.45\linewidth,height=0.35\linewidth]{Confirmation.pdf} \label{conf} } \caption{Block generation and transaction confirmation time sample prediction} \end{figure} We conduct further investigations on predicting the block generation intensity. In this case, for the AI-based models, we only used NAR because we do not have additional input for NARX.
To be in line with the counterpart exploratory investigation, we fixed time slot sizes of 100 and 1000 minutes and predicted the number of blocks within the slot, respectively. Fig.~\ref{per100} and Fig.~\ref{per1000} report the performance of both the classical autoregressive models and the AI-based NAR model. In general, the AR models follow the trend better than the NAR model. Nevertheless, all models struggle to perform better than the average. This, we believe, is largely attributable to the fact that the number of blocks in a time period is approximately, though not exactly, Poisson-distributed, as reported in Section~\ref{sec-sa}. \begin{figure}[th!] \centering \subfigure[Block generation intensity with fixed time slot of 100 minutes] { \includegraphics[width=0.45\linewidth, height=0.37\linewidth]{Intensi100.pdf} \label{per100} } \subfigure[Block generation intensity with fixed time slot of 1000 minutes] { \includegraphics[width=0.45\linewidth,height=0.37\linewidth]{Intensi1000.pdf} \label{per1000} } \caption{Block generation intensity sample prediction} \end{figure} \subsection{Miner classification} \begin{figure*}[th!] \subfigure[Weekend days] { \includegraphics[width=0.32\linewidth, height=0.35\linewidth]{WeekendMiner.pdf} \label{Weekendminer} } \subfigure[Working days] { \includegraphics[width=0.32\linewidth, height=0.35\linewidth]{WorkingMinerNew.pdf} \label{Workingminer} } \subfigure[All] { \includegraphics[width=0.32\linewidth,height=0.35\linewidth]{ALLminer.pdf} \label{confAll} } \caption{Confusion matrices of major miners (RUSBoosted decision tree)} \end{figure*} As we saw in the previous sections, $f_i$, $s_i$, $Td_i$, $n_i$, and $ms_i$ have a significant effect on the evolution of the Bitcoin ledger. Due to this, we use these features to test if they can help infer the miner of a block, and if some mining pools use specific strategies while generating a block. To study these questions, we consider two cases: first, working and weekend days separately, and second, all the data together. The feature set, including $f_i$, $s_i$, $n_i$, and $Td_i$, is used to perform the classification of mining pools ($c_i$); a minimal sketch of the pipeline is given below. As a remark, we have also tried other features of the mempool state $ms_i$ but observed that they do not bring a significant increase in accuracy. \subsubsection{Case-I (Working and Weekend day)} The top eight mining pools are used to detect the block generation behavior. Fig.~\ref{Weekendminer} and Fig.~\ref{Workingminer} report that the major mining pools have a higher true-positive rate (TPR) than the rest of the pools. As Fig.~\ref{Weekendminer} and Fig.~\ref{Workingminer} report, the better model, the RUSBoosted decision tree, shows a promising result, classifying F2Pool notably better than the other pools. As we can see from Fig.~\ref{Weekendminer} and Fig.~\ref{Workingminer}, the TPR for BTC.com, AntPool, and Poolin is smaller than 25\%, but for SlushPool and BTC.TOP it is greater than 25\%. Especially in the case of the public mining pool Poolin, the false-negative rate is five times higher than the TPR. This indicates that Poolin has a less detectable block generation strategy than the rest. SlushPool, however, has a block generation behaviour more distinguishable than those of the top five major mining pools.
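The following minimal Python sketch illustrates such a classification pipeline; scikit-learn's random forest with balanced class weights is a rough stand-in for the MATLAB Boosted/RUSBoosted ensembles actually used, and the feature frame with its placeholder data is our own construction.
\begin{verbatim}
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# blocks: per-block features (f_i, s_i, n_i, Td_i) and the miner label
rng = np.random.default_rng(4)
blocks = pd.DataFrame({
    "f": rng.gamma(2.0, 1e-4, 5000),
    "s": rng.gamma(5.0, 0.22, 5000),
    "n": rng.poisson(2200, 5000),
    "Td": rng.exponential(10.0, 5000),
    "miner": rng.choice(["F2Pool", "AntPool", "BTC.com", "Poolin", "?"], 5000),
})

X = blocks[["f", "s", "n", "Td"]]
y = blocks["miner"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# class_weight="balanced" plays a role similar to RUSBoost's undersampling
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(confusion_matrix(y_te, clf.predict(X_te), labels=clf.classes_))
\end{verbatim}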
\subsubsection{Case-II (All Data)} The previous case showed that F2Pool was classified reasonably well among the major mining pools. Fig.~\ref{Weekendminer} and Fig.~\ref{Workingminer} report confusion matrices illustrating that F2Pool and SlushPool have a higher true-positive rate than the rest of the mining pools. Additionally, Fig.~\ref{confAll} reports that for the two major mining pools SlushPool and F2Pool, the TPR is greater than 70\%, which for SlushPool is 40\% higher than in the first case. Similarly, the false-negative rate is less than 20\%; for F2Pool, it is even less than 3\%. To gain a better understanding, we performed a further investigation on only these two mining pools, F2Pool and SlushPool. The results are reported in Table~\ref{twominer}, Figs.~\ref{auc} and \ref{auc2}, and Fig.~\ref{two}. Table~\ref{twominer} compares the performance of the two decision tree methods. Due to the better accuracy of the RUSBoosted tree, it is used in Figs.~\ref{auc} and \ref{auc2}, and Fig.~\ref{two}. Specifically, the true-positive rate (TPR) and the false-negative rate (FNR) are shown in Fig.~\ref{two}, while Figs.~\ref{auc} and \ref{auc2} further illustrate the model accuracy in terms of AUC and ROC. \begin{table}[ht!] \centering \caption{Performance of classification between F2Pool and SlushPool} \begin{tabular}{|p{20mm}|p{10mm}|p{10mm}|p{12mm}|} \hline Models & Accuracy & Sensitivity & Miss rate\\ \hline RUSBoosted-tree & 0.90 & 0.885 & 0.115\\ \hline Boosted-tree & 0.883 & 0.881 &0.119\\ \hline \end{tabular} \label{twominer} \end{table} \begin{figure}[ht!] \centering \includegraphics[width=0.6\linewidth,height=0.33\linewidth]{two.pdf} \caption{F2Pool and SlushPool} \label{two} \end{figure} \begin{figure}[th!] \centering \subfigure[F2Pool AUC curve] { \includegraphics[width=0.45\linewidth, height=0.43\linewidth]{Aucf2pool.pdf} \label{auc} } \subfigure[SlushPool AUC curve] { \includegraphics[width=0.45\linewidth,height=0.43\linewidth]{SlushPool.pdf} \label{auc2} } \caption{AUC curves for F2Pool and SlushPool} \end{figure} \subsubsection{Discussion} Fig.~\ref{Weekendminer}, Fig.~\ref{Workingminer}, and Fig.~\ref{confAll} essentially show that, other than for a few mining pools, particularly F2Pool, the mining pools have a minimal positive classification rate, implying that they are hard to distinguish. This is in line with Fig.~\ref{InterBlock} in the exploratory analysis part, which shows that while the block generation distributions of the other miners are similar, that of F2Pool is visually distinguishable from the others. We believe this characteristic difference has been exploited by the decision tree approach in the classification. In addition, a closer investigation, as illustrated by Fig.~\ref{confAll} and Fig.~\ref{two}, implies that the two major mining pools F2Pool and SlushPool use different strategies that give their block generation special properties, making the classification more accurate. \section{Related Work} \label{sec-stateArt} \subsubsection{Statistical analysis of transaction handling characteristics} While a lot of such analysis results are available, e.g., various Bitcoin statistics~\cite{Btc}, block propagation delay~\cite{infoProp2013}, block arrival process \cite{blkArrival2020}, transaction rate and confirmation time~\cite{DiscreteBlockchain}~\cite{trasactionConfirmation}, we focus on fundamental aspects underlying transaction handling and particularly their distributions, differently from the literature. Through analyzing these distributions, we have been able to explain some seemingly surprising observations in the predictability study.
In addition, very few results in the literature take into account information that is only locally available. In this sense, the work \cite{TransBitcoin} is the most related. However, except for the inter-block generation time fitting, which is similar, as we already highlighted, the other results are not found in \cite{TransBitcoin}, due to the different focuses of \cite{TransBitcoin} and the present work. \subsubsection{Forecasting transaction handling characteristics} The focus of the literature has been on the Bitcoin price. For instance, Huisu Jang and Jaewook Lee \cite{Bayesian} developed a neural network-based forecast model for the volatility of the Bitcoin price and extended the analysis to identify the feature set that gives the most information about the Bitcoin price process. Similarly, Edwin Sin and Lipo Wang \cite{bitcoinPrice} implemented an artificial neural network to predict the next Bitcoin price and the amount of profit that could be gained by making such predictions. Shah et al. \cite{bayesianBit} considered the Bayesian regression method to predict the price of Bitcoin. Pavel Ciaian et al. \cite{economics} estimate Bitcoin price formation based on a linear model by introducing several factors such as market forces, attractiveness for investors, and global macro-financial factors. Greaves et al. \cite{greaves} analyzed the Bitcoin blockchain data to predict the price of Bitcoin using SVM and ANN, which scored 55\% accuracy. Similarly, models such as Random Forest, SVM, and Binomial Logistic algorithms are used to predict the short-term Bitcoin price and achieve a high accuracy of 97\% in \cite{madan}. To the best of our knowledge, no previous work combines the feature sets to predict the transaction handling characteristics focused on in this paper. \subsubsection{Mining pool classification} There have been some research works that studied block withholding and the unfair distribution of rewards. For instance, Schrijvers et al. \cite{incentive} analyzed the incentive compatibility of the Bitcoin reward mechanism. In their model, a miner can decide between honest mining and delaying the submission of her found blocks. They proved that the proportional mining reward mechanism is not incentive compatible. Eyal \cite{eyaldilemma} computed the pools' optimal strategies in the block withholding attack and their corresponding revenues. It was demonstrated that the no-pool-attack strategy is not a Nash equilibrium in these games, because if none of the pools runs the attack, one pool can increase its revenue by launching the attack. Luu et al. \cite{luu} experimentally demonstrated that block withholding can increase the attacker's revenue, although they do not address the question of mutual attacks. Courtois and Bahack \cite{courtois} noted that a pool can increase its overall revenue with block withholding if honest pools perform all other mining; the analysis of \cite{eyaldilemma} covers the general case where not all mining is performed through public pools and situations where pools can attack one another. M. Salimitari et al. \cite{Profit} used prospect theory to predict a miner's profit from joining one of the major mining pools. The hash rate power, the total number of pool members, the reward distribution policy of the pool, the electricity fee in the new miner's region, the pool fee, and the current Bitcoin value are used to predict which pools are profitable for specific miners.
Most mining pool studies emphasize either {\bf (i)} block withholding \cite{bitcoinGame} \cite{MininGamemodel} or {\bf (ii)} the unfair distribution of rewards \cite{socialMining} \cite{miningEvolu} \cite{Intstrategy} \cite{bitcoinGameCorr}; little to no work has investigated detecting the major mining pools with hidden block generation strategies. Our work further investigates these block formation strategies by introducing decision trees to distinguish major mining pools that follow a detectable block formation strategy. \section{Conclusion}\label{sec-con} An exploratory analysis of fundamental transaction handling characteristics of Bitcoin is conducted, together with a novel analysis of their predictability. The results of the former have been used to help explain the findings of the latter. Specifically, the studied block attributes include the size, the number of transactions and the fee. In addition, block generation and transaction confirmation, two fundamental processes resulting from transaction handling, are investigated. Furthermore, the contribution of miners to these attributes and processes is particularly taken into consideration. The results show that while it is possible to use measurement-based collected data to predict the basic attributes of the next block with reasonable accuracy, care is needed in predicting block generation and transaction confirmation. While the latter seems to contradict the expectation raised by the former, the explanation is supported and implied by the results of the exploratory analysis. Additionally, it is shown that combining internal and external factors enables better performance in prediction and classification. Furthermore, although it is in general difficult to distinguish among mining pools through classification, the investigation shows that F2Pool is well distinguished from the others. A closer investigation in the exploratory analysis shows that the block generation of F2Pool has a distribution with a visible characteristic difference, implying that it has used a different strategy than the other miners. These results shed new light on transaction handling in Bitcoin and may also be considered by users and miners when deciding their transaction strategies. \AtNextBibliography{\footnotesize} {\footnotesize \printbibliography} \end{document}
{ "attr-fineweb-edu": 1.744141, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd3k5qX_Bvg1Ro0MY
\section{Introduction and Results} Operator spreading, or growth, in local systems is a question of primary interest, which encodes transport properties, emergence of chaos and other aspects of many-body quantum dynamics \cite{Roberts:2014isa,Roberts:2014ifa,Nahum:2017yvy,Khemani:2017nda,Qi:2018bje,Bentsen_2019,Parker_2019,Barbon:2019wsy}. A classic result of Lieb and Robinson \cite{LR} (see also \cite{Huang_2018,chen2019operator} for recent progress) establishes that under time evolution the fastest possible spatial spreading of local operators is ballistic. There is no norm growth in this case since the time evolution is unitary. Ballistic spreading of operators, and signals, has been established for many models \cite{Calabrese:2006rx,ALEINER2016378,Luitz_2017,Nahum:2017yvy,Patel2017vfp,das2018light, Tibor1, Tibor2} and seems to be a universal feature of local systems in any dimension. At the same time, evolution of local operators in Euclidean time \begin{eqnarray} A(-i\beta)=e^{\beta H}A\, e^{-\beta H}, \label{Agr} \end{eqnarray} which we study in this paper, is much more nuanced. Since the Euclidean evolution is not unitary, the norm of $A(-i\beta)$ quickly grows with $\beta$. Moreover, as we explain below, the operator growth is not universal and reflects whether the system in question is integrable or chaotic. We start in section \ref{sec:normbound} by deriving a bound on $|A(t)|$ valid uniformly for $|t|=\beta$ by expanding \eqref{Agr} in a Taylor series and bounding the corresponding nested commutators. For local $H$ there is a combinatorial problem of counting contributing nested commutators, which we solve exactly for short range systems defined on Bethe lattices, which include local systems in 1D. In higher dimensions we conjecture an asymptotically tight bound. Hence, we expect our bounds on the operator norm to be optimal in the class of Hamiltonians we consider -- lattice Hamiltonians with local interactions. We find that the maximal rate of growth is very different in 1D, where it is at most double-exponential, and in higher dimensions or on Bethe lattices, where the norm can become infinite in finite Euclidean time. We extend the analysis to include spatial growth in section \ref{sec:LB}, where we find that in 1D operators spread at most exponentially, while in higher dimensions, including Bethe lattices, they can reach spatial infinity in finite Euclidean time. When the 1D system is finite, the minimal time necessary for an operator to reach the boundary is logarithmic, which may explain the logarithmic convergence of the numerical Euclidean time algorithm proposed in \cite{Beach_2019}. We further speculate in section \ref{sec:S} that the timescale originating from the Euclidean Lieb-Robinson bound might be related to the Thouless energy of the corresponding quantum many-body system \cite{Chan_2018}. In section \ref{sec:ME} the results on norm growth are used to constrain individual matrix elements. We find that matrix elements in the energy eigenbasis $\langle E_i|A|E_j\rangle$ must decay at least exponentially with $\omega=|E_i-E_j|$, while in 1D the decay must be faster than exponential, as provided by \eqref{meb} and \eqref{bme}. We also establish a number of bounds on the auto-correlation function at finite temperature $C_T(t)$, and its Fourier transform -- the power spectrum $\Phi_T(\omega)$, \begin{eqnarray} \label{Cdef} C_T(t)\equiv {\rm Tr}(\rho\, A(t) A)=\int\limits_{-\infty}^\infty \Phi_T(\omega)e^{i\omega t}d\omega, \\ \nonumber \rho \propto e^{-H/T},\, {\rm Tr}(\rho)=1.
\end{eqnarray} The bounds have an integral form, see (\ref{mu1d},\ref{mubound}) and \eqref{abaninbound}. At the physical level of rigor, they suggest that $\Phi_T(\omega)$ decreases exponentially with $\omega$ in $D\geq 2$, while in 1D the decay at large frequencies is superexponential. This emphasizes that one-dimensional systems are indeed very special, and many numerical results established for one-dimensional systems may not necessarily apply to higher-dimensional systems. The bound on $|A(t)|$ established in section \ref{sec:normbound} depends only on the absolute value $|t|$. Obviously, it is overly conservative for real $t$, when the time evolution is unitary. We argue, however, in section \ref{sec:S} that it does correctly capture the Euclidean growth $t=-i\beta$ of chaotic systems. We also consider the system size dependence of $|A(t)|$ and find it to be consistent with the Eigenstate Thermalization Hypothesis (ETH). For integrable systems we find the growth of $|A(t)|$ to be much slower than the maximal possible one, and in particular the spatial growth of $A(-i\beta)$ in this case is not exponential but polynomial. The bound on $|A(-i\beta)|$ can be translated into a bound on the growth of the Lanczos coefficients $b_n$, appearing as a part of the recursion method to numerically compute $C_T(t)$. This is provided we {\it assume} that asymptotically $b_n$ is a smooth function of $n$. To perform this calculation, we introduce a formalism of summing over weighted Dyck paths in section \ref{sec:L}, and evaluate the corresponding path integral via a saddle point approximation. The obtained bound on the growth of the Lanczos coefficients \eqref{boundonLT} is valid at all temperatures. Translating it into a bound on the Lyapunov exponent of the OTOC, we find a new bound on chaos \begin{eqnarray} \lambda_{\rm OTOC}\leq {2\pi T\over 1+2 T \bar{\beta}(T)}, \end{eqnarray} where $\bar{\beta}$ is such that $C_T(t)$ is analytic inside the strip $|\Im(t)|\leq \bar{\beta}(T)$. For local systems we find $\bar{\beta}(T)\geq 2\beta^*$ with $\beta^*$ given by \eqref{betastar} for all $T$. We illustrate this bound for the SYK model in section \ref{sec:L}, see Fig.~\ref{Fig:SYK}. We conclude with a discussion in section \ref{sec:summary}. \section{Bound on operator norm growth in Euclidean time} \label{sec:normbound} Our goal in this section is to bound the infinity norm of a local operator evolved in Euclidean time \begin{eqnarray} A(-i\beta)=e^{\beta H}A\, e^{-\beta H}, \quad |A(-i\beta)|\leq |A| f(\beta). \end{eqnarray} Here $f(\beta)$ is a bound which depends on the inverse temperature $\beta$, the strength of the local coupling $J$ and geometrical properties of the underlying lattice model. We argue that our bounds \eqref{answer1} (for 1D systems) and \eqref{answer} (for higher dimensions) are optimal for the class of models characterized by the same strength of the local coupling constant $J$ and the lattice geometry encoded in the Klarner's constant $\lambda$ and the animal histories constant $\varepsilon$, which we introduce later in this work. For simplicity, we first consider a nearest neighbor interaction Hamiltonian in 1D \begin{eqnarray} \label{sumh} H=\sum_{I=1}^L h_I, \end{eqnarray} where each $h_I$ acts on sites $I$ and $I+1$ and $|h_I|\leq J$ for all $I$ and some fixed $J$ \footnote{Time-evolved $A(t)$ will not change if any of the local Hamiltonians $h_I$ is shifted by a constant. Therefore we define $h_I$ such that the absolute values of its largest and smallest eigenvalues are the same}. Any nearest neighbor interaction spin chain would be an example.
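As a simple numerical illustration (ours, not part of the derivation below), one can track $|A(-i\beta)|$ directly by exact diagonalization of a small chain; the mixed-field Ising couplings chosen here are a common non-integrable benchmark and are our assumption.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    # Embed a single-site operator at site i of an L-site chain.
    mats = [id2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L = 8  # small enough for exact operator norms
# Mixed-field Ising chain, H = sum_I h_I with nearest-neighbor bonds
H = sum(site_op(sz, i, L) @ site_op(sz, i + 1, L) for i in range(L - 1))
H += sum(1.05 * site_op(sx, i, L) + 0.5 * site_op(sz, i, L) for i in range(L))

A = site_op(sx, L // 2, L)  # one-site operator in the middle of the chain
for beta in [0.2, 0.4, 0.6, 0.8]:
    Ab = expm(beta * H) @ A @ expm(-beta * H)
    # infinity (operator) norm = largest singular value
    print(f"beta={beta:.1f}  |A(-i beta)| = {np.linalg.norm(Ab, 2):.3e}")
\end{verbatim}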
The operator $A$ will be a one-site operator. An example with $L+1=6$ sites is shown in Fig.~\ref{Fig:1}. \begin{figure} \begin{tikzpicture} \foreach \s in {1,5} { \draw[double,double distance = 3pt] ({1.5*(\s - 1)+0.15},0) -- ({-0.15+1.5*(\s)},0); } \foreach \s in {2,3,4} { \draw[double distance = 3pt,double=lightgray] ({1.5*(\s - 1)+0.15},0) -- ({-0.15+1.5*(\s)},0); } \foreach \s in {1,...,6} { \node[draw, circle,radius=5pt] at ({1.5*(\s - 1)},0) {}; } \node[draw,fill, circle,radius=5pt] at ({1.5*(3 - 1)},0) {}; \node[] at ({1.5*(3 - 1)},0.5) {\large $A$}; \foreach \s in {1,...,5} { \node[] at ({1.5*(\s - 1)+0.75},-0.5) {\large $h_{\s}$}; } \end{tikzpicture} \caption{One-dimensional lattice with a short-range interaction Hamiltonian $H=\sum_{I=1}^5 h_I$. The local operator $A$ sits at the third site counting from the left, between the second and third bonds. Bonds highlighted in gray form a lattice animal $I=2,3,4$. } \label{Fig:1} \end{figure} Euclidean time-evolved $A(-i\beta)$ can be expanded in a Taylor series \begin{eqnarray} \label{Taylor} A(-i\beta)=A+\beta[H,A]+{\beta^2\over 2}[H,[H,A]]+\dots \end{eqnarray} Using the decomposition \eqref{sumh}, the operator $A(-i\beta)$ can be represented as a sum of nested commutators of the form \begin{eqnarray} A(-i\beta)=A+\sum_{k=1}^\infty \sum_{\{I_1,\dots,I_k\}} [h_{I_k},[\dots,[h_{I_1},A]]] {\beta^k\over k!}. \label{nestedc} \end{eqnarray} Here the sum is over all sets of indexes $\{I_1,\dots,I_k\}$ which satisfy the following ``adjacency'' condition: the first index $I_1$ must be adjacent to the site of $A$, $I_2$ must be adjacent to the endpoints of $I_1$ (which include the site of $A$), $I_3$ must be adjacent to the endpoints of the union of $I_1,I_2$, etc. In other words, any subset of bonds $I_1,I_2,\dots,I_\ell$ for $\ell\leq k$ defines a connected cluster. Otherwise, the commutator in \eqref{nestedc} vanishes. A connected cluster of bonds of any particular shape is called a bond lattice animal. In 1D, all lattice animals consisting of $j$ bonds are easy to classify: they are strings of consecutive bonds from some $I$ to $I+j-1$. In higher dimensions, the number of different bond lattice animals consisting of $j$ bonds grows quickly with $j$. Each set $\{I_1,\dots,I_k\}$ in \eqref{nestedc} defines a lattice animal, but the same animal may correspond to different sets. This is because indexes can repeat and appear in different orders, subject to the constraints outlined above. If we think of the set $\{I_1,\dots,I_k\}$ as a ``word'' written in terms of ``letters'' $I_\ell$, then the corresponding lattice animal defines the alphabet. There is a more nuanced characteristic of the index sets from \eqref{nestedc}: the order in which new indexes appear. Namely, we take a set $\{I_1,\dots,I_k\}$ and, while going from left to right, remove indexes which have already appeared. In this way we obtain a new (shorter) set which also satisfies the adjacency condition. A particular order is called a ``history.'' For example, the two sets $\{2,3,2,4,3\}$ and $\{3,3,4,2,4\}$ define the same lattice animal consisting of bonds $I=2,3,4$ but different histories, $\{2,3,4\}$ and $\{3,4,2\}$ correspondingly, see Fig.~\ref{Fig:1}.
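This counting is easy to verify by brute force. The following sketch (ours) enumerates, for a one-site operator at site $0$ of a 1D chain, all index sets obeying the adjacency condition and the distinct histories they generate; the printed counts reproduce $\phi(k)=2^k$ of \eqref{phi1D} and the Bell-polynomial count $B_k(2)$ of \eqref{Bell} derived below.
\begin{verbatim}
from itertools import product

def count_sets(k, radius):
    # N(k): number of index sets (I_1..I_k) obeying the adjacency
    # condition; bond b joins sites b and b+1, A sits at site 0.
    total, histories = 0, set()
    for seq in product(range(-radius, radius), repeat=k):
        sites, hist, ok = {0}, [], True
        for b in seq:
            if b in sites or b + 1 in sites:
                if b not in hist:
                    hist.append(b)   # first appearance -> history
                sites |= {b, b + 1}
            else:
                ok = False
                break
        if ok:
            total += 1
            histories.add(tuple(hist))
    return total, histories

for k in range(1, 6):
    n, hists = count_sets(k, radius=k + 1)
    phi_k = sum(1 for h in hists if len(h) == k)  # histories of length k
    print(f"k={k}: N(k)={n}, phi(k)={phi_k} (2^k={2 ** k})")
\end{verbatim}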
Going back to the sum \eqref{nestedc}, to bound the infinity norm of $A(-i\beta)$ we can bound each nested commutator by $(2J)^k|A|$. Then \begin{eqnarray} |A(-i\beta)|&\leq& |A|f(\beta),\\ f(\beta)&=&\left(1+\sum_{k=1}^\infty \sum_{\{I_1,\dots,I_k\}} {(2J|\beta|)^k\over k!}\right), \label{boundsum} \end{eqnarray} and the non-trivial task is to calculate, for any given $k$, the number of sets $\{I_1,\dots,I_k\}$ which satisfy the adjacency condition. Evaluating the sum \eqref{boundsum} can be split into two major steps. The first step is to calculate the total number $\phi(j)$ of animal histories associated with all possible lattice animals consisting of $j$ bonds. The second step is to calculate the sum over sets $\{I_1,\dots,I_k\}$ associated with any given history $\{J_1,\dots,J_j\}$. This last problem can be solved exactly in full generality. Let us assume we are given a history -- a set $\{J\}=\{J_1,\dots,J_j\}$ which satisfies the adjacency condition. We want to know the number of different sets $\{I\}=\{I_1,\dots,I_k\}$ for $k\geq j$ satisfying the adjacency condition such that $\{J\}$ is the history of $\{I\}$. We denote this number by $S(k,j)$. An important observation here is that any given set $\{I\}$ defines a partition of $\{1,2,\dots, k\}$ into $j$ groups labeled by elements from $\{J\}$, by assigning each number $1\leq i \leq k$ to the group specified by $I_i$. And vice versa, each partition of $\{1,2,\dots, k\}$ into $j$ groups defines a proper set $\{I\}$ satisfying the adjacency condition. To see this, we need to assign each group a unique label from $\{J\}$. We do it iteratively. The element $1$ belongs to a group, which will be assigned the label $J_1$. Then we consider element $2$. If it belongs to the same group labeled by $J_1$, we move on to element $3$; otherwise, we assign the group it belongs to the label $J_2$. Then we consider elements $3$, $4$ and so on. In this way all $j$ groups will be labeled by unique elements from $\{J\}$ such that the adjacency condition is satisfied. In other words, we have established a one-to-one correspondence between the space of proper sets $\{I\}$ for the given history $\{J\}$ and the space of partitions of $k$ elements into $j$ groups. The number $S(k,j)$ of such partitions is the Stirling number of the second kind, which admits the following representation \cite{abramovich1964handbook} \begin{eqnarray} \label{S2} S(k,j)=\sum_{s=1}^j {(-1)^{j-s} s^{k-1}\over (j-s)!(s-1)!}. \label{n} \end{eqnarray} If we denote the number of proper sets $\{I_1,\dots,I_k\}$ in \eqref{boundsum} consisting of $k$ bonds by ${\mathcal N}(k)$, such that \begin{eqnarray} \label{k-expansion} f(\beta)=1+\sum_{k=1}^\infty {\mathcal N}(k){(2J|\beta|)^k \over k!}, \end{eqnarray} then ${\mathcal N}(k)$ and $\phi(j)$ are related by the Stirling transform, \begin{eqnarray} \label{Nphi} {\mathcal N}(k)=\sum_{j=1}^k S(k,j) \phi(j). \end{eqnarray} The inverse relation is $\phi(j)=\sum_{k=1}^j s(j,k) {\mathcal N}(k)$, where $s(j,k)$ are the Stirling numbers of the first kind. From here in full generality follows \cite{bernstein1995some} \begin{eqnarray}\label{ourresult} f(\beta)=1+ \sum_{j=1}^\infty \phi(j) {q^j\over j!}, \end{eqnarray} where \begin{eqnarray} \label{qdef} q:=\left(e^{2|\beta| J}-1\right). \end{eqnarray} We will derive this identity below. The expansion in $q$ \eqref{ourresult} has an obvious advantage over \eqref{k-expansion}. Locality is implicit in \eqref{k-expansion}, where the terms at order $\beta^k$ come from the lattice animals of all sizes. At the same time, \eqref{ourresult} makes locality manifest: terms at order $q^j$ come only from the lattice animals which have at least $j$ bonds.
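The identity \eqref{ourresult} is also straightforward to verify order by order; a small \texttt{sympy} sketch (ours), using the 1D value $\phi(j)=2^j$ derived below as an example input:
\begin{verbatim}
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

beta, J = sp.symbols('beta J', positive=True)
kmax = 6
phi = lambda j: 2 ** j   # 1D value, Eq. (phi1D) below

# beta-expansion (k-expansion) with N(k) = sum_j S(k,j) phi(j)
f_beta = 1 + sum(sum(stirling(k, j, kind=2) * phi(j)
                     for j in range(1, k + 1))
                 * (2 * J * beta) ** k / sp.factorial(k)
                 for k in range(1, kmax + 1))

# q-expansion (ourresult) with q = exp(2 beta J) - 1
q = sp.exp(2 * beta * J) - 1
f_q = 1 + sum(phi(j) * q ** j / sp.factorial(j)
              for j in range(1, kmax + 1))

diff = sp.series(f_q - f_beta, beta, 0, kmax + 1).removeO()
print(sp.simplify(diff))   # 0: the expansions agree to O(beta^kmax)
\end{verbatim}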
This representation can therefore be used to establish a Euclidean version of the Lieb-Robinson bound, see section \ref{sec:LB}. To evaluate \eqref{ourresult} we still need to know the number of lattice animal histories $\phi(j)$ for a given $j$. In the case of 1D systems, it can be calculated exactly, while in higher dimensions we propose an asymptotically tight bound. Hence, we consider these cases separately. \subsection{1D systems} \label{1d} In one dimension, all lattice animals consisting of $j$ bonds are simply the strings of $j$ consecutive bonds. There are $N(j)=j+1$ such animals which include the site of the operator $A$. A convenient way to enumerate them is to count the number of bonds $j_1$ and $j_2$, $j_1+j_2=j$, to the left and to the right of $A$, respectively. For given $j_1,j_2$ there is, obviously, only one animal, $N(j_1,j_2)=1$. For any given $j_1,j_2$ we denote by $h(j_1,j_2)$ the number of histories associated with this animal, i.e.~the number of different sets $\{J\}=\{J_1,\dots,J_j\}$ such that each $J_i$ belongs to the animal, all $J_i$ in the set are unique, and $\{J\}$ satisfies the adjacency condition. Each history $\{J\}$ can be completely parametrized by the order in which the cluster ``grew'' in the left and right directions; for example, the histories $\{2,3,4\}$ and $\{3,4,2\}$ from Fig.~\ref{Fig:1} can be parametrized as ``left,right,right'' and ``right,right,left'' correspondingly. In other words, histories with given $j_1,j_2$ are in one-to-one correspondence with strings of $j$ elements, each element being either ``left'' or ``right,'' with in total $j_1$ and $j_2$ elements of each kind. Obviously, the total number of such strings is \begin{eqnarray} h(j_1,j_2)={(j_1+j_2)!\over j_1!\, j_2!} \end{eqnarray} Combining all ingredients together, we find the number of lattice histories for all lattice animals of size $j$, \begin{eqnarray} \label{phij1j2} \phi(j_1,j_2)&=&N(j_1,j_2)h(j_1,j_2)={(j_1+j_2)!\over j_1!\, j_2!},\\ \phi(j)&=&\sum_{j_1+j_2=j} \phi(j_1,j_2)=2^j. \label{phi1D} \end{eqnarray} From \eqref{k-expansion} and \eqref{Nphi} we find in full generality \begin{eqnarray} \label{f-from-phi} f(\beta)=1+\sum_{j=1}^\infty\, \sum_{k=j}^\infty \phi(j) S(k,j) {(2|\beta| J)^k\over k!}. \end{eqnarray} By definition $k\geq j$. Crucially, expression \eqref{n} vanishes for $1\leq k<j$. Therefore the sum over $k$ can be extended to start from $k=1$ and can be easily evaluated, \begin{eqnarray} \nonumber f(\beta)=1+\sum_{j\geq 1}^\infty \sum_{s=1}^{j} {(-1)^{j-s} \phi(j) \over (j-s)!(s-1)!}{e^{2 \beta J s}-1\over s}. \end{eqnarray} The sum over $s$ can be evaluated explicitly, yielding \eqref{ourresult} \footnote{As a side note, the evaluation of \eqref{f-from-phi} in section \ref{1d} implies Lemma 5 of \cite{Kliesch_2014}. Let us consider a fixed lattice animal consisting of $j$ bonds, listed in some arbitrary order $\{J_1,\dots,J_j\}$. One may want to calculate $G=\sum_{k\geq j}\sum_{\{I_1,\dots, I_k\}}(2J|\beta|)^k/k!$, where the sum is over all sets $\{I_1,\dots, I_k\}$, where each $I_i$ belongs to the set $\{J_1,\dots,J_j\}$, and each $J_i$ appears in the set $\{I_1,\dots, I_k\}$ at least once. This is a simplified version of our main calculation, with the adjacency condition being ignored. It is the sum evaluated in Lemma 5 of \cite{Kliesch_2014}. By taking a set $\{I_1,\dots, I_k\}$ from the sum we can associate to it a set $\{I_1,I_{i_2},\dots I_{i_j}\}$ by going from the left to the right and removing repeating labels.
As a set (i.e.~ignoring the order) $\{I_1,I_{i_2},\dots I_{i_j}\}$ coincides with $\{J_1,\dots,J_j\}$. The key point here is the same: the number of sets $\{I_1,\dots, I_k\}$ associated with the same set $\{I_1,I_{i_2},\dots I_{i_j}\}$ is equal to $S(k,j)$. If we now sum over all sets $\{I_1,\dots, I_k\}$ associated with a particular $\{I_1,I_{i_2},\dots I_{i_j}\}$, this is exactly the sum evaluated in \eqref{f-from-phi} with $\phi(j)=1$. Since there are $j!$ different permutations of labels in $\{J_1,\dots,J_j\}$, and thus $j!$ sets $\{I_1,I_{i_2},\dots I_{i_j}\}$, we therefore obtain $G=q^j$}. Using the explicit value of $\phi(j)$ \eqref{phi1D} we find \begin{eqnarray} \label{answer1} f(\beta)=\sum_{j=0}^\infty f(j,\beta)=e^{2q}, \quad f(j,\beta)={(2q)^{j}\over j!}. \end{eqnarray} Here $f(j,\beta)$ is the contribution to the bound coming from the clusters which include at least $j$ bonds. This result can be further refined. In \eqref{phij1j2} we introduced the number of lattice histories associated with the lattice animal which consists of $j_1$ bonds to the left of $A$ and $j_2$ bonds to the right. Repeating the summation in \eqref{f-from-phi} we readily find \begin{eqnarray} f(\beta)=\sum_{j_1, j_2\geq 0}^\infty f(j_1,j_2,\beta),\quad f(j_1,j_2,\beta)={ q^{j_1+j_2}\over j_1!\, j_2!}.\ \ \ \end{eqnarray} Here $f(j_1,j_2,\beta)$ is the bound on the norm of the part of $A(-i\beta)$ supported on the cluster of size $j_1+j_2$. It can therefore be used to obtain the bound in the case of a finite 1D lattice, or an infinite 1D lattice with a boundary. By re-expanding \eqref{answer1} into a Taylor series in $\beta$, \begin{eqnarray} f(\beta)=\sum_{k=0}^\infty {B_k(2)\over k!} (2J|\beta|)^k, \end{eqnarray} where $B_k$ are the Bell polynomials, we find a bound on the norm of individual nested commutators, \begin{eqnarray} \label{Bell} | \underbrace{[H,[\dots,[H,A]]]}_{k\ \rm commutators} |\leq |A|\, B_k(2) (2J)^k. \end{eqnarray} \subsection{Bethe lattices} The behavior of $f(\beta)$ differs drastically in one and higher dimensions. To better understand this difference we consider an ``intermediate'' scenario of a short range Hamiltonian defined on a Bethe lattice of coordination number $z$ \cite{PhysRevB.30.391}. Namely, we assume that each $h_I$ from \eqref{sumh} ``lives'' on a bond and acts on the Hilbert spaces associated with the two vertices adjacent to that bond. For any finite $k$ in the Taylor series expansion \eqref{nestedc} only a finite number of bonds is involved, and the corresponding lattice animals (clusters) live on the Cayley tree. Thus, similarly to 1D, there are no loops, but the total number of lattice animals consisting of $j$ bonds grows exponentially, $N(j)\sim \lambda(z)^j$, \begin{eqnarray} \ln\lambda(z)=(z-1)\log(z-1)-(z-2)\log(z-2). \end{eqnarray} This exponential growth is typical for lattices in higher dimensions $D>1$. The total number of lattice animal histories $\phi(j)$ can be calculated exactly in this case (see appendix \ref{appx:B}), \begin{eqnarray} \label{phi} \phi(j)=(z-2)^j {\Gamma(j+z/(z-2))\over \Gamma(z/(z-2))}, \end{eqnarray} leading to the bound \begin{eqnarray} \label{Betheanswer} f(\beta)= (1-(z-2)q)^{-{z/(z-2)}}. \end{eqnarray} In other words, the total number of histories $\phi(j)$ grows as a factorial. The same qualitative behavior applies for all higher dimensional lattices. As a final remark, we notice that taking $z\rightarrow 2$ in \eqref{Betheanswer} yields $f(\beta)=e^{2q}$, in full agreement with \eqref{answer1}.
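As a quick consistency check (ours), one can verify in \texttt{sympy} that the closed form \eqref{Betheanswer} indeed resums $\phi(j)$ of \eqref{phi} through \eqref{ourresult}:
\begin{verbatim}
import sympy as sp

q, z = sp.symbols('q z', positive=True)
jmax = 8
a = z / (z - 2)
# phi(j) of Eq. (phi); sp.rf is the rising factorial Gamma(j+a)/Gamma(a)
phi = lambda j: (z - 2) ** j * sp.rf(a, j)

lhs = 1 + sum(phi(j) * q ** j / sp.factorial(j) for j in range(1, jmax + 1))
rhs = sp.series((1 - (z - 2) * q) ** (-a), q, 0, jmax + 1).removeO()
print(sp.simplify(sp.expand(lhs - rhs)))   # 0 through order q^jmax
\end{verbatim}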
\subsection{Higher dimensional systems} \label{sec:hd} The calculations of previous sections can in principle be extended to an arbitrary lattice system, but the number of lattice animal histories is difficult to evaluate exactly. Nevertheless it is known that the number of different lattice animals $N(j)$ consisting of $j$ bonds (which include a particular site) grows rapidly in higher dimensions. While the exact formula is not known, the asymptotic growth is known to be exponential, and is controlled by the so-called Klarner's constant $\lambda$, \begin{eqnarray} N(j) \sim \lambda^j. \label{Klarner} \end{eqnarray} By introducing a sufficiently large but $j$-independent constant $C$ we can uniformly bound the number of lattice animals consisting of $j$ bonds by \footnote{To account for a polynomial pre-exponential factor, coefficient $\lambda$ in \eqref{animalgrowth} may need to be taken strictly larger than the Klarner's constant $\lambda$ in \eqref{Klarner}} \begin{eqnarray} \label{animalgrowth} N(j)\leq C\, \lambda^j. \end{eqnarray} The number of histories for any given animal is the number of different sets $\{J_1,\dots,J_j\}$ where all indexes are distinct, subject to the adjacency condition. Let us denote by $h(j)$ the average number of histories for all animals consisting of $j$ bonds. Then, it is trivially bounded by $h(j)\leq j!$. It can be shown that for sufficiently large $j$ \cite{Bouch} \begin{eqnarray} h(j)\geq {j!\over a^j}, \end{eqnarray} for some $a>1$. We, therefore, conjecture that for higher dimensional lattices $h(j)$ is uniformly bounded by \begin{eqnarray} h(j) \leq C'\, {j!\over \varepsilon^j}, \label{Evdokiya} \end{eqnarray} for some $\varepsilon>1$ and a $j$-independent constant $C'\geq 1$. This bound is trivially satisfied for $\varepsilon=1$. The non-trivial part here is the expectation that \eqref{Evdokiya} correctly captures the leading (exponential) asymptotic behavior of $h(j)$ with some $\varepsilon>1$, i.e.~\eqref{Evdokiya} is the optimal bound which can not be further improved (excluding polynomial pre-factors). We therefore introduce here the constant $\varepsilon$ which we call {\it animal histories} constant and conjecture that it is strictly larger than $1$. In the end of this section we also derive a lower bound on $\varepsilon/\lambda$. By combining \eqref{animalgrowth} together with \eqref{Evdokiya} \begin{eqnarray} \label{ee} \phi(j) = N(j) h(j)\leq C'(\lambda/\varepsilon)^jj!, \end{eqnarray} we find the bound \begin{eqnarray}\nonumber f(\beta)=\sum_{j=0}^\infty f(j,\beta),\quad f(j,\beta)=C'{\left(q/q_0\right)^{j}}, \end{eqnarray} Here $f(\beta)$ is defined to be larger than the sum in \eqref{boundsum}. The coefficient \begin{eqnarray} \label{q0} q_0= {\varepsilon\over \lambda}. \end{eqnarray} characterizes lattice geometry. Unlike in 1D, where \eqref{answer1} has an additional factorial suppression factor, $f(j,\beta)$ in higher dimensions grows exponentially for sufficiently large $\beta$. Summing over $j$ yields \begin{eqnarray} \label{answer} f(\beta)={C' \over 1-q/q_0}. \end{eqnarray} In contrast to 1D, while \eqref{answer1} is finite for all $\beta$, \eqref{answer} is finite only for \begin{eqnarray} \label{betastar} |\beta|<\beta^*\equiv \ln(1+q_0)/(2J). \end{eqnarray} While \eqref{answer} is only a bound on $f(\beta)$ defined in \eqref{boundsum}, location of the singularity in both cases is the same because it is only sensitive to the asymptotic behavior of $N(j)$ and $h(j)$. 
Expanding \eqref{answer} in Taylor series \begin{eqnarray} f(\beta)=C'\sum_{k=0}^\infty {P_k(q_0^{-1})\over k!}(2J|\beta|)^k, \end{eqnarray} where $P_k$ are the polynomials defined via \begin{eqnarray} \label{Pk} P_k(x)={1\over 1+x} \left(x(1+x){\partial \over \partial x}\right)^k(1+x), \end{eqnarray} yields a bound on individual nested commutators \begin{eqnarray} \label{nested} | \underbrace{[H,[\dots,[H,A]]]}_{k\ \rm commutators} |\leq |A|C'\, {P_k(q_0^{-1})} (2J)^k. \end{eqnarray} The divergence of bound \eqref{answer} at $|\beta|=\beta^*$ is not an artifact of an overly conservative counting, as confirmed by a 2D model introduced in \cite{Bouch}, for which $|A(-i\beta)|$ is known to diverge. We will argue in section \ref{sec:S} that the growth outlined by the bounds (\ref{answer1},\ref{answer}) reflects actual growth of $|A(-i\beta)|$ in non-integrable systems and singularity of \eqref{answer} at finite $\beta$ is a sign of chaos. We also note that in case of 1D systems the bound \eqref{answer1} ensures that the operator norm of $A(t)$ remains bounded for any complex $t$. This is consistent with analyticity of correlation functions in 1D \cite{araki1969gibbs}. On the contrary, in higher dimensions, physical observables may not be analytic. We discuss the relation between the singularity of $|A(-i\beta)|$ and non-analyticity of physical observables due to a phase transition in section \ref{sec:S} and show that they have different origin. It is interesting to compare our result for a general lattice in $D>2$ with the exact result for Bethe lattices obtained in the previous section. From \eqref{phi} and \eqref{ee} we obtain lattices animal histories constant $\varepsilon$ for Bethe lattices, \begin{eqnarray} \varepsilon=\left({z-1\over z-2}\right)^{z-1}, \quad q_0={\varepsilon\over \lambda}={1\over z-2}. \end{eqnarray} For any $z\geq 2$, $\varepsilon >1$ supporting our conjecture that $\varepsilon$ is always strictly larger than $1$. Our universal expression \eqref{answer} bounds the exact result \eqref{Betheanswer} from above with any $q_0<1/(z-2)$ and sufficiently large $C'$. Bethe lattices provide a lower bound on the combination $q_0={\varepsilon\over \lambda}$ and hence on the critical value $\beta^*$. We show in the Appendix \ref{loop} that for any lattice of coordination number $z$, such that each vertex is attached to at most $z$ bonds the number of lattice animal histories is bounded by $\phi(j)\leq (z-2)^j {\Gamma(j+z/(z-2))\over \Gamma(z/(z-2))}$. We therefore find in full generality \begin{eqnarray} \label{Bethebound} q_0={\varepsilon\over \lambda} \geq {1\over z-2}. \end{eqnarray} This bound is stronger than any previously known, as we explain below. To conclude this section, we demonstrate the advantage of counting lattice animal histories as is done in \eqref{ourresult} over previously explored approaches. There is a straightforward way to estimate the number of sets ${I_1,\dots, I_k}$ in \eqref{boundsum} from above by counting the number of ways a new bond can be added to the set at each step. Provided the lattice has coordination number $z$, starting from the site of $A$, there are $z$ ways to choose $I_1$, at most $z(2z)$ ways to choose $I_2$, $z(2z)(3z)$ ways to choose $I_3$ and so on. As a result we would get an estimate for $f(\beta)$, \begin{eqnarray} f(\beta)\leq f_{\rm approx}=\sum_{k=0}^\infty (2J|\beta|)^k z^k = {1\over 1-2J |\beta|z}. \label{naive} \end{eqnarray} This result was previously obtained in \cite{Abanin_2015,arad2016connecting}. 
This gives the following estimate for the location of the pole \begin{eqnarray} \label{naivebetastar} |\beta|={z^{-1}\over 2J}. \label{naivesin} \end{eqnarray} The approximation \eqref{naivebetastar} is naive as it overcounts the number of sets $\{I_1,\dots,I_k\}$ assuming the underlying cluster is always of size $k$. We therefore expect \eqref{naive} to be weaker than our \eqref{answer}, $f(\beta)\leq f_{\rm approx}(\beta)$, and in particular the location of the singularity \eqref{naivesin} to be smaller than $\beta^*$ defined in \eqref{betastar}. This can be written as an inequality \begin{eqnarray} \varepsilon/\lambda\geq e^{1/z}-1, \label{ah} \end{eqnarray} which is indeed satisfied due to \eqref{Bethebound}. The advantage of \eqref{Bethebound} becomes apparent in the limit $z\rightarrow 2$ when $\beta^*$ becomes infinite while \eqref{naivebetastar} remains finite. A result analogous to \eqref{answer} has been previously established in \cite{de_Oliveira_2018}, but importantly there $q_0$ was just inverse of the lattice animal constant, i.e.~Klarner's constant introduced in previous section, $q_0=\lambda^{-1}$. Crucially, we improve this result to account for proper lattice animal histories by introducing $\varepsilon>1$ in \eqref{q0}. Without $\varepsilon$ critical value of $\beta$ where $f(\beta)$ diverges is given by $q_0=e^{2J\beta}-1=\lambda^{-1} $ and e.g.~for a cubic lattice in $D$ dimensions $\lambda$ asymptotes to $2De$ when $D\rightarrow \infty$ \cite{de_Oliveira_2018,miranda2011growth}. This value is smaller than \eqref{naivebetastar} with $z=2D$, meaning the inequality \eqref{ah} is not satisfied. To conclude, without taking lattice animal histories into account, even exact value of $\lambda$ results in a less stringent bound than \eqref{naivebetastar}, while our bound is always stronger than that due to \eqref{Bethebound}. \section{Spatial growth in Euclidean time} \label{sec:LB} While deriving the bound on the norm of local operators evolved in Euclidean time, \eqref{answer1} and \eqref{answer}, we obtained a stronger result -- a bound $f(j,\beta)$ (or $f(j_1,j_2,\beta)$ in 1D) on spatial growth of $A(-i\beta)$. It can be immediately translated into the Euclidean analog of the Lieb-Robinson bound \cite{LR} on the norm of the commutator of two spatially separated local operators. If $B$ is an operator with finite support located distance $\ell$ away from $A$ (measured in the Manhattan norm in case of a cubic lattice), then in $D\geq 2$ \begin{eqnarray} \left|[A(i\beta),B]\right|\leq 2|A||B| \sum_{j=\ell}^\infty f(j,\beta)=2|A||B| {C'\, (q/q_0)^{\ell}\over 1-(q/q_0)}, \nonumber \end{eqnarray} where we assumed that $|\beta|<\beta^*$. For larger $|\beta|$ there is no bound as the sum does not converge. This result means that the local operator can spread to the whole system, no matter how large or even infinite that is, in finite Euclidean time $\beta=\beta^*$. We will argue in section \ref{sec:S} that this is the true physical behavior in the chaotic case and therefore the bound can not be improved to get rid of the divergence at $|\beta|=\beta^*$ in full generality. In 1D the situation is very different. Assuming local operator $B$ is located $\ell$ bonds away from $A$ we find \begin{eqnarray} \nonumber \left|[A(i\beta),B]\right|&\leq& 2|A||B| \sum_{j_1=0,j_2=\ell}^\infty f(j_1,j_2,\beta)=\\ &&2|A||B| {e^{2q}\over (\ell-1)!}\int_0^q e^{-t} t^{\ell-1}dt. 
\label{LR} \end{eqnarray} (If the system is infinite only in one direction and $A$ is sitting at the boundary, one factor of $e^{q}$ should be removed.) Qualitatively the RHS of \eqref{LR} behaves as \begin{eqnarray} \label{LRsimplify} \left|[A(-i\beta),B]\right|\leq 2|A||B|\, {q^\ell \over \ell!}e^{q} , \end{eqnarray} for $\ell \gg q+1$, and asymptotes to $2|A||B| e^{2q}$ for $\ell \ll q+1$. This means a local operator spreads exponentially fast, to distances $\ell \sim e^{2J\beta}$, in Euclidean time $\beta$. Exponential spreading of operators in 1D seems to be in agreement with the convergence of the Euclidean variational algorithm of \cite{Beach_2019} in logarithmic time. The connection between Euclidean Lieb-Robinson bound and the convergence time is intuitive, but difficult to establish rigorously, in particular, because the latter is sensitive to the choice of initial wave-function. For the integrable models, for which the spreading of operators is at most polynomial (see section \ref{sec:S}), convergence time might be even shorter because of a well-tuned initial wave-function. For the chaotic systems we expect no fine-tuning of the initial state and hence a direct relation between the convergence time and Euclidean Lieb-Robinson bound. Another possibly intriguing connection is with the studies of Thouless times in chaotic Floquet systems without conserved quantities \cite{Chan_2018}. There, it was noticed that in 1D Thouless time is logarithmic in system size (see also \cite{Gharibyan:2018jrp}), and finite in $D\geq 2$ (see, however, \cite{Bertini:2018wlu}). That is exactly the same behavior as in the case of Euclidean operator spreading. One potential interpretation would be that Thouless time can be associated with the slowest Euclidean mode propagating in the system. Under Euclidean time evolution with a time-dependent random Hamiltonian our extension of Lieb-Robinson bound holds. We also surmise that in this case spatial growth of all quantities, including the slowest, is qualitatively and outlined by the bound with some effective $J,q_0$. When the system in question has a local conserved quantity, the slowest transport mode is diffusive, leading to $L^2$ scaling of Thouless time \cite{friedman2019spectral}. Thus, to compete this picture it would be necessary to establish that under Euclidean time evolution time necessary for a diffusive mode to travel across the system is the same as in the Minkowski case, i.e~$\beta\sim L^2$, where $L$ is the system size. Finally, we notice that the Euclidean analog of the Lieb-Robinson bound in 1D \eqref{LR} looks similar to the conventional Minkowski bound \cite{chen2019operator} \begin{eqnarray} \label{LRM} \left|[A(t),B]\right|\leq 2|A||B|\, {(2Jt)^\ell \over \ell!}, \end{eqnarray} with $2Jt$ substituted by $q(\beta)$. \section{Constraints on matrix elements} \label{sec:ME} \subsection{Individual matrix elements} Constraints on the infinity-norm of $A(i\beta)^\dagger=A(-i\beta)$ provide an upper bound on the magnitude of matrix elements $A_{ij}=\langle E_i|A|E_j\rangle$ in the energy eigenbasis. Starting from \begin{eqnarray} \label{identity} A(-i\beta)_{ij}\equiv \langle E_i|A(-i\beta)|E_j\rangle=A_{ij} e^{\beta(E_i-E_j)} \end{eqnarray} we find \begin{eqnarray} \label{boundSpain} |A_{ij}|\leq e^{-\beta(E_i-E_j)} |A(-i\beta)|. \end{eqnarray} This inequality holds for any $\beta$ and we therefore can optimize it over $\beta$. 
Using explicit form of the bound \eqref{answer1} in 1D we find optimal value of $\beta$ to be (without loss of generality we assumed $\omega=E_i-E_j\geq 0$) \begin{eqnarray} \label{ob} \beta=\left\{\begin{array}{lr} \ln\left({\omega\over 4J}\right)/(2J), & \omega \geq 4J,\\[2pt] 0, & 4J\geq \omega. \end{array} \right \end{eqnarray} This yields \begin{eqnarray} \label{meb} |A_{ij}|\leq |A| \kappa(\omega),\qquad \omega=|E_i-E_i|, \end{eqnarray} where \begin{eqnarray} \label{bme1} \kappa(\omega)&\equiv &\left\{\begin{array}{lr} {\rm exp}\left\{ 2\,\tilde{\omega}\left(1-\ln\tilde{\omega}\right)-2 \right\}, & \tilde{\omega}=\omega/(4J)\geq 1,\\[2pt] 1, & \tilde{\omega} \leq 1. \end{array} \right.\nonumber \end{eqnarray} These results shows that in 1D for large energy difference $\omega=|E_i-E_i|\gg J$ off-diagonal matrix elements $A_{ij}$ decay faster than exponential. For $\omega \leq 4J$ the bound trivializes to $|A_{ij}|\leq |A|$. In higher dimensions the bound on $A_{ij}$ from \eqref{boundSpain} can not be better than exponential. This is because $f(\beta)$ is a monotonically increasing function of $\beta$ which diverges for some $|\beta|=\beta^*$. In particular \begin{eqnarray} \label{boundonbound} e^{-\beta \omega}|A(-i\beta)|\geq e^{-\beta^* \omega} |A| \end{eqnarray} for any $\beta$ and $\omega\geq 0$. To find leading exponent we optimize \eqref{boundSpain} over $\beta$ to find, \begin{eqnarray} \label{optimalbeta} \beta={\ln\left({\omega(1+q_0^{-1})\over 2(\omega+J)}\right)\over 2J}, \end{eqnarray} and $|A_{ij}|\leq |A| \kappa(\omega)$, where $\omega=|E_i-E_i|$, \begin{eqnarray} \label{bme} \kappa(\omega)&=& C' q_0^{-1} \tilde{\omega} \left({\tilde{\omega}(1+q_0^{-1})\over 1+\tilde{\omega}}\right)^{-1-\tilde{\omega}},\quad \tilde{\omega}=\omega/(2J). \nonumber \end{eqnarray} Taking $\omega\rightarrow \infty$ limit, we find that the asymptotic exponential behavior is given by \eqref{boundonbound}, \begin{eqnarray} \kappa(\omega) \lesssim C'' \omega e^{-\beta^*\omega}, \quad \omega\gg J, \end{eqnarray} where $C''$ is some $\omega$-independent constant. Constraints on individual matrix elements \eqref{bme1} and \eqref{boundonbound} only depend on energy difference $\omega$. In the case when the system satisfies ETH, off-diagonal matrix elements for $i\neq j$ are known to be exponentially suppressed by the entropy factor, $|A_{ij}|^2\sim e^{-S}$. Therefore for the chaotic systems the bound will be trivially satisfied unless $\omega$ is extensive. The bound analogous to \eqref{bme} has previously appeared in \cite{de_Oliveira_2018}, with $\beta^*$ given by \eqref{betastar} with $\varepsilon=1$. \subsection{Constraints on power spectrum} \label{sec:ps} Bounds on individual matrix elements found above can be extended to the autocorrelation function of a Hermitian local $A$, \begin{eqnarray} \label{C} C(t)\equiv {\rm Tr}(\rho A(t) A), \end{eqnarray} and its power spectrum \begin{eqnarray} \nonumber \Phi(\omega)&=& {1\over 2\pi}\int_{-\infty}^\infty dt\, e^{-i\omega t} C(t)=\\ && \sum_{i,j} p_i |A_{ij}|^2\delta (E_i-E_j-\omega). \label{ps} \end{eqnarray} Here $\rho$ is an arbitrary density matrix which commutes with the Hamiltonian, $\rho=\sum_i p_i |E_i\rangle \langle E_i|$, ${\rm Tr}\rho=1$. Although bounds on moments $M_k$ derived below are universal for all $\rho$, in what follows we will be most interested in the case when $\rho$ is the Gibbs ensemble $\rho=e^{-H/T}/Z$, in which case autocorrelation function and power spectrum will be denotes by $C_T$ and $\Phi_T$ correspondingly. 
As a function of complex argument $C_T$ satisfies, \begin{eqnarray} \label{reflsymm} C_T(t-i/(2T))&=&C_T(-t-i/(2T)),\\ C_T(t^*)&=&(C_T(t))^*. \end{eqnarray} First we notice that \begin{eqnarray} |C(t)|\leq |A(t/2)|^2\leq |A|^2 f^2(|t|/2), \end{eqnarray} for any complex $t$, which guarantees analyticity of $C(t)$ for 1D systems on the entire complex plane. Using the bound on individual nested commutators \eqref{Bell} and \eqref{nested} one can bound the growth of Taylor coefficients of $C$, \begin{eqnarray} \label{momenta} M_k=\int\limits_{-\infty}^\infty \Phi(\omega)\omega^k d\omega={\rm Tr}(\rho \underbrace{[H,[\dots,[H,A]]]}_{k\ \rm commutators} A).\quad \end{eqnarray} To obtain an optimal bound, nested commutators should be split equally between two $A$'s using cyclicity of trace \begin{eqnarray} |M_{2k+i}| &\leq & |A|^2 (2J)^{2k+i} R_k R_{k+i},\quad i=0,1. \end{eqnarray} Here $R_k=B_k(2)$ for infinite 1D system, $R_k=B_k$ for semi-infinite 1D system with a boundary, and $R_k=C' P_k(q_0^{-1})$ for $D\geq 2$. Using the asymptotic behavior of Bell polynomials \cite{khorunzhiy2019asymptotic} \begin{eqnarray} B_n(x)\sim \left({n\,\big(1+o(1)\big)\over e \log(n/x)}\right)^n,\quad n\gg x, \end{eqnarray} and the Stirling approximation formula, the bound on moments for $k\gg 1$ can be rewritten as (for the infinite 1D system) \begin{eqnarray} \label{mu1d} |M_k|\leq |A|^2 (2J)^k \left({k\over 2\, e \log k}\right)^k \times e^{o(k)}. \end{eqnarray} It is easy to see that the Taylor series of $C_T(t)$ converges in the whole complex plane, as was pointed out above. In higher dimensions, to find asymptotic behavior of $P_k(x)$ for large $k$, we use the following representation \begin{eqnarray} P_k(x)=\sum_{j=1}^n j!\, S(k,j)\, x^j ={1\over 1+x}\sum_{j=1}^\infty j^k \left({x\over 1+x}\right)^j. \nonumber \end{eqnarray} Substituting the sum over $j$ by an integral and taking saddle point approximation gives \begin{eqnarray} \label{mubound} |M_k| \leq |A|^2 \left({q_0\over 1+q_0}\right)^2\left({k\over 2 e \beta^*}\right)^k \times e^{o(k)}. \end{eqnarray} Focusing on the case when $\rho=e^{-H/T}/Z$, \eqref{mubound} guarantees that Taylor series of $C_T(t)$ converges absolutely inside the disc $|t|\leq 2\beta^*$. By representing $C_T$ as a sum over individual matrix elements it is easy to see that if the sum for $C_T(-i\beta)$ is absolutely convergent, then it is absolutely convergent for any $C_T(t)$, $\Im(t)=-i\beta$. Therefore $C_T(t)$ is analytic inside the strip $2\beta^*>\Im(t)> -2\beta^*$. Because of reflection symmetry \eqref{reflsymm} function $C_T(t)$ must be analytic inside a wider strip $2\beta^*>\Im(t)> -2\beta^*-1/T$ \footnote{If $1/(2T)\leq 2\beta^*$, a union of an original strip $|\Im(t)|<2\beta^*$ and its reflection around the point $\beta=-1/(2T)$ is a wider strip $2\beta^*>\Im(t)>-2\beta^*-1/T$. Function $C_T(t)$ has to be analytic there. If $1/(2T)>2\beta^*$ the same union consists of two strips, $2\beta^*>\Im(t)>-2\beta^*$ and $2\beta^*-1/T>\Im(t)>-2\beta^*-1/T$. It is easy to show though that $C_T$ has to be analytic also in between, $-2\beta^*>\Im(t)>2\beta^*-1/T$. From the definition $C_T(t)={\rm Tr}(\rho^a A \rho^b A)$, $a=i t+1/T$, $b=it$, and positivity $\Re(a),\Re(b)>0$ it follows that the sum over Hilbert space converges, $C_T$ is well defined and therefore analytic.}. 
Hence symmetrically ordered autocorrelation function \begin{eqnarray} \label{Wight} C^W_T(t)\equiv {\rm Tr}(\rho^{1/2}A(t)\rho^{1/2}A)=C_T(t-i/(2T)), \end{eqnarray} is analytic inside the strip $2\beta^*+1/(2T)>\Im(t)> -2\beta^*-1/(2T)$, which is wider than the strip of analyticity of $C_T(t)$, and indicates a more rapid exponential decay of the power spectrum $\Phi^W_T$ of \eqref{Wight} in comparison with $\Phi_T(\omega)$. The logic above is general and does not require any specific details of $M_k$. Using reflection symmetry \eqref{reflsymm} we have shown in full generality that if $C_T(t)$ develops a singularity at $t=\pm i 2\beta^*$, then $C^W_T(t)$ is analytic at least inside the strip $|\Im(t)|\leq 2\beta^* +1/(2T)$. There is another integral bound on power spectrum, valid for any density matrix $\rho$ which commutes with $H$. By integrating \eqref{ps} we find the following inequality \begin{eqnarray} \nonumber \int_{\omega}^\infty d\omega' \Phi(\omega')\equiv \sum_{E_i\geq E_j+\omega} p_i\, |A_{ij}|^2=\qquad \\ \nonumber \sum_{E_i \geq E_j+\omega} p_i\, e^{-2\beta(E_i-E_j)}|A(-i\beta)_{ij}|^2 \leq \\ \nonumber \sum_{E_i \geq E_j+\omega} \! \! \! \!\!\! p_i\, e^{-2\beta \omega }|A(-i\beta)_{ij}|^2 \leq e^{-2\beta \omega }\sum_{i, j} p_i\, |A(-i\beta)_{ij}|^2 =\\ \nonumber e^{-2\beta \omega }\, {\rm Tr}(\rho A(-i\beta) A(i\beta))\leq e^{-2\beta \omega } |A(-i\beta)|^2. \end{eqnarray} Here $\omega$ is non-negative and in the second equality we used \eqref{identity} with an arbitrary positive $\beta$. Now we can use $|A(-i\beta)|\leq |A|f(\beta)$ and optimize over $\beta$, yielding \begin{eqnarray} \label{abaninbound} \int_{\omega}^\infty d\omega' \Phi(\omega')\leq |A|^2 \kappa(\omega)^2. \end{eqnarray} Function $\kappa$ is given by \eqref{bme1} and \eqref{bme} in $D=1$ and $D\geq 2$ correspondingly. When $\rho$ is maximally mixed state, i.e.~temperature $T$ is infinite, the bound can be strengthen to \begin{eqnarray} \label{ka} \int\limits_{|\omega'|\geq \omega} d\omega' \Phi(\omega')\leq |A|^2 \kappa(\omega)^2. \end{eqnarray} We would like to emphasize that all bounds discussed above, i.e.~bounds on $M_k$ and \eqref{abaninbound}, are integral in form. We do not know a rigorous way to directly constrain asymptotic behavior of $\Phi(\omega)$. At the same time at physical level of rigor, if we assume that $\Phi(\omega)$ is a smoothly behaving function at large $\omega$, analyticity of $C(t)$ inside the strip $|\Im(t)|<2\beta^*$ immediately implies that power spectrum in $D\geq 2$ is exponentially suppressed by \begin{eqnarray} |\Phi(\omega)|\lesssim |A|^2 e^{-2\beta^* \omega}, \qquad \omega \rightarrow \infty. \end{eqnarray} In 1D we similarly find super-exponential suppression \begin{eqnarray} |\Phi(\omega)|\lesssim |A|^2 e^{-\omega(1+\ln(4J/\omega))/J}, \qquad \omega \rightarrow \infty. \end{eqnarray} The bound on moments for large $k$ (\ref{mu1d},\ref{mubound}) and the integral bound \eqref{abaninbound} for large $\omega$ follow from here via saddle point approximation. Superexponential suppression of $\Phi(\omega)$ emphasizes peculiarity of one-dimensional systems. In particular, it implies that high frequency conductivity \cite{Mukerjee_2006} and energy absorption \cite{Abanin_2015} for such systems will be superexponentially suppressed. This is a very special behavior, which should be kept in mind in light of the numerical studies, which are often limited to one dimensions, and therefore may not capture correct physical behavior. 
An exponential bound on the integral of $\Phi(\omega)$ was first established in \cite{Abanin_2015}, where the authors also noted superexponential suppression in 1D, albeit without proposing an explicit analytic form. \section{Finite size scaling and chaos} \label{sec:S} The bounds obtained in section \ref{sec:normbound} correctly account for the number of non-trivial nested commutators $[h_{I_k},[\dots,[h_{I_1},A]]]$ but do not take into account peculiarities of individual local Hamiltonians $h_I$. We therefore expect our bound to be strongest possible among the uniform bounds for the entire family of local short-ranged Hamiltonians defined on a particular lattice. We further assumed that each nested commutator is equal to its maximal possible value $(2J)^k |A|$. This is certainly too conservative, but for the chaotic systems, i.e.~in absence of some additional symmetries, we expect a finite fraction of nested commutators to grow as a power of $k$. We therefore expect that for large $\beta$ our bounds \eqref{answer1} and \eqref{answer} to correctly describe growth of operator norm in local chaotic systems with some effective values of $J$, as it happens in \cite{Bouch}. In particular in one dimensions we expect $|A(-i\beta)|$ to grow double-exponentially, and in higher dimensions we expect $|A(-i\beta)|$ to diverge at some finite $\beta^*$. We similarly expect the bound on spatial growth outlined in section \ref{sec:LB} to correctly capture the spread of local operators when the system is chaotic. An indirect evidence to support that comes from the numerical results of \cite{Beach_2019}, i.e.~logarithmic convergence time of a numerical Euclidean time algorithm, in agreement with \eqref{LRsimplify}. Below we further outline how $|A(-i\beta)|$ reflects chaos of the underlying system when the system size is finite. It follows from \eqref{ourresult} that for large $\beta$ animal histories with the largest number of bonds will dominate, \begin{eqnarray} f(\beta)\propto q^j\sim e^{2J j |\beta|}, \label{growth} \end{eqnarray} where $j$ is the total number of bonds in the system, i.e. $j$ is proportional to the volume. Let us compare this behavior with the growth of the Frobenius norm, \begin{eqnarray} C(-i\beta)={{\rm Tr}(A(-i\beta)A)\over {\rm Tr(1)}}=\sum_{ij} e^{\beta(E_i-E_j)}{|A_{ij}|^2\over {\rm Tr(1)}}. \end{eqnarray} At large $\beta$ leading behavior is \begin{eqnarray} C(-i\beta) \propto e^{\beta \Delta E} \label{asym}, \end{eqnarray} where $\Delta E$ is the maximal value of $\Delta E=E_i-E_j$ such that corresponding matrix element $A_{ij}$ is not zero. (In other words $\Delta E$ is the support of $\Phi(\omega)$.) For the chaotic systems satisfying Eigenstate Thermalization Hypothesis we expect most matrix elements to be non-zero, even for extensive $\Delta E$, matching extensive behavior of $2J j$ in \eqref{growth}. Assuming qualitative behavior of \eqref{answer} is correct for non-integrable systems, going back to thermodynamic limit in $D\geq 2$, we expect a singularity of $|A(-i\beta)|$ and $C(-2i\beta)$ at some finite $\beta$. This singularity has a clear interpretation in terms of $A$ spreading in the operator space. We first interpret $(A|B):={\rm Tr}(A^\dagger B)/{\rm Tr}(1)$ as a scalar product in the space of all operators and denote corresponding Frobenius norm of $A$ by $|A|_F\equiv (A|A)^{1/2}$. 
Then if $A$ were typical, i.e.~random in the space of all operators, \begin{eqnarray} \nonumber C(-i\beta)={\rm Tr}(A(-i\beta)A)=|A(-i\beta/2)|_F^2 {Z(\beta)Z(-\beta)\over Z(0)^2},\quad \\ Z(\beta)\equiv {\rm Tr}\, e^{-\beta H}.\, \,\, \nonumber \end{eqnarray} Euclidean time evolution can be split into two parts, $A(-i(\beta+\beta'))=e^{\beta' H}A(-i\beta)e^{-\beta' H}$ such that \begin{eqnarray} \label{toda} C(-i(\beta+\beta'))=(A(-i\beta/2)|e^{i\,\beta' {\rm adj}_H}|A(-i\beta/2)). \end{eqnarray} At time $\beta=0$ we start with a local operator, which is not typical. In principle $A(-i\beta/2)$ only explores a particular trajectory in the space of all operators, and therefore can not be fully typical at any $\beta$. Yet, if we assume that by the time $\beta$ the trajectory of $A$ has explored substantial part of operator space such that $A(-i\beta/2)$ can be considered typical enough, we obtain \begin{eqnarray} C(-i(\beta+\beta'))\approx C(-i\beta) {Z(\beta')Z(-\beta')\over Z(0)^2}. \label{CC} \end{eqnarray} Taking into account that free energy $\ln(Z)$ is extensive, we immediately see that \eqref{CC} diverges for any $\beta'>0$. Hence, the singularity of $C(-i\beta)$ and thus also of $|A(-i\beta/2)|$ marks the moment when $A(-i\beta/2)$ becomes typical. This picture is further developed in \cite{dymarsky2019toda}, where we show that the singularity of $|A(-i\beta/2)|$ can be associated with delocalization of $A$ in Krylov space. It is interesting to note that since $C(t)$ is analytic for local one-dimensional systems, for such systems, even non-integrable, $A$ never becomes typical and hence these systems can not be regarded as fully chaotic. We separately remark that the conventional time evolution $C(t)=(A(t/2)|A(-t/2))$ does not have an interpretation as the Frobenius norm-squared of $A(t/2)$, therefore \eqref{toda} does not apply and even if $A(t/2)$ becomes sufficiently typical at late $t$, the analog of \eqref{CC} may not hold. If the system is finite, at large $\beta$ free energy simply becomes $\ln Z(\beta)/Z(0) \sim -\beta E_{\rm m}$, where $E_{\rm m}$ is extensive (minimal or maximal ) energy of the system. Hence \eqref{CC} will be proportional to $e^{\beta' \Delta E}$, where $\Delta E$ is extensive, in full agreement with \eqref{asym}. This gives the following qualitative behavior of $C(-i\beta)$ when the chaotic system is sufficiently large but finite. For small $\beta$, $\ln C(-i\beta)$ will behave as $\propto e^q$ in 1D and $\propto \ln(q_0-q)$ in higher dimensions. This growth will stop at $\beta\sim \log(L)$ in 1D or $\beta\sim \beta^*$ in $D\geq 2$, at which point in both cases $\ln C(-i\beta)$ will be extensive. At later times $\ln C(-i\beta)$ will grow as ${\beta \Delta E}$ with some extensive $\Delta E$. In the non-integrable case the transition between two regimes, ``thermodynamic'' when $C(-i\beta)$ has not yet been affected by the finite system size, and ``asymptotic,'' is very quick, at most double-logarithmic in 1D. Behavior of chaotic systems described above should be contrasted with integrable models. In this case most matrix elements $A_{ij}$ are zero and for a wide class of systems, including classical spin models and systems with projector Hamiltonians, support of $\Phi(\omega)$ remains bounded in the thermodynamic limit. (In terms of the Lanczos coefficients, introduced in the next section, this is the case of $\lambda=0$.) For such systems the bounds (\ref{answer1}) and (\ref{answer}) will be overly conservative. 
For sufficiently large systems and large $\beta$ we expect \eqref{asym} with a system size independent $\Delta E$. This asymptotic behavior will emerge in finite system-independent Euclidean time. Infinity norm $|A(-i\beta)|$ will behave similarly. We further can use \eqref{asym} to estimate the Frobenius norm of nested commutators $| \underbrace{[H,[\dots,[H,A]]]}_{k\ \rm commutators} |_F \leq |A|\Delta E^k$. Assuming infinity and Frobenius norms exhibit qualitatively similar behavior we can substantially improve the Euclidean analog of the Lieb-Robinson bound \begin{eqnarray} |[A(-i\beta),B]|\leq 2|A||B| \sum_{k=\ell}^\infty {\beta^k \Delta E^k\over k!} \sim 2|A||B|{\beta^\ell \Delta E^\ell\over \ell !}, \nonumber \end{eqnarray} where last step assumes $\beta \Delta E\ll \ell$. This bound has the same structure as the conventional Lieb-Robinson bound in Minkowski space \eqref{LRM}. Thus, in the case of non-interacting models or projector Hamiltonians ($\lambda=0$ in the language of next section) we find ballistic spreading of operators for any complex $t$. In the case of a general integrable model, the support of $\Phi(\omega)$ is extensive and the behavior is more intricate. In many explicit examples in the thermodynamic limit $\Phi(\omega)$ decays as a Gaussian, and $C(-i\beta)\propto e^{(J\beta)^2}$ with some appropriate local coupling $J$ \cite{brandt1976exact,perk1977time,liu1990infinite,viswanath2008recursion,calabrese2012quantum}. (This is the case of $\lambda=1$ in terms of the next section. See appendix \ref{appx:D} where we derive the Gaussian behavior starting from $\lambda=1$.) Using the same logic as above this leads to the Euclidean Lieb-Robinson bound of the form \begin{eqnarray} |[A(-i\beta),B]|\lesssim 2|A||B|{(\beta J)^{2\ell}\over \ell !}, \end{eqnarray} which indicates a polynomial propagation of the signal $\ell\propto \beta^2$. For a finite system of linear size $L$ we may expect Gaussian behavior $C(-i\beta)\propto e^{(J\beta)^2}$ up to the times $\beta\propto L^{1/2}$, after which the asymptotic behavior \eqref{asym} should emerge. Although the model is integrable, $\Delta E$ is extensive, which implies the transition between ``thermodynamic'' and ``asymptotic'' behavior is long and will take up to $\beta \sim L$. This indicates the qualitative difference between integrable and non-integrable (chaotic) models. When the system is finite in both cases the asymptotic behavior is given $C(-i\beta)\propto e^{\beta \Delta E}$ with an extensive $\Delta E$ (except for the $\lambda=0$ case), but asymptotic behavior will emerge quickly, in finite (for $D\geq 2$) or logarithmic (for $D=1$) times in the non-integrable case, while in the integrable case asymptotic behavior will emerge much slower, after polynomial times in $L$. A qualitatively similar picture will also apply if integrability is broken weakly, by a parametrically small coupling. For an operator initially characterized by $\lambda=0$, the correlation function will first exhibit \eqref{asym} with some sub-extensive $\Delta E$, which will gradually grow to extensive values. It would be interesting to study this transition in detail, to see if the required times may be parametrically longer than $\beta \sim L$. We stress that non-analyticity of $C(t)$ at imaginary times is due to $A(-i\beta)$ becoming typical and is not related to non-analyticity of free energy $\ln Z(\beta)$ due to a phase transitions at some temperature $\beta$. 
Indeed, $C(t)$ for the SYK model is known to have a pole at imaginary time \cite{MS}, while there is no phase transition and free energy is analytic. On the contrary, for the 3d Ising $\ln Z(\beta)$ is non-analytic due to a phase transition, but $C(t)$ is entire, simply because $A(t)$ explores only a very small part of the corresponding Hilbert space. In conclusion, we note that the singularity of $|A(-i\beta)|$ and $C(-i\beta)$ at finite $\beta$ in the thermodynamic limit has an IR origin. A straightforward attempt to extend the analysis of this section to field theoretic systems, which can be obtained from lattice systems via an appropriate limit, fails because both $|A(-i\beta)|$ and $C(-i\beta)$ are UV-divergent, and this obscures the IR divergence due to chaos. Formulating the criterion of chaos for QFTs using Euclidean operator growth thus remains an open question. \section{Constraints on Lanczos coefficients} \label{sec:L} The bound on power spectrum established in section \ref{sec:ps} can be used to constrain the growth of Lanczos coefficients. To remind the reader, Lanczos coefficients $b_n$ are non-negative real numbers associated with an orthonormal basis in the Krylov space $A_n$ generated by the action of $H$ on a given operator $A_0=A$. Starting from a scalar product \begin{eqnarray} \label{sp} (A,B)\equiv {\rm Tr}(\rho^{1/2}A^\dagger \rho^{1/2}B), \end{eqnarray} and choosing $A$ normalized such that $|A|^2=(A,A)=1$, Lanczos coefficients are fixed iteratively from the condition that operators $A_n$ defined via $A_{n+1}=([H,A_n]-b_n A_{n-1})/b_{n+1}$ are orthonormal, $(A_n,A_m)=\delta_{nm}$. An autocorrelation function $C^W=(A(t),A)$, defined via scalar product \eqref{sp}, can be parametrized in a number of ways, via its power spectrum $\Phi^W(\omega)$, Taylor coefficients (moments) $M_k$, or Lanczos coefficients $b_n$. Schematically an asymptotic growth of $b_n$ for large $n\gg 1$ is related to the behavior of $M_k$, $k\gg 1$, high-frequency tail of $\Phi^W(\omega)$, $\omega\rightarrow \infty$, or growth of $C^W(t)$ at the Euclidean time $t=-i\beta$. But the detailed relation is not always trivial. {\it Assuming} exponential behavior of power spectrum at large frequencies \begin{eqnarray} \label{asps} \Phi^W(\omega) \sim e^{-(\omega/\omega_0)^{2/ \lambda}}, \end{eqnarray} it is trivial to obtain the growth of $M_k$ and $C^W(\beta)$ by calculating corresponding integrals over $\omega$ using saddle point approximation. Although much less trivial, but starting from the power spectrum \eqref{asps}, it is also possible to establish an asymptotic behavior of Lanczos coefficients \cite{lubinsky1993update} \begin{eqnarray} b_n^2\propto n^\lambda. \label{lambda} \end{eqnarray} The converse relations between asymptotic behavior of $b_n$, $M_k$, $\Phi^W(\omega)$ and $C^W(-i\beta)$ are much more subtle and may not hold. Thus, we show in the appendix \ref{appx:bfromM} that smooth asymptotic behavior of $M_k$ does not imply smooth asymptotic of $b_n$. It was proposed long ago that $\lambda$ defined in \eqref{lambda} falls into several universality classes, characterizing dynamical systems \cite{liu1990infinite}. In particular it was observed that $\lambda=0$ for non-interacting and $\lambda=1$ for interacting integrable models. (It should be noted that since $\lambda$ characterizes a particular operator, the same system may exhibit several different values of $\lambda$.) Recently it was argued in \cite{Parker_2019} that $\lambda=2$ is a universal behavior in chaotic systems in $D\geq 2$. 
To thoroughly investigate possible implications of this conjecture, it is desirable to derive the constraints on the behavior of $C^W$, $\Phi^W$, and $M_k$ starting directly from the assumption that $b_n$ is a smooth function of $n$ for large $n$. In full generality Lanczos coefficients $b_n$ are related to the moments $M_k$ via \begin{eqnarray} \label{Dyck} M_k=\!\!\!\!\sum_{h_1\dots h_{k-1}} b_{(h_0+h_1)/2} b_{(h_1+h_2)/2} \dots b_{(h_{k-1}+h_{k})/2}.\,\, \end{eqnarray} Here the sum is over Dyck paths parameterized by the sets satisfying $h_0=h_k=1/2$, and $h_{i+1}=h_i\pm 1$, $h_i> 0$. {Assuming} \eqref{lambda} out goal is to deduce an asymptote of $M_{2k}$ using \eqref{Dyck}. We develop the approach of summing over weighted Dyck paths in the appendix \ref{appx:D}. Here we just mention main results. If $b_{n}^2$ is asymptotically a smooth function of $n$, path integral over Dyck paths can be evaluated via saddle point approximation by identifying a trajectory in the space of indexes, which gives the leading contribution. Thus if $b_n$ is smooth, $M_k$ is also smooth. Furthermore, if $\lambda=2$, $b_{n}^2 \sim \alpha^2 n^2$, and the leading order behavior is \begin{eqnarray} M_{2k}\approx \left(2\alpha\over \pi\right)^{2k} (2k)! \label{DyckM} \end{eqnarray} Thus, starting from the asymptotic behavior $b_n^2\propto \alpha^2 n^2$ we necessarily find that $C^W$ has a singularity at $\beta=\pi/(2\alpha)$, in full agreement with the conjecture of previous section that singularity in Euclidean time is the characteristic property of chaos. Provided $C^W(t)$ is analytic inside a strip $\Im(t)\leq \bar{\beta}_W$ for some $\bar{\beta}_W$ would immediately imply a bound \begin{eqnarray} \label{boundonL} \alpha\leq {\pi\over 2\bar{\beta}_W}. \end{eqnarray} When $\rho\propto e^{-H/T}$, provided autocorrelation function $C_T$ \eqref{C} is analytic inside $|\Im(t)|\leq \bar{\beta}(T)$, function $C^W_T$ defined in \eqref{Wight} will b analytic at least inside $|\Im(t)|\leq \bar{\beta}_W=\bar{\beta}(T)+1/(2T)$ (see the discussion in section \ref{sec:ps}) and therefore \begin{eqnarray} \label{boundonLT} \alpha\leq {\pi T \over 1+2 T \bar{\beta}(T)}. \end{eqnarray} We have also established in section \ref{sec:hd} that $\bar{\beta}(T) \geq 2\beta^*$ for all $T$. The coefficient $\alpha$ has been recently conjectured to bound maximal Lyapunov exponent governing exponential growth of the out of time ordered correlation function (OTOC) \cite{Parker_2019,Murthy:2019fgs}, $\lambda_{\rm OTOC}\leq 2\alpha$. This leads to the improved bound on chaos \begin{eqnarray} \lambda_{\rm OTOC}\leq {2\pi T\over 1+4 T \beta^*}, \label{OTOC} \end{eqnarray} which is stronger than the original bound $\lambda_{\rm OTOC}\leq {2\pi T}$ of \cite{MSS}. In the limit of quantum field theory, $(\beta^*)^{-1}$ will be of the order of UV-cutoff, reducing \eqref{OTOC} to the original bound. Yet the new bound is non-trivial for discrete models exhibiting chaos. To illustrate the improved bound, we plot \eqref{OTOC} in Fig.~\ref{Fig:SYK} for the SYK model in the large $q$-limit \footnote{Here $q$ is a parameter of SYK model and should not be mixed with $q(\beta)$ defined in \eqref{qdef}.} against the exact value of $\lambda_{\rm OTOC}$, evaluated in \cite{MS,Parker_2019}. We take $2\beta^*=1$ to ensure that the autocorrelation function $C_T$ is analytic inside $\Im(t)<2\beta^*=1$ for all $T$. 
Temperature $T$ is parametrized via $1\geq v\geq 0$, $\pi v T=\cos(\pi v/2)$ such that the exact Lyapunov exponent is \begin{eqnarray} \label{OTOCexact} \lambda_{\rm OTOC}=2\cos(\pi v/2). \end{eqnarray} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{syk.pdf} \caption{Lyapunov exponent $\lambda_{\rm OTOC}$ for the SYK model as a function of parameter $v$, which is related to temperature, $\pi v T=\cos(\pi v/2)$. Limit $v\rightarrow 0$ corresponds to high temperatures, $v\rightarrow 1$ to small temperatures. Blue line -- exact analytic result \eqref{OTOCexact}, orange dashed line -- improved bound \eqref{OTOC} with $2\beta^*=1$, green dotted line -- original Maldacena-Shenker-Stanford bound $2\pi T$.} \label{Fig:SYK} \end{figure} We have emphasized above that for 1D systems with short range interactions $C_T(t)$ has to be analytic in the entire complex plane. This imposes a bound on the growth of Lanczos coefficients. Assuming $b_n$ is a smooth function of $n$ \cite{Parker_2019} proposed that the asymptotic growth in 1D non-integral systems will acquire a logarithmic correction \begin{eqnarray} \label{1Dlog} b_{n+1}\approx \alpha {n\over \log(n/n_0)}. \end{eqnarray} Using the integral over weighted Dyck paths in the appendix \ref{appx:D}, we find this to be consistent with the behavior of $M_k$ outlined in \eqref{mu1d} provided \begin{eqnarray} \alpha=\pi J/2. \end{eqnarray} Sum over Dyck paths in the case of $\lambda=1$ associated with integrable systems is discussed in the appendix \ref{appx:D}. Since for the local models $C^W(t)$ is analytic inside a sufficiently small vicinity of $t=0$, asymptotic behavior with $\lambda>2$ in such systems is excluded. \section{Conclusions} \label{sec:summary} We have derived a number of rigorous bounds on the infinity norm of a local operator evolved in Euclidean time, and extended them to autocorrelation function \eqref{Cdef}. The novel ingredient of our approach is the counting of {\it lattice animal histories} and formula \eqref{ourresult}, using which we solved exactly combinational problem of counting nested commutators for Bethe lattices (and establish acorrect asymptotic for lattices in $D\geq 2$). Some of the bounds derived in this paper were known before. We improved numerical coefficients, including the location of the singularity $\beta^*$ in $D\geq 2$. Our results are strongest possible among the bounds uniformly valid for all local Hamiltonians characterized by the same $|h_I|\leq J$ defined on a lattice of a particular geometry. We have also established Euclidean version of Lieb-Robinson bound on the spatial operator growth. In 1D operators spread at most exponentially, while in $D\geq 2$ operators can reach spatial infinity in finite Euclidean time. When the system is integrable, in all $D$ operators spread polynomially. As a main point of this paper, we advocated that Euclidean operator growth reflects chaos in the underlying quantum system. If the system is chaotic, the norm growth and spatial growth are maximal possible and the operator norm diverges at some finite Euclidean time. We interpreted this divergence as a consequence of typicality in Krylov space. There are several distinct characteristic properties of chaos for many-body quantum systems. One is the Eigenstate Thermalization Hypothesis \cite{srednicki1994chaos}, which is concerned with individual matrix elements. Another popular probe is out of time ordered correlation function, which extends the notion of exponential Lyapunov growth to quantum case. 
Its use as a characteristic of many-body quantum chaos was pioneered in \cite{PhysRevE.89.012923,Elsayed_2015,Tarkhov_2018} and brought to the spotlight by applications to quantum gravity \cite{MSS}. Despite recent efforts \cite{Lensky:2018hwa,Foini_2019,Chan:2018fsp,Murthy:2019fgs} there is no clear understanding of how to relate these two characteristics of chaos to each other. We hope that the Euclidean growth, which on the one hand is related to ETH via the behavior of $C(-i\beta)$ at large $\beta$, see \eqref{asym}, and on the other hand is related to OTOC via the bound \eqref{OTOC}, may provide such a bridge. \section*{Acknowledgments} We would like to thank Dima Abanin, Xiangyu Cao, Nick Hunter-Jones, Vadim Oganesyan, and Dan Parker for discussions. We are also grateful to Vladimir Kravtsov, William Berdanier, and Sarang Gopalakrishnan for raising our interest in Bethe lattices and for discussions. AD gratefully acknowledges support from the Simons Center for Geometry and Physics, Stony Brook University at which some of the research for this paper was performed. AD is supported by the National Science Foundation under Grant No. PHY-1720374.
{ "attr-fineweb-edu": 1.65332, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd4k4uzlh0_z-eodK
\section{Introduction}\label{sec1} Low-rank matrix recovery problems aim at recovering a true unknown low-rank matrix $M^*\in\mathbb{R}^{n\times m}$ from as few observations as possible, and have wide applications in a host of fields such as statistics, control and system identification, signal and image processing, machine learning, quantum state tomography, and so on (see, e.g., \cite{Fazel02,Davenport16,GroLFBE10,Zhou15} and the reference therein). When the rank $r^*$ of $M^*$ or its tight upper estimation, say a positive integer $\kappa$, is available, these problems can be modeled as the following rank constrained optimization model \[ \min_{X\in \mathbb{R}^{n\times m}}\Big\{f(X)\ \ {\rm s.t.}\ \ {\rm rank}(X)\le\kappa\Big\} \] or its factorized form $\min_{U\in \mathbb{R}^{n\times\kappa},V\in\mathbb{R}^{m\times\kappa}}f(UV^{\mathbb{T}})$, where $f\!:\mathbb{R}^{n\times m}\rightarrow \mathbb{R}_{+}$ is a loss function. However, in many scenarios, such an upper estimation is unavailable. Now it is reasonable to consider \begin{equation}\label{rank-reg} \min_{X\in \mathbb{R}^{n\times m}}\Big\{f(X)+\lambda\,{\rm rank}(X)\Big\}, \end{equation} which leads to a desirable low-rank solution by tuning the regularization parameter $\lambda>0$. Unless otherwise stated, we assume that $f$ is smooth and its gradient $\nabla\!f$ is Lipschitz with modulus $L_{\!f}$. Owing to the combinatorial property of the rank function, problem \eqref{rank-reg} is NP-hard and it is impossible to seek a global optimal solution via an algorithm with polynomial-time complexity. A common way to deal with them is to achieve a desirable solution by solving its convex relaxation problems. For the rank regularized problem \eqref{rank-reg}, the popular nuclear norm relaxation method (see, e.g., \cite{Fazel02,Recht10,Candes09,Candes11}) yields a desirable solution by solving a single convex minimization problem \begin{align}\label{Nuclear-norm} \min_{X\in \mathbb{R}^{n\times m}}\Big\{f(X)+\lambda\|X\|_*\Big\}. \end{align} In the past decade, this method has made great progress in theory (see, e.g., \cite{Candes09,Candes11,Recht10,Negahban11,Negahban12}). In spite of the favorable theoretical results, improving its computational efficiency remains a challenge. In fact, almost all convex relaxation algorithms for \eqref{rank-reg} require an SVD of a full matrix in each iteration, which poses the major computational bottleneck and restricts their scalability to large-scale problems. Inspired by this, recent years have witnessed the renewed interest in the Burer-Monteiro factorization model \cite{Burer03} of low-rank optimization problems. By replacing $X$ with its factored form $UV^\mathbb{T}$, where $(U,V)\in\!\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$ for some $r\in(r^*,\min(n,m))$, the factorized form of \eqref{Nuclear-norm} is \begin{equation}\label{MC-Fnorm} \min_{U\in \mathbb{R}^{n\times r},V\in \mathbb{R}^{m\times r}} \left\{F_{\lambda}(U,V):=f(UV^\mathbb{T}\!)+\frac{\lambda}{2}\big(\|U\|_F^2+\|V\|_F^2\big)\right\}. \end{equation} Although the factorization form tremendously reduces the number of optimization variables since $r$ is usually smaller than $\min(n,m)$, the intrinsic bi-linearity makes the factorized objective functions nonconvex and introduces additional critical points that are not global optima of factored optimization problems. 
A recent research line for factorized models focuses on their nonconvex geometry landscape, especially the strict saddle property (see, e.g., \cite{Park17,Ge17,Bhojanapalli16,Li18,Zhu181}), that is, each critical point of the nonconvex factorized models is shown to be either a global optimizer or a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Another research line considers the (regularized) factorization models from a local view and aims to characterize the growth behavior of objective functions around the set of global optimal solutions (see, e.g., \cite{Jain13,Park16,SunLuo16,Tu16,Zheng16}). Most of these results are achieved for the factorized model under an implicit assumption that $\kappa=r^*$ or $r=r^*$. As mentioned above, in many scenarios, we only obtain a rough upper estimation for $r^*$. Thus, to ensure that these theoretical results fully work in practice, it is necessary to seek a factorized model involving a regularized term to reduce $r$ to $r^*$ automatically. The square of the Frobenius-norm in \eqref{MC-Fnorm} plays such a role, but its ability to reduce $r$ is weak since the nuclear norm has a big difference from the rank function. In fact, as shown by the numerical results in \cite{Fang18}, the nuclear norm regularized model has a worse performance on matrix completion in non-uniform sampling setting (see also Figure \ref{fig1} in Section \ref{sec5.3}). To bridge the gap between the nuclear norm and rank function, Shang et al. \cite{Shang2016} considered the factorization model involving the bi-trace and tri-trace quasi-norm of factor matrices. Their bi-trace and tri-trace quasi-norm is only an approximation of the rank function, and it is unclear whether their model is effective or not for matrix completions in non-uniform sampling. Notice that for any $X\in\mathbb{R}^{n\times m}$ with ${\rm rank}(X)\le r$, \begin{equation}\label{rank-chara} {\rm rank}(X)=\min_{U\in\mathbb{R}^{n\times r},V\in\mathbb{R}^{m\times r}} \Big\{\frac{1}{2}\big(\|U\|_{2,0}+\|V\|_{2,0}\big)\ \ {\rm s.t.}\ \ X=UV^{\mathbb{T}}\Big\} \end{equation} where $\|U\|_{2,0}$ is the column $\ell_{2,0}$-norm (the number of nonzero columns) of $U$. This, along with the work on the zero-norm (see \cite{LuZhang13,Lu14}), inspires us to study the column $\ell_{2,0}$-norm regularized model \begin{equation}\label{MS-FL20} \min_{U\in \mathbb{R}^{n\times r},V\in\mathbb{R}^{m\times r}} \Big\{\Phi_{\lambda,\mu}(U,V)\!:=f(UV^\mathbb{T}\!) +\frac{\mu}{2}\big(\|U\|_F^2+\|V\|_F^2\big)+\lambda\big(\|U\|_{2,0}+\|V\|_{2,0}\big)\Big\} \end{equation} where $\mu>0$ is a small constant, and the term $\frac{\mu}{2}(\|U\|_F^2+\|V\|_F^2)$ is added to ensure that \eqref{MS-FL20} has a nonempty solution set. As will be shown in Proposition \ref{prop1-Phi}, the introduction of the nonsmooth term $\lambda(\|U\|_{2,0}+\|V\|_{2,0})$ does not induce additional critical points. Moreover, every (strong) local minimizer of the smooth function $F_{\mu}$ is a (strong) local optimizer of \eqref{MS-FL20}, while the submatrix pair consisting of nonzero columns of every (strong) local optimizer to \eqref{MS-FL20} is a (strong) local minimizer of $F_{\mu}$ defined on the space $\mathbb{R}^{n\times|J|}\times\mathbb{R}^{m\times|J|}$ with $|J|\le r$; see Proposition \ref{prop2-Phi}. To compute the nonsmooth and non-Lipschitz model \eqref{MS-FL20}, we develop in Section \ref{sec3} an alternating majorization-minimization (AMM) method with extrapolation. 
Although the AMM method is a special case of the inertial proximal alternating linearized minimization (iPALM) method \cite{Pock16}, which is an inertial version of the PALM proposed by Bolte et al. \cite{Botle14}, our global convergence analysis removes the assumption on the boundedness of the generated sequence and provides a quantification on the upper bound for the inertial parameter by leveraging the structure of $F_{\mu}$. Though the AMM method belongs to the block prox-linear method framework proposed by Xu and Yin \cite{XuYin17}, the convergence analysis there for acceleration is not applicable to it since the proximal operator of the column $\ell_{2,0}$-norm is not single-valued and it is unclear whether Condition 1 there holds or not for $\Phi_{\lambda,\mu}$. When $f$ is the least squares loss function from matrix completion problems, one may apply the method proposed in \cite{YangPC18} to solving \eqref{MS-FL20}, but the subsequential convergence there can not be obtained since the column $\ell_{2,0}$-norm is not continuous on its domain. Observe that the AMM method is also a majorized alternating proximal (MAP) method with a variable metric proximal term. We develop in Section \ref{sec4} an MAP method for solving \eqref{MS-FL20} which can yield a stable nonzero column index set after finite iterates. Motivated by this, we propose a hybrid AMM with a global convergence guarantee, in which the MAP method is first employed to seek an initial factor pair with less nonzero columns and the AMM with extrapolation is then used to minimize $F_{\mu}$. We apply the developed AMM methods to the matrix completion problems with non-uniform sampling schemes. Numerical experiments are conducted with synthetic data and real datasets including the Jester joke, MovieLens, and Netflix datasets. Comparison results with the alternating least squares (ALS) method proposed in \cite{Hastie15} for computing model \eqref{MC-Fnorm} and the ADMM developed in \cite{Fang18} for the SDP reformulation of the max-norm regularized convex model demonstrate that the AMM and the hybrid AMM for model \eqref{MS-FL20} have a remarkable advantage in offering solutions of lower error and rank for simulated data, while for three real datasets, the hybrid AMM is superior to other three methods in terms of the NMAE and rank except for jester-3, and it requires the least running time and can yield a favorable result for $10000\times 10000$ Netflix data within $300$ seconds. \medskip \noindent {\bf Notation:} $\mathbb{R}^{n\times m}$ represents the vector space of all $n\times m$ real matrices, equipped with the trace inner product $\langle X,Y\rangle={\rm trace}(X^{\mathbb{T}}Y)$ and its induced Frobenius norm $\|\cdot\|_F$, $\mathbb{O}^{n\times r}$ denotes the set of matrices with orthonormal columns and $\mathbb{O}^{n}$ signifies $\mathbb{O}^{n\times n}$. For a matrix $X\in\mathbb{R}^{n\times m}$ and an integer $k\in[1,n]$, $\sigma(X):=(\sigma_1(X),\ldots,\sigma_n(X))^{\mathbb{T}}$ with $\sigma_1(X)\ge\cdots\ge\sigma_n(X)$ denotes the singular value vector of $X$ arranged in a nonincreasing order and $\Sigma_{k}(X):={\rm diag}(\sigma_1(X),\ldots,\sigma_k(X))$. For a matrix $X\in\mathbb{R}^{n\times m}$, $X_i$ means the $i$th column of $X$, $J_{\!X}$ and $\overline{J}_{\!X}$ respectively denote the index set of its nonzero and zero columns, and $\|X\|$ and $\|X\|_*$ respectively denote the spectral norm and the nuclear norm of $X$. 
For a self-adjoint PD linear operator $\mathcal{Q}\!:\mathbb{R}^{n\times m}\to\mathbb{R}^{n\times m}$, $\|\cdot\|_{\mathcal{Q}}=\sqrt{\langle \cdot,\mathcal{Q}\cdot\rangle}$ means its induced norm. Given a point $(\overline{U},\overline{V})\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$ and a constant $\delta>0$, $\mathbb{B}_{\delta}(U,V):=\{(U,V)\,|\,\|(U,V)-(\overline{U},\overline{V})\|_F\le\delta\}$. The notation $\mathbb{N}$ and $\mathbb{N}_0$ respectively denotes the natural number set and the nonnegative integer set. For an integer $k\ge 1$, the notation $[k]$ means the set $\{1,\ldots,k\}$. \section{Preliminaries}\label{Sec2} In order to characterize the critical points of $\Phi_{\lambda,\mu}$, we first recall from \cite[Chapter 8]{RW98} the definition of generalized subdifferentials of an extended real-valued function. \subsection{Subdifferentials and subderivative of column $\ell_{2,0}$-norm}\label{Sec2.1} \begin{definition}\label{gsubdiff} (see \cite[Definition 8.3]{RW98}) Consider a function $h\!:\mathbb{R}^p\to[-\infty,+\infty]$ and a point $x$ with $h(x)$ finite. The regular subdifferential of $h$ at $x$ is defined as \[ \widehat{\partial}h(x):=\bigg\{v\in\mathbb{R}^p\ \big|\ \liminf_{x\ne x'\to x}\frac{h(x')-h(x)-\langle v,x'-x\rangle}{\|x'-x\|}\ge 0\bigg\}; \] the (basic) subdifferential (also known as the limiting subdifferential) of $h$ at $x$ is defined as \[ \partial h(x):=\Big\{v\in\mathbb{R}^p\ |\ \exists\,x^k\xrightarrow[h]{}x\ {\rm and}\ v^k\in\widehat{\partial}h(x^k)\ {\rm with}\ v^k\to v\Big\}; \] and the horizon subdifferential of $h$ at $x$, denoted by $\partial^{\infty}h(x)$, is defined as \[ \partial^{\infty}h(x):=\Big\{v\in\mathbb{R}^p\ |\ \exists\,x^k\xrightarrow[h]{}x\ {\rm and}\ v^k\in\widehat{\partial}h(x^k)\ {\rm with}\ \lambda^kv^k\to v\ {\rm for\ some}\ \lambda^k\downarrow 0\Big\}, \] where the above notation $x^k\xrightarrow[h]{}x$ means $x^k\to x$ with $h(x^k)\to h(x)$. \end{definition} \begin{remark}\label{remark-Fsubdiff} {\bf(a)} The set $\widehat{\partial}h(x)$ is always closed and convex, and the set $\partial h(x)$ is closed but generally nonconvex, and $\widehat{\partial}h(x)\subseteq \partial h(x)$. When $h$ is convex, $\partial h(x)=\widehat{\partial}h(x)$ and coincides with the subdifferential of $h$ at $x$ in the sense of convex analysis. Let $\{(x^k,v^k)\}$ be a sequence converging to $(\overline{x},\overline{v})$ from the graph of the mapping $\partial h$. If $h(x^k)\to h(\overline{x})$ as $k\to\infty$, then $(\overline{x},\overline{v})\in{\rm gph}\partial h$. \noindent {\bf(b)} By \cite[Theorem 10.1]{RW98}, a necessary (but not sufficient) condition for $\overline{x}\in\mathbb{R}^p$ to be a local minimizer of $h$ is $0\in\widehat{\partial}h(\overline{x})$. In the sequel, a point $x\in\mathbb{R}^p$ with $0\in\partial h(x)$ is called a (limiting) critical point of $h$. The set of critical points of $h$ is denoted by ${\rm crit}\,h$. \end{remark} When $h$ is the indicator function of a set $C\subset\mathbb{R}^p$, its regular subdifferential at $x\in C$ reduces to the regular normal cone $\widehat{\mathcal{N}}_{C}(x)$ to $C$ at $x$; while its basic and horizon subdifferentials at $x\in C$ both reduce to the normal cone $\mathcal{N}_{C}(x)$ to $C$ at $x$. We say that a function $h\!:\mathbb{R}^p\to\overline{\mathbb{R}}$ is regular at a point $x$ with $h(x)$ finite if $\widehat{\mathcal{N}}_{{\rm epi}h}(x,h(x)) =\mathcal{N}_{{\rm epi}h}(x,h(x))$ where ${\rm epi}h$ denotes the epigraph of $h$. 
\begin{definition}\label{subderive}
(see \cite[Definition 8.1]{RW98}) Consider a function $h\!:\mathbb{R}^p\to[-\infty,+\infty]$ and a point $x$ with $h(x)$ finite. The subderivative function $dh(x)\!:\mathbb{R}^p\to[-\infty,+\infty]$ is defined by
\[
dh(x)(w):=\liminf_{t\downarrow 0\atop w'\to w}\frac{h(x+tw')-h(x)}{t} \quad\ \forall w\in\mathbb{R}^p.
\]
\end{definition}

Next we characterize the subdifferential and subderivative of the column $\ell_{2,0}$-norm.

\begin{lemma}\label{gsubdiff-L20}
Let $g(Z)\!:=\!\|Z\|_{2,0}$ for $Z\in\!\mathbb{R}^{n\times r}$. Fix any $(U,V)\in\!\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$. Then,
\begin{itemize}
\item [(i)] $\widehat{\partial}g(U)\!=\partial g(U)=\partial^{\infty}g(U) \!=\Lambda_1\times\cdots\times \Lambda_r$ with $\Lambda_i=\!\left\{\begin{array}{cl}
\!\{0\}^n& {\rm if}\ i\in\!J_{U},\\
\mathbb{R}^n &{\rm if}\ i\notin\!J_{U};
\end{array}\right.$
\item[(ii)] for any $\Gamma\in\mathbb{R}^{n\times r}$,
\(
dg(U)(\Gamma)=\!\left\{\begin{array}{cl}
0 & {\rm if}\ \overline{J}_{\!U}\cap J_{\Gamma}=\emptyset;\\
+\infty &{\rm if}\ \overline{J}_{\!U}\cap J_{\Gamma}\ne\emptyset,
\end{array}\right.
\)
and for any $(S,W)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$,
\begin{equation}\label{glambda}
d(g(U)+g(V))(S,W) =\left\{\begin{array}{cl}
0 & {\rm if}\ \overline{J}_{U}\cap J_{S}=\emptyset,\overline{J}_{V}\cap J_{W}=\emptyset;\\
+\infty & {\rm otherwise}.
\end{array}\right.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
Let $\vartheta(z):={\rm sign}(\|z\|)$ for $z\in\mathbb{R}^n$. Fix an arbitrary $x\in\mathbb{R}^n$. Then, it holds that
\[
\partial^{\infty}\vartheta(x) =\partial\vartheta(x) =\widehat{\partial}\vartheta(x) =\left\{\begin{array}{cl}
\!\{0\} & {\rm if}\ x\ne0;\\
\mathbb{R}^n &{\rm if}\ x=0
\end{array}\right.=\big[\widehat{\partial} \vartheta(x)\big]^{\infty},
\]
where $\big[\widehat{\partial}\vartheta(x)\big]^{\infty}$ denotes the recession cone of the closed convex set $\widehat{\partial}\vartheta(x)$. This by \cite[Corollary 8.11]{RW98} shows that $\vartheta$ is regular at $x$. In addition, for any given $w\in\mathbb{R}^n$, it is easy to calculate that
\[
d\vartheta(x)(w)=0\ \ {\rm when}\ \ x\ne 0 \ \ {\rm and}\ \ d\vartheta(0)(w) =\left\{\begin{array}{cl}
0 & {\rm if}\ w=0;\\
+\infty &{\rm if}\ w\ne 0.
\end{array}\right.
\]
In particular, $d\vartheta(x)(0)=0$. Together with \cite[Proposition 10.5]{RW98} and $g(Z)=\sum_{j=1}^r\vartheta(Z_j)$ for $Z\in\mathbb{R}^{n\times r}$, it is immediate to obtain part (i) and the first part of (ii). By combining the first part of (ii) and \cite[Proposition 10.5]{RW98}, we obtain the second part of (ii).
\end{proof}

Combining \cite[Exercise 8.8]{RW98} with Lemma \ref{gsubdiff-L20}, we get the following characterization of $\partial\Phi_{\lambda,\mu}$.

\begin{lemma}\label{subdiff-Phi}
Fix any $\lambda>0$ and $\mu>0$. Consider any $(\overline{U},\overline{V})\!\in\mathbb{R}^{n\times r}\!\times\mathbb{R}^{m\times r}$.
Then, it holds that $\widehat{\partial}\Phi_{\lambda,\mu}(\overline{U},\overline{V}) \!=\!\partial\Phi_{\lambda,\mu}(\overline{U},\overline{V}) \!=\partial_{U}\Phi_{\lambda,\mu}(\overline{U},\overline{V}) \times\partial_{V}\Phi_{\lambda,\mu}(\overline{U},\overline{V})$ with
\begin{align*}
\partial_{U}\Phi_{\lambda,\mu}(\overline{U},\overline{V}) &=\!\Big\{G\in\mathbb{R}^{n\times r}\,|\ G_{j}=\nabla f(\overline{U}\overline{V}^{\mathbb{T}})\overline{V}_{\!j} +\mu \overline{U}_{\!j}\ \ {\rm for}\ j\in J_{\overline{U}}\Big\},\\
\partial_{V}\Phi_{\lambda,\mu}(\overline{U},\overline{V}) &=\!\Big\{H\in \mathbb{R}^{m\times r}\,|\ H_{j}=\big[\nabla f(\overline{U}\overline{V}^{\mathbb{T}})\big]^{\mathbb{T}}\overline{U}_{\!j} +\mu\overline{V}_{\!j}\ \ {\rm for}\ j\in J_{\overline{V}}\Big\},
\end{align*}
which implies that $[\widehat{\partial}\Phi_{\lambda,\mu}(\overline{U},\overline{V})]^{\infty} =\partial^{\infty}\Phi_{\lambda,\mu}(\overline{U},\overline{V})$, and consequently $\Phi_{\lambda,\mu}$ is a regular function.
\end{lemma}

\subsection{Critical points of function $\Phi_{\lambda,\mu}$}\label{sec2.2}

In this part, we focus on the properties of the critical points of $\Phi_{\lambda,\mu}$. The following proposition states that $\Phi_{\lambda,\mu}$ and $F_{\mu}$ have the same critical point set.

\begin{proposition}\label{prop1-Phi}
Fix any $\lambda>0$ and $\mu>0$. If $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$, then it holds that $J_{\overline{U}}=J_{\overline{V}}$ and $\overline{U}^\mathbb{T}\overline{U}=\overline{V}^\mathbb{T}\overline{V}$. In particular, $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$ if and only if $(\overline{U},\overline{V})\in{\rm crit}F_{\mu}$.
\end{proposition}
\begin{proof}
Since $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$, we have $(0,0)\in\partial\Phi_{\lambda,\mu}(\overline{U},\overline{V})$, which by Lemma \ref{subdiff-Phi} means that
\[
\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\overline{V}_{\!j}+\mu\overline{U}_{\!j}=0 \ {\rm for}\ j\in J_{\overline{U}}\ \ {\rm and}\ \ \big[\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\big]^{\mathbb{T}}\overline{U}_{\!j}+\mu\overline{V}_{\!j}=0 \ {\rm for}\ j\in J_{\overline{V}}.
\]
The first equality implies that $\overline{V}_{\!j}\ne 0$ for $j\in J_{\overline{U}}$, and then $J_{\overline{U}}\subseteq J_{\overline{V}}$, while the second one implies that $\overline{U}_{\!j}\ne 0$ for $j\in J_{\overline{V}}$, and then $J_{\overline{V}}\subseteq J_{\overline{U}}$. Thus, $J_{\overline{U}}=J_{\overline{V}}=:J$. Consequently,
\begin{equation}\label{key-equa}
\left\{\begin{array}{ll}
\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\overline{V}_{\!J}+\mu\overline{U}_{\!J}=0; \\
\ [\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})]^{\mathbb{T}}\overline{U}_{\!J}+\mu\overline{V}_{\!J}=0.
\end{array}\right.
\end{equation}
By multiplying the last two equalities with $\overline{U}_{\!J}^{\mathbb{T}}$ and $\overline{V}_{\!J}^{\mathbb{T}}$, respectively, we obtain
\[
\left\{\begin{array}{ll}
\overline{U}_{\!J}^{\mathbb{T}}\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\overline{V}_{\!J} +\mu\overline{U}_{\!J}^{\mathbb{T}}\overline{U}_{\!J}=0, \\
\overline{V}_{\!J}^{\mathbb{T}}\big[\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\big]^{\mathbb{T}} \overline{U}_{\!J}+\mu \overline{V}_{\!J}^{\mathbb{T}}\overline{V}_{\!J}=0.\\
\end{array}\right.
\]
Subtracting the transpose of the first equality from the second one yields $\mu\big(\overline{V}_{\!J}^{\mathbb{T}}\overline{V}_{\!J}-\overline{U}_{\!J}^{\mathbb{T}}\overline{U}_{\!J}\big)=0$, so $\overline{U}_{\!J}^{\mathbb{T}}\overline{U}_{\!J}=\overline{V}_{\!J}^{\mathbb{T}}\overline{V}_{\!J}$; since the columns of $\overline{U}$ and $\overline{V}$ outside $J$ are zero, this implies that $\overline{U}^\mathbb{T}\overline{U}=\overline{V}^\mathbb{T}\overline{V}$.
The first part follows. From \eqref{key-equa} and the expression of $F_{\mu}$, clearly, if $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$, then $(\overline{U},\overline{V})\in{\rm crit}F_{\mu}$. Now assume that $(\overline{U},\overline{V})\in{\rm crit}F_{\mu}$, i.e., $\nabla\!F_{\mu}(\overline{U},\overline{V})=0$. Then, $\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})\overline{V}+\mu\overline{U}=0$ and $[\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}})]^{\mathbb{T}}\overline{U}+\mu\overline{V}=0$. By Lemma \ref{subdiff-Phi}, $(0,0)\in \partial\Phi_{\lambda,\mu}(\overline{U},\overline{V})$, which means that $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$. The proof is completed.
\end{proof}

Proposition \ref{prop1-Phi} implies that every critical point of $\Phi_{\lambda,\mu}$ can be viewed as an approximate critical point of the loss function $F(U,V):=f(UV^{\mathbb{T}})$ since $\mu$ is a very small constant. In addition, when $f$ is twice continuously differentiable, by Proposition \ref{prop1-Phi}, every $(\overline{U},\overline{V})\in{\rm crit}\Phi_{\lambda,\mu}$ with
\begin{align*}
0<\nabla^2F_{\mu}(\overline{U},\overline{V})(\Delta,\Delta) &=\nabla^2f(\overline{U}\overline{V}^{\mathbb{T}})(\overline{U}\Delta_V^\mathbb{T}+\Delta_U\overline{V}^\mathbb{T}, \overline{U}\Delta_V^\mathbb{T}+\Delta_U\overline{V}^\mathbb{T})\nonumber\\
&\quad+2\langle\nabla\!f(\overline{U}\overline{V}^{\mathbb{T}}),\Delta_U \Delta_V^\mathbb{T} \rangle+\mu\langle \Delta,\Delta\rangle\quad{\rm for\ any}\ 0\ne \Delta=(\Delta_U,\Delta_V)
\end{align*}
is a strong local minimizer of $F_{\mu}$, i.e., there exist $\alpha>0$ and $\delta>0$ such that for all $(U,V)\in\mathbb{B}_{\delta}(\overline{U},\overline{V})$,
\begin{equation}\label{strong-min}
F_{\mu}(U,V)\ge F_{\mu}(\overline{U},\overline{V})+\alpha\|(U,V)-(\overline{U},\overline{V})\|_F^2.
\end{equation}
In fact, it is also a strong local minimizer of $\Phi_{\lambda,\mu}$ by the following proposition.

\begin{proposition}\label{prop2-Phi}
Fix any $\lambda>0$ and $\mu>0$. If $(\overline{U},\overline{V})$ is a (strong) local minimizer of $F_{\mu}$, then it is a (strong) local minimizer of $\Phi_{\lambda,\mu}$. If $(\overline{U},\overline{V})$ is a (strong) local minimizer of $\Phi_{\lambda,\mu}$, then $(\overline{U}_{\!J},\overline{V}_{\!J})$ with $J=J_{\overline{U}}$ is a (strong) local minimizer of $F_{\mu}$ defined on $\mathbb{R}^{n\times |J|}\times\mathbb{R}^{m\times |J|}$.
\end{proposition}
\begin{proof}
Suppose that $(\overline{U},\overline{V})$ is a strong local minimizer of $F_{\mu}$. Then, there exist $\alpha>0$ and $\delta>0$ such that \eqref{strong-min} holds for all $(U,V)\in\mathbb{B}_{\delta}(\overline{U},\overline{V})$. In addition, by the lower semicontinuity of $\|\cdot\|_{2,0}$, there exists $\delta'>0$ such that $\|U\|_{2,0}\geq\|\overline{U}\|_{2,0}$ and $\|V\|_{2,0}\geq\|\overline{V}\|_{2,0}$ for all $(U,V)\in\mathbb{B}_{\delta'}(\overline{U},\overline{V})$. Thus, for any $(U,V)\in\mathbb{B}_{\varepsilon}(\overline{U},\overline{V})$ with $\varepsilon=\min(\delta,\delta')$, it is immediate to obtain that
\begin{align*}
\Phi_{\lambda,\mu}(U,V)&=F_{\mu}(U,V)+\lambda(\|U\|_{2,0}+\|V\|_{2,0})\\
&\ge F_{\mu}(\overline{U},\overline{V})+\lambda(\|\overline{U}\|_{2,0}+\|\overline{V}\|_{2,0}) +\alpha\|(U,V)-(\overline{U},\overline{V})\|_F^2\\
& =\Phi_{\lambda,\mu}(\overline{U},\overline{V})+\alpha\|(U,V)-(\overline{U},\overline{V})\|_F^2.
\end{align*}
This shows that $(\overline{U},\overline{V})$ is a strong local minimizer of the function $\Phi_{\lambda,\mu}$. Conversely, let $(\overline{U},\overline{V})$ be a strong local minimizer of $\Phi_{\lambda,\mu}$. Clearly, $J_{\overline{U}}=J_{\overline{V}}$ by Proposition \ref{prop1-Phi}. Moreover, there exist $\alpha>0$ and $\varepsilon>0$ such that for all $(U,V)\in\mathbb{B}_{\varepsilon}(\overline{U},\overline{V})$,
\begin{align*}
\Phi_{\lambda,\mu}(U,V) &\ge \Phi_{\lambda,\mu}(\overline{U},\overline{V})+\alpha\|(U,V)-(\overline{U},\overline{V})\|_F^2\\
&= f(\overline{U}_{\!J}\overline{V}_{\!J}^\mathbb{T}\!) +\frac{\mu}{2}\big(\|\overline{U}_{\!J}\|_F^2+\|\overline{V}_{\!J}\|_F^2\big)+2\lambda|J| +\alpha\|(U,V)-(\overline{U},\overline{V})\|_F^2.
\end{align*}
In addition, there exists $\varepsilon'>0$ such that $\|A'\|_{2,0}=\|B'\|_{2,0}=|J|$ for all $(A',B')\in\mathbb{B}_{\varepsilon'}(\overline{U}_{\!J},\overline{V}_{\!J})$. Pick any $(A,B)\in\mathbb{B}_{\widehat{\varepsilon}}(\overline{U}_{\!J},\overline{V}_{\!J})$ with $\widehat{\varepsilon}=\min(\varepsilon',\varepsilon)$. Let $(U_A,V_B)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$ with $[U_A]_{J}=A$, $[U_A]_{\overline{J}}=0$, $[V_B]_{J}=B$ and $[V_B]_{\overline{J}}=0$. Then, it holds that
\begin{align*}
F_{\mu}(A,B)&=f(AB^\mathbb{T}\!)+\frac{\mu}{2}\big(\|A\|_F^2+\|B\|_F^2\big) =f(U_{A}V_{B}^\mathbb{T}\!)+\frac{\mu}{2}\big(\|U_A\|_F^2+\|V_B\|_F^2\big)\\
&=\Phi_{\lambda,\mu}(U_A,V_B)-2\lambda|J|\ge f(\overline{U}_{\!J}\overline{V}_{\!J}^\mathbb{T}\!) +\frac{\mu}{2}\big(\|\overline{U}_{\!J}\|_F^2+\|\overline{V}_{\!J}\|_F^2\big) +\alpha\|(U_A,V_B)-(\overline{U},\overline{V})\|_F^2\\
&=F_{\mu}(\overline{U}_{\!J},\overline{V}_{\!J})+\alpha\|(A,B)-(\overline{U}_{\!J},\overline{V}_{\!J})\|_F^2.
\end{align*}
This shows that $(\overline{U}_{\!J},\overline{V}_{\!J})$ is a strong local minimizer of $F_{\mu}$ defined on $\mathbb{R}^{n\times |J|}\times\mathbb{R}^{m\times |J|}$. The above arguments also apply to (non-strong) local minimizers of $F_{\mu}$ and $\Phi_{\lambda,\mu}$.
\end{proof}

\begin{remark}\label{remark2.1}
Proposition \ref{prop2-Phi} states the relation between the (strong) local minimizers of $\Phi_{\lambda,\mu}$ and those of $F_{\mu}$. In fact, if there exists a nonzero $(G,H)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$ such that $d\Phi_{\lambda,\mu}(\overline{U},\overline{V})(G,H)<0$, then by \cite[Theorem 10.1]{RW98} the point $(\overline{U},\overline{V})$ is not locally optimal to \eqref{MS-FL20}, and by \eqref{glambda} and \cite[Corollary 10.9]{RW98} we have $\langle\nabla\!F_{\mu}(\overline{U},\overline{V}),(G,H)\rangle<0$; that is, such a $(G,H)$ is a descent direction of $F_{\mu}$ at $(\overline{U},\overline{V})$.
\end{remark}

\section{An alternating MM method with extrapolation}\label{sec3}

Let $F(U,V):=f(UV^\mathbb{T}\!)$ for $(U,V)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$. Fix an arbitrary $(U,V)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$. Since $F(\cdot,V)$ is smooth and $\nabla_{U}F(\cdot,V)$ is Lipschitz continuous with modulus $\tau_{\!V}\!:=L_{\!f}\|V\|^2$, for any $U'\in\mathbb{R}^{n\times r}$ and $\gamma\ge\tau_{\!V}$ it holds that
\begin{subnumcases}{}\label{FU}
F(U',V)\le F(U,V)+\langle\nabla_{U}F(U,V),U'\!-\!U\rangle +\frac{\gamma}{2}\|U'\!-\!U\|_F^2,\\
-F(U',V)\le -F(U,V)-\langle\nabla_{U}F(U,V),U'\!-\!U\rangle +\frac{\gamma}{2}\|U'\!-\!U\|_F^2.
\label{nFU}
\end{subnumcases}
Similarly, since $F(U,\cdot)$ is a smooth function and its gradient $\nabla_{V}F(U,\cdot)$ is Lipschitz continuous with modulus $\tau_{U}\!:=L_{\!f}\|U\|^2$, for any $V'\in\mathbb{R}^{m\times r}$ and $\gamma\ge\tau_{U}$ it holds that
\begin{subnumcases}{}\label{FV}
F(U,V')\le F(U,V)+\langle\nabla_{V}F(U,V),V'\!-\!V\rangle +\frac{\gamma}{2}\|V'\!-\!V\|_F^2,\\
-F(U,V')\le -F(U,V)-\langle\nabla_{V}F(U,V),V'\!-\!V\rangle +\frac{\gamma}{2}\|V'\!-\!V\|_F^2.
\label{nFV}
\end{subnumcases}
By combining \eqref{FU} and \eqref{FV} with the expression of $\Phi_{\lambda,\mu}$, for any $(U',V')\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$,
\begin{align*}
\Phi_{\lambda,\mu}(U',V) &\le F_{U,\gamma}(U';U,V):=\langle\nabla_{U}F(U,V),U'\rangle+\frac{\gamma}{2}\|U'\!-\!U\|_F^2 +\frac{\mu}{2}\|U'\|_F^2+\lambda\|U'\|_{2,0}\\
&\qquad\qquad\qquad\qquad\quad +F(U,V)-\langle\nabla_{U}F(U,V),U\rangle +\frac{\mu}{2}\|V\|_F^2+\lambda\|V\|_{2,0},\\
\Phi_{\lambda,\mu}(U,V') &\le F_{V,\gamma}(V';U,V):=\langle\nabla_{V}F(U,V),V'\rangle+\frac{\gamma}{2}\|V'\!-\!V\|_F^2 +\frac{\mu}{2}\|V'\|_F^2+\lambda\|V'\|_{2,0}\\
&\qquad\qquad\qquad\qquad\quad +F(U,V)-\langle\nabla_{V}F(U,V),V\rangle +\frac{\mu}{2}\|U\|_F^2+\lambda\|U\|_{2,0},
\end{align*}
which hold with equality when $U'=U$ and $V'=V$, respectively. Hence, $F_{U,\gamma}(\cdot;U,V)$ and $F_{V,\gamma}(\cdot;U,V)$ are respectively a majorization of $\Phi_{\lambda,\mu}(\cdot,V)$ at $U$ and of $\Phi_{\lambda,\mu}(U,\cdot)$ at $V$. Inspired by this, we propose an AMM method with extrapolation that alternately minimizes these two majorizations at each iteration.

\begin{algorithm}[H]
\caption{\label{AMM}{\bf (AMM method for solving \eqref{MS-FL20})}}
\textbf{Initialization:} Select $\beta\in[0,1],\beta_0\in[0,\beta]$ and an appropriately large $\overline{\gamma}>0$. Choose a starting point $(U^0,V^0)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}$. Let $(U^{-1},V^{-1})\!:=(U^0,V^0)$ and set $k:=0$.\\
\textbf{while} the stopping conditions are not satisfied \textbf{do}
\begin{itemize}
\item[\bf 1.] Select $\gamma_{1,k}\in(\tau_{\!V^{k}},\overline{\gamma})$. Let $\widetilde{U}^k\!:=U^k+\beta_k(U^k-U^{k-1})$ and compute
\begin{equation}\label{Uk-subprob}
U^{k+1}\in\mathop{\arg\min}_{U\in\mathbb{R}^{n\times r}} \Big\{\langle\nabla_{\!U}F(\widetilde{U}^k,V^k),U\rangle +\frac{\gamma_{1,k}}{2}\|U\!-\!\widetilde{U}^k\|_F^2 +\frac{\mu}{2}\|U\|_F^2+\lambda\|U\|_{2,0}\Big\}.
\end{equation}
\item[\bf 2.] Select $\gamma_{2,k}\in(\tau_{\!U^{k+1}},\overline{\gamma})$. Let $\widetilde{V}^k\!:=V^k+\beta_k(V^k-V^{k-1})$ and compute
\begin{equation}\label{Vk-subprob}
V^{k+1}\!\in\mathop{\arg\min}_{\!V\in\mathbb{R}^{m\times r}} \Big\{\langle\nabla_{\!V}F(U^{k+1},\widetilde{V}^k),V\rangle +\frac{\gamma_{2,k}}{2}\|V\!-\!\widetilde{V}^k\|_F^2+\frac{\mu}{2}\|V\|_F^2 +\lambda\|V\|_{2,0}\Big\}.
\end{equation}
\item[\bf 3.] Update $\beta_k$ by $\beta_{k+1}\in[0,\beta]$ and let $k\leftarrow k+1$.
\end{itemize}
\textbf{end while}
\end{algorithm}

\begin{remark}\label{remark1-AMM}
{\bf(a)} Algorithm \ref{AMM} is a special case of the iPALM in \cite{Pock16} with $\alpha_i^k\!=\!\beta_i^k$ for $i=1,2$, but as will be shown below, our global convergence analysis is different from that of \cite{Pock16}: the boundedness of the generated sequence $\{(U^k,V^k)\}$ is achieved directly under a mild restriction on $\beta$ by leveraging the structure of $F$, and moreover, a quantified bound on $\beta$ is provided.
\noindent {\bf(b)} By the expression of $F$, the columns of $U^{k+1}$ and $V^{k+1}$ have the following closed form: \begin{align*} U_i^{k+1}&={\rm sign}\Big[\max\Big(0,\|G_i^{k}\|\!-\!\sqrt{2(\mu\!+\!\gamma_{1,k})^{-1}\lambda}\Big)\Big]G_i^{k} \ \ {\rm for}\ i=1,\ldots,r;\\ V_i^{k+1}&={\rm sign}\Big[\max\Big(0,\|H_i^{k}\|\!-\!\sqrt{2(\mu\!+\!\gamma_{2,k})^{-1}\lambda}\Big)\Big]H_i^{k} \ \ {\rm for}\ i=1,\ldots,r \end{align*} where $G^k=\frac{1}{\mu+\gamma_{1,k}}(\gamma_{1,k}\widetilde{U}^k\!-\!\nabla_{\!U}F(\widetilde{U}^k,V^k))$ and $H^k=\frac{1}{\mu+\gamma_{2,k}}(\gamma_{2,k}\widetilde{V}^k\!-\!\nabla_{\!V}F(U^{k+1},\widetilde{V}^k))$. Thus, the main cost of Algorithm \ref{AMM} in each step involves one multiplication of $n\times m$ and $m\times r$ matrices, one multiplication of $m\times n$ and $n\times r$ matrices, and two multiplications of $n\times r$ and $r\times m$ matrices. \end{remark} Next we shall establish the global convergence of Algorithm \ref{AMM} by following the analysis recipe of algorithms for nonconvex nonsmooth problems in the KL framework (see \cite{Attouch10,Botle14,Pock16,LiuPong191}). Define \begin{equation}\label{alphak} \alpha_{1,k}\!:=\gamma_{1,k}-\tau_{V^{k}}\ \ {\rm and}\ \ \alpha_{2,k}\!:=\gamma_{2,k}-\tau_{U^{k+1}}\ \ {\rm for\ each}\ k\in\mathbb{N}_0. \end{equation} The following proposition characterizes an important property for the sequence $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$. \begin{proposition}\label{prop1-UVk} Let $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$ be the sequence generated by Algorithm \ref{AMM}. Then, for any given $\rho_1\in(0,1)$ and $\rho_2\in(0,1)$, the following inequality holds for each $k\in\mathbb{N}_0$: \begin{align}\label{descent-ineq1} &\Big[\Phi_{\lambda,\mu}(U^{k+1},V^{k+1})+\frac{\rho_1\alpha_{1,k}}{2}\|U^{k+1}-U^k\|_F^2 +\frac{\rho_2\alpha_{2,k}}{2}\|V^{k+1}-V^k\|_F^2\Big]\nonumber\\ &-\Big[\Phi_{\lambda,\mu}(U^{k},V^{k})+\frac{\rho_1\alpha_{1,k}}{2}\|U^{k}-U^{k-1}\|_F^2 +\frac{\rho_2\alpha_{2,k}}{2}\|V^{k}-V^{k-1}\|_F^2\Big]\nonumber\\ &\le-\Big[\frac{\rho_1\alpha_{1,k}}{2}-\frac{(2(1\!-\!\rho_1)\tau_{V^{k}}+\alpha_{1,k})\beta_k^2}{2(1\!-\!\rho_1)}\Big] \big\|U^{k}\!-\!U^{k-1}\big\|_F^2\nonumber\\ &\quad-\Big[\frac{\rho_2\alpha_{2,k}}{2}-\frac{(2(1\!-\!\rho_2)\tau_{U^{k+1}}+\alpha_{2,k})\beta_k^2}{2(1\!-\!\rho_2)}\Big] \big\|V^{k}\!-\!V^{k-1}\big\|_F^2, \end{align} and hence, with \( \overline{\beta}_{1,k}\!:=\!\sqrt{\frac{\rho_1(1-\rho_1)(\gamma_{1,k}-\tau_{V^{k}})} {2(1-\rho_1)\tau_{V^{k}}+(\gamma_{1,k}-\tau_{V^{k}})}} \) and \( \overline{\beta}_{2,k}\!:=\!\sqrt{\frac{\rho_2(1-\rho_2)(\gamma_{2,k}-\tau_{U^{k+1}})} {2(1-\rho_2)\tau_{U^{k+1}}+(\gamma_{2,k}-\tau_{U^{k+1}})}}, \) for each $\beta_k\!\in[0,\min(\overline{\beta}_{1,k},\overline{\beta}_{2,k})]$, $(U^k,V^k)\in\mathcal{L}_{\lambda,\mu}\!:=\big\{(U,V)\in\mathbb{R}^{n\times r}\times\mathbb{R}^{m\times r}\,|\, \Phi_{\lambda,\mu}(U,V)\le\Phi_{\lambda,\mu}(U^0,V^0)\big\}$. \end{proposition} \begin{proof} By the optimality of $U^{k+1}$ and the feasibility of $U^k$ to \eqref{Uk-subprob}, it follows that \begin{align}\label{temp-ineq1} &\langle\nabla_{\!U}F(\widetilde{U}^k,V^k),U^{k+1}\rangle+\frac{\mu}{2}\|U^{k+1}\|_F^2 +\frac{\gamma_{1,k}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2 +\lambda\|U^{k+1}\|_{2,0}\nonumber\\ &\le\langle\nabla_{\!U}F(\widetilde{U}^k,V^k),U^{k}\rangle+\frac{\mu}{2}\|U^{k}\|_F^2 +\frac{\gamma_{1,k}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2 +\lambda\|U^{k}\|_{2,0}. 
\end{align}
By invoking inequality \eqref{FU} with $\gamma=\tau_{V^{k}},V=V^k, U'=U^{k+1}$ and $U=\widetilde{U}^k$, we obtain
\begin{align}\label{convex-monotone1}
&F(U^{k+1},V^k)\le F(\widetilde{U}^{k},V^k) +\langle\nabla_{\!U}F(\widetilde{U}^{k},V^k),U^{k+1}\!-\!\widetilde{U}^k\rangle +\frac{\tau_{V^{k}}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2\nonumber\\
&\le F(U^k,V^k)+\langle\nabla_{\!U}F(\widetilde{U}^{k},V^k),U^{k+1}-U^k\rangle +\frac{\tau_{V^{k}}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2+\frac{\tau_{V^{k}}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2
\end{align}
where the last inequality is by \eqref{nFU} with $\gamma=\tau_{V^{k}},V=\!V^{k},U=\!\widetilde{U}^{k}$ and $U'=U^k$. Along with \eqref{temp-ineq1},
\begin{align}\label{FU-ineq1}
F(U^{k+1},V^k)&\le F(U^k,V^k)+\frac{\mu}{2}\|U^{k}\|_F^2+\lambda\|U^{k}\|_{2,0} +\frac{\gamma_{1,k}+\tau_{V^{k}}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2\nonumber \\
&\quad-\frac{\gamma_{1,k}-\tau_{V^{k}}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2 -\frac{\mu}{2}\|U^{k+1}\|_F^2-\lambda\|U^{k+1}\|_{2,0}.
\end{align}
In addition, by the optimality of $V^{k+1}$ and the feasibility of $V^k$ to \eqref{Vk-subprob}, it follows that
\begin{align}\label{FV-ineq1}
&\langle\nabla_{\!V}F(U^{k+1},\widetilde{V}^k),V^{k+1}\rangle+\frac{\mu}{2}\|V^{k+1}\|_F^2 +\frac{\gamma_{2,k}}{2}\|V^{k+1}\!-\!\widetilde{V}^k\|_F^2 +\lambda\|V^{k+1}\|_{2,0}\nonumber\\
&\le\langle\nabla_{\!V}F(U^{k+1},\widetilde{V}^k),V^k\rangle+\frac{\mu}{2}\|V^{k}\|_F^2 +\frac{\gamma_{2,k}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2 +\lambda\|V^{k}\|_{2,0}.
\end{align}
By invoking inequality \eqref{FV} with $\gamma=\tau_{U^{k+1}},U=U^{k+1},V'=V^{k+1}$ and $V=\widetilde{V}^k$, it follows that
\begin{align}\label{convex-monotone2}
&F(U^{k+1},V^{k+1})\le F(U^{k+1},\widetilde{V}^k) +\langle\nabla_{\!V}F(U^{k+1},\widetilde{V}^k),V^{k+1}-\widetilde{V}^k\rangle +\frac{\tau_{U^{k+1}}}{2}\|V^{k+1}-\widetilde{V}^k\|_F^2\nonumber\\
&\le F(U^{k+1},V^k)+\langle\nabla_{\!V}F(U^{k+1},\widetilde{V}^k),V^{k+1}\!-\!V^k\rangle +\frac{\tau_{U^{k+1}}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2 +\frac{\tau_{U^{k+1}}}{2}\|V^{k+1}\!-\!\widetilde{V}^k\|_F^2
\end{align}
where the last inequality is due to \eqref{nFV} with $\gamma=\tau_{U^{k+1}},U=U^{k+1},V=\widetilde{V}^{k}$ and $V'=V^k$. Along with \eqref{FV-ineq1},
\begin{align*}
F(U^{k+1},V^{k+1}) &\le F(U^{k+1},V^k)+\frac{\mu}{2}\|V^{k}\|_F^2+\lambda\|V^{k}\|_{2,0} -\frac{\mu}{2}\|V^{k+1}\|_F^2 -\lambda\|V^{k+1}\|_{2,0}\\
&\quad +\frac{\gamma_{2,k}+\tau_{U^{k+1}}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2 -\frac{\gamma_{2,k}-\tau_{U^{k+1}}}{2}\|V^{k+1}\!-\!\widetilde{V}^k\|_F^2.
\end{align*}
By substituting \eqref{FU-ineq1} into this inequality and using the definition of $\Phi_{\lambda,\mu}$, it follows that
\begin{align}\label{key-ineq31}
\Phi_{\lambda,\mu}(U^{k+1},V^{k+1}) &\le\Phi_{\lambda,\mu}(U^{k},V^{k})+\frac{\gamma_{1,k}+\tau_{V^{k}}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2 +\frac{\gamma_{2,k}+\tau_{U^{k+1}}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2\nonumber\\
&\quad -\frac{\gamma_{1,k}-\tau_{V^{k}}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2 -\frac{\gamma_{2,k}-\tau_{U^{k+1}}}{2}\|V^{k+1}\!-\!\widetilde{V}^k\|_F^2.
\end{align} Together with $\widetilde{U}^k=U^k+\beta_k(U^k-U^{k-1})$ and $\widetilde{V}^k=V^k+\beta_k(V^k\!-\!V^{k-1})$ and the definition of $\alpha_{1,k}$ and $\alpha_{2,k}$, we deduce that for each integer $k\ge 0$ the left hand side of \eqref{descent-ineq1} is not more than \begin{align*} {\rm RHT}&=\frac{2\tau_{V^{k}}\beta_k^2-\rho_1\alpha_{1,k}}{2}\big\|U^{k}\!-\!U^{k-1}\big\|_F^2 -\frac{(1-\rho_1)(\gamma_{1,k}-\tau_{V^{k}})}{2}\big\|U^{k+1}\!-\!U^k\big\|_F^2\\ &\quad +(\gamma_{1,k}\!-\!\tau_{V^{k}})\beta_k\langle U^{k+1}\!-\!U^k,U^k\!-\!U^{k-1}\rangle +(\gamma_{2,k}\!-\!\tau_{U^{k+1}})\beta_k\langle V^{k+1}\!-\!V^k,V^k\!-\!V^{k-1}\rangle\\ &\quad +\frac{2\tau_{U^{k+1}}\beta_k^2-\rho_2\alpha_{2,k}}{2}\|V^{k}\!-\!V^{k-1}\|_F^2 -\frac{(1-\rho_2)(\gamma_{2,k}-\tau_{U^{k+1}})}{2}\|V^{k+1}\!-\!V^k\|_F^2\\ &\le-\Big(\frac{\rho_1\alpha_{1,k}-2\tau_{V^{k}}\beta_k^2}{2}-\frac{\beta_k^2}{2t_1^k}\Big) \|U^{k}\!-\!U^{k-1}\|_F^2\nonumber\\ &\quad -\frac{(1-\rho_1)(\gamma_{1,k}-\tau_{V^{k}})-t_1^k(\gamma_{1,k}\!-\!\tau_{V^{k}})^2}{2} \|U^{k+1}\!-\!U^k\|_F^2\\ &\quad -\Big(\frac{\rho_2\alpha_{2,k}-2\tau_{U^{k+1}}\beta_k^2}{2}-\frac{\beta_k^2}{2t_2^k}\Big) \|V^{k}\!-\!V^{k-1}\|_F^2\nonumber\\ &\quad -\frac{(1-\rho_2)(\gamma_{2,k}-\tau_{U^{k+1}})-t_2^k(\gamma_{2,k}\!-\!\tau_{U^{k+1}})^2}{2} \|V^{k+1}\!-\!V^k\|_F^2 \end{align*} for any $t_1^k>0$ and $t_2^k>0$. In particular, taking $t_1^k=\frac{1-\rho_1}{\gamma_{1,k}-\tau_{V^{k}}}$ and $t_2^k=\frac{1-\rho_2}{\gamma_{2,k}-\tau_{U^{k+1}}}$ yields \eqref{descent-ineq1}. \end{proof} \begin{remark}\label{remark-alpha} {\bf(a)} Let $\overline{\beta}:=\liminf_{k\to\infty}\min(\overline{\beta}_{1,k},\overline{\beta}_{2,k})$. Obviously, $\overline{\beta}$ is well defined. By taking \[ \gamma_{1,k}=\!\left\{\begin{array}{cl} \!\eta_1\tau_{\!V^{k}}&{\rm if}\ \tau_{\!V^{k}}\!\ne 0;\\ \overline{\gamma}/2 &{\rm otherwise} \end{array}\right. {\rm and}\ \gamma_{2,k}=\!\left\{\begin{array}{cl} \!\eta_2\tau_{\!U^{k+1}}&{\rm if}\ \tau_{\!U^{k+1}}\!\ne 0;\\ \overline{\gamma}/2 &{\rm otherwise} \end{array}\right. \] for some $\eta_1>1$ and $\eta_2>1$, we have \( {\textstyle\overline{\beta}=\min\Big(\!\sqrt{\frac{\rho_1(1-\rho_1)(\eta_1-1)} {2(1-\rho_1)+(\eta_1-1)}},\sqrt{\frac{\rho_2(1-\rho_2)(\eta_2-1)} {2(1-\rho_2)+(\eta_2-1)}}\Big)}, \) which is equal to $0.4$ when $\rho_1=\rho_2=0.6$ and $\eta_1=\eta_2=2.6$. When $\beta\in[0,\overline{\beta}]$, by Proposition \ref{prop1-UVk} we have \begin{equation}\label{tau-value} \tau\!:=\!\limsup_{k\to\infty}\max(\tau_{U^k},\tau_{V^k})<\infty, \end{equation} which implies that the parameter $\overline{\gamma}$ in Algorithm \ref{AMM} can be chosen to be $c\tau$ for some $c>1$. \noindent {\bf(b)} When $f$ is convex, the terms $\frac{\tau_{V^{k}}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2$ in \eqref{convex-monotone1} and $\frac{\tau_{U^{k+1}}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2$ in \eqref{convex-monotone2} do not appear. Now Proposition \ref{prop1-UVk} holds with \( \overline{\beta}_{1,k}\!:=\!\sqrt{\frac{\rho_1(1-\rho_1)(\gamma_{1,k}-\tau_{V^{k}})} {(1-\rho_1)\tau_{V^{k}}+(\gamma_{1,k}-\tau_{V^{k}})}} \) and \( \overline{\beta}_{2,k}\!:=\!\sqrt{\frac{\rho_2(1-\rho_2)(\gamma_{2,k}-\tau_{U^{k+1}})} {(1-\rho_2)\tau_{U^{k+1}}+(\gamma_{2,k}-\tau_{U^{k+1}})}}. \) \end{remark} Let $\alpha_1\!:=\limsup_{k\to\infty}\alpha_{1,k}$ and $\alpha_2\!:=\limsup_{k\to\infty}\alpha_{2,k}$. Obviously, $\alpha_1$ and $\alpha_2$ are well defined by Algorithm \ref{AMM}. 
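Before turning to the convergence analysis, we note that the closed-form updates in Remark \ref{remark1-AMM}(b) are cheap to implement. The following MATLAB sketch of the $U$-update (our illustration only; the variable names are ours, and the $V$-update is analogous) makes the column-wise group hard thresholding explicit:
\begin{verbatim}
% Sketch of the U-update in Remark (b): column-wise hard thresholding.
% Inputs: Utilde = U^k + beta_k*(U^k - U^{k-1}),
%         gradU  = grad_U F(Utilde, V^k), gamma = gamma_{1,k}, mu, lambda.
G   = (gamma*Utilde - gradU)/(mu + gamma);  % the matrix G^k
thr = sqrt(2*lambda/(mu + gamma));          % threshold for column norms
nrm = sqrt(sum(G.^2, 1));                   % ||G_i|| for each column i
U   = G .* (nrm > thr);                     % keep columns with ||G_i|| > thr
\end{verbatim}
The last line uses implicit expansion (MATLAB R2016b or later); each column of $U^{k+1}$ is thus either $G_i^k$ or zero, exactly as in Remark \ref{remark1-AMM}(b).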
To achieve the global convergence of Algorithm \ref{AMM}, we define the potential function
\begin{equation}
\Xi_{\lambda,\mu}(U,V,U',V') :=\Phi_{\lambda,\mu}(U,V)+\frac{\rho_1\alpha_1}{2}\|U-U'\|_F^2+\frac{\rho_2\alpha_2}{2}\|V-V'\|_F^2
\end{equation}
for some $\rho_1,\rho_2\in(0,1)$. Then $\Xi_{\lambda,\mu}$ has the following properties on $\{(U^k,V^k,U^{k-1},V^{k-1})\}_{k\in\mathbb{N}}$.

\begin{proposition}\label{prop2-UVk}
Let $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$ be the sequence generated by Algorithm \ref{AMM} with $\beta\in[0,\overline{\beta}]$, where $\overline{\beta}$ is the constant defined in Remark \ref{remark-alpha} (a). Then, the following statements hold.
\begin{itemize}
\item [(i)] With $\nu_{1,k}=\frac{(1-\rho_1)(\rho_1\alpha_{1}-2\tau_{V^{k}}\beta_k^2)-\alpha_{1}\beta_k^2} {2(1-\rho_1)}$ and $\nu_{2,k}=\frac{(1-\rho_2)(\rho_2\alpha_{2}-2\tau_{U^{k+1}}\beta_k^2)-\alpha_{2}\beta_k^2} {2(1-\rho_2)}$,
\begin{align*}
&\Xi_{\lambda,\mu}(U^{k+1},V^{k+1},U^k,V^k) -\Xi_{\lambda,\mu}(U^k,V^k,U^{k-1},V^{k-1})\\
&\le -\nu_{1,k}\|U^{k}-U^{k-1}\|_F^2-\nu_{2,k}\|V^k-V^{k-1}\|_F^2 \quad\ \forall k\in\mathbb{N}.
\end{align*}
\item [(ii)] The sequence $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$ is bounded. Therefore, the set of accumulation points of the sequence $\{(U^{k},V^{k},U^{k-1},V^{k-1})\}_{k\in\mathbb{N}}$, denoted by $\Upsilon$, is nonempty and compact.
\item[(iii)] If $\beta\in[0,\min(\overline{\beta},\widetilde{\beta})]$ with $\widetilde{\beta}:=\min\Big(\!\sqrt{\frac{\rho_1(1-\rho_1)\alpha_1}{2(1-\rho_1)\tau+\alpha_1}}, \sqrt{\frac{\rho_2(1-\rho_2)\alpha_2}{2(1-\rho_2)\tau+\alpha_2}}\Big)$ for $\tau$ given by \eqref{tau-value}, then $\{\Xi_{\lambda,\mu}(U^{k},V^{k},U^{k-1},V^{k-1})\}_{k\in\mathbb{N}}$ has a limit as $k\to\infty$, say $\varpi^*$, and $\Xi_{\lambda,\mu}\equiv\varpi^*$ on $\Upsilon$.
\item [(iv)] If $\beta\in[0,\min(\overline{\beta},\widetilde{\beta})]$ where $\widetilde{\beta}$ is the same as in part (iii), then for each $k\in\mathbb{N}$ it holds that
\begin{align*}
{\rm dist}\big(0,\partial\Xi_{\lambda,\mu}(U^{k+1},V^{k+1},U^k,V^k)\big) &\leq c_1\big(\|U^{k+1}\!-\!U^{k}\!\|_F\!+\|U^k-U^{k-1}\|_F\big)\\
&\quad+c_2\big(\|V^{k+1}\!-\!V^{k}\!\|_F\!+\|V^k-V^{k-1}\|_F\big)
\end{align*}
for $c_1\!=\tau+\overline{\gamma}+2\rho_1\alpha_1$ and $ c_2\!=c_f+2\tau+\overline{\gamma}+2\rho_2\alpha_2$ with $c_f\!=\limsup_{k\to\infty}\|\nabla\!f(U^k(V^k)^{\mathbb{T}})\|$.
\end{itemize}
\end{proposition}
\begin{proof}
{\bf(i)} By following the same arguments as those for Proposition \ref{prop1-UVk}, one may obtain
\begin{align*}
&\Xi_{\lambda,\mu}(U^{k+1},V^{k+1},U^k,V^k)-\Xi_{\lambda,\mu}(U^{k},V^{k},U^{k-1},V^{k-1})\nonumber\\
&\le\frac{\rho_1\alpha_{1}}{2}\big(\|U^{k+1}\!-\!U^{k}\|_F^2-\|U^{k}\!-\!U^{k-1}\|_F^2\big) +\frac{\rho_2\alpha_{2}}{2}\big(\|V^{k+1}\!-\!V^{k}\|_F^2-\|V^{k}\!-\!V^{k-1}\|_F^2\big)\nonumber\\
&\quad +\frac{\gamma_{1,k}+\tau_{V^{k}}}{2}\|U^{k}\!-\!\widetilde{U}^k\|_F^2 +\frac{\gamma_{2,k}+\tau_{U^{k+1}}}{2}\|V^{k}\!-\!\widetilde{V}^k\|_F^2\nonumber\\
&\quad -\frac{\gamma_{1,k}-\tau_{V^{k}}}{2}\|U^{k+1}\!-\!\widetilde{U}^k\|_F^2 -\frac{\gamma_{2,k}-\tau_{U^{k+1}}}{2}\|V^{k+1}\!-\!\widetilde{V}^k\|_F^2.
\end{align*}
Then, using the same analysis technique as that for the term RHT after \eqref{key-ineq31} yields the result.

\noindent
{\bf(ii)-(iii)} Part (ii) holds by Proposition \ref{prop1-UVk} and the coerciveness of $\Xi_{\lambda,\mu}$. We next focus on the proof of part (iii).
By part (i), the nonnegative sequence $\{\Xi_{\lambda,\mu}(U^{k},V^{k},U^{k-1},V^{k-1})\}_{k\in\mathbb{N}}$ is nonincreasing. So, the limit $\varpi^*$ exists. Fix an arbitrary $(\overline{U},\overline{V},\overline{Y},\overline{Z})\in\Upsilon$. There is an index set $\mathcal{K}\subseteq\mathbb{N}$ such that $({U}^{k},{V}^{k},{U}^{k-1},{V}^{k-1}) \rightarrow(\overline{U},\overline{V},\overline{Y},\overline{Z})$ when $\mathcal{K}\ni k\rightarrow\infty$. By the feasibility of $\overline{U}$ to \eqref{Uk-subprob}, for each $k$,
\begin{align*}
&\langle\nabla_{\!U}F(\widetilde{U}^{k-1},V^{k-1}),U^{k}\rangle +\frac{\mu}{2}\|U^{k}\|_F^2+\lambda\|{U}^{k}\|_{2,0}+ \frac{\gamma_{1,k-1}}{2}\|U^{k}-\widetilde{U}^{k-1}\|_F^2\\
&\le\langle\nabla_{\!U}F(\widetilde{U}^{k-1},V^{k-1}),\overline{U}\rangle +\frac{\mu}{2}\|\overline{U}\|_F^2+\lambda\|\overline{U}\|_{2,0} +\frac{\gamma_{1,k-1}}{2}\|\overline{U}-\widetilde{U}^{k-1}\|_F^2.
\end{align*}
Passing to the limit $k\xrightarrow[\mathcal{K}]{}\infty$ and using the boundedness of $\{\gamma_{1,k}\}$, we obtain \( \limsup_{k\xrightarrow[\mathcal{K}]{}\infty}\|{U}^{k}\|_{2,0} \leq\|\overline{U}\|_{2,0}. \) In addition, by the lower semicontinuity of $\|\cdot\|_{2,0}$, we have $\liminf_{k\xrightarrow[\mathcal{K}]{}\infty}\|{U}^{k}\|_{2,0}\ge\|\overline{U}\|_{2,0}$. Thus, $\lim_{k\xrightarrow[\mathcal{K}]{}\infty}\|{U}^{k}\|_{2,0} =\|\overline{U}\|_{2,0}$. Similarly, we also have $\lim_{k\xrightarrow[\mathcal{K}]{}\infty}\|V^{k}\|_{2,0} =\|\overline{V}\|_{2,0}$. Together with the expression of $\Xi_{\lambda,\mu}$, $\lim_{k\xrightarrow[\mathcal{K}]{}\infty}\Xi_{\lambda,\mu}({U}^{k},{V}^{k},{U}^{k-1},{V}^{k-1}) =\Xi_{\lambda,\mu}(\overline{U},\overline{V},\overline{Y},\overline{Z})$. Since the limit of the sequence $\{\Xi_{\lambda,\mu}({U}^{k},{V}^{k},{U}^{k-1},{V}^{k-1})\}_{k\in\mathbb{N}}$ is exactly $\varpi^*$, this implies that $\Xi_{\lambda,\mu}(\overline{U},\overline{V},\overline{Y},\overline{Z})=\varpi^*$. By the arbitrariness of $(\overline{U},\overline{V},\overline{Y},\overline{Z})$ on the set $\Upsilon$, it follows that $\Xi_{\lambda,\mu}\equiv\varpi^*$ on $\Upsilon$.

\noindent
{\bf(iv)} By the expression of $\Xi_{\lambda,\mu}$ and \cite[Exercise 8.8]{RW98}, for any $(U,V,U',V')$ it holds that
\begin{align}\label{gradXi-UV1}
\partial\Xi_{\lambda,\mu}(U,V,U',V') =\left[\begin{matrix}
\nabla f(UV^\mathbb{T})V+\!\mu U + \lambda \partial \|U\|_{2,0}+\rho_1\alpha_1(U-U')\\
(\nabla f(UV^\mathbb{T}))^{\mathbb{T}}U+\!\mu V+ \lambda\partial\|V\|_{2,0}+\rho_2\alpha_2(V-V')\\
\rho_1\alpha_1(U'-U)\\
\rho_2\alpha_2(V'-V)
\end{matrix}\right].
\end{align}
In addition, from the definition of $U^{k+1}$ and $V^{k+1}$ in Steps 1 and 2, for each $k\in\mathbb{N}_0$ it follows that
\begin{subnumcases}{}
\label{optUk-equa}
0\in\nabla\!f(\widetilde{U}^{k}(V^{k})^\mathbb{T})V^{k}+\mu U^{k+1} +\gamma_{1,k}(U^{k+1}-\widetilde{U}^k)+\lambda\partial\|U^{k+1}\|_{2,0};\quad\\
\label{optVk-equa}
0\in[\nabla\!f(U^{k+1}(\widetilde{V}^{k})^\mathbb{T})]^\mathbb{T}U^{k+1}+\mu V^{k+1} +\gamma_{2,k}(V^{k+1}-\widetilde{V}^k)+\lambda\partial\|V^{k+1}\|_{2,0}.
\end{subnumcases}
Hence, \( \big(\Gamma_U^{k+1},\Gamma_V^{k+1},\rho_1\alpha_1(U^{k}\!-\!U^{k+1}), \rho_2\alpha_2(V^{k}\!-\!V^{k+1})\big) \in\partial\Xi_{\lambda,\mu}(U^{k+1},V^{k+1},U^{k},V^{k}) \) with
\begin{subnumcases}{}
\Gamma_U^{k+1}\!=\!\nabla f(U^{k+1}({V}^{k+1})^\mathbb{T})V^{k+1}\!-\!\nabla f(\widetilde{U}^{k}(V^{k})^\mathbb{T})V^{k}\!-\!\gamma_{1,k}(U^{k+1}\!-\!\widetilde{U}^k)\!+\!\rho_1\alpha_1(U^{k+1}\!-\!U^{k});\nonumber\\
\Gamma_V^{k+1}\!=\!\big[\!\nabla\! f(U^{k+1}({V}^{k+1})^\mathbb{T}\!)\!-\!\nabla f(U^{k+1}(\!\widetilde{V}^{k}\!)^\mathbb{T}\!)\big]^\mathbb{T}U^{k+1} \!-\!\gamma_{2,k}(V^{k+1}\!-\!\widetilde{V}^k)\!+\!\rho_2\alpha_2(V^{k+1}\!-\!V^{k}).\nonumber
\end{subnumcases}
This means that the distance ${\rm dist}\big(0,\partial \Xi_{\lambda,\mu}(U^{k+1},V^{k+1},U^k,V^k)\big)$ is upper bounded by
\begin{align*}
&\sqrt{\|\Gamma_U^{k+1}\|_F^2+\|\Gamma_V^{k+1}\|_F^2+\rho_1^2\alpha_1^2\|U^{k}-U^{k+1}\|_F^2 +\rho_2^2\alpha_2^2\|V^{k}-V^{k+1}\|_F^2}\\
&\leq (\tau_{V^{k}}\!+\!\gamma_{1,k})\|U^{k+1}\!-\widetilde{U}^{k}\|_F +2\rho_1\alpha_1\|U^{k+1}\!-\!U^{k}\|_F \!+(\tau_{U^{k+1}}\!+\!\gamma_{2,k})\|V^{k+1}\!-\widetilde{V}^{k}\|_F \\
&\quad+(c_f+2\rho_2\alpha_2\!+\!\sqrt{\tau_{U^{k+1}}\tau_{V^{k}}})\|V^{k+1}-V^{k}\|_F\\
&\le(\tau_{V^{k}}+\gamma_{1,k}+2\rho_1\alpha_1)\|U^{k+1}\!-\!{U}^{k}\|_F +(\tau_{V^{k}}+\gamma_{1,k})\beta_k\|U^{k}-U^{k-1}\|_F\!\\
&\quad +(c_f\!\!+\!2\rho_2\alpha_2+\tau_{U^{k+1}}+\gamma_{2,k}\!+\!\sqrt{\tau_{U^{k+1}}\tau_{V^{k}}})\|V^{k+1}\!-\!V^{k}\|_F +(\tau_{U^{k+1}}\!+\!\gamma_{2,k})\beta_k\|V^{k}\!-\!{V}^{k-1}\|_F.
\end{align*}
This implies that the desired inequality holds, which completes the proof.
\end{proof}

\begin{remark}\label{remark2-AMM}
From the expression in \eqref{gradXi-UV1} and Lemma \ref{subdiff-Phi}, when $(\overline{U},\overline{V},\overline{U},\overline{V}) \in{\rm crit}\,\Xi_{\lambda,\mu}$, we have $(\overline{U},\overline{V})\in {\rm crit}\Phi_{\lambda,\mu}$. Together with Proposition \ref{prop2-UVk} (iv), when $\beta\in[0,\min(\overline{\beta},\widetilde{\beta})]$, if the sequence $\{(U^k,V^k)\}_{k\in\mathbb{N}}$ is convergent, then its limit is a critical point of $\Phi_{\lambda,\mu}$. By Remark \ref{remark-alpha} (b), when $f$ is convex, the constants $\overline{\beta}$ and $\widetilde{\beta}$ in Proposition \ref{prop2-UVk} can be further improved.
\end{remark}

Let $\theta(Z)\!:=(\|Z_1\|,\ldots,\|Z_{r}\|)$ for $Z\in\mathbb{R}^{n\times r}$. The column $\ell_{2,0}$-norm, as the composition of the zero-norm with $\theta$, is semialgebraic since both the zero-norm and $\theta$ are semialgebraic. So, $\Xi_{\lambda,\mu}$ is semialgebraic and hence a KL function (see \cite[Section 4]{Attouch10}). By Proposition \ref{prop2-UVk} and Remark \ref{remark2-AMM}, using the same arguments as those for \cite[Theorem 3.2]{Attouch10} or \cite[Theorem 3.1]{LiuPong191} yields the following result.

\begin{theorem}\label{theorem-AMM}
Let $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$ be the sequence given by Algorithm \ref{AMM} with $\beta\in[0,\min(\overline{\beta},\widetilde{\beta})]$ for solving problem \eqref{MS-FL20} associated to $\lambda$ and $\mu$. Then, the sequence $\{(U^k,V^k)\}_{k\in\mathbb{N}_0}$ is convergent and its limit, say $(U^*,V^*)$, is a critical point of $\Phi_{\lambda,\mu}$, which by Proposition \ref{prop2-Phi} is also a local minimizer of problem \eqref{MS-FL20} if $(U^*,V^*)$ is a local minimizer of $F_{\mu}$.
\end{theorem}

\section{A hybrid alternating MM method}\label{sec4}

Algorithm \ref{AMM} is actually a majorized alternating proximal method for solving \eqref{MS-FL20}.
Indeed, since $\nabla\!f$ is Lipschitz continuous with modulus $L_{\!f}$,
\begin{equation*}
f(UV^\mathbb{T})\le \widehat{F}(U,V,G,H):= f(GH^\mathbb{T})+\langle\nabla\!f(GH^\mathbb{T}),UV^\mathbb{T}\!-\!GH^\mathbb{T}\rangle +\frac{L_{\!f}}{2}\|UV^\mathbb{T}\!-\!GH^\mathbb{T}\|_F^2
\end{equation*}
for any $(U,V),(G,H)\in\mathbb{R}^{n\times r}\times \mathbb{R}^{m\times r}$, which by the expression of $\Phi_{\lambda,\mu}$ implies that
\begin{align*}
\Phi_{\lambda,\mu}(U,V)\le\widehat{\Phi}_{\lambda,\mu}(U,V,G,H):= \widehat{F}(U,V,G,H)+\frac{\mu}{2}\big(\|U\|_F^2\!+\!\|V\|_F^2\big)+\lambda\big(\|U\|_{2,0}\!+\!\|V\|_{2,0}\big).
\end{align*}
This, together with $\Phi_{\lambda,\mu}(G,H)=\widehat{\Phi}_{\lambda,\mu}(G,H,G,H)$, means that $\widehat{\Phi}_{\lambda,\mu}(\cdot,\cdot,G,H)$ is a majorization of $\Phi_{\lambda,\mu}$ at $(G,H)$. In fact, the subproblems \eqref{Uk-subprob}-\eqref{Vk-subprob} of Algorithm \ref{AMM} precisely minimize $\widehat{\Phi}_{\lambda,\mu}(\cdot,\cdot,G,H)$ in an alternating proximal way. Specifically, they are respectively equivalent to
\begin{align*}
U^{k+1}&\in\mathop{\arg\min}_{U\in \mathbb{R}^{n\times r}} \Big\{\widehat{\Phi}_{\lambda,\mu}(U,V^k,\widetilde{U}^k,V^k) +\frac{1}{2}\|U-\widetilde{U}^k\|_{\mathcal{A}_{k}}^2\Big\},\\
V^{k+1}&\in\mathop{\arg\min}_{V\in \mathbb{R}^{m\times r}} \Big\{\widehat{\Phi}_{\lambda,\mu}({U}^{k+1},V,{U}^{k+1},\widetilde{V}^{k}) +\frac{1}{2}\|V-\widetilde{V}^{k}\|_{\mathcal{B}_{k+1}}^2\Big\}
\end{align*}
where $\mathcal{A}_{k}(X)\!:=\!X(\gamma_{1,k}I\!-\!L_{\!f}(V^k)^{\mathbb{T}}V^k)$ for $X\in\mathbb{R}^{n\times r}$ and $\mathcal{B}_{k}(Z)\!:=\!Z(\gamma_{2,k-1}I\!-\!L_{\!f}(U^k)^{\mathbb{T}}U^k)$ for $Z\in\mathbb{R}^{m\times r}$ are self-adjoint positive definite linear operators. The positive definite proximal terms $\frac{1}{2}\|U\!-\widetilde{U}^k\|_{\mathcal{A}_{k}}^2$ and $\frac{1}{2}\|V\!-\widetilde{V}^{k}\|_{\mathcal{B}_{k+1}}^2$ are introduced to ensure that the subproblems have a closed-form solution. Next we develop another MAP method for problem \eqref{MS-FL20} by minimizing $\widehat{\Phi}_{\lambda,\mu}$.

\begin{algorithm}[h]
\caption{\label{MAPM}{\bf (MAP method for solving \eqref{MS-FL20})}}
\textbf{Initialization:} Select the parameters $\varrho\in(0,1),\underline{\gamma_1}>0, \underline{\gamma_2}>0,\gamma_{1,0}>0$ and $\gamma_{2,0}>0$. Choose $P^0\!\in\mathbb{O}^{m\times r},Q^0\!\in\mathbb{O}^{n\times r},D^0=I_{r}$. Let $\overline{U}^0=Q^0$ and $\overline{V}^0=P^0$. Set $k:=0$.\\
\textbf{while} the stopping conditions are not satisfied \textbf{do}
\begin{itemize}
\item[\bf 1.] Compute \( U^{k+1}\in\displaystyle{\mathop{\arg\min}_{U\in\mathbb{R}^{n\times r}}} \Big\{\widehat{\Phi}_{\lambda,\mu}(U,\overline{V}^{k},\overline{U}^k,\overline{V}^k) +\frac{\gamma_{1,k}}{2}\|U-\overline{U}^k\|_F^2\Big\}. \)
\item[\bf 2.] Perform an SVD of $U^{k+1}D^k$ such that $U^{k+1}D^k=\widehat{P}^{k+1}(\widehat{D}^{k+1})^2(\widehat{Q}^{k+1})^{\mathbb{T}}$, and set
\[
\widehat{U}^{k+1}:=\widehat{P}^{k+1}\widehat{D}^{k+1}\ \ {\rm and}\ \ \widehat{V}^{k+1}\!:=P^k\widehat{Q}^{k+1}\widehat{D}^{k+1}.
\]
\item[\bf 3.] Compute \( V^{k+1}\in\displaystyle{\mathop{\arg\min}_{V\in \mathbb{R}^{m\times r}}} \Big\{\widehat{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},V,\widehat{U}^{k+1},\widehat{V}^{k+1}) +\frac{\gamma_{2,k}}{2}\|V-\widehat{V}^{k+1}\|_F^2\Big\}. \)
\item[\bf 4.]
Perform an SVD of $V^{k+1}\widehat{D}^{k+1}$ such that $V^{k+1}\widehat{D}^{k+1}=P^{k+1}(D^{k+1})^2(Q^{k+1})^{\mathbb{T}}$, and set
\[
\overline{U}^{k+1}\!:=\widehat{P}^{k+1}Q^{k+1}D^{k+1}\ \ {\rm and}\ \ \overline{V}^{k+1}\!:=P^{k+1}D^{k+1}.
\]
\item[\bf 5.] Set $\gamma_{1,k+1}=\max(\underline{\gamma_1},\varrho \gamma_{1,k})$ and $\gamma_{2,k+1}=\max(\underline{\gamma_2},\varrho \gamma_{2,k})$. Let $k\leftarrow k+1$.
\end{itemize}
\textbf{end while}
\end{algorithm}

\begin{remark}\label{remark-MAPM}
{\bf(a)} For each $k\in\mathbb{N}_0$, let $\widehat{X}^{k+1}\!:=U^{k+1}(\overline{V}^{k})^{\mathbb{T}}\!=U^{k+1}D^k(P^k)^{\mathbb{T}}$. Since $P^k\in\mathbb{O}^{m\times r}$, Step 2 is actually performing an SVD of $\widehat{X}^{k+1}$ to seek a new factor pair $(\widehat{U}^{k+1},\widehat{V}^{k+1})$ such that the subproblem in Step 3 has a closed-form solution. As will be shown in the proof of Proposition \ref{prop1-MAPM}, $(\widehat{U}^{k+1},\widehat{V}^{k+1})$ is at least as good as $({U}^{k+1},\overline{V}^{k})$ for the function $\widehat{\Phi}_{\lambda,\mu}(\cdot,\cdot,\overline{U}^k,\overline{V}^k)$. Similarly, by letting $\overline{X}^{k+1}\!:=\widehat{U}^{k+1}(V^{k+1})^{\mathbb{T}} =\widehat{P}^{k+1}\widehat{D}^{k+1}(V^{k+1})^{\mathbb{T}}$, Step 4 is performing an SVD of $\overline{X}^{k+1}$ to seek a factor pair $(\overline{U}^{k+1},\overline{V}^{k+1})$ such that the subproblem in Step 1 has a closed-form solution. To the best of our knowledge, such a technique already appeared in the alternating least squares method of \cite{Hastie15}.

\noindent
{\bf(b)} Let $\overline{Z}^k\!:=\!\overline{X}^k\!-\!L_{\!f}^{-1}\nabla\!f(\overline{X}^k)$ for $k\!\in\mathbb{N}_{0}$. By the expression of $\widehat{\Phi}_{\lambda,\mu}$, Step 1 is equivalent to seeking
\begin{align*}
U^{k+1}&\in\mathop{\arg\min}_{U\in \mathbb{R}^{n\times r}} \Big\{\frac{L_{\!f}}{2}\big\|\overline{Z}^kP^k\!-U\!D^k\big\|_F^2 +\frac{\mu}{2}\|U\|_F^2+\lambda\|U\|_{2,0} +\frac{\gamma_{1,k}}{2}\big\|U-\overline{U}^k\big\|_F^2\Big\}\\
&=\mathop{\arg\min}_{U\in \mathbb{R}^{n\times r}} \Big\{\frac{1}{2}\big\|G^k-U\Lambda^k\big\|_F^2+\lambda\|U\|_{2,0}\Big\}
\end{align*}
with $G^k\!:=\!\big(L_{\!f}\overline{Z}^kP^k\!+\!\gamma_{1,k}\widehat{P}^kQ^k\big)D^k(\Lambda^k)^{-1}$ for $\widehat{P}^0=I$ and $\Lambda^k\!:=\!\big[L_{\!f}(D^k)^2\!+\!(\mu+\!\gamma_{1,k})I\big]^{1/2}$. By this, it is easy to calculate that the columns of $U^{k+1}$ take the following form
\begin{equation}\label{partU-equal1}
U_i^{k+1}=\left\{\!\begin{array}{cl}
\frac{1}{\sigma_i(\Lambda^k)}G_i^k & {\rm if}\ \|G_i^k\|>\sqrt{2\lambda};\\
0 &{\rm if}\ \|G_i^k\|\le\sqrt{2\lambda}
\end{array}\right.\ \ {\rm for}\ \ i=1,2,\ldots,r.
\end{equation}
Similarly, by letting $\Delta^{k+1}:=\!\big[L_{\!f}(\widehat{D}^{k+1})^2+(\mu+\gamma_{2,k})I\big]^{1/2}$, $\widehat{Z}^{k+1}\!:=\widehat{X}^{k+1}\!-\!L_{\!f}^{-1}\nabla\!f(\widehat{X}^{k+1})$ and $H^{k+1}\!:=\!\big(L_{\!f}(\widehat{Z}^{k+1})^{\mathbb{T}}\widehat{P}^{k+1}\!+\!\gamma_{2,k}P^k\widehat{Q}^{k+1}\big) \widehat{D}^{k+1}(\Delta^{k+1})^{-1}$ for $k\!\in\mathbb{N}_0$, Step 3 is equivalent to seeking
\[
V^{k+1}\in\mathop{\arg\min}_{V\in \mathbb{R}^{m\times r}} \Big\{\frac{1}{2}\big\|H^{k+1}-V\Delta^{k+1}\big\|_F^2+\lambda\|V\|_{2,0}\Big\},
\]
which implies that the columns of the matrix $V^{k+1}$ take the following form
\begin{equation}\label{partV-equal1}
V_i^{k+1}=\left\{\!\begin{array}{cl}
\frac{1}{\sigma_i(\Delta^{k+1})}H_i^{k+1} & {\rm if}\ \|H_i^{k+1}\|>\sqrt{2\lambda};\\
0 &{\rm if}\ \|H_i^{k+1}\|\le\sqrt{2\lambda}
\end{array}\right.\ \ {\rm for}\ \ i=1,2,\ldots,r.
\end{equation}
Thus, the main computation cost of Algorithm \ref{MAPM} at each iteration involves an SVD of an $n\times r$ matrix and of an $m\times r$ matrix, one multiplication of $m\times n$ and $n\times r$ matrices, one multiplication of $n\times m$ and $m\times r$ matrices, two multiplications of $n\times r$ and $r\times m$ matrices, one multiplication of $m\times r$ and $r\times r$ matrices, and one multiplication of $n\times r$ and $r\times r$ matrices.
\end{remark}

The following proposition states the properties of the sequence generated by Algorithm \ref{MAPM}.

\begin{proposition}\label{prop1-MAPM}
Let $\big\{(U^k,V^k,\widehat{U}^{k},\widehat{V}^{k}, \overline{U}^{k},\overline{V}^{k})\big\}_{k\in\mathbb{N}}$ be generated by Algorithm \ref{MAPM}. Then,
\begin{itemize}
\item[(i)] for each $k\in\mathbb{N}$, it holds that
\begin{align}\label{descrease-ineq1}
&{\Phi}_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k}) \ge {\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1}) +\frac{\gamma_{1,k}}{2}\|U^{k+1}-\overline{U}^k\|_F^2\nonumber\\
&\quad\ge \widehat{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}) +\frac{\gamma_{2,k}}{2}\|V^{k+1}-\widehat{V}^{k+1}\|_F^2+\frac{\gamma_{1,k}}{2}\|U^{k+1}\!-\overline{U}^k\|_F^2\nonumber\\
&\quad\ge{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1}) +\frac{\gamma_{1,k}}{2}\|U^{k+1}\!-\overline{U}^k\|_F^2 +\frac{\gamma_{2,k}}{2}\|V^{k+1}\!-\widehat{V}^{k+1}\|_F^2,\nonumber
\end{align}
and hence $\{\Phi_{\lambda,\mu}(\overline{U}^k,\overline{V}^k)\}_{k\in\mathbb{N}}$ and $\{\Phi_{\lambda,\mu}(\widehat{U}^k,\widehat{V}^k)\}_{k\in\mathbb{N}}$ are nonincreasing and convergent;
\item[(ii)] the sequence $\big\{(U^k,V^k,\widehat{U}^{k},\widehat{V}^{k}, \overline{U}^{k},\overline{V}^{k})\big\}_{k\in\mathbb{N}}$ is bounded;
\item [(iii)] there exists $\overline{k}\in\!\mathbb{N}$ such that for all $k\ge\!\overline{k}$, \( J_{V^{k}}\!=\!J_{U^{k}}\!=\!J_{\widehat{U}^{k}}\!=\!J_{\widehat{V}^{k}} \!=\!J_{\overline{V}^{k}}\!=\!J_{\overline{U}^{k}}\!=\!J_{\overline{U}^{k+1}}.
\)
\end{itemize}
\end{proposition}
\begin{proof}
{\bf(i)} By using $\Phi_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k})= \widehat{\Phi}_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k},\overline{U}^{k},\overline{V}^{k})$ and the definition of $U^{k+1}$ and $V^{k+1}$,
\begin{subequations}
\begin{align}\label{Fval-U}
{\Phi}_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k}) \ge\widehat{\Phi}_{\lambda,\mu}(U^{k+1},\overline{V}^k,\overline{U}^{k},\overline{V}^{k}) +\frac{\gamma_{1,k}}{2}\|U^{k+1}-\overline{U}^k\|_F^2;\qquad\quad\\
\label{Fval-V}
{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1}) \ge\widehat{\Phi}_{\lambda,\mu}\big(\widehat{U}^{k+1},V^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}\big) +\frac{\gamma_{2,k}}{2}\|V^{k+1}-\widehat{V}^{k+1}\|_F^2.
\end{align}
\end{subequations}
By Remark \ref{remark-MAPM} (a) and Step 2, $\widehat{X}^{k+1}=U^{k+1}(\overline{V}^k)^\mathbb{T}=\widehat{U}^{k+1}(\widehat{V}^{k+1})^\mathbb{T}$. By the second equality,
\begin{equation}\label{Fequa1}
\widehat{F}(U^{k+1},\overline{V}^k,\overline{U}^{k},\overline{V}^{k}) =\widehat{F}(\widehat{U}^{k+1},\widehat{V}^{k+1},\overline{U}^{k},\overline{V}^{k}).
\end{equation}
In addition, by the definition of $\widehat{U}^{k+1}$ and $\widehat{V}^{k+1}$, equation \eqref{rank-chara} and \cite[Lemma 1]{Srebro05},
\begin{subnumcases}{}
\frac{1}{2}\big(\|U^{k+1}\|_F^2+\|\overline{V}^k\|_F^2\big) \geq\|\widehat{X}^{k+1}\|_* = \frac{1}{2}\big(\|\widehat{U}^{k+1}\|_F^2+\|\widehat{V}^{k+1}\|_F^2\big);\nonumber\\
\frac{1}{2}\big(\|U^{k+1}\|_{2,0}+\|\overline{V}^k\|_{2,0}\big) \geq{\rm rank}(\widehat{X}^{k+1}) = \frac{1}{2}\big(\|\widehat{U}^{k+1}\|_{2,0}+\|\widehat{V}^{k+1}\|_{2,0}\big).\nonumber
\end{subnumcases}
By combining the two inequalities with equality \eqref{Fequa1}, it is immediate to obtain that
\[
\widehat{\Phi}_{\lambda,\mu}(U^{k+1},\overline{V}^k,\overline{U}^{k},\overline{V}^{k}) \ge\widehat{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1},\overline{U}^{k},\overline{V}^{k}).
\]
Similarly, by Remark \ref{remark-MAPM} (a) and Step 4, $\overline{X}^{k+1}=\widehat{U}^{k+1}(V^{k+1})^\mathbb{T}=\overline{U}^{k+1}(\overline{V}^{k+1})^\mathbb{T}$, which along with the definition of $\overline{U}^{k+1}$ and $\overline{V}^{k+1}$ implies that the following inequality holds:
\[
\widehat{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},V^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}) \ge\widehat{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}).
\]
Now substituting the last two inequalities into \eqref{Fval-U} and \eqref{Fval-V} respectively yields that
\begin{subequations}
\begin{align}\label{Fval-equaU}
{\Phi}_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k}) \geq\widehat{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1},\overline{U}^{k},\overline{V}^{k}) +\frac{\gamma_{1,k}}{2}\|U^{k+1}-\overline{U}^k\|_F^2;\qquad\\
\label{Fval-equaV}
{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1}) \geq\widehat{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}) +\frac{\gamma_{2,k}}{2}\|V^{k+1}-\widehat{V}^{k+1}\|_F^2.
\end{align}
\end{subequations}
In addition, by the definition of $F$ and $\widehat{F}$, we have $F(\widehat{U}^{k+1}\!,\widehat{V}^{k+1})\!\leq \widehat{F}(\widehat{U}^{k+1}\!,\widehat{V}^{k+1},\overline{U}^{k}\!,\overline{V}^{k})$, and hence $\widehat{\Phi}_{\lambda,\mu}(\widehat{U}^{{k}+1},\widehat{V}^{k+1},\overline{U}^{k},\overline{V}^{k}) \ge{\Phi}_{\lambda,\mu}(\widehat{U}^{k+1},\widehat{V}^{k+1})$.
Along with \eqref{Fval-equaU}, we get the first inequality in part (i). From the first inequality of part (i), \eqref{Fval-equaV} and $\widehat{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1},\widehat{U}^{k+1},\widehat{V}^{k+1}) \ge{\Phi}_{\lambda,\mu}(\overline{U}^{k+1},\overline{V}^{k+1})$, we obtain the last two inequalities in part (i).

\noindent
{\bf(ii)} From Step 5 of Algorithm \ref{MAPM}, $\gamma_{1,k}\ge\underline{\gamma_1}$ and $\gamma_{2,k}\ge\underline{\gamma_2}$. Together with part (i), for each $k\in\mathbb{N}$,
\begin{align*}
{\Phi}_{\lambda,\mu}(\overline{U}^{0},\overline{V}^{0}) &\ge{\Phi}_{\lambda,\mu}(\widehat{U}^{1},\widehat{V}^{1}) \ge{\Phi}_{\lambda,\mu}(\overline{U}^{1},\overline{V}^{1}) \ge\cdots\\
&\ge{\Phi}_{\lambda,\mu}(\overline{U}^{k-1},\overline{V}^{k-1}) \ge {\Phi}_{\lambda,\mu}(\widehat{U}^{k},\widehat{V}^{k}) \ge {\Phi}_{\lambda,\mu}(\overline{U}^{k},\overline{V}^{k}).
\end{align*}
Recall that the function $\Phi_{\lambda,\mu}$ is coercive. So, the sequence $\{(\overline{U}^{k},\overline{V}^{k},\widehat{U}^{k},\widehat{V}^{k})\}_{k\in\mathbb{N}}$ is bounded. Together with part (i), it follows that the sequence $\{(U^k,V^k)\}_{k\in\mathbb{N}}$ is also bounded.

\noindent
{\bf(iii)} Fix an arbitrary $k\in\mathbb{N}$. We first argue that the following inclusions hold:
\begin{align}\label{UVk-indxset}
J_{\overline{U}^{k+1}}\subseteq J_{\widehat{U}^{k+1}}\subseteq J_{\overline{U}^{k}},\, J_{V^{k+1}}\subseteq J_{\widehat{V}^{k+1}}=J_{\widehat{U}^{k+1}} \subseteq J_{\overline{V}^{k}}\ \ {\rm and}\ \ J_{{U}^{k+1}}\subseteq J_{\overline{U}^{k}}.
\end{align}
By the definition of $(\overline{U}^k,\overline{V}^k)$ and $(\widehat{U}^k,\widehat{V}^k)$, it is easy to check that $J_{\overline{U}^{k}}=J_{\overline{V}^{k}}$ and $J_{\widehat{U}^{k}}=J_{\widehat{V}^{k}}$; moreover, since the diagonal entries of $D^k$ and $\widehat{D}^{k}$ are arranged in nonincreasing order, these four index sets all take the form $\{1,\ldots,s\}$ for some integer $s$, so inclusions among them follow from comparisons of their cardinalities. By \eqref{partU-equal1}, $J_{{U}^{k+1}}\subseteq J_{G^{k}}$, while by the expression of $G^k$ in Remark \ref{remark-MAPM} (b), we deduce that $J_{G^{k}}\subseteq J_{D^k}$. This, by $\overline{V}^k=P^kD^k$, implies that $J_{{U}^{k+1}}\subseteq J_{\overline{V}^k}=J_{\overline{U}^{k}}$. So, the last inclusion in \eqref{UVk-indxset} holds. By the expression of $V^{k+1}$ in \eqref{partV-equal1}, we deduce that $J_{V^{k+1}}\subseteq J_{\widehat{U}^{k+1}}=J_{\widehat{V}^{k+1}}$. Together with $J_{U^{k+1}}\subseteq J_{\overline{V}^{k}}$,
\[
\|\widehat{U}^{k+1}\|_{2,0}=\|\widehat{V}^{k+1}\|_{2,0} ={\rm rank}(\widehat{X}^{k+1})\leq\|U^{k+1}\|_{2,0} \le\|\overline{V}^k\|_{2,0}={\rm rank}(\overline{X}^{k}).
\]
Thus, $J_{\widehat{U}^{k+1}}\subseteq J_{\overline{V}^{k}}=J_{\overline{U}^{k}}$, and the second group of inclusions in \eqref{UVk-indxset} holds. Notice that
\[
\|\overline{U}^{k+1}\|_{2,0}=\|\overline{V}^{k+1}\|_{2,0} ={\rm rank}(\overline{X}^{k+1})\le \min(\|\widehat{U}^{k+1}\|_{2,0},\|V^{k+1}\|_{2,0}).
\]
So, $J_{\overline{U}^{k+1}}\subseteq J_{\widehat{U}^{k+1}}$. Since $J_{\widehat{U}^{k+1}}\subseteq J_{\overline{V}^{k}}=J_{\overline{U}^{k}}$, the first group of inclusions in \eqref{UVk-indxset} holds. Moreover,
\begin{equation}\label{UV-F20value}
\|\overline{U}^{k+1}\|_{2,0}\leq\|{V}^{k+1}\|_{2,0}\leq \|\widehat{V}^{k+1}\|_{2,0}=\|\widehat{U}^{k+1}\|_{2,0} \le\|U^{k+1}\|_{2,0}\leq\|\overline{U}^{k}\|_{2,0}.
\end{equation}
This means that the sequence $\{\|\overline{U}^{k}\|_{2,0}\}_{k\in\mathbb{N}}$ is nonincreasing and convergent.
By using \eqref{UV-F20value} again, $\lim_{k\to\infty}\|U^{k}\|_{2,0}=\lim_{k\to\infty}\|V^{k}\|_{2,0} =\lim_{k\to\infty}\|\overline{U}^k\|_{2,0}=\lim_{k\to\infty}\|\widehat{U}^k\|_{2,0}$. Since $\{\|\overline{U}^k\|_{2,0}\}$ is a nonincreasing sequence of nonnegative integers, it is constant for all $k$ large enough; together with \eqref{UVk-indxset}, we obtain the desired result.
\end{proof}

Proposition \ref{prop1-MAPM} (iii) states that the nonzero column index set of $\{(\overline{U}^k,\overline{V}^k)\}_{k\in\mathbb{N}}$ becomes stable for all $k$ large enough. Inspired by this, we develop a hybrid AMM method in which Algorithm \ref{MAPM} is first used to generate a pair $(\overline{U}^k,\overline{V}^k)$ with a stable nonzero column index set, and then an alternating MM method similar to Algorithm \ref{AMM}, started from the nonzero columns of $(\overline{U}^k,\overline{V}^k)$, is applied to
\begin{equation}\label{Fmu-min}
\min_{U\in\mathbb{R}^{n\times\kappa},V\in\mathbb{R}^{m\times\kappa}}F_{\mu}(U,V) \ \ {\rm with}\ \kappa=|J_{\overline{U}^k}|,
\end{equation}
which is an unconstrained smooth problem. The iteration steps of the hybrid AMM method are as follows.

\begin{algorithm}[H]
\caption{\label{HMAP}{\bf (Hybrid AMM method for solving \eqref{MS-FL20})}}
\textbf{Initialization:} Seek an output $(\overline{U}^k,\overline{V}^k)$ of Algorithm \ref{MAPM} for \eqref{MS-FL20} with a stable nonzero column index set $J=J_{\overline{U}^k}=J_{\overline{V}^k}$, and let $\kappa=|J|$. Set $(U^{-1},V^{-1})=(U^0,V^0):=(\overline{U}^k_{\!J},\overline{V}^k_{\!J})$. Choose $\overline{\gamma}>0$ and $\beta_0\in[0,\beta)$ with $\beta\in[0,1]$. Let $l:=0$. \\
\textbf{while} the stopping conditions are not satisfied \textbf{do}
\begin{itemize}
\item[\bf 1.] Select $\gamma_{1,l}\in(\tau_{\!V^{l}},\overline{\gamma})$. Let $\widetilde{U}^l\!:=U^l+\beta_l(U^l-U^{l-1})$ and compute
\begin{equation}\label{Ul-subprob}
U^{l+1}\in\mathop{\arg\min}_{U\in\mathbb{R}^{n\times \kappa}} \Big\{\langle\nabla_{\!U}F(\widetilde{U}^l,V^l),U\rangle +\frac{\mu}{2}\|U\|_F^2+\frac{\gamma_{1,l}}{2}\|U\!-\!\widetilde{U}^l\|_F^2\Big\}.
\end{equation}
\item[\bf 2.] Select $\gamma_{2,l}\in(\tau_{\!U^{l+1}},\overline{\gamma})$. Let $\widetilde{V}^l\!:=V^l+\beta_l(V^l-V^{l-1})$ and compute
\begin{equation}\label{Vl-subprob}
V^{l+1}\!\in\mathop{\arg\min}_{\!V\in\mathbb{R}^{m\times \kappa}} \Big\{\langle\nabla_{\!V}F(U^{l+1},\widetilde{V}^l),V\rangle +\frac{\mu}{2}\|V\|_F^2 +\frac{\gamma_{2,l}}{2}\|V\!-\!\widetilde{V}^l\|_F^2\Big\}.
\end{equation}
\item[\bf 3.] Update $\beta_l$ by $\beta_{l+1}\in[0,\beta)$ and let $l\leftarrow l+1$.
\end{itemize}
\textbf{end while}
\end{algorithm}

\begin{remark}\label{remark-HMAP}
When $r$ is a rough upper estimate of the optimal (or true) rank, the value of $\kappa$ is usually much smaller than $r$ and close to the optimal (or true) rank due to the column $\ell_{2,0}$-norm term in \eqref{MS-FL20}. Thus, the computation cost of Algorithm \ref{HMAP} is expected to be much less than that of Algorithms \ref{AMM} and \ref{MAPM}. In particular, by following the same arguments as those for Propositions \ref{prop1-UVk} and \ref{prop2-UVk}, one may show that the sequence $\{(U^{l},V^{l})\}_{l\in\mathbb{N}}$ generated by Algorithm \ref{HMAP} is globally convergent and that its limit, say $(U^*,V^*)$, is also a critical point of $\Phi_{\lambda,\mu}$ associated to $r=\kappa$ by Proposition \ref{prop1-Phi}.
\end{remark}

\section{Numerical experiments}\label{sec5}

In this section, we test the performance of Algorithms \ref{AMM} and \ref{HMAP} by applying them to matrix completion problems under a general sampling scheme; our codes can be downloaded from \url{https://github.com/SCUT-OptGroup/UVFL20}. Notice that the matrix max-norm has been adopted in \cite{Lee10,Srebro10,Fang18} as a convex surrogate for the rank function, and the max-norm regularized approach was demonstrated in \cite{Fang18} to outperform the nuclear-norm based one for matrix completion and collaborative filtering under non-uniform sampling schemes. To confirm the efficiency of the column $\ell_{2,0}$-norm regularized model \eqref{MS-FL20}, we compare the obtained results with those of the ADMM developed in \cite{Fang18} for the SDP reformulation of the max-norm penalized LS model and those of the alternating least squares (ALS) method developed in \cite{Hastie15} for the factorized model \eqref{MC-Fnorm}. The ALS method has the same iteration steps as Algorithm \ref{MAPM} except that the column $\ell_{2,0}$-norm terms in $\widehat{\Phi}_{\lambda,\mu}$ and the proximal terms in Steps 1 and 3 are removed. All the numerical tests of this section were performed in MATLAB on a desktop computer running a 64-bit Windows operating system with an Intel(R) Core(TM) i7-7700 CPU 3.60GHz and 16 GB RAM.

\subsection{Matrix completion under a general sampling scheme}\label{sec5.1}

We assume that a random index set $\Omega=\big\{(i_t,j_t)\!: t=1,\ldots,p\big\}\subseteq[n]\times [m]$ is available, and that the samples of the indices are drawn independently from a general sampling distribution $\Pi=\{\pi_{kl}\}_{k\in[n],l\in[m]}$ on $[n]\times[m]$. We adopt the same non-uniform sampling scheme as in \cite{Fang18}, i.e., for each $(k,l)\in[n]\times[m]$, take $\pi_{kl}=p_kp_l$ with
\begin{equation}\label{sampling-scheme}
\textrm{Scheme 1}\!:\ p_k=\!\left\{\begin{array}{ll}
2p_0& {\rm if}\ k\le\frac{n}{10}, \\
4p_0& {\rm if}\ \frac{n}{10}<k\le \frac{n}{5},\\
p_0& {\rm otherwise}\\
\end{array}\right.\ {\rm or}\ \
\textrm{Scheme 2}\!:\ p_k=\!
\left\{\begin{array}{ll}
3p_0& {\rm if}\ k\le\frac{n}{10}, \\
9p_0& {\rm if}\ \frac{n}{10}<k\le \frac{n}{5},\\
p_0& {\rm otherwise}\\
\end{array}\right.
\end{equation}
where $p_0>0$ is a constant such that $\sum_{k=1}^{n}p_k=1$, and $p_l$ is defined in a similar way under the two schemes. With the index set $\Omega$, for any $X\in\mathbb{R}^{n\times m}$, $X_{\Omega}\in\mathbb{R}^{n\times m}$ denotes the projection of $X$ onto $\Omega$, i.e., $[X_{\Omega}]_{ij}=X_{ij}$ if $(i,j)\in\Omega$ and $[X_{\Omega}]_{ij}=0$ otherwise. The function $f$ in \eqref{MC-Fnorm} and \eqref{MS-FL20} is given by
\[
f(X)=\frac{1}{2}\big\|X_{\Omega}-M_{\Omega}\big\|_F^2\quad\ \forall X\in\mathbb{R}^{n\times m}
\]
where $M_{ij}$ for $(i,j)\in\Omega$ are the observed entries. For the simulated data, we assume that the entries $M_{i_t,j_t}$ with $(i_t,j_t)\in\Omega$ for $t=1,2,\ldots,p$ are generated via the following observation model
\begin{equation}\label{observe}
M_{i_t,j_t}=M_{i_t,j_t}^*+\sigma({\xi_{t}}/{\|\xi\|})\|M_{\Omega}^*\|_F,
\end{equation}
where $M^*\!\in\mathbb{R}^{n\times m}$ is the true matrix of rank $r^*$, $\xi=(\xi_1,\ldots,\xi_p)^{\mathbb{T}}$ is the noise vector whose entries are i.i.d. random variables obeying $N(0,1)$, and $\sigma>0$ is the noise level.
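For the reader's convenience, the row marginals $p_k$ of Scheme 1 and a draw of $\Omega$ can be generated, e.g., by the following MATLAB sketch (our illustration only; \texttt{randsample} is from the Statistics and Machine Learning Toolbox):
\begin{verbatim}
% Marginals of Scheme 1 in the display above, up to the constant p0.
pk = ones(n,1);  pk(1:floor(n/10)) = 2;  pk(floor(n/10)+1:floor(n/5)) = 4;
pl = ones(m,1);  pl(1:floor(m/10)) = 2;  pl(floor(m/10)+1:floor(m/5)) = 4;
pk = pk/sum(pk);  pl = pl/sum(pl);       % normalize so the weights sum to 1
% Draw p index pairs (i_t,j_t) i.i.d. from pi_{kl} = p_k*p_l.
rows = randsample(n, p, true, pk);
cols = randsample(m, p, true, pl);
\end{verbatim}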
\subsection{Implementation of algorithms}\label{sec5.2}
For the ADMM in \cite{Fang18}, we use the default stopping criterion, starting point and parameters in the code provided by Prof.\ Fang. As mentioned before, the ADMM is developed for solving the SDP reformulation of the max-norm penalized LS model:
\begin{equation}\label{SDP-maxnorm}
\min_{Z\in\mathbb{S}_{+}^{n+m}} \Big\{\frac{1}{2}\big\|Z_{\Omega}^{12}-M_{\Omega}\big\|_F^2 +\lambda\|{\rm diag}(Z)\|_\infty\ \ {\rm s.t.}\ \ \|Z^{12}\|_{\infty}\le\alpha\Big\}
\end{equation}
where $Z=\left(\begin{matrix} Z^{11}& Z^{12}\\ (Z^{12})^{\mathbb{T}}& Z^{22} \end{matrix}\right)$ with $Z^{11}\in\mathbb{S}^{n}$, $Z^{22}\in\mathbb{S}^m$ and $Z^{12}\in\mathbb{R}^{n\times m}$, and $\alpha>0$ is an upper bound on the elementwise $\ell_{\infty}$-norm of the unknown true matrix $M^*$. It is worthwhile to point out that the ADMM code actually solves model \eqref{SDP-maxnorm} with a varying $\lambda$ instead of a fixed one. Next we focus on the implementation details of the other three algorithms. By comparing \eqref{optUk-equa}-\eqref{optVk-equa} with the optimality conditions of problem \eqref{MS-FL20}, it is not hard to obtain that
\begin{subnumcases}{}
E_{U}^{k+1}\in\nabla\!f(U^{k+1}(V^{k+1})^\mathbb{T})V^{k+1}+\mu U^{k+1} +\lambda\partial\|U^{k+1}\|_{2,0};\nonumber\\
E_{V}^{k+1}\in[\nabla\!f(U^{k+1}(V^{k+1})^\mathbb{T})]^\mathbb{T}U^{k+1}+\mu V^{k+1} +\lambda\partial\|V^{k+1}\|_{2,0}\nonumber
\end{subnumcases}
where
\begin{align*}
E_{U}^{k+1}&:=\big[\nabla\!f(U^{k+1}(V^{k+1})^\mathbb{T})V^{k+1} -\nabla\!f(\widetilde{U}^{k}(V^{k})^\mathbb{T})V^{k}\big]+\gamma_{1,k}(\widetilde{U}^k\!-\!U^{k+1});\\
E_{V}^{k+1}&:=\big[\nabla\!f(U^{k+1}(V^{k+1})^\mathbb{T})^\mathbb{T}U^{k+1} -\nabla\!f(U^{k+1}(\widetilde{V}^{k})^\mathbb{T})^\mathbb{T}U^{k+1}\big] +\gamma_{2,k}(\widetilde{V}^k\!-\!V^{k+1}).
\end{align*}
In view of this, we terminate Algorithm \ref{AMM} at $(U^{k},V^{k})$ whenever ${\rm rank}(X^{k})=\cdots={\rm rank}(X^{k-19})$ with $X^k=U^k(V^k)^{\mathbb{T}}$ and either of the following conditions holds:
\[
\frac{\|(E_{U}^{k},E_{V}^{k})\|_F}{1+\|X^k\|_F}\le\epsilon_1\ \ {\rm or}\ \ \frac{\max_{1\le i\le 19}|\Phi_{\lambda,\mu}(U^k,V^k)-\!\Phi_{\lambda,\mu}(U^{k-i},V^{k-i})|} {\max(1,\Phi_{\lambda,\mu}(U^k,V^k))}\le\epsilon.
\]
From the optimality conditions of \eqref{Fmu-min}, we terminate Algorithm \ref{HMAP} at $(U^{l},V^{l})$ when
\[
\frac{\|(E_{U}^{l},E_{V}^{l})\|_F}{1+\|U^l(V^l)^{\mathbb{T}}\|_F}\le\epsilon_3\ \ {\rm or}\ \ \frac{\max_{1\le i\le 19}|F_{\mu}(U^l,V^l)-F_{\mu}(U^{l-i},V^{l-i})|}{\max(1,F_{\mu}(U^l,V^l))}\le\epsilon.
\]
For the ALS method, we adopt a stopping criterion stronger than the one used in \cite{Hastie15}:
\[
{\rm rank}(X^{k})=\cdots={\rm rank}(X^{k-19})\ {\rm for}\ X^k\!=\overline{U}^k(\overline{V}^k)^{\mathbb{T}},\, \frac{\|\overline{U}^{k}(\overline{V}^{k})^\mathbb{T}-\overline{U}^{k-1}(\overline{V}^{k-1})^\mathbb{T}\|_F^2} {\|\overline{U}^{k-1}(\overline{V}^{k-1})^\mathbb{T}\|_F^2}\le \epsilon_2.
\]
We always choose $\epsilon_1=10^{-3},\epsilon=10^{-4},\epsilon_3=5\times 10^{-3}$ and $\epsilon_2=10^{-6}$ for the subsequent tests. For the parameters of Algorithm \ref{MAPM}, we choose $\varrho=0.8,\underline{\gamma_{1}} =\underline{\gamma_{2}}=10^{-8}$ and $\gamma_{1,0}=\gamma_{2,0}=0.01$.
For Algorithms \ref{AMM} and \ref{HMAP}, we choose $\gamma_{1,0}=\gamma_{2,0}=2.5\|M_{\Omega}\|$, and set $\gamma_{1,k}=\rho^{l_k}\gamma_{1,0}$ and $\gamma_{2,k}=\rho^{m_k}\gamma_{2,0}$ with $\rho=1.05$, where $l_k$ and $m_k$ are the smallest nonnegative integers $l$ and $m$ such that
\begin{align*}
F(U(\rho^l),V^k)\le F(\widetilde{U}^k,V^k)+\langle\nabla_U\!F(\widetilde{U}^k,V^k),U(\rho^l)-\widetilde{U}^k\rangle +\frac{\rho^{l}\gamma_{1,0}}{2}\|U(\rho^l)-\widetilde{U}^k\|_F^2,\qquad\\
F(U^{k+1},\!V(\rho^m))\le F(U^{k+1},\!\widetilde{V}^k)\!+\!\langle\nabla_V\!F(U^{k+1},\!\widetilde{V}^k),V(\rho^m)\!-\!\widetilde{V}^k\rangle \!+\!\frac{\rho^m\gamma_{2,0}}{2}\|V(\rho^m)\!-\!\widetilde{V}^k\|_F^2.
\end{align*}
Here, $U(\rho^l)$ is an optimal solution of subproblem \eqref{Uk-subprob} with $\gamma_{1,k}$ replaced by $\rho^l\gamma_{1,0}$, and $V(\rho^m)$ is an optimal solution of subproblem \eqref{Vk-subprob} with $\gamma_{2,k}$ replaced by $\rho^m\gamma_{2,0}$. Such a backtracking search strategy is also applicable to Algorithm \ref{HMAP}. We employ Nesterov's accelerated strategy \cite{Nesterov83} to yield $\beta_k$ for Algorithms \ref{AMM} and \ref{HMAP}, i.e., $\beta_k=\frac{t_k-1}{t_{k+1}}$ with $t_0=1$ and $t_{k+1}=\frac{1+\sqrt{4t_k^2+1}}{2}$. Although our global convergence results require a restriction on $\beta_k$, numerical tests show that Algorithms \ref{AMM} and \ref{HMAP} still converge for such $\beta_k$ even without it (the first few values of this momentum sequence are worked out at the end of this subsection). In view of this, we do not impose any restriction on such $\beta_k$ in the subsequent tests, and leave closing this gap as a topic for future research. For the subsequent tests, the starting point $(U^0,V^0)$ of Algorithm \ref{MAPM} and the ALS method is chosen to be $(P_{1},Q_{1})$, and $(U^0,V^0)$ of Algorithm \ref{AMM} is chosen to be $(P_{1}[\Sigma_{r}(M_{\Omega})]^{1/2},Q_{1}[\Sigma_{r}(M_{\Omega})]^{1/2})$, where $P_1$ and $Q_1$ are the matrices consisting of the $r$ leading left and right singular vectors of $M_{\Omega}$, respectively. For the parameters of model \eqref{MS-FL20}, we always choose $\mu=10^{-8}$ and $r=\min(n,m,150)$. To choose an appropriate $\lambda$, we first apply Algorithms \ref{AMM} and \ref{HMAP} to model \eqref{MS-FL20} with $\lambda=10c_{\lambda}{\rm SR}\|M_{\Omega}\|_F$ for different $c_{\lambda}$, where ${\rm SR}$ is the sampling ratio and $M_{\Omega}$ is generated randomly in the same way as in Section \ref{sec5.3} with $r^*=10$ and $n=m=1500$. The first two subfigures in Figure \ref{fig1} show that there is an interval of $\lambda$ such that Algorithms \ref{AMM} and \ref{HMAP}, applied to model \eqref{MS-FL20} with any $\lambda$ in this interval, yield a low relative error and a rank equal to the true rank $r^*$. Inspired by this, we always set $\lambda=10c_{\lambda}{\rm SR}\|M_{\Omega}\|_F$ with $c_{\lambda}$ chosen heuristically. In practice, one may use cross-validation to choose $c_{\lambda}$ such that the associated $\lambda$ lies in such an interval. In addition, we apply the ALS to model \eqref{MC-Fnorm} with the same $M_{\Omega}$ and $\lambda=c_{\lambda}{\rm SR}\|M_{\Omega}\|$ for different $c_{\lambda}$. The last subfigure in Figure \ref{fig1} shows that there is an interval of $\lambda$ such that the outputs of ALS applied to model \eqref{MC-Fnorm} with any $\lambda$ in this interval have a low relative error, but their ranks are higher than $r^*$. So, we set $\lambda=c_{\lambda}{\rm SR}\|M_{\Omega}\|$ for model \eqref{MC-Fnorm} with $c_{\lambda}$ chosen heuristically so that the associated $\lambda$ lies in this interval.
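As announced above, it is instructive to work out the first few values of the momentum sequence. The recursion $t_0=1$ and $t_{k+1}=\frac{1+\sqrt{4t_k^2+1}}{2}$ gives
\[
t_1\approx 1.618,\ \ t_2\approx 2.194,\ \ t_3\approx 2.750,\qquad
\beta_0=\frac{t_0-1}{t_1}=0,\ \ \beta_1=\frac{t_1-1}{t_2}\approx 0.282,\ \ \beta_2=\frac{t_2-1}{t_3}\approx 0.434,
\]
and, since $t_{k+1}-t_k\to\frac{1}{2}$, the sequence $\beta_k$ increases to $1$ as $k\to\infty$. Hence such $\beta_k$ eventually exceeds any fixed threshold $\overline{\beta}<1$, which is why it is not covered by the restriction required in our convergence analysis.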
\begin{figure}[h]
\setlength{\abovecaptionskip}{0.2pt}
\centering
\includegraphics[height=5.5cm,width=6.0in]{figure_lambda.eps}
\caption{The relative error and rank curves of three solvers under different $\lambda$ for ${\rm SR}=0.2$}
\label{fig1}
\end{figure}
\subsection{Numerical results for simulated data}\label{sec5.3}
We test the four solvers on simulated data under the non-uniform sampling schemes in \eqref{sampling-scheme}. Specifically, we generate the true matrix $M^*$ by $M^*=M_{L}^*(M_{R}^*)^{\mathbb{T}}$, where $M_{L}^*\in\mathbb{R}^{n\times r^*}$ and $M_{R}^*\in\mathbb{R}^{m\times r^*}$ are matrices with each entry sampled independently from the standard normal distribution $N(0,1)$. The noisy observations $M_{i_t,j_t}$ with $(i_t,j_t)\in\Omega$ are obtained from \eqref{observe} with $\sigma=0.1$ and $\xi_t\sim N(0,1)$, where the sampling index set $\Omega$ is generated by Scheme 1. To evaluate the recovery performance, we adopt the relative error (RE) defined by $\frac{\|X^{\rm out}-M^*\|_F}{\|M^*\|_F}$, where $X^{\rm out}$ denotes the output of a solver. We consider different settings of $n, r^*$ and SR, and run simulations under each setting for {\bf five} different instances. Table \ref{Simulated} reports the averaged RE, rank and running time (in seconds) of the four solvers, where the results of ADMM are not reported for $n\ge 3000$ because it is too time-consuming. We observe that for all test instances, the outputs of Algorithms \ref{AMM} and \ref{HMAP} not only have much lower RE than those of ALS and ADMM, but also have ranks equal to $r^*$. From Figure \ref{fig1}, there is an interval of $\lambda$ such that their outputs, when solving \eqref{MS-FL20} with $\lambda$ from this interval, have similar performance. This means that the proposed column $\ell_{2,0}$-regularized factorization model is superior to the other two models in capturing a solution with low rank and low RE for non-uniformly sampled data. In Table \ref{Simulated}, the columns corresponding to ADMM show that the max-norm penalized model is suitable for non-uniform sampling in terms of RE, but is unable to promote a low-rank solution; the columns associated with ALS show that the nuclear-norm regularized factorization model can promote low-rank solutions, but is not suitable for non-uniformly sampled data due to very high RE. This coincides with the results in \cite{Fang18} for the nuclear-norm penalized model and the max-norm penalized model.
\setlength{\tabcolsep}{1mm}
\begin{table}[h]
\setlength{\belowcaptionskip}{-0.01cm}
\centering
\scriptsize
\caption{\small Averaged RE and running time of solvers for non-uniformly sampled data}\label{Simulated}
\scalebox{1}{
\begin{tabular}{cc|lccc|lccc|lccc|lcc}
\Xhline{0.7pt}
$n$&\!\! ($r^*$,\,SR)& \multicolumn{4}{l}{\qquad Algorithm \ref{AMM}}&\multicolumn{4}{l}{\qquad Algorithm \ref{HMAP}} &\multicolumn{4}{l}{\qquad ALS}&\multicolumn{3}{l}{\qquad ADMM}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14} \cmidrule(lr){15-17}
& &$c_{\lambda}$&\!\!\!\! RE&\!\!\! rank \!\!\!\! &time(s)&$c_{\lambda}$&\!\!\!\! RE \!\!\!\!& rank \!\!\!\!& time(s)\!\!\! & $c_{\lambda}$\!\!\!& RE& rank&\!\!\!\! time(s)&\!\!\!\! RE&\!\!\! rank&\!\!\!\!
time(s)\\ \hline 1000 &(8,0.10) &50& {\color{red}\bf 0.064} & 8 & 3.43 &10& 0.065 & 8 & 3.15 &0.80 & 0.766 & 22 & 6.44 &0.191&666&156.2\\ &(8,0.15) &45& {\color{red}\bf 0.046} & 8 & 4.54 &10& 0.047 & 8 & 3.44 &0.24 & 0.617 & 23 & 14.2 &0.154 &751&153.5\\ &(8,0.20) &45& {\color{red}\bf 0.038} & 8 & 4.98 &10& {\color{red}\bf 0.038} & 8 & 3.36 &0.16 & 0.612 & 21 & 13.1 &0.135 &729&153.9\\ &(8,0.25) &45& {\color{red}\bf 0.032} & 8 & 5.54 &10& {\color{red}\bf0.032} & 8 & 3.26 &0.14 & 0.347 & 16 & 7.75 &0.128 &1000&156.3\\ &(10,0.10) &45& {\color{red}\bf0.075} & 10 & 3.45 &10& {\color{red}\bf0.075} & 10 & 3.30 &3.5 & 0.850 & 20 &3.19 &0.195 &678 &151.3\\ &(10,0.15) &40& {\color{red}\bf0.052} & 10 & 4.61 &10& 0.053 & 10 & 3.52 &2.5 & 0.807 & 20 &2.93 &0.160 &741 &150.9\\ &(10,0.20) &40& {\color{red}\bf0.043} & 10 & 5.02 &10& {\color{red}\bf0.043} & 10 & 3.50 &1.8 & 0.753 & 20 &2.89 &0.142 &728 &151.4\\ &(10,0.25) &40& {\color{red}\bf0.036} & 10 & 5.72 &10& {\color{red}\bf0.036} & 10 & 3.54 &1.5 & 0.727 & 20 &2.67 &0.132 &1000&153.2\\ &(20,0.10) &40& 0.134 & 20 & 3.83 &8.0& {\color{red}\bf 0.129} & 20 & 3.76 & 1.0& 0.790 & 44 & 11.4 &0.253 &691&154.5\\ &(20,0.15) &32&{\color{red}\bf 0.082} & 20 & 4.51 &6.0& {\color{red}\bf0.082} & 20 & 4.08 & 1.0& 0.700 & 40 & 4.56 &0.187 &765&153.6\\ &(20,0.20) &32& 0.099 & 20 & 4.97 &6.0& {\color{red}\bf0.065} & 20 & 3.94 & 1.0& 0.679 & 40 & 3.45 &0.159 &719&154.5\\ &(20,0.25) &28&{\color{red}\bf0.053} & 20 & 5.59 &5.0& 0.054 & 20 & 3.79 & 1.0& 0.646 & 40 & 3.05 &0.141 &1000&156.0\\ \cmidrule(lr){1-17} 3000 &(10,0.10) &120&{\color{red}\bf 0.038} & 10 & 32.8 &30& {\color{red}\bf 0.038} & 10 & 34.0 & 1.0& 0.765 & 30 & 33.1 &- &-&-\\ &(10,0.15) &95 &{\color{red}\bf 0.028} & 10 & 38.8 &30& {\color{red}\bf 0.028} & 10 & 33.6 & 1.0& 0.660 & 20 & 28.2 &- &-&-\\ &(10,0.20) &95 &{\color{red}\bf 0.024} & 10 & 43.8 &30& {\color{red}\bf 0.024} & 10 & 35.2 & 1.0& 0.642 & 20 & 26.2 &- &-&-\\ &(10,0.25) &95 &{\color{red}\bf 0.020} & 10 & 51.7 &30& {\color{red}\bf 0.020} & 10 & 34.0 & 1.0& 0.590 & 20 & 25.9 &- &-&-\\ &(20,0.10) &100& {\color{red}\bf0.055} & 20 & 32.6 &25& {\color{red}\bf0.055} & 20 & 36.1 & 1.0& 0.766 & 45 & 93.0 &- &-&-\\ &(20,0.15) &80 & {\color{red}\bf0.041} & 20 & 38.2 &25& {\color{red}\bf0.041} & 20 & 35.6 & 1.0& 0.667 & 40 & 30.2 &- &-&-\\ &(20,0.20) &80 & {\color{red}\bf0.034} & 20 & 43.3 &25& {\color{red}\bf0.034} & 20 & 37.0 & 1.0& 0.651 & 40 & 26.7 &- &-&-\\ &(20,0.25) &80 & {\color{red}\bf0.029} & 20 & 51.0 &25& {\color{red}\bf0.029} & 20 & 35.6 & 1.0& 0.606 & 40 & 25.8 &- &-&-\\ \cmidrule(lr){1-17} 5000 &(10,0.10) &200 & 0.029 & 10 & 92.7 &40& {\color{red}\bf 0.028} & 10 & 99.2 & 1.0& 0.761 & 30 & 97.7 &- &-&-\\ &(10,0.15) &160 &{\color{red}\bf 0.022} & 10 & 111.9 &30& {\color{red}\bf 0.022} & 10 & 101.2 & 1.0& 0.656 & 20 & 79.4 &- &-&-\\ &(10,0.20) &160 &{\color{red}\bf 0.018} & 10 & 121.3 &30& {\color{red}\bf 0.018} & 10 & 106.0 & 1.0& 0.639 & 20 & 75.8 &- &-&-\\ &(10,0.25) &160 &{\color{red}\bf 0.016} & 10 & 152.8 &30& {\color{red}\bf 0.016} & 10 & 100.4 & 1.0& 0.585 & 20 & 74.8 &- &-&-\\ &(20,0.10) &200 &{\color{red}\bf 0.041} & 20 & 93.5 &40& {\color{red}\bf 0.041} & 20 & 106.6 & 1.0& 0.759 & 46 & 173.5 &- &-&-\\ &(20,0.15) &160 &{\color{red}\bf 0.031} & 20 & 113.0 &30& {\color{red}\bf 0.031} & 20 & 107.8 & 1.0& 0.662 & 40 & 84.0 &- &-&-\\ &(20,0.20) &160 &{\color{red}\bf 0.026} & 20 & 130.0 &30& {\color{red}\bf 0.026} & 20 & 112.6 & 1.0& 0.645 & 40 & 77.4 &- &-&-\\ &(20,0.25) &160 &{\color{red}\bf 0.022} & 20 & 153.9 &30& {\color{red}\bf 
0.022} & 20 & 106.5 & 1.0& 0.596 & 40 & 76.6 &- &-&-\\
\Xhline{0.7pt}
\end{tabular}}
\end{table}
In addition, for $r^*=5$ and $n=m=1000$, Figure \ref{fig2} plots the average RE over {\bf five} repetitions under ${\rm SR}=0.04,0.06,0.08,\ldots,0.2$. We see that under the two non-uniform sampling schemes, the relative errors yielded by the four solvers decrease as the sampling ratio increases, but Algorithms \ref{AMM} and \ref{HMAP} perform better than ADMM, and the ALS method gives the worst results.
\begin{figure}[h]
\setlength{\abovecaptionskip}{0.2pt}
\centering
\includegraphics[height=6cm,width=6.0in]{figure_SR.eps}
\caption{The relative errors of four solvers under different sampling ratios for the noisy case}
\label{fig2}
\end{figure}
\subsection{Numerical results for real data}\label{sec5.4}
We test the four methods on matrix completion problems based on several real datasets, including the Jester joke dataset, the MovieLens dataset, and the Netflix dataset. For each dataset, let $M^0$ be the original incomplete data matrix such that the $i$th row of $M^0$ corresponds to the ratings given by the $i$th user. We first consider the Jester joke dataset, which is available at \url{http://www.ieor.berkeley.edu/~goldberg/jester-data/}. This dataset contains more than 4.1 million ratings for $100$ jokes from $73,421$ users. The whole Jester joke dataset consists of three subdatasets: (1) jester-1: 24,983 users who rate 36 or more jokes; (2) jester-2: 23,500 users who rate 36 or more jokes; (3) jester-3: 24,938 users who rate between 15 and 35 jokes. More descriptions can be found in \cite{Toh10,Ma09,Chen12}, where the nuclear-norm convex relaxation is used to study this dataset. Due to the large number of users, we first randomly select $n_u$ rows from $M^0$ and then randomly permute the ratings from these users to generate $M\in\mathbb{R}^{{n_u}\times 100}$ as in \cite{Fang18}. Next, we adopt Scheme 1 to generate a set $\Omega$ of observed indices. Since we can only observe the entry $(j,k)$ if $(j,k)\in\Omega$ and $M_{jk}$ is given, the actual sampling ratio is less than the input SR. Since the true $M^*$ is unknown for real datasets, we cannot compute the relative error as we did for the simulated data. Similar to \cite{Toh10}, we adopt the normalized mean absolute error (NMAE) as the metric to measure the accuracy of the output of an algorithm:
\[
{\rm NMAE}=\frac{\sum_{(i,j)\in\Gamma\backslash\Omega}|X^{\rm out}_{i,j}-M_{i,j}|} {|\Gamma\backslash\Omega|(r_{\rm max}-r_{\rm min})},
\]
where $\Gamma:=\{(i,j)\in[n_{u}]\times[100]\!: M_{ij}\ \textrm{is given}\}$ denotes the set of indices with given ratings, and $r_{\rm min}$ and $r_{\rm max}$ denote the lower and upper bounds of the ratings, respectively. For the Jester joke dataset, the rating range is from $-10$ to $10$, so $r_{\rm max}-r_{\rm min}=20$.
\begin{table}[h]
\setlength{\belowcaptionskip}{-0.01cm}
\centering
\scriptsize
\caption{\small Averaged NMAE and running time of four methods for Jester joke dataset}\label{jester}
\scalebox{1}{
\begin{tabular}{cc|lccc|lccc|lccc|lcc}
\Xhline{0.7pt}
Dataset&\!\! ($n_u$,\,SR)& \multicolumn{4}{l}{\qquad Algorithm \ref{AMM}}&\multicolumn{4}{l}{\qquad Algorithm \ref{HMAP}} &\multicolumn{4}{l}{\qquad ALS}&\multicolumn{3}{l}{\qquad ADMM}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14} \cmidrule(lr){15-17}
& &$c_{\lambda}$&\!\!\!\! NMAE&\!\!\!\! rank \!\!\!\! &time& $c_{\lambda}$&\!\!\!\! NMAE &\!\!\!\! rank \!\!\!\!& time & $c_{\lambda}$&\!\!\!\! NMAE&\!\!\!\!
rank&\!\!\!\! time& NMAE& \!\!\!\! rank&\!\!\!\! time \\ \hline jester-1 &(1000,0.15)&50& 0.197 & 1 & 0.53 &18& 0.197 & 1 & 0.42 &5 & 0.220 & 2 & 0.72 &{\color{red}\bf0.195}&100&23.9\\ &(1000,0.20)&50&{\color{red}\bf 0.189} & 1 & 0.31 &18&{\color{red}\bf 0.189} & 1 & 0.16 &4 & 0.220 & 2 & 0.40 & 0.190 &100&24.2\\ &(1000,0.25)&50&{\color{red}\bf 0.187} & 1 & 0.29 &18&{\color{red}\bf 0.187} & 1 & 0.14 &3 & 0.220 & 2 & 0.37 & 0.188 &100&26.9\\ &(2000,0.15) &60&{\color{red}\bf 0.202} & 1 & 0.98 &24&{\color{red}\bf 0.195} & 1 & 0.53 &5 & 0.220 & 2 &1.11 &0.215 &81&181.9\\ &(2000,0.20) &60&{\color{red}\bf 0.194} & 1 & 0.83 &24&{\color{red}\bf 0.194} & 1 & 0.54 &4 & 0.221 & 2 &0.89 &0.208 &88&173.6\\ &(2000,0.25) &60&{\color{red}\bf 0.189} & 1 & 0.95 &24&{\color{red}\bf 0.189} & 1 & 0.57 &3 & 0.221 & 2 &1.02 &0.201 &94&157.7\\ &(4000,0.15) &82& 0.197 & 2 & 2.15 &30&{\color{red}\bf 0.195} & 1 & 1.36 & 5& 0.222 & 2 & 2.19 &- &-&-\\ &(4000,0.20) &82&{\color{red}\bf 0.190} & 1 & 1.60 &30& 0.191 & 1 & 1.02 & 4& 0.222 & 2 & 1.67 &- &-&-\\ &(4000,0.25) &82&{\color{red}\bf 0.187} & 1 & 1.43 &30&{\color{red}\bf 0.187} & 1 & 0.94 & 3& 0.221 & 2 & 1.70 &- &-&-\\ \cmidrule(lr){1-17} jester-2 &(1000,0.15) &51& 0.201 & 1 & 0.36 &18&{\color{red}\bf 0.196} & 1 & 0.18 & 5& 0.221 & 2 & 0.48 &{\color{red}\bf 0.196} &100&24.5\\ &(1000,0.20) &51&{\color{red}\bf 0.189} & 1 & 0.25 &18&{\color{red}\bf 0.189} & 1 & 0.13 & 4& 0.222 & 2 & 0.30 &0.192 &100&24.3\\ &(1000,0.25) &51&{\color{red}\bf 0.187} & 1 & 0.27 &18&{\color{red}\bf 0.187} & 1 & 0.13 & 3& 0.222 & 2 & 0.34 &0.190 &100&24.4\\ &(2000,0.15) &60& 0.196 & 2 & 1.12 &24&{\color{red}\bf 0.194} & 1 & 0.73 & 5& 0.222 & 2 & 1.16 &0.195 &100&174.5\\ &(2000,0.20) &60&{\color{red}\bf 0.190} & 1 & 0.75 &24& 0.191 & 1 & 0.52 & 4& 0.222 & 2 & 0.95 &0.192 &100&174.5\\ &(2000,0.25) &60&{\color{red}\bf 0.188} & 1 & 0.79 &24&{\color{red}\bf 0.188} & 1 & 0.51 & 3& 0.222 & 2 & 0.93 &0.190 &100&174.2\\ &(4000,0.15) &95&{\color{red}\bf0.194} & 1 & 1.50 &30&{\color{red}\bf 0.194} & 1 & 1.15 & 5& 0.223 & 2 & 2.08 &- &-&-\\ &(4000,0.20) &95&{\color{red}\bf 0.187} & 1 & 1.32 &30&{\color{red}\bf 0.187} & 1 & 0.90 & 4& 0.222 & 1 & 1.54 &- &-&-\\ &(4000,0.25) &95&{\color{red}\bf 0.186} & 1 & 1.49 &30&{\color{red}\bf 0.186} & 1 & 0.95 & 3& 0.222 & 2 & 1.50 &- &-&-\\ \cmidrule(lr){1-17} jester-3 &(1000,0.15) &7& 0.232 & 37 & 0.22 &0.7& 0.232 & 30 & 0.15 & 5& 0.231 & 3 & 0.35 &{\color{red}\bf 0.217} &88&23.8\\ &(1000,0.20) &7& 0.231 & 36 & 0.21 &0.7& 0.231 & 29 & 0.14 & 4& 0.229 & 3 & 0.35 &{\color{red}\bf 0.212} &87&26.4\\ &(1000,0.25) &7& 0.234 & 30 & 0.21 &0.7& 0.234 & 29 & 0.15 & 3& 0.231 & 3 & 0.41 &{\color{red}\bf 0.213} &91&24.1\\ &(2000,0.15) &10& 0.231 & 37 & 0.92 &0.8& 0.231 & 35 & 0.72 & 5& 0.230 & 3 & 1.39 &{\color{red}\bf 0.217} &91&173.9\\ &(2000,0.20) &10& 0.231 & 35 & 0.83 &0.8& 0.232 & 32 & 0.81 & 4& 0.230 & 3 & 1.42 &{\color{red}\bf 0.212} &91&174.7\\ &(2000,0.25) &10& 0.232 & 31 & 0.96 &0.8& 0.233 & 35 & 0.78 & 3& 0.230 & 4 & 1.20 &{\color{red}\bf 0.213} &91&174.0\\ &(4000,0.15) &13& 0.232 & 41 & 2.73 &1.2& 0.232 & 33 & 2.53 & 5& {\color{red}\bf0.231} & 2 & 3.37 &- &-&-\\ &(4000,0.20) &13& {\color{red}\bf0.232} & 40 & 2.71 &1.2& 0.233 & 35 & 2.51 & 4& {\color{red}\bf0.232} & 2 & 3.05 &- &-&-\\ &(4000,0.25) &13& 0.231 & 35 & 2.83 &1.2& 0.233 & 34 & 2.69 & 3& {\color{red}\bf0.230} & 3 & 3.17 &- &-&-\\ \Xhline{0.7pt} \end{tabular}} \end{table} For the Jester joke dataset, we consider different settings of $n_u$ and SR, and report in Table \ref{jester} the averaged NMAE, rank 
and running time (in seconds) after running each setting {\bf five} times. In particular, the results of ADMM for $n_{u}=4000$ are not reported since the adjustment scheme for $\lambda$ is not available in the code. We see that for jester-1 and jester-2, Algorithms \ref{AMM} and \ref{HMAP} yield NMAE comparable to, and sometimes a little better than, that of ADMM, but for jester-3 they give a little worse NMAE than ALS and ADMM. For all settings, Algorithms \ref{AMM} and \ref{HMAP} yield much lower ranks and require much less running time than ADMM. The ALS method yields the worst NMAE for jester-1 and jester-2, and requires running time comparable to that of Algorithms \ref{AMM} and \ref{HMAP}.

Next we consider the MovieLens dataset from \url{http://www.grouplens.org/node/73}. This dataset contains two subdatasets: the Movie-100K dataset and the Movie-1M dataset, and the rating range is from $r_{\rm min}=1$ to $r_{\rm max}=5$. The Movie-100K dataset contains 100,000 ratings for 1682 movies by 943 users, while the latter contains 1,000,209 ratings of 3900 movies made by 6040 users. For the Movie-100K dataset, we also consider the data matrix $\widetilde{M}^0=M^0-3$ so as to be consistent with the code of ADMM. We first randomly select $n_r$ users from $\widetilde{M}^0$ and randomly select their $n_c$ column ratings, and then sample the observed entries with the schemes in \eqref{sampling-scheme}. Table \ref{Movie-100K} reports the averaged NMAE, rank and running time (in seconds) after running the setting $(n_r,n_c)=(943,1682)$ {\bf five} times. We see that Algorithm \ref{HMAP} yields slightly better NMAE than the other three solvers, while Algorithm \ref{AMM} gives worse NMAE than ADMM for ${\rm SR}=0.1$ and $0.15$; Algorithms \ref{AMM} and \ref{HMAP} yield the lowest-rank solutions for all test problems, whereas ADMM gives the highest-rank solutions.
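To put these NMAE values in perspective, recall from the definition of NMAE that the absolute error scale is set by $r_{\rm max}-r_{\rm min}$: an NMAE of $0.19$ on the Jester data ($r_{\rm max}-r_{\rm min}=20$) corresponds to an average absolute prediction error of about $0.19\times20=3.8$ rating points on the $[-10,10]$ scale, whereas an NMAE of $0.21$ on the MovieLens data ($r_{\rm max}-r_{\rm min}=4$) corresponds to about $0.21\times4=0.84$ rating points on the $1$--$5$ scale.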
\begin{table}[h]
\setlength{\belowcaptionskip}{-0.01cm}
\centering
\scriptsize
\caption{\small Averaged NMAE and running time of four methods for Movie-100K dataset}\label{Movie-100K}
\scalebox{1}{
\begin{tabular}{cc|lccc|lccc|lccc|lcc}
\Xhline{0.7pt}
&SR& \multicolumn{4}{l}{\qquad Algorithm \ref{AMM}}&\multicolumn{4}{l}{\qquad Algorithm \ref{HMAP}} &\multicolumn{4}{l}{\qquad ALS}&\multicolumn{3}{l}{\qquad ADMM}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14} \cmidrule(lr){15-17}
& &$c_{\lambda}$& NMAE& rank &time&$c_{\lambda}$& NMAE& rank& time & $c_{\lambda}$& NMAE& rank& time& NMAE& rank& time \\ \hline
Scheme 1&0.10 &8& 0.302 & 1 & 11.6 &2.7& {\color{red}\bf 0.231} & 1 & 3.60 & 3.8& 0.244 & 14 & 11.1 &0.232 &757&355.6\\
&0.15 &7& 0.226 & 1 & 10.3 &2.7& {\color{red}\bf 0.220} & 1 & 3.68 & 2.4& 0.239 & 12 & 12.2 &0.225 &868&365.6\\
&0.20 &8& 0.216 & 1 & 12.1 &2.7& {\color{red}\bf 0.212} & 1 & 3.93 & 1.8& 0.237 & 10 & 11.3 &0.220 &901&370.4\\
&0.25 &8& 0.210 & 1 & 11.9 &2.7& {\color{red}\bf 0.207} & 1 & 4.11 & 1.4& 0.234 & 7 & 9.77 &0.215 &927&370.2\\
\Xhline{0.7pt}
Scheme 2&0.10 &8& 0.304 & 1 & 11.3 &2.7& {\color{red}\bf 0.232} & 1 & 3.60 & 3.8& 0.244 & 14 & 11.1 &0.233 &753&356.0\\
&0.15 &7& 0.260 & 1 & 14.5 &2.7& {\color{red}\bf 0.221} & 1 & 3.84 & 2.4& 0.242 & 12 & 10.5 &0.226 &852&358.0\\
&0.20 &8& 0.218 & 1 & 11.7 &2.7& {\color{red}\bf 0.216} & 1 & 4.28 & 1.8& 0.239 & 10 & 9.58 &0.221 &900&361.0\\
&0.25 &8& 0.211 & 1 & 11.4 &2.7& {\color{red}\bf 0.208} & 1 & 4.04 & 1.4& 0.236 & 7 & 9.72 &0.217 &922&368.0\\
\Xhline{0.7pt}
\end{tabular}}
\end{table}
\begin{table}[h]
\setlength{\belowcaptionskip}{-0.01cm}
\centering
\scriptsize
\caption{\small Averaged NMAE and running time of four methods for Movie-1M dataset \label{Movie-1M}}
\scalebox{1}{
\begin{tabular}{cc|lccc|lccc|lccc|lcc}
\Xhline{0.7pt}
($n_r$,$n_c$)&\!\! SR & \multicolumn{4}{l}{\qquad Algorithm \ref{AMM}}&\multicolumn{4}{l}{\qquad Algorithm \ref{HMAP}} &\multicolumn{4}{l}{\qquad ALS}&\multicolumn{3}{l}{\qquad ADMM}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14} \cmidrule(lr){15-17}
& &$c_{\lambda}$&\!\!\!\! NMAE&\!\!\! rank \!\!\!\! &time&$c_{\lambda}$\!\!\!\!& NMAE \!\!\!& rank \!\!\!& time& $c_{\lambda}$&\!\!\!\! NMAE& rank&\!\!\!\! time& NMAE&\!\!\! rank&\!\!\!\!
time \\ \Xhline{0.7pt}
$1500\times 1500$ &0.10 &7.8& 0.280 & 2 & 14.1 &2.5& {\color{red}\bf 0.230} & 1 & 4.78 & 2.8& 0.244 & 24 & 18.7 &0.234 &850&534.2\\
&0.15 &5.9& 0.241 & 1 & 13.4 &2.2& {\color{red}\bf 0.218} & 1 & 4.80 & 2.4& 0.242 & 14 & 15.7 &0.227 &999&533.8\\
&0.20 &5.9& 0.213 & 1 & 12.8 &2.2& {\color{red}\bf 0.209} & 1 & 5.16 & 1.6& 0.239 & 13 & 15.4 &0.221 &1100&540.7\\
&0.25 &5.9& 0.207 & 1 & 13.4 &2.2& {\color{red}\bf 0.205} & 1 & 5.32 & 1.5& 0.238 & 7 & 13.2 &0.217 &1156&556.3\\
\Xhline{0.7pt}
$2000\times 2000$ &0.10 &11& 0.228 & 1 & 20.5&3.5& {\color{red}\bf 0.219} & 1 & 8.85 &3.6 & 0.246 &12 & 27.1 &0.231 &1245&1289.6\\
&0.15 &9 & 0.213 & 1 & 21.2&3.5& {\color{red}\bf 0.209} & 1 & 8.97 &2.2 & 0.241 &13 & 23.8 &0.223 &1415&1307.2\\
&0.20 &9 & 0.208 & 1 & 23.6&3.5& {\color{red}\bf 0.204} & 1 & 9.50 &1.6 & 0.239 &10 & 23.9 &0.219 &1524&1309.4\\
&0.25 &9 & 0.202 & 1 & 23.8&3.5& {\color{red}\bf 0.200} & 1 & 9.88 &1.1 & 0.234 &12 & 27.5 &0.213 &1602&1333.4\\
\Xhline{0.7pt}
$3000\times 3000$ &0.10 &13& 0.216 & 1 & 46.9&3.5& {\color{red}\bf 0.210 } & 1 & 20.3 &3.3& 0.245 &13& 58.7 & - &-&-\\
&0.15 &13& 0.204 & 1 & 55.9&3.5& {\color{red}\bf 0.202 } & 1 & 23.0 &1.8& 0.237 &18& 64.4 &- &-&-\\
&0.20 &13& 0.199 & 1 & 60.7&3.5& {\color{red}\bf 0.197 } & 1 & 24.8 &1.4& 0.235 &13& 52.9 &- &-&-\\
&0.25 &13& 0.195 & 1 & 58.5&3.5& {\color{red}\bf 0.194 } & 1 & 24.3 &1.1& 0.230 &8 & 49.1 &- &-&-\\
\Xhline{0.7pt}
$6040\times 3706$ &0.10 &24& 0.205 & 1 & 131.9&6.0& {\color{red}\bf 0.202 } & 1 & 54.0 &2.9& 0.243 & 12 & 134.8 & -&-&-\\
&0.15 &24& 0.198 & 1 & 151.5&6.0& {\color{red}\bf 0.196 } & 1 & 58.2 &1.7& 0.236 & 11 & 136.3 &- &-&-\\
&0.20 &24& 0.194 & 1 & 138.3&6.0& {\color{red}\bf 0.194 } & 1 & 60.7 &1.2& 0.233 & 11 & 136.3 &- &-&-\\
&0.25 &24& 0.192 & 1 & 136.0&6.0& {\color{red}\bf 0.191 } & 1 & 64.4 &0.9& 0.228 & 11 & 127.8 &- &-&-\\
\Xhline{0.7pt}
\end{tabular}}
\end{table}
For the Movie-1M dataset, we first randomly select $n_r$ users and their $n_c$ column ratings from $M^0$, and then sample the observed entries with Scheme 1 in \eqref{sampling-scheme}. We consider the setting of $n_r=n_c$ with $n_r=1500, 2000$ or $3000$ and the setting of $(n_r,n_c)=(6040,3706)$. Table \ref{Movie-1M} reports the averaged NMAE, rank and running time (in seconds) after running each setting {\bf five} times. We see that for this dataset, the solvers perform similarly to how they do on the Movie-100K dataset. We also consider the Netflix dataset, available at \url{https://www.kaggle.com/netflix-inc/netflix-prize-data\#qualifying.txt}. For this dataset, we first randomly select $n_r$ users and their $n_c$ column ratings from $M^0$, and then sample the observed entries with the schemes in \eqref{sampling-scheme}. We consider the setting of $n_r=n_c$ with $n_r=6000,8000$ and $10000$. Table \ref{Netflix} reports the averaged NMAE, rank and running time (in seconds) of three solvers after running each setting {\bf five} times (the results of ADMM are not reported for these instances since it is too time-consuming). For this dataset, the three solvers perform similarly to how they do on the MovieLens dataset. In particular, Algorithm \ref{HMAP} yields better outputs than the other two solvers, and requires less than half the running time of Algorithm \ref{AMM}. So, Algorithm \ref{HMAP} has a remarkable advantage in running time for large-scale instances.
\begin{table}[h]
\setlength{\belowcaptionskip}{-0.01cm}
\centering
\scriptsize
\caption{\small Averaged NMAE and running time of three methods for Netflix dataset \label{Netflix}}
\scalebox{1}{
\begin{tabular}{ccc|lccc|lccc|lccc}
\Xhline{0.7pt}
&($n_r$,$n_c$)&\!\! SR & \multicolumn{4}{l}{\qquad Algorithm \ref{AMM}}&\multicolumn{4}{l}{\qquad Algorithm \ref{HMAP}} &\multicolumn{4}{l}{\qquad ALS}\\
\cmidrule(lr){4-7} \cmidrule(lr){8-11} \cmidrule(lr){12-15}
& & &$c_{\lambda}$& NMAE& rank &time& $c_{\lambda}$& NMAE & rank & time & $c_{\lambda}$& NMAE& rank& time \\ \Xhline{0.7pt}
Scheme 1 &$6000\times 6000 $ &0.10 &14& 0.252 & 2 & 256.8& 3.8& {\color{red}\bf 0.221} & 1 & 86.1 &3.2& 0.240 &19& 220.4 \\
& &0.15 &10& 0.226 & 2 & 253.2& 3.8& {\color{red}\bf 0.209} & 1 & 87.4 &2.5& 0.237 &10& 188.2 \\
& &0.20 &10& 0.210 & 1 & 247.5& 3.8& {\color{red}\bf 0.204} & 1 & 92.2 &1.5& 0.235 &12& 232.1\\
& &0.25 &10& 0.204 & 1 & 255.7& 3.8& {\color{red}\bf 0.201} & 1 & 94.6 &1.4& 0.233 &8 & 171.1\\
&$8000\times 8000$ &0.10 &18& 0.217 & 1 & 427.1& 6.0 & {\color{red}\bf 0.209} & 1 & 159.2 &3.2& 0.239 &17 &321.8 \\
& &0.15 &18& 0.206 & 1 & 445.9& 6.0 & {\color{red}\bf 0.203} & 1 & 166.9 &2.3& 0.236 &10 &333.2\\
& &0.20 &18& 0.202 & 1 & 478.4& 6.0 & {\color{red}\bf 0.199} & 1 & 174.7 &1.4& 0.233 &12 &366.1 \\
& &0.25 &17& 0.198 & 1 & 492.4& 6.0 & {\color{red}\bf 0.196} & 1 & 180.8 &1.1& 0.229 &10 &335.4 \\
&$10000\times 10000$&0.10 &19& 0.225 & 2 & 752.3& 6.0 & {\color{red}\bf 0.207} & 1 & 245.8 &-& - & - &- \\
& &0.15 &19& 0.203 & 1 & 728.6& 6.0 & {\color{red}\bf 0.200} & 1 & 268.3 &-& - & - &- \\
& &0.20 &19& 0.201 & 1 & 833.1& 6.0 & {\color{red}\bf 0.198} & 1 & 293.9 &-& - & - &- \\
& &0.25 &19& 0.197 & 1 & 798.8& 6.0 & {\color{red}\bf 0.195} & 1 & 293.9 &-& - & - &- \\
\Xhline{0.7pt}
Scheme 2 &$6000\times 6000$ &0.10 &15 & 0.235 & 1 & 233.1& 3.8& {\color{red}\bf 0.222} & 1 & 86.8 &3.2& 0.241 &18 & 209.7 \\
& &0.15 &11.5& 0.238 & 2 & 274.1& 3.8& {\color{red}\bf 0.210} & 1 & 88.8 &2.3& 0.239 &11 & 180.9 \\
& &0.20 &11.5& 0.210 & 1 & 255.4& 3.8& {\color{red}\bf 0.205} & 1 & 94.4 &1.5& 0.236 &12 & 189.1\\
& &0.25 &11.5& 0.205 & 1 & 253.6& 3.8& {\color{red}\bf 0.202} & 1 & 95.3 &1.3& 0.234 &7 & 170.8\\
&$8000\times 8000$ &0.10 &19& 0.217 & 1 & 425.8& 6.0 & {\color{red}\bf 0.209} & 1 & 159.3 &3.2& 0.240 &16 &307.7 \\
& &0.15 &19& 0.207 & 1 & 443.8& 6.0 & {\color{red}\bf 0.203} & 1 & 170.4 &2.3& 0.238 &10 &296.9 \\
& &0.20 &19& 0.202 & 1 & 482.5& 6.0 & {\color{red}\bf 0.199} & 1 & 179.3 &1.5& 0.235 & 9 &312.8 \\
& &0.25 &18& 0.199 & 1 & 491.3& 6.0 & {\color{red}\bf 0.197} & 1 & 183.0 &1.1& 0.231 &10 &328.5 \\
\Xhline{0.7pt}
\end{tabular}}
\end{table}
From the numerical tests of the previous two subsections, we conclude that for simulated data, Algorithms \ref{AMM} and \ref{HMAP} are superior to ALS and ADMM in terms of rank and relative error; for the three real datasets, Algorithm \ref{HMAP} is superior to the other three solvers in terms of rank and NMAE except for jester-3, while Algorithm \ref{AMM} is worse than ALS and ADMM when ${\rm SR}=0.1$ and $0.15$. In addition, Algorithm \ref{HMAP} is superior to the other three solvers in terms of running time.
\section{Conclusion}\label{sec6}
We have proposed a column $\ell_{2,0}$-regularized factorization model for low-rank matrix recovery to achieve the optimal (or true) rank from a rough upper estimate, so that the recent theoretical results for factorization models fully apply in practice.
Although this model involves a nonconvex, nonsmooth and non-Lipschitz regularization term, no additional stationary points are induced, and its (strong) local optimizers are almost determined by the (strong) local minimizers of $F_{\mu}$. We have developed an AMM method and a hybrid AMM method for solving this model, and provided a global convergence analysis for both. Numerical experiments are conducted on simulated data and real datasets for matrix completion problems with non-uniform sampling, and comparisons with the ALS \cite{Hastie15} and the ADMM \cite{Fang18} verify that the proposed model has an advantage in promoting solutions with lower errors and ranks, and that the hybrid AMM method is superior to the other three solvers for most of the test instances in terms of error, rank and running time. An interesting direction for future work is the statistical analysis of the proposed model.

\bigskip

\noindent {\large\bf Acknowledgements}\ \ The authors would like to express their sincere thanks to Prof.\ Ethan X. Fang from Pennsylvania State University for providing the ADMM code for numerical comparison.

\bibliographystyle{siamplain}
\section{Introduction\label{s:intro}} The relationship of the BFKL equation~\cite{BFKL} --- describing small-$x$ evolution in QCD --- with the DGLAP equation~\cite{DGLAP} --- describing $Q^2$ evolution --- has been the subject of several investigations in the nineties~\cite{Kfact,CoEl91,CaCiHa93,CaHa94,RGvert,QQvertCC,QQvertFFFK}, in the attempt to provide a unified picture of small-$x$ physics in QCD. It is well known that consistency of the BFKL approach with renormalisation group (RG) factorisation is achieved by means of resummation formulas for the large contributions at small $x$ to the quark and gluon anomalous dimensions and coefficient functions, which have been derived at the leading-$\log x$ (LL$x$) level~\cite{Kfact} and, in part, at the next-to-leading (NL$x$) one~\cite{CaCiHa93,CaHa94}. In particular, Catani and Hautmann~\cite{CaHa94} have studied the LL$x$ BFKL equation in $4+2\varepsilon$ dimensions with dimensional regularisation (and frozen coupling) and have derived, on this basis, the resummation of leading logarithms~\cite{CaCiHa93,CaHa94} for the gluon coefficient function --- usually called $R(\as)$ --- and for the quark anomalous dimension $\gamma_{\pq\pg}$ in the $\overline{\mathrm{MS}}$- and DIS-schemes. This calculation extends in part to the NL$x$ level, not only because $\gamma_{\pq\pg}$ is next-to-leading, but also because of the $R$-factor, which provides NL$x$ corrections to the gluon anomalous dimension $\gamma_{\pg\pg}$ when running coupling effects are turned on~\cite{CaCi97}. It is known, on the other hand, that full NL$x$ terms~\cite{FaLi98,CaCi98} are quite large and negative, and that doubly resummed approaches~\cite{CCS,CCSSkernel,ABF} are required in order to stabilise the subleading-log series. This raises the question of a better analysis of anomalous dimensions and coefficient functions at subleading level in various factorisation schemes. While the $\kt$-factorisation schemes (like the $Q_0$-scheme~\cite{Q0}, characterised by an off-shell initial parton) have been pushed to the doubly-resummed level~\cite{CCSSkernel}, the minimal subtraction one (characterised by dimensional regularisation, and normally used in fixed order calculations) has yet to be extended to a full treatment of NL$x$ terms. The purpose of the present paper is to devise a method to perform such an analysis, and to work it out in detail in the case of the BFKL equation with running coupling, which already contains quite important subleading effects. The method is then extended to a full treatment of NL$x$ coefficient terms, which require the $\varepsilon$-dependence of the NL$x$ BFKL kernel at least up to $\ord{\varepsilon}$. Unfortunately, the latter is yet to be extracted from the literature~\cite{RGvert,QQvertCC,QQvertFFFK}. The main tool of our analysis is the generalisation to $4+2\varepsilon$ dimensions of the $\gamma$-representation of the gluon density --- that is, of a Fourier representation of the BFKL solution in which $\gamma$ is conjugate to $t\equiv\log(\kt^2/\mu^2)$. While for $\varepsilon=0$ the running-coupling equation is a differential equation in $\gamma$, for $\varepsilon\neq0$ it becomes a finite-difference equation, which is treated in Secs.~\ref{s:grLL} and \ref{s:rce} and in App.~\ref{a:sde}.
This allows one to write the gluon density in a generalised $Q_0$-scheme% \footnote{The label $Q_0$ referred originally~\cite{Q0} to the fact that the initial gluon, defined by $\kt$-factorisation, was set off-mass-shell ($\kt^2 = Q_0^2$) in order to cut off the infrared singularities. It will be shown, however, that the effective anomalous dimension at scale $\kt^2 \gg Q_0^2$ is independent of the cut-off procedure, whether of the dimensional type or of the off-mass-shell one.} as the product of an anomalous dimension exponential and a fluctuation factor that we call $\ff$. We then show in Sec.~\ref{s:grLL} that the factor $\rk\equiv R/\ff$ is due to the $\ord{\varepsilon}$ dependence of the LL$x$ gluon anomalous dimension. This result offers an interpretation of the mismatch between the $R$ and $\ff$ coefficients, and a hint at possible generalisations, investigated in Secs.~\ref{s:rce}-\ref{s:nlc}. The general case with running coupling ($b>0$) is treated in Sec.~\ref{s:rce}, and is qualitatively similar to the frozen coupling ($b=0$) case, except that the beta-function
\begin{equation}\label{beta}
\beta(\asb) \equiv \frac{\dif\asb(t)}{\dif t} = \varepsilon \asb - b \asb^2 + \ord{\asb^3}
\end{equation}
has both the dimensional contribution ($\varepsilon$-term) and the running-coupling one ($b$-term). We are thus able to explain how RG factorisation works for the coefficient and anomalous dimension parts for $b>0$, thereby relating the $Q_0$-scheme (with dimensional regularisation) and the $\overline{\mathrm{MS}}$-scheme in an unambiguous way. In particular, we confirm that the two anomalous dimensions differ by the quantity $\dif\log R / \dif t$. Secs.~\ref{s:fsl} and \ref{s:nlc} are devoted to the calculation of the NL$x$ corrections to $R$. In Sec.~\ref{s:fsl} we treat the corrections due to the running coupling, which require the (known) $\varepsilon$-dependence of the leading kernel eigenvalue up to $\ord{\varepsilon^2}$, and the corresponding refinement of the $\gamma$-representation and of the saddle-point fluctuation formalism. In Sec.~\ref{s:nlc} we calculate the remaining NL$x$ corrections, due to the inclusion of the NL$x$ kernel. Here the finite-difference equation involves two steps, and is truncated at the NL$x$ level by an iterative procedure. The final result involves the $\ord{\varepsilon}$ corrections to the NL$x$ kernel eigenvalue, which are not yet explicitly known. In Sec.~\ref{s:ugqg} we reconsider the calculation of $\gamma_{\pq\pg}$ in the $\overline{\mathrm{MS}}$-scheme. Contrary to the case of the DIS-scheme, in which the coefficient function $C_{\pq\pg}$ is set to zero, no closed-form resummation formula is yet available for $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}$. Catani and Hautmann are able to provide a recursive method in order to disentangle $C_{\pq\pg}$ from $\gamma_{\pq\pg}$, so as to calculate a number of terms of their expansion in $\asb/\omega$, $\omega$ being the Mellin moment conjugate to $x$, a result further improved in the later literature~\cite{Rterms}. Here we use the $\gamma$-representation of the gluon density in order to do the same calculation: this allows us to provide an explicit resummation formula which exhibits the universality of $\gamma_{\pq\pg}$ and involves only quadratures of functions which are known in principle. However, the latter calculation requires an all-order $\varepsilon$-expansion of the coefficient function of the gluon.
An interesting byproduct of the universality analysis is the proof (App.~\ref{a:cfqk}) that the {\em off-shell} $P_{\pq\pg}$ splitting function introduced in~\cite{CaHa94} is indeed process-independent. In the final Sec.~\ref{s:disc} we summarise our results and we discuss their relevance for the improved resummations of splitting functions~\cite{CCSSkernel,ABF}. \section{$\boldsymbol\gamma$-representation and $\boldsymbol R$-factor in the LL$\boldsymbol x$ case\label{s:grLL}} Investigating the BFKL equation in $4+2\varepsilon$ dimensions is of interest in its own right, because --- in addition to the ultraviolet (UV) role of dimensional regularisation --- a positive $\varepsilon$ parameter allows one to regularise the mass-shell singularities and, in some regime (see Sec.~\ref{s:rce}), it avoids the Landau pole. For this reason we shall consider $\varepsilon>0$ as an infrared cutoff, as an alternative to setting the initial condition for partonic evolution off-mass-shell, as done in the $Q_0$-scheme~\cite{Q0}. Our purpose is, broadly speaking, to understand the relationship between these two kinds of schemes, whether or not a minimal subtraction is assumed in the partonic densities. Since the gauge coupling $g$ is dimensionful in $4+2\varepsilon$ dimensions, we shall introduce, as usual, the renormalisation scale $\mu$, the dimensionless coupling in the $\overline{\mathrm{MS}}$-scheme
\begin{equation}\label{d:alphas}
\as \equiv \frac{(g\mu^{\varepsilon})^2}{(4\pi)^{1+\varepsilon} \esp{\varepsilon\psi(1)}} \;, \quad \asb \equiv \as \frac{N_c}{\pi} \;,
\end{equation}
and the parameter $t\equiv\log(\kt^2/\mu^2)$, in terms of which the running coupling is $\asb(t) \equiv \asb \esp{\varepsilon t}$. Therefore, the LL$x$ equation shows a running coupling with infrared-free evolution, corresponding to the dimensional contribution of the beta function (\ref{beta}) in the $b\to0$ limit. Since $\asb(t)\to0$ for $t\to-\infty$, we can set the initial condition for the gluon Green's function on-shell ($\kt_0=0$), so that $\varepsilon$ acquires the role of infrared cutoff. The ensuing solution of the LL$x$ BFKL equation is naturally expressed as a power series in $\asb(t)$, which was extensively studied in~\cite{CaHa94}. Our purpose here is to recast the solution for the gluon density mentioned above in the $\gamma$-representation form --- $\gamma$ being conjugate to $t$ --- so as to be able to describe in a simpler way its anomalous dimension behaviour.
The BFKL equation for the unintegrated gluon density $\ugd$ in $4+2\varepsilon$ dimensions reads (the $\omega$-dependence of $\ugd$ is understood)
\begin{align}
\ugd_{\varepsilon}(\kt) &= \delta^{(2+2\varepsilon)}(\kt) + \frac1{\omega} \int\frac{\dif^{2+2\varepsilon}\kt'}{(2\pi)^{2+2\varepsilon}}\;\Kernel(\kt,\kt')\ugd_{\varepsilon}(\kt') \label{BFKLeq} \\
&= \delta^{(2+2\varepsilon)}(\kt) + \frac{\esp{\varepsilon\psi(1)}}{(\pi\kt^2)^{1+\varepsilon}} \widetilde{\ugd}_{\varepsilon}(\kt) \;, \nonumber \\
\Kernel(\kt,\kt') &= g^2 N_c \left[\frac1{\pi(\kt-\kt')^2} - (\pi\kt^2)^{\varepsilon} \frac{\Gamma^2(1+\varepsilon)\Gamma(1-\varepsilon)}{\varepsilon\Gamma(1+2\varepsilon)} \delta^{(2+2\varepsilon)}(\kt-\kt') \right] \label{Kernel}
\end{align}
and its power-series solution is
\begin{equation}\label{powerSol}
\widetilde{\ugd}_{\varepsilon}(\kt) = \frac{\asb(t)}{\omega} \left[1+\sum_{n=1}^\infty \left(\frac{\asb(t)}{\omega}\right)^n \prod_{k=1}^n \chi_{\varepsilon}(k\varepsilon)\right] \;,
\end{equation}
where
\begin{equation}\label{chie}
\chie(\gamma) = \frac{\esp{\varepsilon\psi(1)}\Gamma(1+\varepsilon)}{\varepsilon} \left[ \frac{\Gamma(\gamma)\Gamma(1-\gamma)}{\Gamma(\gamma+\varepsilon)\Gamma(1-\gamma+\varepsilon)} - \frac{\Gamma(1+\varepsilon)\Gamma(1-\varepsilon)}{\Gamma(1+2\varepsilon)} \right]
\end{equation}
is the LL$x$ ``characteristic function'' of $\Kernel$ in $4+2\varepsilon$ dimensions, defined by
\begin{equation}\label{d:chie}
\int\frac{\dif^{2+2\varepsilon}\kt'}{(2\pi)^{2+2\varepsilon}}\; \Kernel(\kt,\kt') (\kt'{}^2)^{\gamma-1-\varepsilon} \equiv \asbmu \chie(\gamma) \frac{(\kt^2)^{\gamma-1}}{\mu^{2\varepsilon}} \;.
\end{equation}
The series (\ref{powerSol}) is well behaved in the infrared ($t\to-\infty$), apart from the $1/\varepsilon$ poles, but is not really suitable in the ultraviolet ($t\to+\infty$), where the variable $\asb(t)/\omega$ grows and at most a finite radius of convergence is expected. We thus look for a solution in the $\gamma$-representation form
\begin{equation}\label{d:gammaRep}
\widetilde{\ugd}_{\varepsilon}(\kt) = \int_{\Re\gamma=c}\frac{\dif\gamma}{2\pi\ui} \; \esp{\gamma t} f_{\varepsilon}(\gamma) \;,
\end{equation}
where the Fourier variable $\gamma$ is interpreted as a continuum variable whose lattice counterpart is $\gamma=n\varepsilon$. We expect Eq.~(\ref{d:gammaRep}) to be best suited for anomalous dimension properties, in which $n\to\infty$ and $\varepsilon\to0$ with $\gamma=n\varepsilon$ kept fixed. Let us start by noticing that the ``ansatz''~(\ref{d:gammaRep}), when replaced in the BFKL equation with a general initial condition $\displaystyle{f_{\varepsilon}^{(0)}(\gamma)}$, leads to a finite difference equation of the form
\begin{equation}\label{fde}
f_{\varepsilon}(\gamma+\varepsilon) = f_{\varepsilon}^{(0)}(\gamma+\varepsilon) + \frac{\asbmu}{\omega} \chie(\gamma) f_{\varepsilon}(\gamma) \;.
\end{equation}
In the following we shall often consider the case
\begin{equation}\label{f0}
f_{\varepsilon}^{(0)}(\gamma+\varepsilon) = f^{(0)}(\gamma) \equiv \frac{\asbmu}{\omega} \frac{\esp{\gamma T}}{\gamma} \;,
\end{equation}
which corresponds to the initial condition
\begin{equation}\label{F0}
\widetilde{\ugd}_{\varepsilon}^{(0)}(\kt) = \frac{\asbmu \esp{\varepsilon t}}{\omega} \Theta(t+T) \;,
\end{equation}
i.e.\ to the one expected from Eq.~(\ref{powerSol}) with a cutoff at $\kt^2 = Q_0^2 \equiv \mu^2 \esp{-T}$.
In this way we can study the role of the cutoff, by solving (\ref{fde}) with $T$ fixed, and performing the limits $T\to +\infty$, $\varepsilon\to0$ either in this order, or in reverse order. We shall solve Eq.~(\ref{fde}) in two steps, starting from the homogeneous equation \begin{equation}\label{homEq} h_{\varepsilon}(\gamma+\varepsilon) = \frac{\asbmu}{\omega} \chie(\gamma) h_{\varepsilon}(\gamma) \equiv \esp{L_{\varepsilon}(\gamma)} h_{\varepsilon}(\gamma) \;, \end{equation} where we take the ansatz \begin{equation}\label{hForm} h_{\varepsilon}(\gamma) = \exp\left\{\int^{\gamma} \lag(\gamma')\frac{\dif\gamma'}{\varepsilon} \right\} \equiv \exp\big\{S_{\varepsilon}(\gamma)\big\} \end{equation} so that Eq.~(\ref{homEq}) implies \begin{equation}\label{lagEq} S_{\varepsilon}(\gamma+\varepsilon) - S_{\varepsilon}(\gamma) = \int_{\gamma}^{\gamma+\varepsilon} \lag(\gamma')\frac{\dif\gamma'}{\varepsilon} = L_{\varepsilon}(\gamma) \;. \end{equation} We show in App.~\ref{aa:as} that the solution for the ``Lagrangian'' $\lag(\gamma)$ is expressed, under appropriate conditions, in terms of the Bernoulli numbers $B_n$ as follows \begin{equation}\label{lagExp} \lag(\gamma) = \sum_{n=0}^{\infty} \frac{B_n}{n!} \varepsilon^n L_{\varepsilon}^{(n)}(\gamma) \;, \end{equation} where $L^{(n)}$ is the $n$-th derivative of $L$ with respect to $\gamma$, and the generating function is \begin{equation}\label{bernoulli} \sum_{n=0}^{\infty} \frac{B_n}{n!} z^n = \frac{z}{\esp{z}-1} \;, \end{equation} so that \begin{equation}\label{Bn} B_0=1, \quad B_1=-\frac1{2}, \quad B_2 = \frac1{6}, \quad B_3 = 0, \quad B_4 = -\frac1{30}, \cdots \;. \end{equation} Correspondingly the ``action'' takes the form \begin{equation}\label{action} S_{\varepsilon}(\gamma) = \frac1{\varepsilon}\int^{\gamma} L_{\varepsilon}(\gamma') \dif\gamma' -\frac1{2} L_{\varepsilon}(\gamma) + \frac{\varepsilon}{12} L_{\varepsilon}'(\gamma) + \ord{\varepsilon^2} \;, \end{equation} where $L_{\varepsilon}$ can be further expanded in $\varepsilon$ as follows \begin{equation}\label{Lexp} L_{\varepsilon}(\gamma) \equiv \log\Big(\frac{\asbmu}{\omega}\chie(\gamma)\Big) = \log\Big(\frac{\asbmu}{\omega}\chi_0(\gamma)\Big) + \varepsilon \frac{\chi_1(\gamma)}{\chi_0(\gamma)} + \varepsilon^2 \left( \frac{\chi_2(\gamma)}{\chi_0(\gamma)} - \frac12 \frac{\chi_1^2(\gamma)}{\chi_0^2(\gamma)} \right) + \ord{\varepsilon^3} \end{equation} and the BFKL eigenvalue function $\chie = \chi_0 + \varepsilon\chi_1 + \varepsilon^2\chi_2 + \ord{\varepsilon^3}$ is given, according to Eq.~(\ref{chie}), by \begin{align} \chi_0(\gamma) &= 2\psi(1)-\psi(\gamma)-\psi(1-\gamma) \label{chi0} \\ \chi_1(\gamma) &= \frac12 \left[ \chi_0^2(\gamma) + 2\psi'(1)-\psi'(\gamma)-\psi'(1-\gamma) \right] \label{chi1} \\ \chi_2(\gamma) &= \chi_0(\gamma) \big[ \chi_1(\gamma) -\frac12\psi'(1)\big] -\frac13 \chi_0^3(\gamma) + \frac16 \big[ 8\psi''(1)-\psi''(\gamma) -\psi''(1-\gamma) \big] \;. \label{chi2} \end{align} Therefore, the solution of the homogeneous equation in $t$-space (Eq.~(\ref{BFKLeq}) without delta term) takes the form \begin{equation}\label{solHomEq} \widetilde{\ugd}_{\varepsilon}(\kt) = \int\dif \gamma\; \frac{\esp{\gamma t}}{\sqrt{\chi_0(\gamma)}} \exp\left\{\frac1{\varepsilon}\int^{\gamma} \log\Big(\frac{\asbmu}{\omega}\chi_0(\gamma')\Big) \dif\gamma' +\int^{\gamma} \frac{\chi_1(\gamma')}{\chi_0(\gamma')} \dif\gamma' \right\} \times \big[1+\ord{\varepsilon}\big] \;, \end{equation} where we have truncated the $\varepsilon$-expansion of the exponent up to the finite terms. 
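As a check of the expansion (\ref{Lexp})--(\ref{chi2}), note that for $\varepsilon\to0$ one has $\Gamma(\gamma)/\Gamma(\gamma+\varepsilon)=1-\varepsilon\psi(\gamma)+\ord{\varepsilon^2}$ and $\Gamma(1+\varepsilon)\Gamma(1-\varepsilon)/\Gamma(1+2\varepsilon)=1-2\varepsilon\psi(1)+\ord{\varepsilon^2}$, so that the square bracket in Eq.~(\ref{chie}) becomes
\[
\big[1-\varepsilon\psi(\gamma)-\varepsilon\psi(1-\gamma)\big]-\big[1-2\varepsilon\psi(1)\big]+\ord{\varepsilon^2}
=\varepsilon\big[2\psi(1)-\psi(\gamma)-\psi(1-\gamma)\big]+\ord{\varepsilon^2}\;.
\]
After multiplication by the prefactor $\esp{\varepsilon\psi(1)}\Gamma(1+\varepsilon)/\varepsilon=\big[1+\ord{\varepsilon}\big]/\varepsilon$, this reproduces $\chi_0(\gamma)$ in Eq.~(\ref{chi0}); the subleading coefficients $\chi_1$ and $\chi_2$ in Eqs.~(\ref{chi1}) and (\ref{chi2}) follow in the same way by carrying the expansion to higher orders in $\varepsilon$.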
The choice of the lower bound of the $\gamma'$ integrals in Eq.~(\ref{solHomEq}) is of no concern, since the normalisation of the solution of the homogeneous equation is arbitrary. The inhomogeneous equation (\ref{fde}) has, on the other hand, the iterative solution
\begin{align}
f_{\varepsilon}(\gamma) &= f^{(0)}(\gamma-\varepsilon) + f^{(0)}(\gamma-2\varepsilon) \frac{\asbmu}{\omega}\chie(\gamma-\varepsilon) + f^{(0)}(\gamma-3\varepsilon) \frac{\asbmu}{\omega}\chie(\gamma-2\varepsilon) \frac{\asbmu}{\omega}\chie(\gamma-\varepsilon) + \cdots \nonumber \\
&= \sum_{n=1}^{\infty} f^{(0)}(\gamma-n\varepsilon)\left(\frac{\asbmu}{\omega} \right)^{n-1}\prod_{m=1}^{n-1}\chie(\gamma-m\varepsilon) = \sum_{n=1}^{\infty} f^{(0)}(\gamma-n\varepsilon) \frac{h_{\varepsilon}(\gamma)}{h_{\varepsilon}(\gamma+\varepsilon-n\varepsilon)} \;, \label{itSol}
\end{align}
which is closely related to the power series solution (\ref{powerSol}), as analysed in more detail in App.~\ref{aa:is}. Here we just note the $\varepsilon=0$ limit of Eq.~(\ref{itSol}) at fixed $T$. Due to the fact that $h_{\varepsilon}(\gamma)$ satisfies Eq.~(\ref{homEq}), the ratios of $h$'s have a non-trivial $\varepsilon\to0$ limit, and we obtain
\begin{equation}\label{eps0lim}
f_0(\gamma) = f^{(0)}(\gamma) \sum_{n=0}^{\infty}\esp{n L_0(\gamma)} = \frac{f^{(0)}(\gamma)}{1-\frac{\asbmu}{\omega}\chi_0(\gamma)} \;,
\end{equation}
as expected for the solution of the BFKL equation with frozen coupling and with cutoff. We get in fact, up to higher-twist corrections,
\begin{equation}\label{frozenSol}
\widetilde{\ugd}_0(\kt) = \frac{\esp{\gamma_0 T}}{-\gamma_0\,\chi_0'(\gamma_0)} \left(\frac{\kt^2}{\mu^2}\right)^{\gamma_0} \Theta(\kt^2-\mu^2\esp{-T}) \;,
\end{equation}
where $\gamma_0(\asbmu/\omega)$ is the LL$x$ anomalous dimension, defined by
\begin{equation}\label{d:gamma0}
1=\frac{\asbmu}{\omega}\chi_0(\gamma_0) \;, \qquad \gamma_0\Big(\frac{\asbmu}{\omega}\Big) = \frac{\asbmu}{\omega} + \ord{\frac{\asbmu}{\omega}}^4 \;.
\end{equation}
The result (\ref{frozenSol}) shows the customary infrared-singular $T$-dependence of the $Q_0$-scheme, determined by $\gamma_0$. Our emphasis in this paper is, however, on the limit of (\ref{itSol}) for $\varepsilon \ll 1$ and $T\to+\infty$, where we obtain its continuum counterpart, which roughly replaces the sum on the r.h.s.\ of (\ref{itSol}) with an integral, apart from some $\varepsilon$-dependent corrections. In order to specify such corrections, we rewrite Eq.~(\ref{fde}) in terms of $\rho_{\varepsilon}(\gamma)\equiv f_{\varepsilon}(\gamma)/h_{\varepsilon}(\gamma)$ in the simpler form
\begin{equation}\label{rhoEq}
\rho(\gamma+\varepsilon)-\rho(\gamma) = \frac{f^{(0)}(\gamma)}{h_{\varepsilon}(\gamma+\varepsilon)} = \frac{f^{(0)}(\gamma)}{h_{\varepsilon}(\gamma)} \esp{-L_{\varepsilon}(\gamma)} \;,
\end{equation}
which is of the type (\ref{lagEq}) and is solved in terms of Bernoulli numbers as in (\ref{lagExp}).
The r.h.s.\ of (\ref{rhoEq}) has, however, $1/\varepsilon$ singularities in the exponent (cf.\ Eqs.~(\ref{hForm}) and (\ref{action})), and its derivatives eventually generate a non-trivial correction factor (see App.~\ref{aa:rho}) having a finite $\varepsilon\to0$ limit, as follows:
\begin{equation}\label{rho}
\rho_{\varepsilon}(\gamma) = \int^\gamma \frac{\dif\gamma'}{\varepsilon} \frac{f^{(0)}(\gamma')}{h_{\varepsilon}(\gamma'+\varepsilon)} \frac{L_0(\gamma')-\varepsilon T}{1-\esp{-L_0(\gamma')+\varepsilon T}} \times\big[1+\ord{\varepsilon}\big] \;,
\end{equation}
where we have kept terms $\sim\varepsilon T$, which we consider to be of order unity. Finally, by replacing Eq.~(\ref{rho}) into (\ref{d:gammaRep}) we get the solution
\begin{align}
\widetilde{\ugd}_{\varepsilon}(\kt) = \int\frac{\dif\gamma}{2\pi\ui} \int^{\gamma}\dif\gamma'\; &\frac{ \exp\left\{ \gamma t + \frac1{\varepsilon} \int_0^{\gamma}L_0(z)\dif z + \gamma' T - \frac1{\varepsilon} \int_0^{\gamma'}L_0(z)\dif z +\int_{\gamma'}^{\gamma}\frac{\chi_1(z)}{\chi_0(z)}\dif z \right\} }{ \varepsilon \gamma' \sqrt{\chi_0(\gamma)} \sqrt{\chi_0(\gamma')} } \nonumber \\
&\times \frac{L_0(\gamma')-\varepsilon T}{1-\esp{-L_0(\gamma')+\varepsilon T}} \;, \label{doubleGammaRep}
\end{align}
where we have truncated the $\varepsilon$-expansion to the finite terms. This solution has the form of a double $\gamma$-representation, similarly to the customary case $b\neq0$, $\varepsilon=0$, with the $\varepsilon$-parameter replacing $b$ in providing an infrared-free running coupling. Our goal is now to investigate the $T\to +\infty$ limit of Eq.~(\ref{doubleGammaRep}) by removing the artificial infrared cutoff $Q_0^2 = \mu^2 \esp{-T}$. In this limit the leading $\gamma'$-dependent phase is generally large and given by
\begin{equation}\label{phase1}
E(\gamma') = \gamma' T - \frac1{\varepsilon}\int_0^{\gamma'} L_0(z) \dif z \;,
\end{equation}
so that the $\gamma'$-integral is dominated by the saddle point $\bar{\gamma}'$:
\begin{equation}\label{saddle1}
E'(\bar{\gamma}') = T - \frac1{\varepsilon} L_0(\bar{\gamma}') = 0 \;,
\end{equation}
which implies
\begin{equation}\label{gammabar1}
1 = \frac{\asb(-T)}{\omega}\chi_0(\bar{\gamma}') \;.
\end{equation}
Therefore, $\bar{\gamma}'\sim\asb(-T)/\omega\to0$ for $T\to +\infty$ \footnote{The solution with $\bar{\gamma}' \simeq 1$ is relevant in the infrared region $\kt^2 \ll Q_0^2$ ($t \ll -T$), where it behaves as $\sim \kt^2/Q_0^2 = \esp{T+t}$.} and, by taking into account also the $\gamma'$-fluctuations
\begin{equation}\label{fluct1}
\sigma_{\gamma'} \equiv \frac1{\sqrt{E''(\bar{\gamma}')}} = \sqrt{\frac{\varepsilon\chi_0(\bar{\gamma}')}{-\chi'_0(\bar{\gamma}')}} \;,
\end{equation}
all $T$-dependent factors in (\ref{doubleGammaRep}) cancel out, yielding
\begin{equation}\label{gammaRep}
\widetilde{\ugd}_{\varepsilon}(\kt) \stackrel{T\to +\infty}{\longrightarrow} \int\frac{\dif\gamma}{\sqrt{2\pi\varepsilon}} \frac1{\sqrt{\chi_0(\gamma)}} \exp\left\{ \gamma t+\frac1{\varepsilon}\int_0^{\gamma} L_0(\gamma')\dif\gamma' + \int_0^{\gamma} \frac{\chi_1(\gamma')}{\chi_0(\gamma')}\dif\gamma' + \ord{\varepsilon} \right\} \;,
\end{equation}
which is just a solution (\ref{solHomEq}) of the {\em homogeneous} equation with an appropriate normalisation. Eq.~(\ref{gammaRep}) is the main result of this section and shows the mechanism by which the solution becomes independent of the details of the initial condition in the $T\to +\infty$ ($Q_0\to0$) limit.
In fact, due to the infinite evolution path from $-\infty$ to $t$, the shape of the solution reduces to that of the homogeneous equation, and the initial condition determines only the normalisation. This already suggests the factorisation of the $1/\varepsilon$ singularities which have replaced the $T$-dependence in Eq.~(\ref{gammaRep}). In order to prove such a factorisation, we resort again to the saddle-point method in order to evaluate the $\varepsilon\to0$ behaviour of (\ref{gammaRep}). The stationarity condition is now
\begin{equation}\label{saddle2}
\varepsilon t + L_0(\bar{\gamma}) = 0 \quad \iff \quad 1 = \frac{\asb(t)}{\omega} \chi_0(\bar{\gamma}) \;,
\end{equation}
with a stable fluctuation along the real axis for $\chi_0'(\bar{\gamma}) < 0$, thus yielding the result ($\bar{\gamma} \equiv \bar{\gamma}_t = \gamma_0\big(\asb(t)/\omega\big)$)
\begin{equation}\label{ugdTilde}
\widetilde{\ugd}_{\varepsilon}(\kt) = \frac1{\sqrt{-\chi_0'(\bar{\gamma}_t)}} \exp\left\{\bar{\gamma}_t\,t + \frac1{\varepsilon}\int_0^{\bar{\gamma}_t} L_0(\gamma')\dif\gamma' + \int_0^{\bar{\gamma}_t} \frac{\chi_1(\gamma')}{\chi_0(\gamma')}\dif\gamma' \right\} \times \big[1+\ord{\varepsilon}\big] \;.
\end{equation}
Finally, we calculate the integrated gluon density in the form
\begin{align}
g_{\varepsilon}(t) &\equiv \int\dif^{2+2\varepsilon}\kt'\;\ugd_{\varepsilon}(\kt')\,\Theta(\kt^2-\kt'{}^2) \nonumber \\
&= \frac1{\bar{\gamma}_t\sqrt{-\chi_0'(\bar{\gamma}_t)}} \exp\left\{\bar{\gamma}_t \,t + \frac1{\varepsilon}\int_0^{\bar{\gamma}_t} L_0(\gamma')\dif\gamma' + \int_0^{\bar{\gamma}_t} \frac{\chi_1(\gamma')}{\chi_0(\gamma')}\dif\gamma' \right\} \times \big[1+\ord{\varepsilon}\big] \;, \label{d:gluon}
\end{align}
where we have performed an integration by parts of $\widetilde{\ugd}$ in (\ref{ugdTilde}). Some remarks are in order. Firstly, the saddle-point condition (\ref{saddle2}) provides a real positive anomalous dimension only in the perturbative regime in which $\asb(t) < \omega / \chi_0({\textstyle\frac{1}{2}})$, the value at which the LL$x$ anomalous dimension has the well-known singularity at $\gamma={\textstyle\frac{1}{2}}$. Since, however, $\asb(t)$ increases with $t$ for $\varepsilon>0$, for large $t$ the solutions to (\ref{saddle2}) necessarily become complex conjugate with $\Re\bar{\gamma}={\textstyle\frac{1}{2}}$, and $g_{\varepsilon}(t)$ will oscillate asymptotically. This shows how the perturbative series (\ref{powerSol}) behaves beyond its convergence radius, so that $\ugd(\kt)$ stays marginally square-integrable,% \footnote{That is, $\ugd(\kt)\sim(\kt^2)^{-1/2} \times \text{oscillating function}$.} as required by the $\gamma$-representation. Secondly, Eq.~(\ref{d:gluon}) shows, in the perturbative regime, a fluctuation factor $\ff$ and an anomalous dimension exponential, in the form
\begin{gather}
g_{\varepsilon}(t) = \ff\Big(\frac{\asb(t)}{\omega}\Big) \exp\left\{\int_{-\infty}^t \dif\tau\;\left[ \gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) + \varepsilon \gamma_1\Big(\frac{\asb(\tau)}{\omega}\Big) \right]\right\} \times\big[1+\ord{\varepsilon}\big] \label{gForm} \\
\ff\Big(\frac{\asb(t)}{\omega}\Big) \equiv \frac1{\bar{\gamma}_t\sqrt{-\chi_0'(\bar{\gamma}_t)}} \;, \label{d:ff}
\end{gather}
where
\begin{equation}\label{d:gamma1}
\gamma_1\Big(\frac{\asb(t)}{\omega}\Big) \equiv -\frac{\chi_1(\bar{\gamma}_t)}{\chi_0'(\bar{\gamma}_t)}
\end{equation}
is the $\ord{\varepsilon}$ correction to the BFKL anomalous dimension.
The interpretation of $\gamma_1$ as an anomalous dimension correction follows simply by identifying the exponents in (\ref{gForm}) and (\ref{d:gluon}), which have the same $t$-derivative (by the saddle-point condition (\ref{saddle2})) and the same value at $t\to -\infty$ ($\bar{\gamma}_t=0$). Note that the $1/\varepsilon$ singularity of the exponent is a consequence of the boundary at $t\to -\infty$ in (\ref{gForm}), because of the Jacobians \begin{equation}\label{jacobian} \dif t = \frac{\dif\asb}{\varepsilon\asb} = -\frac{\chi_0'(\bar{\gamma}_t)}{\varepsilon\chi_0(\bar{\gamma}_t)}\dif\bar{\gamma}_t \;, \end{equation} which relate the various forms of the $1/\varepsilon$ exponent: \begin{equation}\label{exponents} \int_{-\infty}^t \dif\tau\; \gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) = \frac1{\varepsilon}\int_0^{\asb(t)} \frac{\dif\alpha}{\alpha}\; \gamma_0\Big(\frac{\alpha}{\omega}\Big) = \bar{\gamma}_t \, t +\frac1{\varepsilon}\int_0^{\bar{\gamma}_t} \dif\gamma'\; L_0(\gamma') \;. \end{equation} We are now in a position to factorise the $t$-dependence from the $1/\varepsilon$ singularities in Eq.~(\ref{gForm}). By simply subdividing the $\tau$ integration interval into $]-\infty,0] \cup [0,t]$, we obtain % \footnote{Splitting the integration interval at $\tau=0$ corresponds to choosing the factorisation scale $\mu_f=\mu$, i.e., $t_f=0$. } \begin{equation}\label{gFact} g_{\varepsilon}(t) = \ff\Big(\frac{\asb(t)}{\omega}\Big) \exp\left\{\int_0^t\dif\tau\;\gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) \right\} \rk\Big(\frac{\asb(0)}{\omega}\Big) \exp\left\{\frac1{\varepsilon}\int_0^{\asb(0)} \frac{\dif\alpha}{\alpha}\; \gamma_0\Big(\frac{\alpha}{\omega}\Big) \right\} \;, \end{equation} where we have defined the reduced coefficient factor \begin{equation}\label{d:rk} \rk(a) \equiv \frac{R(a)}{\ff(a)} \equiv \exp\left\{\int_0^{\gamma_0(a)} \dif\gamma'\;\frac{\chi_1(\gamma')}{\chi_0(\gamma')} \right\} \;, \qquad \left( a \equiv \frac{\asb}{\omega} \right) \;, \end{equation} which arises because of the anomalous dimension correction $\varepsilon\gamma_1$ cancelling the $1/\varepsilon$ singularity of the Jacobian (\ref{jacobian}) according to the identity \begin{equation}\label{intGamma1} \varepsilon\int_{-\infty}^0 \dif\tau\; \gamma_1\Big(\frac{\asb(\tau)}{\omega}\Big) = \int_0^{\asb(0)} \frac{\dif\alpha}{\alpha} \; \gamma_1\Big(\frac{\alpha}{\omega}\Big) = \int_0^{\gamma_0\big(\frac{\asb(0)}{\omega}\big)} \dif\gamma'\; \frac{\chi_1(\gamma')}{\chi_0(\gamma')} \;. \end{equation} The expression (\ref{d:rk}) agrees, by Eq.~(\ref{chi1}), with Eqs.~(3.17,B.18) of Ref.~\cite{CaHa94}. Finally, the finite $t$-evolution in Eq.~(\ref{gFact}) becomes simply \begin{equation}\label{tEvol} \ff\Big(\frac{\asb}{\omega}\Big)\left(\frac{\kt^2}{\mu^2}\right) ^{\gamma_0\left(\frac{\asb}{\omega}\right)} \end{equation} in the $\varepsilon\to0$ limit, thus recovering the result~\cite{CaHa94} \begin{equation}\label{gRen} g_{\varepsilon}(t) \to R\Big(\frac{\asb}{\omega}\Big)\left(\frac{\kt^2}{\mu^2}\right) ^{\gamma_0\left(\frac{\asb}{\omega}\right)} \exp\left\{\frac1{\varepsilon}\int_0^{\asb} \frac{\dif\alpha}{\alpha}\; \gamma_0\Big(\frac{\alpha}{\omega}\Big) \right\} \;. \end{equation} This derivation shows the mechanism by which the $R$ coefficient factor is obtained as the product of $\ff$ (arising from the $\gamma$-fluctuations) and $\rk$ (arising from the $\varepsilon$-dependence of the BFKL anomalous dimension). In the following we shall generalise this mechanism to $b>0$ and to further terms in the $\varepsilon$-expansion.
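The change-of-variables identity (\ref{exponents}) can also be verified numerically; the following sketch (ours, with $L_0(\gamma)=\log\big(\asbmu\chi_0(\gamma)/\omega\big)$ read off from the $b=0$ case, and with a tiny lower cutoff in place of $0$ in the quadratures) evaluates both sides:
\begin{verbatim}
# Numerical check (ours) of Eq. (exponents) at b = 0:
#   (1/eps) int_0^{abar(t)} (dal/al) gamma0(al/omega)
#     = gbar*t + (1/eps) int_0^{gbar} L0(g) dg ,
# with abar(t) = abar_mu*e^{eps*t}, L0(g) = log(abar_mu*chi0(g)/omega).
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq
from scipy.integrate import quad

chi0 = lambda g: 2*digamma(1.0) - digamma(g) - digamma(1.0 - g)
gamma0 = lambda a: brentq(lambda g: a*chi0(g) - 1.0, 0.1*a, 0.5 - 1e-12)

eps, omega, abar_mu, t = 0.1, 0.3, 0.02, 5.0
abar_t = abar_mu*np.exp(eps*t)
gbar = gamma0(abar_t/omega)

lhs = quad(lambda al: gamma0(al/omega)/al, 1e-12, abar_t)[0]/eps
rhs = gbar*t + quad(lambda g: np.log(abar_mu*chi0(g)/omega),
                    1e-12, gbar)[0]/eps
print(lhs, rhs)    # the two evaluations agree
\end{verbatim}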
\section{The running coupling equation and its factorisation properties\label{s:rce}} Our first purpose is to generalise to $4+2\varepsilon$ dimensions the BFKL equation with running coupling. Because of the $\varepsilon$-dependence of the $\beta$-function in Eq.~(\ref{beta}), the running coupling $\asb(t)$ has the form \begin{equation}\label{runningB} \frac1{\asb(t)}-\frac{b}{\varepsilon} = \esp{-\varepsilon t} \left( \frac1{\asbmu}-\frac{b}{\varepsilon} \right) \;, \quad \text{or} \quad \asb(t) = \frac{\asbmu \esp{\varepsilon t}}{1+b\asbmu\frac{\esp{\varepsilon t}-1}{\varepsilon}} \;, \end{equation} where $b = 11/12$ in the $N_f = 0$ limit. Due to the UV fixed point of Eq.~(\ref{beta}) at $\asb=\varepsilon/b$, Eq.~(\ref{runningB}) shows two distinct regimes, according to whether {\it(i)} $\asbmu < \varepsilon/b$ or {\it (ii)} $\asbmu > \varepsilon/b$. In the regime {\it (i)} $\asb(t)$ runs monotonically from $\asb = 0$ to $\asb = \varepsilon/b$ for $-\infty < t < +\infty$, while in the regime {\it (ii)} $\asb(t)$, starting from $\varepsilon/b$ in the UV limit, goes through the Landau pole at $t_\Lambda = \log(1-\varepsilon/(b\asbmu))/\varepsilon < 0$, and reaches $\asb = \varepsilon/b$ from below in the IR limit. Since the LL$x$ kernel $\Kernel(t,t')\dif t'$ scales like $\esp{\varepsilon t}$, an equation which realises such $\asb(t)$-evolution (and regimes) is obtained by setting \begin{align} \ugd_{\varepsilon,b}(\kt) &= \delta^{(2+2\varepsilon)}(\kt) + \frac1{\omega} \, \frac1{1+b\asbmu\frac{\esp{\varepsilon t}-1}{\varepsilon}} \int\frac{\dif^{2+2\varepsilon}\kt'}{(2\pi)^{2+2\varepsilon}}\;\Kernel(\kt,\kt')\ugd_{\varepsilon,b}(\kt') \label{BFKLeqB} \\ &= \delta^{(2+2\varepsilon)}(\kt) + \frac{\esp{\varepsilon\psi(1)}}{(\pi\kt^2)^{1+\varepsilon}} \widetilde{\ugd}_{\varepsilon,b}(\kt) \;. \nonumber \end{align} It is soon realised that $\widetilde{\ugd}_{\varepsilon,b}(\kt)$, given by (cf.\ Eq.~(\ref{inteq})) \begin{equation}\label{tildeEq} \widetilde{\ugd}_{\varepsilon,b}(\kt) = \frac1{\omega} \frac{\asbmu \esp{\varepsilon t}}{1+b\asbmu\frac{\esp{\varepsilon t}-1}{\varepsilon}} \left[ 1 + K_{\varepsilon} \widetilde{\ugd}_{\varepsilon,b} \right] \;, \end{equation} has a well-defined iterative solution in the regime {\it (i)} in which $0<\asb(t)<\varepsilon/b$. In fact, the $\kt$-integrations are convergent in the IR because of $\asb(t)\sim\esp{\varepsilon t}$ for $t\to-\infty$, and everywhere else because $\asb(t)$ is bounded. In this sense, $b,\varepsilon>0$ in the regime {\it (i)} act as regulators of both the IR and UV regions, and of the Landau pole, so that Eq.~(\ref{BFKLeqB}) is meaningful without any cutoffs --- a somewhat unique case in BFKL theory. In the $b \to 0$ limit Eq.~(\ref{BFKLeqB}) reduces to Eq.~(\ref{BFKLeq}), but becomes less convergent in the UV region: as noticed in App.~\ref{aa:is}, only a finite number of iterations $n<1/\varepsilon$ is actually possible for $b = 0$ if the UV integrations are to be convergent. Therefore we shall solve Eq.~(\ref{BFKLeqB}) in the regime {\it (i)}, where it is nicely convergent, and we shall study the factorisation of $1/\varepsilon$ singularities by letting $b,\varepsilon \to 0$ with $\varepsilon/b > \asbmu$ kept fixed. Ultimately, we are instead interested in the physical limit of $\varepsilon \to 0$ with $\asbmu$ and $b$ kept fixed, but we shall perform it only at the end, after factorisation of the $1/\varepsilon$ poles.
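A compact numerical/symbolic sketch (ours) of Eq.~(\ref{runningB}) may help to visualise the two regimes and to check consistency with the $\beta$-function $\dot{\asb}=\asb(\varepsilon-b\asb)$:
\begin{verbatim}
# Sketch (ours): the running coupling of Eq. (runningB) in 4+2 eps dims.
import numpy as np
import sympy as sp

def abar(t, abar_mu, b=11.0/12.0, eps=0.1):
    x = np.exp(eps*t)
    return abar_mu*x / (1.0 + b*abar_mu*(x - 1.0)/eps)

b, eps = 11.0/12.0, 0.1
print("UV fixed point eps/b =", eps/b)

# regime (i): abar_mu < eps/b -> monotonic flow from 0 (IR) to eps/b (UV)
for t in (-40.0, 0.0, 40.0):
    print("(i) ", t, abar(t, 0.05))

# regime (ii): abar_mu > eps/b -> Landau pole at negative t
amu = 0.2
print("(ii) t_Lambda =", np.log(1.0 - eps/(b*amu))/eps)

# symbolic consistency with d(abar)/dt = abar*(eps - b*abar)
ts, es, bs, amus = sp.symbols('t epsilon b alpha_mu', positive=True)
ab = amus*sp.exp(es*ts) / (1 + bs*amus*(sp.exp(es*ts) - 1)/es)
print(sp.simplify(sp.diff(ab, ts) - ab*(es - bs*ab)))    # -> 0
\end{verbatim}
The symbolic check also confirms the Jacobian $\dif t = \dif\asb/[\asb(\varepsilon-b\asb)]$ used repeatedly below.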
Let us introduce the $\gamma$-representation (\ref{d:gammaRep}) into Eq.~(\ref{tildeEq}) after multiplying it by the $t$-dependent denominator. After simple algebra we get the equation \begin{align} f_{\varepsilon,b}(\gamma+\varepsilon) &= \frac1{\omega}\left(\frac1{\asbmu}-\frac{b}{\varepsilon}\right)^{-1} \left[ \frac{\esp{\gamma T}}{\gamma} + \left( \chie(\gamma) - \frac{b\omega}{\varepsilon} \right) f_{\varepsilon,b}(\gamma) \right] \label{fdeB} \\ \nonumber &\equiv f_{\varepsilon,b}^{(0)}(\gamma) + \chi_{\varepsilon,b}(\gamma) f_{\varepsilon,b}(\gamma) \;, \end{align} where we have again introduced the cutoff $t > -T$ in the initial condition (cf.\ Eq.~(\ref{F0})), in order to better control the $\varepsilon \to 0$ limit. Eq.~(\ref{fdeB}) has the form of Eq.~(\ref{fde}) and admits a similar iterative solution, which is obtained from Eq.~(\ref{itSol}) by the replacements \begin{subequations}\label{repls} \begin{align} \asbmu &\to \left(\frac1{\asbmu}-\frac{b}{\varepsilon}\right)^{-1} \;, \label{rep:alpha} \\ f^{(0)}(\gamma) &\to f_{\varepsilon,b}^{(0)}(\gamma) \equiv \frac1{\omega} \left(\frac1{\asbmu}-\frac{b}{\varepsilon}\right)^{-1}\frac{\esp{\gamma T}}{\gamma}\;, \label{rep:f0} \\ \chie(\gamma) &\to \chi_{\varepsilon,b}(\gamma) \equiv \chie(\gamma)-\frac{b\omega}{\varepsilon} \;. \label{rep:chi} \end{align} \end{subequations} In the limit of $b,\varepsilon \to 0$ with $b/\varepsilon$ and the cutoff $T$ kept fixed we get identically from Eqs.~(\ref{itSol}) and (\ref{repls}) the frozen coupling solution (\ref{eps0lim}) and (\ref{frozenSol}), as expected. We are more interested, however, in the continuum limit of the solution of Eq.~(\ref{fdeB}) in the regime {\it (i)} and for $T \to +\infty$. By formal manipulations similar to the $b=0$ case we obtain \begin{equation}\label{gammaRepB} \widetilde{\ugd}_{\varepsilon,b}(\kt) \stackrel{T\to +\infty}{\longrightarrow} \int\frac{\dif\gamma}{\sqrt{2\pi\varepsilon}} \frac1{\sqrt{\chi_0(\gamma)-\frac{b\omega}{\varepsilon}}} \exp\left\{ \gamma t+\frac1{\varepsilon}\int_0^{\gamma} L_{0,b}(\gamma')\dif\gamma' + \int_0^{\gamma} \frac{\chi_1(\gamma')}{\chi_0(\gamma')-\frac{b\omega}{\varepsilon}} \dif\gamma' + \ord{\varepsilon} \right\} \end{equation} which is again a solution of the homogeneous equation, where \begin{equation}\label{d:Lb} L_{\varepsilon,b}(\gamma) \equiv \log\left( \frac{\frac1{\omega}\chie(\gamma) - \frac{b}{\varepsilon}}{\frac1{\asbmu}-\frac{b}{\varepsilon}} \right) \;, \quad L_{0,b}(\gamma) \equiv \log\left( \frac{\frac1{\omega}\chi_0(\gamma) - \frac{b}{\varepsilon}}{\frac1{\asbmu}-\frac{b}{\varepsilon}} \right) \end{equation} and the exponent has been expanded in $\varepsilon$ (at $b \asb/\varepsilon$ fixed) up to the finite terms. Once again, the factorisation of the $1/\varepsilon$ poles in (\ref{gammaRepB}) is investigated, for $\varepsilon \to 0$, by the saddle point condition \begin{equation}\label{saddleB} \varepsilon t + L_{0,b}(\bar{\gamma}) = 0 \quad \iff \quad 1 = \frac{\asb(t)}{\omega}\chi_0(\bar{\gamma}) \end{equation} where, due to the $b$-dependence of Eq.~(\ref{d:Lb}), $\asb(t)$ has the expected form (\ref{runningB}). 
It follows that \begin{equation}\label{ugdTildeB} \widetilde{\ugd}_{\varepsilon,b}(\kt) = \frac1{\sqrt{-\chi_0'(\bar{\gamma}_t)}} \exp\left\{\bar{\gamma}_t\,t + \frac1{\varepsilon}\int_0^{\bar{\gamma}_t} L_{0,b}(\gamma')\dif\gamma' + \int_0^{\bar{\gamma}_t} \frac{\chi_1(\gamma')}{\chi_0(\gamma')-\frac{b\omega}{\varepsilon}}\dif\gamma' \right\} \times \big[1+\ord{\varepsilon}\big] \end{equation} and that \begin{align} g_{\varepsilon,b}(t) &= \frac1{\bar{\gamma}_t\sqrt{-\chi_0'(\bar{\gamma}_t)}} \exp\left\{\bar{\gamma}_t \,t + \frac1{\varepsilon}\int_0^{\bar{\gamma}_t} L_{0,b}(\gamma')\dif\gamma' + \int_0^{\bar{\gamma}_t} \frac{\chi_1(\gamma')}{\chi_0(\gamma')-\frac{b\omega}{\varepsilon}}\dif\gamma' \right\} \times \big[1+\ord{\varepsilon}\big] \nonumber \\ &= \ff\Big(\frac{\asb(t)}{\omega}\Big) \exp\left\{\int_{-\infty}^t \dif\tau\;\left[ \gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) + \varepsilon \gamma_1\Big(\frac{\asb(\tau)}{\omega}\Big) \right]\right\} \times\big[1+\ord{\varepsilon}\big] \;. \label{gFormB} \end{align} The final expression in Eq.~(\ref{gFormB}) is formally identical to Eq.~(\ref{gForm}) and is expected to have analogous factorisation properties. However, due to the different form of $\asb(t)$, the Jacobians induced by Eq.~(\ref{saddleB}) are different: \begin{equation}\label{jacobianB} \dif t = \frac{\dif\asb}{\asb(\varepsilon-b\asb)} = -\frac{\chi_0'(\bar{\gamma}_t)}{\varepsilon\left[\chi_0(\bar{\gamma}_t) -\frac{b\omega}{\varepsilon}\right]} \dif\bar{\gamma}_t \;, \end{equation} and this explains the different $\varepsilon$-dependence of the exponents at fixed value of $\asb(t)$: \begin{equation}\label{exponent0B} \int_{-\infty}^t \dif\tau\; \gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) = \int_0^{\asb(t)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)}\; \gamma_0\Big(\frac{\alpha}{\omega}\Big) = \bar{\gamma}_t \, t +\frac1{\varepsilon}\int_0^{\bar{\gamma}_t} \dif\gamma'\; L_{0,b}(\gamma') \end{equation} and \begin{equation}\label{exponent1B} \varepsilon\int_{-\infty}^t \dif\tau\; \gamma_1\Big(\frac{\asb(\tau)}{\omega}\Big) = \varepsilon\int_0^{\asb(t)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)}\; \gamma_1\Big(\frac{\alpha}{\omega}\Big) \;. \end{equation} The factorisation of the $t$-dependence in Eq.~(\ref{gFormB}) from the $1/\varepsilon$ singularities is now performed as in Sec.~\ref{s:grLL} by subdividing the $\tau$-integration into the $]-\infty,t_f]$ and $[t_f,t]$ intervals, where $t_f \equiv \log(\mu_f^2/\mu^2)$ defines the factorisation scale. The two intervals are treated differently: in the finite UV interval $[t_f,t]$ we can freely go to the $\varepsilon=0$ limit at fixed value of $\asb(t_f)$, thus recovering the $\asb(t) \to \asb(t_f)/[1+b\asb(t_f) (t-t_f)]$ limit and the normal $t$-dependence of UV free QCD in 4 dimensions. In the remaining infinite IR interval we have to factorise the $1/\varepsilon$ poles by expanding in the $b\asb/\varepsilon$ parameter, as normally done in fixed order perturbation theory. By looking at Eqs.~(\ref{exponent0B}) and (\ref{exponent1B}) (with $t$ replaced by $t_f$) we realise the following: Eq.~(\ref{exponent0B}) exponentiates $1/\varepsilon$ poles and higher order ones coming from the $b\asb/\varepsilon$-expansion, and has to be factorised in full in the $\overline{\mathrm{MS}}$-scheme; it evolves in $t_f$ according to the LL$x$ anomalous dimension. 
On the other hand, the term (\ref{exponent1B}) --- due to the $\ord{\varepsilon}$ correction to the BFKL anomalous dimension --- reduces, in the $b \to 0$ limit, to the $\rk\big(\asb(t_f)/\omega\big)$ factor found in Sec.~\ref{s:grLL}, which yields $R/\ff$, i.e., the normalisation mismatch of the frozen coupling evolution with respect to the $\overline{\mathrm{MS}}$ density. Furthermore, in the $b\asb/\varepsilon$-expansion, Eq.~(\ref{exponent1B}) exponentiates $1/\varepsilon$ and higher order poles which should be factored out in the $\overline{\mathrm{MS}}$ density, and therefore contribute to the $\overline{\mathrm{MS}}$ anomalous dimension. In order to perform such a separation collectively, we rewrite the r.h.s.\ of Eq.~(\ref{exponent1B}) (with $t = t_f$) in the form \begin{equation}\label{gamma1split} \int_0^{\asb(t_f)} \frac{\dif\alpha}{\alpha}\; \gamma_1\Big(\frac{\alpha}{\omega}\Big) + \int_0^{\asb(t_f)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)}\; b\alpha \, \gamma_1\Big(\frac{\alpha}{\omega}\Big) \;, \end{equation} where the first term provides $\log\rk\big(\asb(t_f)/\omega\big)$, and the second one is the anomalous dimension contribution to be factored out in the $\overline{\mathrm{MS}}$ density. Finally, by substituting Eq.~(\ref{gamma1split}) into Eq.~(\ref{gFormB}), we find \begin{equation}\label{gQ0} g_{\varepsilon}(t) = \ff\Big(\frac{\asb(t)}{\omega}\Big) \exp\left\{\int_{t_f}^t\dif\tau\;\gamma_0\Big(\frac{\asb(\tau)}{\omega}\Big) \right\} \rk\Big(\frac{\asb(t_f)}{\omega}\Big) g^{(\overline{\mathrm{MS}})}(t_f)\;, \end{equation} where \begin{equation}\label{gMSbar} g^{(\overline{\mathrm{MS}})}(t_f) = \exp\left\{\int_0^{\asb(t_f)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)}\; \left[ \gamma_0\Big(\frac{\alpha}{\omega}\Big) +b\alpha \gamma_1\Big(\frac{\alpha}{\omega}\Big) \right] \right\} \end{equation} incorporates all the $1/\varepsilon$ poles, in a formal $b\asb/\varepsilon$-expansion. Eqs.~(\ref{gQ0}) and (\ref{gMSbar}) are the main results of this section. They confirm the relation between our generalised $Q_0$-scheme (with dimensional regularisation) and the $\overline{\mathrm{MS}}$-scheme, namely \begin{equation}\label{schemeRel} g^{(Q_0)}(t) \equiv g_{\varepsilon}(t) = R\Big(\frac{\asb(t)}{\omega}\Big) g^{(\overline{\mathrm{MS}})}(t) \;, \end{equation} and they prove the following resummation formulas \begin{align} \gamma^{(Q_0)}\big(\asb(t);\omega\big) \equiv \frac{\frac{\dif}{\dif t} g^{(Q_0)}(t)}{g^{(Q_0)}(t)} &= \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) -b \asb(t) \frac{\partial\log \ff}{\partial\log\asb} \label{gammaQ0} \\ \gamma^{(\overline{\mathrm{MS}})}\big(\asb(t);\omega\big) \equiv \frac{\frac{\dif}{\dif t} g^{(\overline{\mathrm{MS}})}(t)}{g^{(\overline{\mathrm{MS}})}(t)} &= \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) +b \asb(t) \frac{\partial\log \rk}{\partial\log\asb} \label{gammaMSbar} \\ \nonumber &= \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) +b \asb(t) \gamma_1\Big(\frac{\asb(t)}{\omega}\Big) \end{align} where $\ff$, $\rk$, $\gamma_0$ and $\gamma_1$ are defined in Eqs.~(\ref{d:ff}), (\ref{d:rk}), (\ref{d:gamma0}) and (\ref{d:gamma1}) respectively. Let us remark that the results (\ref{gammaQ0}-\ref{gammaMSbar}) are not surprising, in view of the well-studied relationship between the $\ff$ and $R$ factors, already investigated in the literature in a somewhat different context~\cite{CaCi97}.
However, they offer some important insights into the explicit RG factorisation obtained here: Eq.~(\ref{gammaQ0}) shows that the evolution of the gluon density in a $\kt$-factorisation scheme is independent of whether the BFKL equation is regularised by a cutoff ($Q_0$-scheme in the strict sense) or by a positive $\varepsilon$ (as done at present). This is a consequence of the RG factorisation just proved. Eq.~(\ref{gammaMSbar}) shows that the $\overline{\mathrm{MS}}$ evolution is the one expected because of the $R$ factor in Eq.~(\ref{schemeRel}). However, this result is obtained by the explicit, $b$-dependent factorisation of the $1/\varepsilon$ poles connected with $\gamma_1$ in Eq.~(\ref{gMSbar}). Furthermore, the $t_f$-dependence in Eq.~(\ref{gQ0}) cancels out because of the $\gamma_1$ evolution in $g^{(\overline{\mathrm{MS}})}$ cancelling the $t_f$-dependence of $\rk$. All together, such terms build up the $\ord{\varepsilon}$ correction to the BFKL anomalous dimension, as done in Eq.~(\ref{gamma1split}), which precisely vanishes in the $\varepsilon \to 0$ limit. Our next goal is to generalise the above factorisation procedure to subleading terms, in particular to the coefficient terms at NL$x$ level and to the corresponding NNL$x$ anomalous dimension terms. \section{Factorisation at subleading level: $\boldsymbol{b}$-dependent corrections\label{s:fsl}} We have just realised that the normalisation factor $\rk$ arises from the $\ord{\varepsilon}$ correction to the BFKL anomalous dimension (\ref{exponent1B}) after the (minimal) subtraction of a $b$-dependent contribution to the $\overline{\mathrm{MS}}$ anomalous dimension (second term of (\ref{gamma1split})). This suggests that, in order to compute corrections of relative order $b\asb$ to $\rk$ and the corresponding ones to $\gamma^{(\overline{\mathrm{MS}})}$, we have to calculate $\ord{\varepsilon^2}$ corrections to the BFKL anomalous dimension and, more generally, to the exponent of the solution (\ref{gammaRepB}) of the homogeneous equation. We thus restart our analysis from the ``action'' (\ref{action}) and its $b/\varepsilon$-dependent counterpart, and we rewrite the solution (\ref{gammaRepB}) of the homogeneous equation in the form \begin{equation}\label{solHomEqEps2} \widetilde{\ugd}_{\varepsilon,b}(\kt) = \int\frac{\dif\gamma}{\sqrt{2\pi\varepsilon}} \frac1{\sqrt{\chi_{\varepsilon}(\gamma)-\frac{b\omega}{\varepsilon}}} \exp\left\{ \gamma t+\frac1{\varepsilon}\int_0^{\gamma} L_{\varepsilon,b}(\gamma')\dif\gamma' + \frac{\varepsilon}{12} L_{\varepsilon,b}'(\gamma) + \ord{\varepsilon^2} \right\} \;, \end{equation} where we have kept the $\varepsilon$-dependence of $L$ and $\chi$, and the $\ord{\varepsilon}$ correction to the ``action'' (which is of relative order $\ord{\varepsilon^2}$).
We then look at the factorisation properties of Eq.~(\ref{solHomEqEps2}) by expanding around the $\varepsilon$-dependent saddle point $\gamma=\bar{\gamma}_{\varepsilon}$: \begin{equation}\label{saddleEps2} \varepsilon t + L_{\varepsilon,b}(\bar{\gamma}_{\varepsilon}) = 0 \quad \iff \quad 1 = \frac{\asb(t)}{\omega} \chie(\bar{\gamma}_{\varepsilon}) \;, \end{equation} which, therefore, defines the $\varepsilon$-dependent BFKL anomalous dimension \begin{equation}\label{gammaEps2} \gamma_{\varepsilon}\Big(\frac{\asb}{\omega}\Big) = \gamma_0\Big(\frac{\asb}{\omega}\Big) + \varepsilon \gamma_1\Big(\frac{\asb}{\omega}\Big) + \varepsilon^2 \gamma_2\Big(\frac{\asb}{\omega}\Big) + \cdots \end{equation} where $\gamma_1$ was given before in Eq.~(\ref{d:gamma1}), and we have \begin{equation}\label{gamma2} \gamma_2\Big(\frac{\asb}{\omega}\Big) = \left. -\frac{\chi_2(\gamma)}{\chi_0'(\gamma)} + \frac{\chi_1(\gamma) \chi_1'(\gamma)}{\chi_0'{}^2(\gamma)} -\frac12\frac{\chi_1^2(\gamma) \chi_0''(\gamma)} {\chi_0'{}^3(\gamma)} \right|_{\gamma = \gamma_0(\frac{\asb}{\omega})} \;. \end{equation} Furthermore, we have to compute the $\gamma$-integral by expanding the action and the factor $\left[\chie(\gamma)-\frac{b\omega}{\varepsilon}\right]^{-1/2}$ around the saddle point. Since the $\gamma$-fluctuations are governed by the width $\sigma_\gamma = \left[-\varepsilon/L'_{\varepsilon,b}(\bar{\gamma}_{\varepsilon})\right]^{1/2}$, which is of $\ord{\sqrt{\varepsilon}}$, while the ``action'' is $\ord{1/\varepsilon}$, it turns out that we need to expand up to 6th order in $\gamma-\bar{\gamma}$ in order to reach all $\ord{\varepsilon}$ terms. This calculation is performed in App.~\ref{a:cgd} and, when substituted into Eq.~(\ref{solHomEqEps2}), provides the following result for the gluon density: \begin{equation}\label{gluonEps2} g_{\varepsilon}(t) = \frac1{\bar{\gamma}_{\varepsilon}\sqrt{-\chie'(\bar{\gamma}_{\varepsilon})}} \exp\left\{ \int_{-\infty}^t \dif\tau\; \bar{\gamma}_{\varepsilon}\Big(\frac{\asb(\tau)}{\omega}\Big) \right\} \times \big[ 1 + \varepsilon S_1(\bar{\gamma}_{\varepsilon},b) \big] \;. \end{equation} Here we have used the identity \begin{equation}\label{expintgamma} \bar{\gamma}_{\varepsilon}\, t +\frac1{\varepsilon}\int_0^{\bar{\gamma}_{\varepsilon}}\dif\gamma'\; L_{\varepsilon,b}(\gamma') = \int_{-\infty}^t \dif\tau\; \bar{\gamma}_{\varepsilon}\Big(\frac{\asb(\tau)}{\omega}\Big) \;, \end{equation} the Jacobians \begin{equation}\label{jacobEps} \dif t = \frac{\dif\asb}{\asb(\varepsilon-b\asb)} = -\frac{\chie'(\bar{\gamma}_{\varepsilon})}{\varepsilon\left[\chie(\bar{\gamma}_{\varepsilon}) -\frac{b\omega}{\varepsilon}\right]} \dif \bar{\gamma}_{\varepsilon} \;, \end{equation} and the $\ord{\varepsilon}$ correction to the action \begin{equation}\label{S1} S_1(\gamma,b) = \frac1{12} L_{\varepsilon,b}'+\left[\frac1{8}(-L_{\varepsilon,b}') + \frac{5}{24} \frac{L_{\varepsilon,b}''{}^2}{(-L_{\varepsilon,b}')^3} + \frac1{8}\frac{L_{\varepsilon,b}'''}{L_{\varepsilon,b}'{}^2} \right] \;, \end{equation} where the terms in square brackets are precisely the fluctuations calculated in App.~\ref{aa:cspf}. Let us now look at the factorisation properties of our result in Eq.~(\ref{gluonEps2}) in the limit of $\varepsilon, b \asb/\varepsilon \to 0$.
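Before doing so, the expansion (\ref{gammaEps2})--(\ref{gamma2}) of the saddle point can be cross-checked symbolically on arbitrary test eigenvalues (a sketch of ours, assuming SymPy):
\begin{verbatim}
# Symbolic check (ours) of Eq. (gamma2): solve chi_eps(g_eps) = chi0(g0)
# order by order in eps, with chi_eps = chi0 + eps*chi1 + eps^2*chi2.
import sympy as sp

eps, g, g0, g1, g2 = sp.symbols('epsilon gamma g0 g1 g2')
chi0 = 1/g + g**2            # arbitrary test eigenvalues
chi1 = g + 2
chi2 = 3*g**2 + 1

geps = g0 + eps*g1 + eps**2*g2
cond = sp.expand(sp.series((chi0 + eps*chi1 + eps**2*chi2).subs(g, geps)
                           - chi0.subs(g, g0), eps, 0, 3).removeO())

sol1 = sp.solve(cond.coeff(eps, 1), g1)[0]                # -chi1/chi0'
sol2 = sp.solve(cond.coeff(eps, 2).subs(g1, sol1), g2)[0]

d = lambda f: sp.diff(f, g)
formula = (-chi2/d(chi0) + chi1*d(chi1)/d(chi0)**2
           - sp.Rational(1, 2)*chi1**2*sp.diff(chi0, g, 2)/d(chi0)**3)
print(sp.simplify(sol2 - formula.subs(g, g0)))            # -> 0
\end{verbatim}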
The anomalous dimension exponential is factorised as usual, and its infrared part at the factorisation scale $t_f$ is given by \begin{equation}\label{IRexponential} \int_0^{\asb(t_f)}\frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)} \left[ \gamma_0\Big(\frac{\alpha}{\omega}\Big) + \varepsilon \gamma_1\Big(\frac{\alpha}{\omega}\Big) + \varepsilon^2 \gamma_2\Big(\frac{\alpha}{\omega}\Big) + \cdots \right] \;. \end{equation} We now have the additional $\ord{\varepsilon^2}$ term $\gamma_2$, which nevertheless still builds up $1/\varepsilon$ singularities because of the $b\asb/\varepsilon$-expansion of the $\beta$-function in the denominator. The decomposition into coefficient and $\overline{\mathrm{MS}}$-anomalous dimension parts is simply done by writing $\varepsilon^2 = (b\asb)^2 + (\varepsilon-b\asb)(b\asb+\varepsilon)$, so that \begin{equation}\label{gamma2split} \int_0^{\asb(t_f)}\frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)} \; \varepsilon^2 \, \gamma_2\Big(\frac{\alpha}{\omega}\Big) = \int_0^{\asb(t_f)}\frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)} \; (b\alpha)^2 \, \gamma_2\Big(\frac{\alpha}{\omega}\Big) + \int_0^{\asb(t_f)}\frac{\dif\alpha}{\alpha}\; b\alpha \,\gamma_2\Big(\frac{\alpha}{\omega}\Big) +\ord{\varepsilon} \;, \end{equation} where the first term on the r.h.s.\ is the minimal subtraction part of the $\gamma_2$ anomalous dimension, and the second one is the $\ord{b\asb}$ correction to $\log\rk$. We also have to look for $1/\varepsilon$ singularities possibly arising from the $b\asb/\varepsilon$-expansion in the contribution (\ref{S1}). Here the situation is not so clear {\em a priori}, given the fact that \begin{equation}\label{Lprime} -L_{\varepsilon,b}'(\bar{\gamma}_{\varepsilon}) = \frac{-\chie'(\bar{\gamma}_{\varepsilon})}{\chie(\bar{\gamma}_{\varepsilon})-\frac{b\omega}{\varepsilon}} = -\frac{\asb(t)}{\omega} \frac{\chie'\big(\bar{\gamma}_{\varepsilon}(t)\big)} {1-\frac{b \asb(t)}{\varepsilon}} \end{equation} has a non-trivial $b\asb/\varepsilon$ expansion. However, after some algebra, we find a remarkable cancellation leading to the result (see App.~\ref{aa:cs}) \begin{equation}\label{S1form} S_1\big(\bar{\gamma}_{\varepsilon}(t),b\big) = \frac1{24} \left\{ \frac{\chie''}{\chie'} + \frac{\omega}{\chie'}\left(\frac{b}{\varepsilon}-\frac1{\asb(t)}\right) \left[2\left(\frac{\chie''}{\chie'}\right)^2 -3\left(\frac{\chie''}{\chie'}\right)'\right]\right\} \;, \end{equation} which is only linear in $b/\varepsilon$. It follows that $S_1$ has no minimal subtraction terms, and only contributes to the renormalisation of the $\ff$-factor, as follows: \begin{equation}\label{epsS1} \varepsilon S_1\big(\bar{\gamma}_{\varepsilon}(t),b\big) = \frac{b\omega}{24\chi_0'}\left[ 2\left(\frac{\chi_0''}{\chi_0'}\right)^2 -3\left(\frac{\chi_0''}{\chi_0'}\right)'\right] + \ord{\varepsilon} \;. \end{equation}
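The algebraic splits used above, and in Eq.~(\ref{gamma1split}), are elementary; still, they can be confirmed in one line each (a SymPy sketch of ours):
\begin{verbatim}
# Two one-line checks (ours) of the splits behind Eqs. (gamma1split)
# and (gamma2split).
import sympy as sp
al, b, eps = sp.symbols('alpha b epsilon')
# eps/(al*(eps - b*al)) = 1/al + b/(eps - b*al)
print(sp.simplify(eps/(al*(eps - b*al)) - (1/al + b/(eps - b*al))))  # -> 0
# eps**2 = (b*al)**2 + (eps - b*al)*(b*al + eps)
print(sp.expand((b*al)**2 + (eps - b*al)*(b*al + eps) - eps**2))     # -> 0
\end{verbatim}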
We can thus summarise our results on the NL$x$ corrections to the coefficient factors of Eq.~(\ref{gQ0}) (and on the NNL$x$ ones to the $\overline{\mathrm{MS}}$ anomalous dimension). The $\rk$ factor takes the form \begin{equation}\label{rkEps2} \rk\big(\asb(t_f);\omega\big) = \exp\left\{\int_0^{\asb(t_f)} \frac{\dif\alpha}{\alpha} \;\left[ \gamma_1\Big(\frac{\alpha}{\omega}\Big) + b\alpha \, \gamma_2\Big(\frac{\alpha}{\omega}\Big) + \cdots \right] \right\} \;, \end{equation} and the $\overline{\mathrm{MS}}$ anomalous dimension becomes \begin{align} \gamma^{(\overline{\mathrm{MS}})}(\asb;\omega) &= \gamma_0\Big(\frac{\asb}{\omega}\Big) +b \asb \, \gamma_1\Big(\frac{\asb}{\omega}\Big) +(b\asb)^2 \, \gamma_2\Big(\frac{\asb}{\omega}\Big) +\cdots \label{gammaMSbar2} \\ \nonumber &= \gamma_0\Big(\frac{\asb}{\omega}\Big) + b \asb \frac{\partial\log\rk(\asb;\omega)}{\partial\log\asb} \;. \end{align} Due to the form of (\ref{gammaMSbar2}) and (\ref{rkEps2}), the $t_f$-dependence of the factorisation formula (\ref{gQ0}) cancels out as it should, and as is expected on the basis of their common origin, Eq.~(\ref{IRexponential}). We also expect this mechanism to hold true to all orders in $b\asb$, so that the $\overline{\mathrm{MS}}$ anomalous dimension and coefficient can be inferred from the corresponding $\varepsilon$-expansion of the BFKL anomalous dimension (\ref{gammaEps2}). On the other hand, the fluctuation factor $\ff$ receives finite NL$x$ corrections from Eq.~(\ref{epsS1}), and becomes \begin{equation}\label{ffEps2} \ff(\asb;\omega) = \frac1{\gamma_0\sqrt{-\chi_0'}} \left\{ 1 + \frac{b\omega}{24\chi_0'}\left[ 2\left(\frac{\chi_0''}{\chi_0'}\right)^2 -3\left(\frac{\chi_0''}{\chi_0'}\right)'\right] +\cdots \right\} \;. \end{equation} Such corrections are identical to those found from the normal $\gamma$-representation in 4 dimensions. Also in this case, we expect that higher orders in the $\varepsilon$-expansion (\ref{action}) and the corresponding fluctuations will combine so as to provide further finite subleading corrections to Eq.~(\ref{ffEps2}). We have no formal proof of this expectation, but we notice that higher order fluctuations still involve the scale $t$ and the coupling $\asb(t)$ and are thus independent of the IR behaviour at $\tau \to -\infty$. It is then natural to believe that they can be computed directly in the $\varepsilon = 0$ limit from the normal $\gamma$-representation in 4 dimensions, as explicitly proved above for the next-to-leading terms. \section{On the treatment of the full NL$\boldsymbol{x}$ corrections to the $\boldsymbol{\rk}$ factor\label{s:nlc}} Having calculated the running coupling corrections to $\rk$, the problem arises of including {\em all} NL$x$ contributions, as embodied in the next-to-leading BFKL kernel. The structure and the explicit expression of such a kernel with dimensional regularisation were given in several papers for the gluon~\cite{RGvert} and the quark~\cite{QQvertCC,QQvertFFFK} parts, and summarised in Refs.~\cite{CaCi97,FaLi98,CaCi98}. It has the form \begin{equation}\label{NLstruc} \Kernel^{(\mathrm{NL})} = \asb \left[ \frac{b}{\varepsilon} \left(1-\esp{\varepsilon t}\right) \Kernel \right] + \Kernel^{(1)} \;, \end{equation} where we have singled out the running coupling part (proportional to the leading kernel $\Kernel$) which arises from the $\ord{\asb^2}$ expansion of the running coupling equation~(\ref{BFKLeqB}).
The remaining kernel $\Kernel^{(1)}$, which scales as $\esp{2\varepsilon t}$, is the NL$x$ kernel proper, whose eigenvalues have been worked out in the literature in the $\varepsilon \to 0$ limit, and are here required up to $\ord{\varepsilon}$ for the complete calculation of NL$x$ corrections to $\rk$. We shall thus generalise Eq.~(\ref{BFKLeqB}) to include the NL$x$ kernel in the form \begin{align} \ugd_{\varepsilon}(\kt) &= \delta^{(2+2\varepsilon)}(\kt) + \frac1{\omega} \, \frac1{1+b\asbmu\frac{\esp{\varepsilon t}-1}{\varepsilon}} \int\frac{\dif^{2+2\varepsilon}\kt'}{(2\pi)^{2+2\varepsilon}}\;\left[ \Kernel(\kt,\kt') + \Kernel^{(1)}(\kt,\kt') \right] \ugd_{\varepsilon}(\kt') \nonumber \\ &= \delta^{(2+2\varepsilon)}(\kt) + \frac{\esp{\varepsilon\psi(1)}}{(\pi\kt^2)^{1+\varepsilon}} \widetilde{\ugd}_{\varepsilon}(\kt) \;, \label{BFKLeqNL} \end{align} where NNL$x$ terms and further subleading ones have been freely added so as to reproduce the resummed $\asb(t)$ evolution.% \footnote{Here we keep terms which are of relative order $\asb \varepsilon^n$ ($n \geq 0$) with respect to the leading ones, and we drop terms of relative order $b\asb^2$ or higher, at fixed values of $\varepsilon$.} It is now straightforward to go to $\gamma$-space and to obtain a modified form of Eq.~(\ref{fdeB}). By introducing the NL$x$ ``characteristic function'' $\chie^{(1)}$ of $\Kernel^{(1)}$ in $4+2\varepsilon$ dimensions \begin{equation}\label{d:chie1} \int\frac{\dif^{2+2\varepsilon}\kt'}{(2\pi)^{2+2\varepsilon}}\; \Kernel^{(1)}(\kt,\kt') (\kt'{}^2)^{\gamma-1-2\varepsilon} \equiv \asbmu^2 \chie^{(1)}(\gamma) \frac{(\kt^2)^{\gamma-1}}{\mu^{4\varepsilon}} \;, \end{equation} the corresponding homogeneous equation reads \begin{equation}\label{fdeNL} f_{\varepsilon}(\gamma+\varepsilon) - \frac{b\asbmu}{\varepsilon} \left[ f_{\varepsilon}(\gamma+\varepsilon)-f_{\varepsilon}(\gamma)\right] = \asbmu \frac{\chie(\gamma)}{\omega} f_{\varepsilon}(\gamma) + \frac{\asbmu^2}{\omega} \chie^{(1)}(\gamma) f_{\varepsilon}(\gamma-\varepsilon) \;, \end{equation} and contains, therefore, two finite difference steps, due to the different scaling properties of the leading vs.\ next-to-leading kernels. However, to NL$x$ accuracy, we can use the leading order equation to express $f_{\varepsilon}(\gamma-\varepsilon)$ in terms of $f_{\varepsilon}(\gamma)$ in the last term, and we obtain \begin{equation}\label{fdeNLbis} f_{\varepsilon}(\gamma+\varepsilon) = \left[ \frac{ \frac{\chie(\gamma)}{\omega}-\frac{b}{\varepsilon} + \frac{\chie^{(1)}(\gamma)}{\chie(\gamma-\varepsilon)} } {\frac1{\asbmu}-\frac{b}{\varepsilon}} \right] f_{\varepsilon}(\gamma) \equiv \exp[L_{\varepsilon,b}^{\mathrm{eff}}(\gamma)] f_{\varepsilon}(\gamma) \;, \end{equation} where the next-to-leading term is now suppressed by a factor of $\omega$ with respect to the leading one, much in the same spirit as the $\omega$-expansion~\cite{omExp}. We can thus solve Eq.~(\ref{fdeNLbis}) by the same method used before to get Eq.~(\ref{solHomEqEps2}).
Correspondingly, the saddle-point condition at $\gamma = \bar{\gamma}_{\varepsilon}\big(\asb(t);\omega\big)$, namely \begin{equation}\label{saddleNL} \varepsilon t + L_{\varepsilon,b}^{\mathrm{eff}}\left(\bar{\gamma}_{\varepsilon}\right) = 0 \qquad \iff \qquad \asb(t) \left[ \chie(\bar{\gamma}_{\varepsilon}) + \omega \frac{\chie^{(1)}(\bar{\gamma}_{\varepsilon})}{\chie(\bar{\gamma}_{\varepsilon}-\varepsilon)} \right] = \omega \;, \end{equation} admits the solution \begin{equation}\label{gammaNL} \bar{\gamma}_{\varepsilon} = \gamma_{\varepsilon}^{(0)}\Big(\frac{\asb(t)}{\omega}\Big) + \asb(t) \gamma_{\varepsilon}^{(1)}\Big(\frac{\asb(t)}{\omega}\Big) + \ord{\asb^2} \;, \end{equation} where $\gamma_{\varepsilon}^{(0)}$ defines the LL$x$ BFKL anomalous dimension of Eq.~(\ref{gammaEps2}) and $\gamma_{\varepsilon}^{(1)}$ is its NL$x$ correction, obtained by truncating the expansion of the saddle point position to relative order $\ord{\asb(t)}$: \begin{equation}\label{delta} \gamma_{\varepsilon}^{(1)}\Big(\frac{\asb(t)}{\omega}\Big) = - \frac{\chie^{(1)}(\gamma_{\varepsilon}^{(0)})}{\chie'(\gamma_{\varepsilon}^{(0)})} \frac{\chie(\gamma_{\varepsilon}^{(0)})}{\chie(\gamma_{\varepsilon}^{(0)} - \varepsilon)} = \gamma^{(1)}_0\Big(\frac{\asb(t)}{\omega}\Big) + \varepsilon \gamma^{(1)}_1\Big(\frac{\asb(t)}{\omega}\Big) + \ord{\varepsilon^2} \;. \end{equation} This expression reduces to the customary one~\cite{FaLi98,CaCi98} in the $\varepsilon=0$ limit, but has $\ord{\varepsilon}$ corrections coming from the corresponding ones of $\chie^{(1)}$ --- yet to be extracted from the various papers in the literature~\cite{RGvert,QQvertCC,QQvertFFFK}. Finally, we expand the $\varepsilon$-dependence of the BFKL anomalous dimension, including NL$x$ terms, as follows: \begin{equation}\label{gammaBFKL} \bar{\gamma}_{\varepsilon}(\asb;\omega) = \gamma^{(0)}_0\Big(\frac{\asb}{\omega}\Big) + \asb \gamma^{(1)}_0\Big(\frac{\asb}{\omega}\Big) + \varepsilon \gamma^{(0)}_1\Big(\frac{\asb}{\omega}\Big) + \varepsilon \left[ \asb \gamma^{(1)}_1\Big(\frac{\asb}{\omega}\Big) + \varepsilon \gamma^{(0)}_2\Big(\frac{\asb}{\omega}\Big) \right] \end{equation} and we substitute it into the analogue of Eq.~(\ref{gluonEps2}). The corresponding anomalous dimension exponential factorises in the form \begin{equation}\label{anDimExp} \exp\left\{\int_0^{\asb(t)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)} \; \bar{\gamma}_{\varepsilon}(\alpha;\omega) \right\} = \rk\big(\asb(t);\omega\big) g^{(\overline{\mathrm{MS}})}(t) \;, \end{equation} where \begin{equation}\label{logrk} \rk\big(\asb(t);\omega\big) = \exp \left\{ \int_0^{\asb(t)} \frac{\dif\alpha}{\alpha}\; \left[\gamma^{(0)}_1\Big(\frac{\alpha}{\omega}\Big) + \alpha \gamma^{(1)}_1\Big(\frac{\alpha}{\omega}\Big) + b \alpha \gamma^{(0)}_2\Big(\frac{\alpha}{\omega}\Big) \right] \right\} \end{equation} and \begin{equation}\label{gMSbarNL} g^{(\overline{\mathrm{MS}})}(t) = \exp\left\{\int_0^{\asb(t)} \frac{\dif\alpha}{\alpha(\varepsilon-b\alpha)} \; \bar{\gamma}_{\varepsilon=b\alpha}(\alpha;\omega) \right\} \;.
\end{equation} It follows that the $\overline{\mathrm{MS}}$ gluon anomalous dimension is \begin{align} \gamma^{(\overline{\mathrm{MS}})}(\asb;\omega) = \bar{\gamma}_{\varepsilon=b\asb}(\asb;\omega) &= \gamma^{(0)}_0 + \asb \gamma^{(1)}_0 + b\asb \gamma^{(0)}_1 + b\asb^2 \gamma^{(1)}_1 + (b\asb)^2 \gamma^{(0)}_2 \nonumber \\ &= \gamma^{(0)}_0 + \asb\gamma^{(1)}_0 + b\asb \frac{\partial\log\rk(\asb;\omega)}{\partial\log\asb} \label{gammaMSbarNL} \end{align} and has therefore subleading terms determined by the $\varepsilon$-dependence of $\bar{\gamma}_{\varepsilon}$ at $\varepsilon = b\asb$, which are related in the expected way to the coefficient $\rk$. On the other hand, the gluon density in the $Q_0$-scheme contains an additional fluctuation factor $\ff(\asb;\omega)$ whose calculation at full NL$x$ level proceeds along the lines of Sec.~\ref{s:fsl} and is not explicitly done here. The corresponding anomalous dimension is \begin{equation}\label{gammaQ0NL} \gamma^{(Q_0)}(\asb;\omega) = \gamma^{(0)}_0\Big(\frac{\asb}{\omega}\Big) + \asb \gamma^{(1)}_0\Big(\frac{\asb}{\omega}\Big) -b \asb \frac{\partial\log\ff(\asb;\omega)}{\partial\log\asb} \end{equation} and can be calculated directly at $\varepsilon = 0$. It should be remarked that the NNL$x$ contributions to the anomalous dimension in Eqs.~(\ref{gammaMSbarNL}) and (\ref{gammaQ0NL}) coming from the NL$x$ corrections to $\rk$ and $\ff$ are of course mixed with dynamical contributions whose calculation has not yet been attempted in the literature. Nevertheless, we do compute here --- once $\gamma_{\varepsilon}^{(1)}$ is known --- the full NL$x$ contributions to $\rk$, which therefore explain the {\em difference} between $\overline{\mathrm{MS}}$- and $Q_0$-scheme anomalous dimensions at NNL$x$ accuracy. \section{Universality of $\boldsymbol{\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}}$ and its resummation formula\label{s:ugqg}} So far, we have considered the gluon channel only, and discussed NL$x$ corrections to the normalisation change of the gluon density from the $\overline{\mathrm{MS}}$-scheme to the $Q_0$-scheme. However, physical probes are coupled to quarks, which enter the BFKL framework through a $\kt$-factorisation kernel $\qKernel$, defining the measuring process at hand (labelled by the superscript $p$), as shown in Fig.~\ref{fig:quarkLx}. \begin{figure}[hbp] \centering \includegraphics[height=0.2\textheight]{quarkLLx.eps} \caption{The leading small-$x$ contribution to the quark density.} \label{fig:quarkLx} \end{figure} In a physical scheme, like the $Q_0$-scheme, we can just {\em define} the quark density $q^{(p)}$ by the action of $\qKernel$ \begin{equation} \label{defQuark} q_{\varepsilon}^{(p)}(t) = \as(t) \int\dif^{2+2\varepsilon} \kt' \; \qKernel\Big(\frac{\kt^2}{\kt'{}^2}\Big) \ugd_{\varepsilon}(\kt') \;, \end{equation} ($\as(t) \equiv \as \esp{\varepsilon t}$, see Eq.~(\ref{d:alphas})) and it is then straightforward to find a 4-dimensional NL$x$ resummation formula for $\gamma_{\pq\pg}^{(p)}$, as first shown by Catani and Hautmann~\cite{CaHa94} in the DIS-scheme ($q^{(\DIS)} = F_2$). On the other hand, in the $\overline{\mathrm{MS}}$-scheme one has to disentangle the coefficient part of Eq.~(\ref{defQuark}) from the anomalous dimension part.
The latter contains, by definition, the minimal $1/\varepsilon$ singularities which are not exponentiated in the gluon density in the following collinear factorisation formula \begin{equation}\label{cff} q_{\varepsilon}^{(p)}(t) - C_{\pq\pg}^{(p)}\big(\as(t)\big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) = q_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) = \int_{-\infty}^t \dif\tau\; \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})} \big(\as(\tau)\big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(\tau) \;, \end{equation} where we have set $C_{\pq\pq}^{(p)} = 1$ and $\gamma_{\pq\pq}^{(\overline{\mathrm{MS}})} = 0$ at NL$x$ anomalous dimension accuracy and we have omitted, for notational simplicity, the $\omega$-variable. Note that $C_{\pq\pg}^{(p)}$ is process- and $\varepsilon$-dependent, while $\gamma_{\pq\pg}$ is universal and $\varepsilon$-independent. $q^{(p)}$ is obtained in the $\gamma$-representation framework by inserting into Eq.~(\ref{defQuark}) the characteristic function of the $\qKernel$ kernel \begin{equation}\label{defQuarkEigenv} \int \frac{\dif\kt'{}^2}{\kt'{}^2} \left(\frac{\kt'{}^2}{\kt^2}\right)^\gamma \qKernel\Big(\frac{\kt^2}{\kt'{}^2}\Big) \equiv \frac{ \qcf(\gamma) }{ \gamma (\gamma+\varepsilon) } \;, \end{equation} where the two poles are expected because of the intermediate $\lt$-integration in $\qKernel$ ($\kt\gtrsim\lt\gtrsim\kt'$ roughly) and of the $\varepsilon$-dimension of the kernel integration, which also provides a running $\as(t)=\as \cdot (\kt^2/\mu^2)^\varepsilon$ \begin{equation} \label{poleOrigin} \int\frac{\dif \kt'{}^2}{\kt'{}^2} \left(\frac{\kt'{}^2}{\kt^2}\right)^\gamma \int\frac{\dif^{2+2\varepsilon} \lt}{\lt^2} \; \Theta(\kt^2-\lt^2) \Theta(\lt^2-\kt'{}^2) = c_\varepsilon \, \frac{(\kt^2)^{\varepsilon}}{\gamma(\gamma+\varepsilon)} \;. \end{equation} The pole $1/\gamma$ in (\ref{defQuarkEigenv}) is incorporated into the integrated gluon density $g_{\varepsilon}(\gamma) = f_{\varepsilon}(\gamma)/\gamma$ to yield \begin{equation}\label{quarkGammaRep} q_{\varepsilon}^{(p)}(t) = \as(t) \int \dif\gamma\; \esp{\gamma t} g_{\varepsilon}(\gamma) \frac{\qcf(\gamma)}{\gamma+\varepsilon} \;, \end{equation} where \begin{equation}\label{gluonGammaRep} g_{\varepsilon}(t) = \int\dif\gamma\; \esp{\gamma t} g_{\varepsilon}(\gamma) = R\Big(\frac{\asb(t)}{\omega},\varepsilon\Big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) \;. \end{equation} One could try a saddle point evaluation of $q$ around $\gamma_0\big(\asb(t)/\omega\big)$, but it is difficult to get the right accuracy, and the fluctuation algorithm is cumbersome.
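As an aside, the double-pole structure of (\ref{poleOrigin}) can be verified symbolically (a sketch of ours, dropping the angular constant $c_\varepsilon$ and keeping only the radial integrations):
\begin{verbatim}
# Symbolic check (ours): the nested l- and k'-integrations of Eq.
# (poleOrigin) produce the double pole 1/(gamma*(gamma+epsilon)).
import sympy as sp
g, e, k2, kp2, l2 = sp.symbols('gamma epsilon k2 kp2 l2', positive=True)

inner = sp.integrate(l2**(e - 1), (l2, kp2, k2))               # l-integral
full  = sp.integrate((kp2/k2)**g/kp2*inner, (kp2, 0, k2))      # k'-integral
print(sp.simplify(full/k2**e))    # -> 1/(gamma*(gamma + epsilon))
\end{verbatim}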
We prefer, instead, to extract $\gamma_{\pq\pg}$ directly from the expression \begin{subequations}\label{qdot} \begin{align} \dot{q}_{\varepsilon}^{(p)}(t) \equiv \frac{\dif}{\dif t} q_{\varepsilon}^{(p)}(t) &= \as(t) \int\dif\gamma\; \esp{\gamma t} \, \qcf(\gamma) g_{\varepsilon}(\gamma) \label{qdot1} \\ &= \frac{\dif}{\dif t} \left[ C_{\pq\pg}^{(p)}\big(\as(t),\varepsilon\big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) \right] + \gamma_{\pq\pg}\big(\as(t)\big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) \label{qdot2} \\ &= \left\{ \left[ \varepsilon \frac{\partial}{\partial\log\as(t)} + \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) \right] C_{\pq\pg}^{(p)}\big(\as(t),\varepsilon\big) + \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})} \big(\as(t)\big) \right\} g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) \;, \label{qdot3} \end{align} \end{subequations} having used in the first line the equality \begin{equation}\label{gpe} \frac{\dif}{\dif t} \left[\as(t)\esp{\gamma t}\right] = \as(t)\esp{\gamma t} (\gamma+\varepsilon) \;. \end{equation} It is not obvious how to extract from Eq.~(\ref{qdot}) resummation formulas for both $C_{\pq\pg}^{(p)}$ and $\gamma_{\pq\pg}$ in terms of the $\kt$-factorisation integral of $\dot{q}_{\varepsilon}^{(p)}$. For instance, at $\varepsilon=0$ we get the well-known~\cite{CaHa94} $\kt$-factorisation result \begin{align} \qRes_{0}^{(p)}\Big(\gamma_0\Big(\frac{\asb(t)}{\omega}\Big)\Big) R\Big(\frac{\asb(t)}{\omega},0\Big) &= C_{\pq\pg}^{(p)}\big(\as(t),0\big) \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) + \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}\big(\as(t)\big) \nonumber \\ &= \gamma_{\pq\pg}^{(p)}\big(\as(t)\big) R\Big(\frac{\asb(t)}{\omega},0\Big)\;, \label{qgFactEps0} \end{align} which determines $\gamma_{\pq\pg}$ in the $p$-scheme, but not $C_{\pq\pg}^{(p)}$ and $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}$ separately. In Ref.~\cite{CaHa94}, a double expansion of~(\ref{qdot2}) in both $\as(t)$ and $\varepsilon$ is used to get half a dozen terms in the $\as(t)$-expansion of $\gamma_{\pq\pg}$, a result further improved in~\cite{Rterms}. Here we want to derive $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}\big(\as(t)\big)$ to all orders in $\asb(t)/\omega$, by exploiting the property of the coefficient part in~(\ref{qdot2}) of being a total $t$-derivative of the product of $g_{\varepsilon}$ and a function which is perturbative in both $\as(t)$ and $\varepsilon$. We start by noticing that $\qcf$ can be expanded around $\gamma=-\varepsilon$ in the form \begin{align} \qcf(\gamma) &= \qcf(-\varepsilon) + \ord{\gamma+\varepsilon} \nonumber \\ &\equiv \qRes(\varepsilon) + \ord{\gamma+\varepsilon} \;, \label{qKerExpns} \end{align} where $\qRes(\varepsilon)$ is now process-independent, being the residue of the function~(\ref{defQuarkEigenv}) at the collinear pole $\gamma=-\varepsilon$. In App.~\ref{a:cfqk} we evaluate $\qcf(\gamma)$ in various cases and we show that in all cases $\qRes(\varepsilon)$ is process-independent and can be written as the product of a rational and a transcendental part \begin{equation}\label{qKernelEps} \qRes(\varepsilon) = \left[ \frac{T_R}{2\pi} \, \frac{2}{3} \, \frac{1+\varepsilon}{(1+2\varepsilon)(1+\frac{2}{3}\varepsilon)} \right] \left[ \frac{\esp{\varepsilon\psi(1)}\Gamma^2(1+\varepsilon)\Gamma(1-\varepsilon)}{\Gamma(1+2\varepsilon)} \right] \equiv \qResR(\varepsilon) \qResT(\varepsilon) \;.
\end{equation} This form basically follows from the off-shell generalisation of the $\pg\to\pq\bar{\pq}$ DGLAP splitting function introduced in~\cite{CaHa94}, and proved in App.~\ref{aa:ucb} to have a universal off-shell dependence induced by $\kt$-factorisation. On the other hand, by Eq.~(\ref{gpe}) terms of order $(\gamma+\varepsilon)^n \;(n\geq1)$ are easily seen to be total $t$-derivatives, so that we can rewrite Eq.~(\ref{qdot}) in the form \begin{equation}\label{gqgxgMSbar} \qRes(\varepsilon) \as(t) R\Big(\frac{\asb(t)}{\omega},\varepsilon\Big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) = \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}\big(\as(t)\big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) + \text{total $t$-derivative} \;. \end{equation} This result shows that $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}$ is universal, being dependent on the universal functions $\qRes(\varepsilon)$ and $R$. Furthermore, it suggests how to extract the anomalous dimension contributions from the power series in $\varepsilon$ on the l.h.s., i.e., by subtracting a total $t$-derivative or, in other words, by doing an ``integration by parts''. By writing the expansions \begin{align} \qResT(\varepsilon) R\Big(\frac{\asb(t)}{\omega},\varepsilon\Big) &= \sum_{n=0}^{\infty} (-\varepsilon)^n R_n\Big(\frac{\asb(t)}{\omega}\Big) \label{Rexpns} \\ \qResR(\varepsilon) &= \sum_{m=0}^{\infty} (-\varepsilon)^m \qRes_m \label{qKexpns} \end{align} the general term of the series is of the form $(-\varepsilon)^{n+m} R_n\big(\asb(t)/\omega\big)$ and is reduced to a power series in $\as(t)$ by repeated application of the identity \begin{equation}\label{identity} -\varepsilon \as(t) \rho\Big(\frac{\asb(t)}{\omega}\Big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) = \as(t) \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) \tilde{\rho}\Big(\frac{\asb(t)}{\omega}\Big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) -\frac{\dif}{\dif t} \left[ \as(t) \tilde{\rho}\Big(\frac{\asb(t)}{\omega}\Big) g_{\varepsilon}^{(\overline{\mathrm{MS}})}(t) \right] \;, \end{equation} valid for any function $\rho$, where \begin{equation}\label{defD} \tilde{\rho} \equiv \left(1+\frac{\partial}{\partial\log\as(t)}\right)^{-1} \rho \equiv \left( 1 + \hat{D} \right)^{-1} \rho \;. \end{equation} Eq.~(\ref{identity}) means that multiplication by $-\varepsilon$ corresponds to the operator $\gamma_0\big(\asb(t)/\omega\big) (1+\hat{D})^{-1}$ after integration by parts. It follows that the quark anomalous dimension is given by \begin{equation}\label{gqgMSbar} \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}\big(\as(t);\omega\big) = \as(t) \qResR\bigg( -\gamma_0\Big(\frac{\asb(t)}{\omega}\Big) \frac1{1+\hat{D}} \bigg) \sum_{n=0}^{\infty} \left( \gamma_0\Big(\frac{\asb(t)}{\omega}\Big) \frac1{1+\hat{D}} \right)^n R_n\Big(\frac{\asb(t)}{\omega}\Big) \;, \end{equation} which is the resummed expression we were looking for. The subtlety of the result~(\ref{gqgMSbar}) is that, in order to get a resummed formula in $\asb(t)/\omega$ of the $\varepsilon$-independent $\gamma_{\pq\pg}$, we need an all-order $\varepsilon$-expansion of the $\kt$-factorisation formula~(\ref{gqgxgMSbar}). The fact that terms of order $\varepsilon^n$ generate finite contributions to both coefficient and anomalous dimension is somewhat similar to what was already noticed in the gluon channel, and is typical of the minimal subtraction recipe.
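Both mechanisms at work here --- the $(1+\hat{D})^{-1}$ algebra and the resummation of the rational factor --- can be checked symbolically. The sketch below (ours) verifies the operator rule behind Eq.~(\ref{defD}) and, hard-coding the rational factor of Eq.~(\ref{qKernelEps}), the Borel sum that will reappear in Eq.~(\ref{rational}) below:
\begin{verbatim}
# Two checks (ours). (1) With D = d/dlog(alpha) one has
# (1 + D)[alpha^n/(1+n)] = alpha^n, i.e. (1+D)^{-1} alpha^n = alpha^n/(1+n).
import sympy as sp
al, n, eps, a = sp.symbols('alpha n epsilon a', positive=True)

f = al**n/(1 + n)
print(sp.simplify(f + al*sp.diff(f, al) - al**n))     # -> 0

# (2) Borel sum of the rational factor of Eq. (qKernelEps): write
# (2/3)(1+eps)/((1+2 eps)(1+(2/3) eps)) = sum_m Q_m (-eps)^m; then
# sum_m Q_m a^m/m! = (1/2)(exp(2a) + exp(2a/3)/3).
q = sp.Rational(2, 3)*(1 + eps)/((1 + 2*eps)*(1 + sp.Rational(2, 3)*eps))
N = 12
ser = sp.series(q, eps, 0, N).removeO()
borel = sum(ser.coeff(eps, m)*(-a)**m/sp.factorial(m) for m in range(N))
target = (sp.exp(2*a) + sp.exp(sp.Rational(2, 3)*a)/3)/2
print(sp.simplify(sp.series(borel - target, a, 0, N).removeO()))  # -> 0
\end{verbatim}
Both checks confirm the minimal-subtraction pattern by which each power of $\varepsilon$ feeds finite pieces into coefficient and anomalous dimension.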
However, while higher orders in $\varepsilon$ correspond to higher subleading $\log1/x$ resummation levels in the gluon case, here they just correspond to higher orders in the $\asb(t)/\omega$-expansion of $\gamma_{\pq\pg}$. In order to use Eq.~(\ref{gqgMSbar}) we need to understand the action of the $\gamma_0 (1+\hat{D})^{-1}$ operator. A simple example is provided by setting $R_n = \delta_{n0}$ and $\gamma_0=\asb(t)/\omega$. By noting that $\hat{D} [\as(t)]^n = n [\as(t)]^n$, we obtain \begin{equation}\label{rational} \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})} = \frac{\as(t)}{2\pi} \sum_{n=0}^{\infty} \frac{\qRes_n}{n!} \left(\frac{\asb(t)}{\omega}\right)^n = \as(t) \qResR_B\Big(-\frac{\asb(t)}{\omega}\Big) = \frac{\as(t) T_R}{2} \left( \esp{2\frac{\asb(t)}{\omega}} + \frac1{3}\esp{\frac{2}{3}\frac{\asb(t)}{\omega}} \right) \;, \end{equation} where $\qRes_B$ denotes the Borel transform of $\qResR$, which is simply obtained from Eq.~(\ref{qKernelEps}). As noticed in~\cite{CaHa94}, Eq.~(\ref{rational}) provides all rational coefficients occurring in the resummed formula for $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}$. It is also possible to provide an expression for~(\ref{gqgMSbar}) involving only quadratures, provided the functions $R_n$ are given to all orders. In App.~\ref{a:gqg} the result is derived in terms of the intermediate function \begin{equation}\label{interm} \hat{R}\Big(\frac{\asb}{\omega}\Big) = \frac{\partial}{\partial(\asb/\omega)} \int_0^{\asb/\omega} \dif a\; R_B\Big( a, -\int_{a}^{\asb/\omega} \frac{\dif a'}{a'} \gamma_0(a') \Big) \; \end{equation} where $R_B$ is the Borel transform of $\qResT(\varepsilon)R(\asb/\omega,\varepsilon)$ in the $-\varepsilon$ variable. The final result is \begin{align} \gamma_{\pq\pg}^{(\overline{\mathrm{MS}})} = \frac{\as(t)}{2\pi}\frac{T_R}{2} \frac{\partial}{\partial(\asb/\omega)} \int_0^{\asb/\omega} \dif a & \left[ \exp\left(2\int_{a}^{\asb/\omega} \frac{\dif a'}{a'}\;\gamma_0(a')\right)\right. \nonumber \\ &\left.+\frac1{3}\exp\left(\frac{2}{3}\int_{a}^{\asb/\omega} \frac{\dif a'}{a'}\; \gamma_0(a')\right) \right] \hat{R}(a) \;, \label{gqgMSbarQuad} \end{align} where we notice the exponentials of~(\ref{rational}) occurring in a more general framework. \section{Discussion\label{s:disc}} The main results of this paper are the renormalisation group (RG) factorisation of the BFKL equation in $4+2\varepsilon$ dimensions at NL$x$ level, and the relation of the $Q_0$-scheme to the $\overline{\mathrm{MS}}$-scheme at the same level of accuracy. The collinear factorisation has been proved by solving the BFKL equation with a cutoff $Q_0^2 = \mu^2 \esp{-T}$, and by showing that --- in the on-shell limit for the initial gluon ($Q_0=0$) --- the solution admits a RG representation with a well-defined anomalous dimension $\bar{\gamma}_{\varepsilon}(\asb;\omega)$, as given by Eqs.~(\ref{gFormB}), (\ref{gluonEps2}) and (\ref{anDimExp}) at various levels of accuracy. This representation shows exponentiated IR poles $\sim\frac1{\varepsilon}\left(\frac{b\asb}{\varepsilon}\right)^n$, which are factorised in the IR-free regime with $\asb(t) < \varepsilon/b$ and $\asb(-\infty) = 0$. Then, by the $\varepsilon$-expansion of $\bar{\gamma}_{\varepsilon}(\asb;\omega)$ in Eqs.~(\ref{gammaEps2}) and (\ref{gammaBFKL}), we are able to define the minimal subtraction scheme, to switch to the UV-free regime, and to find the transformation factor $R(\asb;\omega)$ at NL$x$ level. 
We discover in this way that the coefficient $R$ is due to the product of a fluctuation factor $\ff$, which can be calculated at $\varepsilon=0$, and of a dynamical factor $\rk = R/\ff$ which reflects the $\varepsilon$-dependence of $\bar{\gamma}_{\varepsilon}(\asb;\omega)$, expanded around $\varepsilon = b \asb$. Furthermore, the $\varepsilon$-dependent $\bar{\gamma}$ directly provides subleading contributions to the $\overline{\mathrm{MS}}$-scheme anomalous dimension, encoded in $\bar{\gamma}_{\varepsilon=b\asb}(\asb;\omega)$ (Eqs.~(\ref{gammaMSbar2}), (\ref{logrk}) and (\ref{gMSbarNL})). In this way, the difference $\gamma^{(\overline{\mathrm{MS}})}-\gamma^{(Q_0)}$ between the two schemes is here calculated up to NNL$x$ level. Therefore, the $\varepsilon$-dependence of the kernel is transmuted into a subleading $\as$-dependence for the $\overline{\mathrm{MS}}$ anomalous dimension. A similar transmutation phenomenon occurs in the case of quark-gluon mixing. In addition, due to the different form of the scheme-changing transformation~(\ref{cff}), the $\varepsilon$-dependence induces {\em leading} $\as/\omega$-dependence into $\gamma_{\pq\pg}^{(\overline{\mathrm{MS}})}$ --- which has to be disentangled from the process-dependent contributions to $C_{\pq\pg}^{(p)}$. Therefore, the knowledge of the $\varepsilon$-dependence becomes increasingly important for the full singlet evolution. The above results are not directly applicable to the doubly resummed approach~\cite{CCSSkernel} nor, as far as we understand, to that of~\cite{ABF}. However, they provide some hints towards an improved scheme-changing transformation. First of all, the analysis of Sec.~\ref{s:ugqg} and App.~\ref{a:cfqk} shows that there is a universal part in the $\varepsilon$-dependence of $\qcf(\gamma)$ (the $p$-scheme defining kernel) which is encoded in the collinear pole at $\gamma = -\varepsilon$, which in turn comes from an off-shell generalisation of the well-known $P_{\pq\pg}$ splitting function. It is then conceivable that a similar analysis can be performed for resummed models as well, in order to control the leading part of the scheme-changing transformation and its mixing. Furthermore, since the higher order $\varepsilon$-dependence affects $R$ at subleading level, it is conceivable that most of the normalisation change comes from the already known (or generalised) leading part. A preliminary analysis in this direction is under way~\cite{schemes}. Hopefully, this will soon lead to a realistic comparison of the resummed approach to experimental data. \section*{Acknowledgements} We are grateful to Gavin Salam and Anna Sta\'sto for a variety of discussions and suggestions, and to Stefano Catani for quite useful discussions at various stages of this work. This work has been supported in part by MIUR (Italy).
{ "attr-fineweb-edu": 1.503906, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd5Y4eIOjRyLa5WFd
\section{Introduction} The emerging application of wireless communication techniques in conventional feedback control systems creates a new type of control system that is often called a networked control system \cite{ZHY15}. It offers a variety of benefits, including flexibility, maintainability and cost reduction of the automation process. Nevertheless, it also produces considerable design challenges, such as extra energy consumption and additional constraints in the closed-loop system, for example on communication bandwidth and control update frequency. In recent years, it has been shown that the event-triggered control paradigm is a promising alternative to the commonly used time-triggered one, since it updates the control only when needed. Instead of relying on continuous sampling and communication, an event-triggered control scheme determines the times at which the state information needs to be sampled and sent to the control law based on certain triggering rules; see \cite{AB02, T07}. Often in event-triggered control, a key issue is the exclusion of possible Zeno triggering behavior. In this paper, we will present a practical event triggering mechanism that guarantees a positive minimum inter-event time (MIET), thus excluding Zeno triggering for an event-triggered control system. Event-triggered control has received considerable attention in recent years, and here we review some key developments in this field. The pioneering paper \cite{T07} introduced a general event-based scheduling control for general nonlinear systems, and a typical design and analysis framework for event-triggered control systems was formally proposed. Generally speaking, the static triggering rule in \cite{T07} and many subsequent papers consists of system state variables. An event-triggered control scheme was proposed and analyzed for perturbed linear systems in \cite{HSV08}, where sampling is performed only when the tracking/stabilization error is large enough. Another paper \cite{A08} presented the architecture of a general structure for event-triggered control and discussed the relations to nonlinear systems. Interestingly, instead of using a zero-order holder as in most event triggering mechanisms, the control input in \cite{LL10} is designed to mimic a continuous feedback between two consecutive event times. Later, the work in \cite{WL11} gave a scheme to postpone the triggering time relative to previously proposed methods so as to enlarge the MIET. Meanwhile, the model-based control technique and the event-triggered mechanism were unified into a single framework to stabilize uncertain general linear systems in \cite{GA12}; network-induced delays and quantization errors were considered and related results were derived. Also, the synthesis of the event-triggering rule and the controller for delayed linear systems was investigated in \cite{WRG12}, where the optimization problem was considered with respect to two kinds of performance indices, respectively. Notably, in \cite{HDT12} the periodic event-triggered control approach was formulated for general linear networked control systems, in which the MIET is guaranteed to be at least the fixed sampling period. A concept of robust event-separation property was proposed in \cite{BH14}, which shows that some popular event triggering mechanisms do not ensure the event-separation property no matter how small the disturbance is. This is an important aspect in the design of triggering functions for event-based control systems under perturbations.
For further developments in event-based control and triggering function design, the reader is referred to the surveys \cite{HJT12, zhang2016overview}. Unfortunately, for all of the above work, it is worth mentioning that the upper bounds of the constructed triggering signals are hard to adjust. Specifically, although Zeno behavior can be excluded by guaranteeing a strictly positive MIET, the value of the MIET cannot be flexibly adapted to the physical limitations of the hardware, which means that those event triggering mechanisms cannot be realized on digital platforms. Recently, to improve the results in \cite{T07}, the dynamic triggering mechanism was formally proposed in \cite{G14}. In this framework, an internal dynamic variable is introduced into the static triggering rule, which also helps to enlarge the next triggering time instant. However, similar to the static case, the resulting MIET still cannot be adapted to hardware limitations, and the triggering signal depends on the actual system states under specific constraints. Among other recent remarkable works on dynamic triggering mechanisms, we note that in \cite{ACM2016} the authors considered the output feedback stabilization problem for general linear systems with event-triggered sampling and dynamic quantization, and that in \cite{BAA18} a new type of event condition was proposed that depends on the state difference between the actual system and the nominal undisturbed system, with the nominal state being reset to the state of the real system at each triggering instant. For other interesting results on dynamic event-triggered control, we refer the reader to the overviews \cite{NGC19,DHG17}. Based on the above discussion, we notice that under the static or dynamic triggering mechanisms, the variable range and the evolution rate of the constructed triggering signal cannot be freely designed. In the meantime, an investigation of the robustness of dynamic event triggering mechanisms is lacking. These observations motivate the construction of an event triggering mechanism with a designable MIET and a robust global event-separation property. We remark that a designable MIET has been discussed only in \cite{JC19}, in the context of multi-agent consensus control with single-integrator dynamics. In this paper, we aim to propose an event-triggered control scheme that realizes an adjustable positive MIET with guaranteed system convergence. The main contributions of this paper are summarized as follows. \begin{enumerate} \item We present new design and analysis approaches for dynamic event triggering mechanisms, which are applied to general nonlinear systems and then specialized to general linear systems. \item A freely designable MIET is derived. In contrast to the static and dynamic event triggering mechanisms, we can adjust the variable range of the constructed triggering signal regardless of the actual system states. \item The proposed dynamic event-triggered strategy ensures the robust global event-separation property under state perturbations. \end{enumerate} The rest of this paper is organized as follows. In Section 2, we review two event-triggered schemes from the literature and some preliminaries of event-triggered control, and present the problem formulation. The design and analysis framework of the MIET-designable event triggering mechanism for nonlinear systems is presented in Section 3.
In Section 4, the framework of Section 3 is specialized to general linear control systems, and the robustness of the proposed algorithm is analyzed. In Section 5, two simulation examples are provided to illustrate the effectiveness of the theoretical results. Finally, concluding remarks are given in Section 6.\\ {\bf Notations.} Throughout this paper, $\mathbb{R}$ and $\mathbb{R}^n$ denote the set of real numbers and the $n$-dimensional Euclidean space, respectively. $\mathbb{R}^{+}_{0}$ is the set of non-negative real numbers. The notation $|\cdot|$ refers to the Euclidean norm for vectors and the induced 2-norm for matrices. The superscript $\top$ denotes vector or matrix transposition. A function $\alpha(r):\mathbb{R}^{+}_{0}\rightarrow\mathbb{R}^{+}_{0}$ is said to be of class $K_{\infty}$ if it is continuous, strictly increasing, $\alpha(0)=0$, and $\alpha(r)\rightarrow+\infty$ as $r\rightarrow+\infty$. \section{Event-Triggering Mechanism} We consider a control system of the form \begin{eqnarray} \label{nonlinear system} \dot x=f(x,u), \quad x\in \mathbb{R}^n, \ u\in \mathbb{R}^m \end{eqnarray} with a state feedback law $u=k(x)$ that stabilizes the system. The resulting closed-loop control system is \begin{eqnarray} \label{continuous closed-loop system} \dot x=f(x,k(x)). \end{eqnarray} Under the state feedback law $u=k(x)$, the state information of the plant must be available and accessed in a continuous manner so as to update the input continuously. In event-triggered control, the state information is accessed only when necessary and the control input is updated when certain events occur. This results in the discretely updated control law $u=k(x(t_i)), t\in[t_i,t_{i+1})$, where $t_i$ denotes the $i$-th triggering time instant. We note that if $t_{i+1}-t_{i}\rightarrow0$ around some finite time, the sampling process becomes impractical (this is termed Zeno triggering). Therefore, guaranteeing a positive MIET is one of the key issues in the design of event-triggered control. As in \cite{T07}, we define the measurement error as $e(t)=x(t_i)-x(t), t\in[t_i,t_{i+1})$. We assume that the closed-loop control system \begin{eqnarray} \label{discrete closed-loop system} \dot x=f(x,k(x+e)) \end{eqnarray} is input-to-state stable (ISS) with respect to $e(t)$. There thus exists an ISS-Lyapunov function $V$ together with class $K_{\infty}$ functions $\bar\alpha, \underline{\alpha}, \alpha$, and $\gamma$ satisfying the inequalities \begin{eqnarray} \label{ISS} \underline{\alpha}(|x|)\leq V(x)\leq\bar{\alpha}(|x|),\nonumber\\ \dot V(x)\leq-\alpha(|x|)+\gamma(|e|). \end{eqnarray} \subsection{Static Event Triggering Mechanism} The seminal paper \cite{T07} proposed an event triggering strategy for the system (\ref{discrete closed-loop system}) with the following static triggering rule \begin{eqnarray} \label{static triggering rule} t_{i+1}=\inf\{t>t_i \mid \gamma(|e|)\geq \sigma\alpha(|x|)\}. \end{eqnarray} In event-triggered control systems, the inter-event times should be designed to satisfy \begin{eqnarray} \label{MIET} t_{i+1}-t_{i}\geq\tau >0, \quad \forall i, \end{eqnarray} where $\tau$ is a positive lower bound on the inter-event times, i.e., the MIET (a minimal simulation sketch of the static rule is given below).
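To make the static mechanism concrete, the following minimal sketch (our illustration, not code from \cite{T07}) simulates rule (\ref{static triggering rule}) in the common linear special case where it reduces to sampling whenever $|e|\geq\sigma|x|$; the scalar plant $\dot x = x+u$ with $u=-2x$ and the gain $\sigma=0.3$ are hypothetical choices for illustration.
\begin{verbatim}
import numpy as np

sigma, dt, T = 0.3, 1e-4, 5.0  # hypothetical gain sigma in (0,1)
x, x_s = 1.0, 1.0              # state and last sampled state
events = [0.0]
for step in range(int(T / dt)):
    x += dt * (x - 2.0 * x_s)  # u = -2*x(t_i) held between events
    if abs(x_s - x) >= sigma * abs(x):  # static rule |e| >= sigma*|x|
        x_s = x
        events.append((step + 1) * dt)

print("number of events: %d" % (len(events) - 1))
print("min inter-event time: %.4f s" % np.diff(events).min())
\end{verbatim}
As expected for a linear plant, the observed inter-event times stay bounded away from zero.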
It is shown in \cite{T07} that for all initial states $x(0)\in S$, where $S\subset\mathbb{R}^n$ is a compact set containing the origin, there exists $\tau>0$ such that the sequence $t_i$ determined by (\ref{static triggering rule}) satisfies (\ref{MIET}), provided that $f(\cdot)$, $k(\cdot)$, $\alpha^{-1}(\cdot)$ and $\gamma(\cdot)$ are Lipschitz continuous on compacts and $0<\sigma<1$. \subsection{Dynamic Event Triggering Mechanism} The dynamic triggering mechanism of \cite{G14} is an extension of the static triggering scheme that enlarges the variable range of the constructed triggering signal $\gamma(|e|)/\alpha(|x|)$. To explain this point, let us recall the triggering rule (\ref{static triggering rule}) and add a positive term on the right-hand side of the inequality \begin{eqnarray} \frac{\gamma(|e|)}{\alpha(|x|)}\geq \sigma+\frac{\eta}{\theta\alpha(|x|)},\nonumber \end{eqnarray} where $\theta>0$ is an adjustable parameter and $\eta\geq0$ is a virtual state to be designed. Note that the additive term $\frac{\eta}{\theta\alpha(|x|)}$ increases the upper bound of the comparison threshold of the triggering signal. The dynamic triggering rule can thus be given as \begin{eqnarray} \label{dynamic triggering rule} t_{i+1}=\inf\{t>t_i \mid \eta + \theta ( \sigma \alpha(| x |) - \gamma(| e |) ) \leq 0\}. \end{eqnarray} When the virtual state is designed as $\dot { \eta } = - \zeta(\eta) + \sigma\alpha(| x |) - \gamma(| e |)$, it has been proved in \cite{G14} that the inequality $\eta\geq0$ is satisfied and that both the state $x(t)$ and $\eta$ converge to the origin asymptotically. As can be observed from the above review, the adjustable range of the MIET for the static and dynamic triggering mechanisms is limited. To improve the implementability of theoretical solutions on physical platforms, this paper pursues the design of a flexible dynamic event-triggered scheme that allows the variable range to be prescribed to some extent and ensures a positive MIET independent of the intrinsic system states. \section{Dynamic MIET-Designable Event Triggering Control for Nonlinear Systems} Observing the two types of triggering mechanisms (\ref{static triggering rule}) and (\ref{dynamic triggering rule}) reviewed in the last section, we find that the comparison threshold determining the MIET depends on the state $x$ and the measurement error $e$, so the value of the MIET can only be adjusted in a limited range for a given plant. In this section, we propose a novel triggering mechanism with a designable MIET for general nonlinear systems. In contrast to the static and dynamic triggering mechanisms, the intuitive idea here is to create a triggering signal $Z(t)$ whose variable range can be freely designed to some extent. We thus adopt the following event triggering rule \begin{eqnarray} \label{Triggering rule 1} t_0&=&0,\nonumber\\ t_{i+1}&=&\inf\{t>t_i \mid Z(t)=0\}, \end{eqnarray} where $Z(t_i)$ is reset to $\bar Z$ at each triggering instant and $\bar Z>0$ is a design parameter. Here, the variable $Z(t)$ plays a role similar to a countdown variable with an assigned upper bound $\bar Z$. The dynamics $\dot Z(t)=\omega(\varpi(x,e),\varepsilon)$ is considered in the sequel of this paper for designing event triggering conditions, where $\varepsilon>0$ is a design parameter. The first main result of this paper can now be given as follows.
\begin{thm} \label{theorem1} Consider the nonlinear control system (\ref{discrete closed-loop system}) with the event triggering mechanism (\ref{Triggering rule 1}), where the dynamics of the additional variable is given by $\dot Z(t)=\omega(\varpi(x,e),\varepsilon)$. Then, for all initial conditions $x(0)$, the state of the closed-loop control system is guaranteed to converge to the origin asymptotically. Meanwhile, there exists a designable MIET lower bounded by $\tau_1$, \begin{eqnarray} \label{tao1} \tau_1 &=& \sqrt{\frac{1}{b\varepsilon}}\{\mathbf{atan}[\sqrt{\frac{b}{\varepsilon}}(1+\bar Z)] -\mathbf{atan}[\sqrt{\frac{b}{\varepsilon}}]\}>0, \nonumber\\ b&=&L^{2} \frac{|M|^{2}}{\lambda_{\min }(M)}, \end{eqnarray} with the design parameters $M, \varepsilon, \bar Z$ to be detailed in the sequel, for the triggering sequence $(t_i)_{i\rightarrow+\infty}$. \end{thm} \begin{pf} We first analyze stability. Choose the candidate Lyapunov function $W =V+\frac{1}{2} Z e^{\top} M e$, where $M$ is a symmetric positive definite matrix. The derivative of $W$ along the solution of (\ref{discrete closed-loop system}) is $\dot{W} =\dot{V}+\frac{1}{2} \omega e^{\top} M e+Z e^{\top} M \dot{e}$. Because $\dot{e}=-\dot{x}$ and inequality (\ref{ISS}) holds, it follows that \begin{eqnarray} \dot{W}&\leq&-\alpha(|x|)+\gamma(|e|)+\frac{1}{2} \omega e^{\top} M e-Ze^{\top} M \dot{x} \nonumber\\ &=&-\alpha(|x|)+\gamma(|e|)+\frac{1}{2} \omega e^{\top} M e \nonumber\\ &&-Z e^{\top} M f(x,k(x+e)).\nonumber \end{eqnarray} Since $\omega<0$, we have \begin{eqnarray} \dot{W}&\leq&-\alpha(|x|)+\gamma(|e|)+\frac{1}{2} \omega \lambda_{\min }(M)|e|^{2}\nonumber\\ &&+Z|M||e||f(x,k(x+e))|.\nonumber \end{eqnarray} Since the Lipschitz continuity of $f(x,u)$ and $k(x)$ on compact sets implies that $f(x,k(x+e))$ is also Lipschitz continuous, we obtain $|f(x,k(x+e))|\leq L|x|+L|e|$ for some Lipschitz constant $L>0$.
These facts lead to \begin{eqnarray} \dot{W}&\leq&-\alpha(|x|)+\gamma(|e|)+\frac{1}{2} \omega \lambda_{\min }(M)|e|^{2}\nonumber\\ &&+Z|M||e|(L|x|+L|e|)\nonumber\\ &=&-\alpha(|x|)+\gamma(|e|)+\frac{1}{2} \omega \lambda_{\min }(M)|e|^{2}\nonumber\\ &&+ZL|M||e||x|+ZL|M||e|^2.\nonumber \end{eqnarray} In order to guarantee the asymptotic stability of the control system, we enforce the inequality \begin{eqnarray} \omega&<&\frac{2\alpha(|x|)}{\lambda_{\min}(M)} \cdot \frac{1}{|e|^{2}}-\frac{2 \gamma(|e|)}{\lambda_{\min }(M)} \cdot \frac{1}{|e|^{2}}\nonumber\\ &&-\frac{2 L Z|M|}{\lambda_{\min }(M)} \cdot \frac{|x|}{|e|}-\frac{2 L Z|M|}{\lambda_{\min }(M)}.\nonumber \end{eqnarray} If the more conservative choices $\alpha(|x|)=\frac{1}{2}|x|^{2}$ and $\gamma(|e|)=L|M||x||e|$ are made, the variable $\omega$ further satisfies \begin{eqnarray} \omega<\varpi=\frac{1}{\lambda_{\min} (M)} \cdot \frac{|x|^{2}}{|e|^{2}}-(1+Z) \frac{2 L|M|}{\lambda_{\min} (M)} \cdot \frac{|x|}{|e|}.\nonumber \end{eqnarray} By the Young inequality, we obtain \begin{eqnarray} -(1+Z) \frac{2 L|M|}{\lambda_{\min }(M)} \cdot \frac{|x|}{|e|} \geq-b(1+Z)^{2}-\frac{1}{b} \cdot \frac{L^{2}|M|^{2}}{\lambda_{\min }^{2}(M)} \cdot \frac{|x|^{2}}{|e|^{2}},\nonumber \end{eqnarray} such that \begin{eqnarray} \varpi \geq-b(1+Z)^{2}+\left(\frac{1}{\lambda_{\min}(M)}-\frac{1}{b} \cdot \frac{L^{2}|M|^{2}}{\lambda_{\min}^{2}(M)}\right) \cdot \frac{|x|^{2}}{|e|^{2}}.\nonumber \end{eqnarray} Choosing $b$ such that the second term on the right-hand side vanishes, i.e., $ b=L^{2} \frac{|M|^{2}}{\lambda_{\min }(M)}$, we can write \begin{eqnarray} \varpi-\varepsilon \geq-b(1+Z)^{2}-\varepsilon,\nonumber \end{eqnarray} where $\varepsilon>0$ is a design parameter. If we further design $\omega$ as \begin{eqnarray} \label{dynamics of additional variable} \omega=\left\{ \begin{array}{lr} \min(0,\varpi)-\varepsilon, &e\neq0, \\ -\varepsilon, &e=0, \end{array} \right. \end{eqnarray} we can consider the two cases: (1) if $e\neq0$ and $\varpi<0$, then $\omega=\varpi-\varepsilon\geq-b(1+Z)^{2}-\varepsilon$, while if $e\neq0$ and $\varpi\geq0$, then $\omega=-\varepsilon\geq-b(1+Z)^{2}-\varepsilon$; (2) the case $e=0$ follows likewise. Moreover, because $\omega<\varpi$ is always guaranteed by the design (\ref{dynamics of additional variable}) of $\omega$, we conclude that the Lyapunov function $W$ decreases, so that $x(t)$ converges to the origin asymptotically. In addition, the dynamics of the countdown variable $Z$ obeys the estimate $\dot Z\geq-b(1+Z)^{2}-\varepsilon$. Let $\phi$ be the solution of the differential equation $\dot \phi=-b(1+\phi)^{2}-\varepsilon$; then the inter-event time is lower bounded by the time $\tau_1$ that it takes for $\phi$ to evolve from $\bar Z$ to $0$. We therefore obtain the formula for $\tau_1$ in (\ref{tao1}). \qed \end{pf} We also note that the derivation of the MIET $\tau_1$ does not involve the state $x$ or the measurement error $e$, implying that the event triggering mechanism developed here enjoys the global robust event-separation property. \subsubsection{Remark 1.} There are only two independent design parameters, $\bar Z$ and $\varepsilon$, in the proposed event-triggering scheme. Intuitively, they have opposite effects on the time interval between two consecutive events (see the sketch below). We also note that an event-triggered control system with a strictly positive MIET automatically excludes Zeno behavior.
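To illustrate (\ref{tao1}), the following short sketch (our illustration, not the authors' code) evaluates $\tau_1$ for a few parameter settings; the value $b=2.083$ is the one that arises in the van der Pol example of Section 5.
\begin{verbatim}
import math

def tau1(b, Z_bar, eps):
    # MIET lower bound of Theorem 1: time for phi' = -b(1+phi)^2 - eps
    # to descend from Z_bar to 0
    s = math.sqrt(b / eps)
    return (math.atan(s * (1.0 + Z_bar)) - math.atan(s)) / math.sqrt(b * eps)

b = 2.083  # value arising in the van der Pol example of Section 5
for Z_bar, eps in [(1.0, 1.0), (3.0, 1.0), (1.0, 0.1)]:
    print("Z_bar=%.1f, eps=%.1f -> tau_1 = %.3f s"
          % (Z_bar, eps, tau1(b, Z_bar, eps)))
# tau_1 increases with Z_bar (0.189 -> 0.301 s) and decreases as eps
# grows, consistent with the opposite effects noted in Remark 1.
\end{verbatim}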
\subsubsection{Remark 2.} Note that there exists an upper bound for the MIET, which can be obtained by the following calculation: \begin{eqnarray} \tau_{1,max}\triangleq\lim_{\varepsilon\to0^{+}}\lim_{\bar Z\to+\infty}\tau_1=\frac{1}{b}\quad \text{when}\quad \frac{\varepsilon}{1+\bar Z}>b,\nonumber\\ \tau_{1,max}\triangleq\lim_{\varepsilon\to0^{+}}\lim_{\bar Z\to+\infty}\tau_1=+\infty\quad \text{when}\quad \frac{\varepsilon}{1+\bar Z}<b.\nonumber \end{eqnarray} Here, we have established an explicit quantitative relation between $\tau_{1,max}$ and $b$, which represent the communication cost and the decay rate, respectively. \section{Dynamic MIET-Designable Event Triggering Control Of General Linear Systems} In this section, we specialize the previous results to event-triggered control of general linear systems with a designable MIET. \subsection{Basic Algorithm} Consider a general linear control system of the form \begin{eqnarray} \label{general linear systems} \dot x(t)=Ax(t)+Bu(t), \end{eqnarray} where $A$ and $B$ are system matrices of proper dimensions, and we assume the system is controllable. A feedback control law $u(t)=Kx(t)$ is designed through pole assignment to stabilize the system (\ref{general linear systems}). The closed-loop control system is then \begin{eqnarray} \dot x(t)=(A+BK)x(t).\nonumber \end{eqnarray} This implies that there exists a Lyapunov function $V=x^{\top}Px$ whose symmetric positive definite matrix $P$ satisfies \begin{eqnarray} (A+B K)^{\top} P+P(A+B K)=-Q,\nonumber \end{eqnarray} where $Q$ is an arbitrary symmetric positive definite matrix. When the state-feedback control law $u(t)=Kx(t)$, which is updated in continuous time, is implemented on digital platforms and/or over wireless communication networks, it needs to be modified to use discrete-time updates. In this section, we formally propose an MIET-designable event triggering method to schedule the computation and communication resources and to determine the triggering times at which the system state $x(t)$ is fed back into the closed-loop control system; i.e., the control is modified to $u(t)=Kx(t_i), t\in[t_i, t_{i+1})$, where $t_i$ is the triggering instant. Following the event-triggered control framework presented in Section 2, we define the measurement error $e(t)=x(t_i)-x(t), t\in[t_i, t_{i+1})$, which yields the closed-loop system \begin{eqnarray} \label{closed loop event triggering linear system} \dot x(t)=Ax(t)+BKx(t)+BKe(t). \end{eqnarray} Similar to the event triggering mechanism (\ref{Triggering rule 1}), we apply the same triggering rule to the general linear system and define the additional event function dynamics as $\dot Z(t)=\omega(\varpi,\varepsilon)$. We can now give the second main contribution of the paper. \begin{thm} \label{theorem2} Consider the general linear control system (\ref{closed loop event triggering linear system}) with the event triggering mechanism (\ref{Triggering rule 1}) for any initial condition $x(0)$. Then $x(t)$ asymptotically converges to the origin. Meanwhile, there exists a designable MIET lower bounded by $\tau_2$, \begin{eqnarray} \label{tao2} \tau_2 &=& \sqrt{\frac{1}{b\varepsilon}}\{\mathbf{atan}[\sqrt{\frac{b}{\varepsilon}}(1+\bar Z)] -\mathbf{atan}[\sqrt{\frac{b}{\varepsilon}}]\}>0, \nonumber\\ b &=& \frac{|P B K|^{2}}{\lambda_{\min }(P) \lambda_{\min }(Q)}, \end{eqnarray} for the triggering sequence $(t_i)_{i\rightarrow+\infty}$.
\end{thm} \begin{pf} We choose the Lyapunov function candidate $W=\frac{1}{2} x^{\top} P x+\frac{1}{2}Ze^{\top}Pe$. Its derivative along the solution of (\ref{closed loop event triggering linear system}) gives \begin{eqnarray} \dot{W}&=&x^{\top} P \dot{x}+\frac{1}{2} \omega e^{\top} P e+Ze^{\top} P \dot{e}\nonumber\\ &=&-\frac{1}{2} x^{\top} Q x+x^{\top} P B K e\nonumber\\ &&+\frac{1}{2} \omega e^{\top} P e-Z e^{\top} P(A+B K) x\nonumber\\ &&-Z e^{\top} P B K e\nonumber\\ &\leq&-\frac{1}{2} \lambda_{\min}(Q)|x|^{2}+|P B K||x||e|\nonumber\\ &&+\frac{1}{2} \omega \lambda_{\min}(P)|e|^{2}+Z|PA||x||e|\nonumber\\ &&+Z|PBK||x||e|+Z|PBK||e|^2.\nonumber \end{eqnarray} By enforcing the inequality \begin{eqnarray} \omega&<&\varpi=\frac{\lambda_{\min }(Q)}{\lambda_{\min }(P)} \frac{|x|^{2}}{|e|^{2}}-2 \frac{|P BK|}{\lambda_{\min }(P)} \frac{|x|}{|e|}-2 Z \frac{|P B K|}{\lambda_{\min} (P)} \frac{|x|}{|e|}\nonumber\\ &=& \frac{\lambda_{\min }(Q)}{\lambda_{\min }(P)} \frac{|x|^{2}}{|e|^{2}}-2(1+Z) \frac{|P B K|}{\lambda_{\min }(P)} \frac{|x|}{|e|},\nonumber \end{eqnarray} we can guarantee the asymptotic stability of the closed-loop system (\ref{closed loop event triggering linear system}). Analogous to the nonlinear case, some manipulations based on the Young inequality yield \begin{eqnarray} \varpi &\geq& -b(1+Z)^{2}+\left(\frac{\lambda_{\min}(Q)}{\lambda_{\min} (P)} -\frac{|P B K|^{2}}{b\lambda_{\min}^{2}(P)}\right) \frac{|x|^{2}}{|e|^{2}}.\nonumber \end{eqnarray} Choosing $b$ to satisfy $ b= \frac{|P B K|^{2}}{\lambda_{\min }(P) \lambda_{\min }(Q)}$, we obtain $\varpi-\varepsilon \geq-b(1+Z)^{2}-\varepsilon$, where $\varepsilon>0$ is a design parameter. Recalling the definition of $\omega$ in (\ref{dynamics of additional variable}) for all values of $\varpi$, we can always write $\omega \geq-b(1+Z)^{2}-\varepsilon$ if $e\not=0$; the inequality is also satisfied if $e=0$ and $\omega=-\varepsilon$. Therefore, by a comparison argument, $\phi\leq Z$, where $\phi$ is the solution of $\dot \phi =-b(1+\phi)^{2}-\varepsilon$ with $\phi(0)=\bar Z$. The inter-event time can thus be lower bounded by the time $\tau_2$ it takes for $\phi$ to reach $\phi(\tau_2)=0$, which gives the explicit expression in (\ref{tao2}). Moreover, for any initial state $x(0)$, the Lyapunov function $W$ converges to zero asymptotically due to the fact that $\omega<\varpi$. \qed \end{pf} \subsubsection{Remark 3.} In Theorem \ref{theorem2}, a freely designable MIET relying only on the design parameters $\bar Z, \varepsilon$ is completely established. Moreover, we can further enlarge the positive MIET by decreasing the parameter $b$, which can be realized by adjusting the matrices $Q$ and $P$ and the control gain $K$. Analogous to the case of nonlinear systems, the event triggering mechanism also enjoys the global robust event-separation property for general linear systems. \subsubsection{Remark 4.} Theorems 1 and 2 both give the estimate $\dot \phi =-b(1+\phi)^{2}-\varepsilon$ of the same form, so that $\tau_1=\tau_2$. It is worth noting that this estimate also matches the results in \cite{NGC19}, which suggests a similar event triggering mechanism with closely related properties. \subsection{Robustness Analysis} We continue by considering the following perturbed linear system \begin{eqnarray} \label{add disturbance} \dot x(t)=Ax(t)+Bu(t)+Hd(t).
\end{eqnarray} \begin{proposition} \label{theorem3} Consider the general linear control system (\ref{add disturbance}) with the event triggering mechanism (\ref{Triggering rule 1}) and any bounded disturbance $|d|\leq\bar d$, for any initial condition $x(0)$. Then the designable MIET admits the same positive lower bound $\tau_3=\tau_2$, implying that the minimum event interval is robust to perturbations. Furthermore, if the perturbation $d(t)$ is convergent, then the system state also asymptotically converges to the origin. \end{proposition} \begin{pf} We first analyze the robustness of the algorithm. Recall the Lyapunov function candidate and its derivative \begin{eqnarray} \dot W&\leq&-\frac{1}{2} \lambda_{\min}(Q)|x|^{2}+|P B K||x||e|+\frac{1}{2} \omega \lambda_{\min}(P)|e|^{2}\nonumber\\ &&+Z|PA||x||e|+Z|PBK||x||e|+Z|PBK||e|^2\nonumber\\ &&+|PH||x|\bar d+Z|PH||e|\bar d.\nonumber \end{eqnarray} Therefore, it is clear that the condition below is still satisfied: \begin{eqnarray} \omega<\varpi= \frac{\lambda_{\min }(Q)}{\lambda_{\min }(P)} \frac{|x|^{2}}{|e|^{2}}-2(1+Z) \frac{|P B K|}{\lambda_{\min }(P)} \frac{|x|}{|e|}.\nonumber \end{eqnarray} Following the same lines as in the proof of Theorem 2, one finds that the derivation of the MIET depends only on the design parameters $\bar Z, b, \varepsilon$. We therefore conclude that the minimum event interval is still guaranteed in the presence of the system perturbation $d(t)$. That a convergent $d(t)$ implies a convergent $x(t)$ is a consequence of the exponential stability of the linear system. \qed \end{pf} \section{Numerical simulations} In this section, two simulations are given to show the effectiveness of the proposed theoretical results. \subsection{Nonlinear system} First, let us consider the forced van der Pol oscillator: $\dot x_1=x_2$, $\dot x_2=(1-x^2_1)x_2-x_1+u$, where $x_1,x_2\in\mathbb{R}$ are the states and $u$ is the control input to be designed. Here, we adopt the control law $u=-x_2-(1-x^2_1)x_2$, which stabilizes the origin of the system. The nominal values of the design parameters are chosen as $\bar Z=1$, $\varepsilon=1$. The Lipschitz constant $L$ and the matrix $M$ are taken as $1$ and $[1\quad0.25;0.25 \quad1]$, respectively. The initial states are arbitrarily set to $x_1=1, x_2=-0.5$. \begin{figure}[h] \label{shiyitu} \centering \subfigure[]{\includegraphics[width=3.2in,height=2in]{N_States}} \subfigure[]{\includegraphics[width=3.2in,height=2in]{N_e}} \caption{(a) The trajectories of the states $x_1, x_2$, and (b) the trajectories of the measurement errors $e_1, e_2$, both for the event-triggered nonlinear system.} \end{figure} \begin{figure}[h] \label{shiyitu} \centering \includegraphics[width=3.2in,height=2in]{N_Z_and_Events} \caption{The evolution of the additional dynamic variable $Z$ and the related events for the event-triggered nonlinear system.} \end{figure} In Fig.1, the asymptotic convergence of the states $x_1, x_2$ and of the measurement error $e$ validates the corresponding results in Theorem 1. The evolution of the additional dynamic variable $Z$ can be observed in Fig.2. It shows that the events are triggered almost periodically, approximately every 0.9$s$. Note that this is different from sampled-data control with a fixed period, where the system runs in open loop between two consecutive sampling instants. In the proposed event-triggering framework, we can increase the design parameter $\bar Z$ so as to reach a larger inter-event time while guaranteeing stability (a minimal simulation sketch of the scheme is given below).
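The following sketch (our illustration, not the authors' code) simulates this event-triggered van der Pol example under a simple forward-Euler discretization; the step size and horizon are arbitrary choices.
\begin{verbatim}
import numpy as np

L = 1.0                                   # Lipschitz constant
M = np.array([[1.0, 0.25], [0.25, 1.0]])  # design matrix M
lam_min = np.linalg.eigvalsh(M).min()     # lambda_min(M) = 0.75
nM = np.linalg.norm(M, 2)                 # |M| = 1.25
Z_bar, eps = 1.0, 1.0                     # design parameters

def k(x):  # stabilizing law u = -x2 - (1 - x1^2) x2
    return -x[1] - (1.0 - x[0] ** 2) * x[1]

def f(x, u):  # forced van der Pol dynamics
    return np.array([x[1], (1.0 - x[0] ** 2) * x[1] - x[0] + u])

def omega(x, e, Z):  # countdown dynamics, eq. (10)
    ne = np.linalg.norm(e)
    if ne == 0.0:
        return -eps
    r = np.linalg.norm(x) / ne
    varpi = r ** 2 / lam_min - (1.0 + Z) * 2.0 * L * nM / lam_min * r
    return min(0.0, varpi) - eps

dt, T = 1e-4, 10.0
x = np.array([1.0, -0.5])                 # initial state from the example
x_s, Z, events = x.copy(), Z_bar, [0.0]   # sampled state, countdown, events
for step in range(int(T / dt)):
    x = x + dt * f(x, k(x_s))             # control held at the last sample
    Z += dt * omega(x, x_s - x, Z)
    if Z <= 0.0:                          # event: resample state, reset Z
        x_s, Z = x.copy(), Z_bar
        events.append((step + 1) * dt)

iet = np.diff(events)
print("min / mean inter-event time: %.3f / %.3f s"
      % (iet.min(), iet.mean()))
\end{verbatim}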
For example, the interval between two consecutive events increases up to 3.722$s$ when the parameter $\bar Z=3$ is chosen. Also, once the matrix $M$ and the Lipschitz constant $L$ are fixed, $b=2.083$ can be obtained directly from (\ref{tao1}). In this case, the positive lower bound $\tau_1$ can be calculated as 0.189$s$, which is smaller than the inter-event times observed in the simulation. \subsection{Linear system} In order to compare the present results with the static and dynamic triggering mechanisms for linear systems in \cite{T07} and \cite{G14}, we use the same linear plant model with the same controller and the same gains. Specifically, choosing $A=[0\quad1;-2 \quad3]$, $B=[0;1]$, $K=[1 \quad -4]$ yields $P=[1\quad 1/4;1/4\quad 1]$ and $Q=[1/2 \quad1/4;1/4 \quad3/2]$. Since $\frac{|P B K|^{2}}{\lambda_{\min }(P) \lambda_{\min }(Q)}=54.61$, based on formula (\ref{tao2}) we choose $b=55$. Meanwhile, adopting the same simulation setup as in \cite{JC19}, $\bar Z=1$ and $\varepsilon=1$ are chosen as the design parameters. We set the initial states to $x_1=10, x_2=0$. \begin{figure}[h] \label{shiyitu} \centering \subfigure[]{\includegraphics[width=3.2in,height=2in]{States_and_Events}} \subfigure[]{\includegraphics[width=3.2in,height=2in]{Z_and_Events}} \caption{Numerical simulation results of the dynamic MIET-designable event-triggered linear control system with $\bar Z=1$ and $\varepsilon=1$. (a) The state trajectories and the event triggering times. (b) The triggering events and the evolution of $Z$ from 3.2 to 5 s.} \end{figure} \begin{figure}[h] \label{shiyitu} \centering \includegraphics[width=3.2in,height=2in]{e} \caption{The evolution of the measurement errors $e_1, e_2$ for the event-triggered linear control system.} \end{figure} From Fig.3 and Fig.4, it can be seen that the state $x$ and the measurement error $e$ converge to the origin asymptotically, which validates the stability of the general linear control system with the MIET-designable event triggering mechanism. Based on formula (\ref{tao2}), we can calculate a lower bound on the MIET of 9 $ms$, which is smaller than the minimum inter-event time of 36 $ms$ observed in the simulation, implying that the calculated lower bound on the MIET may be conservative. In the meantime, from the simulation we can compute the maximum inter-event time as 86 $ms$. \begin{figure}[h] \label{shiyitu} \centering \includegraphics[width=3.2in,height=2in]{Omega} \caption{The evolution of $\omega$ for the event-triggered linear control system.} \end{figure} In Fig.5, the variable $\omega$ always remains below $-\varepsilon$, which implies that the clock-like variable $Z$ always decreases, while its rate of decrease varies throughout the countdown process. The evolution of the term ``$\frac{1}{2}Ze^{\top}Pe$'' is shown in Fig.6 and is not monotonic. \begin{figure}[h] \label{shiyitu} \centering \includegraphics[width=3.2in,height=2in]{W} \caption{The evolution of the term $\frac{1}{2}Ze^{\top}Pe$ in the Lyapunov function.} \end{figure} Next, as in \cite{PSH19}, we compute the eigenvalues of the state matrix $A+BK$ as $\lambda_1=-0.5+0.866j$ and $\lambda_2=-0.5-0.866j$, which are non-real complex conjugates. Furthermore, we notice that $\pi/0.866=3.6277$ is very close to the period observed in Fig.7. All of these facts are consistent with Theorem 3 in \cite{PSH19}, in which planar linear systems and the static event condition $\|\hat{x}(t)-x(t)\|\leq\sigma\|x(t)\|$ of \cite{T07} are considered (the constants and eigenvalues above are checked numerically in the sketch below).
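As a quick numerical check, the following sketch (our illustration, not the authors' implementation) verifies the constant $b$, the MIET bound $\tau_2$ from (\ref{tao2}), and the closed-loop eigenvalues used above.
\begin{verbatim}
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, 3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, -4.0]])
P = np.array([[1.0, 0.25], [0.25, 1.0]])
Q = np.array([[0.5, 0.25], [0.25, 1.5]])

PBK = P @ B @ K
b = np.linalg.norm(PBK, 2) ** 2 / (np.linalg.eigvalsh(P).min()
                                   * np.linalg.eigvalsh(Q).min())
print("b = %.2f" % b)                    # 54.61, rounded up to b = 55

Zb, eps, bd = 1.0, 1.0, 55.0             # design values used in the text
s = np.sqrt(bd / eps)
tau2 = (np.arctan(s * (1 + Zb)) - np.arctan(s)) / np.sqrt(bd * eps)
print("tau_2 = %.1f ms" % (1e3 * tau2))  # about 9 ms

lam = np.linalg.eigvals(A + B @ K)
print("eig(A+BK):", lam)                 # -0.5 +/- 0.866j
print("pi/|Im| = %.4f" % (np.pi / abs(lam[0].imag)))  # 3.6277
\end{verbatim}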
Moreover, the initial states $[10;0],[-10;0],[0;10],[0;-10],[5;5]$ were tested in the same setting, and the results further validate the statement of Theorem 3 in \cite{PSH19}. It is observed that the period of the inter-event times does not depend on the initial state of the controlled system, although different initial states may lead to different phases. In the meantime, these findings also provide some hints about the connection between static and dynamic triggering mechanisms. \begin{figure}[h] \label{shiyitu} \centering \includegraphics[width=3.2in,height=2in]{IET} \caption{The evolution of the inter-event times.} \end{figure} \section{Concluding remarks} In this work, in order to improve certain crucial characteristics such as the MIET and the event-separation property, we have developed a new framework for the design and analysis of event-triggered control systems. An MIET-designable triggering mechanism has been established for nonlinear systems and for general linear systems. The robustness of the proposed results has also been considered. It is shown that the present MIET-designable triggering mechanism guarantees Zeno-free triggering and the robust global event-separation property. Currently, we are working on applying the proposed triggering mechanism to distributed control and networked control of general linear systems. It is also of interest to investigate other kinds of disturbances, such as time delays, DoS attacks, or timing errors. In the future, we also plan to apply the MIET-designable event triggering mechanism to broader fields, such as cyber-physical systems and power network systems. \bibliographystyle{ieeetr}
{ "attr-fineweb-edu": 1.768555, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd5s5qrqCyw_JhWDG
\newtheorem{theorem}{Theorem} \newtheorem{conjecture}{Conjecture} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem{observation}{Observation} \newtheorem{definition}{Definition} \newtheorem{notation}{Notation} \newtheorem{property}{Property} \begin{document} \date{\today} \title{Size of Interventional Markov Equivalence Classes\\ in Random DAG Models} \author[1]{Dmitriy Katz} \author[1]{Karthikeyan Shanmugam} \affil[1]{IBM Research, NY \& MIT-IBM Watson AI Lab.} \author[2]{Chandler Squires} \author[2]{Caroline Uhler} \affil[2]{MIT, Cambridge, MA.} \maketitle \unmarkedfntext{\small \\ Author email addresses \\ \texttt{[email protected]}, \texttt{[email protected]}, \texttt{[email protected]} \& \texttt{[email protected]} } \begin{abstract} Directed acyclic graph (DAG) models are popular for capturing causal relationships. From observational and interventional data, a DAG model can only be determined up to its \emph{interventional Markov equivalence class} (I-MEC). We investigate the size of MECs for random DAG models generated by uniformly sampling and ordering an Erd\H{o}s-R\'{e}nyi graph. For constant density, we show that the expected $\log$ observational MEC size asymptotically (in the number of vertices) approaches a constant. We characterize I-MEC sizes with similarly high precision in the above settings. We show that the asymptotic expected number of interventions required to fully identify a DAG is a constant. These results are obtained by exploiting Meek rules and coupling arguments to provide sharp upper and lower bounds on the asymptotic quantities, which are then calculated numerically up to high precision. Our results have important consequences for the experimental design of interventions and the development of algorithms for causal inference. \end{abstract} \section{Introduction} Directed acyclic graphs (DAGs) are popular models for capturing causal relationships among a set of variables. This approach has found important applications in various areas, including biology, epidemiology and sociology \citep{gangl2010causal,lagani2016probabilistic}. A central problem in these applications is to learn the causal DAG from observations on the nodes. A popular approach is to infer missing edges based on conditional independence information that is learned from the data~\citep{Spirtes,kalisch2007estimating}. However, multiple DAGs can encode the same set of conditional independences. Hence in general the causal DAG can only be learned up to a \textit{Markov equivalence class} (MEC), and interventional data is needed in order to identify the causal DAG. While an MEC may contain a super-exponential number of candidate DAGs, \cite{gillispie2001enumerating} showed, by enumerating all MECs on DAGs with up to 10 nodes, that for such small graphs an MEC on average contains about four DAGs and that about a quarter of all MECs consist of a unique DAG. Generalizing these results to larger graphs is critical for estimating the average number of interventional experiments needed for identifying the underlying causal DAG. More generally, given the recent rise of interventional data in genomics enabled by genome editing technologies~\citep{xiao2015gene}, it is of great interest to understand the average reduction in the size of MECs through the availability of interventional data, i.e., to characterize the average size of an \emph{interventional Markov equivalence class} (I-MEC).
Further, such an analysis would also shed light on the number of additional interventions needed to uniquely identify the underlying causal DAG, moving beyond worst-case bounds. The problem of characterizing the size of an MEC or I-MEC is not only of interest for the experimental design of interventions but also from an algorithmic perspective. A popular approach to causal inference is given by score-based methods that assign a score such as the Bayesian Information Criterion (BIC) to each DAG or MEC and greedily optimize over the space of DAGs~\citep{castelo2003inclusion}, a combination of permutations and undirected graphs~\citep{teyssier2012ordering,raskutti2013learning,Solus,Mohammadi}, or MECs~\citep{Meek_GES,brenner2013sparsityboost}. Similar score-based approaches have also been developed in the interventional setting~\citep{hauser2012characterization,wang2017permutation,Yang2018}. While a greedy step in the space of graphs can easily be defined (addition, removal or flipping of an edge), a greedy step in the space of Markov equivalence classes is complicated~\citep{Meek_GES}. Hence performing a greedy algorithm in the space of MECs only makes sense if the space of MECs is significantly smaller than the space of DAGs. For instance, showing that typically occurring MECs or I-MECs are small would imply that graph-based search procedures operate on a similar search space as the ones that use MECs, but can do so using simpler moves. Motivated by these considerations, in this work we initiate the study of interventional and observational MECs for random DAG models. We focus on \textit{random order DAGs}, where the skeleton is a random Erd\H{o}s-R\'{e}nyi graph with constant density $\rho$ and the ordering is a random permutation. We derive tight bounds for the asymptotic versions of various metrics on the I-MECs. More specifically, our contributions are as follows: \begin{enumerate}[itemsep=-4pt] \item We derive tight upper and lower bounds on (a) the asymptotic expected number of unoriented edges in an I-MEC given data from $r=0,1,2,\ldots$ interventions; (b) the asymptotic probability that the I-MEC is a unique DAG given data from $r$ interventions; (c) the asymptotic number of additional interventions needed to fully discover the DAG given data from $r$ interventions; and (d) the asymptotic expected $\log$-size of the I-MEC given data from $r$ interventions. \item We also provide tight bounds on the number of unoriented edges in the I-MEC when $r$ interventions have been performed using different algorithms that choose the interventions given the observational MEC as input. \item If $M(r)_n$ is the metric of interest for a random order DAG of size $n$ and $r \geq 0$ interventions, then our bounds are of the following form: $\mathbb{E}[M(r)_n] \leq \mathbb{E}[M(r)_{\infty}] \leq \mathbb{E}[M(r)_n] + \epsilon_n$. Here, $M(r)_{\infty}$ is the limiting asymptotic metric, which we show is well defined and exists. We also show that $\epsilon_n$ decays exponentially fast in $n$ for constant density $\rho$. \item We numerically compute $\mathbb{E}[M(r)_n]$ through Monte Carlo simulations for $n$ as large as $110$, at which point $\epsilon_n$ is a small constant in various parameter regimes. \item One of the surprising results is that for constant-density random order DAGs, all the above metrics tend asymptotically to a constant. Through a combination of analysis of our bounds and numerical computation, we can characterize these constants precisely.
\item As an example of the nature of our results, quite surprisingly, the asymptotic (as $n \rightarrow \infty$) expected $\log$-observational MEC size of a random order DAG with density $0.5$ is at most $3.497$ with probability at least $0.99$ (see Theorem \ref{thm-upp-bounds}). \end{enumerate} All omitted proofs can be found in the supplemental material. \textbf{Related Work:} There is currently only limited work available on counting and characterizing MECs. In \citep{gillispie2001enumerating}, the authors enumerated all MECs on DAGs with $p \leq 10$ nodes and analyzed the total number of MECs, the average size of an MEC, and the proportion of MECs of size one on $p$ nodes. Motivated by this work, \cite{gillispie2006formulas}, \cite{steinsky2003enumeration}, and \cite{wagner2013asymptotic} provided formulas for counting MECs of a specific size. Supplementing this line of work, \cite{he2016formulas} developed various methods for computing the size of a given MEC. Finally, \cite{radhakrishnan2016counting} addressed these enumerative questions using a pair of generating functions that encode the number and size of MECs for DAGs with a fixed skeleton (i.e.,~the underlying undirected graph) and also applied these results to derive bounds on MEC sizes for various families of DAGs on trees~\citep{radhakrishnan2018counting}. Another line of work \citep{hu2014randomized,hauser2012two,shanmugam2015learning,eberhardt2012number,hyttinen2013experiment,kocaoglu2017cost} aims at characterizing the number of interventions required to learn a causal DAG completely. While some of these works deal with the active learning setting \citep{shanmugam2015learning,hauser2012two}, others choose interventions non-adaptively given the observational MEC \citep{hu2014randomized,eberhardt2012number,hyttinen2013experiment,kocaoglu2017cost,bello2017learning} and hence are concerned with the worst-case scenario. \section{Preliminaries and Definitions} In this work, we characterize the asymptotic behavior of different metrics that capture the amount of ``causal relationships'' that can be inferred from observational and interventional data on random DAG models. In this section, we describe the random orderDAG model, briefly review causal DAG models and Markov equivalence, and introduce the metrics that we will analyze in this work. \subsection{Random Order DAG Model} \label{sec:orderDAG} Let $G = (V, E)$ be a directed acyclic graph (DAG) with vertices $V= [n]$ and directed edges $E \subseteq V \times V$. A random \textbf{orderDAG} with density $\rho$ on $n$ vertices is a DAG $G_n$ whose \emph{skeleton} (i.e., underlying undirected graph) is given by an Erd\H{o}s-R\'enyi graph on $n$ vertices with edge probability $\rho$ and whose edges are oriented according to a total ordering that is uniformly sampled among all permutations of the $n$ vertices. We denote a graph $G_n$ sampled from this model by $G_n \sim orderDAG(n, \rho)$. \textbf{Remark:} Our sampling procedure is a standard one used for testing causal inference algorithms. It is for example used in the well-known \texttt{pcalg} R package\footnote{https://rdrr.io/rforge/pcalg/man/randomDAG.html}. A different sampling scheme would be to sample uniformly at random from the set of all DAGs, in which isomorphic DAGs would not be double counted. However, such a sampling scheme is difficult to implement in practice, while ours has a generative model that is easy and intuitive, as the following sketch illustrates.
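\textbf{Sampling sketch:} The following minimal sketch (our illustration; the function name and the use of \texttt{numpy} are our own choices) samples $G_n \sim orderDAG(n, \rho)$ exactly as defined above.
\begin{verbatim}
import numpy as np

def sample_order_dag(n, rho, rng):
    """Sample G_n ~ orderDAG(n, rho): an Erdos-Renyi skeleton with
    edge probability rho, oriented by a uniformly random ordering."""
    order = rng.permutation(n)          # order[i] = i-th vertex in ordering
    adj = np.zeros((n, n), dtype=bool)  # adj[u, v] = True iff edge u -> v
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < rho:      # include the skeleton edge
                adj[order[i], order[j]] = True  # orient earlier -> later
    return adj

rng = np.random.default_rng(0)
G = sample_order_dag(10, 0.5, rng)
print(G.sum(), "directed edges")
\end{verbatim}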
Limited prior computational evidence in the observational setting suggests that the two sampling schemes behave similarly~\citep{gillispie2001enumerating}. \subsection{Markov Equivalence} A joint distribution $P$ on the variables $(X_v)_{v\in V}$ associated with the vertices of a DAG $G$ is \emph{Markov} with respect to $G$ if for any node $v \in G$, $X_v$ is conditionally independent of its non-descendants given its parents. In this case we say that $P \in \mathcal{M}(G)$. Two directed acyclic graphs $G$ and $G'$ are in the same \textit{Markov equivalence class} (MEC) if and only if $\mathcal{M}(G) = \mathcal{M}(G')$. Two DAGs in the same MEC entail the same set of conditional independence relations \citep{meek1995causal}. The MEC of a DAG $G$ can be uniquely represented by a partially directed graph $\mathrm{Ess}(G)$ known as the \emph{essential graph} of $G$. The skeleton of $\mathrm{Ess}(G)$ is the same as the skeleton of $G$, and the directed edges in $\mathrm{Ess}(G)$ are precisely those edges in $G$ that have the same orientation in all members of the MEC of $G$. All other edges in $\mathrm{Ess}(G)$ are unoriented~\citep{hauser2012characterization}. The following procedure provides all directed edges in $\mathrm{Ess}(G)$: \begin{enumerate}[itemsep=-5pt] \item For every triple of nodes $i,j,k\in V$, if $i$ and $j$ are not adjacent in $G$ and the ordered pairs $(i,k), (j,k)\in E$, then both edges $(i,k)$ and $(j,k)$ are also oriented in $\mathrm{Ess}(G)$. \item Orient edges by successive application of the `Meek rules' (see \citep{meek1995causal} or Appendix \ref{meek}) until they cannot be applied anymore to orient any new edge. \end{enumerate} \subsection{Interventional Markov Equivalence} Let $I \subset V$ and consider the set of single-node interventional distributions $(P_{i})_{i\in I}$, where node $i$ is set to some constant. Since in $P_i$ node $X_i$ (a constant) is independent of its parents $X_{\mathrm{Pa}(i)}$, the intervention introduces conditional independences beyond those present in $P$. Let $G^{(i)}$ denote the intervened DAG obtained by deleting the edges from $\mathrm{Pa}(i)$ to $i$. If $P$ is Markov with respect to $G$, then $P_i$ is Markov with respect to $G^{(i)}$. Two DAGs $G$ and $G'$ are in the same \emph{$I$-Markov equivalence class} (I-MEC) if and only if $G^{(i)}$ and $G'^{(i)}$ are in the same MEC for all $i \in I$~\citep{hauser2012characterization}. Similarly as in the purely observational setting, an $I$-MEC can be uniquely represented by an \emph{$I$-essential graph} denoted by $\mathrm{Ess}(G,I)$. The skeleton of $\mathrm{Ess}(G, I)$ is the same as the skeleton of $G$, and the directed edges in $\mathrm{Ess}(G,I)$ are precisely those edges in $G$ that have the same orientation in all members of the $I$-MEC of $G$. All other edges in $\mathrm{Ess}(G,I)$ are unoriented. The following procedure provides all directed edges in $\mathrm{Ess}(G, I)$: \begin{enumerate}[itemsep=-4pt] \item For every triple of nodes $i,j,k\in V$ with $(i,k), (j,k)\in E$, if $i$ and $j$ are not adjacent in $G$, then both edges $(i,k)$ and $(j,k)$ are also oriented in $\mathrm{Ess}(G,I)$. \item For every edge $(i,j)$ such that $i \in I$ or $j \in I$, the edge $(i,j)$ is oriented. \item Orient further edges by successive application of the four rules in \citep{hauser2012characterization} (also given in Appendix \ref{meek}) until they cannot be applied anymore to orient any new edges.
\end{enumerate} \subsection{Metrics of Interest} Suppose that the causal Bayesian network that generates the data (both interventional and observational) is an orderDAG $G_n$, and let $\mathbf{P}_{*}$ be an associated family of interventional distributions compatible with $G_n$. In this setting, our work asymptotically characterizes several metrics that reflect the portions of $G_n$ identifiable from an observational distribution $P$ and possibly also interventional distributions. We denote by uEss an essential graph that is also a DAG, i.e., an essential graph representing an MEC consisting of a unique DAG. Such DAGs are of particular interest since they are identifiable from purely observational data. In the following, we will measure the degree of identifiability of a random DAG $G_n \sim orderDAG(n, \rho)$ using the following metrics: \begin{enumerate}[itemsep=-5pt] \item Let $X_n$ be the number of unoriented edges in $\mathrm{Ess}(G_n)$. We show that $X_{\infty} := \lim \limits_{n \rightarrow \infty} X_n$ exists. \item Let $isuEss_n$ be an indicator variable that equals $1$ if and only if $\mathrm{Ess}(G_n)$ is a DAG. The limit is denoted $isuEss_{\infty}$. \item Let $I_n$ be the number of single-node interventions required to fully orient $G_n$. The limit is denoted $I_{\infty}$. \item Let $L_n$ be the size of the (observational) MEC of $G_n$. The limit is denoted $L_{\infty}$. \item Let $X_n(r)$ be the minimum number of unoriented edges in $\mathrm{Ess}(G_n,I)$, optimized over all $I$ with $\lvert I \rvert=r$. The limit is denoted $X_{\infty}(r)$. \item Let $isuEss_n(r)$ be an indicator variable that is 1 when $X_n(r) = 0$. The limit is denoted $isuEss_{\infty}(r)$. \item Let $L_n(r)$ be the size of the interventional Markov equivalence class when the interventions in the set $I$ are performed on $G_n$, where $I$ minimizes the number of unoriented edges in $\mathrm{Ess}(G_n,I)$ over all $I$ with $\lvert I \rvert=r$. The limit is denoted $L_{\infty}(r)$. \end{enumerate} \section{Main Results} We first describe the nature of our results and the approach taken to obtain them, using $X_n$ as an example. The results for all other metrics follow a similar approach, although the technical details differ depending on the metric of interest. We show that $\mathbb{E}(X_n) \leq \mathbb{E}(X_\infty) \leq \mathbb{E}(X_n) + \epsilon_n$, and we provide an explicit expression for $\epsilon_n$. As a consequence, tight upper and lower bounds on the quantities of interest can be constructed by numerically computing $\mathbb{E}[X_n]$ via Monte Carlo simulation, i.e., by generating random order DAGs $G_n$ for large $n$ and averaging. Formally, we state the main result of our work on the asymptotic quantities of the various metrics. \begin{theorem}\label{asymp} The various metrics satisfy the following inequalities: \begin{align} E[X_n(r)] &\leq E[X_{\infty}(r)] \leq E[X_n(r)] + \epsilon_n \nonumber \\ E[I_n] &\leq E[I_{\infty}] \leq E[I_n] + \epsilon_n \nonumber \\ E[\log_2 (L_n(r))] &\leq E[\log_2(L_{\infty}(r))] \leq E[X_{\infty}(r)] \nonumber \\ E[isuEss_n(r)] &\geq E[isuEss_{\infty}(r)] \geq E[isuEss_n(r)] - \epsilon_n \nonumber \end{align} for all $r=0,1,2, \ldots$.
Here, $\epsilon_n$ is defined as follows: \begin{align} \epsilon_n=\sum_{i \geq n} \mathrm{RHS}(\rho,i) \leq\ & \frac{(1 - \rho (1 - \rho))^n}{\rho (1-\rho)^2} \nonumber \\ & + \frac{n(1 - \rho (1 - \rho))^{n-1}}{1-\rho},\label{eqn:eps} \end{align} where $\mathrm{RHS}(\rho,n) = \rho n (1 - \rho (1 - \rho))^{n-1}$ and $\rho$ is the edge probability used when sampling an order DAG. \end{theorem} We establish the main result on the upper and lower bounds through intermediate results as follows (explained using the example of $X_n$): a) We first exhibit a coupling between $G_n$ and $G_{n+1}$ such that their respective marginal distributions are preserved; this is done in Section \ref{sec:coupling}. b) Using the properties of this specific coupling, we show that $\mathbb{E}[X_n]$ is a monotonic sequence in $n$ in Section \ref{monotone}. c) The expression for $\epsilon_n$ is obtained by upper bounding the successive differences $\mathbb{E}[X_{n+1}]-\mathbb{E}[X_{n}]$, again using the properties of order DAG sampling and the coupling; this is explained in Sections \ref{obsgap} and \ref{intgap}. The remaining sections provide additional results on I-MECs obtained through other interventional design algorithms, along with numerical and simulation results. \subsection{Probability coupling}\label{sec:coupling} In this section, we provide a coupling between the distributions of $G_n$ and $G_{n+1}$ such that the `un-orientability' of certain edges is preserved. For all $1\leq i < j\leq n$, let $A_{i,j}$ be a binary random variable that is 1 with probability $\rho$. Let $G_n$ be the DAG with nodes $v_1 \ldots v_n$ and a directed edge $v_i\to v_j$ if and only if $A_{i,j} = 1$. \vspace{0.2cm} \begin{observation} $G_n$ with permutation $v_1, v_2, \ldots, v_n$ has the distribution of a random orderDAG on $n$ vertices with density $\rho$. \end{observation} \textbf{Remark:} Observation 1 says that randomly sampling a symmetric adjacency matrix (an undirected graph with edge probability $\rho$), permuting rows and columns with a random permutation, and then taking the upper triangular part (orienting the graph according to the permutation) is the same as fixing the permutation to $1,2,\ldots,n$ and populating the upper triangular part randomly. \textbf{Coupling:} Motivated by the above observation, we couple $G_n$ and $G_{n+1}$ as follows. We first generate $A_{i,j}$ for $1\leq i<j \leq n$ as above and use these values to orient $G_n$. Then, we generate additional random variables $A_{i,n+1}$ for all $1 \leq i\leq n$ and orient the edges incident to $v_{n+1}$ accordingly. This coupling, along with certain structural properties of the Meek rules (given in Appendix \ref{meek}), leads to the following results on the orientability of certain edges in $G_n$ and $G_{n+1}$. \vspace{0.2cm} \begin{lemma}\label{orderUndirected} Under the above coupling, if an edge $(i,j)$ is unorientable in $G_n$, it is also unorientable in $G_{n+1}$. \end{lemma} \begin{lemma}\label{orderRUndirected} Under the above coupling, if the edge $(i,j)$ is unorientable in $G_n$ after a set of interventions $R$ on $G_n$, then it is also unorientable in $G_{n+1}$ after performing the same set of interventions together with an intervention on $v_{n+1}$. \end{lemma} \subsubsection{Monotonicity Lemmas} \label{monotone} We prove that the expected values of all metrics of interest are monotonic in $n$ using the properties of the coupling demonstrated above.
First, we show this for the observational quantities by appealing to Lemma \ref{orderUndirected}. \begin{theorem}\label{ELmonotone} The following statements hold with probability $1$ under the coupling between $G_n$ and $G_{n+1}$: \\ a) $X_{n+1} \geq X_n$; \\ b) $L_{n+1} \geq L_n$; \\ c) $I_{n+1} \geq I_n$. \\ Therefore, $\mathbb{E}(X_{n+1}) \geq \mathbb{E}(X_n)$, $\mathbb{E}(L_{n+1}) \geq \mathbb{E}(L_n)$ and $\mathbb{E}(I_{n+1}) \geq \mathbb{E}(I_n)$. \end{theorem} Similar monotonicity properties for the interventional quantities are obtained by appealing to Lemma \ref{orderRUndirected}. Note, however, that these proofs are not a straightforward application of Lemma \ref{orderRUndirected}; additional arguments are often needed to show the following results. \begin{theorem}\label{EXRmonotone} $X_{n+1}(r) \geq X_n(r)$ with probability $1$ under the coupling between $G_n$ and $G_{n+1}$. Hence, $\mathbb{E}(X_{n+1}(r)) \geq \mathbb{E}(X_n(r))$. \end{theorem} The previous two theorems directly provide the following result. \begin{theorem}\label{EisPDAGRmonotone} $isuEss_{n+1}(r) \leq isuEss_n(r)$ for all $r=0,1,2, \ldots$ best interventions, with probability $1$ under the coupling between $G_n$ and $G_{n+1}$. Hence, $\mathbb{E}(isuEss_{n+1}(r)) \leq \mathbb{E}(isuEss_n(r))$. \end{theorem} \begin{proof} This follows directly from Theorem \ref{EXRmonotone} and Theorem \ref{ELmonotone}. \end{proof} \begin{theorem}\label{ELRmonotone} $L_{n+1}(r) \geq L_n(r)$ with probability $1$ under the coupling between $G_n$ and $G_{n+1}$. Hence, $\mathbb{E}(L_{n+1}(r)) \geq \mathbb{E}(L_n(r))$. \end{theorem} The established monotonicity results help prove that the asymptotic versions of these metrics exist. \begin{theorem} \label{limexists} $\lim \limits_{n \rightarrow \infty} X_n = X_{\infty}$ exists and $\mathbb{E}[X_{\infty}]=\lim_{n \rightarrow \infty} \mathbb{E}[X_n]$. \end{theorem} \textbf{Remark:} Theorem \ref{limexists} extends by analogous arguments to all metrics that have been shown to be monotonically non-decreasing, i.e., the metrics in the set $\{X_n(r),I_n,L_n(r)\}$. Note that monotonically non-increasing sequences like $isuEss_n(r)$ are bounded below and above, and hence the results follow from the same theorem applied to shifted negatives of these variables. \subsubsection{Gap Bounds on Observational Metrics}\label{obsgap} Using the properties of the coupling between $G_n$ and $G_{n+1}$, we can show that the expected difference between the observational metrics for $G_n$ and their asymptotic versions is bounded. \begin{theorem}\label{lazyEXbound} $\mathbb{E}(X_\infty) - \mathbb{E}(X_n) \leq \sum^\infty_{i = n} \rho i (1 - \rho (1 - \rho))^{i-1}$. \end{theorem} \begin{theorem}\label{lazyEIbound} $\mathbb{E}(I_\infty) - \mathbb{E}(I_n) \leq \sum^\infty_{i = n} \rho i (1 - \rho (1 - \rho))^{i-1}$. \end{theorem} \subsubsection{Gap Bounds on Interventional Metrics} \label{intgap} In the following, we show that the expected difference between the interventional metrics for $G_n$ and their asymptotic versions is bounded, again using the properties of the coupling described above. \begin{theorem}\label{lazyEXRbound} $\mathbb{E}(X_\infty(r)) - \mathbb{E}(X_n(r)) \leq \sum^\infty_{i = n} \rho i (1 - \rho (1 - \rho))^{i-1}$. \end{theorem} \begin{theorem}\label{lazyEisPDAGbound} $\mathbb{E}(isuEss_n(r)) - \mathbb{E}(isuEss_\infty(r)) \leq \sum^\infty_{i = n} \rho i (1 - \rho (1 - \rho))^{i-1}$. \end{theorem} All these results together allow us to prove the main result (Theorem \ref{asymp}).
\begin{proof}[Proof of Theorem \ref{asymp}] The theorem follows from the results in Sections \ref{monotone}, \ref{obsgap}, and \ref{intgap}. We use the fact that $\log_2(L_n(r)) \leq X_n(r)$, since $L_n(r) \leq 2^{X_n(r)}$ by considering all possible orientations of the unoriented edges in the $I$-essential graph. \end{proof} \subsubsection{Lower Bound on Successive Differences} The above gap bounds rely on upper bounding the successive differences of $\mathbb{E}[X_n]$. In the following, we provide a lower bound on the successive differences, which implies that gap bounds decaying faster than exponentially cannot exist. \begin{theorem}\label{XisSlow} $$\mathbb{E}(X_{n}) - \mathbb{E}(X_{n-1}) \geq (n-1) \rho (1-\rho)^{2n - 4} \geq \rho (1-\rho)^{2n}.$$ \end{theorem} \section{Results on I-MECs obtained by Interventional Design Algorithms} In the following, we provide asymptotic convergence rates for the number of undirected edges after $r$ interventions, when the interventions are chosen by an algorithm that has a property we call \textit{downstream-independence}. Greedy algorithms that choose $r$ interventions sequentially based on the essential graph at the observational stage are downstream-independent. Note that in this section we do not consider $X_n(r)$, which is the minimum number of edges left unoriented when $r$ interventions are chosen based on the DAG structure. We are instead interested in algorithms that optimize the interventions based on the essential graph, which can be inferred from purely observational datasets. \begin{notation} Let $J$ be a set of interventions. We write $H = J(G)$ when $H$ is the essential graph that results from performing the interventions $J$ on the underlying causal DAG $G$. Note that if $G'$ is a subgraph of $G$, then $J(G')$ is obtained by skipping the interventions on nodes outside of $G'$. \end{notation} \begin{lemma}\label{oneVertexAlg} Let $G$ be a DAG and $v_n$ a vertex of $\mathrm{Ess}(G)$ with no outgoing or undirected edges. Then $J(G \backslash v_n) = J(G)\backslash v_n$. In other words, interventions do not affect vertices that have no outgoing or undirected edges. \end{lemma} \begin{lemma}\label{lem-subgraph-ignore} Let $G'$ be the induced subgraph of $G$ consisting of all vertices $v_i$ such that neither $v_i$ nor any descendant of $v_i$ has adjacent undirected edges. Then $J(G \backslash G') = J(G)\backslash G'$. \end{lemma} \begin{proof} The proof follows by applying Lemma \ref{oneVertexAlg} recursively to $G$. \end{proof} \begin{definition} We say that an algorithm $A$ for performing interventions on an essential graph is \textbf{downstream-independent} if the interventions it performs on $G$ are identical to the ones it performs on $G \backslash G'$. Note that $G \backslash G'$ is the result of the following process: starting with $G$, recursively remove vertices that have no undirected or outgoing edges. \end{definition} \begin{theorem}\label{downstreamConvergence} Let $A$ be a downstream-independent algorithm, and let $Y(r,A)_i$ be the expected number of undirected edges in the essential graph of the random order DAG $G_i$ after performing $r$ interventions according to algorithm $A$.
Then \begin{align} \lvert \mathbb{E}(Y(r,A)_{i+1}) - \mathbb{E}(Y(r,A)_{i}) \rvert \leq \rho\, i\, (1 - \rho (1 - \rho))^{i-1} \, \frac{i(i+1)}{2}. \end{align} \end{theorem} \textbf{Remark:} Suppose an algorithm $A$ optimizes some score function based on the essential graph alone, as a proxy for minimizing the expected number of unoriented edges after $r$ interventions. By Lemma \ref{lem-subgraph-ignore}, such an algorithm is likely to make decisions independent of $G'$ in general. An example is the algorithm that greedily picks the intervention that reduces the expected number of unoriented edges, where the expectation is over the uniform distribution of DAGs compatible with the essential graph. \begin{theorem}\label{algoasymp} Let $A$ be an algorithm that is downstream-independent and chooses interventions based on $\mathrm{ess}(G)$. Let $Y(r,A)_n$ be the number of undirected edges after $r$ interventions made by the algorithm $A$. Then, \begin{align*} \mathbb{E}[Y(r,A)_n] &\leq \mathbb{E}[Y(r,A)_{\infty}] \\ &\leq \mathbb{E}[Y(r,A)_n] + \sum_{i=n}^{\infty} \rho\, \frac{i^2(i+1)}{2}\, (1-\rho(1-\rho))^{i-1}. \end{align*} Here, $\lim \limits_{n \rightarrow \infty} Y(r,A)_n = Y(r,A)_{\infty}$ and this limit exists. \end{theorem} \begin{proof} This is a direct corollary of the previous results in this section, together with monotonicity and limit-existence arguments analogous to those for $X_n(r)$. \end{proof} \begin{figure*}[htbp] \subfloat[Average number of unoriented edges, $X_n(r)$, in the essential graph associated with order DAGs of density $\rho$ after $r$ interventions, averaged over $2000$ samples; the highlighted region corresponds to points within two standard deviations of the mean. \label{fig1}] {\includegraphics[width=8cm]{unoriented_edges.png}} \hfill \subfloat[Average logarithm of the size of the I-MEC for order DAGs of density $\rho$ after $r$ interventions, averaged over $2000$ samples; the highlighted region corresponds to points within two standard deviations of the mean. \label{fig3}]{\includegraphics[width=8cm]{log_mec_sizes.png}} \caption{We plot Monte-Carlo estimates of $\mathbb{E}[Y_n(r,A)]$, i.e.~the number of unoriented edges in the essential graph of a random order DAG after $r$ interventions, together with $\mathbb{E}[\log_{2} L_n(r,A)]$, i.e.~the logarithm of the size of the $I$-MEC after $r$ interventions. } \end{figure*} \section{Discussion of the Results} Theorems \ref{asymp} and \ref{algoasymp} provide upper bounds in terms of quantities computable by Monte-Carlo simulation at finite $n$ from random order DAGs, plus constants such as $\epsilon_n$ that are exponentially small in $n$. If the empirical means of the finite-$n$ quantities appearing in these upper bounds can be estimated with very high precision, then we can characterize the constant by which these asymptotic quantities are upper bounded. In the following section, we plot the empirical means of these finite-$n$ quantities, or upper bounds to them, for very large $n$ and show that, when combined with the above bounds, the asymptotic quantities tend to a constant. \subsection{Precise Calculation of High Confidence Upper Bounds on Asymptotic $\log$-MEC Size for Random Order DAGs of Density $\rho=0.5$ } We demonstrate how to obtain confidence intervals on the expected asymptotic means $\mathbb{E}[X_\infty]$ and $\mathbb{E}[\log_2 (L_\infty)]$ using our bounds and Monte Carlo simulations.
\textbf{Details of the Numerical Experiment:} We sampled $X_{30}$ $S=100000$ times for random order DAGs with $\rho=0.5$. The sample variance we observed was $V=7.054$, while the empirical mean was $M=3.394$. We use an empirical Bernstein bound for $\mathbb{E}[X_{30}]$ and show the following bound on the expected value of $X_{\infty}$: \begin{theorem} \label{thm-upp-bounds} With probability at least $0.99$ over the randomness in our numerical experiments over $S=100000$ samples, we have: $\mathbb{E}[\log_2 (L_{\infty})] \leq \mathbb{E}[X_{\infty}] \leq 3.497 $. \end{theorem} This is an illustration of how our upper bounds, empirical Bernstein bounds and Monte Carlo simulation can be combined to give highly precise guarantees for all the considered metrics. \section{Numerical Results}\label{numerical-sec} We compute and plot the empirical means of the following observational metrics: a) $X_n$, b) $isuEss_n$, c) $I_n$, and d) $\log_{2} L_n$. We also plot the empirical means of the following interventional metrics: a) $Y(r,A)_n$, b) $isuEss(r,A)_{n}$, c) $\log_{2} L(r,A)_{n}$, and d) $I(r,A)_n$. These interventional metrics are obtained on the essential graph $\mathrm{Ess}(G_n,A)$ produced by the greedy algorithm $A$ that operates as follows: first pick the node $I_1$ that orients the most edges; then, for each consecutive $r$, pick $I_r$ that orients the most edges in $G_n$ given the ($\{ I_1, \ldots, I_{r-1} \}$-)essential graph. \begin{figure*}[htbp] \subfloat[Probability that the essential graph associated with an order DAG of density $\rho$ can be uniquely identified after $r$ interventions, averaged over $2000$ samples; the highlighted region corresponds to points within two standard deviations of the mean. \label{fig2}]{\includegraphics[width=8cm]{percent_updags.png}} \hfill \subfloat[Empirical mean of the number of interventions needed to fully identify a random order DAG of density $\rho$, averaged over $2000$ samples; the highlighted region corresponds to points within two standard deviations of the mean. \label{fig4}]{\includegraphics[width=8cm]{fully_orienting_interventions.png}} \caption{We plot Monte-Carlo estimates of $\mathbb{P}(isuEss_n(r,A))$, i.e.~the probability that the essential graph of a random order DAG is equal to the order DAG itself, together with $\mathbb{E}[I_n(r,A)]$, i.e.~the number of single-node interventions required to fully orient a random order DAG.} \end{figure*} \textbf{Graph Generation:} We generated 2,000 random order DAGs for each number of nodes $n \in \{3, 5, 10, 30, \ldots, 110 \}$ and each density $\rho \in \{ 0.1, 0.2, 0.5, 0.7\}$. For each DAG, we used the open-source \texttt{causaldag} package in Python to compute the number of DAGs in the ($\mathcal{I}$-)MEC and the number of undirected edges in the ($\mathcal{I}$-)essential graph obtained by applying algorithm $A$ on $G_n$. \textbf{Results Established:} The plots serve two purposes: a) the empirical mean plots (Figs. \ref{fig1}-\ref{fig4}) and the box plots (Figs. \ref{fig7}-\ref{fig12}) of all the estimated quantities provide an idea of what values the asymptotic quantities are bounded by, given the formula for $\epsilon_n$ in Theorem \ref{asymp}; for a more refined high-confidence upper bound, for large enough $n$, an analysis similar to Theorem~\ref{thm-upp-bounds} can be done (a sketch of this computation is given below). b) They help corroborate the monotonicity results we have derived analytically.
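To make the construction behind Theorem~\ref{thm-upp-bounds} concrete, the following Python sketch combines a sample mean and variance with an empirical Bernstein bound of the Maurer--Pontil type. The a-priori range bound $b$ on $X_{30}$ (here the trivial bound $30 \cdot 29/2 = 435$) and the exact constants of the concentration inequality are our assumptions for illustration, not necessarily those used in the theorem.
\begin{verbatim}
import math

def empirical_bernstein_ub(mean, var, S, b, delta):
    # E[X] <= mean + sqrt(2*var*log(2/delta)/S) + 7*b*log(2/delta)/(3*(S-1))
    # with probability >= 1 - delta, for i.i.d. samples bounded in [0, b].
    log_term = math.log(2.0 / delta)
    return (mean + math.sqrt(2.0 * var * log_term / S)
            + 7.0 * b * log_term / (3.0 * (S - 1)))

# Values reported for the experiment: S = 100000 samples of X_30, rho = 0.5.
ub_finite = empirical_bernstein_ub(mean=3.394, var=7.054,
                                   S=100_000, b=435, delta=0.01)
# Adding the gap bound eps_30 of Theorem (lazyEXbound) then upper bounds
# the asymptotic mean E[X_inf].
print(ub_finite)
\end{verbatim}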
\textbf{Bounding Interventional Metrics:} We observe that the interventional metrics plotted above provide upper bounds on $X_n(r)$, $L_n(r)$, $isuEss_n(r)$ and $I_n(r)$, which are based on the set of optimal interventions for $G_n$ that minimize the number of unoriented edges given $G_n$. Therefore, by Theorem \ref{asymp}, they provide valid upper bounds together with $\epsilon_n$. The shaded regions in each plot are the estimates of the 95\% confidence intervals as given by the \texttt{scipy.stats} function \texttt{bayes\_mvs}. Figure \ref{fig1} plots the empirical means of $X_n$ and $Y(r,A)_n$. We observe that $\bar{X}_n$ increases sharply for $\rho \geq 0.5$ and plateaus near $n = 10$, while $\bar{X}_n$ increases more gradually for $\rho < 0.5$, with a higher limit for sparse graphs. For all densities, the empirical mean of $Y(r,A)_n$ increases more gradually than the observational $\bar{X}_n$. Figure \ref{fig3} plots the empirical means of $\log L_n$ and $\log L(r,A)_n$. We again observe sharper increases and lower plateaus for the higher densities, $\rho = 0.5$ and $\rho = 0.7$, compared to more gradual rises and higher plateaus for the lower densities. Whereas in Figure~\ref{fig1}, $\bar{X}_n$ stabilizes at similar values for $\rho = 0.2$ and $\rho = 0.5$, in Figure~\ref{fig3}, the empirical mean of $\log L_n$ is greater for $\rho = 0.2$ than for $\rho = 0.5$. This indicates that each unoriented edge contributes to more MECs when the density is low. Figure \ref{fig2} demonstrates the monotonicity of the empirical means of $isuEss_n$ and $isuEss(r,A)_n$. We observe that the empirical mean of $isuEss_n$ drops sharply for all densities, with $\rho = 0.5$ appearing to have the highest limit. The difference in behavior of the empirical means of $isuEss(1,A)_n$ and $isuEss(2,A)_n$ for different densities is noteworthy. For sparser graphs, 1 or 2 interventions do not significantly increase the expected ability to identify the DAG; for instance, when $\rho = 0.1$, the expected number of fully identified DAGs barely changes from the observational case after $n = 30$. However, for denser graphs, such as for $\rho = 0.5$ and $\rho = 0.7$, even 1 intervention is sufficient to learn roughly 50\% and 60\% of the sampled graphs, respectively, and 2 interventions are sufficient to learn nearly all of them, even when $n = 110$. This result can be explained by the fact that sparse graphs often consist of multiple connected components, and interventions in one component have no effect on other components. Finally, Figure~\ref{fig4} demonstrates the monotonicity of the empirical mean of $I_n$. Surprisingly, it takes very few interventions to orient even large, sparse graphs. \section{Conclusion} We provided sharp upper and lower bounds on the asymptotic expected $\log$-MEC size and on the number of interventions needed to fully orient a random order DAG after a constant number $r=0,1,2,\ldots$ of initial interventions. There are various other metrics associated with $I$-MECs of random order DAGs that we precisely quantify in this work. Our methods relied on analytical bounds on the asymptotic quantities based on coupling arguments and on exploiting the properties of Meek rules. This, together with Monte Carlo simulations at finite sizes, establishes quantifiable and precise bounds.
Our results mean that a walk over the space of graphs (a larger search space but with simpler moves) would not be more time-consuming than a walk over the space of Markov equivalence classes (more complicated moves) when implementing greedy search for structure learning. This is because the asymptotic $\log$-MEC size goes to a constant for dense graphs. In addition, our results imply that, in general, relatively few interventions are needed to identify dense causal networks. Investigations like this for random graphs with various levels of sparsity, and relaxations of the causal sufficiency assumption, are interesting directions for future work. \section*{Acknowledgements} \vspace{-0.4cm} C.~Uhler was partially supported by NSF (DMS-1651995), ONR (N00014-17-1-2147 and N00014-18-1-2765), IBM, and a Sloan Fellowship. \bibliographystyle{abbrvnat}
{ "attr-fineweb-edu": 1.179688, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd685qsBB3KMWcXlw
\section{Dirichlet distribution} \label{Sec_Dirichlet} \input{Factor_LM} \section{Probabilistic PCA/POD} \label{ppca} In this section, we give a brief review of Probabilistic PCA (PPCA) \citep{kn:tipping02}, which provides a density estimation framework for POD (or PCA/LSA) under hypotheses that are different from those given in section~\ref{Sec_PLSA} for PLSA. We will assume that the data is zero-centered without loss of generality. The basic idea of PPCA is to assume a Gaussian probability model for the observed data $\snapflu$. In that formulation (see section~\ref{Sec_PLSA}), the motif-cell matrix $\tilde{\Phi}$ of dimension $N_x \times \nmode$ does not have a probabilistic interpretation, but relates each noisy observation to a set of $\nmode$ independent normalized Gaussian variables following \begin{equation} \snapflu = \tilde{\Phi} \tilde{a} + \epsilon \label{ppcamodel} \end{equation} where the variables $\tilde{a}$ are defined to be independent and Gaussian with unit variance and $\epsilon$ represents noise. An important assumption is that the noise model should be isotropic, \[ \left< \epsilon \epsilon^T \right> = \sigma^2 I, \] so that all the dependences between the observations are contained in $\tilde{\Phi}$. One can then show using equation~(\ref{ppcamodel}) that \[ p(\snapflu) = {\cal N}(0, C ) \] where $C= \tilde{\Phi} \tilde{\Phi}^T + {\sigma}^2 I $ is the observation covariance matrix of dimension $N_x^2$. The issue is to determine $\tilde{\Phi}$ and $\sigma$, given the observations of $\snapflu$. Under the assumption of isotropic Gaussian noise, \citet{kn:tipping02} showed that the maximum likelihood estimators $\hat{\Phi}$ and $\hat{\sigma}^2$ can be obtained from standard POD analysis on the $\nsnap$ snapshots. They showed that \begin{equation} \hat{\Phi}= \Phi ( \Lambda_{\nmode} - \sigma^2 I_{\nmode})^{1/2} R \end{equation} where $\Phi $ contains the first $\nmode$ eigenvectors of the sampled covariance matrix $\cordual$, where $\cordual$ was defined in equation~\ref{defcordual} (note that the dimension of $\cordual$ is $\nsnap^2$), $\Lambda_{\nmode}$ is a diagonal matrix containing the $\nmode$ first eigenvalues of $\cordual$, and $R$ is an arbitrary rotation matrix. An estimate for the error variance can then be given by \begin{equation} \hat{\sigma}^2 = \frac{1}{\nsnap - \nmode }\sum_{j=\nmode + 1}^{\nsnap} \eigval_j, \end{equation} which represents the variance lost in the projection, averaged over the discarded dimensions. \section{Introduction} The introduction of coherent structures \citep{kn:klineandfriends,kn:townsend47} has represented a major paradigm shift for turbulence theory and has had a significant impact in various related fields, ranging from geophysical flows to industrial applications. Coherent structure identification has become a key step towards modelling and controlling wall-bounded turbulent flows. However, a recurrent stumbling block is the absence of a precise definition of structures, as is apparent from several comprehensive reviews \citep{kn:cantwell81,kn:robinson,kn:jimenez13, kn:dennis15}. Studies originating in the 1960's \citep{kn:klineandfriends,kn:kimandfriends} have established that most of the turbulence in the near-wall region occurred in a highly intermittent manner in both space and time, during what was originally termed ``bursting events''.
Quadrant analysis of the Reynolds stress in the plane of streamwise and wall-normal fluctuations $(u',v')$ was introduced by \citet{kn:wallabrodkey, kn:willmarthlu72} to characterize these events. Bursting events were found to be associated with low-speed streaks being lifted away from the wall, as well as with sweeping motions of high-speed fluid towards the wall, which respectively correspond to Quadrant II ($u'<0, v'>0$) and Quadrant IV ($u'>0, v'<0$) events. The two quadrants corresponding to $-u'v'>0$ can be termed $Q_-$ events and represent the major contribution to the Reynolds stress \citep{kn:wallace2016}. An interpretation of these bursts is that they are the signature of coherent structures or eddies advected by the mean field. Determining the characteristics of these structures has been the object of considerable effort \citep{jimenez18}. A central element of wall turbulence theory is the attached eddy model, reviewed in detail by \citet{kn:marusicmonty19}. The model is based on the idea that turbulence arises as a field of randomly distributed eddies, identified as organized flow patterns which extend to the wall, in the sense that their characteristics are influenced by the wall. Further assumptions require that the entire geometry of the eddies scales with the wall distance, with a constant characteristic velocity scale. The model was extended by \citet{kn:perrychong82}, who introduced the idea of a hierarchy of discrete scales, with an inverse-scale probability distribution. \citet{kn:woodcockmarusic15} showed that this inverse probability distribution was in fact a direct consequence of the self-similarity of the eddies. Further extensions of the model for the logarithmic layer include a wider variety of structures, such as wall-detached ones \citep{kn:perry95, hu20}. Detection of self-similarity in boundary layers has been the focus of several experimental studies, such as that of \citet{kn:baars17}, who used spectral coherence analysis to provide evidence of self-similar structures in the streamwise velocity fluctuations of pipe flow. Numerical simulation has proved a powerful tool to explore three-dimensional flow fields using a clustering approach. Examples include the work of \citet{kn:delalamo06}, who showed that the logarithmic region of turbulent channel flow was organized in self-similar vortex clusters, and of \citet{kn:lozanoduran12}, who developed a three-dimensional extension of quadrant analysis to detect self-similarity in numerical data at various Reynolds numbers. More recently, wall-attached structures were identified in the streamwise fluctuations of a turbulent boundary layer \citep{kn:hwang18} as well as in pipe flow \citep{kn:hwang19}. The structures were shown to scale with the wall distance, while their population density scales inversely with the distance to the wall. \citet{chengcheng2020} detected the signature of wall-attached eddies in the streamwise and spanwise velocity fluctuations in turbulent channel flow simulations at low Reynolds numbers. Evidence of self-similarity has also been found in the context of resolvent analysis \citep{sharmamckeon13}. It has also emerged from Proper Orthogonal Decomposition (POD) results, such as channel flow simulations at low Reynolds numbers \citep{kn:jfe10,kn:pof17}, or pipe flow experiments \citep{hellstrom16}.
The increase of available data, whether through numerical simulation or experiment, has strengthened the need for new identification methods, such as those provided by machine learning (see \citet{kn:brunton2020} for a review). The challenge is to extract structural information about the data without pre-existing knowledge, which defines an {\it unsupervised learning} problem. Solutions to this problem should be robust, easy to implement and scalable. One example of an unsupervised learning method that meets these criteria is Proper Orthogonal Decomposition \citep{kn:lumleyPOD}, a now classical approach to decompose turbulent fields. POD is a statistical technique which provides an objective representation of the data as a linear combination of spatial eigenfunctions, which can be hierarchized with respect to a given norm. Although the reconstruction is optimal with respect to this norm \citep{kn:HLB}, a potential limitation of the decomposition is that the physical interpretation of the eigenfunctions is not clear. In particular, in the case of homogeneous statistics, the eigenfunctions are spatial Fourier modes over the full domain (see \cite{kn:HLB} for a proof), even though instantaneous patterns are strongly localized in space. The connection between POD spatial eigenfunctions and observed coherent structures is therefore not necessarily straightforward. Moreover, the amplitudes of the spatial eigenfunctions are generally strongly inter-dependent, even though they are by construction uncorrelated. This makes it difficult to give a physical meaning to individual amplitudes, especially in the absence of a probabilistic framework in which to interpret them. In this paper we consider such a framework to explore an alternative unsupervised learning approach called Latent Dirichlet Allocation (LDA), which can be derived from POD \citep{kn:hofmann99}. LDA is a generative probabilistic model, that is, a probabilistic model that mimics the characteristics of a collection of data. It is based on a soft clustering approach, which was first developed for text mining applications \citep{kn:blei03}, but has been extended to other fields in recent years \citep{kn:aubert13}. The goal of LDA \citep{kn:blei03} is to find short descriptions of the members of a collection that enable efficient processing of large collections while preserving the essential statistical relationships that are useful for basic tasks such as classification, novelty detection, summarization, and similarity and relevance judgments. LDA is a three-level hierarchical Bayesian model, in which each member of a collection is modeled as a finite mixture over an underlying set of topics or motifs. In the field of natural language processing, the dataset to which LDA is applied consists of a set of documents, each of which is considered as a ``bag-of-words'', that is, an unordered set of words taken from a finite vocabulary. A particular word may appear several times in the document, or not appear at all. The number of occurrences of each vocabulary word in a document can be seen as an entry of a sparse matrix whose rows correspond to the vocabulary words and whose columns correspond to the documents. Based on this typically sparse word count matrix, the classification method returns a set of $\ntopic$ {\it topics}, where the topics are latent variables inferred from the word counts in the documents and the number of topics $\ntopic$ is a user-defined parameter.
Unlike ``hard'' clustering, such as the K-means approach \citep{kn:macqueen1967}, where each document is assigned to a specific topic, LDA represents each document as a mixture of topics, where the coefficients of the mixture represent the probabilities of the topics in the document. An interesting application of the LDA method was carried out for a dataset of images by \citet{kn:griffiths04}. The dataset considered was a collection of gray-scale images where each image consists of an array of pixels, each of which is associated with a gray level. In this framework, each image is the equivalent of a document, each pixel represents an individual vocabulary word, and the gray-level intensity measured at each pixel is taken as the analog of the word count matrix entry (the rows of the matrix now represent the pixels, while the columns represent the images). The sum of the intensities over the pixels, which will be called throughout the paper the {\it total intensity}, is the analog of the total number of words observed in the document. Given a set of original patterns constituting the topics or {\it motifs}, a collection of synthetic images was generated from random mixtures of the patterns. It was shown that LDA was able to recover the underlying patterns from the observations of the generated images. Following \cite{kn:griffiths04}, the idea of the paper is to look for evidence of coherent structure in turbulent flow snapshots by identifying LDA topics or {\it motifs}. The relevant gray-level intensity is based on the value of $Q_-$ (unlike in \citet{kn:griffiths04}'s work, it corresponds to a physical field). We thus propose the following analogy: each scalar field observed in a collection of snapshots results from a mixture of $\ntopic$ spatial {\it topics}, which will be referred to as {\it motifs} in the remainder of the paper. This can be compared with the standard view that each realization of a turbulent flow is constituted of a random superposition of discrete eddies, characterized by a hierarchy of scales. The paper is organized as follows. We show in Section~\ref{Sec_decomp} how the POD method of snapshots, which is equivalent to Latent Semantic Analysis (LSA), can be generalized to a probabilistic framework (Probabilistic Latent Semantic Analysis or PLSA), which is then further extended into Latent Dirichlet Allocation (LDA) in Section~\ref{Sec_LDA}. Application to the extraction of motifs for a turbulent channel flow is introduced in Section~\ref{Sec_channel} and results are discussed in Section~\ref{Sec_results}. The potential of the approach for flow reconstruction and flow generation is considered in Section~\ref{Sec_reconstruction} before Section~\ref{Sec_Conclusion} closes the paper. \input{Decomp_LM_BP} \section{Latent Dirichlet Allocation} \label{Sec_LDA} Latent Dirichlet Allocation (LDA) extends PLSA to address its limitations. Its specific features are: \begin{itemize} \item the introduction of a probabilistic model for the collection of snapshots: each snapshot is now characterized by a distribution over the structures, which will now be called {\it motifs}; \item the use of Dirichlet distributions to model both motif-cell and snapshot-motif distributions. \end{itemize} The Dirichlet distribution is a multivariate probability distribution over the space of multinomial distributions.
It is parametrized by a vector of positive-valued parameters $\balpha=\left(\alpha_1,\ldots,\alpha_N\right)$: \[ \proba\left(x_1, \ldots, x_N ; \alpha_1, \ldots, \alpha_N\right) =\frac{1}{B(\balpha)} \prod_{\imode=1}^{N} x_\imode^{\alpha_\imode-1}, \] where $B$ is a normalizing constant, which can be expressed in terms of the Gamma function $\Gamma$: \[ B(\balpha)= \frac{\prod_{\imode=1}^{N} \Gamma(\alpha_\imode)}{\Gamma(\sum_{\imode=1}^{N} \alpha_\imode)}. \] The support of the Dirichlet distribution is the set of $N$-dimensional discrete distributions, which constitutes the $(N-1)$-simplex. Introduction of the Dirichlet distribution allows us to specify the prior belief about the snapshots. The Bayesian learning problem is now to estimate $\proba(\btopic_\imode, \bsnap_\isnap)$ and $\proba(\vecx_\ix, \btopic_\imode)$ from $\matsnap$ given our prior belief $\balpha$, and it can be shown that Dirichlet distributions offer a tractable, well-posed solution to this problem \citep{kn:blei03}. LDA is therefore based on the following representation: \begin{enumerate} \item Each motif $\btopic_\imode$ is associated with a multinomial distribution $\wtdistrib_\imode$ over the grid cells ($\proba\left(\vecx_\ix|\btopic_\imode\right)= \wtdistribscal_{\ix, \imode} $). This distribution is modeled with a Dirichlet prior parametrized with an $\nx$-dimensional vector ${\boldsymbol \beta}$. The components $\beta_\ix$ of ${\boldsymbol \beta}$ control the sparsity of the distribution: values of $\beta_\ix$ larger than 1 correspond to evenly dense distributions, while values lower than 1 correspond to sparse distributions. In all that follows, we will assume a non-informative prior, meaning that ${\boldsymbol \beta} = \beta \boldone_\nx$. \item Each snapshot $\bsnap_\isnap$ is associated with a distribution of motifs $\btheta_\isnap$ such that $\theta_{\imode,\isnap} = \proba(\btopic_\imode|\bsnap_\isnap)$. The probabilities of the motifs add up to $1$ in each snapshot. This distribution is modeled with an $\nmode$-dimensional Dirichlet distribution of parameter $\balpha$. The magnitude of $\balpha$ characterizes the sparsity of the distribution (low values of $\alpha_\imode$ correspond to snapshots with relatively few motifs). The same assumption of a non-informative prior leads us to assume $\balpha=\alpha \boldone_\nmode$. \end{enumerate} The generative process performed by LDA with $\ntopic$ motifs is the following: \begin{enumerate} \item For each motif $\btopic_\imode$, a cell-motif distribution $\wtdistrib_\imode$ is drawn from the Dirichlet distribution of parameter $\beta$. \item For each snapshot $\bsnap_\isnap$: \begin{itemize} \item a snapshot-motif distribution $\btheta_{\isnap}$ is drawn; \item each intensity unit $1 \le \iunit \le N_\isnap$, where $N_\isnap= \sum_{\ix} \snap_{\ix, \isnap}$ is the total intensity, is then distributed among the different cells as follows: \begin{itemize} \item a motif $\btopic_{\imode}$ is first selected from $\btheta_\isnap$ (motif $\btopic_\imode$ occurs with probability $\theta_{\imode, \isnap}$ in the snapshot), \item for this motif, a cell $\ix$ is chosen among the cells using $\wtdistribscal_{\ix, \imode}$ and the intensity associated with cell $\ix$ is incremented by 1.
\end{itemize} \end{itemize} \end{enumerate} The generative process can be summarized as follows: \begin{algorithm}[H] \SetAlgoLined \For {each of the $\ntopic$ {motifs} $\imode$} { sample $\wtdistrib_\imode \sim \Dir(\beta)$ } \For {each of the $\nsnap$ {snapshots} $\isnap$} { sample $\btheta_\isnap \sim \Dir(\alpha)$ \\ \For {each of the $N_\isnap$ {intensity units} } { 1. sample a motif $\btopic_{\imode}$ from $\btheta_{\isnap}$ \\ 2. for this motif sample a cell $\ix$ from $\wtdistrib_{\imode}$ }} \caption{LDA Generative Model.} \end{algorithm} The snapshot-motif distribution $\btheta_{\isnap}$ and the cell-motif distribution $\wtdistrib_\imode$ are determined from the observed $\bsnap_\isnap$. They are respectively $\nmode$- and $\nx$-dimensional categorical distributions. Finding the distributions $\btheta_{\isnap}$ and $\wtdistrib_\imode$ that are most compatible with the observations is an inference problem that can be solved either by a variational formulation \citep{kn:blei03} or by a Gibbs sampler \citep{kn:griffiths02}. In the variational approach, the objective function to minimize is the Kullback-Leibler divergence. The solution {\it a priori} depends on the number of motifs and on the values of the Dirichlet parameters $\alpha$ and $\beta$. We conclude this section with two remarks. \begin{enumerate} \item LDA can generalize to new snapshots more easily than PLSA, due to the snapshot-motif distribution formalism. In PLSA, the snapshot probability is a fixed point in the dataset, which cannot be estimated directly if it is missing. In LDA, the dataset serves as training data for the Dirichlet distribution of snapshot-motif distributions. If a snapshot is missing, it can easily be sampled from the Dirichlet distribution instead. \item An alternative viewpoint interprets LDA as a regularized matrix factorization method. This is further discussed in Appendix~\ref{Sec_MF}. \end{enumerate} \section{Application of LDA to turbulent flows} \label{Sec_channel} \subsection{Numerical configuration} The idea of this paper is to apply this methodology to snapshots of turbulent flows in order to determine latent motifs from observations of $Q_-$ events. We will consider the configuration of turbulent channel flow at a moderate Reynolds number of $R_{\tau}= u_{\tau} h/\nu= 590$ \citep{kn:moser99,kn:muralidhar19}, where $R_{\tau}$ is the Reynolds number based on the fluid viscosity $\nu$, channel half-height $h$ and friction velocity $u_{\tau}$. Wall units based on the friction velocity and fluid viscosity will be denoted with a subscript $_+$. The streamwise, wall-normal and spanwise directions will be referred to as $x,y$ and $z$ respectively. The horizontal dimensions of the numerical domain are $(\pi, \pi/2)h$. Periodic boundary conditions are used in the horizontal directions. The resolution of $256^3$ points is based on a regular spacing in the horizontal directions and a hyperbolic tangent stretching function for the vertical direction. The configuration is shown in Figure~\ref{config}. More details about the numerical simulation can be found in \citet{kn:muralidhar19}. \subsection{LDA inputs} In this section, we introduce the different parameters of the study. The Python library \texttt{scikit-learn} \citep{scikit-learn} was used to implement LDA (a minimal usage sketch is given below). The sensitivity of the results to these parameters will be examined in a subsequent section. We first focus on 2-D vertical subsections of the domain, then present 3-D results.
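Anticipating the input definitions detailed in the following paragraphs, the \texttt{scikit-learn} pipeline can be sketched as follows; the array \texttt{tau\_minus} is a synthetic stand-in for the DNS data, and the grid size and random seed are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Stand-in for max(-u'v', 0) on a flattened plane grid, with shape
# (n_snapshots, n_cells); in practice this comes from the DNS snapshots.
tau_minus = rng.gamma(shape=0.5, scale=1.0, size=(800, 32 * 32))

A = 40               # rescaling factor
n_topics = 96        # user-defined number of motifs
X = np.floor(A * tau_minus).astype(int)    # digitized intensity matrix

lda = LatentDirichletAllocation(n_components=n_topics,
                                doc_topic_prior=1.0 / n_topics,   # alpha
                                topic_word_prior=1.0 / n_topics,  # beta
                                random_state=0)
theta = lda.fit_transform(X)  # snapshot-motif distributions (rows sum to 1)
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
# phi[k] is the motif-cell distribution of motif k over the grid cells.
\end{verbatim}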
The vertical extent of the domain of investigation was the half-channel height. Since this is an exploration into a new technique, a limited range of scales was considered in the horizontal dimensions: the spanwise dimension of the domain was limited to 450 wall units. The streamwise extent of the domain was in the range of 450-900 wall units. The number of realizations considered for 2-D analysis was $\nsnap=800$, with a time separation of 60 wall time units. The number of snapshots was increased to 2400 for 3-D analysis. The scalar field $\bsnap$ of interest corresponds to $Q_-$ events. It is defined as the positive part of the product $-u'v'$, where fluctuations are defined with respect to an average taken over all snapshots and horizontal planes. The LDA procedure requires that the input field consist of integer values: it was therefore rescaled and digitized, and the scalar field $\snap$ was defined as: \[ \snap = [A \tau_{-}], \] where $\tau_-= \mathrm{max}\left( -u'v',0 \right)$ and $[\cdot]$ represents the integer part. The rescaling factor $A$ was chosen in order to yield a sufficiently large, yet still tractable, total intensity. In practice we used $A=40$, which led to a total intensity $\sum_\isnap \sum_{\ix} \snap_{\ix,\isnap}$ of about $10^8$ for plane sections. The effect of the rescaling factor will be examined in a subsequent section. LDA is characterized by a user-defined number of motifs $\ntopic$, a parameter $\alpha$ which characterizes the sparsity of the prior Dirichlet snapshot-motif distribution, and a parameter $\beta$ which characterizes the sparsity of the prior Dirichlet motif-cell distribution. Results were obtained assuming uniform priors for $\alpha$ and $\beta$ with a default value of $1/\ntopic$. The sensitivity of the results to the priors will be evaluated in Section~\ref{Sec_sensitivity}. \subsection{LDA outputs} For a collection of $\nsnap$ snapshots and a user-defined number of motifs $\ntopic$, LDA returns $\ntopic$ motif-cell distributions $\wtdistrib_\imode$ and $\nsnap$ snapshot-motif distributions $\btheta_\isnap$. Each {\it motif} is defined by a probability distribution $\wtdistrib_\imode$ which assigns a probability to each grid cell. It is therefore analogous to a structure or a portion of a structure, since it contains spatial information; note, however, that its definition is different from standard approaches. The motif-snapshot distribution $\btheta_\isnap$ characterizes the prevalence of a given motif in the snapshot. As will be made clear below, the motifs most often consist of single connected regions, although occasionally a couple of distinct regions were identified. In most cases, the motifs can thus be characterized by a characteristic location $\vecx^c$ and a characteristic dimension in each direction $L_j$, $j \in \{x,y,z\}$. To determine these characteristics, we first define for each motif a mask associated with a domain $D$. The origin of the domain was defined as the position $\vecx_\mathrm{m}$ corresponding to its maximum probability $\proba_\mathrm{m} = \wtdistrib_\imode (\vecx_\mathrm{m})$. The dimensions of the domain in each direction (for instance $L_x$) were defined as the segment extending from the domain origin over which the probability remained larger than $1\%$ of its maximum value $\proba_\mathrm{m}$.
The position and characteristic dimension of a motif, for instance in the $x$-direction, are then defined as: \begin{eqnarray} x^c & = & \frac {\int_{D} x \wtdistrib_\imode \ddroit D }{\int_{D} \wtdistrib_\imode \ddroit D }, \\ L_x^2 & = & 2 \frac {\int_{D} (x - x^c)^2 \wtdistrib_\imode \ddroit D }{\int_{D} \wtdistrib_\imode \ddroit D }. \label{defLx} \end{eqnarray} Analogous definitions can be given for the other directions. \begin{figure} \centerline{ \includegraphics[height=5cm]{config.png}} \caption{Numerical domain $D$. The shaded surfaces correspond to the two types of planes used in the analysis. The volume considered for 3D analysis is indicated in bold lines.} \label{config} \end{figure} \section{Results} \label{Sec_results} \subsection{Vertical planes} In order to investigate in detail the vertical organization of the flow, LDA was first applied to vertical sections of the flow. Both cross-flow $(y,z)$ and longitudinal $(x,y)$ sections were considered. Due to the horizontal homogeneity of the flow, we do not expect significant changes in the cell-motif and the snapshot-motif distributions when the sections are translated in the horizontal direction. \subsubsection{Cross-flow planes} The dimensions of the cross-sections were $d_{z+}=450$ and $d_{y+}=590$. Figure~\ref{verticaltopic} shows selected motifs for a total number of motifs $\ntopic=96$ on a vertical plane at $x=0$. The motifs consist of isolated regions, the dimensions of which increase with the wall distance. This is confirmed by Figure~\ref{LyLzvert}, which represents the characteristic sizes of the LDA motifs for a succession of four vertical planes separated by a distance of $100$ wall units. We point out that observing motifs which are detached from the wall does not rule out the presence of wall-attached structures, as such motifs would be consistent with cross-sections of wall-attached structures elongated in the streamwise direction. For all motif numbers considered (results for the three motif numbers $\ntopic=48, 96, 144$ are shown in Figure~\ref{LyLzvert}), it was found that both the spanwise and vertical dimensions increase linearly with the wall distance in the region $y_+> 100$. Again, this is in agreement with \citet{kn:townsend61}'s hypothesis of a hierarchy of structures of increasing dimensions, which was also confirmed numerically by \citet{kn:flores10a}. The aspect ratio $L_z/L_y$ is constant with the wall distance above $y_+>100$, with a typical value of about 1. We note that \citet{kn:lozanoduran12} found, with a different definition, that $Q_-$ events were characterized by nearly equal spanwise and vertical sizes $\Delta z \sim \Delta y$, while \citet{kn:delalamo06} found a scaling of $\Delta z \sim 1.5 \Delta y$ for vortex clusters. \begin{figure} \includegraphics[height=3.5cm]{FIGURES/LDA/verticaltopica_x01.png} \\ \includegraphics[height=1.1cm]{FIGURES/LDA/colorbara_vert.png} \\ \includegraphics[trim=0.2cm 0 0 0, clip, height=3.5cm]{FIGURES/LDA/verticaltopicb_x01.png} \\ \includegraphics[height=1.1cm]{FIGURES/LDA/colorbarb_vert.png} \\ \caption{Selected motifs in a cross-flow plane for a number of motifs $\ntopic=96$.} \label{verticaltopic} \end{figure} Figure~\ref{histzvert} (left) shows the distribution $\proba(y_\mathrm{m})$ of the vertical location of the motif maximum probability. Comparison of two different plane locations $x$ confirms that the results do not depend on the location of the plane, which reflects the statistical homogeneity of the flow in the horizontal direction.
The probability decreases as the inverse of the wall distance on all planes. This is in agreement with Townsend's self-similarity hypothesis that the number of structures decreases with the wall distance as $1/y$ \citep{kn:townsend61,kn:woodcockmarusic15}. Figure~\ref{histzvert} (right) shows that a good fit is $\proba(y) \simeq \frac{c}{y} - \gamma$, with $\gamma=0.0006$ and $c = 0.4$. \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/LDA/Lyvertnew.png} & \includegraphics[height=5cm]{FIGURES/LDA/Lzvertnew.png} \\ \includegraphics[height=5cm]{FIGURES/LDA/LyoLzvertnew.png} & \\ \end{tabular} \caption{ Cross-plane motif characteristic sizes; Left: Vertical dimension $L_y$; Right: Spanwise dimension $L_z$; Bottom: Aspect ratio $L_y/L_z$. Each dot corresponds to a motif.} \label{LyLzvert} \end{figure} \begin{figure} \includegraphics[height=5cm]{FIGURES/LDA/histzvert.png} \includegraphics[height=5cm]{FIGURES/LDA/histzvertcompensated.png} \caption{Left: Distribution of the motif maximum location $y^c$; Right: Compensated plot of the distribution for different sets of motifs and different subdomains. The legend is the same for the two figures.} \label{histzvert} \end{figure} \subsubsection{Longitudinal planes} We now examine results for the longitudinal sections $(x,y)$. The streamwise and vertical dimensions of the sections are respectively $d_{x+}=900$ and $d_{y+}= 590$ wall units, although some tests were also carried out for a streamwise extent of 450 units. Figure~\ref{longtopic} presents selected motifs for the longitudinal planes for $\ntopic=96$. As in the cross-flow plane, the dimensions of the motifs increase with the wall distance, which is confirmed by Figure~\ref{LxLylong}. The characteristic dimensions seem essentially independent of the total number of motifs (see also the next section). There is a wide disparity in streamwise characteristic dimensions near the wall. The motif aspect ratio is highest near the wall and decreases sharply in the region $0 < y_+ < 50$. Both the vertical and streamwise dimensions increase linearly with the wall distance in the region $y_+ > 100$, with an aspect ratio $L_x/L_y$ on the order of 2. Figure~\ref{histzlong} shows the distribution of the motif maximum probability location for two different sets of motifs, $\ntopic=48, 96$, and for two domain lengths. The shape of the distribution does not appear to change, and again fits well with $\proba \simeq \frac{c}{y} - \gamma$ with $c=0.4$ and $\gamma=-0.0006$ (Figure~\ref{histzlong} right). \begin{figure} \begin{tabular}{ll} \includegraphics[trim={0.8cm 3cm 0 3cm}, clip, height=3.5cm]{FIGURES/LDA/longitudinaltopica_96.png} & \includegraphics[trim={0.8cm 3cm 0 3cm}, clip, height=3.5cm]{FIGURES/LDA/longitudinaltopicb_96.png} \\ \includegraphics[trim={0 0cm 0 12cm}, clip, height=1.2cm]{FIGURES/LDA/colorbara_96.png} & \includegraphics[trim={0 0cm 0 12cm}, clip, height=1.2cm]{FIGURES/LDA/colorbarb_96.png} \\ \end{tabular} \caption{Selected motifs for a longitudinal plane with $\ntopic=96$ motifs.} \label{longtopic} \end{figure} \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/LDA/Lxlongnew.png} & \includegraphics[height=5cm]{FIGURES/LDA/Lylongnew.png} \\ \includegraphics[height=5cm]{FIGURES/LDA/LxoLylongnew.png} & \\ \end{tabular} \caption{ Longitudinal motif characteristic dimensions; Left: Streamwise dimension $L_x$; Right: Vertical dimension $L_y$; Bottom: Aspect ratio $L_x/L_y$.
Each dot corresponds to a motif.} \label{LxLylong} \end{figure} \begin{figure} \includegraphics[height=5cm]{FIGURES/LDA/histzlong.png} \includegraphics[height=5cm]{FIGURES/LDA/histzlongcompensated.png} \caption{Left: Histogram of the motif location $y^c$; Right: Compensated plot of the histogram for different sets of motifs and different subdomains. The legend is the same for the two figures.} \label{histzlong} \end{figure} \subsection{Sensitivity of the results} \label{Sec_sensitivity} In this section we examine if and how the characteristics of the motifs depend on the various parameters of LDA. We point out that the probabilistic framework of the model makes exact comparison difficult, since there is no convergence in the $L_2$ sense, and the Kullback-Leibler divergence, which measures the difference between two distributions, is not a true metric (see Appendix). The criteria we chose to assess the robustness of the results were the characteristic sizes of the motifs and the distribution of their locations. We first examine the influence of various LDA parameters on the results obtained for cross-flow sections for a constant number of motifs $\ntopic=48$. The reference case corresponded to an amplitude $A=40$, prior values of $\alpha=\beta=1/\ntopic$ and a total number of snapshots $\nsnap=800$. Figure~\ref{inputeffect} (top row) shows that the characteristic dimension is not modified when the number of snapshots is reduced by 50\%, indicating that the procedure has converged. Figure~\ref{inputeffect} (bottom row) shows the characteristic vertical dimension $L_y$ of the structures when the rescaling parameter $A$ was varied. Similar results (not shown) were found for $L_z$. Although some fluctuations were observed in the individual characteristic dimensions, no significant statistical change was observed. Figure~\ref{ldaprior} shows the characteristic dimensions of the structures for different prior choices for $\alpha$ and $\beta$, which govern the sparsity of the representation. No significant statistical trend was modified when $\alpha$ and $\beta$ were varied between $1/10$ and $10$ times their default values of $1/\ntopic$. Figure~\ref{location} shows that the distribution of the maximum location of the motifs follows the same inverse law and does not depend on the choice of parameters chosen for LDA. \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40.png} & \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40tpnh.png} \\ \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40A60.png} & \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40A20.png} \\ \end{tabular} \caption{ Motif characteristic vertical dimension for $\ntopic=48$. Top row: Influence of the dataset size: $\nsnap=800$ (left), $\nsnap=400$ (right); Bottom row: Effect of the rescaling factor: $A=60$ (left), $A=20$ (right). } \label{inputeffect} \end{figure} \begin{figure} \begin{tabular}{cc} $\alpha= 0.1/\ntopic$, $\beta= 1/\ntopic$ & $\alpha= 10/\ntopic$, $\beta= 1/\ntopic$ \\ \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40d01.png} & \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40d10.png} \\ $\alpha= 1/\ntopic$, $\beta= 0.1/\ntopic$ & $\alpha= 1/\ntopic$, $\beta= 10/\ntopic$ \\ \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40t01.png} & \includegraphics[height=5cm]{FIGURES/NEWLDA/Lyvert48f40t10.png} \\ \end{tabular} \caption{ Characteristic vertical motif length for different LDA priors, $\ntopic=48$.
} \label{ldaprior} \end{figure} \begin{figure} \centerline{ \includegraphics[height=7cm]{FIGURES/NEWLDA/histz48vertf40nsAprior.png}} \caption{ Distribution $\proba$ of the motif-cell distribution maximum location $y_\mathrm{m}$ for different parameters.} \label{location} \end{figure} We now study the sensitivity of the motifs to the choice of $\ntopic$ for both types of vertical planes. We have seen in the previous sections that the motif dimensions appear essentially independent of the number of motifs considered. To quantify this more precisely, we first define a characteristic motif size $L_T$ as $L_T= \sqrt{\left<A_T\right>}$, where $A_T$ is the area of the ellipse with the same characteristic dimensions as the motif and $\left<\cdot\right>$ represents the average over the motifs. Figure~\ref{topicarea} summarizes how the motif size evolves with the number of motifs for both vertical and longitudinal planes. In all cases, it was found that the characteristic size varies slowly around a minimal value (Figure~\ref{topicarea}, left), and that the characteristic area of the motifs was minimum when the sum of the motif characteristic areas $\ntopic A_T$ was comparable with the total domain area $A_D$ (Figure~\ref{topicarea}, right). \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/LDA/topicarea.png} & \includegraphics[height=5cm]{FIGURES/LDA/topicareafrac.png} \\ \end{tabular} \caption{Left: motif characteristic dimension $L_T$ for different datasets as a function of the number of motifs; Right: relative fraction of the area captured by the sum of the motifs, $\ntopic A_T/A_D$.} \label{topicarea} \end{figure} \subsection{3-D Analysis} LDA was then applied to a volumetric section of the flow of size $450 \times 590 \times 450$ wall units. Figure~\ref{topic3d} shows cross-section views of three 3-D motifs. One can note the streamwise coherence of the motifs over different heights. We note that the small dimensions of the volume may make it difficult to capture full-length structures, even at this comparatively low Reynolds number, and results should be confirmed by a more extensive investigation, which is outside the scope of this paper. The characteristic dimensions of the motifs are reported in Figure~\ref{motif3d}. Two different regions can be identified. For $y_+ < 100$, the region is characterized by a wide distribution of $L_x$, with large values that can extend over the whole domain. Some relatively large values of $L_z$ can occasionally be observed. For $y_+ > 100$, values of $L_x$ are lower and $L_z$ grows linearly. $L_y$ appears to grow linearly over both regions. The ratio between the horizontal dimensions $L_x$ and $L_z$ is reported in Figure~\ref{motif3d} (right). We can see that the streamwise-to-spanwise aspect ratio decreases over $0 < y_+ < 100$ from an average value of 5 at the wall, which corresponds to the typical aspect ratio of the streaks \citep{kn:dennis15}. It then decreases more slowly towards an aspect ratio of about 2 in the region $100 < y_+ < 500$. This ratio is consistent with results from analysis of POD eigenfunctions in \citet{kn:jfe10}, as well as from the vortex cluster analysis of \citet{kn:delalamo06}. The 3-D motif characteristic sizes are consistent with those obtained for vertical planes, which shows that information about the 3-D organization of the flow can be obtained from analysis performed on 2-D sections. This is of particular interest as it suggests that the LDA method could be usefully applied to PIV experimental data.
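As a practical note, the motif centers and characteristic sizes used throughout this section (equation~(\ref{defLx})) can be extracted along the following lines. This is a simplified one-dimensional Python sketch that replaces the segment construction of the mask $D$ with a global $1\%$ threshold on the motif probability; the function name is ours.
\begin{verbatim}
import numpy as np

def motif_extent(phi_k, y):
    """Center y_c and characteristic size L_y of one motif along y.
    phi_k: motif-cell probabilities sampled at grid coordinates y."""
    mask = phi_k > 0.01 * phi_k.max()     # support D: above 1% of the peak
    w = phi_k[mask]
    yc = np.sum(y[mask] * w) / np.sum(w)
    Ly = np.sqrt(2.0 * np.sum((y[mask] - yc) ** 2 * w) / np.sum(w))
    return yc, Ly
\end{verbatim}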
\begin{figure} \begin{tabular}{ccc} $x_+=28$ & $x_+=142$ & $x_+=255$ \\ \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic9stream28.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic9stream142.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic9stream255.png} \\ \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic6stream28.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic6stream142.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic6stream255.png} \\ \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic31stream28.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic31stream142.png} & \includegraphics[trim=0cm 1cm 8cm 1.1cm, clip, height=6cm]{FIGURES/VOLUME/topic31stream255.png} \\ \end{tabular} \caption{ Cross-sections at different streamwise locations of three different 3D motifs obtained for $\ntopic=144$; Top row: Motif index $n=34$; Middle row: Motif index $n=7$; Bottom row: Motif index $n=24$.} \label{topic3d} \end{figure} \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/VOLUME/Lmodif3d.png} & \includegraphics[height=5cm]{FIGURES/VOLUME/Lxoz3d.png} \\ \end{tabular} \caption{ Left: Characteristic dimensions of the 3D motifs, $\ntopic=144$; Right: Evolution of the ratio $L_x/L_z$ with height for $\ntopic=144$ and $\ntopic=48$. } \label{motif3d} \end{figure} \section{Field reconstruction and generation} \label{Sec_reconstruction} \subsection{Reconstruction} We now examine how the flow can be reconstructed using LDA. In all that follows, without loss of generality, we will focus on one of the cross-flow planes examined in Section~\ref{Sec_results}, specifically the cross-section at $x=0$ of dimensions $d_{y+}=590$ and $d_{z+}=450$. As described in the algorithm presented in Section~\ref{Sec_LDA}, both the motif-snapshot and the cell-motif distributions can be sampled for the total intensity $N_\isnap = \sum_\ix \snap_{\ix,\isnap}$ in the $\isnap$-th snapshot. This total intensity is defined as the rescaled integral value of the Reynolds stress (digitized and restricted to $Q_-$ events) over the plane. Since the results were found to be essentially independent of the rescaling, we can make the simplifying assumption that $N_\isnap$ is large enough so that the distribution $\wtdistribscal_\imode$ is well approximated by the samples. For a given total intensity $N_\isnap$, a reconstruction of the $\isnap$-th snapshot can then be obtained at each grid cell $\vecx_\ix$ from \[ \tau^\mathrm{R-LDA}(\vecx, t_\isnap) = \frac{1}{A} \snap_\isnap(\vecx) = \frac{N_\isnap}{A} \sum_{\imode=1}^{\ntopic} \theta_{\imode, \isnap} \wtdistribscal_\imode(\vecx), \] where \begin{itemize} \item $\wtdistribscal_\imode(\vecx)$ is the motif-cell distribution, \item the snapshot-motif distribution $\theta_{\imode, \isnap}$ represents the likelihood of motif $\btopic_\imode$ in the $\isnap$-th snapshot.
\end{itemize} It seems natural to compare this reconstruction with the POD representation of the field, which has a similar expression: \[ \tau^\mathrm{R-POD}(\vecx,t_\isnap)= \sum_{\imode=0}^{N_\mathrm{POD}-1} \coef_{\imode,\isnap} \phi_\imode(\vecx), \] where \begin{itemize} \item $\phi_\imode(\vecx)$ are the POD eigenfunctions extracted from the autocorrelation tensor $C_{\isnap, \isnap'}$ obtained from the $\nsnap$ snapshots, \item $\coef_{\imode,\isnap}$ corresponds to the amplitude of the $\imode$-th POD mode in the $\isnap$-th snapshot. \end{itemize} The first six fluctuating POD modes are represented in Figure~\ref{modepod}. We note that the $0$-th POD mode represents the temporal average of the field. As expected, the fluctuating POD modes consist of Fourier modes in the spanwise direction (due to the homogeneity of the statistics), and their intensity reaches a maximum at around $y_+ \simeq 25$. If the number of POD modes is equal to the number of motifs $\ntopic$, POD will by construction provide a better representation of the statistics, at least up to second order \citep{kn:HLB}. We note that, in terms of computational requirements, POD may appear less expensive than LDA, as it requires solving an SVD problem versus implementing an iterative Expectation-Maximization (EM) algorithm \citep{dempster77}. However, the performance of the EM algorithm can be improved, in particular with online updates \citep{kn:hofmann99}. In terms of storage, a reconstructed snapshot requires $N_\mathrm{POD}$ modes for POD and $\ntopic$ motifs for LDA. However, storage reduction could be obtained in the case of LDA by filtering out the motifs with a low probability $\theta_{\imode, \isnap}$, \ie{}, lower than a threshold $\thres$. We note that, in this case, it is necessary to store the indices $\imode$ of the motifs as well as the values of $\theta_{\imode, \isnap}$, so that if $n$ modes (resp. motifs) are kept, storage will consist of $2 n$ variables per snapshot. We see that storage reduction can be achieved if the fraction of retained modes $\eta=n/\ntopic$ is sufficiently small. The LDA storage data length per snapshot, $2 \eta \ntopic$, should then be compared with the POD data length $N_\mathrm{POD}$. For $\ntopic=96$, choosing a threshold of $\thres=0.015$ resulted in less than 8\% difference between the filtered and unfiltered LDA reconstructions (the $L_2$ norm was used). The average value for $\eta$ was $0.2$, which means that the number of POD modes that would represent a storage equivalent to that of LDA with $\ntopic=96$ is $N_\mathrm{POD} \simeq 2 \eta \ntopic \simeq 40$. We note that the total storage cost should further take into account the size of the LDA basis $\{\btopic_\imode\}_\imode$, which will be larger than that of the POD basis $\{\bmode_\imode\}_\imode$, since they are respectively equivalent to $\ntopic$ and $N_\mathrm{POD}$ fields. However, efficient storage of the LDA basis can be achieved by making use of the limited spatial support of $\btopic_\imode$, in particular for motifs located close to the wall. In the remainder of this section we will compare a filtered LDA reconstruction of 96 motifs (where values of $\theta_{\imode, \isnap}$ lower than $\thres=0.015$ are excluded from the reconstruction) with a POD representation of $N_\mathrm{POD}=48$ modes, which captures about 75\% of the total energy. Figure~\ref{comparison} compares an instantaneous field with its LDA reconstruction and its POD reconstruction.
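A minimal Python sketch of the filtered R-LDA reconstruction just described is given below; the arrays \texttt{theta} and \texttt{phi} follow the fitting sketch of Section~\ref{Sec_channel}, and the helper name \texttt{reconstruct\_lda} is ours:
\begin{verbatim}
import numpy as np

def reconstruct_lda(theta_i, phi, N_i, A=40.0, thres=0.015):
    """Filtered R-LDA field of one snapshot:
    tau = (N_i / A) * sum_k theta_ik * phi_k, dropping motifs with
    theta_ik <= thres (only their indices and weights need storing)."""
    keep = theta_i > thres
    return (N_i / A) * theta_i[keep] @ phi[keep]

# theta: (n_snapshots, n_motifs), phi: (n_motifs, n_cells) from the LDA fit;
# N_i is the measured total intensity of snapshot i (N_i = X[i].sum()).
# field = reconstruct_lda(theta[i], phi, N_i=X[i].sum()).reshape(ny, nz)
\end{verbatim}
Dropping the low-probability motifs, rather than renormalizing the retained weights, matches the filtering described in the text.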
A more general assessment is provided by Figure~\ref{histcorrelation}, which shows the correlation coefficient between each snapshot and its reconstruction based on POD as well as that based on LDA. Although POD appears to be slightly superior, the correlation coefficients are very close, with respective average values of 0.75 for LDA and 0.77 for POD. \begin{figure} \includegraphics[height=7cm]{FIGURES/SAVE/podmodes.png} \caption{ Contour plot of the first six fluctuating normalized POD spatial modes; contour values range from $-0.03$ to $0.03$. Negative values are indicated by dashed lines. } \label{modepod} \end{figure} \begin{figure} \begin{tabular}{cc} \includegraphics[trim={0 5cm 0 3cm},clip, height=5cm]{FIGURES/SAVE/compfield300renorm.png} & \includegraphics[height=4.5cm]{FIGURES/SAVE/colorbar.png} \\ \end{tabular} \caption{ Instantaneous Reynolds stress field (limited to $Q_-$ events). Left: True field; Middle: POD-reconstructed field using 48 POD modes; Right: LDA-reconstructed field using 96 motifs. } \label{comparison} \end{figure} \begin{figure} \centerline{ \includegraphics[height=7cm]{FIGURES/SAVE/histcorrelation48renorm.png}} \caption{Distribution of the correlation coefficient between each original snapshot and its reconstruction based on LDA (top) or POD (bottom).} \label{histcorrelation} \end{figure} \subsection{Generation} LDA is a generative model, so it is straightforward to generate synthetic snapshots by sampling from the distributions $\theta$ and $\wtdistribscal$ for a total intensity $N_{\isnap}=\sum_{\ix} \snap_{\ix, \isnap}$, which is modeled as a Poisson process with the same mean and standard deviation as the original database. In contrast, POD is not a generative model \emph{per se}. We will use a simplified version of the probabilistic extension of POD (PPCA) derived by \citet{kn:tipping02}, which is presented in Appendix~\ref{ppca}, with the additional assumption that no noise is present in the model: POD-based synthetic fields will be reconstructed from deterministic spatial POD modes $\bmode_\imode$ and random POD amplitudes $\tvecA_{\imode}$, which are assumed to be Gaussian variables. Examination of Figure~\ref{histpod}, which represents the distributions of the first fluctuating POD coefficients $\imode \ge 1$, suggests that it is quite acceptable as a first approximation to assume Gaussian distributions for the amplitudes $\bcoefs_\imode$ --- alternatively, the amplitudes could be sampled from the empirical distributions. The amplitude of the $0$-th mode, which corresponds to the average of the field over the snapshots, will be assumed to be constant for all snapshots. We can therefore compare the databases reconstructed and generated with LDA with those obtained from POD. The generated databases consist of $\nsnap$ snapshots corresponding to arbitrary instants $\widetilde{t}_\isnap$. Overall, the statistics of five different databases can be compared: \begin{itemize} \item the true database $\tau_-(y,z,t_\isnap)$ corresponding to the actual values of the $Q_-$ events; \item the POD-reconstructed (R-POD) or POD-projected database \[ \tau_-^\mathrm{R-POD}(y,z,t_\isnap) = \sum_{\imode=0}^{N_\mathrm{POD}-1} \coef_{\isnap,\imode} \phi_\imode(y,z), \] where $\phi_\imode$ are the POD eigenfunctions and $\coef_{\isnap,\imode}$ are the amplitudes of the $\imode$-th POD mode in the $\isnap$-th snapshot;
\item the POD-generated (G-POD) database \[ \tau_-^\mathrm{G-POD}(y,z,\widetilde{t}_\isnap) = \sum_{\imode=0}^{N_\mathrm{POD}-1} \widetilde{a}_{\isnap,\imode} \phi_\imode(y,z), \] where $\widetilde{a}_{\isnap,0}=\left<\coef_{\isnap,0}\right>$, with $\left<\cdot\right>$ the average over all snapshots, and $\widetilde{a}_{\isnap,\imode}$, $\imode \geq 1$, centered Gaussian random variables with variance $\left<\coef_{\isnap,\imode}^2\right>$. \item the LDA-reconstructed database (R-LDA) \[ \tau_-^\mathrm{R-LDA}(y,z,t_\isnap) = \frac{N_\isnap}{A} \sum_{\imode=1}^{\ntopic} \theta_{\imode, \isnap} \wtdistribscal_\imode(y,z), \] where $N_\isnap$ is the total intensity measured in the $\isnap$-th snapshot, $\theta_{\imode, \isnap}$ is the distribution of motif $\imode$ on the $\isnap$-th snapshot and $\wtdistribscal_\imode(y,z)$ is the identified distribution of the cell at $(y,z)$ on motif $\imode$. \item the LDA-generated database (G-LDA) \[ \tau_-^\mathrm{G-LDA}(y,z,\widetilde{t}_\isnap) = \frac{\widetilde{N}_\isnap}{A} \sum_{\imode=1}^{\ntopic} \widetilde{\theta}_{\imode,\isnap} \wtdistribscal_\imode(y,z), \] where $\widetilde{N}_\isnap$ is the total intensity, which is sampled from a Poisson process, $\wtdistribscal_\imode(y,z)$ is the identified distribution of the cell at $(y,z)$ on motif $\imode$ and $\widetilde{\theta}_{\imode,\isnap}$ is sampled for each $\imode$ from the empirical distribution of $\theta_{\imode, \isnap}$ over the snapshots. \end{itemize} Figure~\ref{database} shows the statistics of the different databases as a function of the wall distance. Averages are taken over all snapshots and in the spanwise direction. The mean value of the Reynolds stresses is correctly recovered by all methods. The second-order statistics are slightly better recovered by the POD-reconstructed and POD-generated snapshot sets, but both LDA approaches also capture a significant portion of the variance. The POD databases capture 75\% of the total variance, while the reconstructed and generated LDA databases respectively capture 68\% and 60\% of the variance. Figure~\ref{spatialstructure} shows the vertical spatial autocorrelation of $\tau_-$, defined as $R(y, y')=\left<\tau_-(x,y,z,t)\tau_-(x,y',z,t)\right>$ (where $\left<\cdot\right>$ represents an average taken in time and in the spanwise position). We can see that the generated LDA autocorrelation is very similar to its reconstructed POD counterpart, which shows that the LDA synthetic fields capture as much of the spatial structure as the POD-reconstructed ones. We note that the autocorrelation at large separations is well reproduced by all datasets. Figure~\ref{histogram} shows histograms of the fields at different heights. We note that, unlike the LDA approach, which is a non-negative decomposition (since it is based on probabilities), some negative values are observed for the POD approach, even though the original field values considered are always positive. We can see that at different wall distances the POD-reconstructed database reproduces well the distribution of the original database, but the POD-generated database does not. This failure is due to the fact that although POD amplitudes are uncorrelated by construction, they are not independent. We note that the same failure was observed when sampling the POD coefficients from their data-observed distributions $\coef_{\isnap, \imode}$ instead of Gaussian processes.
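For concreteness, both sampling procedures can be sketched as follows. This is a toy implementation under the assumptions stated above (constant $0$-th amplitude, independent Gaussian fluctuating amplitudes, Poisson intensities); the array names (\texttt{phi}, \texttt{theta}, \texttt{beta}, etc.) are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def generate_pod(phi, var_a, a0, n_new):
    """G-POD: centered Gaussian amplitudes with the observed variances;
    the 0-th (mean-field) amplitude is kept fixed to a0."""
    a = rng.normal(0.0, np.sqrt(var_a), size=(n_new, phi.shape[0]))
    a[:, 0] = a0
    return a @ phi                          # (n_new, n_cells)

def generate_lda(theta, beta, N_obs, A, n_new):
    """G-LDA: Poisson intensities fitted to the data; each motif weight
    is resampled independently from its empirical values."""
    n_topics, n_snap = theta.shape
    N_new = rng.poisson(N_obs.mean(), size=n_new)
    idx = rng.integers(n_snap, size=(n_new, n_topics))
    th = theta[np.arange(n_topics), idx]    # (n_new, n_topics)
    return (N_new[:, None] / A) * (th @ beta)
\end{verbatim}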
In contrast, both reconstructed and generated LDA methods yield very similar distributions, which reproduce the main features of the original Reynolds stress values, such as the intermittency (sharp peak at zero) and the asymptotic decay for positive values. \begin{figure} \centerline{ \includegraphics[height=8.5cm]{FIGURES/SAVE/histnormalizedPOD.png} } \caption{ Histograms of the normalized amplitudes of the first six fluctuating POD modes and comparison with a sampled Gaussian distribution.} \label{histpod} \end{figure} \begin{figure} \begin{tabular}{ll} \includegraphics[height=5cm]{FIGURES/SAVE/meandatabase48renorm.png} & \includegraphics[height=5cm]{FIGURES/SAVE/stddatabase48renorm.png} \\ \end{tabular} \caption{Statistics of the different databases averaged over the spanwise direction and the number of snapshots. Left: Mean value; Right: Standard deviation.} \label{database} \end{figure} \begin{figure} \begin{tabular}{ll} \includegraphics[trim={0 0 0 0},clip, height=5cm,align=c]{FIGURES/SAVE/spatialstructure33.png} & \includegraphics[trim={0 0 0 0},clip, height=5cm,align=c]{FIGURES/SAVE/spatialstructure163.png} \\ \end{tabular} \caption{Spatial autocorrelation of the Reynolds stress (limited to $Q_-$ events) in the vertical direction at different heights. The average is taken over snapshots and in the spanwise direction.} \label{spatialstructure} \end{figure} \begin{figure} \begin{tabular}{lll} $y_+=19$ & \includegraphics[trim={0 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogrampod19.png} & \includegraphics[trim={6cm 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogramlda19renorm.png} \\ $y_+=61$ & \includegraphics[trim={0 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogrampod61.png} & \includegraphics[trim={6cm 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogramlda61renorm.png} \\ $y_+=157$ & \includegraphics[trim={0 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogrampod157.png} & \includegraphics[trim={6cm 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogramlda157renorm.png} \\ $y_+=343$ & \includegraphics[trim={0 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogrampod343.png} & \includegraphics[trim={6cm 4cm 0 0},clip, height=3.6cm,align=c]{FIGURES/SAVE/histogramlda343renorm.png} \\ \end{tabular} \caption{Histograms of the Reynolds stress (limited to $Q_-$ events) corresponding to the different databases at different heights.} \label{histogram} \end{figure} \section{Conclusion} \label{Sec_Conclusion} This paper presents exploratory work on the application of Latent Dirichlet Allocation (LDA) to the identification of coherent structures in turbulent flows. In the probabilistic framework of LDA, latent factors or motifs are inferred from a collection of snapshots. Each snapshot is characterized by a motif distribution, and each motif itself is distributed over space. Implementation was carried out for a scalar field representing Reynolds stress $Q_-$ events. Evidence of self-similarity was found in the motifs: the spanwise and vertical dimensions of the motifs increase linearly with the wall distance in the logarithmic region, and the number of structures evolves inversely with the wall distance. This is in agreement with the attached-eddy model hypotheses. The characteristics of the motifs were established to be robust with respect to the LDA parameters. LDA yields a sparse, efficient reconstruction of the snapshots that compares reasonably well with the POD representation.
In addition, the fact that the motifs have a local spatial support, even when the statistics are homogeneous, could make the LDA representation of interest for estimation and control purposes. Further, a strong benefit of LDA is its inherent generative property, which makes it possible to generate a set of synthetic snapshots that is statistically similar to the original one. The first results obtained with the LDA method open up exciting prospects for data analysis and modeling of turbulent flows. We plan to study larger domains at higher Reynolds numbers in future work. Moreover, while the investigation was limited to a positive scalar field in the present implementation, it would be useful to extend the capabilities of LDA to signed (real-valued), as well as multi-dimensional, fields. Finally, since the technique appears well suited to describe intermittent phenomena, it would be interesting to apply it to strongly inhomogeneous flow regions such as the turbulent/non-turbulent interface \citep{kn:philip14} or other types of intermittency \citep{kn:johnson17}. \section*{Acknowledgments} This work was supported by the Center of Data Science from the Paris-Saclay University. Computations were carried out at IDRIS-GENCI (project 02262). The authors are grateful to the anonymous Referees for their helpful comments on the first version of the manuscript. \section*{Declaration of Interests} The authors report no conflict of interest. \input{Annexes_BP} \bibliographystyle{jfm} \section{Representation format of data} \label{Sec_decomp} To suitably introduce and contextualize the Latent Dirichlet Allocation, several established approaches to represent data are first briefly discussed. \subsection{Proper Orthogonal Decomposition} \subsubsection{General formulation} The Proper Orthogonal Decomposition (POD) is arguably the most popular tool for the representation and analysis of turbulent flow fields. It relies on a method rediscovered and revisited several times in different scientific domains, which goes by several names (Principal Component Analysis, Empirical Orthogonal Functions, Karhunen-Lo\`eve decomposition, Latent Semantic Analysis (LSA), \ldots), although they are not all strictly equivalent. It was introduced and adapted for turbulent flows by \citet{kn:lumleyPOD}. The POD method allows one to derive an orthogonal basis for the (sub)space of the fluctuations of a multi-dimensional quantity $\bsnap$ of finite variance. One can show that a basis for the space of fluctuations, defined as $\bsnapflu\left(t\right) := \bsnap\left(t\right) - \left<\bsnap\right>$, with $\left<\cdot\right>$ the statistical mean, is given by the set of elements $\set{\bmode_\imode}_\imode$, eigenvectors of the following eigenvalue problem \citep{kn:HLB}: \be \corLM \bmode_\imode = \eigval_\imode \bmode_\imode, \label{Eq_eigpb_POD} \ee with $\eigval_\imode$ the eigenvalue and $\corLM \in \R^{\nx \times \nx}$ the empirical 2-point covariance matrix: \be \corLM = \frac{1}{\ntLM} \sum_{\isnap=1}^{\ntLM}{\bsnapflu\left(t_\isnap\right) \bsnapflu\left(t_\isnap\right)^\transpose}, \label{Eq_empicov} \ee with $\set{t_\isnap}_\isnap$ the time instants for which the field $\bsnap$ is available. Some conditions on the temporal sampling scheme apply for the empirical covariance $\corLM$ to be an accurate approximation of the true covariance \citep{kn:HLB}. POD modes are identified as the eigenvectors $\bmode_\imode$.
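In practice, this direct formulation amounts to an eigendecomposition of the empirical covariance matrix. The following minimal Python sketch (not the implementation used here; snapshots are assumed to be stored as the columns of \texttt{U}) illustrates it:
\begin{verbatim}
import numpy as np

def pod_direct(U):
    """Direct POD of a (n_x, n_t) snapshot matrix U: eigendecomposition
    of the empirical 2-point covariance of the fluctuations."""
    Uf = U - U.mean(axis=1, keepdims=True)    # subtract the mean field
    C = (Uf @ Uf.T) / Uf.shape[1]             # empirical covariance
    lam, phi = np.linalg.eigh(C)              # real symmetric problem
    order = np.argsort(lam)[::-1]             # sort by decreasing energy
    return lam[order], phi[:, order]
\end{verbatim}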
\subsubsection{Method of snapshots} The above method is quite a natural implementation of the underlying Hilbert-Schmidt decomposition theory. However, the algorithmic complexity associated with the eigenvalue problem \eqref{Eq_eigpb_POD} scales as $\mathcal{O}\left(\ntLM \, \nx^2\right)$, where the number of field instances $\ntLM$ was assumed to be lower than the size $\nx$ of the discrete field, $\ntLM \le \nx$. For large field vectors (large $\nx$), the computational and memory cost is hence high. For this widely encountered situation, a possible workaround was suggested in \cite{kn:siro87} and consists in solving the following eigenvalue problem: \be \cordual \, \tvecA_\imode = \eigval_\imode \tvecA_\imode, \qquad \tvecA_\imode \in \R^\ntLM, \label{Eq_eigpb_POD_snapshot} \ee with \be \cordual_{\isnap, {\isnap'}} \propto \left<\bsnapflu\left(t_\isnap\right), \bsnapflu\left(t_{\isnap'}\right)\right>_\Dx, \qquad \forall \: \isnap, {\isnap'} \in \left[1, \ntLM\right] \subset \mathbb{N}, \label{defcordual} \ee and $\left<\cdot, \cdot\right>_\Dx$ the Euclidean inner product. Since the correlation matrix $\cordual$ is Hermitian and positive semi-definite, its eigenvalues are real and non-negative, $\eigval_\imode \ge 0$, $\forall \, \imode$, and its eigenvectors $\set{\tvecA_\imode}_\imode$ are orthogonal and can be made orthonormal in an Euclidean sense, $\tvecA_\imode^\transpose \, \tvecA_{\imode'} \propto \delta_{\imode, \imode'}$, with $\delta$ the Kronecker delta. The spatial POD modes are finally retrieved via projection as follows: \be \bmode_\imode = \eigval_\imode^{-1\slash 2} \, \flufieldmat \, \tvecA_\imode, \qquad \forall \, \imode, \ee where the $\isnap$-th column of the matrix $\flufieldmat$ is the snapshot $\bsnapflu_\isnap$. The algorithmic complexity is now $\mathcal{O}\left(\ntLM^3\right)$ and scales much better than the standard POD approach ($\mathcal{O}\left(\ntLM \, \nx^2\right)$) in the usual situation where $\ntLM \ll \nx$. In this work, we rely on this so-called method of snapshots to implement POD. Formally, the decomposition of the snapshot matrix $\flufieldmat$ is equivalent to a singular value decomposition (SVD): \be \flufieldmat = \matmode \Sigma \coefmat^\transpose, \label{lsa} \ee where $\matmode$ is the matrix constituted by the columns $\bmode_\imode$, $\coefmat$ is the matrix containing the columns $\tvecA_\imode$, and $\Sigma$ is a diagonal matrix whose entries are $\eigval_\imode^{1\slash 2}$. The snapshot matrix can thus be decomposed into a snapshot-mode matrix $A$ and into a cell-mode matrix $\matmode$. The spatial modes or {\it structures} can be seen as latent variables allowing optimal reconstruction of the data in the $L_2$ norm or an equivalent one. The decomposition can be truncated to retain only the $\nmode$ largest singular values, corresponding to the first $\nmode$ columns of each matrix. \subsection{Probabilistic Latent Semantic Analysis} \label{Sec_PLSA} In all that follows we will consider a collection of $\nsnap$ scalar fields $\{\bsnap_\isnap \}_{\isnap=1, \cdots, \nsnap}$. Each field is of dimension $\nx$ and consists of non-negative integer values on each grid cell. For each snapshot $\isnap$, the value of $\bsnap_\isnap$ on grid cell $\ix$ indicates that the grid cell $\ix$ has been detected or activated $\snap_{\ix,\isnap}$ times. Probabilistic Latent Semantic Analysis (PLSA) tackles the problem of finding latent variables using a probabilistic method instead of SVD.
This representation assumes that each snapshot $\bsnap_\isnap$ consists of a mixture of structures $\btopic_\imode$. PLSA adds a probabilistic flavor as follows: \begin{itemize} \item given a snapshot $\bsnap_\isnap$, the structure $\btopic_\imode$ is present in that snapshot with probability $\proba(\btopic_\imode|\bsnap_\isnap)$, \item given a structure $\btopic_\imode$, the grid cell $\bgrid_\ix$ is activated with probability $\proba(\bgrid_\ix|\btopic_\imode)$. \end{itemize} Formally, the joint probability of seeing a given snapshot $\bsnap_\isnap$ and activating a grid cell $\bgrid_\ix$ is: \begin{equation} \proba(\bsnap_\isnap,\bgrid_\ix)= \proba(\bsnap_\isnap)\sum_{\imode} \proba(\btopic_\imode|\bsnap_\isnap) \proba(\bgrid_\ix|\btopic_\imode). \label{Eq_PLSAone} \end{equation} $\proba(\bsnap_\isnap)$, $\proba(\btopic_\imode|\bsnap_\isnap)$, and $\proba(\bgrid_\ix|\btopic_\imode)$ are the parameters of the model: $\proba(\bsnap_\isnap)$ is the probability to obtain such a snapshot $\bsnap_\isnap$ and is constant in our case, $\proba(\bsnap_\isnap)=1/\nsnap$. $\proba(\btopic_\imode|\bsnap_\isnap)$ and $\proba(\bgrid_\ix|\btopic_\imode)$ can be inferred using the Expectation-Maximization (EM) algorithm of \citet{dempster77}. Using Bayes' rule, $\proba(\bsnap_\isnap, \bgrid_\ix)$ can be equivalently written as: \begin{equation} \proba(\bsnap_\isnap,\bgrid_\ix)= \sum_{\imode} \proba(\btopic_\imode) \proba(\bgrid_\ix|\btopic_\imode) \proba(\bsnap_\isnap|\btopic_\imode). \label{Eq_PLSAtwo} \end{equation} This alternative formulation shows a direct link between the PLSA model and the POD model (as mentioned above, POD is called Latent Semantic Analysis or LSA in text mining). If we compare equations~\eqref{lsa} and \eqref{Eq_PLSAtwo}, we see that the structure probability $\proba(\btopic_\imode)$ corresponds to the $\imode$-th diagonal entry of $\Sigma$, the probability of the snapshot $\bsnap_\isnap$ given the structure $\btopic_\imode$ corresponds to the snapshot-mode matrix entry $A_{\isnap, \imode}$, and the probability to activate the cell $\bgrid_\ix$ given the structure $\btopic_\imode$ corresponds to the matrix entry $\matmode_{\ix, \imode}$. \section{LDA as a factorization method} \label{Sec_MF} To further shed light on the interpretation of LDA, we now adopt a different viewpoint and briefly explore the connections between the decomposition methods discussed above in the framework of Matrix Factorization (MF). Specifically, we now explain how decomposition methods, such as POD, K-means and LDA, can be interpreted in terms of Matrix Factorization. \subsection{Matrix factorization} Letting $\matsnap\in \R^{\nx\times \ntLM}$ be a data matrix to be approximated, MF consists in the following decomposition: \begin{equation} \matsnap \approx X Y, \end{equation} with $X\in \R^{\nx\times \nmode}$ and $Y\in \R^{\nmode\times \ntLM}$ two real-valued matrices. Compression is achieved whenever $\nmode <\min(\nx,\ntLM)$, which is considered hereafter. MF can be formulated as an optimization problem: \begin{equation} (X, Y) \in \operatornamewithlimits{arg\ min}_{\dumarg{X} \in \admsetX, \dumarg{Y} \in \admsetY} \normLM{\matsnap - \dumarg{X} \dumarg{Y}}{}^2 + \mathcal{R}\left(\dumarg{X}, \dumarg{Y}\right), \label{Eq_FM_canon} \end{equation} with $\normLM{\cdot}{}$ a given norm, $\admsetX$ and $\admsetY$ admissibility sets for $X$ and $Y$ respectively, and $\mathcal{R}$ a regularization term.
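As a simple illustration of Eq.~\eqref{Eq_FM_canon}, the following sketch solves the Frobenius-norm problem with a quadratic (ridge) regularization by alternating least squares. It is a generic example of the factorization framework, not one of the methods compared in this work.
\begin{verbatim}
import numpy as np

def mf_als(F, r, n_iter=200, lam=1e-3, seed=0):
    """Alternating least squares for F ~ X Y; the ridge penalty lam
    plays the role of the regularization term R."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((F.shape[0], r))
    for _ in range(n_iter):
        Y = np.linalg.solve(X.T @ X + lam * np.eye(r), X.T @ F)
        X = np.linalg.solve(Y @ Y.T + lam * np.eye(r), Y @ F.T).T
    return X, Y
\end{verbatim}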
\subsection{POD-MF equivalence} Let the singular value decomposition (SVD) of the $\nx \times \nsnap$ real-valued data matrix $\matsnap$ be \be \matsnap = \leftSV \varSigma B^\transpose, \label{SVD} \ee with $\leftSV$ and $B$ two orthonormal matrices and $\varSigma$ being diagonal. The Eckart-Young theorem makes precise in which sense this decomposition is optimal \citep{Eckart1936a}. In particular, it follows that \be \leftSV_\nmode, \left(\varSigma B^\transpose\right)_\nmode \in \operatornamewithlimits{arg\ min}_{\dumarg{\leftSV}^\transpose \dumarg{\leftSV} = I_\nmode} \normLM{\matsnap - \dumarg{\leftSV} \left(\dumarg{\varSigma B^\transpose}\right)}{F}, \qquad \forall \, \nmode \le \min\left(\nx, \nsnap\right), \ee where $\left(\varSigma B^\transpose\right)_\nmode = \varSigma_\nmode B_\nmode^\transpose$ and with $\leftSV_\nmode$ and $B_\nmode$ the restriction of $\leftSV$ and $B$ to their columns associated with the dominant $\nmode$ singular values $\diag\left(\varSigma_\nmode\right)$. From Eq.~\eqref{SVD}, it comes \be \matsnap \matsnap^\transpose \leftSV_\nmode = \leftSV \varSigma B^\transpose B \varSigma^\transpose \leftSV^\transpose \leftSV_\nmode = \leftSV_\nmode \varSigma_\nmode^2. \ee Referring to Eqs.~\eqref{Eq_eigpb_POD} and \eqref{Eq_empicov}, the diagonal matrix $\varSigma_\nmode^2$ and $\leftSV_\nmode$ then directly identify, up to the normalization factor $1 \slash \ntLM$, with the $\nmode$ dominant eigenvalues $\Lambda$ and POD modes $\eigenmat$, respectively. Denoting the Moore-Penrose pseudo-inverse with a $^+$ superscript, the POD projection coefficients are: \be A = \eigenmat^+ \matsnap = \eigenmat^\transpose \matsnap = \leftSV_\nmode^\transpose \matsnap = \varSigma_\nmode B_\nmode^\transpose, \ee so that the POD decomposition is finally seen to satisfy the following matrix factorization problem: \be \eigenmat, A \in \operatornamewithlimits{arg\ min}_{\eigenmat^\transpose \eigenmat = I_\nmode} \normLM{\matsnap - \eigenmat A}{F}, \ee of the form of Eq.~\eqref{Eq_FM_canon} with $\mathcal{R} \equiv 0$ and $\admsetX$ such that $X^\transpose X = I_\nmode$. \subsection{K-means-MF equivalence} Clustering is an unsupervised learning technique aiming at identifying groups (clusters) in the data such that data points in the same group have similar features, while data points in different groups have highly dissimilar features. K-means is one of the simplest and most popular clustering methods \citep{kn:macqueen1967,Lloyd_82}. The algorithm tries to iteratively partition the dataset into $\nmode$ predefined distinct non-overlapping clusters $\left\{ \cluster_\imode \right\}_\imode$. In its standard deterministic version, each data point belongs to only one cluster. The key idea consists in assigning each data point to the closest centroid (the arithmetic mean of all the data points that belong to that cluster). The distance is defined in terms of some chosen norm $\normLM{\cdot}{}$. Setting the number of clusters $\nmode$, the algorithm starts with an initial guess for the $\nmode$ centroids $\left\{ \centro_\imode \right\}_\imode$, obtained by randomly selecting $\nmode$ data points from the dataset without replacement.
It then iterates between the data assignment step, assigning each data point $\bsnap_{\isnap}$ to the closest cluster $\cluster_{\imode_\isnap^\star}$, and the centroid update step, which computes the centroid of each cluster: \begin{align} \imode^\star_\isnap & \gets \operatornamewithlimits{arg\ min}_{1 \le \dumarg{\imode} \le \nmode} \normLM{\centro_{\dumarg{\imode}} - \bsnap_{\isnap}}{}^2, & \forall \: 1 \le \isnap \le \nsnap, \label{Eq_Kmean1} \\ \centro_\imode & \gets \frac{1}{\card{\cluster_\imode}} \sum_{\bsnap_{\isnap} \in \cluster_\imode} \bsnap_{\isnap}, & \forall \: 1 \le \imode \le \nmode. \label{Eq_Kmean2} \end{align} K-means is guaranteed to converge to a local optimum but not necessarily to a global optimum. Therefore, we choose to run the algorithm with different initializations of the centroids and retain the solution that yielded the lowest loss $\mathscr{L}$: \begin{align} \mathscr{L} = \sum_{\imode=1}^\nmode {\sum_{\bsnap_\isnap \in \cluster_\imode}{\normLM{\bsnap_\isnap - \centro_\imode}{}^2}}. \label{Kmeans_loss} \end{align} Solving a clustering problem in the $L^2$-sense means finding a set of $\left\{\cluster_\imode\right\}_{\imode=1}^\nmode$ disjoint clusters ($\cluster_\imode \bigcap \cluster_{\imode^{'}} = \emptyset$, $\imode \ne \imode^{'}$) that minimizes the following cost function: \begin{equation} \mathscr{L} = \sum_{\imode=1}^\nmode {\sum_{\bsnap_\isnap \in \cluster_\imode}{\normLM{\bsnap_\isnap - \centro_\imode}{2}^2}} = \sum_{\isnap=1}^\nsnap {\normLM{\bsnap_\isnap}{2}^2} - \sum_{\imode=1}^\nmode {\sum_{\bsnap_\isnap, \bsnap_{\isnap'} \in \cluster_\imode}{ n_\imode^{-1} \bsnap_\isnap^\transpose \bsnap_{\isnap'} }}, \label{Eq_clust} \end{equation} where $\left\{\centro_\imode\right\}_{\imode=1}^\nmode$ are the cluster centroids, $\centro_\imode := \sum_{\bsnap_\isnap \in \cluster_\imode} {\bsnap_\isnap} \slash n_\imode$, $n_\imode := \card{\cluster_\imode}$. Let $Y\in\left[0,1\right]^{\ntLM \times \nmode}$ be the normalized cluster indicator matrix, $\by_\imode = n_\imode^{-1 \slash 2} \mathbbm{1}_{\left\{\bsnap_\isnap \in \cluster_\imode\right\}}$. Disjointedness of the clusters implies that the columns of $Y$ are orthonormal, $Y^\transpose Y= I_\nmode$. The clustering problem \eqref{Eq_clust} may now be reformulated in terms of $Y \ge 0$ as \citep{Ding05onthe}: \begin{align} Y & \in \operatornamewithlimits{arg\ min}_{\dumarg{Y} \ge 0, \dumarg{Y}^\transpose \dumarg{Y}= I_\nmode} \trace{\matsnap^\transpose \matsnap} - \trace{\dumarg{Y}^\transpose \matsnap^\transpose \matsnap \dumarg{Y}}, \nonumber \\ & \in \operatornamewithlimits{arg\ min}_{\dumarg{Y} \ge 0, \dumarg{Y}^\transpose \dumarg{Y} = I_\nmode} \normLM{\matsnap^\transpose \matsnap}{F}^2 - 2 \trace{\dumarg{Y}^\transpose \matsnap^\transpose \matsnap \dumarg{Y}} + \normLM{\dumarg{Y}^\transpose \dumarg{Y}}{F}^2, \nonumber \\ & \in \operatornamewithlimits{arg\ min}_{\dumarg{Y} \ge 0, \dumarg{Y}^\transpose \dumarg{Y} = I_\nmode} \normLM{\matsnap^\transpose \matsnap - \dumarg{Y} \dumarg{Y}^\transpose}{F}^2. \end{align} The Euclidean hard-clustering K-means problem hence stems from an orthogonal non-negative matrix factorization form and the clusters are given by $\centro_\imode = n_\imode^{-1 \slash 2} \matsnap \by_\imode$, $\forall \imode$.
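The procedure above, with random restarts, can be summarised by the following sketch; snapshots are assumed to be stored as the columns of \texttt{F}, and this is a generic illustration rather than the implementation used in this study.
\begin{verbatim}
import numpy as np

def kmeans(F, r, n_init=10, n_iter=100, seed=0):
    """Lloyd's iterations (assignment / centroid update) with n_init
    random restarts; keeps the solution with the lowest loss L."""
    rng = np.random.default_rng(seed)
    X = F.T                                    # one snapshot per row
    best_loss, best = np.inf, None
    for _ in range(n_init):
        c = X[rng.choice(len(X), size=r, replace=False)]
        for _ in range(n_iter):
            d = ((X[:, None, :] - c[None, :, :]) ** 2).sum(-1)
            lbl = d.argmin(axis=1)             # assignment step
            c = np.stack([X[lbl == k].mean(0) if np.any(lbl == k)
                          else c[k] for k in range(r)])  # update step
        loss = ((X - c[lbl]) ** 2).sum()
        if loss < best_loss:
            best_loss, best = loss, (c, lbl)
    return best
\end{verbatim}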
\subsection{LDA-MF equivalence} We now focus on LDA and discuss the fact that, similarly to POD and K-means, it can also be interpreted as a matrix factorization technique, under certain conditions. Let us consider the variational LDA flavor, where inferring the LDA parameters by maximizing the posterior distribution $p$ is replaced by optimizing an approximate posterior $q$, easier to sample from. The inference problem then consists in minimizing the approximation error, which is equivalent to maximizing the Evidence Lower Bound (ELBO) $\loss$: \be \loss = \expe_{\mu_q}\left[\log p\right] - \expe_{\mu_q}\left[\log q\right]. \ee Provided suitable approximations in the inference problem are made, and under a symmetric Dirichlet prior hypothesis $\left(\balpha = \alpha \boldone\right)$, \citet{Faleiros_Lopes_2016} have derived an upper bound for the ELBO associated with variational LDA: \begin{align} \max \loss \lessapprox \min \sum_{\ix}^\nx { \sum_\isnap^\nsnap{\left(\matsnap_{\ix, \isnap}\log {\frac{\matsnap_{\ix, \isnap}}{{\left(X Y\right)}_{\ix, \isnap}}} + \sum_\imode^\nmode{\mathcal{R}(Y_{\imode,\isnap}, \alpha_\imode)}\right) }}, \label{LDA_NMF} \end{align} where $X \geq 0$ and $Y \geq 0$ are variational parameters to infer, normalized as $\sum_{\ix}X_{\ix, \imode}=\sum_{\imode}Y_{\imode, \isnap}=1$, and regarded as normalized probability distributions. $\vecx_\imode$ is related to ${\boldsymbol \beta}$ while $\by_\isnap$ is related to the distribution $\btheta_\isnap$ of a document $\bsnap_\isnap$. The term $\mathcal{R}(Y_{\imode, \isnap},{\alpha}_\imode) := (Y_{\imode, \isnap} - {\alpha}_\imode)(\log Y_{\imode, \isnap} - Y_{\imode, \isnap}(\log Y_{\imode, \isnap}-1))$ corresponds to the prior influence and induces sparsity over the document-topic distribution. From Eq.~\eqref{LDA_NMF}, it follows that maximizing the ELBO $\loss$ under certain approximations takes the form of a non-negative matrix factorization (NMF) problem $\matsnap \approx X Y$ expressed in terms of the Kullback-Leibler divergence $D_{I}(\matsnap\|X Y) := \sum_{\ix, \isnap}{ \left(\matsnap_{\ix, \isnap}\log {\frac{\matsnap_{\ix, \isnap}}{{\left(X Y\right)}_{\ix, \isnap}}}-\matsnap_{\ix, \isnap}+{\left(X Y\right)}_{\ix, \isnap} \right)}$, supplemented with a regularization term. Details of the derivation are beyond the scope of this paper and one should refer to \citet{Faleiros_Lopes_2016} for a more complete discussion.
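To make this connection concrete, the unregularized counterpart of Eq.~\eqref{LDA_NMF} --- NMF under the divergence $D_I$ --- can be solved with the classical multiplicative updates. The sketch below is a generic illustration, not the inference algorithm used for LDA in this paper; $\matsnap$ is assumed to be stored as a non-negative array \texttt{F}.
\begin{verbatim}
import numpy as np

def nmf_kl(F, r, n_iter=500, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for F ~ X Y under the
    generalized Kullback-Leibler divergence D_I."""
    rng = np.random.default_rng(seed)
    X = rng.random((F.shape[0], r)) + eps
    Y = rng.random((r, F.shape[1])) + eps
    for _ in range(n_iter):
        R = F / (X @ Y + eps)
        X *= (R @ Y.T) / (Y.sum(axis=1) + eps)
        R = F / (X @ Y + eps)
        Y *= (X.T @ R) / (X.sum(axis=0)[:, None] + eps)
    return X, Y
\end{verbatim}
The columns of $X$ (and of $Y$) can be normalized a posteriori if a probabilistic interpretation, as in LDA, is desired.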
{ "attr-fineweb-edu": 1.453125, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} \label{sec:intro} Many computer vision frameworks contain a pooling stage that combines local responses at different spatial locations~\cite{boureau2010theoretical}. This operation is often a sum- or max-pooling mechanism implemented by a wide range of algorithms, \emph{e.g}\bmvaOneDot in feature descriptors (such as SIFT~\cite{lowe2004distinctive} and HOG~\cite{dalal2005histograms}) or convolutional neural networks~\cite{lecun1990handwritten,scherer2010evaluation}. Choosing the correct pooling operator can make a great difference in the performance of a method~\cite{boureau2010theoretical}. Current standard pooling mechanisms lack the desired generalisation to find an equilibrium between frequently-occurring and rare but informative descriptors~\cite{murray2014generalized}. Therefore, many computer vision applications can benefit from a more dynamic pooling solution that takes into account the content of the pooled signals. Such pooling operators are commonly used in modelling the phenomenon of colour constancy (\emph{i.e}\bmvaOneDot a visual effect that makes the perceived colour of surfaces remain approximately constant under changes in illumination~\cite{foster2011color,hubel2000perception}) both in biological~\cite{carandini2012normalization} and computational~\cite{retinax} solutions. Despite decades of research, colour constancy still remains an open question~\cite{foster2011color}, and solutions to this phenomenon are important from a practical point of view, \emph{e.g}\bmvaOneDot camera manufacturers need to produce images of objects that appear the same as the actual objects in order to satisfy customers. Motivated by the above, in this article we propose a contrast-variant-pooling mechanism and investigate its feasibility in the context of computational colour constancy. \subsection{Computational Models of Colour Constancy} Mathematically, the recovery of spectral reflectance from a scene illuminated by light of unknown spectral irradiance is an ill-posed problem (it has infinitely many possible solutions). The simplest and most popular solution has been to impose some arbitrary assumptions regarding the scene illuminant or its chromatic content. Broadly speaking, colour constancy algorithms can be divided into two categories: (i) low-level driven, which reduce the problem to solving a set of non-linear mathematical equations~\cite{retinax,vandeWeijerTIP2007,7018983}, and (ii) learning-based, which train machine learning techniques on relevant image features~\cite{forsyth1990novel,funt2004estimating,agarwal2007machine}. Learning-based approaches may offer the highest performance results; however, they have major setbacks which make them unsuitable in certain conditions: (a) they rely heavily on training data that is not easy to obtain for all possible situations, and (b) they are likely to be slow~\cite{gijsenij2011color} and unsuitable for deployment on limited hardware. A large portion of low-level driven models can be summarised using a general Minkowski expression \cite{finlayson2004shades,vandeWeijerTIP2007} such as: \begin{equation} L_c(p) = \left( \int_{\Omega} \left[ f_c(x) \right]^{p} dx \right)^{\frac{1}{p}} = ke_c , \label{eq:mink} \end{equation} where $L$ represents the illuminant at colour channel $c \in \lbrace R,G,B\rbrace$; $f(x)$ is the image's pixel value at the spatial coordinate $x\in \Omega$; $p$ is the Minkowski norm; and $k$ is a multiplicative constant chosen such that the illuminant colour, $e$, is a unit vector.
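As an illustration, Eq.~\ref{eq:mink} can be implemented in a few lines; the sketch below is a hypothetical helper (not code from the cited works) operating on an RGB image stored as an \texttt{(H, W, 3)} array, with the constant $k$ absorbed by the final normalisation.
\begin{verbatim}
import numpy as np

def minkowski_illuminant(img, p):
    """Minkowski illuminant estimate: p = 1 gives Grey-World and
    large p approaches White-Patch (max-pooling)."""
    f = img.astype(float)
    L = np.power(np.power(f, p).mean(axis=(0, 1)), 1.0 / p)
    return L / np.linalg.norm(L)      # unit-norm illuminant e
\end{verbatim}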
Distinct values of the Minkowski norm $p$ result in different pooling mechanisms. Setting $p = 1$ in Eq.~\ref{eq:mink} reproduces the well-known Grey-World algorithm (\emph{i.e}\bmvaOneDot sum-pooling), in which it is assumed that all colours in the scene average to grey~\cite{buchsbaum1980spatial}. Setting $p = \infty$ replicates the White-Patch algorithm (\emph{i.e}\bmvaOneDot max-pooling), which assumes the brightest patch in the image corresponds to the scene illuminant~\cite{retinax}. In general, it is challenging to automatically tune $p$ for every image and dataset. At the same time, inaccurate $p$ values may corrupt the results noticeably~\cite{gijsenij2011color}. The Minkowski framework can be generalised further by replacing $f(x)$ in Eq.~\ref{eq:mink} with its higher-order derivatives~\cite{vandeWeijerTIP2007}. These non-linear solutions are analogous to the centre-surround mechanisms of the visual cortex~\cite{land1986alternative}, which are typically modelled by a Difference-of-Gaussians (DoG) operator, where a narrower, positive Gaussian plays the role of the ``centre'' and a broader, negative Gaussian plays the role of the ``surround''~\cite{enroth1966contrast,marr1980theory}. Recently, a few biologically-inspired models of colour constancy grounded on DoG have offered promising results~\cite{7018983,parraga2016colour}; however, their efficiency largely depends on finding an optimum pooling strategy for higher cortical areas. In short, pooling is a crucial component of many colour constancy models driven by low-level features (and even of deep-learning solutions~\cite{barron2015convolutional,fourure2016mixed}). In the primate visual system, the size of the receptive field varies according to the local contrast of the light falling on it~\cite{Shushruth2069,angelucci2013beyond}, presenting a dynamic solution dependent on the region of interest. The low-level models mentioned above are related to the early stages of visual processing, \emph{i.e}\bmvaOneDot the primary visual cortex (area V1), which are likely to be involved in colour constancy. Physiological measures suggest that although receptive fields triple in size from area V1 to area V2~\cite{wilson2014configural}, their basic circuitry with respect to surround modulation is similar, \emph{i.e}\bmvaOneDot keeping the same size dependency on contrast properties found in V1~\cite{Shushruth2069}. This is consistent with the large body of physiological and psychophysical literature highlighting the significance of contrast in the visual cortex. In computer vision, contrast-dependent models have also shown encouraging results in various applications such as visual attention~\cite{itti2001computational}, tone mapping~\cite{reinhard2002photographic}, and boundary detection~\cite{BMVC2016_12,akbarinia2017feedback}, to name a few. From these we can hypothesise the convenience and explanatory value of various ``pooling strategies'' such as those proposed by previous colour constancy methods. In the rest of this work we will explore the advantages of replacing the different feed-forward (pooling) configurations of some successful colour constancy models~\cite{retinax,vandeWeijerTIP2007,7018983} by that of the visual system (as described by the current neurophysiological literature \cite{Shushruth2069,angelucci2013beyond}).
Our aim in doing so is dual: on the one hand, we want to explore the technological possibilities of creating a more efficient algorithm, and on the other hand, we would like to test the idea that contrast-variant-pooling might play an important role in colour constancy. \subsection{Summary of the Contributions} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{images/flowchart-eps-converted-to.pdf} \caption{Flowchart of the proposed contrast-variant-pooling (CVP) mechanism in the context of colour constancy. We implemented CVP through a \textit{top-x-percentage-pooling}. Given an input image or a feature map: (i) the value of \textit{x} is computed according to the inverse of the local contrast for each channel, and (ii) we estimate the scene illuminant as the average value of the \textit{top-x-percentage} of pooled pixels (those on the right side of the depicted dashed lines).} \label{fig:v4pooling} \end{figure} In the present article we propose a generic contrast-variant-pooling (CVP) mechanism that can replace standard sum- and max-pooling operators in a wide range of computer vision applications. Figure~\ref{fig:v4pooling} illustrates the flowchart of CVP, which is based on local contrast and therefore offers a dynamic solution that adapts the pooling mechanism according to the content of the region of interest. We tested the feasibility of CVP in the context of colour constancy by substituting the pooling operation of four algorithms: (a) White-Patch~\cite{retinax}, (b) first-order Grey-Edge~\cite{vandeWeijerTIP2007}, (c) second-order Grey-Edge~\cite{vandeWeijerTIP2007}, and (d) Double-Opponency~\cite{7018983}, on three benchmark datasets. Results of our experiments show the quantitative and qualitative benefits of CVP over max-pooling. \section{Model} \label{sec:method} \subsection{Max-pooling Colour Constancy} One of the earliest computational models of colour constancy (White-Patch) is grounded on the assumption that the brightest pixel in an image corresponds to a bright spot or specular reflection containing all necessary information about the scene illuminant \cite{retinax}. Mathematically this is equivalent to a max-pooling operation over the intensity of all pixels: \begin{equation} L_c = \max_{x,y} I_c(x,y), \end{equation} where $L$ represents the estimated illuminant at each chromatic channel $c \in \left\lbrace R, G, B \right\rbrace$; $I$ is the original image and $(x,y)$ are the spatial coordinates in the image domain. One important flaw of this simple approach is that a single bright pixel can misrepresent the whole illuminant. Furthermore, the White-Patch algorithm may fail in the presence of noise or clipped pixels in the image due to the limitations of the max-pooling operator \cite{funt1998machine}. One approach to address these issues is to account for a larger set of ``white'' points by pooling a small percentage of the brightest pixels (\emph{e.g}\bmvaOneDot the top $1\%$) \cite{ebner2007color}, an operation referred to as \textit{top-x-percentage-pooling}. In this manner, the pooling mechanism is collectively computed considering a group of pixels rather than a single one. This small variant might be a crucial factor in the estimation of the scene illuminant \cite{joze2012role}. A similar mechanism has also been deployed successfully in other applications such as shadow removal~\cite{finlayson2002removing}.
In practice, given the chosen \textit{x-percentage}, the \textit{top-x-percentage-pooling} can be implemented through a histogram-based clipping mechanism~\cite{finlayson2002removing,ebner2007color}. Let $H$ be the histogram of the input image $I$, and let $H_c(i)$ represent the number of pixels in colour channel $c$ with intensity $i \in \{0, \cdots, K_c\}$ (the histogram's domain). The scene illuminant $L_c$ is computed as: \begin{equation} L_c = \frac{1}{N_{k_c}} \sum_{i=k_c}^{K_c} i \cdot H_c(i), \label{eq:illuminant} \end{equation} where $N_{k_c}$ is the total number of pixels within the intensity range $k_c$ to $K_c$. The values of $k_c$, $c \in \{ R, G, B \}$, are determined from the chosen \textit{x-percentage} such that: \begin{equation} N_{k_c} = \sum_{i=k_c}^{K_c} H_c(i) = x_c \cdot P, \label{eq:hist} \end{equation} where $P$ is the total number of pixels in the image and $x_c$ is the chosen percentage for colour channel $c$. Within this formulation it is very cumbersome to define a universal, optimally-fixed percentage of pixels to be pooled \cite{ebner2007color}, and consequently the free variable $x$ requires specific tuning for each image or dataset individually. Naturally, this limitation restricts the usability of the \textit{top-x-percentage-pooling} operator. In the following sections we show how to automatically compute this percentage, based on the local contrast of each image. \subsection{Pooling Mechanisms in the Visual Cortex} We know from the physiology of the cerebral cortex that neurons in higher cortical areas pool information from lower areas over increasingly larger image regions. Although the exact pooling mechanism is yet to be discovered, ``winner-takes-all'' and ``sparse coding'' kurtotical behaviour are common to many groups of neurons all over the visual cortex~\cite{carandini2012normalization,olshausen1996emergence}, and it is conceivable that a mechanism analogous to max-pooling might be present within the cortical layers. Indeed, such behaviour has been discovered across a population of cells in the cat visual cortex \cite{lampl2004intracellular}, and the activation level of cells with max-like behaviour was reported to vary depending on the contrast of visual stimuli. Results reported by~\cite{lampl2004intracellular} suggest an inverse relationship between the contrast of a stimulus and the percentage of the signal pooled. When pooling neurons were exposed to low contrast stimuli, their responses shifted slightly away from pure max-pooling (selecting the highest activation response within a region) towards integrating over a larger number of highly activated neurons. In the language of computer vision, this can be regarded as \textit{top-x-percentage-pooling}, where \textit{x} assumes a smaller value at high contrast and a larger value at low contrast. Interestingly, the pooling of those neurons always remained much closer to max-pooling than to the linear integration of all neurons (sum-pooling)~\cite{lampl2004intracellular}. Mathematically, this can be interpreted as having a very small \textit{top-x-percentage-pooling} value $x$. It does not come as a great surprise that the pooling mechanism in the visual cortex depends on the stimulus' contrast. There is a large body of physiological studies showing that receptive fields (RF) of neurons are contrast variant (for a comprehensive review refer to~\cite{angelucci2013beyond}).
Quantitative results suggest that RFs in visual area one (V1) of awake macaques double their size when measured at low contrast~\cite{Shushruth2069}. Similar expansions have also been reported for the RFs of neurons in extrastriate areas, such as V2 and V4. This implies that a typical neuron in higher cortical areas that normally pools responses from its preceding areas over about three neighbouring spatial locations \cite{wilson2014configural} can access a substantially larger region to pool from in the presence of low contrast stimuli. This is in line with the reported pooling mechanism in the cat visual cortex~\cite{lampl2004intracellular}. \subsection{Contrast Variant Pooling} In order to model this contrast-variant-pooling mechanism, we first computed the local contrast $C$ of the input image $I$ at every pixel location by means of its local standard deviation, defined as: \begin{equation} C_c(x, y; \sigma) = \sqrt{ \frac{1}{\# \mathcal{N}_{\sigma}(x,y)} \sum_{ \substack{ (x', y') \in \mathcal{N}_{\sigma}(x,y) }} \big( I_c(x', y') - \mu_c (x, y) \big) ^2 } \label{eq:contrastcalculation} \end{equation} where $c$ indexes each colour channel; $(x,y)$ are the spatial coordinates of a pixel; $\mu_c$ is the local average of $I_c$ computed with an isotropic kernel of size $\sigma$; and $\mathcal{N}_{\sigma}(x,y)$ represents the neighbourhood centred on pixel $(x,y)$ of radius $\sigma$. To simulate this inverse relation between stimulus contrast and percentage of signal pooled \cite{lampl2004intracellular} in the \textit{top-x-percentage-pooling} operator, we determined the percentage $x_c$ in Eq.~\ref{eq:hist} as the average of the inverse local contrast signals: \begin{equation} x_c = \frac{1}{P} \sum_{x,y}\frac{1}{C_c(x,y; \sigma)}, \ \forall (x,y) \in \Omega, \label{eq:pestimation} \end{equation} where $C_c$ is computed from Eq. \ref{eq:contrastcalculation}, and $\Omega$ is the spatial image domain. In this fashion, instead of defining a fixed percentage of the signal (pixels) to be pooled (as in~\cite{finlayson2002removing}), we chose an adaptive percentage according to the contrast of the image. In terms of colour constancy, this effectively relates the number of pixels pooled to compute the scene illuminant to the average contrast of an image. We illustrated this mechanism of contrast-variant-pooling in the central panel of Figure~\ref{fig:v4pooling}, where red, green and blue signals correspond to the histogram of each chromatic channel. Pixels on the right side of the dashed lines ($k_c$) are pooled. In this example, contrast is higher for the red signal and therefore a smaller percentage of cells are pooled in the red channel. Bearing in mind that ``contrast'' is just a fraction in the range $[0,1]$ -- with $0$ characterising an absolutely uniform area and $1$ representing points with the highest contrast, \emph{e.g}\bmvaOneDot edges -- we can predict that the percentage $x$ will be a very small number for natural images, where homogeneous regions are likely to form the majority of the scene. Consequently, in our \textit{top-x-percentage-pooling} operator we always pool a small percentage. This is in agreement with the observations of~\cite{lampl2004intracellular}, which indicate that such a pooling mechanism is always much closer to max-pooling than to sum-pooling.
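The full CVP estimate can thus be sketched as follows. This is a minimal illustration of Eqs.~\ref{eq:illuminant}--\ref{eq:pestimation} rather than the exact implementation evaluated below: the floor \texttt{c\_floor} on the local contrast (to keep $1/C$ bounded) and the percentile-based selection of the brightest pixels are our own implementation assumptions.
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def cvp_illuminant(img, sigma=5, c_floor=0.01):
    """Per channel: local contrast, adaptive percentage x_c, then
    average of the top-x_c% brightest pixels."""
    est = np.zeros(img.shape[-1])
    for c in range(img.shape[-1]):
        I = img[..., c].astype(float)
        In = I / (I.max() + 1e-12)              # normalize to [0, 1]
        mu = uniform_filter(In, size=sigma)     # local mean
        var = uniform_filter(In ** 2, size=sigma) - mu ** 2
        C = np.sqrt(np.clip(var, 0.0, None))    # local std = contrast
        x = min(np.mean(1.0 / np.maximum(C, c_floor)), 100.0)
        k = np.percentile(I, 100.0 - x)         # top-x% threshold
        est[c] = I[I >= k].mean()               # pooled average
    return est / np.linalg.norm(est)
\end{verbatim}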
\subsection{Generalisation to Other Colour Constancy Models} There are a number of colour constancy models in the literature which are driven by low-level features and require a pooling mechanism on top of their computed feature maps in order to estimate the scene illuminant. In the Double-Opponency~\cite{7018983} algorithm, the feature map is computed by convolving a colour-opponent representation of the image with a DoG kernel, followed by a max-pooling operation. In the Grey-Edge~\cite{vandeWeijerTIP2007} model, higher-order derivatives of the image are calculated through its convolution with the first- and second-order derivatives of a Gaussian kernel. This is complemented by a Minkowski summation, which oscillates between sum- and max-pooling depending on its norm. Similar to the White-Patch algorithm, the pooling mechanism of these models can also be replaced with our \textit{top-x-percentage-pooling} operator, where \textit{x} is computed according to the local contrast of the image as explained above. The only difference is that instead of pooling from an intensity image (as in the case of the White-Patch algorithm), Double-Opponency and Grey-Edge pool over their respective feature maps. This means that Eq.~\ref{eq:illuminant} receives a feature map $M$ instead of an intensity image $I$ as input. \section{Experiments and Results} \label{sec:results} In order to investigate the efficiency of our model, we applied the proposed contrast-variant-pooling (CVP) mechanism to four different colour constancy algorithms whose source code was publicly available: White-Patch~\cite{retinax}, first-order Grey-Edge~\cite{vandeWeijerTIP2007}, second-order Grey-Edge~\cite{vandeWeijerTIP2007}, and Double-Opponency~\cite{7018983}. We simply replaced their max-pooling operator with our proposed pooling mechanism. To evaluate each method we used the recovery angular error, defined as: \begin{equation} \epsilon^{\circ} \left( e_e, e_t \right) = \arccos \left(\frac{<e_e , e_t>}{\Vert e_e \Vert \Vert e_t \Vert }\right) , \end{equation} where $< . >$ represents the dot product of the estimated illuminant $e_e$ and the ground truth $e_t$; and $\Vert . \Vert $ stands for the Euclidean norm of a vector. It is worth mentioning that this error measure might not correspond precisely to observers' preferences~\cite{vazquez2009color}; however, it is the most commonly used comparative measure in the literature. We also computed the reproduction angular error~\cite{finlayson2014reproduction} in all experiments (due to lack of space these results are not reported here). Readers are encouraged to check the accompanying supplementary materials. We conducted experiments on three benchmark datasets\footnote{All source code and materials are available in the supplementary submission.}: (i) SFU Lab (321 images)~\cite{barnard2002data}, (ii) Colour Checker (568 images)~\cite{ColourCheckerDB}, and (iii) Grey Ball (11,346 images)~\cite{CiureaF03}. In Table~\ref{tab:resultssfulab} we have reported the best median and trimean angular errors for each of the considered methods (these metrics were proposed by~\cite{hordley2006reevaluation} and~\cite{gijsenij2009perceptual}, respectively, to evaluate colour constancy algorithms, since they are robust to outliers). Mean angular errors are reported in the supplementary materials.
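For reference, the recovery angular error is a straightforward transcription of the definition above; the conversion to degrees (as reported in the tables) is the only addition.
\begin{verbatim}
import numpy as np

def recovery_angular_error(e_est, e_gt):
    """Angle (in degrees) between estimated and true illuminants."""
    cos = np.dot(e_est, e_gt) / (np.linalg.norm(e_est)
                                 * np.linalg.norm(e_gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
\end{verbatim}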
\begin{table}[ht] \setlength{\tabcolsep}{4.7pt} \centering \begin{tabular}{@{\hskip 0.05in}ll@{\hskip 0.02in}|c|c|c|c|c|c|} \cline{3-8} & & \multicolumn{2}{c|}{SFU Lab~\cite{barnard2002data}} & \multicolumn{2}{c|}{Colour Checker~\cite{ColourCheckerDB}} & \multicolumn{2}{c|}{Grey Ball~\cite{CiureaF03}} \\ \hline \multicolumn{1}{|@{\hskip 0.05in}l@{\hskip 0.02in}}{Method} & & Median & Trimean & Median & Trimean & Median & Trimean \\ \cline{1-8} \multicolumn{1}{|@{\hskip 0.05in}l@{\hskip 0.02in}}{\small White-Patch} & \cite{retinax} & 6.44 & 7.51 & 5.68 & 6.35 & 6.00 & 6.50 \\ \multicolumn{1}{|@{\hskip 0.05in}l@{\hskip 0.02in}}{\small Grey-Edge 1\textsuperscript{st}-order} & \cite{vandeWeijerTIP2007} & 3.52 & 4.02 & 3.72 & 4.76 & 5.01 & 5.80 \\ \multicolumn{1}{|@{\hskip 0.05in}l@{\hskip 0.02in}}{\small Grey-Edge 2\textsuperscript{nd}-order} & \cite{vandeWeijerTIP2007} & 3.22 & 3.65 & 4.27 & 5.19 & 5.72 & 6.39 \\ \multicolumn{1}{|@{\hskip 0.05in}l@{\hskip 0.02in}}{\small Double-Opponency} & \cite{7018983} & 2.37 & 3.27 & 3.46 & 4.38 & 4.62 & 5.28 \\ \hline \hline \multicolumn{2}{|@{\hskip 0.05in}l@{\hskip 0.02in}|}{\textbf{\small CVP White-Patch}} & \textbf{2.99} & \textbf{3.42} & \textbf{2.97} & \textbf{3.45} & \textbf{5.02} & \textbf{5.15} \\ \multicolumn{2}{|@{\hskip 0.05in}l@{\hskip 0.02in}|}{\textbf{\small CVP Grey-Edge 1\textsuperscript{st}-order}} & \textbf{3.29} & \textbf{3.72} & \textbf{2.48} & \textbf{2.79} & \textbf{4.70} & \textbf{5.17} \\ \multicolumn{2}{|@{\hskip 0.05in}l@{\hskip 0.02in}|}{\textbf{\small CVP Grey-Edge 2\textsuperscript{nd}-order}} & \textbf{2.85} & \textbf{3.13} & \textbf{2.59} & \textbf{2.93} & \textbf{4.65} & \textbf{5.05} \\ \multicolumn{2}{|@{\hskip 0.05in}l@{\hskip 0.02in}|}{\textbf{\small CVP Double-Opponency}}& \textbf{2.02} & \textbf{2.56} & \textbf{2.39} & \textbf{2.84} & \textbf{4.00} & \textbf{4.24} \\ \hline \end{tabular} \caption{Recovery angular errors of four colour constancy methods under max- and contrast-variant-pooling (CVP) on three benchmark datasets. Lower figures indicate better performance.} \label{tab:resultssfulab} \end{table} Figure~\ref{fig:qualres} illustrates three exemplary results obtained by the proposed contrast-variant-pooling (CVP) operator for two colour constancy models: White-Patch and the first-order Grey-Edge. Qualitatively, we can observe that CVP does a better job than max-pooling in estimating the scene illuminant. This is also confirmed quantitatively by the angular errors, shown at the bottom right corner of each computed output.
\begin{figure}[ht] \centering \begin{tabular}{@{\hskip 0.0in}c@{\hskip 0.04in}c@{\hskip 0.04in}c@{\hskip 0.04in}c@{\hskip 0.04in}c@{\hskip 0.04in}c@{\hskip 0.00in}} \includegraphics[width=0.159\linewidth]{images/SfuLab256.png} & \includegraphics[width=0.159\linewidth]{images/SfuLab256-GT.png} & \begin{overpic}[width=0.159\linewidth]{images/SfuLab256-WP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$9.73^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/SfuLab256-CVPWP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$1.83^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/SfuLab256-GE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$6.73^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/SfuLab256-CVPGE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$2.59^\circ$}}}}} \end{overpic} \\ \includegraphics[width=0.159\linewidth]{images/ColourChecker2.png} & \includegraphics[width=0.159\linewidth]{images/ColourChecker2-GT.png} & \begin{overpic}[width=0.159\linewidth]{images/ColourChecker2-WP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$11.39^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/ColourChecker2-CVPWP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$1.52^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/ColourChecker2-GE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$10.31^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/ColourChecker2-CVPGE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$2.34^\circ$}}}}} \end{overpic} \\ \includegraphics[width=0.159\linewidth]{images/GreyBall11342.png} & \includegraphics[width=0.159\linewidth]{images/GreyBall11342-GT.png} & \begin{overpic}[width=0.159\linewidth]{images/GreyBall11342-WP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$8.45^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/GreyBall11342-CVPWP.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$1.45^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/GreyBall11342-GE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$7.18^\circ$}}}}} \end{overpic} & \begin{overpic}[width=0.159\linewidth]{images/GreyBall11342-CVPGE.png} \put(68,6){\colorbox{white}{\parbox[c]{0.03\textwidth}{\textsc{\tiny{$1.69^\circ$}}}}} \end{overpic} \\ Original & Ground Truth & WP & CVP WP & GE1 & CVP GE1 \end{tabular} \caption{Qualitative results of White-Patch (WP) and the first-order Grey-Edge (GE1) under max- and contrast-variant-pooling (CVP). Angular errors are indicated on the bottom right corner of each panel. Images are from the SFU Lab, Colour Checker and Grey Ball datasets, respectively.} \label{fig:qualres} \end{figure} \subsection{Influence of the Free Parameters} For each free variable of the tested models we compared the performance of max-pooling to contrast-variant-pooling. White-Patch does not have any free variable; therefore, it is exempted from this analysis. In Figure~\ref{results_do} we have reported the impact of different $\sigma$s (receptive field size) on the Double-Opponency algorithm for the best and the worst results obtained by the free variable $k$ in each dataset (results for all $k$s are available in the supplementary material).
We can observe that in almost all cases contrast-variant-pooling outperforms max-pooling. The improvement is more tangible for the Colour Checker and Grey Ball datasets and at low $\sigma$s. \begin{figure}[ht] \centering \begin{tabular}{c c c} \includegraphics[width=.3\linewidth]{images/SfuLab_do_maxpooling_contrast-converted-to.pdf} & \includegraphics[width=.3\linewidth]{images/ColourChecker_do_maxpooling_contrast-converted-to.pdf} & \includegraphics[width=.3\linewidth]{images/GreyBall_do_maxpooling_contrast-converted-to.pdf} \\ SFU Lab~\cite{barnard2002data} & Colour Checker~\cite{ColourCheckerDB} & Grey Ball~\cite{CiureaF03} \end{tabular} \caption{The best and the worst results obtained by max- and contrast-variant-pooling for the free variables of the Double-Opponency~\cite{7018983} algorithm ($k$ and $\sigma$).} \label{results_do} \end{figure} Figure~\ref{results_ge} illustrates the impact of different $\sigma$s (Gaussian size) on the first- and second-order Grey-Edge algorithm. We can observe similar patterns as with Double-Opponency (contrast-variant-pooling outperforms max-pooling in practically all cases). This improvement is more significant at low $\sigma$s, for the Colour Checker dataset, and for the second-order derivative. It must be noted that the objective of this article was merely to study the performance of max-pooling and CVP on top of the Grey-Edge algorithm. However, the angular errors of our \textit{CVP Grey-Edge} happen to be on par with the best results reported for Grey-Edge (obtained by using the optimum Minkowski norm for each dataset~\cite{vandeWeijerTIP2007}), with the important caveat that CVP has no extra variables to be tuned, whereas in the Minkowski norm optimisation the value of $p$ must be hand-picked for each dataset. \begin{figure}[ht] \centering \begin{tabular}{c c c} \includegraphics[width=.3\linewidth]{images/SfuLab_ge_maxpooling_contrast-converted-to.pdf} & \includegraphics[width=.3\linewidth]{images/ColourChecker_ge_maxpooling_contrast-converted-to.pdf} & \includegraphics[width=.3\linewidth]{images/GreyBall_ge_maxpooling_contrast-converted-to.pdf} \\ SFU Lab~\cite{barnard2002data} & Colour Checker~\cite{ColourCheckerDB} & Grey Ball~\cite{CiureaF03} \end{tabular} \caption{Comparison of max- and contrast-variant-pooling for the free variable $\sigma$ of the Grey-Edge~\cite{vandeWeijerTIP2007} algorithm (both the first- and second-order derivatives).} \label{results_ge} \end{figure} From Figures~\ref{results_do} and~\ref{results_ge} we can observe that the greatest improvement occurs in the Colour Checker dataset. We speculate that one of the reasons for this is the larger range of intensity values in the Colour Checker dataset (16-bit) in comparison to the other two datasets, which contain 8-bit images; therefore, an inaccurate max-pooling is more severely penalised. \subsection{Discussion} \label{sec:discussion} We would like to emphasise that the objective of this article is not to improve the state-of-the-art in colour constancy, but to show that contrast-variant-pooling (CVP) almost always produces improvements over max-pooling. Surprisingly, the results we obtained are even competitive with the state-of-the-art. For instance, in the SFU Lab dataset, the lowest reported angular error is 2.1 (obtained by an Intersection-based Gamut algorithm~\cite{barnard2000improvements}). This means that our \textit{CVP Double-Opponency}, with an angular error of 2.0, outperforms the state-of-the-art in this dataset.
In the Colour Checker and Grey Ball datasets there are a few learning-based models (\emph{e.g}\bmvaOneDot the Exemplar-based method \cite{6588227}) that obtain lower angular errors in comparison to \textit{CVP Double-Opponency}; nevertheless, our results are comparable with theirs. Physiological evidence aside, the better performance of CVP can be explained intuitively by the fact that max-pooling relies merely on the peak of a function (or a region of interest), whereas in our model, pooling is defined collectively based on a number of elements near the maximum. Consequently, those peaks that are outliers and likely caused by noise get normalised by other pooled elements. The rationale within our model is to pool a larger percentage at low contrast since, in those conditions, peaks are not informative on their own, whereas at high contrast peaks are likely to be more informative and other irrelevant details must be removed (therefore a smaller percentage is pooled). Although the importance of choosing an appropriate pooling type has been demonstrated both experimentally~\cite{jarrett2009best,yang2009linear} and theoretically~\cite{boureau2010theoretical}, current standard pooling mechanisms lack the desired generalisation~\cite{murray2014generalized}. We believe that contrast-variant-pooling can offer a more dynamic and general solution. In this article, we evaluated CVP on the colour constancy phenomenon as a proof-of-concept; however, our formulation of CVP is generic (and based on local contrast) and in principle it can be applied to a wider range of computer vision algorithms, such as deep learning, where pooling is a decisive factor~\cite{scherer2010evaluation}. In our implementation of CVP, we approximated local contrast through the local standard deviation (see Eq.~\ref{eq:contrastcalculation}). There are at least two factors that require a more profound analysis: (i) incorporating more sophisticated models of contrast perception~\cite{haun2013perceived} accounting for extrema in human contrast sensitivity; and (ii) analysing the role of kernel size in the computation of local contrast. \section{Conclusion} \label{sec:conclusion} In this article, we presented a novel biologically-inspired contrast-variant-pooling (CVP) mechanism grounded in the physiology of the visual cortex. Our main contribution can be summarised as linking the percentage of pooled signal to the local contrast of the stimuli, \emph{i.e}\bmvaOneDot pooling a larger percentage at low contrast and a smaller percentage at high contrast. Our CVP operator always remains closer to max-pooling than to sum-pooling since natural images generally contain more homogeneous areas than abrupt discontinuities. We tested the efficiency of our CVP model in the context of colour constancy by replacing the max-pooling operator of four algorithms with the proposed pooling. After that, we conducted experiments on three benchmark datasets. Our results show that contrast-variant-pooling outperforms the commonly used max-pooling operator in nearly all cases. This can be explained by the fact that our model allows for more informative peaks to be pooled while suppressing less informative peaks and outliers. We further argued that the proposed CVP is a generic operator; thus, its application can be extended to a wider range of computer vision algorithms by offering a dynamic (and automatic) framework that is based on the local contrast of an image or a pixel.
This opens a multitude of possibilities for future research, and it remains an open question whether our model can reproduce these results in other domains as well. In particular, it is certainly interesting to investigate whether CVP can improve convolutional neural networks. \section*{Acknowledgements} This work was funded by the Spanish Secretary of Research and Innovation (TIN2013-41751-P and TIN2013-49982-EXP).
{ "attr-fineweb-edu": 1.897461, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction}\label{intro_section} We study in this article the optimal control of a first order Fokker-Planck equation with a reaction term, under congestion constraints. This kind of equation typically arises to model the evolution of a probability measure of a large population of agents. In this paper, the state of each agent is composed of a continuous variable and of a discrete variable. The optimal control problem we study can be interpreted heuristically as an approximation of the limit case $n\to\infty$ of an optimal switching problem of $n$ agents. Our work is motivated by the optimal switching control of a large population of agents, and more precisely by smart charging in electrical engineering \cite{seguret2021mean}. Each agent can represent a plug-in electric vehicle (PEV) aiming at charging its battery. The overall population of PEVs is controlled by a central planner. The continuous variable represents the level of battery of the PEV and the discrete variable the mode of charging (e.g. not charging, charging, discharging, etc.). Finally, the congestion constraint prevents a high demand of energy over the period. Combinatorial techniques as well as optimal control tools fail to solve problems with a large population of PEVs, due to the curse of dimensionality \cite{bellman2015adaptive}. To overcome these difficulties, a continuum of PEVs can be considered, leading to PDE optimal control techniques. Optimal control of a Fokker-Planck equation applied to smart charging can be found in \cite{le2015optimal,sheppard2017optimal}, and applied to the management of a population of thermostatically controlled loads in \cite{ghaffari2015modeling,moura2013modeling}. Throughout the article, we consider a finite horizon $[0,T]$ and a mixed state space equal to the product $[0,1]\times I$, where $I$ is a finite space, whose cardinality is denoted by $\vert I\vert $. We consider the uncontrolled velocity field $b$, which describes how agents move on the segment $[0,1]$, and the function $\alpha$, which is the control determining the jump intensity of the agents between the different modes in $I$. For any $(t,s,i,j)\in [0,T]\times [0,1]\times I\times I$, the value $\alpha_{i,j}(t,s)$ denotes the jump intensity of agents from state $(s,i)$ to state $(s,j)$ at time $t$. The control $\alpha$ is required to be a non-negative measurable function and to satisfy $\alpha_{i,i}=0$, meaning that agents cannot jump from a state to itself. The function $\alpha$ is determined by an aggregator. We highlight that all agents are controlled by the same function $\alpha$. We define $m$ such that for any $(t,s,i)\in [0,T]\times [0,1]\times I$, the value $m_i(t,s)$ represents the proportion of agents at time $t$ at state $(s,i)$. The pair $(\alpha,m)$ is the weak solution, in the sense of Definition \ref{weak_sol_cont_equ}, of the continuity equation on $[0,T]\times [0,1]\times I$: \begin{equation} \label{fk} \begin{array}{ll} \partial_t m_i(t,s)+\partial_s(m_i(t,s)b_i(s)) =-\sum_{j\neq i}(\alpha_{i,j}(t,s)m_i(t,s)-\alpha_{j,i}(t,s)m_{j}(t,s))& (i,t,s)\in I\times (0,T)\times (0,1), \\ m_i(0,s) = m_i^0(s) & (i,s)\in I\times [0,1], \end{array} \end{equation} where the initial distribution $m^0$ is given. This equation is a first order Fokker-Planck equation, where the right-hand side is a reaction term. As mentioned above, we consider congestion constraints on the total mass per mode $i\in I$ at any $t\in [0,T]$.
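Let us note in passing a property that we shall use implicitly: equation \eqref{fk} formally conserves the total mass (for weak solutions, this corresponds to Definition \ref{weak_sol_cont_equ} below tested against a constant function). Indeed, summing the equation over $i\in I$ and integrating over $[0,1]$ yields
\begin{equation*}
\frac{\mbox{d}}{\mbox{d}t}\sum_{i\in I}\int_0^1 m_i(t,s)\,ds = -\sum_{i\in I}\big[m_i(t,\cdot)b_i\big]_{s=0}^{s=1} - \sum_{i\in I}\sum_{j\neq i}\int_0^1\left(\alpha_{i,j}(t,s)m_i(t,s)-\alpha_{j,i}(t,s)m_{j}(t,s)\right)ds = 0,
\end{equation*}
since $b_i$ vanishes on the boundary (see Assumption \ref{hyp_on_b} below) and the reaction terms cancel pairwise after exchanging the roles of $i$ and $j$. Hence $m(t,\cdot)$ remains a probability measure on $[0,1]\times I$ for all $t$, while the individual masses $m_i(t,[0,1])$ do evolve through the jump terms; these are precisely the quantities constrained below.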
The constraint serves to avoid synchronization effects and to limit the proportion of agents in each mode $i\in I$ at any time. Note that this constraint introduces interactions between agents in our model. In the limit case $n\to \infty$, the constraint is of the form: \begin{equation} \label{congestion_ineq_D} m_i(t,[0,1])\leq D_i(t)\quad \forall(i,t)\in I\times [0,T], \end{equation} where $D_i>0$ is given. The objective function $J$ is defined as follows: \begin{equation} \label{obj_formulation} J(m,\alpha) := \sum_{i\in I}\int_{0}^{T}\int_0^1\Big(\sum_{j\in I,j\neq i} L(\alpha_{ij}(t,s)) + c_{i}(t,s)\Big)m_i(t,ds)dt + \sum_{i\in I}\int_0^1g_i(s)m_i(T,ds), \end{equation} where the function $L:\mathbb{R}\mapsto \bar{\mathbb{R}}_+$ is defined by: \begin{equation} \label{definition_l} L(x):=\left\{ \begin{array}{ll} \frac{x^2}{2} & \mbox{if }x\geq 0\\ +\infty & \mbox{otherwise.} \end{array} \right. \end{equation} The cost function $L$ penalizes high values of $\alpha$; it aims at avoiding multiple jumps of agents between the elements of the set $I$. The value $c_i(t,s)$ corresponds to the cost per agent of being at state $(s,i)$ at time $t\in [0,T)$, while $g_i(s)$ is the final cost per agent of being at state $(s,i)$. Regularity assumptions on $c$ and $g$ will be introduced later. Our purpose is to study the optimization problem: \begin{equation} \label{opt:J} \begin{array}{l} \inf_{m,\alpha}\,J(m,\alpha) \\ (\alpha,m)\mbox{ satisfies }\eqref{fk}\mbox{ and }\eqref{congestion_ineq_D}. \end{array} \end{equation} \subsection{Motivations} We briefly present in this subsection an optimal switching problem of $n$ agents; Problem \eqref{opt:J} can be interpreted heuristically as an approximation of the limit case $n\to\infty$ of this problem. We consider $n$ controlled agents $x^1,\ldots,x^n$ over the period $[0,T]$. The state of the $k^{th}$ agent at time $t$ is denoted by $x^k(t):=(q^k(t),z^k(t))$ and is composed of a continuous variable $q^k(t)\in [0,1]$ and a discrete one $z^k(t)\in I$. A strategy is a couple $(\tau,\iota)$, where $\tau$ is composed of $n$ sequences (one sequence per agent) of stopping times in $[0,T]$ and $\iota$ is composed of $n$ sequences (also one sequence per agent) with values in $I$. The dynamics of the state of the $k^{th}$ agent are controlled as follows: \begin{equation*} \frac{\mbox{d}q^k(t)}{\mbox{d}t}=b_{z^k(t)}(q^k(t)) \mbox{ and }z^k(t)=\sum_{h=0}^\infty\iota^k_h\mathbbm{1}_{[\tau^k_h,\tau^k_{h+1})}(t). \end{equation*} The stopping time $\tau^k_h\in [0,T]$ is the time of the $h^{th}$ jump of the $k^{th}$ agent and $\iota^k_h$ its destination. The cost of a strategy is given by: \begin{equation*} J^n(\tau,\iota):= \frac{1}{n}\sum_{k=1}^{n}\left(\int_0^Tc_{z^k(t)}(t,q^k(t))dt + g_{z^k(T)}(q^k(T))\right)+\frac{1}{n}P(\iota,\tau), \end{equation*} in which $P$ is the cost of switching between modes, aiming at avoiding a large number of jumps and synchronization effects between agents. This function will be determined in a later work. The functions $c$ and $g$ are those appearing in the definition of $J$ in \eqref{obj_formulation}. The goal of the switching problem is to solve: \begin{equation} \label{prob_n_processes} \begin{array}{l} \inf_{\tau,\iota}\,J^n(\tau,\iota) \\ \mbox{s.t. } \frac{1}{n}\sum_{k=1}^n \mathbbm{1}_i(z^k(t))\leq D_i(t)\mbox{ for any }(i,t)\in I\times [0,T]. \end{array} \end{equation} The reader can refer to \cite[Section 4.4]{bardi2008optimal} for an introduction to optimal switching.
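The link between \eqref{prob_n_processes} and Problem \eqref{opt:J} is most easily read on the empirical measure of the $n$ agents. As a purely formal observation (the notation $m^n$ is introduced only for this remark), set
\begin{equation*}
m^n_i(t,ds) := \frac{1}{n}\sum_{k=1}^{n}\mathbbm{1}_i(z^k(t))\,\delta_{q^k(t)}(ds),\qquad (i,t)\in I\times [0,T].
\end{equation*}
With this notation, the constraint in \eqref{prob_n_processes} reads $m^n_i(t,[0,1])\leq D_i(t)$, which is exactly \eqref{congestion_ineq_D}, and the first part of $J^n$ coincides with the terms of $J$ in \eqref{obj_formulation} involving $c$ and $g$, evaluated at $m^n$.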
Heuristically, Problem \eqref{opt:J} can thus be interpreted as a formulation of \eqref{prob_n_processes} in the limit $n\to\infty$. The connection between the two problems will be addressed in a later work. Note that the mean field behaviour of interacting and controlled processes has been investigated, in deterministic and stochastic settings, in \cite{cavagnari2020lagrangian, lacker2017limit} and the references therein. As mentioned above, this problem is motivated by its application to smart charging. Each agent represents an electric vehicle aiming at charging its battery. The continuous variable $q$ represents the level of battery of the electric vehicle and the discrete variable $z$ the mode of charging (e.g. not charging, charging, discharging, etc.). Transfers from one mode of charging to another are penalized through the cost $P$ in order to avoid multiple switches, synchronization effects and battery aging. The goal of the final cost $g$ is to penalize low battery levels at the end of the period, while the function $c$ can represent a cost of electricity for power consumption. Finally, the constraint in \eqref{prob_n_processes} prevents a high demand of energy over the period. \subsection{Contributions and literature} One of our main results states the existence of solutions of \eqref{opt:J} and gives optimality conditions, by using classic tools of optimization and convex duality theory \cite{ekeland1999convex}. More precisely, we show that if $(m,\alpha)$ is a solution to \eqref{opt:J}, then there exists a pair $(\varphi, \lambda)$ such that for any $i,j\in I$, $\alpha_{i,j}=(\varphi_i-\varphi_j)^+$ and $(\varphi,\lambda,m)$ is a weak solution of the following system: \begin{equation} \label{syst_opt_cond_intro} \left\{ \begin{array}{ll} -\partial_t\varphi_i-b_i\partial_s\varphi_i-c_i-\lambda_i+\sum_{j\in I,j\neq i}H(\varphi_j-\varphi_i)= 0 &\mbox{on }(0,T)\times (0,1)\times I \\ \partial_t m_i+\partial_s(m_i b_i) +\sum_{j\neq i}((\varphi_i-\varphi_j)^+m_i -(\varphi_j-\varphi_i)^+m_{j}) =0 &\mbox{on }(0,T)\times (0,1)\times I\vspace{0.2cm}\\ m_i(0,s) = m_i^0(s),\, \varphi_i(T,s) = g_i(s) &\mbox{on } (0,1)\times I \\ \int_0^1m_i(t,s)ds-D_i(t)\leq 0,\,\lambda_i\geq 0 &\mbox{on }[0,T]\times I\\ \sum_{i\in I}\left(\int_0^T\int_0^1 m_i(t,ds)\lambda_i(dt) - \int_0^T D_i(t)\lambda_i(dt)\right)=0 \end{array} \right. \end{equation} The function $\varphi$ is the multiplier associated with the dynamic constraint \eqref{fk}, and $\lambda$ is associated with the congestion constraint \eqref{congestion_ineq_D}. The first equation is a backward Hamilton-Jacobi equation, where $H$ is defined by $H(y):=\frac{1}{2}(y^-)^2$. Existence, uniqueness and characterization of weak solutions of the backward Hamilton-Jacobi equation are investigated in the paper. The second equation is a forward Fokker-Planck equation, similar to \eqref{fk}, where the control $\alpha$, defined by $\alpha_{i,j}=(\varphi_i-\varphi_j)^+$, is optimal. The measure $\lambda$ is non-negative and finite, and the last equality in \eqref{syst_opt_cond_intro} is a complementarity condition: together with the fourth line, it ensures that the congestion constraint \eqref{congestion_ineq_D} is satisfied and that $\lambda_i$ only charges the times at which it is saturated. Moreover, one of our main contributions is a regularity property for any weak solution $(\lambda,\varphi,m)$ of \eqref{syst_opt_cond_intro}. We prove that, under suitable assumptions on the data $m^0,b,g$ and $c$, the function $\varphi$ is continuous for a.e. $t$ in $[0,T]$ and $\partial_s\varphi\in L^\infty((0,T)\times I,C^0([0,1]))$.
In addition, for any $i\in I$ the measure $m_i$ is a Lipschitz continuous function on $[0,T]\times [0,1]$. Systems such as \eqref{syst_opt_cond_intro} typically arise in Mean Field Game theory (MFG for short). This class of problems, introduced by Lasry and Lions \cite{lasry2006stat, lasry2006jeux, lasry2007mean} and Huang, Malhamé and Caines \cite{huang2007large,huang2006large}, describes the interactions within a large population of identical and rational agents in competition. The duality approach adopted to obtain \eqref{syst_opt_cond_intro} consists in relaxing the dynamics \eqref{fk} and the congestion constraint \eqref{congestion_ineq_D}. The resulting relaxed problem is then expressed as the dual of another convex problem. We show that the system \eqref{syst_opt_cond_intro} is an optimality condition of both problems. Solving the optimal control of a Fokker-Planck equation by means of duality theory has been well known for a few decades \cite{fleming1989convex, vinter1993convex}. Our work follows the method developed in the seminal work of Benamou and Brenier \cite{benamou1998optimal} for optimal transport problems. In \cite{benamou1998optimal}, a Fokker-Planck equation is controlled with initial and final constraints, and optimality conditions are obtained as a system of PDEs close to \eqref{syst_opt_cond_intro}. Similar methods and results, still in optimal transport, are derived in \cite{cardaliaguet2013geodesics}. The duality approach adopted in the paper is close to methods used in MFG theory, as in \cite{cardaliaguet2015mean}, where existence and uniqueness of the weak solution of the MFG system are proved, and the solution is characterized as the minimizer of some optimal control of Hamilton-Jacobi and Fokker-Planck equations. This approach enables the use of optimization techniques to prove existence and uniqueness of the solution of the MFG system, as well as for Mean Field Control (MFC for short) problems. We refer to \cite{achdou2016mean,benamou2017variational,briceno2018proximal,cardaliaguet2015second,orrieri2019variational} and the references therein. The variational approach also allows optimization algorithms to be applied to solve MFG problems numerically \cite{ benamou2015augmented,briceno2019implementation, briceno2018proximal}. Note that different optimality conditions, for control problems in the space of probability measures, can be derived by using a kind of Pontryagin Maximum Principle \cite{bonnet2019pontryagin}. The paper deals with a congestion constraint \eqref{congestion_ineq_D} on the measure. Two kinds of congestion effects have been explored in the MFG and MFC frameworks: on the one hand, "soft constraints", which increase the cost of the agents' velocity in areas with high density; on the other hand, "hard congestion", which imposes density constraints, e.g. $m\leq \bar{m}$ at any point $(t,s)$. The variational approach shows good results when applied to MFC \cite{achdou2016mean} and to MFG with "soft congestion" in a stationary framework \cite{evangelista2018first}, as well as to MFG problems dealing with "hard congestion" constraints. The latter were first investigated in \cite{santambrogio2011modest}, where the density of the population does not exceed a given threshold, and then in \cite{meszaros2015variational}, where stationary second order MFGs are considered. In \cite{cardaliaguet2016first} a price, imposed on the saturated zone to make the density satisfy the constraints, is obtained.
In the same vein as the work of Benamou and Brenier \cite{benamou1998optimal}, "hard congestion" constraints are also examined in optimal transport \cite{buttazzo2009optimization}. We highlight that our paper deals with the aggregated "hard congestion" constraint \eqref{congestion_ineq_D} on the measure $m$, i.e. our constraint is less restrictive than a constraint of the type $m\leq \bar{m}$ a.e. We consider a mixed state space, with continuous and discrete state variables. To the best of our knowledge, these settings have barely been investigated in the MFG literature; e.g., the articles cited above only consider continuous state variables. The resulting Fokker-Planck equation \eqref{fk} contains a reaction term, indicating mass transfers between the states in $I$. Such a PDE also arises in \cite{annunziato2014connection}, to model the mean field limit of Piecewise Deterministic Markov Processes (PDMP for short). The velocity is controlled in \cite{annunziato2014connection}, while we control the jump intensity $\alpha$ (the velocity $b$ is given). A discrete time and state space MFG problem is explored in \cite{gomes2010discrete}. The uniqueness of the solution of a finite state MFG is discussed in \cite{bayraktar2021finite}. A mixed state space in an MFG framework can be found in \cite{firoozi2017mean}, where a major player can switch his state on a finite state space and minor players decide their stopping time. An MFG problem in a finite state space and discrete time setting with "hard congestion" has been studied in \cite{bonnans2021discrete}, also using variational methods. Concerning the regularity results, let us point out that the regularity of solutions of \eqref{syst_opt_cond_intro} is unusual, and we believe that it is mainly due to the linearity of the Hamilton-Jacobi equation w.r.t. $\partial_s\varphi$, and to the regularity assumptions on $b$, which allow the method of characteristics to be used to solve the PDEs. These results will be useful in a later work to quantify the mean field limit assumption of the model. The time regularity of $m$ and $\varphi$ cannot be improved as long as no further regularity is available for $\lambda$: the function $\varphi$ is discontinuous at each atom of the measure $\lambda$. Regularity results about the multiplier of the density constraint can be found in the literature: in \cite{cardaliaguet2016first} the authors show some BV estimates on the pressure, whereas $L^\infty$ estimates for the price have been proved in \cite{lavenant2019new}, in the special case of a quadratic Hamiltonian. Sobolev regularity for the solution of a first order MFG has been established in \cite{lions2007theorie} and improved in \cite{santambrogio2018regularity}; see also \cite{graber2018sobolev}. The paper is organized as follows. In the rest of Section \ref{intro_section} we present our assumptions and main results. Problem \eqref{opt:J} is analyzed in Section \ref{prob_formulation_section}, where we also give some regularity results on the solution $m$ of \eqref{fk}. We study in detail the Hamilton-Jacobi equation of the system \eqref{syst_opt_cond_intro} in Section \ref{hjb_section}. In Section \ref{dual_prob_section}, the variational approach to Problem \eqref{opt:J} is developed. We prove our main result in Section \ref{charac_mini}. Finally, we recall basic statements about weak solutions in Appendix \ref{appendices_section}. \subsection{Assumptions} \label{sub_of_assum} The following assumptions are in force throughout the paper.
\begin{enumerate} \item \label{hyp_on_b} For any $i\in I$, $b_i\in C^2(\mathbb{R})$ with $b_i(s)=0$ for any $s \not\in (0,1)$. \item \label{hyp_on_m_0} $m^0$ is a probability measure on $[0,1]\times I$, absolutely continuous w.r.t. the Lebesgue measure, with a density denoted in the same way. We assume that for any $i\in I$: $m_i^0\in C^1(\mathbb{R})$, with $\supp(m_i^0)\subset [0,1]$. \item \label{hyp_on_D} For any $i\in I$, $D_i\in C^0([0,T])$ and there exists $\varepsilon^0>0$ such that for any $i\in I$ and $t\in [0,T]$: \begin{equation} \label{ineq_for} \varepsilon^0< D_i(t) - \int_0^1m_i^0(s)ds. \end{equation} \item \label{hyp_on_c_g} For any $i\in I$, it is assumed that $c_i\in C^1([0,T]\times [0,1])$ and $g_i\in C^1([0,1])$. \end{enumerate} The main role of Assumptions \ref{hyp_on_b} and \ref{hyp_on_m_0} is to ensure that the population of agents remains concentrated on $[0,1]$. Inequality \eqref{ineq_for} of Assumption \ref{hyp_on_D} is used to show the existence of a solution of the optimization problem, whose optimality condition is the system \eqref{syst_opt_cond_intro}. Finally, the regularity results for the weak solutions of the system \eqref{syst_opt_cond_intro} are derived thanks to the assumptions formulated on $c$ and $g$ in Assumption \ref{hyp_on_c_g}. \subsection{Notations} The space of positive and bounded measures on a space ${A}$ is denoted by $\mathcal{M}^+({A})$ and the space of probability measures by $\mathcal{P}({A})$. For any measure $\mu\in \mathcal{M}([0,T])$ and $0\leq t_1<t_2\leq T$, we set $\textstyle \int_{t_1}^{t_2}\mu(dt):=\mu([t_1,t_2))$. Given a finite vector space $\mathcal{S}$, for any function $f$ defined on $I\times \mathcal{S}$ we use the notation $f_i(x):=f(i,x)$ for any $(i,x)\in I\times \mathcal{S}$. Similarly, for any function $f$ defined on $I\times I\times \mathcal{S}$ we use the notation $f_{i,j}(x):=f(i,j,x)$ for any $(i,j,x)\in I\times I \times \mathcal{S}$. The Wasserstein distance on $\mathcal{P}([0,1]\times I)$ is denoted by $W_1$. Given a finite vector space $\mathcal{S}$, let $\Lip(\mathcal{S})$ denote the vector space of bounded functions $f:\mathcal{S}\to \mathbb{R}$ such that the Lipschitz constant of $f$, defined by $\sup\{\vert f(x)-f(y) \vert/ \Vert x-y \Vert \,\vert\, x, y\in \mathcal{S},x\neq y\}$, is finite. Let $L^{\infty}((0,T)\times I\times I,\Lip([0,1]))$ be the vector space of measurable maps $f:(0,T)\times (0,1)\times I\times I\to \mathbb{R}$ such that there exists a finite constant $c_f>0$ for which, for a.e. $t\in (0,T)$ and any $i,j\in I$, $\Vert f_{i,j}(t,\cdot)\Vert_\infty \leq c_f$ and $f_{i,j}(t,\cdot)$ is Lipschitz continuous on $[0,1]$ with Lipschitz constant $c_f$. For any $\mu \in C^0([0,T],\mathcal{P}(\mathbb{R}))$, we define the set $L^2_\mu([0,T]\times \mathbb{R}) \textstyle :=\{f:[0,T]\times \mathbb{R}\mapsto \mathbb{R},\, \int_0^T\int_0^1f(t,s)^2\mu(t,ds)dt<+\infty\}$. The norm $\Vert \cdot \Vert_1$ is defined for any $f\in L^1([0,T]\times [0,1]\times I)$ by $\Vert f \Vert_1 = \textstyle \sum_{i\in I}\int_0^T\int_0^1\vert f_i\vert$. The dual of a normed space $X$ is denoted by $X^\ast$. We denote for any $s\in[0,1]$, $i\in I$ and $t\in [0,T]$ by $S_i^{t,s}$ the unique solution on $[0,T]$ of: \begin{equation} \label{ODE} S_i^{t,s}(t)=s,\quad \frac{\mbox{d}S_i^{t,s}(\tau)}{\mbox{d}\tau}=b_i(S_i^{t,s}(\tau)),\quad \tau\in [0,T]. \end{equation}
\subsection{Main results} We introduce, for a given $\lambda\in \mathcal{M}^+([0,T]\times I)$, the Hamilton-Jacobi equation on $(0,T)\times (0,1)\times I$: \begin{equation} \label{hjb_0} \begin{array}{ll} - \partial_ t\varphi_i(t,s)-b_i(s)\partial_s\varphi_i(t,s)-c_i(t,s)-\lambda_i(t)+\sum_{j\in I,j\neq i}H((\varphi_j-\varphi_i)(t,s))= 0 & (t,s,i)\in (0,T)\times (0,1)\times I,\\ \varphi_i(T,s)= g_i(s) & (s,i)\in [0,1]\times I. \end{array} \end{equation} We denote by $\mathcal{R}_0$ the set of weak solutions of \eqref{hjb_0}, in the sense of Definition \ref{def_weak_sol_hjb}, and consider the following problem: \begin{equation} \label{relaxed_problem_0} \inf_{(\varphi,\lambda)\in \mathcal{R}_0}\tilde{A}(\varphi,\lambda), \end{equation} where: \begin{equation} \label{def_tilde_A_0} \tilde{A}(\varphi,\lambda):= \sum_{i \in I}\left(-\int_0^1 \varphi_i(0,s)m_i^0(ds) + \int_0^TD_i(t)\lambda_i(dt)\right). \end{equation} We can now state our main result. \begin{theorem}\label{main_results} Problem \eqref{opt:J} has a solution $(m,\alpha)$, and Problem \eqref{relaxed_problem_0} also has a solution $(\varphi,\lambda)$, where $\varphi\in L^\infty([0,T]\times [0,1]\times I)$ and $\partial_s\varphi\in L^\infty((0,T)\times I,C^0([0,1]))$. In addition, we have the following characterization of the minimizers: \begin{enumerate} \item\label{min_imp_wea} If $(m,\alpha)$ is a minimizer of Problem \eqref{opt:J} and $(\varphi,\lambda)\in \mathcal{R}_0$ a minimizer of Problem \eqref{relaxed_problem_0}, then $(\varphi, \lambda, m)$ is a weak solution of \eqref{syst_opt_cond_intro}, in the sense of Definition \ref{weak_sol_syst}, and $\alpha_{i,j}=(\varphi_i-\varphi_j)^+$ on $\{m_i>0\}$ for any $i,j\in I$. \item \label{wea_imp_min} Conversely, if $(\varphi,\lambda,m)$ is a weak solution of \eqref{syst_opt_cond_intro} in the sense of Definition \ref{weak_sol_syst}, then $(\varphi, \lambda)\in \mathcal{R}_0$ is a minimizer of Problem \eqref{relaxed_problem_0} and there exists $\alpha$, defined for any $i,j\in I$ by $\alpha_{i,j}:=(\varphi_i-\varphi_j)^+$ on $\{m_i>0\}$, such that $(m,\alpha)$ is a minimizer of \eqref{opt:J}. \item \label{min_imp_reg} If $(m,\alpha)$ is a minimizer of Problem \eqref{opt:J}, then for any $i,j\in I$ it holds $\alpha_{i,j}\in L^\infty((0,T), \Lip([0,1]))$, and $m\in\Lip([0,T]\times [0,1]\times I)$. \end{enumerate} \end{theorem} \begin{remark} For the sake of simplicity, we have defined $L$ in \eqref{definition_l}. However, the results in this paper still hold for any function $L: \mathbb{R}\to \bar{\mathbb{R}}_+$ satisfying: \begin{enumerate} \item $L$ is a lower semi-continuous and convex function and $\dom\,L=\mathbb{R}_+$. \item The function $H$, defined by $H(x):=L^\ast(-x)$, is non-increasing and differentiable, and is such that its Fenchel conjugate $H^\ast$, defined by $H^\ast(x):=L(-x)$, is essentially strictly convex \cite[Theorems 26.1, 26.3]{rockafellar2015convex}. \item There exist $r>1$ and $C>0$ such that for any $x\in \mathbb{R}_+$ we have: \begin{equation*} \frac{x^r}{rC}-C\leq L(x)\leq \frac{C}{r}x^r+C. \end{equation*} \end{enumerate} \end{remark} The existence of a solution of \eqref{opt:J} is stated in Lemma \ref{Existence_sol_prob_init} in Section \ref{prob_formulation_section}. In Section \ref{dual_prob_section}, Theorem \ref{existence_solution_relaxed_problem} proves the existence of a solution of \eqref{relaxed_problem_0}. These results are obtained by classical techniques in convex optimization.
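Before turning to the proofs, let us record an elementary computation behind the Hamiltonian $H$ of \eqref{hjb_0}: by the conjugacy relation $H(y)=L^\ast(-y)$ recalled in the Remark above, for the quadratic $L$ of \eqref{definition_l} we get
\begin{equation*}
H(y)=\sup_{x\geq 0}\left(-yx-\frac{x^2}{2}\right)=\frac{1}{2}\left(y^{-}\right)^2,
\end{equation*}
the supremum being attained at $x^\ast=(-y)^+$. Applied with $y=\varphi_j-\varphi_i$, the maximiser is $(\varphi_i-\varphi_j)^+$, which explains the feedback formula for the optimal jump intensity $\alpha_{i,j}$ in points \ref{min_imp_wea} and \ref{wea_imp_min} of Theorem \ref{main_results}.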
The characterization of the minimizers is given by Theorem \ref{optimality_conditions} in Section \ref{charac_mini}. A variational approach is used to deduce this characterization: we introduce a convex problem whose dual is, up to a change of variables, Problem \eqref{opt:J} (see Theorem \ref{pb_in_duality}), and Problem \eqref{relaxed_problem_0} is a relaxed version of this convex problem. The Lipschitz continuity of $m$ is deduced from the regularity of $\varphi$, derived in Section \ref{hjb_section}, and from Lemma \ref{regularity_m}. \section{Variational problem}\label{prob_formulation_section} \label{prob_formulation} \begin{definition} \label{weak_sol_cont_equ} A pair $(\alpha,m)$ satisfies \eqref{fk} in the weak sense if $t\in [0,T]\mapsto m(t,\cdot)\in \mathcal{P}(\mathbb{R}\times I)$ is continuous, for any $i,j\in I$ with $i\neq j$ it holds $\alpha_{i,j}\in L_{m_i}^2([0,T]\times \mathbb{R})$, and for any test function $\phi \in C^\infty_c([0,T]\times \mathbb{R}\times I)$ we have: \begin{equation*} \begin{array}{l} \sum_{i\in I}\int_{\mathbb{R}} \phi_i(T,s)m_i(T,ds) - \phi_i(0,s)m_i^0(ds) \\ = \int_0^T\int_{\mathbb{R}} \sum_{i\in I} (\partial_t\phi_i(t,s) + b_i(s)\partial_s\phi_i(t,s))m_i(t,ds) + \sum_{j\in I, j\neq i}(\phi_j(t,s)-\phi_i(t,s))\alpha_{i,j}(t,s)m_i(t,ds)dt. \end{array} \end{equation*} \end{definition} \begin{remark} Recalling Assumptions \ref{hyp_on_b} and \ref{hyp_on_m_0}, Lemma \ref{supportm} in Appendix \ref{appendix_weak_sol_analysis} states that for any weak solution $(\alpha,m)$ of \eqref{fk}, in the sense of Definition \ref{weak_sol_cont_equ}, the measure $m_i(t,\cdot)$ has its support included in $[0,1]$ for any $(t,i)\in [0,T]\times I$. Thus, throughout the paper, we will only consider weak solutions $(\alpha,m)$ of \eqref{fk} such that $m(t,\cdot)\in \mathcal{P}([0,1]\times I)$ for any $t\in [0,T]$. \end{remark} Since Problem \eqref{opt:J} is not convex w.r.t. the variables $(m,\alpha)$, we make the change of variables $E:=\alpha m$; this is the classical Benamou-Brenier convexification \cite{benamou1998optimal}, the map $(m,E)\mapsto L\big(\frac{\mbox{d}E}{\mbox{d}m}\big)m$ being jointly convex. We now rewrite the continuity equation \eqref{fk} for any $i\in I$: \begin{equation} \label{evoX} \begin{array}{ll} \partial_t m_i(t,s)+\partial_s(m_i(t,s)b_i(s)) =-\sum_{j\in I,j\neq i}(E_{i,j}(t,s)- E_{j,i}(t,s))& (i,t,s)\in I\times (0,T)\times (0,1)\\ m_i(0,s) = m_i^0(s)& (i,s)\in I\times [0,1], \end{array} \end{equation} where $E_{i,j}\in\mathcal{M}^+([0,T]\times[0,1])$, with first marginal equal to the Lebesgue measure on $[0,T]$, and such that $E_{i,j}(t,\cdot) \ll m_i(t,\cdot)$ with $\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i} = \alpha_{i,j}$ and $\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i} \in L_{m_i}^2([0,T]\times \mathbb{R})$. For any initial distribution, absolutely continuous w.r.t. the Lebesgue measure, with density satisfying $m^0\in C^1([0,1]\times I)$, and any $D\in C^0([0,T]\times I)$, we introduce the set: \begin{equation} \begin{array}{ll} \label{def_set_CE} CE(m^0,D):=&\left\{ (m,E)\mbox{ such that }(m,\alpha) \mbox{ satisfies }\eqref{evoX} \mbox{ in the weak sense, where }\alpha_{i,j}:= \frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}, \right.\\ &\mbox{ with additional constraints:} \left. \int_0^1m_i(t,ds)\leq D_i(t)\quad\forall (i,t)\in I\times [0,T], \mbox{ and }\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}\geq 0\right\}. \end{array} \end{equation} Throughout the paper, $\rho$ denotes the function such that $(\rho,0)$ is the weak solution of \eqref{fk}. One can easily show that for any $i\in I$ it holds $\rho_i\in C^1([0,T]\times [0,1])$ and, for any $t\in [0,T]$, $\int_0^1\rho_i(t,s)ds=\int_0^1 m_i^0(s)ds<D_i(t)$.
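For completeness, let us justify the mass identity by the same formal computation as in the introduction: when $\alpha=0$, the reaction terms in \eqref{fk} vanish, so each per-mode mass is conserved,
\begin{equation*}
\frac{\mbox{d}}{\mbox{d}t}\int_0^1 \rho_i(t,s)\,ds = -\big[\rho_i(t,\cdot)b_i\big]_{s=0}^{s=1} = 0,
\end{equation*}
$b_i$ vanishing on the boundary; the strict inequality $\int_0^1 m_i^0(s)ds<D_i(t)$ is then a direct consequence of \eqref{ineq_for} in Assumption \ref{hyp_on_D}.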
Then, it follows that $(\rho,0)\in CE(m^0,D)$. We define the function $\tilde{B}$ for any $(m,E)\in CE(m^0,D)$ by: \begin{equation} \label{def_B} \tilde{B}(E,m): = \sum_{i\in I} \int_0^T\int_0^1 c_i(t,s) m_i(t,ds) + \sum_{j\in I,j\neq i}L \left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)m_i(t, ds)dt + \sum_{i\in I}\int_0^1 g_i(s)m_i(T,ds), \end{equation} where the function $L$ is defined in \eqref{definition_l}. The following optimization problem is considered: \begin{equation} \label{problemE} \underset{(m,E)\in CE(m^0,D)}{\inf}\,\tilde{B}(E,m). \end{equation} From Assumption \ref{hyp_on_c_g}, we deduce that the quantity $\tilde{B}(E,m)$ is finite for any $(m,E)\in CE(m^0,D)$. For any $\gamma >0$, we denote by $CE_\gamma(m^0,D)$ the subset of $CE(m^0,D)$ whose elements $(m,E)$ satisfy: \begin{equation} \label{ineqGamma0} \sum_{(i,j)\in I,i\neq j}\int_0^T\int_0^1 L \left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)m_i(t, ds)dt= \sum_{(i,j)\in I,i\neq j}\int_0^T\int_0^1 \frac{1}{2}\left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)^2m_i(t, ds)dt \leq \gamma. \end{equation} The next lemma provides a Hölder regularity property of $m$, for any $(m,E)\in CE_\gamma(m^0,D)$. \begin{lemma} \label{UnifCont} For any $\gamma> 0$, there exists a positive constant $C_\gamma$ such that, for any $(m,E)\in CE_\gamma(m^0,D)$, the map $t\mapsto m(t,\cdot)$ is $\frac{1}{2}$-Hölder continuous with constant $C_\gamma$ for the distance $W_1$. \end{lemma} \begin{proof} Let $\varphi:\mathbb{R}\times I\to \mathbb{R}$ be globally $1$-Lipschitz continuous and $C^1$ w.r.t. the first variable. We first show that the derivative of the function $t\mapsto \int_\mathbb{R}\sum_{i\in I}\varphi_i(s)m_i(t,ds)$ is uniformly bounded on $[0,T]$: \begin{equation*} \begin{array}{l} \frac{d}{dt}\int_\mathbb{R}\sum_{i\in I}\varphi_i(s)m_i(t,ds) = \int_\mathbb{R}\sum_{i\in I} b_i(s)\partial_s\varphi_i(s) m_i(t,ds) + \int_\mathbb{R} \sum_{i,j\in I,j\neq i}(\varphi_j(s)-\varphi_i(s))\alpha_{i,j}(t,s)m_i(t,ds), \end{array} \end{equation*} where we used Assumption \ref{hyp_on_b} and the fact that $(m,E)$ is a weak solution of \eqref{evoX} with $\alpha_{i,j}:=\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}$. Since $b_i$ is bounded by $\Vert b\Vert_\infty:= \max_i(\Vert b_i\Vert_\infty)$ and $\Vert \partial_s\varphi_i\Vert_\infty$ is bounded by 1 for any $i\in I$, Lemma \ref{supportm} in Appendix \ref{appendices_section} implies that: \begin{equation*} \begin{array}{ll} \left\vert \frac{d}{dt}\int_\mathbb{R}\sum_{i\in I}\varphi_i(s)m_i(t,ds) \right\vert & \leq \vert I \vert \Vert b \Vert_\infty + \int_0^1 \sum_{i,j\in I,j\neq i} \vert\varphi_j(s)-\varphi_i(s)\vert\alpha_{i,j}(t,s)m_i(t,ds)\\ & \leq \vert I \vert\Vert b \Vert_\infty + \int_0^1\sum_{i,j\in I,j\neq i} \alpha_{i,j}(t,s)m_i(t,ds), \end{array} \end{equation*} where we used that $s\mapsto (\varphi_j(s)-\varphi_i(s))$ is bounded by $1$ for any pair $(i,j)$. Let $t,\tilde{t}\in(0,T)$ with $t>\tilde{t}$; using the Cauchy-Schwarz inequality, it holds: \begin{equation} \label{boundW} \begin{array}{l} \left\vert\int_\mathbb{R}\sum_{i\in I}\varphi_i(s)m_i(t,ds) - \int_\mathbb{R}\sum_{i\in I}\varphi_i(s)m_i(\tilde{t},ds) \right\vert \\ \leq (t-\tilde{t})\vert I \vert\Vert b \Vert_\infty + \left(\int^t_{\tilde{t} }\int_0^1 \sum_{i,j\in I,j\neq i}m_i(\tau,ds)d\tau \right)^{\frac{1}{2}}\left(\int^t_{\tilde{t}} \int_0^1\sum_{i,j\in I,j\neq i}(\alpha_{i,j}(\tau,s))^2m_i(\tau,ds)d\tau\right)^{\frac{1}{2}} \\ \leq \vert t-\tilde{t}\vert^{\frac{1}{2}}\left(T^{\frac{1}{2}}\vert I \vert\Vert b\Vert_\infty + \sqrt{2}(\vert I\vert -1)^{\frac{1}{2}}\gamma^{\frac{1}{2}}\right). \end{array} \end{equation}
From \eqref{boundW}, by Kantorovich duality it holds: \begin{equation*} W_1(m(t,\cdot),m(\tilde{t},\cdot))\leq \vert t-\tilde{t}\vert^{\frac{1}{2}}\left(T^{\frac{1}{2}}\vert I \vert\Vert b\Vert_\infty + \sqrt{2}(\vert I\vert -1)^{\frac{1}{2}}\gamma^{\frac{1}{2}}\right). \end{equation*} \end{proof} The next lemma is useful to show that any minimizing sequence of \eqref{problemE} is relatively compact. \begin{lemma} \label{RelativCompact} For any $\gamma>0$, the subset $CE_\gamma(m^0,D)$ is relatively compact for the weak convergence of measures. \end{lemma} \begin{proof} For any $(m,E)\in CE_\gamma(m^0,D)$, using Lemma \ref{supportm}, it holds that $m$ is tight. In addition, for any $i,j\in I$ with $i\neq j$, using that $ E_{i,j}$ is a positive measure and the Cauchy-Schwarz inequality, we have: \begin{equation*} \begin{array}{ll} \int_0^T\int_0^1 E_{i,j}(t,ds) dt = \int_0^T\int_0^1 \frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s) m_i(t,ds) dt &\leq \left(\int_0^T\int_0^1 \left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)^2 m_i(t,ds) dt \right)^{\frac{1}{2}} \left(\int_0^T\int_0^1 m_i(t,ds) dt \right)^{\frac{1}{2}} \vspace{0.1cm}\\ &\leq (2\gamma T) ^{\frac{1}{2}}, \end{array} \end{equation*} thus the mass of $E_{i,j}$ is bounded by $ (2\gamma T)^{\frac{1}{2}}$. Since $E_{i,j}\ll m_i$, it holds that $E$ is also tight. Thus, for any sequence $\{(m^n, E^n) \}_n$ in $CE_\gamma(m^0,D)$, there exists a subsequence $\{(m^{\theta_n}, E^{\theta_n}) \}_n$ converging weakly to some $(\tilde{m},\tilde{E})$. Using Lemma \ref{UnifCont}, $\{m^{\theta_n}\}_n$ converges uniformly on $[0,T]$ to $\tilde{m}$, and it holds $\tilde{m}\in C([0,T],\mathcal{P}(\mathbb{R}\times I))$. We want to show that $\tilde{E}$ is absolutely continuous w.r.t. $\tilde{m}$. We define the functional $\Theta$ by: \begin{equation*} \Theta(m,E):=\left\{ \begin{array}{ll} \int_0^T\int_0^1 \sum_{i,j\in I,i\neq j } L(\alpha_{i,j}(t,s))m_i(t,ds)dt &\mbox{ if }\forall i,j, \,E_{i,j} \ll m_i \mbox{ and }\alpha_{i,j}:= \frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}\,\mbox{ with }\alpha_{i,j}\geq 0, \\+\infty & \mbox{ otherwise. } \end{array}\right. \end{equation*} The functional $\Theta$ being weakly lower semi-continuous \cite[Proposition 5.18]{santambrogio2015optimal}, and $\Theta(m^n,E^n)$ being bounded by $\gamma$ for any $n$, we deduce that $\Theta(\tilde{m},\tilde{E})\leq \gamma$ and $\tilde{E}\ll \tilde{m}$. Finally, using the definition of weak convergence, it is easy to check that $(\tilde{m},\tilde{E})$ is a weak solution of \eqref{evoX} and that $\tilde{m}$ satisfies, for any $(i,t)\in I\times [0,T]$, $\int_0^1\tilde{m}_i(t,ds)\leq D_i(t)$. \end{proof} \begin{lemma}\label{Existence_sol_prob_init} Problem \eqref{problemE} admits a solution. \end{lemma} \begin{proof} We consider $(\bar{m},\bar{E})\in CE(m^0,D)$ and we define $\gamma$ by: \begin{equation*} \gamma:=\tilde{B}(\bar{E},\bar{m})+\vert I \vert (T\Vert c \Vert_\infty +\Vert g \Vert_\infty)+1, \end{equation*} where $\Vert c \Vert_\infty:=\max_{i\in I}(\Vert c_i\Vert_\infty)$ and $\Vert g \Vert_\infty:=\max_{i\in I}(\Vert g_i\Vert_\infty)$. For any pair $(m,E)\in CE(m^0,D)$, if $\tilde{B}(E,m)\leq \tilde{B}(\bar{E},\bar{m})$, then: \begin{equation*} \sum_{(i,j)\in I,i\neq j}\int_0^T\int_0^1 \frac{1}{2}\left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)^2m_i(t, ds)dt \leq \gamma. \end{equation*} We deduce that $(m,E)\in CE_\gamma(m^0,D)$. Taking a minimizing sequence $\{(m^n,E^n)\}_n$ of Problem \eqref{problemE}, there exists $\tilde{n}\in \mathbb{N}$ such that $(m^n,E^n)\in CE_\gamma(m^0,D)$ for all $n\geq \tilde{n}$.
From Lemma \ref{RelativCompact}, a subsequence of $\{(m^n,E^n)\}_n$ weakly converges to a certain $({m}^\ast,{E}^\ast)\in CE_\gamma(m^0,D)$. Since $\tilde{B}$ is weakly lower semi-continuous on $CE(m^0,D)$, $({m}^\ast,{E}^\ast)$ minimizes $\tilde{B}$. \end{proof} We end this section with the following lemma, which provides Lipschitz continuity results for $m$; it will be useful in Section \ref{charac_mini}. \begin{lemma} \label{regularity_m} Suppose that $\alpha\in L^\infty((0,T)\times I\times I,\Lip([0,1]))$; then there exists a unique solution $m$ of \eqref{fk} associated with $\alpha$. In addition, we have $m \in \Lip([0,T]\times [0,1]\times I)$. \end{lemma} To prove Lemma \ref{regularity_m}, we need to rewrite the equation \eqref{fk} in $\mathbb{R}^{\vert I \vert}$, in the following form: \begin{equation} \label{fkmultilinear} \partial_t m(t,s) + b(s)\partial_s m(t,s) = G(t,s)m(t,s), \end{equation} where $m(t,s):=(m_1(t,s), \ldots, m_{\vert I \vert}(t,s))$, $b(s):=\mbox{diag}(b_1(s),\ldots,b_{\vert I \vert}(s))$ and $G(t,s)$ is a square matrix of size $\vert I \vert$, such that the $i^{th}$ coordinate of the vector $G(t,s)m(t,s)$ is equal to: \begin{equation} \label{def_G} (G(t,s)m(t,s))_i := -m_i(t,s)\partial_s b_i(s) -\sum_{j\neq i}(\alpha_{i,j}(t,s)m_i(t,s)-\alpha_{j,i}(t,s)m_{j}(t,s)), \end{equation} with the initial condition $m(0,\cdot) = m^0(\cdot)$ on $[0,1]$. Note that if $\alpha\in L^\infty((0,T)\times I\times I,\Lip([0,1]))$, then $G\in L^\infty((0,T)\times I\times I,\Lip([0,1]))$. \begin{proof}[Proof of Lemma \ref{regularity_m}] This lemma is a direct application of Proposition \ref{prop_fk} in Appendix \ref{appendices_section}. \end{proof} \section{Analysis of the HJB solutions}\label{hjb_section} This section is devoted to the analysis of the equation: \begin{equation} \label{hjb} \begin{array}{ll} - \partial_ t\varphi_i(t,s)-b_i(s)\partial_s\varphi_i(t,s)-c_i(t,s)-\lambda_i(t)+\sum_{j\in I,j\neq i}H((\varphi_j-\varphi_i)(t,s))= 0 & \mbox{on }(0,T)\times (0,1)\times I,\\ \varphi_i(T,\cdot)= g_i & \mbox{on } (0,1)\times I, \end{array} \end{equation} where $\lambda\in \mathcal{M}^+([0,T]\times I)$ is given and $H(y):=\frac{1}{2}(y^-)^2$ for any $y\in \mathbb{R}$. We introduce the notion of weak solution for this equation. \begin{definition} \label{def_weak_sol_hjb} For a given $\lambda\in \mathcal{M}^+([0,T]\times I)$, $\varphi$ is called a weak solution of equation \eqref{hjb} if for any $i\in I$, $\varphi_i\in BV((0,T)\times (0,1))$ and for any test function $\psi\in C^1((0,T)\times (0,1)\times I)$ it satisfies: \begin{equation} \label{def_sub_so_2} \begin{array}{l} \int_0^1\varphi_i(0,s)\psi_i(0,s)ds -\int_0^1g_i(s)\psi_i(T,s)ds +\int_0^T\int_0^1\left(\partial_t\psi_i(t,s)+\partial_s(\psi_i(t,s)b_i(s))\right)\varphi_i(t,s)dsdt \vspace{0.1cm} \\ +\int_0^T\int_0^1 \left(\sum_{j\in I,j\neq i}H((\varphi_j-\varphi_i)(t,s))-c_i(t,s)\right)\psi_i(t,s)dtds-\int_0^T\int_0^1\psi_i(t,s)\lambda_i(dt)ds\vspace{0.1cm}\\ = 0, \end{array} \end{equation} where $\varphi_i(0, \cdot)$ is understood in the sense of trace. \end{definition} \begin{remark} Observe that there is no boundary condition in \eqref{def_sub_so_2}. This is due to Assumption \ref{hyp_on_b}, which entails a null incoming flow into the domain $[0,1]$. \end{remark} In order to analyze \eqref{hjb} we now introduce several notations. Let $\delta>0$ (to be chosen below) and assume that: \begin{equation} \label{bound_lambda_delta_measure} \lambda([0,T]\times I)< \delta. \end{equation}
Let $M^\delta$ and $\bar {M}^\delta$ be such that: \begin{equation} \label{def_m} \begin{array}{l} M^\delta:=\max_{i\in I}\Vert g_i\Vert _\infty +T(\max_{i\in I}\Vert c_i\Vert _\infty + \vert I \vert H(\delta)) +2, \\ \bar {M}^\delta:=M^\delta + \delta. \end{array} \end{equation} From Assumptions \ref{hyp_on_b} and \ref{hyp_on_c_g}, there exists a positive constant $K$ such that for any $i\in I$ and $t\in [0,T]$, the functions $g_i$, $c_i(t,\cdot)$ and $b_i$ are Lipschitz continuous on $[0,1]$ with Lipschitz constant $K$. Using the definition of $H$, introduced in equation \eqref{hjb}, there exists a positive constant $K^\delta$ such that $H$ and $H'$ are Lipschitz continuous on $[-2 \bar {M}^\delta,\,2\bar {M}^\delta]$ with Lipschitz constant $K^\delta$. For any $i\in I$, let $S_i^{t,s}:[0,T]\to \mathbb{R}$ be the maximal solution of the ODE \eqref{ODE} with condition $S^{t,s}_i(t)= s$. The function $b_i$ being $C^1$, the map \begin{equation*} S_i:(\tau,t,s)\to S_i^{t,s}(\tau) \end{equation*} is $C^1$ on $[0,T]\times [0,T] \times [0,1]$. The flow $S_i^{t,\cdot}(\tau)$ is a diffeomorphism on $[0,1]$ whose inverse function is $S_i^{\tau,\cdot}(t)$, with derivative w.r.t. the space variable $\partial_sS_i^{\tau,\cdot}(t)$. For any $i\in I$, $\partial_s{S_i}$ being continuous on $[0,T]\times [0,T] \times [0,1]$, we define $\Vert \partial_s S\Vert_\infty:=\max_{i\in I}(\Vert \partial_s{S_i}\Vert_\infty)$. Let $k^\delta\in\mathbb{R}^+$ be such that: \begin{equation} \label{def_l} k^\delta:=K +l^\delta, \mbox{ where }l^\delta:=4(\vert I \vert-1)K^\delta +1. \end{equation} We consider the space $L^1((0,T)\times I, C^1([0,1]))$ endowed with the norm $\Vert \cdot \Vert_1^\delta$, which is defined for every $v\in L^1((0,T)\times I, C^1([0,1]))$ by: \begin{equation} \label{def_norm_1} \Vert v \Vert_1^\delta:= \sum_{i\in I}\int_0^T\Vert v_i(t,\cdot)\Vert_{C^1} e^{-\kappa^\delta(T-t)}dt, \end{equation} where $ \Vert v_i(t,\cdot)\Vert_{C^1} := \Vert v_i(t,\cdot)\Vert_{\infty} + \Vert\partial_s v_i(t,\cdot)\Vert_{\infty}$ and the constant $\kappa^\delta$ is defined by: \begin{equation} \label{def_kappa} \kappa^\delta : =\vert I \vert^2 K^\delta (\Vert \partial_s S\Vert_\infty +1)+1. \end{equation} The space $(L^1((0,T)\times I, C^1([0,1])), \Vert \cdot \Vert_1^\delta )$ is a Banach space. The constants defined in \eqref{def_l} and \eqref{def_kappa} are chosen so as to obtain a contraction mapping on a subspace of $(L^1((0,T)\times I, C^1([0,1])), \Vert \cdot \Vert_1^\delta )$. In this section we look for a solution of \eqref{hjb} in integral form, i.e. a function $\varphi$, defined on $[0,T]\times [0,1] \times I$, satisfying: \begin{equation} \begin{array}{ll} \label{def_integral_equation} \varphi_i(t,s)=\int_t^T\sum_{j\in I,j\neq i}-H((\varphi_j-\varphi_i)(\tau,S_i^{t,s}(\tau))) + c_i(\tau,S_i^{t,s}(\tau)) d\tau + \int_t^T \lambda_i(d\tau) + g_i(S_i^{t,s}(T)) & \mbox{on }(0,T)\times [0,1]\times I\\ \varphi_i(T,s)=g_i(s)& \mbox{on } [0,1]\times I. \end{array} \end{equation} One can observe that $\varphi$ is a solution of \eqref{def_integral_equation} if and only if the function $\nu$, defined by: \begin{equation*} \nu_i(t,s):=\varphi_i(t,s)-\int_t^T\lambda_i(d\tau), \end{equation*} is a solution of: \begin{equation} \begin{array}{ll} \label{def_integral_equation_2} \nu_i(t,s)=\int_t^T\sum_{j\in I,j\neq i}-H^\lambda(i,j,t,\tau,s,\nu) + c_i(\tau,S_i^{t,s}(\tau)) d\tau + g_i(S_i^{t,s}(T)) & \mbox{a.e. on }(0,T)\times [0,1]\times I\\ \nu_i(T,s)=g_i(s)& \mbox{a.e. on } [0,1]\times I, \end{array} \end{equation}
where $ H^\lambda $ is defined on $I\times I \times [0,T]\times [0,T]\times [0,1]\times L^1((0,T)\times (0,1)\times I)$ by: \begin{equation*} H^\lambda(i,j,t,\tau,s,\nu):= H\left((\nu_j-\nu_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right). \end{equation*} Our aim is to build a solution to \eqref{def_integral_equation_2} (and thus to \eqref{hjb}) by a fixed point argument. For any $i\in I$, let $\Gamma_i^\lambda$ be the map defined for any $\varphi\in L^1((0,T)\times I,C^1([0,1]))$ by: \begin{equation} \label{def_gamma} \Gamma_i^\lambda(\varphi)(t,s):=\int_t^T\sum_{j\in I,j\neq i}-H^\lambda(i,j,t,\tau,s,\varphi) + c_i(\tau,S_i^{t,s}(\tau)) d\tau + g_i(S_i^{t,s}(T)). \end{equation} Let $\Sigma^\lambda$ be the set of functions $f:[0,T]\times [0,1]\times I\to \mathbb{R}$ such that for any $i\in I$, $(t,s)\mapsto f_i(t,s)$ is measurable on $[0,T]\times [0,1]$ and for a.e. $t\in (0,T)$: $f_i(t,\cdot)\in C^1([0,1])$, $\Vert f_i(t,\cdot)\Vert_\infty \leq M^\delta$ and $\Vert \partial_sf_i(t,\cdot)\Vert_\infty\leq 2K e^{k^\delta(T-t)}$. We want to apply a fixed point theorem in the space $\Sigma^\lambda$. To do so, we need to define a function on $\Sigma^\lambda$ with values in $\Sigma^\lambda$. For a given $v\in \Sigma^\lambda$, the value $ \Vert \Gamma^\lambda_i(v)\Vert_\infty$ may be larger than $M^\delta$. Thus, we introduce a smooth truncation $F_\delta\in C^1( \mathbb{R}, [-M^\delta+1/2,\,M^\delta-1/2])$, satisfying $F'_{\delta}\geq 0$, $\vert F'_{\delta}(x)\vert \leq 1$ for any $x\in \mathbb{R}$ and: \begin{equation} \label{def_f} F_{\delta}(x):=\left\{ \begin{array}{ll} -M^\delta+\frac{1}{2} &\mbox{if }x< -M^\delta,\\ x & \mbox{if }-(M^\delta-1)\leq x \leq M^\delta-1,\\ M^\delta -\frac{1}{2}& \mbox{if }M^\delta\leq x. \end{array} \right. \end{equation} Finally, we define the function $\Pi^\lambda$ by: \begin{equation*} \forall \varphi \in \Sigma^\lambda,\,\Pi^\lambda(\varphi):=(\Pi^\lambda_1(\varphi),\ldots,\Pi^\lambda_{\vert I \vert}(\varphi)), \mbox{ where }\Pi^\lambda_i(\varphi):=(F_{\delta}\circ \Gamma_i^\lambda)(\varphi)\quad \forall i\in I. \end{equation*} \begin{remark} \label{existence_gamma} The set $\Sigma^\lambda$ is bounded and closed w.r.t. the topology induced by the norm $\Vert \cdot \Vert^\delta_1$, defined in \eqref{def_norm_1}. \end{remark} \subsection{Existence of a fixed point of $\Pi^\lambda$ on $\Sigma^\lambda$} The following lemma states that $\Pi^\lambda$ maps $\Sigma^\lambda$ to itself. \begin{lemma} \label{impage_space} For any $\varphi\in \Sigma^\lambda$, it holds $\Pi^\lambda(\varphi)\in \Sigma^\lambda$. \end{lemma} \begin{proof} Let $\varphi\in \Sigma^\lambda$ and, for any $i\in I$, let $\psi_i:=\Gamma^\lambda_i(\varphi)$. From equation \eqref{def_gamma}, it holds that for any $i\in I$, $(t,s)\mapsto \psi_i(t,s)$ is measurable on $[0,T]\times [0,1]$. We need to show that for all $i\in I$ and a.e. $t\in [0,T]$, the function $s\mapsto \psi_i(t,s)$ is in $C^1([0,1])$ and that $\Vert \partial_s \psi_i(t,\cdot)\Vert_\infty$ is bounded by $2K e^{k^\delta(T-t)}$. From Assumption \ref{hyp_on_c_g}, it is clear that $\psi_i(T,\cdot)\in C^1([0,1])$ for any $i\in I$ and that $\Vert \partial_s\psi_i(T,\cdot)\Vert_\infty \leq 2K$.
For any $(i,s,t)\in I\times [0,1]\times(0,T)$ and a.e. $\tau\in [0,T]$, using the chain rule it holds: \begin{equation} \label{compute_deriv_H} \partial_s H^\lambda(i,j,t,\tau,s,\varphi)= \partial_s S_i^{t,s}(\tau) \partial_s(\varphi_j-\varphi_i)(\tau,S_i^{t,s}(\tau))H'\left((\varphi_j-\varphi_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right). \end{equation} Since $H'$ is bounded by $K^\delta$ on $[-2 \bar {M}^\delta,\,2\bar {M}^\delta]$, it follows that: \begin{equation*} \vert \partial_s H^\lambda(i,j,t,\tau,s,\varphi) \vert \leq 4\Vert \partial_s S\Vert_\infty KK^\delta e^{k^\delta(T-\tau)}. \end{equation*} Therefore, for any $t\in [0,T]$, the function $s\mapsto\int_t^T-\sum_{j\in I,j\neq i}H^\lambda(i,j,t,\tau,s,\varphi)d\tau$ is differentiable on $[0,1]$ and $\psi_i$ satisfies: \begin{equation} \label{diff_H_tilde} \begin{array}{ll} \partial_s\psi_i(t,s) = & \int_t^T-\sum_{j\in I,j\neq i} \partial_s S_i^{t,s}(\tau) \partial_s(\varphi_j-\varphi_i)(\tau,S_i^{t,s}(\tau))H'\left((\varphi_j-\varphi_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right)d\tau \vspace{0.1cm}\\ & +\int_t^T\partial_s S_i^{t,s}(\tau) \partial_s c_i(\tau,S_i^{t,s}(\tau))d\tau + \partial_s S_i^{t,s}(T)g'_i(S_i^{t,s}(T)). \end{array} \end{equation} From equality \eqref{compute_deriv_H}, it holds that for any $s\in [0,1]$ and any $t\in [0,T]$, the function $\tau\mapsto \partial_s H^\lambda(i,j,t,\tau,s,\varphi)$ is measurable on $[0,T]$. In addition, for a.e. $\tau,t\in [0,T]$ the function $s\mapsto \partial_s H^\lambda(i,j,t,\tau,s,\varphi)$ is continuous on $[0,1]$. Therefore, it follows for a.e. $t\in [0,T]$ that $\psi_i(t,\cdot)\in C^1([0,1])$. We now need to show that for a.e. $t\in [0,T]$, it holds $\Vert \partial_s\psi_i(t,\cdot)\Vert_\infty\leq 2K e^{k^\delta(T-t)}$. From Lemma \ref{lemma_flow_2} and the Lipschitz continuity of $b_i$, it holds for any $t\in [0,T]$: $\Vert \partial_sS_i^{t,s}(\tau)\Vert_\infty\leq e^{K(\tau-t)}$. Using the Lipschitz continuity of $c_i$ and $g_i$, the bound on $H'$ and the bound on $\Vert \partial_s\varphi(t,\cdot)\Vert_\infty$, from equation \eqref{diff_H_tilde} it holds for a.e. $t\in [0,T]$ and any $s\in [0,1]$: \begin{equation} \label{control_deriv_psi_0} \Vert \partial_s\psi_i(t,\cdot)\Vert_\infty \leq \int_t^T \left( 4(\vert I \vert-1)KK^\delta e^{K(\tau-t)}e^{k^\delta (T-\tau)} + Ke^{K(\tau-t)}\right)d\tau + Ke^{K(T-t)}. \end{equation} Using that $k^\delta=K+l^\delta$, so that $\int_t^Te^{K(\tau-t)}e^{k^\delta(T-\tau)}d\tau\leq\frac{1}{l^\delta}e^{k^\delta(T-t)}$ and $\int_t^Te^{K(\tau-t)}d\tau\leq\frac{1}{l^\delta}e^{k^\delta(T-t)}$, inequality \eqref{control_deriv_psi_0} becomes: \begin{equation*} \Vert \partial_s\psi_i(t,\cdot)\Vert_\infty \leq K\left(\frac{4(\vert I \vert-1)K^\delta+1}{l^\delta}+1\right)e^{k^\delta(T-t)}. \end{equation*} Using the definition of $l^\delta$ in equation \eqref{def_l}, it follows for a.e. $t\in [0,T]$ that $\Vert \partial_s\psi_i(t,\cdot)\Vert_\infty \leq 2Ke^{k^\delta(T-t)}$. From the definition of $F_{\delta}$ in equation \eqref{def_f}, it is clear that $F_{\delta}(\psi_i)$ is bounded by $M^\delta$. Finally, the composition with $F_{\delta}$ preserves continuity and differentiability. Since $\vert F_{\delta}'\vert\leq 1$ and $\Vert \partial_s\psi_i(t,\cdot)\Vert_\infty\leq 2K e^{k^\delta(T-t)}$ a.e. on $[0,T]$, it follows that $\Vert \partial_s F_{\delta}(\psi_i(t,\cdot))\Vert_\infty\leq 2K e^{k^\delta(T-t)}$ a.e. on $[0,T]$. \end{proof} \begin{lemma} \label{contraction_lemma} The map $\Pi^\lambda$ is a contraction on $\Sigma^\lambda$. \end{lemma}
\begin{proof} Let $\varphi,\theta\in \Sigma^\lambda$. Using that $\varphi$ and $\theta$ are bounded by $M^\delta$, it holds for any $(t,s,i)\in (0,T)\times [0,1]\times I$: \begin{equation*} \begin{array}{ll} \vert \Gamma_i^\lambda(\varphi)(t,s)- \Gamma_i^\lambda(\theta)(t,s)\vert & \leq \sum_{j\in I,j\neq i}\int_t^T \vert H^\lambda(i,j,t,\tau,s,\theta)-H^\lambda(i,j,t,\tau,s,\varphi)\vert \, d\tau \vspace{0.1cm} \\ & \leq \vert I \vert K^\delta \sum_{j\in I} \int_t^T\vert \theta_j(\tau, S_i^{t,s}(\tau)) -\varphi_j(\tau, S_i^{t,s}(\tau))\vert \,d\tau. \end{array} \end{equation*} Then it holds: \begin{equation} \label{ineq_norm_infty_gamma} \begin{array}{ll} \int_0^T \sum_{i\in I}\Vert \Gamma_i^\lambda(\varphi)(t,\cdot)- \Gamma_i^\lambda(\theta)(t,\cdot)\Vert_\infty e^{-\kappa^\delta(T-t)} dt & \leq \vert I \vert^2 K^\delta \int_0^T\int_t^T\sum_{i\in I}\Vert \theta_i(\tau,\cdot) -\varphi_i(\tau, \cdot)\Vert_\infty e^{-\kappa^\delta(T-t)}\,d\tau dt \\ & \leq \frac{\vert I \vert^2 K^\delta}{\kappa^\delta} \int_0^T\sum_{i\in I}\Vert \theta_i(\tau,\cdot) -\varphi_i(\tau, \cdot)\Vert_\infty e^{-\kappa^\delta(T-\tau)}\,d\tau. \end{array} \end{equation} We now turn to the spatial derivatives: for any $(s,i)\in [0,1]\times I$ and a.e. $(t,\tau)\in (0,T)\times (0,T)$: \begin{equation} \label{local_deriv_gamma} \vert \partial_s( \Gamma_i^\lambda(\varphi)(t,s)- \Gamma_i^\lambda(\theta)(t,s))\vert \leq \sum_{j\in I,j\neq i}\int_t^T \vert \partial_s H^\lambda(i,j,t,\tau,s,\theta)-\partial_s H^\lambda(i,j,t,\tau,s,\varphi)\vert \, d\tau. \end{equation} Using \eqref{compute_deriv_H} and that $H'$ is bounded by $K^\delta$ on $[-2\bar {M}^\delta,\,2\bar {M}^\delta]$, it follows that: \begin{equation} \label{calc_diff_norm_2} \begin{array}{l} \vert \partial_s H^\lambda(i,j,t,\tau,s,\theta)-\partial_s H^\lambda(i,j,t,\tau,s,\varphi)\vert \leq \Vert \partial_s S\Vert_\infty K^\delta \Vert \partial_s(\varphi_i-\varphi_j-\theta_i+\theta_j)(\tau,\cdot)\Vert_\infty. \end{array} \end{equation} From inequalities \eqref{local_deriv_gamma} and \eqref{calc_diff_norm_2}, it holds: \begin{equation} \label{local_deriv_gamma_2} \begin{array}{ll} \vert \partial_s( \Gamma_i^\lambda(\varphi)(t,s)- \Gamma_i^\lambda(\theta)(t,s))\vert & \leq \vert I \vert\sum_{j\in I}\int_t^T \Vert \partial_s S\Vert_\infty K^\delta \Vert \partial_s(\varphi_j-\theta_j)(\tau,\cdot)\Vert_\infty \, d\tau. \end{array} \end{equation}
Integrating inequality \eqref{local_deriv_gamma_2} over $[0,T]$, one has: \begin{equation} \label{ineg_diff_gamma} \begin{array}{l} \int_0^T \sum_{i\in I}\Vert \partial_s(\Gamma_i^\lambda(\varphi)- \Gamma_i^\lambda(\theta))(t,\cdot)\Vert_\infty e^{-\kappa^\delta(T-t)} dt \\ \leq \vert I \vert^2\Vert \partial_s S\Vert_\infty K^\delta\int_0^T\int_t^T \sum_{i\in I} \Vert \partial_s(\varphi_i-\theta_i)(\tau,\cdot)\Vert_\infty e^{-\kappa^\delta(T-t)}d\tau dt\\ \leq \frac{\vert I \vert^2}{\kappa^\delta}\Vert \partial_s S\Vert_\infty K^\delta \int_0^T\sum_{i\in I} \Vert \partial_s(\varphi_i-\theta_i)(\tau,\cdot)\Vert_\infty e^{-\kappa^\delta(T-\tau)}d\tau. \end{array} \end{equation} From \eqref{ineq_norm_infty_gamma} and \eqref{ineg_diff_gamma} and the definition of the norm $\Vert\cdot\Vert_1^\delta$ in \eqref{def_norm_1}, we deduce: \begin{equation*} \Vert \Gamma^\lambda(\varphi)- \Gamma^\lambda(\theta)\Vert_1^\delta\leq \frac{\vert I \vert^2 K^\delta (\Vert \partial_s S\Vert_\infty +1)}{\kappa^\delta} \Vert \varphi- \theta \Vert_1^\delta. \end{equation*} Using the definition of $\kappa^\delta$ in \eqref{def_kappa}, it follows that $\Gamma^\lambda$ is a contraction. The function $F_{\delta}$ being non-expansive, the conclusion follows. \end{proof} \begin{lemma} \label{fixed_point_pi} The function $\Pi^\lambda$ admits a fixed point $\nu^\lambda\in \Sigma^\lambda$. \end{lemma} \begin{proof} This is a direct consequence of Lemma \ref{contraction_lemma}, Remark \ref{existence_gamma} and the Banach fixed point theorem. \end{proof} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$ satisfying inequality \eqref{bound_lambda_delta_measure}, the subset $E_\lambda$ of $(0,T)$ denotes the set of points where, for any $i$, the function $t\mapsto \int_t^T\lambda_i(d\tau)$ is differentiable. The set $[0,T]\setminus E_\lambda$ is negligible w.r.t. the Lebesgue measure. The next lemma provides useful regularity properties of the fixed point $ \nu^\lambda$. \begin{lemma} \label{fixed_point_C1} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$, the associated fixed point $\nu^\lambda$ of $\Pi^\lambda$ is Lipschitz continuous w.r.t. the time variable, differentiable at any $(t,s,i)\in E_\lambda\times [0,1]\times I$, and for any $(t,i)\in E_\lambda\times I$, $s\mapsto \partial_s\nu^\lambda_i(t,s)$ is continuous on $[0,1]$. In addition, if $\lambda\in C^0([0,T]\times I,\mathbb{R}^+)$, then it holds $\nu^\lambda\in C^1((0,T)\times [0,1]\times I)$. \end{lemma} \begin{proof} From the definition of $\Gamma^\lambda$ in equation \eqref{def_gamma}, the continuity of $g$ and of the flow $S$, it holds that for any $(s,i)\in [0,1]\times I$, the function $\Gamma^\lambda_i(\nu^\lambda)(\cdot,s)$ is continuous on $[0,T]$. For any $(s,i)\in [0,1]\times I$, the function $t\mapsto g_i(S_i^{t,s}(T))$ being Lipschitz on $[0,T]$ and the function $\tau\mapsto \sum_{j\in I,j\neq i}-H^\lambda(i,j,t,\tau,s,\nu^\lambda) + c_i(\tau,S_i^{t,s}(\tau))$ being bounded by $\sup_{x\in [-2\bar {M}^\delta,2\bar {M}^\delta]}\,\vert H(x)\vert+ \Vert c_i \Vert_\infty$ on $[0,T]$, the function $\Gamma^\lambda_i(\nu^\lambda)(\cdot,s)$ is Lipschitz continuous on $[0,T]$. Since $F_{\delta}$ is a non-expansive map, by composition the function $\nu^\lambda_i(\cdot,s)$ is Lipschitz continuous on $[0,T]$ for any $(i,s)\in I\times [0,1]$.
For any $(i,s,t)\in I\times [0,1]\times(0,T)$ and $\tau\in E_\lambda$, using the chain rule we have: \begin{equation} \label{compute_temporal_deriv_H} \partial_t H^\lambda(i,j,t,\tau,s,\nu^\lambda)= \partial_t( S_i^{t,s}(\tau)) \partial_s(\nu^\lambda_j-\nu^\lambda_i)(\tau,S_i^{t,s}(\tau))H'\left((\nu^\lambda_j-\nu^\lambda_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right). \end{equation} Thanks to Lemma \ref{lemma_flow} in the Appendix, we have the bound $\vert \partial_tS_i^{t,s}(\tau)\vert \leq \Vert b_i \Vert_\infty \Vert \partial_s S \Vert_\infty$. Thus, one deduces that $\vert \partial_t H^\lambda(i,j,t,\tau,s,\nu^\lambda)\vert\leq 4\Vert b_i \Vert_\infty \Vert \partial_s S \Vert_\infty K K^\delta e^{k^\delta(T-\tau)}$. Therefore, for any $s\in[0,1]$, the function $$ t\mapsto\int_t^T-\sum_{j\in I,j\neq i}H^\lambda(i,j,t,\tau,s,\nu^\lambda)d\tau$$ is differentiable on $E_\lambda$. It has been shown in the proof of Lemma \ref{impage_space} that $\vert \partial_s H^\lambda(i,j,t,\tau,s,\varphi) \vert \leq 4\Vert \partial_s S \Vert_\infty K K^\delta e^{k^\delta(T-\tau)}$; therefore, for any $t\in E_\lambda$, the function $\textstyle s\mapsto\int_t^T-\sum_{j\in I,j\neq i}H^\lambda(i,j,t,\tau,s,\nu^\lambda)d\tau$ is differentiable on $[0,1]$. From Assumption \ref{hyp_on_c_g}, it holds for any $i\in I$ that $c_i\in C^1((0,T)\times[0,1])$ and $g_i\in C^1([0,1])$. Then, for any $s\in [0,1]$, the function $t\mapsto\Gamma^\lambda_i(\nu^\lambda)(t,s)$ is differentiable on $E_\lambda$. Since the function $F_{\delta}$ belongs to $C^1$, $\nu^\lambda$ is also differentiable on $E_\lambda\times [0,1]\times I$. Now suppose $\lambda\in C^0([0,T]\times I,\mathbb{R}^+)$. In this case $E_\lambda=[0,T]$ and the conclusion follows. \end{proof} \begin{lemma} \label{classic_ae_solution} Let $\lambda\in \mathcal{M}^+([0,T]\times I)$ satisfy inequality \eqref{bound_lambda_delta_measure}. Let $t_0\in[0,T)$ be such that for any $t\in [t_0,T]$ and any $i\in I$ it holds $\Vert \nu^\lambda_i(t,\cdot)\Vert_\infty \leq M^\delta-1$. Then for any $(t,s,i)\in (E_\lambda\cap [t_0,T])\times [0,1]\times I$, we have: \begin{equation} \label{hjb_3} - \partial_ t\nu^\lambda_i(t,s)-b_i(s)\partial_s\nu^\lambda_i(t,s)-c_i(t,s)+\sum_{j\in I,j\neq i}H^\lambda(i,j,t,t,s,\nu^\lambda) =0. \end{equation} \end{lemma} \begin{proof} From Lemma \ref{fixed_point_C1}, for any $i\in I$ the function $\nu^\lambda_i$ is differentiable on $E_\lambda\times [0,1]$.
Since $\nu^\lambda$ satisfies \eqref{def_integral_equation_2} on $(E_\lambda\cap(t_0,T))\times [0,1]\times I$, we have: \begin{equation} \label{deriv_temp_0} \begin{array}{ll} \partial_t \nu^\lambda_i(t,s)= &- \int_t^T \partial_t S_i^{t,s}(\tau)\sum_{j\in I,j\neq i}\partial_s(\nu^\lambda_j-\nu^\lambda_i)(\tau,S_i^{t,s}(\tau))H'\left((\nu^\lambda_j-\nu^\lambda_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right)d\tau \\ & + \int_t^T \partial_t S_i^{t,s}(\tau) \partial_s c_i(\tau, S_i^{t,s}(\tau)) d\tau+\sum_{j\in I,j\neq i}H^\lambda(i,j,t,t,s,\nu^\lambda)-c_i(t,s) + \partial_t(S_i^{t,s}(T)) g_i'(S_i^{t,s}(T)), \end{array} \end{equation} and \begin{equation} \label{deriv_spatial_0} \begin{array}{ll} b_i(s)\partial_s \nu^\lambda_i(t,s) = & \int_t^T -b_i(s)\partial_s S_i^{t,s}(\tau)\sum_{j\in I,j\neq i}\partial_s(\nu^\lambda_j-\nu^\lambda_i)(\tau,S_i^{t,s}(\tau))H'\left((\nu^\lambda_j-\nu^\lambda_i)(\tau, S_i^{t,s}(\tau))+\int_\tau^T(\lambda_j-\lambda_i)(dr)\right)d\tau\\ &+ \int_t^T b_i(s)\partial_s S_i^{t,s}(\tau)\partial_s c_i(\tau, S_i^{t,s}(\tau)) d\tau + b_i(s) \partial_s(S_i^{t,s}(T)) g_i'(S_i^{t,s}(T)). \end{array} \end{equation} Adding \eqref{deriv_temp_0} and \eqref{deriv_spatial_0} and using Lemma \ref{lemma_flow}, it holds on $(E_\lambda\cap [t_0,T])\times [0,1]$: \begin{equation} \label{edp_aa_0} \partial_t \nu^\lambda_i(t,s) + b_i(s)\partial_s \nu^\lambda_i(t,s) = \sum_{j\in I,j\neq i} H^\lambda(i,j,t,t,s,\nu^\lambda)-c_i(t,s), \end{equation} which is \eqref{hjb_3}. \end{proof} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$, let $\phi^\lambda$ be defined on $I\times [0,T]\times [0,1]$ by: \begin{equation} \label{def_phi} \phi^\lambda_i(t,s):=\nu^\lambda_i(t,s)+\int_t^T\lambda_i(d\tau), \end{equation} where $\nu^\lambda$ is the fixed point of $\Pi^\lambda$, whose existence is established in Lemma \ref{fixed_point_pi}. We want to prove that $\phi^\lambda$ is a solution of \eqref{def_integral_equation}. To obtain this result, it suffices to show that $\phi^\lambda$ is bounded by $\bar {M}^\delta-1/2$. \subsection{Comparison principle} \begin{definition} Let $\lambda\in \mathcal{M}^+([0,T]\times I)$ and $t_0\in [0,T)$. A function $\underline{u}\in L^1((0,T)\times I,C^1([0,1]))$ (resp. $\bar{u}\in L^1((0,T)\times I,C^1([0,1]))$) is a weak subsolution (resp. a weak supersolution) of \eqref{hjb} if the function $\underline{\nu}$ (resp. $\bar{\nu}$), defined on $(t_0,T]\times [0,1]\times I$ by: \begin{equation} \label{def_nu_from_u} \underline{\nu}_i(t,s)=\underline{u}_i(t,s)-\int_t^T\lambda_i(d\tau) \end{equation} (resp. $\bar{\nu}_i(t,s)=\bar{u}_i(t,s)-\int_t^T\lambda_i(d\tau)$), is Lipschitz continuous in time, differentiable on $E_\lambda\times[0,1]\times I$, and satisfies for any $(t,s,i)\in(E_\lambda\cap(t_0,T))\times [0,1] \times I $: \begin{equation*} -\partial_t \underline{\nu}_i(t,s) -b_i(s)\partial_s\underline{\nu}_i(t,s) \leq - \sum_{j\in I,j\neq i}H^\lambda(i,j,t,t,s,\underline{\nu}) +c_i(t,s) \end{equation*} and for any $(s,i)\in (0,1) \times I $: \begin{equation*} \underline{\nu}_i(T,s)\leq g_i(s), \end{equation*} and resp. $\bar{\nu}$ is Lipschitz continuous in time, differentiable on $E_\lambda\times[0,1]\times I$, and satisfies for any $(t,s,i)\in(E_\lambda\cap(t_0,T))\times [0,1] \times I $: \begin{equation*} -\partial_t \bar{\nu}_i(t,s) -b_i(s)\partial_s\bar{\nu}_i(t,s) \geq -\sum_{j\in I,j\neq i}H^\lambda(i,j,t,t,s,\bar{\nu})+ c_i(t,s) \end{equation*} and for any $(s,i)\in (0,1) \times I $: \begin{equation*} \bar{\nu}_i(T,s)\geq g_i(s).
\end{equation*} \end{definition} \begin{lemma}[Comparison principle]\label{comp_principle} Let $\underline{u}$ and $\bar{u}$ be respectively a weak subsolution and a weak supersolution of \eqref{hjb} on $(t_0,T)\times [0,1] \times I $. Then one has $\underline{u}_i\leq \bar{u}_i$ on $(t_0,T)\times [0,1]\times I $. \end{lemma} \begin{proof} Let $\gamma$ be defined on $(t_0,T]$ by: $\gamma(t):=\sup_{j\in I,s\in [0,1]}(\underline{u}_j(t,s)-\bar{u}_j(t,s))$. From \eqref{def_nu_from_u}, it follows that: $\gamma(t)=\sup_{j\in I,s\in [0,1]}(\underline{\nu}_j(t,s)-\bar{\nu}_j(t,s))$. For any $t\in (t_0,T)$, $\underline{\nu}(t,\cdot)$ and $\bar{\nu}(t,\cdot)$ are continuous on $[0,1]$; thus, $\gamma$ is well defined. Since $\underline{\nu}$ and $\bar{\nu}$ are Lipschitz continuous in time, $\gamma$ is also Lipschitz continuous and thus differentiable a.e. on $[0,T]$. Using the envelope theorem \cite[Theorem 1]{milgrom2002envelope}, $\gamma$ is absolutely continuous on $(t_0,T]$ and for a.e. $t\in(t_0,T]$ there exists a maximum point $(i(t),x(t))\in I\times [0,1]$ such that: \begin{equation*} \gamma'(t)=\partial_t(\underline{\nu}_{i(t)}(t,x(t))-\bar{\nu}_{i(t)}(t,x(t))). \end{equation*} Since $\bar{\nu}_{i(t)}$ and $\underline{\nu}_{i(t)}$ are respectively weak supersolution and subsolution: \begin{equation*} - \gamma'(t) -b_{i(t)}(x(t))\partial_s(\underline{\nu}_{i(t)}-\bar{\nu}_{i(t)})(t,x(t)) \leq \sum_{j\neq i(t)}H((\bar{u}_j-\bar{u}_{i(t)})(t,x(t)))-H((\underline{u}_j-\underline{u}_{i(t)})(t,x(t))). \end{equation*} From Assumption \ref{hyp_on_b} and the definition of $i(t)$ and $x(t)$, if $x(t)\in \{0,1\}$ then $b_{i(t)}(x(t))=0$, while if $x(t)\in (0,1)$, then $\partial_s(\underline{u}_{i(t)}-\bar{u}_{i(t)})(t,x(t))=0$. In both cases: $b_{i(t)}(x(t))\,\partial_s(\underline{u}_{i(t)}-\bar{u}_{i(t)})(t,x(t))= 0$. Thus: \begin{equation*} - \gamma'(t) \leq \sum_{j\neq i(t)}H((\bar{u}_j-\bar{u}_{i(t)})(t,x(t)))-H((\underline{u}_j-\underline{u}_{i(t)})(t,x(t))). \end{equation*} The function $H$ being convex and differentiable, it holds: \begin{equation*} - \gamma'(t) \leq \sum_{j\neq i(t)}H'((\bar{u}_j-\bar{u}_{i(t)})(t,x(t))) (\bar{u}_j-\bar{u}_{i(t)} -(\underline{u}_j-\underline{u}_{i(t)}))(t,x(t)). \end{equation*} Using that $H$ is non-increasing and that, at time $t$, it holds for any $j\neq i(t)$: $(\underline{u}_{i(t)}-\bar{u}_{i(t)})(t,x(t))\geq (\underline{u}_j-\bar{u}_j)(t,x(t))$, it follows that $\gamma$ is non-decreasing on $(t_0,T]$. Since $\gamma(T)\leq0$, the conclusion follows. \end{proof} \begin{lemma} \label{Bound_sol_HJB} Let $\lambda\in \mathcal{M}^+([0,T]\times I)$ verify \eqref{bound_lambda_delta_measure} and let $u$ be a function satisfying \eqref{def_integral_equation} a.e. on $(t_0,T)\times [0,1] \times I$ for some $t_0\in[0,T)$. Then for any $i\in I$, $u_i$ is bounded a.e. on $(t_0,T)\times [0,1]$ by $P^\delta$, where \begin{equation} \label{def_K_lambda} P^\delta:=\max_{i\in I}\Vert g_i\Vert_\infty +T(\max_{i\in I}\Vert c_i\Vert_\infty + \vert I \vert H(\delta) )+ \delta = \bar{M}^\delta -2. \end{equation} \end{lemma} \begin{proof} From Lemma \ref{classic_ae_solution}, it holds that $u$ is both a weak subsolution and a weak supersolution of \eqref{hjb} on $(t_0,T)\times [0,1]$. Let $\underline{u}$ be such that for any $(t,s,i)\in (t_0,T]\times [0,1]\times I$: $$\underline{u}_i(t,s):=-\max_{i\in I}\Vert g_i\Vert _\infty -(T-t)(\max_{i\in I}\Vert c_i\Vert _\infty + \vert I \vert H(\delta) ). $$ The function $\underline{u}$ is a weak subsolution of \eqref{hjb}.
Let $\bar{u}$ be such that for any $(t,s,i)\in (t_0,T]\times [0,1]\times I$: $$\bar{u}_i(t,s):=\max_{i\in I}\Vert g_i\Vert _\infty +(T-t)\max_{i\in I}\Vert c_i\Vert _\infty + \int_t^T\lambda_i(d \tau).$$ The function $\bar{u}$ is a weak supersolution of \eqref{hjb}. Thus, from the comparison principle in Lemma \ref{comp_principle}, it holds that for any $(t,s,i)\in (t_0,T)\times [0,1]\times I$: \begin{equation*} -\max_{i\in I}\Vert g_i\Vert _\infty -(T-t)(\max_{i\in I}\Vert c_i\Vert _\infty + \vert I \vert H(\delta) ) \leq u_i(t,s) \leq \max_{i\in I}\Vert g_i\Vert _\infty +(T-t)\max_{i\in I}\Vert c_i\Vert _\infty + \int_t^T\lambda_i(d \tau). \end{equation*} Using the definition of $P^\delta$ in \eqref{def_K_lambda}, the conclusion follows. \end{proof} \begin{lemma} \label{classical_solution} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$ satisfying inequality \eqref{bound_lambda_delta_measure}, the function $\phi^\lambda$ defined in \eqref{def_phi} is bounded by $\bar {M}^\delta-1$ and is a solution of \eqref{def_integral_equation} a.e. on $[0,T]\times [0,1]\times I$. \end{lemma} \begin{proof} To show that $\phi^\lambda$ is a solution of \eqref{def_integral_equation} a.e. on $[0,T]\times [0,1]\times I$, one needs to prove that $\nu^\lambda$ is bounded by $M^\delta-1$. To do so, we only need to show that $\phi^\lambda$ is bounded by $\bar {M}^\delta-1$. Let $t_0\in[0,T)$ be the minimum time such that $\phi^\lambda$ is a solution of \eqref{def_integral_equation} a.e. on $(t_0,T]$. The time $t_0$ is less than $T$. Indeed, for any $i\in I$ it holds $\Vert \nu_i^\lambda(T,\cdot)\Vert_\infty = \Vert g_i\Vert_\infty<M^\delta-1$ and thus $\Vert \phi_i^\lambda(T,\cdot)\Vert_\infty< \bar {M}^\delta-1$. From the continuity of $c$ and $H$, the boundedness of $\phi^\lambda$ in Lemma \ref{Bound_sol_HJB} and the definition of $\bar {M}^\delta$ in \eqref{def_m}, there exists $\varepsilon>0$ such that for any $s\in [0,1]$ and $i\in I$: \begin{equation*} \varepsilon\left( \Vert c_i \Vert_\infty +\vert I \vert\sup_{x\in [-2\bar {M}^\delta,2\bar {M}^\delta]}\,\vert H(x)\vert \right)+\int_{T-\varepsilon}^T \lambda_i(d\tau)+\Vert g_i \Vert_\infty < \bar {M}^\delta - 1. \end{equation*} Therefore it holds $t_0\leq T-\varepsilon$, and for all $t\in (t_0,T]$ and any $i\in I$ it holds $\Vert \nu_i^\lambda(t,\cdot)\Vert_\infty \leq M^\delta -1$. From Lemma \ref{Bound_sol_HJB}, for any $i$ the function $\phi^\lambda_i$ is bounded by $P^\delta$, defined in \eqref{def_K_lambda}, a.e. on $(t_0,T]\times [0,1]$. We deduce that a.e. on $(t_0,T]\times [0,1]\times I$, $\vert \nu^\lambda_i(t,s)\vert \leq M^\delta-2$. Applying the same argument as previously, there exists $\varepsilon'>0$ such that for a.e. $(t',s,i)\in (t_0-\varepsilon',t_0] \times[0,1]\times I$: \begin{equation*} \vert \nu^\lambda_i(t',s)\vert \leq \Vert \nu^\lambda_i(t_0,\cdot)\Vert_\infty + \varepsilon' \left( \Vert c_i \Vert_\infty+\vert I \vert\sup_{x\in [-2\bar{M}^\delta,2\bar{M}^\delta]}\,\vert H(x)\vert\right)<M^\delta-1, \end{equation*} and thus for a.e. $(t',s,i)\in (t_0-\varepsilon',t_0] \times[0,1]\times I$, $\vert \phi^\lambda_i(t',s)\vert \leq \bar{M}^\delta-1$. Therefore $\phi^\lambda$ is a solution of \eqref{def_integral_equation} a.e. on $(t_0-\varepsilon',T]$, which contradicts the minimality of $t_0$ whenever $t_0>0$. Therefore $t_0=0$ and $\phi^\lambda$ is a solution of \eqref{def_integral_equation} a.e. on $[0,T]\times [0,1]\times I$.
\end{proof} \begin{lemma} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$ satisfying inequality \eqref{bound_lambda_delta_measure}, the function $\phi^\lambda$ defined in \eqref{def_phi} is the unique solution of \eqref{def_integral_equation}. \end{lemma} \begin{proof} From Lemma \ref{classical_solution} it holds that $\phi^\lambda$ is a solution of \eqref{def_integral_equation}. Uniqueness is a direct consequence of the comparison principle in Lemma \ref{comp_principle}. \end{proof} The following lemma gives an important continuity property of the mapping $\lambda \mapsto \nu^\lambda$. \begin{lemma} \label{conv_inf_space} Let $\lambda\in \mathcal{M}^+([0,T]\times I)$ satisfy inequality \eqref{bound_lambda_delta_measure} and let $\{\lambda^n\}_n$, with $\lambda^n\in C^0([0,T]\times I, \mathbb{R}_+)$ for any $n\in \mathbb{N}$, be a sequence weakly converging to $\lambda$. Then we have for any $i\in I$ and for a.e. $t\in [0,T]$: \begin{equation} \lim_{n\to \infty }\Vert \phi^n_i(t,\cdot)- \phi^\lambda_i(t,\cdot)\Vert_\infty = 0, \end{equation} where $\phi^\lambda$ (resp. $\phi^n$) is the solution to \eqref{def_integral_equation} associated to $\lambda$ (resp. $\lambda^n$). \end{lemma} \begin{proof} There exists $n_0\in \mathbb{N}$ such that for any $n\geq n_0$, $\lambda^n$ satisfies inequality \eqref{bound_lambda_delta_measure}. Using that $\phi^\lambda$ and $\phi^n$ are fixed points of $\Pi^\lambda$ and $\Pi^{\lambda^n}$ respectively, and that $F_{\delta}$ is nonexpansive, it follows that for all $(s,i)\in [0,1]\times I$ and a.e. $t\in (0,T)$: \begin{equation} \label{ineq_1_phi_psi} \begin{array}{l} \vert \phi^\lambda_i(t,s) - \phi_i^n(t,s)\vert \\ = \vert \Pi^\lambda_i(\phi^\lambda)(t,s) - \Pi_i^{\lambda^n}(\phi^n)(t,s)\vert \\ \leq \left\vert \int_t^T \sum_{j\in I,j\neq i} H((\phi^\lambda_j-\phi^\lambda_i)(\tau, S_i^{t,s}(\tau)))- H((\phi^n_j-\phi^n_i)(\tau, S_i^{t,s}(\tau)))d\tau +\int_t^T(\lambda^n_i-\lambda_i) (d\tau)\right\vert. \end{array} \end{equation} Recalling that the functions $\phi^\lambda$ and $\phi^n$ are bounded a.e. on $(0,T)\times [0,1]\times I$ by $\bar{M}^\delta$, and that $H$ is Lipschitz continuous on $[-2\bar{M}^\delta, 2\bar{M}^\delta]$ with Lipschitz constant $K^\delta$, we deduce from inequality \eqref{ineq_1_phi_psi}: \begin{equation*} \begin{array}{l} \vert \phi^\lambda_i(t,s) - \phi_i^n(t,s)\vert \leq\int_t^T K^\delta \sum_{j\in I,j\neq i}\vert (\phi^\lambda_i - \phi_i^n)(\tau, S_i^{t,s}(\tau))\vert + \vert(\phi^\lambda_j - \phi_j^n)(\tau, S_i^{t,s}(\tau))\vert d\tau +\left\vert\int_t^T(\lambda^n_i-\lambda_i)(d\tau) \right\vert \end{array} \end{equation*} Taking the supremum over $I\times[0,1]$ and applying Gronwall's lemma to $t\mapsto \sup_{i\in I,s\in [0,1]}\vert \phi^\lambda_i(t,s) - \phi_i^n(t,s)\vert$ on $[0,T]$, we obtain for a.e. $t\in [0,T]$: \begin{equation} \label{int_sup_lambda_mu} \begin{array}{ll} \sup_{i,s}\vert \phi^\lambda_i(t,s) - \phi_i^n(t,s)\vert & \leq 2 K^\delta \vert I \vert \int_t^T \sup_{i,s}\vert \phi^\lambda_i(\tau,s) - \phi_i^n(\tau,s)\vert d\tau +\sup_{i}\left\vert\int_t^T(\lambda^n_i-\lambda_i)(d\tau) \right\vert\\ &\leq \sup_{i}\left\vert\int_t^T(\lambda^n_i-\lambda_i)(d\tau) \right\vert + 2 K^\delta \vert I \vert e^{2 T K^\delta \vert I \vert}\int_0^T \sup_{i}\left\vert\int_t^T(\lambda^n_i-\lambda_i)(d\tau) \right\vert dt \end{array} \end{equation} Since for any $t\in E_\lambda$ we have $ \lim_{n\to \infty }\,\left\vert\int_t^T(\lambda^n_i-\lambda_i)(d\tau)\right\vert=0$, the result follows.
\end{proof} \subsection{Link between weak solutions of \eqref{def_sub_so_2} and fixed-point solutions of \eqref{def_integral_equation}} We first show the connection between the solutions of \eqref{def_sub_so_2} and \eqref{def_integral_equation} when $\lambda\in C^0([0,T]\times I, \mathbb{R}^+)$. \begin{lemma} \label{regular_Int_sol_implies_weak_sol} For any $\lambda\in C^0([0,T]\times I, \mathbb{R}^+)$ satisfying inequality \eqref{bound_lambda_delta_measure}, the solution $\phi^\lambda$ of \eqref{def_integral_equation} is a classical solution of \eqref{hjb} and a weak solution in the sense of Definition \ref{def_weak_sol_hjb}. \end{lemma} \begin{proof} From Lemma \ref{fixed_point_C1}, the function $\nu^\lambda$ is in $C^1$, and from \eqref{def_phi} it holds that $\phi^\lambda$ is also in $C^1$. Applying Lemma \ref{classic_ae_solution} with $t_0 = 0$, it holds on $(0,T)\times (0,1)$: \begin{equation} \label{edp_aa} \partial_t \phi^\lambda_i(t,s) + b_i(s)\partial_s \phi^\lambda_i(t,s) = \sum_{j\in I,j\neq i} H((\phi^\lambda_j-\phi^\lambda_i)(t,s))-c_i(t,s)-\lambda_i(t). \end{equation} For any $\psi \in C^\infty((0,T)\times (0,1)\times I)$, integrating by parts over $(0,T)\times (0,1)$ and using that $\phi^\lambda_i(T,\cdot)=g_i$ on $[0,1]$, we obtain for any $i\in I$: \begin{equation} \label{int_by_part_bv} \int_0^1\phi^\lambda_i(0,s)\psi_i(0,s)ds - \int_0^1g_i(s)\psi_i(T,s)ds = -\int_0^T\int_0^1(\partial_t\psi_i(t,s))\phi^\lambda_i(t,s)dsdt -\int_0^T\int_0^1\psi_i(t,s)\,\partial_t\phi^\lambda_i(t,s)\,dtds. \end{equation} Using \eqref{edp_aa}, equality \eqref{int_by_part_bv} becomes: \begin{equation*} \begin{array}{ll} \int_0^1\phi^\lambda_i(0,s)\psi_i(0,s)ds - \int_0^1g_i(s)\psi_i(T,s)ds & = -\int_0^T\int_0^1(\partial_t\psi_i(t,s))\phi^\lambda_i(t,s)+ \psi_i(t,s)b_i(s)\partial_s \phi^\lambda_i(t,s) dsdt \\ & + \int_0^T\int_0^1 \psi_i(t,s)\left( \sum_{j\in I,j\neq i} -H((\phi^\lambda_j-\phi^\lambda_i)(t,s))+c_i(t,s)+\lambda_i(t)\right)dsdt \end{array} \end{equation*} Integrating by parts the term $\psi_i b_i\partial_s\phi^\lambda_i$, using that $b_i(0)=b_i(1)=0$, the result follows. \end{proof} The previous lemma is now extended to any $\lambda\in \mathcal{M}^+([0,T]\times I)$ such that $t\mapsto \lambda([t,T))$ is continuous at $0$. This continuity assumption is motivated by the following remark. \begin{remark}\label{continuity_measure_0} Let $\lambda\in \mathcal{M}^+([0,T]\times I)$ and $\{\lambda^n\}_n$ be a sequence in $C^\infty([0,T]\times I,\mathbb{R}_+)$ converging weakly to $\lambda$. If for any $i\in I$ the function $t\mapsto \lambda_i([t,T))$ is continuous at $0$, then for any $\psi\in C^0(0,1)$, we have for any $i\in I$: \begin{equation} \label{cont_0_measure} \int_0^1\phi^n_i(0,s)\psi(s)ds\xrightarrow[n\to \infty]{}\int_0^1\phi^\lambda_i(0,s)\psi(s)ds, \end{equation} where $\phi^n$ is the solution of \eqref{def_integral_equation} associated to $\lambda^n$. Indeed, the continuity of $t\mapsto \lambda_i([t,T))$ at $0$ implies $ \lim_{n\to \infty }\,\left\vert\int_0^T(\lambda^n_i-\lambda_i)(d\tau)\right\vert=0$. Applying the same arguments as in the proof of Lemma \ref{conv_inf_space}, the result is then deduced from inequality \eqref{int_sup_lambda_mu} at time $t=0$.
\end{remark} \begin{lemma}\label{Int_sol_implies_weak_sol} For any $\lambda\in \mathcal{M}^+([0,T]\times I)$ such that $t\mapsto \lambda([t,T))$ is continuous at $0$, the solution $\phi^\lambda$ of \eqref{def_integral_equation} is a weak solution of \eqref{hjb} in the sense of Definition \ref{def_weak_sol_hjb}. \end{lemma} \begin{proof} Let $\tilde{\lambda}\in\mathcal{M}^+(\mathbb{R}\times I)$ be an extension of $\lambda$ to $\mathbb{R}\times I$, defined for any $i\in I$ by $\tilde{\lambda}_i(B)=\lambda_i(B\cap [0,T])$ for any $B\in \mathcal{B}(\mathbb{R})$. Let $\xi$ be a standard convolution kernel on $\mathbb{R}_+$ such that $\xi>0$. Let $\xi^n(t):=\xi(t/\varepsilon_n)/\varepsilon_n$ with $\varepsilon_n\xrightarrow[n\to \infty]{} 0$. For any $n\in \mathbb{N}$, let the function $\lambda^n$ be defined by: \begin{equation} \label{convolution} \lambda^n:=\xi^n \ast \tilde{\lambda}, \end{equation} where $\ast$ stands for the convolution product. Then $\lambda^n\in C^\infty([0,T]\times I,\mathbb{R}_+)$ and the sequence $\{\lambda^n\}_n$ weakly converges to $\lambda$ in $\mathcal{M}^+([0,T]\times I)$. From Lemmas \ref{classical_solution} and \ref{fixed_point_C1}, we know that for any $\lambda^n$ there exists a function $\phi^n\in C^1([0,T]\times [0,1]\times I)$ such that $\phi^n$ is a solution of \eqref{def_integral_equation}. From Lemma \ref{conv_inf_space}, it follows that the sequence $\{\phi^n\}_n$ converges to $\phi^\lambda$ w.r.t. the norm $\Vert\cdot \Vert_1$. Since for any $i\in I$ and $n\in \mathbb{N}$ we have $\lambda^n_i\in C^\infty((0,T),\mathbb{R}_+)$, Lemma \ref{regular_Int_sol_implies_weak_sol} gives that for any $\psi_i\in C^\infty((0,T)\times (0,1))$ we have: \begin{equation} \label{weak_sol_conv_n} \begin{array}{l} \int_0^1\phi^n_i(0,s)\psi_i(0,s)ds -\int_0^1g_i(s)\psi_i(T,s)ds +\int_0^T\int_0^1\left(\partial_t\psi_i(t,s)+\partial_s(\psi_i(t,s)b_i(s))\right)\phi^n_i(t,s)dsdt \vspace{0.1cm} \\ +\int_0^T\int_0^1 \left(\sum_{j\in I,j\neq i}H((\phi^n_j-\phi^n_i)(t,s))-c_i(t,s)\right)\psi_i(t,s)dtds-\int_0^T\int_0^1\psi_i(t,s)\lambda^n_i(dt)ds\vspace{0.1cm}\\ = 0. \end{array} \end{equation} Taking a subsequence $\{\phi^{n_k}\}_k$ converging a.e. on $[0,T]\times [0,1]\times I$ to $\phi^\lambda$ and using Remark \ref{continuity_measure_0}, letting $k$ tend to infinity in equality \eqref{weak_sol_conv_n} gives the result. \end{proof} Finally, the next lemma states the converse of the previous lemma. \begin{lemma} \label{int_sol_imp_weak_sol} For any $(\lambda,\varphi)\in \mathcal{M}^+([0,T]\times I)\times BV((0,T)\times (0,1)\times I)$, if $\varphi$ is a weak solution of \eqref{hjb}, in the sense of Definition \ref{def_weak_sol_hjb}, associated to $\lambda$, then $\varphi$ satisfies equality \eqref{def_integral_equation} a.e. on $[0,T]\times [0,1]\times I$. \end{lemma} \begin{proof} Let $\theta\in C^\infty((0,T)\times (0,1)\times I)$, $\Theta\in C^\infty((0,1)\times I)$ and $\psi\in C^1((0,T)\times (0,1)\times I, \mathbb{R})$ be such that: \begin{equation} \label{equ_verif_phi} \begin{array}{ll} \partial_t\psi_i(t,s)+\partial_s(\psi_i(t,s)b_i(s)) = \theta_i(t,s) & \mbox{on }(0,T)\times (0,1)\times I,\\ \psi_i(0,\cdot)=\Theta_i(\cdot)&\mbox{on }(0,1)\times I.
\end{array} \end{equation} One has, for any $(t,s,i)\in [0,T]\times [0,1]\times I$: \begin{equation*} \psi_i(t,s)=\int_0^t\theta_i(\tau, S_i^{t,s}(\tau))\exp\left(-\int_\tau^t b'_i(S_i^{t,s}(r))dr\right)d\tau + \Theta_i(S_i^{t,s}(0)) \exp\left(-\int_0^t b'_i(S_i^{t,s}(\tau))d\tau\right). \end{equation*} For any $i\in I$, let $\nu_i$ and $\pi_i$ be defined for any $(t,s)\in [0,T]\times [0,1]$ by: \begin{equation*} \nu_i(t,s):=\int_0^t\theta_i(\tau, S_i^{t,s}(\tau))\exp\left(-\int_\tau^t b'_i(S_i^{t,s}(r))dr\right)d\tau \quad \mbox{ and }\quad \pi_i(t,s):=\Theta_i(S_i^{t,s}(0)) \exp\left(-\int_0^t b'_i(S_i^{t,s}(\tau))d\tau\right). \end{equation*} One can observe that $\psi_i=\nu_i+\pi_i$. For any function $f\in L^1((0,T)\times (0,1))$, by switching the order of integration, applying the change of variable $x=S_i^{t,s}(\tau)$ and Lemma \ref{lemma_flow_2}, it holds: \begin{equation} \label{eq2_demo_weak_int} \begin{array}{ll} \int_0^T\int_0^1f(t,s)\nu_i(t,s)dtds &=\int_0^T\int_0^1\int_0^t f(t,s)\theta_i(\tau, S_i^{t,s}(\tau))\exp\left(-\int_\tau^t b'_i(S_i^{t,s}(r))dr\right)d\tau dsdt \vspace{0.1cm}\\ & = \int_0^T\int_0^1\theta_i(\tau, x)\int_\tau^T f(t,S_i^{\tau,x}(t))\exp\left(-\int_\tau^t b'_i(S_i^{\tau,x}(r))dr\right)\partial_xS_i^{\tau,x}(t)dt dxd\tau,\\ &= \int_0^T\int_0^1\theta_i(\tau, x) \int_\tau^T f(t,S_i^{\tau,x}(t))dt dx d\tau. \end{array} \end{equation} Applying the same computation, for any $i\in I$ one has: \begin{equation} \begin{array}{ll} \int_0^1 g_i(s)\nu_i(T,s)ds & = \int_0^T\int_0^1 g_i(S_i^{\tau,x}(T)) \theta_i(\tau, x)dx d\tau, \end{array} \end{equation} \begin{equation} \label{eq3_demo_weak_int} \begin{array}{ll} \int_0^T\int_0^1f(t,s)\pi_i(t,s)dtds &= \int_0^1\Theta_i(x)\int_0^Tf(t,S_i^{0,x}(t)) dt dx,\\ \end{array} \end{equation} and: \begin{equation} \label{eq4_demo_weak_int} \begin{array}{ll} \int_0^1 g_i(s)\pi_i(T,s)ds & = \int_0^1 \Theta_i( x) g_i(S_i^{0,x}(T))dx. \end{array} \end{equation} Let $\lambda_i\in \mathcal{M}^+([0,T])$; the same computation gives: \begin{equation} \label{same_calc_lambda} \begin{array}{l} \int_0^T\int_0^1\psi_i(t,s)ds\lambda_i(dt) = \int_0^T \int_0^1\theta_i(\tau,x)\left(\int_\tau^T\lambda_i(dt)\right)dx\, d\tau + \int_0^1\Theta_i(x)\left(\int_0^T\lambda_i(dt)\right)dx \end{array} \end{equation} Taking $f(t,s) = \sum_{j\in I,j\neq i }H((\varphi_j-\varphi_i)(t,s)) + c_i(t,s)$, using \eqref{eq2_demo_weak_int}, \eqref{eq3_demo_weak_int}, \eqref{eq4_demo_weak_int} and \eqref{same_calc_lambda}, together with equation \eqref{equ_verif_phi} satisfied by $\psi_i$, equation \eqref{def_sub_so_2} becomes for any $i\in I$: \begin{equation*} \begin{array}{l} \int_0^1 \Theta_i(s)\left(\varphi_i(0,s)+\int_0^T\sum_{j\in I,j\neq i }H((\varphi_j-\varphi_i)(\tau,S_i^{0,s}(\tau)))-c_i(\tau,S_i^{0,s}(\tau)) d\tau -\int_0^T\lambda_i(d\tau)-g_i(S_i^{0,s}(T)) \right)ds \\ + \int_0^T \int_0^1 \theta_i(t,s)\left(\varphi_i(t,s) + \int_t^T \sum_{j\in I,j\neq i }H((\varphi_j-\varphi_i)(\tau,S_i^{t,s}(\tau)))-c_i(\tau,S_i^{t,s}(\tau))d\tau -\int_t^T\lambda_i(d\tau)-g_i(S_i^{t,s}(T))\right)dsdt\\ =0. \end{array} \end{equation*} Since $\theta_i$ and $\Theta_i$ are arbitrary test functions, for a.e. $s\in [0,1]$ one has: \begin{equation} \label{init_equation_ae} \varphi_i(0,s) = \int_0^T\sum_{j\in I,j\neq i }-H((\varphi_j-\varphi_i)(\tau,S_i^{0,s}(\tau)))+c_i(\tau,S_i^{0,s}(\tau)) d\tau +\int_0^T\lambda_i(d\tau)+g_i(S_i^{0,s}(T)), \end{equation} and a.e.
on $(0,T)\times (0,1)$: \begin{equation*} \varphi_i(t,s) = \int_t^T \sum_{j\in I,j\neq i }-H((\varphi_j-\varphi_i)(\tau,S_i^{t,s}(\tau)))+c_i(\tau,S_i^{t,s}(\tau))d\tau +\int_t^T\lambda_i(d\tau)+g_i(S_i^{t,s}(T)). \end{equation*} From the previous equality, and using that $\varphi_i\in BV((0,T)\times(0,1))$, it holds in the sense of traces: $\varphi_i(T,\cdot)=g_i$ on $(0,1)$. \end{proof} \section{Dual problem}\label{dual_prob_section} In this section, an optimization problem \eqref{dual_prob} is introduced. Using tools from convex analysis \cite{ekeland1999convex}, we show that this problem is in duality with \eqref{problemE}. We consider the set $\tilde{I}:=\{(i,j)\in I^2;\,i\neq j\}$ and the following spaces: \begin{equation*} E_0 := C^1([0,T]\times [0,1]\times I)\times C^0([0,T]\times I) \mbox{ and } E_1:= C^0([0,T]\times[0,1]\times I) \times C^1([0,T]\times [0,1]\times \tilde{I}). \end{equation*} We consider the following inequality: \begin{equation} \label{hjb_2} \begin{array}{ll} - \partial_ t\bar{\varphi}_i(t,s)-b_i(s)\partial_s\bar{\varphi}_i(t,s)-c_i(t,s)-\lambda_i(t)+\sum_{j\in I,j\neq i}H((\bar{\varphi}_j-\bar{\varphi}_i)(t,s))\leq0 & \mbox{on }(0,T)\times (0,1)\times I,\\ \bar{\varphi}_i(T,\cdot)\leq g_i & \mbox{on } (0,1)\times I. \end{array} \end{equation} The set $\mathcal{K}_0$ is defined by: $\mathcal{K}_0:=\{(\varphi,\lambda)\in E_0;\,\varphi\mbox{ solution of }\eqref{hjb_2}\mbox{ associated to }\lambda\}$. We introduce the function ${A}$, defined on $\mathcal{K}_0$ by: \begin{equation} \label{def_dual_fun} {A}(\varphi,\lambda):= \sum_{i \in I}\int_0^1 - \varphi_i(0,s)m_i^0(ds) + \int_0^T\lambda_i(t)D_i(t)dt, \end{equation} and the following problem is considered: \begin{equation} \label{dual_prob} \inf_{(\varphi,\lambda)\in \mathcal{K}_0}\,A(\varphi,\lambda). \end{equation} \begin{lemma} \label{dual_finite} $\inf_{(\varphi,\lambda)\in \mathcal{K}_0}\,A(\varphi,\lambda)$ is finite. \end{lemma} \begin{proof} We consider $(\varphi,\lambda)\in \mathcal{K}_0$, and let $\bar{\varphi}$ be a classical solution of the PDE \eqref{hjb} associated to $\lambda$, i.e., of \eqref{hjb_2} with the inequalities replaced by equalities. From the comparison principle (see Lemma \ref{comp_principle}), it holds $\varphi\leq \bar{\varphi}$ on $[0,T]\times [0,1]\times I$. Thus, we have: \begin{equation} \label{ineq_A} A(\bar{\varphi},\lambda)\leq A(\varphi,\lambda). \end{equation} The set $\mathcal{L}_0$ is defined by: $ \mathcal{L}_0:=\{(\varphi,\lambda)\in E_0;\,\varphi\mbox{ solution of }\eqref{hjb}\mbox{ associated to }\lambda\mbox{ and }\lambda\geq 0\}$. From \eqref{ineq_A}, we obtain: \begin{equation} \label{link_inf_A} \inf_{(\varphi,\lambda)\in \mathcal{L}_0}\,A(\varphi,\lambda)= \inf_{(\varphi,\lambda)\in \mathcal{K}_0}\,A(\varphi,\lambda). \end{equation} Let $(\bar{\varphi},\lambda)\in \mathcal{L}_0$. From Lemma \ref{int_sol_imp_weak_sol}, $\bar{\varphi}$ satisfies \eqref{def_integral_equation}. Then, taking $t=0$, we have for any $(i,s)\in I\times[0,1]$: $\bar{\varphi}_i(0,s)\leq \int_0^T\big(c_i(\tau,S_i^{0,s}(\tau))+\lambda_i(\tau)\big)d\tau + g_i(S_i^{0,s}(T))$, where $S_i$ is the flow defined in equation \eqref{ODE}.
Setting $Q:=-\sum_{i\in I}\int_0^1\big(g_i(S_i^{0,s}(T))+\int_0^Tc_i(t,S_i^{0,s}(t))dt\big)m^0_i(ds)$, one has: \begin{equation} \label{to_get_ineq_A} Q+\sum_{i\in I} \int_0^T\lambda_i(t)\left(D_i(t)-\int_0^1m^0_i(ds)\right)dt \leq A(\bar{\varphi},\lambda). \end{equation} Using that $\lambda\geq0$, we deduce from Assumption \ref{hyp_on_D} and \eqref{to_get_ineq_A}: \begin{equation} \label{to_get_ineq_A_2} Q\leq \inf_{(\varphi,\lambda)\in \mathcal{L}_0}\,A(\varphi,\lambda). \end{equation} Combining \eqref{to_get_ineq_A_2} and \eqref{link_inf_A}, the conclusion follows. \end{proof} We consider the bounded linear operator $\Lambda:E_0\to E_1$ defined by: $\Lambda(\varphi,\lambda):=(\partial_t\varphi+b\partial_s\varphi +\tilde{\lambda}, \Delta \varphi)$, where $\partial_t\varphi + b\partial_s\varphi:=(\partial_t\varphi_i+ b_i\partial_s\varphi_i)_{i\in I}$, $\Delta \varphi :=(\Delta \varphi_{i,j})_{(i,j)\in \tilde{I}}$ with $\Delta \varphi_{i,j}=\varphi_j- \varphi_i$, and for any $(s,i)\in [0,1]\times I$, $\tilde{\lambda}_i(\cdot,s):=\lambda_i(\cdot)$. The linear map $\Lambda^\ast:E_1^\ast\to E_0^\ast $ is the adjoint operator of $\Lambda$. The functional $\mathcal{F}$ is defined by: \begin{equation*} \mathcal{F}(\varphi,\lambda):= \left\{ \begin{array}{ll} \sum_{i \in I}\int_0^1-\varphi_i(0,s)m_i^0(ds)+ \int_0^T D_i(t)\lambda_i(t) dt& \mbox{if }\varphi_i(T,\cdot)\leq g_i\mbox{ and }\lambda_i\geq 0\quad\forall i\in I, \\ +\infty& \mbox{otherwise.} \end{array} \right. \end{equation*} Using that: \begin{equation*} \begin{array}{l} \langle (m,E),\Lambda(\varphi,\lambda)\rangle_{E_1^\ast, E_1} \vspace{0.2cm} \\ = \sum_{i\in I} \int_0^1 \int_0^T (\partial_t\varphi_i(t,s) + b_i(s)\partial_s\varphi_i(t,s))m_i(t,ds) + \sum_{j\in I, j\neq i}(\varphi_j(t,s)-\varphi_i(t,s))E_{i,j}(t,ds)dt\\ +\sum_{i\in I}\int_0^T\int_0^1m_i(t,ds)\tilde{\lambda}_i(t,s)dt, \end{array} \end{equation*} and defining $\mathcal{F}^\ast$ as the Fenchel conjugate of $\mathcal{F}$, we have: \begin{equation*} \mathcal{F}^\ast\left(\Lambda^\ast(m,E)\right):= \left\{ \begin{array}{ll} \int_0^1 \sum_{i\in I}g_i(s)m_i(T,ds) & \mbox{if }(m,E) \mbox{ weak solution of \eqref{evoX}}\\ &\mbox{and }\int_0^1m_i(t,ds)\leq D_i(t)\quad\forall(t,i)\in[0,T]\times I,\vspace{0.2cm}\\ +\infty & \mbox{otherwise}. \end{array} \right. \end{equation*} For any $(x,y)\in E_1$, the functional $\mathcal{G}$ is defined by: \begin{equation*} \mathcal{G}(x,y):=\left\{ \begin{array}{ll} 0 & \mbox{if } -c_i(t,s) - x_i(t,s) +\sum_{j\in I,j\neq i}\frac{(y_{i,j}(t,s)^-)^2}{2}\leq 0 \quad\forall (t,s,i)\in(0,T)\times(0,1)\times I, \\ +\infty & \mbox{otherwise}. \end{array} \right. \end{equation*} Then for any $(\varphi,\lambda)\in E_0$ it holds: \begin{equation*} \mathcal{G}(\Lambda(\varphi,\lambda)):=\left\{ \begin{array}{ll} 0 & \mbox{if } -c_i(t,s)-\partial_t\varphi_i(t,s) - b_i(s)\partial_s\varphi_i(t,s) -\tilde{\lambda}_i(t,s) + \sum_{j\in I,j\neq i}\frac{((\Delta\varphi_{i,j}(t,s))^-)^2}{2}\leq 0 \\ & \forall (t,s,i)\in(0,T)\times(0,1)\times I,\vspace{0.2cm}\\ +\infty & \mbox{otherwise}. \end{array} \right. \end{equation*} Observing, from \cite{benamou1998optimal}, that for any $(\rho,w)\in \mathbb{R}^2$: \begin{equation*} \sup_{a,b \in \mathbb{R}}\,\{a\rho+bw;\,a+\frac{(b^+)^2}{2}\leq 0\}=\left\{ \begin{array}{ll} \frac{1}{2}\frac{w^2}{\rho}& \mbox{if }\rho >0\mbox{ and }w\geq0,\\ 0& \mbox{if }\rho =0\mbox{ and }w=0,\\ +\infty & \mbox{otherwise}, \end{array} \right.
\end{equation*} then, applying computations similar to \cite[Lemma 4.3]{cardaliaguet2015mean}, for any $(m,E)\in E_1^\ast$, we have: \begin{equation} \label{Gast0} \begin{array}{l} \mathcal{G}^{\ast}(-(m,E))\\ = \sup_{(x,y)\in E_1}\,\sum_{i\in I}\int_0^T\int_0^1-x_i(t,s)m_i(t,ds)dt-\sum_{j\neq i}y_{i,j}(t,s)E_{i,j}(t,ds)dt-\mathcal{G}(x,y)\vspace{0.1cm}\\ =\sup_{(x,y)\in E_1}\,\sum_{i\in I}\int_0^T\int_0^1(-x_i(t,s)-c_i(t,s)+c_i(t,s))m_i(t,ds)-\sum_{j\neq i}y_{i,j}(t,s)E_{i,j}(t,ds)dt-\mathcal{G}(x,y)\vspace{0.2cm}\\ =\sum_{i\in I}\int_0^T\int_0^1c_i(t,s)m_i(t,ds)dt+\sup_{(x,y)\in E_1}\,\sum_{i\in I}\int_0^T\int_0^1x_i(t,s)m_i(t,ds)+\sum_{j\neq i}y_{i,j}(t,s)E_{i,j}(t,ds)dt-\mathcal{G}(-x-c,-y)\vspace{0.2cm}\\ = \left\{ \begin{array}{ll} \int_0^T\int_0^1\sum_{i\in I} c_i(t,s) m_i(t,ds) + \sum_{j\neq i} \frac{1}{2}\left(\frac{\mbox{d}E_{i,j}}{ \mbox{d} m_i}(t,s)\right)^2m_i(t, ds)dt & \mbox{if }m>0,E\geq 0 \mbox{ and }E\ll m,\vspace{0.1cm}\\ 0 & \mbox{if }m=0 \mbox{ and }E=0,\vspace{0.1cm}\\ +\infty &\mbox{otherwise} \end{array} \right. \end{array} \end{equation} The following lemma is useful to show the constraint qualification for Problem \eqref{dual_prob}. \begin{lemma} \label{contraint_qualifiation} There exists $(\varphi,\lambda)\in E_0$ such that $\mathcal{F}(\varphi,\lambda)< \infty$ and $\mathcal{G}$ is continuous at $\Lambda(\varphi,\lambda)$. \end{lemma} \begin{proof} Let $\varphi$ and $\lambda$ be such that for any $i\in I$, $s\in[0,1]$ and $t\in [0,T]$: \begin{equation*} \varphi_i(t,s) = -\max_{j\in I} \Vert g_j \Vert_\infty - 1 \end{equation*} and \begin{equation*} \lambda_i(t) := \Vert c_i\Vert_\infty + 1. \end{equation*} The functions $\varphi$ and $\lambda$ being constant, it holds that $(\varphi,\lambda)\in E_0$ and $\mathcal{F}(\varphi,\lambda)<\infty$. Also, from the choice of $\varphi$ and $\lambda$, it follows that for any $i\in I$, $s\in(0,1)$ and $t\in (0,T)$: \begin{equation*} -c_i(t,s)-\partial_t\varphi_i(t,s) - b_i(s)\partial_s\varphi_i(t,s) -\lambda_i(t) + \sum_{j\in I,j\neq i}\frac{((\Delta\varphi_{i,j}(t,s))^-)^2}{2}< - \frac{1}{2}. \end{equation*} Thus, $\mathcal{G}$ is continuous at $\Lambda(\varphi,\lambda)$. \end{proof} \begin{theorem} \label{pb_in_duality} We have: \begin{equation*} \inf_{(\varphi,\lambda)\in \mathcal{K}_0}\,A(\varphi,\lambda)= - \underset{(m,E)\in CE(m^0,D)}{\inf}\,\tilde{B}(m,E). \end{equation*} \end{theorem} \begin{proof} One can observe that: \begin{equation*} \inf_{(\varphi,\lambda)\in \mathcal{K}_0}\,A(\varphi,\lambda)= \inf_{(\varphi,\lambda)\in E_0}\mathcal{F}(\varphi,\lambda)+\mathcal{G}(\Lambda(\varphi,\lambda)), \end{equation*} and \begin{equation*} \inf_{(m,E)\in CE(m^0,D)}\,\tilde{B}(m,E)= \inf_{(m,E)\in E_1^\ast}\, \mathcal{F}^\ast(\Lambda^\ast(m,E))+\mathcal{G}^{\ast}(-(m,E)). \end{equation*} Using Lemmas \ref{contraint_qualifiation} and \ref{dual_finite}, the conclusion follows by applying the Fenchel-Rockafellar duality theorem \cite{ekeland1999convex}. \end{proof} \subsection{Relaxed problem of \eqref{dual_prob}}\label{sub_sec_relaxed_prob} The problem defined in \eqref{dual_prob} might not have a solution. A relaxed problem is therefore introduced and the existence of a solution is proved. We define $\mathcal{R}_0$ by: \begin{equation*} \mathcal{R}_0:=\{(\varphi,\lambda)\,\vert \, \lambda\in \mathcal{M}^+([0,T]\times I)\mbox{ and }\varphi\mbox{ solution of \eqref{hjb}, in the sense of Definition \ref{def_weak_sol_hjb}, associated to }\lambda\}.
\end{equation*} The following relaxed problem is considered: \begin{equation} \label{relaxed_problem} \inf_{(\varphi,\lambda)\in \mathcal{R}_0}\tilde{A}(\varphi,\lambda), \end{equation} where: \begin{equation} \label{def_tilde_A} \tilde{A}(\varphi,\lambda):= \sum_{i \in I}\int_0^1 - \varphi_i(0,s)m_i^0(ds) + \int_0^TD_i(t)\lambda_i(dt). \end{equation} \subsection{Existence of a solution of the relaxed problem \eqref{relaxed_problem}} In order to prove the existence of a solution of \eqref{relaxed_problem}, we need the following estimate on $\lambda$. \begin{lemma} \label{borne_lambda} For any $A>0$, there exists a constant $ K_A>0$ such that for any $(\varphi,\lambda)\in \mathcal{R}_0$ satisfying $\tilde{A}(\varphi,\lambda)\leq A$, we have: \begin{equation*} \sum_{i\in I}\int_0^T\lambda_i(dt) \leq K_A. \end{equation*} \end{lemma} \begin{proof} Let $A>0$ and $(\varphi,\lambda)\in \mathcal{R}_0$ be such that $\tilde{A}(\varphi,\lambda)\leq A$. Since $(\rho,0)\in CE(m^0,D)$ (where $\rho$ is defined in Section \ref{prob_formulation}) and using Assumptions \ref{hyp_on_b} and \ref{hyp_on_m_0}, it follows that $\rho\in C^1((0,T)\times (0,1)\times I)$. From the definition \eqref{def_sub_so_2} of a weak solution, taking $\rho$ as a test function, we have for any $i\in I$: \begin{equation*} \begin{array}{l} \int_0^1\left(\varphi_i(0,s)m^0_i(s) -g_i(s)\rho_i(T,s)\right)ds +\int_0^T\int_0^1\sum_{j\in I,j\neq i}H((\varphi_j- \varphi_i)(t,s))\rho_i(t,s)dsdt \\ = \int_0^T\int_0^1 c_i(t,s)\rho_i(t,s)dsdt+ \int_0^T\int_0^1\rho_i(t,s)ds\lambda_i(dt). \end{array} \end{equation*} Since for any $(t,s)\in [0,T]\times [0,1]$ and $(i,j)\in \tilde{I}$ it holds $H((\varphi_j- \varphi_i)(t,s))\rho_i(t,s)\geq 0$, we obtain: \begin{equation*} \int_0^1\varphi_i(0,s)m^0_i(ds) \leq \int_0^T\int_0^1\rho_i(t,s)ds\lambda_i(dt) + K_i, \end{equation*} where $K_i:=\int_0^1g_i(s)\rho_i(T,s)ds+\int_0^T\int_0^1c_i(t,s)\rho_i(t,s)dsdt$. From the definition of $\tilde{A}$ in \eqref{def_tilde_A}, we deduce: \begin{equation*} -\sum_{i\in I}K_i+\sum_{i\in I}\int_0^T\left(D_i(t)-\int_0^1\rho_i(t,s)ds\right)\lambda_i(dt)\leq \tilde{A}(\varphi,\lambda)\leq A. \end{equation*} Using Assumption \ref{hyp_on_D}, there exists $\varepsilon^0>0$ such that for any $i\in I$ and $t\in[0,T]$, it holds $D_i(t)-\int_0^1\rho_i(t,s)ds=D_i(t)-\int_0^1m_i^0(s)ds > \varepsilon^0$. Thus, we get $\lambda([0,T]\times I)\leq K_A$, where $K_A:=\frac{A+\sum_{i\in I}K_i}{\varepsilon^0}$. \end{proof} The next lemma is useful to show that, for any minimizing sequence $\{(\varphi^n,\lambda^n)\}_n$ of \eqref{relaxed_problem}, $\{\tilde{A}(\varphi^n,\lambda^n)\}_n$ converges up to a subsequence. \begin{lemma} \label{conv_phi_en_0} Let $(\phi, \lambda)\in \mathcal{R}_0$ and let $\{(\phi^n, \lambda^n)\}_n\in \mathcal{K}_0^\mathbb{N}$ be a sequence such that $\{\phi^n\}_n$ converges to $\phi$ in $L^1((0,T)\times (0,1)\times I)$ and $\{\lambda^n\}_n$ weakly converges to $\lambda$ in $\mathcal{M}^+(I\times [0,T])$. Then, up to a subsequence of $\{(\phi^n, \lambda^n)\}_n$, it holds: \begin{equation*} \underset{n\to \infty}{\lim}\int_0^1\sum_{i\in I}(\phi^n_i(0,s)-\phi_i(0,s)) m^0_i(s)ds=0. \end{equation*} \end{lemma} \begin{proof} Let $\rho$ be defined as in Section \ref{prob_formulation}.
Since $(\phi, \lambda)\in \mathcal{R}_0$ and, for any $n\in \mathbb{N}$, $(\phi^n, \lambda^n)\in \mathcal{K}_0$, we obtain for any $i\in I$: \begin{equation} \label{diff_weak_formulation} \begin{array}{l} \int_0^1(\phi_i(0,s)-\phi^n_i(0,s))m^0_i(s)ds +\int_0^T\int_0^1\sum_{j\in I,j\neq i}\left(H((\phi_j- \phi_i)(t,s))- H((\phi_j^n- \phi_i^n)(t,s))\right)\rho_i(t,s)dsdt \\ = \int_0^T\int_0^1\rho_i(t,s)ds(\lambda_i-\lambda_i^n)(dt). \end{array} \end{equation} Since $\phi^n\underset{n\to \infty}{\to}\phi$ in $L^1$, there exists a subsequence of $\{(\phi^n, \lambda^n)\}_n$ such that $\phi^n\underset{n\to \infty}{\to}\phi$ a.e. on $[0,T]\times [0,1]\times I$. From Lemma \ref{classical_solution} and the weak convergence of $\{\lambda^n\}_n$, the sequence $\{\phi^n \}_n$ is uniformly bounded. Then, the continuity of $H$ and the dominated convergence theorem give: \begin{equation} \label{conv_H} \underset{n\to \infty }{\lim} \int_0^T\int_0^1\sum_{j\in I,j\neq i}\left(H((\phi_j- \phi_i)(t,s))- H((\phi_j^n- \phi_i^n)(t,s))\right)\rho_i(t,s)dsdt=0. \end{equation} For any $i\in I$, it holds $\rho_i\in C^1 ((0,T)\times(0,1))$; then, from the weak$^\ast$ convergence of $\{\lambda_i^n\}_n$ to $\lambda_i$, we get: \begin{equation} \label{conv_lambda_n} \underset{n\to \infty }{\lim}\int_0^T\int_0^1\rho_i(t,s)ds(\lambda_i-\lambda_i^n)(dt)= 0. \end{equation} Taking the limit in \eqref{diff_weak_formulation} and using \eqref{conv_H} and \eqref{conv_lambda_n}, the result follows. \end{proof} \begin{theorem} \label{existence_solution_relaxed_problem} The relaxed problem \eqref{relaxed_problem} has a solution. \end{theorem} \begin{proof} Taking a minimizing sequence $\{(\varphi^n,\lambda^n)\}_n$ of \eqref{relaxed_problem}, according to Lemma \ref{borne_lambda}, the sequence $\{\lambda^n\}_n$ is bounded. Thus there exists a subsequence $\{\lambda^{\theta_n}\}_n$ of $\{\lambda^n\}_n$ weakly converging to $\lambda^\ast$ in $\mathcal{M}^+([0,T]\times I)$. From Lemma \ref{classical_solution}, there exists a weak solution $\varphi^\ast$ of \eqref{def_sub_so_2} associated to $\lambda^\ast$. From Lemma \ref{conv_inf_space}, it holds that $\{\varphi^{\theta_n}\}_n$ converges to $\varphi^\ast$ w.r.t. the norm $\Vert \cdot \Vert_1$. Thus, Lemma \ref{conv_phi_en_0} gives, up to a subsequence of $\{\varphi^{\theta_n}\}_n$: \begin{equation} \label{conv_en_0_val_absolue} \underset{n\to \infty}{\lim}\,\sum_{i\in I}\int_0^1(\varphi^{\theta_n}_i(0,s)-\varphi^\ast_i(0,s)) m^0_i(s)ds=0. \end{equation} From Assumption \ref{hyp_on_D}, we have $D_i\in C^0 (0,T)$ for any $i\in I$; then, from the weak$^\ast$ convergence of $\{\lambda_i^{\theta_n}\}_n$ to $\lambda_i^\ast$, one has: \begin{equation} \label{conv_lambda_val_absolu} \underset{n\to\infty}{\lim}\,\sum_{i\in I}\left\vert\int_0^TD_i(t)\lambda_i^\ast(dt)-\int_0^TD_i(t)\lambda_i^{\theta_n}(t)dt \right\vert=0. \end{equation} Thus, $(\varphi^\ast, \lambda^\ast)$ minimizes $\tilde{A}$. \end{proof} The next theorem shows that Problem \eqref{dual_prob} and the relaxed problem \eqref{relaxed_problem} have the same value. \begin{theorem} \label{pb_relaxed_equals_original_pb} It holds: \begin{equation} \inf_{(\varphi,\lambda)\in \mathcal{K}_0}{A}(\varphi,\lambda)= \inf_{(\varphi,\lambda)\in \mathcal{R}_0}\tilde{A}(\varphi,\lambda). \end{equation} \end{theorem} \begin{proof} From Theorem \ref{existence_solution_relaxed_problem}, we get that \eqref{relaxed_problem} has a solution $(\varphi^\ast, \lambda^\ast)\in \mathcal{R}_0$.
Since $\mathcal{K}_0\subset \mathcal{R}_0$, it is clear that \begin{equation} \tilde{A}(\varphi^\ast, \lambda^\ast) \leq \inf_{(\varphi,\lambda)\in \mathcal{K}_0}{A}(\varphi,\lambda). \end{equation} Let $\xi$ be a standard convolution kernel on $\mathbb{R}_+$ such that $\xi>0$, and let the functions $\{\lambda^n\}_n$ be defined as in \eqref{convolution} to approximate $\lambda^\ast$. For any $n\in \mathbb{N}$, let $\varphi^n$ be the solution of \eqref{def_integral_equation} associated to $\lambda^n$. From Lemma \ref{conv_inf_space}, we have that the sequence $\{\varphi^n\}_n$ converges to $\varphi^\ast$ w.r.t. the norm $\Vert\cdot \Vert_1$. From Lemma \ref{fixed_point_C1}, it holds for any $n\in \mathbb{N}$ that $\varphi^n\in C^1((0,T)\times (0,1)\times I)$ and thus $(\varphi^n,\lambda^n)\in \mathcal{K}_0$. Using similar arguments as in the proof of Theorem \ref{existence_solution_relaxed_problem}, one obtains: $ \inf_{(\varphi,\lambda)\in \mathcal{K}_0}{A}(\varphi,\lambda)\leq \inf_{(\varphi,\lambda)\in \mathcal{R}_0}\tilde{A}(\varphi,\lambda)$. \end{proof} \section{Characterization of minimizers} \label{charac_mini} The purpose of this section is to define and characterize the solutions of Problem \eqref{problemE}. We show that the following system gives optimality conditions for \eqref{problemE}: \begin{equation} \label{forward_backward_system} \left\{ \begin{array}{ll} -\partial_t\varphi_i-b_i\partial_s\varphi_i-c_i-\lambda_i+\sum_{j\in I,j\neq i}H((\varphi_j-\varphi_i))= 0 &\mbox{on }(0,T)\times (0,1)\times I, \\ \partial_t m_i+\partial_s(m_i b_i) +\sum_{j\neq i}((\varphi_i-\varphi_j)^+m_i -(\varphi_j-\varphi_i)^+m_{j}) =0 &\mbox{on }(0,T)\times (0,1)\times I,\vspace{0.2cm}\\ m_i(0,s) = m_i^0(s),\, \varphi_i(T,s) = g_i(s) &\mbox{on } (0,1)\times I, \\ \int_0^1m_i(t,s)ds-D_i(t)\leq 0,\,\lambda\geq 0 &\mbox{on }[0,T]\times I,\\ \sum_{i\in I}\left(\int_0^T\int_0^1 m_i(t,ds)\lambda_i(dt) - \int_0^T D_i(t)\lambda_i(dt)\right)=0. \end{array} \right. \end{equation} The notion of weak solutions of the system \eqref{forward_backward_system} is detailed in the following definition. \begin{definition} \label{weak_sol_syst} A triplet $(\varphi, \lambda, m)\in BV((0,T)\times (0,1)\times I)\times \mathcal{M}^+([0,T]\times I)\times \Lip([0,T]\times [0,1]\times I)$ is called a weak solution of \eqref{forward_backward_system} if it satisfies the following conditions. \begin{enumerate} \item \label{weak_sol_hjb_syst} The function $\varphi$ is a weak solution of \eqref{hjb} associated to $\lambda$, in the sense of Definition \ref{def_weak_sol_hjb}, and $\varphi_i(T,\cdot)=g_i$ in the sense of traces. \item $m$ satisfies the continuity equation: \begin{equation*} \partial_t m_i+\partial_s(m_i b_i) +\sum_{j\neq i}((\varphi_i-\varphi_j)^+m_i -(\varphi_j-\varphi_i)^+m_{j}) =0,\quad m_i(0,\cdot)=m_i^0, \end{equation*} in the sense of Definition \ref{weak_sol_cont_equ}, with $\alpha_{i,j}=(\varphi_i-\varphi_j)^+$. \item It holds, for any $t\in [0,T]$ and any $i\in I$: \begin{equation*} \begin{array}{lll} \int_0^1m_i(t,s)ds-D_i(t)\leq 0 & \mbox{ and } &\sum_{i\in I}\left(\int_0^T\int_0^1 m_i(t,ds)\lambda_i(dt) - \int_0^T D_i(t)\lambda_i(dt)\right)=0. \end{array} \end{equation*} \end{enumerate} \end{definition} \begin{remark} From Lemmas \ref{fixed_point_C1} and \ref{classical_solution}, it holds that for any $i\in I$, $\varphi_i$ is bounded and, for a.e. $t\in [0,T]$, $\varphi_i(t,\cdot)$ is continuous on $[0,1]$. Thus, the forward equation in \eqref{forward_backward_system} makes sense.
\end{remark} \begin{theorem} \label{optimality_conditions} \begin{enumerate} \item \label{minimizer_implies_weak_solution} If $(m,E)\in CE(m^0,D)$ is a minimizer of Problem \eqref{problemE} and $(\varphi,\lambda)\in \mathcal{R}_0$ is a minimizer of Problem \eqref{relaxed_problem}, then $(\varphi, \lambda, m)$ is a weak solution of \eqref{forward_backward_system} and $\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}=(\varphi_i-\varphi_j)^+$ on $\{m_i>0\}$ for any $i,j\in I$. \item \label{weak_sol_implies_minimizer} Conversely, if $(\varphi,\lambda,m)$ is a weak solution of \eqref{forward_backward_system}, then $(\varphi, \lambda)\in \mathcal{R}_0$ is a minimizer of Problem \eqref{relaxed_problem} and there exists $E$, defined for any $i,j\in I$ by $\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}:=(\varphi_i-\varphi_j)^+$, such that $(m,E)\in CE(m^0,D)$ is a minimizer of \eqref{problemE}. \end{enumerate} \end{theorem} \subsection{Proof of Theorem \ref{optimality_conditions}} Before starting the proof of Theorem \ref{optimality_conditions}, we make the following remark. \begin{remark} \label{ineg_phi_0_phi_T} Suppose $(\varphi,\lambda)\in\mathcal{R}_0$ and $(m,E)\in CE(m^0,D)$ with $m\in L^\infty([0,T]\times [0,1]\times I)$ and $E\in L^\infty([0,T]\times [0,1]\times I \times I,\mathbb{R}_+)$; then one has: $-\tilde{B}(m,E)\leq \tilde{A}(\varphi,\lambda)$. Indeed, let $\{\lambda^n\}_n$ be a sequence of smooth functions approximating $\lambda$, obtained by convolution as in \eqref{convolution}. For any $n\in \mathbb{N}$, $\varphi^n\in C^1((0,T)\times (0,1)\times I)$ denotes the solution of \eqref{def_integral_equation} associated to $\lambda^n$. From Lemma \ref{conv_phi_en_0}, one has: $\lim_{n\to \infty}A(\varphi^n,\lambda^n)=\tilde{A}(\varphi,\lambda)$. Observing that for any $n\in \mathbb{N}$ we have $(\varphi^n,\lambda^n)\in \mathcal{K}_0$, and using the proof of Theorem \ref{pb_in_duality}, we get: $A(\varphi^n,\lambda^n)\geq -\tilde{B}(m,E)$ and thus $\tilde{A}(\varphi,\lambda)\geq -\tilde{B}(m,E)$. \end{remark} \begin{proof}[Proof of Theorem \ref{optimality_conditions}.(\ref{minimizer_implies_weak_solution})] From Theorems \ref{pb_relaxed_equals_original_pb} and \ref{pb_in_duality}, it follows: \begin{equation*} \inf_{(\varphi,\lambda)\in \mathcal{R}_0}\,\tilde{A}(\varphi,\lambda)= - \underset{(m,E)\in CE(m^0,D)}{\inf}\,\tilde{B}(m,E), \end{equation*} and thus \begin{equation} \label{eg_opt_obj_functions} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi_i(0)m_i^0 +\int_0^TD_i\lambda_i+ \int_0^T\int_0^1\left(c_i+\sum_{j\neq i}L\left(\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}\right)\right)m_i = 0. \end{equation} We want to show that $E_{i,j}=m_i(\varphi_i-\varphi_j)^+$. Let $\{\lambda^n\}_n$ be a sequence of smooth functions approximating $\lambda$, obtained by convolution as in \eqref{convolution}. For any $n\in \mathbb{N}$, $\varphi^n\in C^1((0,T)\times (0,1)\times I)$ denotes the solution of \eqref{def_integral_equation} associated to $\lambda^n$. For any $n\in \mathbb{N}$, $\varphi^n$ is smooth enough to be a test function in the weak formulation of \eqref{evoX} satisfied by $(m,E)$.
Using Lemma \ref{conv_phi_en_0} and the fact that $D\in C^0([0,T]\times I)$, it holds: \begin{equation*} \begin{array}{l} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi_i(0)m_i^0 + \int_0^T\lambda_iD_i \\= \lim_{n\to \infty}\sum_{i\in I} \int_0^1g_im_i(T)-\varphi_i^n(0)m_i^0 + \int_0^T D_i\lambda^n_i\\ = \lim_{n\to \infty}\sum_{i\in I} \int_0^T\int_0^1\left(\partial_t\varphi^n_i+b_i\partial_s(\varphi^n_i) +\sum_{j\in I,j\neq i}(\varphi_j^n-\varphi_i^n)\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i} \right)m_i + \int_0^T\lambda_i^n D_i \\ = \lim_{n\to \infty}\sum_{i\in I} \int_0^T\int_0^1\left(-c_i+\sum_{j\in I,j\neq i}H(\varphi_j^n-\varphi_i^n) +\sum_{j\in I,j\neq i}(\varphi_j^n-\varphi_i^n)\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i} \right)m_i + \int_0^T\lambda_i^n\left(D_i-\int_0^1m_i\right). \end{array} \end{equation*} Using the previous equality and \eqref{eg_opt_obj_functions}, we obtain: \begin{equation*} \lim_{n\to \infty}\sum_{i\in I} \int_0^T\int_0^1\sum_{j\in I,j\neq i}\left(H(\varphi_j^n-\varphi_i^n) +L\left(\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}\right) +(\varphi_j^n-\varphi_i^n)\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i} \right)m_i + \int_0^T\lambda_i^n\left(D_i-\int_0^1m_i\right) = 0. \end{equation*} From Lemma \ref{conv_inf_space}, for a.e. $t\in [0,T]$, the sequence $\{\varphi^n(t,\cdot)\}_{n}$ converges uniformly to $\varphi(t,\cdot)$. From Lemma \ref{classical_solution}, we have that $\varphi$ is bounded; consequently, the sequence $\{\varphi^n\}_{n}$ is uniformly bounded. Using the dominated convergence theorem, we get that $\textstyle \int_0^T\int_0^1m_iH(\varphi_j^n-\varphi_i^n)$ converges to $\textstyle \int_0^T \int_0^1m_iH(\varphi_j-\varphi_i)$ for any $i,j\in I$. Since $(m,E)$ is a solution of \eqref{problemE}, $\tilde{B}(m,E)$ is finite and, from Lemma \ref{RelativCompact}, one has for any $i,j\in I$: $\textstyle \int_0^T\int_0^1 E_{i,j}<\infty$. Applying the dominated convergence theorem again, we get that $\textstyle\int_0^T\int_0^1(\varphi_j^n-\varphi_i^n)E_{i,j}$ converges to $\textstyle \int_0^T \int_0^1(\varphi_j-\varphi_i)E_{i,j}$. In addition, for any $i\in I$, the map $t\mapsto \int_0^1m_i(t,s)ds$ is continuous, so the weak convergence of $\{\lambda^n\}_n$ to $\lambda$ in $\mathcal{M}^+([0,T]\times I)$ gives: $$\lim_{n\to \infty}\sum_{i\in I} \int_0^T\lambda_i^n\left(D_i-\int_0^1m_i\right)=\sum_{i\in I} \int_0^T\lambda_i\left(D_i-\int_0^1m_i\right).$$ Thus, we have: \begin{equation} \label{lim_eg_opt_obj_functions} \sum_{i\in I}\int_0^T\int_0^1\sum_{j\in I,j\neq i}\left(H(\varphi_j-\varphi_i) +L\left(\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}\right) +(\varphi_j-\varphi_i)\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i} \right)m_i + \sum_{i\in I}\int_0^T\left(D_i-\int_0^1m_i\right)\lambda_i(dt) = 0. \end{equation} Since $\lambda\geq 0$ and, for any $t\in [0,T]$, $\int_0^1m_i(t,s)ds\leq D_i(t)$, one has for any $i\in I$: \begin{equation} \label{lim_inf_lamb_ineq_m_n} 0\leq \int_0^T\left(D_i(t)-\int_0^1m_i(t,s)ds\right)\lambda_i(dt). \end{equation} Recalling the definition of $L$ and $H$: \begin{equation*} L(p):=\left\{ \begin{array}{cc} \frac{p^2}{2} & \mbox{if }p\geq 0\\ +\infty & \mbox{otherwise} \end{array} \right. \quad \mbox{and} \quad H(q)=\frac{({q^-})^2}{2}, \end{equation*} we have $L^\ast(p) = H(-p)$. One can observe that: \begin{equation*} \forall q\in \mathbb{R}_-\quad H(q)+L(p)+pq =\frac{(p+q)^2}{2}\geq 0 \quad\mbox{ and }\quad\forall q\in \mathbb{R}_+\quad H(q)+L(p)+pq \geq \frac{p^2}{2} \geq 0.
\end{equation*} Using the two previous inequalities and \eqref{lim_inf_lamb_ineq_m_n}, equality \eqref{lim_eg_opt_obj_functions} gives: \begin{equation} \label{id_E} \frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}(t,s)=\left\{ \begin{array}{ll} \varphi_i(t,s)-\varphi_j(t,s) & \mbox{if }\varphi_i(t,s)-\varphi_j(t,s) \geq 0, \\ 0 & \mbox{otherwise,} \end{array} \right. \quad m\mbox{-a.e.,} \end{equation} and inequality \eqref{lim_inf_lamb_ineq_m_n} becomes an equality. Thus, it also holds: $\textstyle\int_0^T\left(D_i(t)-\int_0^1m_i(t,s)ds\right)\lambda_i(dt)=0$. From equality \eqref{id_E} and Lemma \ref{regularity_m}, we deduce that $m \in \Lip([0,T]\times [0,1]\times I)$. (\ref{weak_sol_implies_minimizer}) We assume now that $(\varphi,\lambda,m)$ is a weak solution of \eqref{forward_backward_system}. Since $\varphi$ is in $BV((0,T)\times (0,1)\times I)$ and $\lambda$ is a finite measure, the quantity $\tilde{A}(\varphi,\lambda)$ is well defined. Thus $(\varphi,\lambda)$ belongs to $\mathcal{R}_0$. We want to show that $\tilde{A}(\varphi,\lambda)+\tilde{B}(m,E)=0$. We approximate $(\varphi,\lambda)$ with the same sequence $\{(\varphi^n,\lambda^n)\}_n$ as in the proof of Theorem \ref{optimality_conditions}.(\ref{minimizer_implies_weak_solution}). For any $n$, $\varphi^n$ is smooth enough to be considered as a test function for $m$, and we have: \begin{equation} \label{test_phi_weak_sol_m} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi^n_i(0)m_i^0 -\sum_{i\in I}\int_0^T\int_0^1\left(m_ib_i\partial_s\varphi_i^n +m_i\partial_t\varphi_i^n\right)+\sum_{i\in I}\int_0^T\int_0^1\varphi_i^n\sum_{j\in I,j\neq i}\left((\varphi_i-\varphi_j)^+m_i - (\varphi_j-\varphi_i)^+m_j\right) = 0. \end{equation} For any $i\in I$, $\varphi^n_i$ is a classical solution of \eqref{hjb} associated to $\lambda^n$. Multiplying \eqref{hjb} by $m_i$, summing over $I$ and integrating over $[0,T]\times [0,1]$, we have: \begin{equation} \label{varphi_classic_sol_multiplied_m} \sum_{i\in I}\int_0^T\int_0^1-m_i\partial_t\varphi_i^n- m_ib_i\partial_s \varphi_i^n -m_ic_i-m_i\lambda^n_i +\sum_{j\in I,j\neq i}H(\varphi_j^n-\varphi_i^n)m_i =0. \end{equation} Combining \eqref{test_phi_weak_sol_m} and \eqref{varphi_classic_sol_multiplied_m}: \begin{equation*} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi^n_i(0)m_i^0 +\sum_{i\in I}\int_0^T\int_0^1 c_i m_i +\lambda^n_i m_i -m_i\sum_{j\in I,j\neq i}\left(H(\varphi_j^n-\varphi_i^n)-(\varphi_i-\varphi_j)^+(\varphi^n_i-\varphi^n_j)\right)= 0. \end{equation*} Since $(\varphi,\lambda,m)$ is a weak solution of \eqref{forward_backward_system}, using Lemmas \ref{conv_phi_en_0} and \ref{conv_inf_space} and letting $n$ tend to infinity, one deduces: \begin{equation*} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi_i(0)m_i^0 +\sum_{i\in I}\int_0^T D_i \lambda_i(dt) +\sum_{i\in I} \int_0^T\int_0^1 c_i m_i -m_i\sum_{j\in I,j\neq i}\left(H(\varphi_j-\varphi_i)-(\varphi_i-\varphi_j)^+(\varphi_i-\varphi_j)\right)= 0. \end{equation*} Using the definition of $L$ and $H$, we have: \begin{equation*} \sum_{i\in I} \int_0^1g_im_i(T)-\varphi_i(0)m_i^0 +\sum_{i\in I} \int_0^T D_i \lambda_i(dt) +\sum_{i\in I} \int_0^T\int_0^1 c_i m_i +m_i\sum_{j\in I,j\neq i} L((\varphi_i-\varphi_j)^+)= 0. \end{equation*} Using the definition of $\tilde{A}$ in \eqref{def_tilde_A} and of $\tilde{B}$ in \eqref{def_B}, we obtain $\tilde{A}(\varphi,\lambda)+\tilde{B}(m,E)=0$.
The conclusion follows from Remark \ref{ineg_phi_0_phi_T}. \end{proof} \subsection{Proof of Theorem \ref{main_results}} Using Theorem \ref{optimality_conditions} and applying the change of variable $\alpha_{i,j}=\frac{\mbox{d}E_{i,j}}{\mbox{d}m_i}$, we are now ready to prove our main result, Theorem \ref{main_results}. \begin{proof}[Proof of Theorem \ref{main_results}] (\ref{min_imp_wea}) This statement is proved by Theorem \ref{optimality_conditions}.\ref{minimizer_implies_weak_solution}. (\ref{wea_imp_min}) This point is given by Theorem \ref{optimality_conditions}.\ref{weak_sol_implies_minimizer}. (\ref{min_imp_reg}) Using Theorem \ref{main_results}.(\ref{min_imp_wea}), there exists $(\varphi,\lambda)$ such that $(\varphi,\lambda,m)$ is a weak solution of \eqref{forward_backward_system} and, for any $i,j\in I$, $\alpha_{i,j}=(\varphi_i-\varphi_j)^+$. Since $\partial_s\varphi\in L^\infty((0,T)\times I, C^0([0,1]))$, one deduces that $\alpha_{i,j}\in L^\infty((0,T),\Lip(0,1))$. Applying Lemma \ref{regularity_m}, we deduce that $m\in \Lip([0,T]\times [0,1]\times I)$. \end{proof}
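Although not needed for the results above, the fixed-point construction of Lemmas \ref{contraction_lemma} and \ref{fixed_point_pi} lends itself to a direct numerical illustration. The following Python sketch is purely illustrative and not part of the proofs: it iterates the integral equation \eqref{def_integral_equation} for two modes with $H(q)=\frac{(q^-)^2}{2}$, under the assumed toy data $b_i\equiv 0$ (so that the flow is trivial, $S_i^{t,s}(\tau)=s$), smooth $\lambda_i$, and explicit choices of $c_i$ and $g_i$; the truncation $F_\delta$ is omitted since the iterates remain bounded for these data.
\begin{verbatim}
import numpy as np

# Toy Picard iteration for the integral equation: two modes, b_i = 0
# (so S_i^{t,s}(tau) = s), smooth lambda_i, H(q) = (q^-)^2 / 2.
# All concrete data below are illustrative assumptions.
T, nt, ns = 1.0, 201, 51
t = np.linspace(0.0, T, nt); dt = t[1] - t[0]
s = np.linspace(0.0, 1.0, ns)

H = lambda q: 0.5 * np.minimum(q, 0.0) ** 2
c = [lambda t, s: 1.0 + 0.0 * s, lambda t, s: 0.5 + s]    # running costs
g = [lambda s: s ** 2, lambda s: 1.0 - s]                 # terminal costs
lam = [lambda t: 0.2 + 0.0 * t, lambda t: 0.1 + 0.0 * t]  # multipliers

def gamma(phi):
    """phi_i(t,s) = g_i(s) + int_t^T (-H(phi_j - phi_i) + c_i + lam_i)."""
    new = np.empty_like(phi)
    for i in range(2):
        j = 1 - i
        f = (-H(phi[j] - phi[i])
             + c[i](t[:, None], s[None, :]) + lam[i](t)[:, None])
        seg = 0.5 * (f[1:] + f[:-1]) * dt            # trapezoidal segments
        tail = np.zeros((nt, ns))
        tail[:-1] = np.cumsum(seg[::-1], axis=0)[::-1]   # int_t^T f
        new[i] = g[i](s)[None, :] + tail
    return new

phi = np.zeros((2, nt, ns))
for k in range(200):
    nxt = gamma(phi)
    err = float(np.abs(nxt - phi).max())
    phi = nxt
    if err < 1e-12:
        break
print("Picard iterations:", k + 1, "residual:", err)
\end{verbatim}
For small $T$ this plain iteration is a sup-norm contraction and converges in a handful of steps; the exponentially weighted norm $\Vert\cdot\Vert_1^\delta$ used in Lemma \ref{contraction_lemma} is precisely what removes the smallness restriction on $T$.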
\subsection{Monte Carlo simulation} In order to simulate our signal, we have implemented the model of Sec.~\ref{lhcnmfv_SecModel} into {\sc Feynrules 2.0}~\cite{Alloul:2013bka} to get a UFO model~\cite{Degrande:2011ua} to be used within the {\sc MadGraph5}\_aMC@NLO framework~\cite{Alwall:2014hca}. We have generated leading-order (LO) hard-scattering matrix elements for squark pair-production and decay, which we have convolved with the leading-order set of NNPDF~3.0 parton distribution functions~\cite{Ball:2014uwa}. Parton showering and hadronisation have been handled with {\sc Pythia 8.2}~\cite{Sjostrand:2014zea}, and each event has been reweighted so that the corresponding total rate matches the production cross-section estimated at the NLO+NLL accuracy~\cite{Borschensky:2014cia}. We generate a grid in the parameter space of the model, the lightest squark mass being varied in the [600~GeV, 1.5~TeV] window in steps of 100~GeV, and the neutralino mass in the [50~GeV, 900~GeV] window in steps of 50~GeV for $m_{\tilde\chi^0_1}<400\, \GeV$ and of 100~GeV above. The squark mixing angle is fixed to $\pi/4$. As stated above, we focus on the signal topology with one isolated lepton (electron or muon), jets and missing transverse energy. The SM processes which can mimic this topology involve one or two leptons originating either from the decay of a $W$ or a $Z$ boson, or from leptonically-decaying tau leptons. We consequently generate events for SM \ensuremath{t\bar{t}}\xspace, $Wt$, $t$-channel single top, \ensuremath{t\bar{t}W}\xspace, \ensuremath{t\bar{t}Z}\xspace, $tWZ$, $tZ$, \ensuremath{W}+jets\xspace, \ensuremath{Z}+jets\xspace, $WW$, $WZ$ and $ZZ$ production. For \ensuremath{t\bar{t}}\xspace, single top and diboson processes, events are simulated at the NLO in QCD within the {\sc Powheg Box} framework \cite{Alioli:2010xd}. Samples for the remaining processes are then generated at LO, using {\sc MadGraph5}\_aMC@NLO. We consider matrix elements featuring a variable number of additional jets, which we merge according to the CKKW prescription as implemented in {\sc Pythia 8}~\cite{Lonnblad:2011xx}. For \ensuremath{W}+jets\xspace and \ensuremath{Z}+jets\xspace, we merge samples describing final states containing up to four additional partons, whereas for \ensuremath{t\bar{t}W}\xspace and \ensuremath{t\bar{t}Z}\xspace production, the matrix elements are allowed to include up to two extra partons. All these events are reweighted so that the total rates match the next-to-next-to-leading order (NNLO) cross-sections if available, or the NLO ones otherwise.\par Jets are reconstructed according to the anti-$k_T$ jet algorithm~\cite{Cacciari:2008gp} with a jet radius parameter set to $R = 0.4$, as implemented in {\sc FastJet}~\cite{Cacciari:2011ma}. Moreover, jets are labelled as $b$-jets if the angular distance $\Delta R\equiv(\Delta\phi^2+\Delta\eta^2)^{1/2}$ between the jet and the nearest $B$-hadron satisfies $\Delta R < 0.5$. Similarly, we define $c$-jets as jets that are not labelled as $b$-jets and for which there exists a charmed hadron lying at an angular distance smaller than 0.5 from the jet. Any jet that is not identified as a $b$-jet or as a $c$-jet is labelled as a light jet.
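As an illustration, the truth-flavour labelling just described can be sketched as follows; the containers and attribute names are hypothetical (a real analysis would use the four-vectors of the event record), and only the $\Delta R<0.5$ matching with the $b$-before-$c$ hierarchy is taken from the text.
\begin{verbatim}
import math
from collections import namedtuple

# Hypothetical minimal containers for illustration only.
Obj = namedtuple("Obj", ["pt", "eta", "phi"])

def delta_r(a, b):
    """Angular distance (Delta phi^2 + Delta eta^2)^(1/2)."""
    dphi = math.remainder(a.phi - b.phi, 2.0 * math.pi)
    return math.hypot(a.eta - b.eta, dphi)

def truth_flavour(jet, b_hadrons, c_hadrons, r_match=0.5):
    """Label a jet: 'b' if a B hadron lies within Delta R < 0.5,
    otherwise 'c' if a charmed hadron does, otherwise 'light'."""
    if any(delta_r(jet, h) < r_match for h in b_hadrons):
        return "b"
    if any(delta_r(jet, h) < r_match for h in c_hadrons):
        return "c"
    return "light"
\end{verbatim}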
The missing transverse momentum $\ensuremath{\vec p^{\mathrm{\ miss}}_\mathrm{T}\xspace}$, with magnitude $\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace$, is estimated as the vector sum of the transverse momenta of all invisible particles.\par
Detector effects are simulated by smearing the momenta of all reconstructed objects and by applying reconstruction efficiency factors in a way that reproduces the performance of the ATLAS detector~\cite{Aad:2008zzm, Aad:2009wy}, as described in Ref.~\cite{Pani:2017qyd}. In particular, we include $b$-tagging and $c$-tagging efficiency and rejection factors based on the performance reported in Refs.~\cite{ATL-PHYS-PUB-2015-022, ATL-PHYS-PUB-2017-013}. We adopt working points corresponding to an average $b$-tagging efficiency of $\epsilon_b(b)=77\%$ with charm and light jet rejection factors of $1/\epsilon_b(c)=4.5$ and $1/\epsilon_b(l)=140$ respectively, and to an average $c$-tagging efficiency of $\epsilon_c(c)=30\%$ with rejection factors of $1/\epsilon_c(b)=18$ and $1/\epsilon_c(l)=5$ for $b$-jets and light jets respectively. Such a choice is aimed at optimising background rejection when the background is dominated by final states featuring two $b$-jets. As there is currently no public information on the correlations between the $b$-tagging and $c$-tagging algorithms used by the collaborations, we do not allow a jet to be $b$-tagged and $c$-tagged simultaneously. We instead first select jet candidates based on their kinematics before applying either $b$-tagging or $c$-tagging.\par
We have compared our approach with an independent simulation based on the publicly available detector simulation software {\sc Delphes 3}~\cite{deFavereau:2013fsa}, and have found good agreement between the two methods.
\subsection{Event selection}
\label{sec:anal1}
The topology of interest includes one isolated lepton (electron or muon) arising from the top decay, jets including one $b$-jet (also issued from the top decay) and one $c$-jet, as well as missing transverse energy carried by the two neutralinos. Consequently, we preselect events by requiring the presence of exactly one isolated electron or muon with a transverse momentum \mbox{$\ensuremath{p_{\mathrm{T}}}\xspace>25$~GeV} and a pseudorapidity $|\eta|<2.5$, and of at least one \ensuremath{b}-tagged\xspace jet with $\ensuremath{p_{\mathrm{T}}}\xspace>50\, \GeV$ and $|\eta|<2.5$. We moreover require the invariant mass of at least one of the possible systems made of a $b$-jet and the lepton to fulfil $m_{b\ell} < 160\, \GeV$, since in the signal case the lepton and the $b$-jet originate from a top decay, so that $m_{b\ell}$ is bounded from above by roughly 153~GeV. The dominant backgrounds at this point consist of $\ensuremath{t\bar{t}}\xspace$ events with either one or both top quarks decaying leptonically, single top events and $\ensuremath{t\bar{t}Z}\xspace$ events with an invisible $Z$-boson decay. As all backgrounds where the missing energy originates from a leptonic $W$-boson decay ($W \to \ell \nu$) feature
\begin{equation}
\ensuremath{m_\mathrm{T}^{lep}}\xspace \!\equiv\!\sqrt{2\,|\ensuremath{\vec p^{\mathrm{\ \ell}}_\mathrm{T}\xspace}|\,\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace\,(1\!-\!\cos\Delta\phi(\ensuremath{\vec p^{\mathrm{\ \ell}}_\mathrm{T}\xspace},\ensuremath{\vec p^{\mathrm{\ miss}}_\mathrm{T}\xspace}))}< m_W\ ,
\end{equation}
we require $\ensuremath{m_\mathrm{T}^{lep}}\xspace>160$~GeV to increase the signal-over-background ratio.
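For concreteness, the tagging working points and this transverse-mass preselection can be emulated as follows. This is a minimal sketch under our own simplifying assumptions (per-jet, uncorrelated tagging decisions; hypothetical function and variable names), not the analysis code itself:
\begin{verbatim}
import math
import random

# Working points quoted above; mis-tag rates are the inverse rejections.
B_TAG_EFF = {"b": 0.77, "c": 1 / 4.5, "l": 1 / 140}
C_TAG_EFF = {"b": 1 / 18, "c": 0.30, "l": 1 / 5}

def tag_jet(truth_flavour, use_c_tagging=False):
    """Mutually exclusive tag decision: a jet is offered to only one
    of the two algorithms, as described in the text."""
    eff = C_TAG_EFF if use_c_tagging else B_TAG_EFF
    if random.random() < eff[truth_flavour]:
        return "c-tag" if use_c_tagging else "b-tag"
    return "untagged"

def mt_lep(pt_lep, met, dphi):
    """Lepton-MET transverse mass of the equation above (GeV)."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

def passes_preselection(lep_pt, lep_eta, bjet_pts, m_bl_values, met, dphi):
    """One lepton, >= 1 b-jet with pT > 50 GeV, m_bl < 160 GeV,
    and mT(lep) > 160 GeV."""
    return (lep_pt > 25.0 and abs(lep_eta) < 2.5
            and any(pt > 50.0 for pt in bjet_pts)
            and any(m < 160.0 for m in m_bl_values)
            and mt_lep(lep_pt, met, dphi) > 160.0)
\end{verbatim}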
In the definition of $\ensuremath{m_\mathrm{T}^{lep}}\xspace$, $\ensuremath{\vec p^{\mathrm{\ \ell}}_\mathrm{T}\xspace}$ is the lepton (vector) transverse momentum and $\Delta\phi(\ensuremath{\vec p^{\mathrm{\ \ell}}_\mathrm{T}\xspace},\ensuremath{\vec p^{\mathrm{\ miss}}_\mathrm{T}\xspace})$ is the angle between $\ensuremath{\vec p^{\mathrm{\ \ell}}_\mathrm{T}\xspace}$ and $\ensuremath{\vec p^{\mathrm{\ miss}}_\mathrm{T}\xspace}$.
Moreover, most of these backgrounds exhibit two $b$-jets in the final state, whereas the signal features one $b$-jet and one $c$-jet instead. Two strategies can therefore be envisaged to separate the signal from the backgrounds. Either one can veto the presence of any additional $b$-tagged jet besides the one required at the preselection level (Case-A), or one can enforce, in addition, the presence of an extra $c$-tagged jet (Case-B). From naive calculations based on the efficiencies of the different tagging algorithms, the signal-over-background ratio is improved by a factor of about 1.5 for the Case-B strategy, at the price of an overall reduction in statistics by a factor of approximately 3. Both approaches are thus pursued in the following.
For the Case-A strategy, we veto the presence of any extra $b$-jet and impose that there is an extra light jet with $\ensuremath{p_{\mathrm{T}}}\xspace>100\, \GeV$ failing $b$-tagging. In contrast, for the Case-B strategy, we require that only one $b$-tagged jet satisfies $m_{b\ell} < 160\, \GeV$, we additionally impose that the leading jet fulfilling $m_{j\ell} > 160\, \GeV$ is $c$-tagged and has $\ensuremath{p_{\mathrm{T}}}\xspace>100$~GeV, and we require all remaining jets with $m_{j\ell} > 160\, \GeV$ to fail $b$-tagging.
\begin{figure*}
\begin{center}
\includegraphics[width=0.495\textwidth]{figures/amt2_blj_j1}
\includegraphics[width=0.495\textwidth]{figures/dphimin_j3.pdf}
\end{center}
\caption{Distributions in the \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace (left) and $|\Delta\phi_{\rm min}|$ (right) variables after imposing all cuts of the Case-A analysis strategy, except the one corresponding to the variable shown. The different background contributions and two representative signal scenarios are shown for an integrated luminosity of 300~fb$^{-1}$. For the $|\Delta\phi_{\rm min}|$ distribution, the \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace variable is required to be larger than 400 GeV.}
\label{fig:nm1}
\end{figure*}
In order to further reduce the dileptonic \ensuremath{t\bar{t}}\xspace background where one of the leptons escapes identification, we make use of the now standard asymmetric \ensuremath{m_{\mathrm{T2}}}\xspace variable (denoted \ensuremath{am_{\mathrm{T2}}}\xspace)~\cite{Konar:2009qr, Lester:2014yga}, which is a variant of the \ensuremath{m_{\mathrm{T2}}}\xspace observable. The \ensuremath{am_{\mathrm{T2}}}\xspace variable is built from two legs (corresponding to the two decay chains), each containing a visible part and an invisible part, and it requires two test masses corresponding to the invisible mass attached to each leg. The visible part of the first leg is built using the sum of the momenta of the $b$-tagged jet and of the lepton, with a test mass that is set to zero. The visible part of the second leg is built from the remaining jet with the highest $b$-tagging weight, and $m_W$ is used as the test mass. Since the targeted background distribution features an end-point at approximately 160~GeV, we impose an $\ensuremath{am_{\mathrm{T2}}}\xspace>200\, \GeV$ cut.
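Both \ensuremath{am_{\mathrm{T2}}}\xspace and the \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace variable introduced below rest on the same minimisation over splittings of the missing transverse momentum between the two invisible particles. The sketch below implements this with a coarse two-dimensional grid scan; it is our own illustrative approximation (grid range and granularity are arbitrary choices), whereas dedicated minimisers are used in real analyses:
\begin{verbatim}
import numpy as np

def mt2_asym(vis1, m_inv1, vis2, m_inv2, met, n=200, qmax=500.0):
    """Brute-force asymmetric mT2 (GeV).

    vis1, vis2: (m, px, py) of the two visible legs;
    m_inv1, m_inv2: test masses attached to the two legs;
    met: (px, py) of the missing transverse momentum."""
    def mtsq(m_vis, px, py, m_inv, qx, qy):
        et_vis = np.hypot(m_vis, np.hypot(px, py))
        et_inv = np.hypot(m_inv, np.hypot(qx, qy))
        return m_vis**2 + m_inv**2 + 2.0 * (et_vis * et_inv
                                            - px * qx - py * qy)
    best = np.inf
    for qx in np.linspace(-qmax, qmax, n):
        for qy in np.linspace(-qmax, qmax, n):
            trial = max(mtsq(*vis1, m_inv1, qx, qy),
                        mtsq(*vis2, m_inv2, met[0] - qx, met[1] - qy))
            best = min(best, trial)
    return np.sqrt(best)

# amT2 as described above: (b-jet + lepton) leg with zero test mass,
# second-b-jet leg with an m_W test mass (inputs are placeholders).
print(mt2_asym((80.0, 120.0, 30.0), 0.0,
               (10.0, -60.0, 40.0), 80.4, (150.0, -20.0)))
\end{verbatim}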
In addition, the background can be further reduced by constructing another transverse $m_{T2}$-type variable. The signal topology features one squark leg with a hard $c$-jet, such that the distribution in the transverse mass built from the transverse momentum of the $c$-jet and that of the neutralino exhibits an end-point at $(m_{\tilde u_1}^2-m_{\tilde\chi^0_1}^2)^{1/2}$. This feature can be exploited by constructing an appropriate \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace variable. The visible part of the first leg is built from the sum of the momenta of the $b$-tagged jet and of the lepton, together with a vanishing test mass. The visible part of the second leg uses the hardest non-$b$-tagged jet or the $c$-tagged jet for the Case-A and Case-B strategies respectively, and again a vanishing test mass. We impose a selection on \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace depending on the squark-neutralino mass splitting in order to optimise the sensitivity to the signal. This optimisation is performed by varying the cut threshold from 300 to 600~GeV in steps of 50~GeV.
Finally, it is found that after all cuts, the missing transverse momentum is aligned with one of the jets of the event for the backgrounds, whereas there is no preferential direction for the signal. We therefore apply a cut on the minimum azimuthal angle separation between any jet and the missing transverse momentum, $|\Delta\phi_{\rm min}|>0.6$.
As an illustration, we present, in Fig.~\ref{fig:nm1}, the distributions in the \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace (left) and $|\Delta\phi_{\rm min}|$ (right) variables for the different SM backgrounds and two representative signal benchmark points. All selection cuts from the Case-A analysis strategy are imposed, except the one on the variable shown. On the left panel, we can observe that a selection of \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace $> 400$~GeV is sufficient to separate the signal from the background for the lighter of the chosen benchmark models. On the right panel, we show instead the $|\Delta\phi_{\rm min}|$ distribution after including a cut of 400~GeV on \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace. We can again observe that a significant improvement of the signal-to-background ratio can be achieved by imposing a $|\Delta\phi_{\rm min}|>0.6$ cut.
\section{Results}
\label{sec:results}
On the basis of the analysis strategy outlined in the previous section, we estimate the LHC sensitivity to supersymmetric scenarios featuring mixed stop-scharm states with 300~${\rm fb}^{-1}$ and 3000~${\rm fb}^{-1}$ of integrated luminosity. For the latter configuration, we assume no modification of the detector performance at the high-luminosity LHC. The sensitivity is extracted by means of a test statistic based on a profiled likelihood ratio, and we make use of the CLs method~\cite{Read:2002hq} to obtain 95\% confidence level (CL) exclusion limits. The statistical analysis is performed with the {\sc RooStats} toolkit~\cite{Moneta:2010pm}, and we assume systematic uncertainties of 20\% and 5\% on the SM backgrounds and on the signal respectively. The results are presented in terms of the upper limits, at the 95\% CL, on the ratio of the signal yields to the corresponding benchmark predictions, denoted as $\sigma^{\mathrm{excl}}/\sigma^{\mathrm{SUSY}}$.
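A rough feeling for the interplay between the yields and the assumed systematic uncertainties can be obtained from the asymptotic median significance for a counting experiment with an uncertain background (the well-known formula of Cowan, Cranmer, Gross and Vitells). The sketch below is a simplified stand-in for the full CLs procedure, with illustrative numbers only:
\begin{verbatim}
import numpy as np

def z_asimov(s, b, sigma_b):
    """Asymptotic median significance for s signal events on top of
    b background events with absolute uncertainty sigma_b."""
    sb2 = sigma_b ** 2
    t1 = (s + b) * np.log((s + b) * (b + sb2) / (b * b + (s + b) * sb2))
    t2 = (b * b / sb2) * np.log(1.0 + sb2 * s / (b * (b + sb2)))
    return np.sqrt(2.0 * (t1 - t2))

# e.g. a 20% background systematic, as assumed for the SM backgrounds:
b = 40.0
print(z_asimov(s=25.0, b=b, sigma_b=0.20 * b))
\end{verbatim}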
\begin{figure*}
\begin{center}
\includegraphics[width=0.495\textwidth]{figures/reach_all.pdf}
\includegraphics[width=0.495\textwidth]{figures/1dlimb.pdf}
\end{center}
\vspace*{-8mm}
\caption{Left: Sensitivity of the LHC to our mixed stop-scharm scenarios, given as 95\% CL exclusion contours in the $(m_{\tilde{u}_1}, m_{\tilde{\chi}^0_1})$ plane for the Case-A (red) and Case-B (blue) analysis strategies and for 300~fb$^{-1}$ (solid) and 3000~fb$^{-1}$ (dashed). The projected excluded region lies between the exclusion contour and the bottom-left side of the figure. Right: Signal-over-background ratio as a function of $m_{\tilde{u}_1}$ for the Case-A (red, circles) and Case-B (blue, squares) analysis strategies when one imposes $\ensuremath{m_{\mathrm{T2}_{blj}}}\xspace>550 \, \GeV$.}
\label{fig:reachmass}
\end{figure*}
We show in the left panel of Fig.~\ref{fig:reachmass} the analysis reach in the $(m_{\tilde{u}_1},m_{\tilde{\chi}^0_1})$ plane, both for the Case-A (red) and Case-B (blue) analysis strategies and for 300~fb$^{-1}$ (solid) and 3000~fb$^{-1}$ (dashed) of integrated luminosity. The region that lies between the exclusion contour and the bottom-left side of the figure will be excluded by future runs of the LHC. The expected 95\% CL exclusion reach on $m_{\tilde{u}_1}$ for $m_{\tilde{\chi}^0_1}=50\, \GeV$ is $1050 \GeV$ for Case-A and $920 \GeV$ for Case-B for an integrated luminosity of 300~fb$^{-1}$. The large difference is due to the fact that the analysis reach is in this case dominated by statistics, which is lower for the analysis based on $c$-tagging owing to the 30\% efficiency of the chosen $c$-tagging working point. For an integrated luminosity of 3~ab$^{-1}$ the difference is reduced, with a reach of $1280 \GeV$ for Case-A and $1240 \GeV$ for Case-B.\par
In order to better understand the relative performance of the two analysis strategies, we present in the right panel of Fig.~\ref{fig:reachmass} the dependence of the signal-over-background ratio ($S/B$) on the squark mass $m_{\tilde{u}_1}$ for $m_{\tilde{\chi}^0_1}=50\, \GeV$, $\theta_{tc}=\pi/4$ and when a \ensuremath{m_{\mathrm{T2}_{blj}}}\xspace $>$~550~GeV cut is applied. As expected, the $S/B$ ratio is higher when $c$-tagging is incorporated. Comparisons of results stemming from analyses with and without $c$-tagging, or relying on different $c$-tagging working points, could be used to extract information on the flavour content of the observed squark, which is the main information one would like to obtain in case of a discovery. In the Case-B analysis, we have chosen a $c$-tagging working point which optimises $S/B$, but with a similar efficiency for $c$-jets and light jets, and thus not ideal for discriminating the flavour of the signal. A different $c$-tagging working point featuring a very high rejection of light jets, as {\it e.g.} in Ref.~\cite{Aaboud:2018zjf} with $\epsilon_c(c)=18\%$, $1/\epsilon_c(b)=20$ and $1/\epsilon_c(l)=200$, would yield a lower overall sensitivity, but might be used to discriminate between different flavour-mixing hypotheses for the signal.
In Fig.~\ref{fig:exclu}, we show the 95\% CL exclusion limits in the $(m_{\tilde u_1}, \theta_{tc})$ plane for a fixed neutralino mass of 50~GeV.
Recasts of the 13~TeV exclusion limits obtained by the ATLAS experiment with 36~${\rm fb}^{-1}$ are overlaid in addition (see Sec.~\ref{sec:recast}), the blue curve corresponding to the ATLAS search for top squarks in the single-lepton mode~\cite{Aaboud:2017aeu} and the red one to the ATLAS search for squarks based on charm tagging~\cite{Aaboud:2018zjf}. The region between the recast exclusion contour and the top-left (bottom-left) side of the figure is excluded by the $t\bar{t}$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace ($c\bar{c}$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace) analysis. The exclusion limits expected from the $tc$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace Case-A analysis strategy are shown as solid and dotted black lines for integrated luminosities of 300~${\rm fb}^{-1}$ and 3000~${\rm fb}^{-1}$ respectively. Moreover, we also include the expectation of such an analysis at 13~TeV with 36~${\rm fb}^{-1}$ of luminosity. For the $tc$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace analysis, the projected excluded region lies between the exclusion contour (solid, dashed and dotted black lines) and the left side of the figure. This figure clearly illustrates the strength of the $tc$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace analysis we are proposing, which covers a region of the parameter space not accessible to current searches relying on the MFV paradigm.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/alphaplane2.png}
\caption{Present and expected exclusion limits in the $(\theta_{tc}, m_{\tilde u_1})$ plane. The area between the recast exclusion contour and the top-left (bottom-left) side of the figure is excluded by the $t\bar{t}$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace ($c\bar{c}$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace) analysis. The region that lies between the exclusion contour (solid, dashed and dotted black lines) and the left side of the figure will be excluded by future runs of the LHC using the $tc$+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace analysis. See the text for details.}
\label{fig:exclu}
\end{figure}
\subsection{A simplified model for squark flavour violation}
\label{lhcnmfv_SecModel}
In order to assess the LHC sensitivity to supersymmetric models featuring non-minimal flavour violation in the squark sector, we consider a simplified model embedding two active flavours of squarks, a right-handed top squark $\tilde{t}_R$ and a right-handed charm squark $\tilde{c}_R$. These two states mix into two physical eigenstates $\tilde{u}_1$ and $\tilde{u}_2$ whose flavour structure is dictated by the $\theta_{tc}$ mixing angle,
\begin{equation}
\begin{pmatrix} \tilde{u}_1 \\ \tilde{u}_2 \end{pmatrix} = \begin{pmatrix} ~~\cos\theta_{tc} & ~\sin\theta_{tc} \\ -\sin\theta_{tc} & ~\cos\theta_{tc} \end{pmatrix} \begin{pmatrix} \tilde{c}_R \\ \tilde{t}_R \end{pmatrix} \ ,
\end{equation}
where by convention $\tilde{u}_1$ is the lighter of the two mass eigenstates. Our simplified model additionally includes one neutralino $\tilde{\chi}^0_1$, which we take to be bino-like. Such an assumption does not have a significant impact on our phenomenological results. Our setup is thus based on four parameters: the masses $m_{\tilde{u}_1}$ and $m_{\tilde{u}_2}$ of the two physical squarks, the flavour mixing angle $\theta_{tc}$, and the neutralino mass $m_{\chi^0_1}$.
For the sensitivity studies in the $(m_{\tilde{u}_1}, \theta_{tc})$ plane, the neutralino mass will be fixed to $m_{\chi^0_1} = 50$~GeV. Although a more complicated flavour structure involving left-handed squarks would be possible as well, such a setup implies the need to handle more complicated constraints from $B$-physics in order to build phenomenologically viable scenarios. Left-handed squarks are thus assumed heavier and decoupled, like any other superpartner. Our simplified model therefore exhibits two competing squark decay modes (if kinematically allowed),
\begin{equation}
\tilde{u}_i \to t \tilde{\chi}^0_1 \,, \qquad \tilde{u}_i \to c \tilde{\chi}^0_1 \qquad\text{with}\ \ i=1,2\ ,
\label{sqdecays}
\end{equation}
which yield three classes of LHC signatures originating from the production of a pair of $\tilde{u}_i$ squarks. Typical LHC search strategies have been designed on the basis of the MFV paradigm and thus only address the two signatures
\begin{equation}
pp \to t\bar{t} + \ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace \qquad\text{and}\qquad pp \to c\bar{c} + \ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace \,,
\label{decays}
\end{equation}
where $\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace$ is the imbalance in transverse momentum in the event generated by the undetected neutralinos. Squark flavour mixing opens up a third final state,
\begin{equation}
pp \to tc + \ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace \,,
\label{nmfvdecay}
\end{equation}
where one squark decays into a top quark and the other one into a charm quark~\cite{Bartl:2010du}.
\begin{figure*}
\begin{center}
\includegraphics[width=.495\textwidth]{figures/BR_varyTheta.pdf}
\includegraphics[width=.495\textwidth]{figures/BR_varyTheta_varyMchi_m500.pdf}
\end{center}
\vspace*{-5mm}
\caption{Dependence of the branching ratios BR($\tilde{u}_1\to t\tilde{\chi}^0_1$) (dashed) and BR($\tilde{u}_1\to c\tilde{\chi}^0_1$) (solid) on the squark mixing angle $\theta_{tc}$ for various mass configurations. In the left panel, the squark mass is fixed to 500~GeV (red) and 1000~GeV (blue), with the neutralino mass set to $m_{\chi^0_1} = 50$~GeV. In the right panel, the neutralino mass is set to 50~GeV (red), 200~GeV (blue) and 300~GeV (cyan), for a squark mass of $m_{\tilde{u}_1}=500$~GeV.}
\label{lhcnmfv_fig:bratio}
\end{figure*}
In Fig.~\ref{lhcnmfv_fig:bratio}, we illustrate the $\theta_{tc}$-dependence of the squark branching ratios associated with the decays of Eq.~\eqref{sqdecays}. We observe that, regardless of the squark and neutralino mass configuration, there always exists a $\theta_{tc}$ value for which both decay modes have a 50\% branching ratio, which means that half of the signal events would produce the final state of Eq.~\eqref{nmfvdecay}. Moreover, differences in the functional behaviour of the branching ratios for different mass hierarchies become noticeable only close to threshold, when the mass splitting between the decaying squark and the neutralino is small. This configuration is not considered further in this paper, as the phase space available for the decay is limited and the best experimental sensitivity is achieved with monojet or monotop probes~\cite{Fuks:2014lva}.
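The qualitative behaviour shown in Fig.~\ref{lhcnmfv_fig:bratio} can be reproduced with a schematic two-body width: for $\tilde u_1=\cos\theta_{tc}\,\tilde c_R+\sin\theta_{tc}\,\tilde t_R$, the partial widths scale as $\cos^2\theta_{tc}$ and $\sin^2\theta_{tc}$ times a phase-space factor. The sketch below uses a generic squark two-body width shape with common couplings stripped off; it is an illustration of the scaling, not the exact expressions used for the figure:
\begin{verbatim}
import numpy as np

def width_factor(m_sq, m_q, m_chi):
    """Schematic squark -> quark + neutralino width factor;
    vanishes when the decay is kinematically closed."""
    if m_sq <= m_q + m_chi:
        return 0.0
    lam = ((m_sq**2 - (m_q + m_chi)**2)
           * (m_sq**2 - (m_q - m_chi)**2))
    return np.sqrt(lam) * (m_sq**2 - m_q**2 - m_chi**2) / m_sq**3

def branching_ratios(theta_tc, m_sq, m_chi, m_t=173.0, m_c=1.3):
    """(BR(u1 -> t chi), BR(u1 -> c chi)) as functions of theta_tc."""
    g_t = np.sin(theta_tc)**2 * width_factor(m_sq, m_t, m_chi)
    g_c = np.cos(theta_tc)**2 * width_factor(m_sq, m_c, m_chi)
    tot = g_t + g_c
    return (g_t / tot, g_c / tot) if tot > 0 else (0.0, 0.0)

print(branching_ratios(np.pi / 4, m_sq=500.0, m_chi=50.0))
\end{verbatim}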
\subsection{Reinterpretation of current LHC Run~2 results}
\label{sec:recast}
\begin{figure*}
\begin{center}
\includegraphics[width=.476\textwidth]{figures/massplane}
\includegraphics[width=.495\textwidth]{figures/alphaplane}
\end{center}
\vspace*{-5mm}
\caption{Reinterpretation, in the context of our simplified model, of the ATLAS bounds on top squarks obtained with single-lepton probes~\cite{Aaboud:2017aeu} and on supersymmetry when charm tagging is used~\cite{Aaboud:2018zjf}. The results are presented in the $(m_{\tilde{u}_1}, m_{\tilde{u}_2})$ plane (left) and the $(m_{\tilde{u}_1}, \theta_{tc})$ plane (right), and the stars correspond to the official ATLAS results in the non-flavour-mixing case. The excluded region lies between the exclusion contour and the left side of the figure.}
\label{lhcnmfv_fig:recast}
\end{figure*}
The ATLAS and CMS collaborations have performed several direct searches for top squarks, mostly in a setup where they are pair-produced and decay into a top-antitop quark pair and missing energy, as indicated by the first equation of Eq.~\eqref{decays}. In the absence of any hint of new physics, the most stringent constraints arise from LHC Run~2 analyses of proton-proton collisions at a centre-of-mass energy of 13~TeV \cite{Sirunyan:2017kqq,Aaboud:2017dmy, Sirunyan:2017xse,Sirunyan:2017wif,Sirunyan:2017kiw,Aaboud:2017nfd, Aaboud:2017wqg,Aaboud:2017ayj,Sirunyan:2017pjw,Sirunyan:2017leh,Aaboud:2017phn, Aaboud:2017aeu}. All these searches lead to exclusion limits on the top squark mass of the order of 1~TeV. Bounds on first and second generation squarks are similar when a single light squark species is considered together with a decay into light jets and missing transverse energy, whereas they reach 1.5~TeV for models featuring four mass-degenerate first and second generation squarks~\cite{Aaboud:2017vwy,Sirunyan:2017cwe}. The most sensitive stop searches, yielding a similar expected sensitivity for low neutralino masses, are the ones addressing final states with either zero or one lepton. We therefore choose the recent ATLAS search for top squarks in final states with one lepton of Ref.~\cite{Aaboud:2017aeu} as a benchmark for deriving conservative Run~2 constraints on our model.\par
Additionally, the ATLAS collaboration has carried out an analysis targeting top squarks decaying into a charm quark and missing energy, as well as charm squarks~\cite{Aaboud:2018zjf}, based on the experimental tagging of jets produced from the fragmentation of charm quarks. As this signature is expected to play a significant role in constraining the considered squark inter-generational mixings, we use the analysis of Ref.~\cite{Aaboud:2018zjf} as a second LHC Run~2 benchmark to evaluate the existing constraints on our simplified model.\par
We perform a three-dimensional parameter space scan and vary independently the two squark masses ($m_{\tilde{u}_1}$ and $m_{\tilde{u}_2}$), as well as the top-charm squark mixing angle $\theta_{tc}$. As mentioned above, the neutralino mass has been fixed to 50~GeV, so that our results are valid as long as the squark masses are much larger than the neutralino mass. For each considered point, we evaluate the sensitivity of the two searches of Refs.~\cite{Aaboud:2017aeu, Aaboud:2018zjf} and present the results in Fig.~\ref{lhcnmfv_fig:recast}. The excluded region lies between the exclusion contour and the left side of the figure.
Concerning the stop analysis~\cite{Aaboud:2017aeu}, we rely on the acceptances and efficiencies that have been officially provided by the ATLAS collaboration for each of the `discovery tN\_med' (targeting moderate stop masses) and `discovery tN\_high' (targeting high stop masses) regions. We then estimate the two corresponding signal yields ($N_{\rm sig}$), considering next-to-leading order (NLO) stop pair-production rates corrected by the resummation of the threshold logarithms at the next-to-leading logarithmic (NLL) accuracy~\cite{Borschensky:2014cia} and the appropriate branching ratios. These signal yields are then compared to the ATLAS model-independent upper limit ($N^{\rm obs~limit}_{\rm non-SM}$) for each of the regions. If the ratio of these two yields exceeds one, the signal point is considered excluded. While providing acceptance and efficiency values only for the inclusive `signal' regions, the ATLAS analysis employs a multi-bin fit of the most sensitive distribution for the final exclusion limit estimation. For this reason, the recast exclusion contours presented in Fig.~\ref{lhcnmfv_fig:recast} represent a conservative estimate of the effective reach of the ATLAS search. We rely on the same procedure to extract the constraints from the charm-tagging analysis of Ref.~\cite{Aaboud:2018zjf}.
In the left panel of Fig.~\ref{lhcnmfv_fig:recast}, we consider a class of benchmark scenarios where the two squark eigenstates are maximal admixtures of the top and charm flavours ($\theta_{tc} = \frac \pi 4$) and we vary the two masses independently (with $m_{\tilde{u}_1}< m_{\tilde{u}_2}$). The total new physics production rate is here solely driven by the lighter of the two states, except for the region where the mass splitting of the two squarks is small. For sufficiently large splittings, the exclusion is thus independent of $m_{\tilde{u}_2}$, and squarks are found to be constrained to be heavier than about 550~GeV. Compared with the more standard MFV case where the two eigenstates are also flavour eigenstates (and where the bounds are of about 1~TeV), the limits are hence weakened by almost 500~GeV. The large value of the top-charm mixing angle indeed implies that the two signal regions of the stop analysis of Ref.~\cite{Aaboud:2017aeu}, specifically targeting final states with the decay products of two top quarks, are less populated by virtue of the large decay fraction into charm jets BR$(\tilde{u}_1 \to c \tilde{\chi}^0_1)$. In the parameter space region defined by
\begin{equation}
m_{\tilde{u}_1}, m_{\tilde{u}_2} \lesssim 750~{\rm GeV} \ ,
\end{equation}
the situation is somewhat different, as the two squark mass eigenstates contribute to a potentially observable new physics signal. This partly compensates the loss due to the smaller branching ratio into tops, so that the obtained limits are stronger than when the second eigenstate is heavier. The charm-tagging analysis of Ref.~\cite{Aaboud:2018zjf} always implies weaker bounds for this specific class of scenarios (the number of events populating the signal regions being very small), and the corresponding results are thus omitted.
In the right panel of Fig.~\ref{lhcnmfv_fig:recast}, we reinterpret the ATLAS limits in the $(m_{\tilde u_1}, \theta_{tc})$ plane, {\it i.e.} we decouple the second eigenstate. Our results exhibit the complementary effect of the top-charm squark mixing angle on the bounds.
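Before detailing the mixing-angle dependence, we note that the per-point exclusion test described above reduces to a simple comparison of yields. The sketch below illustrates it with placeholder inputs (the luminosity, acceptances and upper limits are not the actual ATLAS numbers):
\begin{verbatim}
LUMI = 36.1e3  # pb^-1, illustrative Run-2 luminosity

def is_excluded(xsec_pb, br_channel, regions):
    """regions: list of (acceptance x efficiency, N_obs_limit_nonSM).
    A point is excluded if N_sig exceeds the model-independent
    upper limit in at least one signal region."""
    for acc_eff, n_obs_limit in regions:
        n_sig = xsec_pb * LUMI * br_channel * acc_eff
        if n_sig > n_obs_limit:
            return True
    return False

# tN_med / tN_high style check with made-up inputs:
print(is_excluded(0.05, br_channel=0.25,
                  regions=[(0.02, 28.0), (0.01, 10.0)]))
\end{verbatim}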
For $\theta_{tc}=0$, the lightest squark is purely of charm flavour, so that the ATLAS stop search is insensitive to the signal and the limits ($m_{\tilde u_1} \gtrsim 800$~GeV) solely arise from the ATLAS charm-tagging analysis. As the mixing angle increases, the $c\bar{c}+\ensuremath{E_{\mathrm{T}}^{\mathrm{miss}}}\xspace$ production rate decreases, so that the bounds are progressively weakened. On the other hand, the increase in $\theta_{tc}$ implies that while the signal regions of the charm-tagging analysis are increasingly depleted, owing to the decreasing BR$(\tilde{u}_1 \to c\tilde{\chi}^0_1)$ branching ratio, the signal regions of the stop analysis are increasingly populated, owing to the growing BR$(\tilde{u}_1 \to t \tilde{\chi}^0_1)$ branching ratio. In the limit where the lightest squark is purely of top flavour, its mass is constrained to be at least 825~GeV. In the maximal-mixing case, the mass constraints from both analyses are below 600~GeV, which is the minimum mass value for which experimental acceptances are available for both considered benchmark analyses. We superimpose on the results of our recasting the official limits observed by ATLAS, represented by stars on the right panel of Fig.~\ref{lhcnmfv_fig:recast} for the cases where the lightest squark is of a definite flavour. The use of multi-bin signal regions increases the limits by about 50--100~GeV.
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{Model setup and existing LHC limits}
\label{sec:model}
\input{model.tex}
\section{Collider projections for the reach of the $tc$ channel}
\label{sec:collider}
\input{collider.tex}
\section{Summary}
\label{sec:summary}
\input{summary.tex}
\section*{Acknowledgements}
The authors would like to thank the organizers of the `Physics at TeV Colliders' workshop (Les Houches, June 2017) where this work was initiated. We would also like to thank Michihisa Takeuchi for many useful discussions. This work has been partially supported by French state funds managed by the Agence Nationale de la Recherche (ANR) in the context of the {\it Investissements d'avenir} Labex ENIGMASS (ANR-11-LABX-0012) and Labex ILP (ANR-11-IDEX-0004-02, ANR-10-LABX-63), and by the Grant-in-Aid for Scientific Research on Scientific Research B (No.~16H03991) and Innovative Areas (16H06492).
\bibliographystyle{spphys}
{ "attr-fineweb-edu": 1.828125, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdAY5qYVBUVZOigf4
\section{Introduction}
In this note we are interested in the special trilogarithmic solutions of the generalized Witten--Dijkgraaf--Verlinde--Verlinde (WDVV) equations \cite{MMM}. Such solutions are determined by a f\/inite collection $A$ of covectors $\a$ with multiplicities $c_\a$. More specif\/ically, the prepotential satisfying the WDVV equations has the form
\begin{gather}\label{fintro}
F= \sum_{\a \in A} c_\a {\rm Li}_3\big(e^{2i\a(x)}\big) + \mbox{cubic terms},
\end{gather}
where ${\rm Li}_3$ is the trilogarithm function. A solution of this type for the $A_n$ root system appeared in \cite{MMM2} in relation with Seiberg--Witten theory. More systematically, such solutions were studied by Hoevenaars and Martini in \cite{H1, H2}, who determined solutions for all irreducible reduced root systems \cite{H2}. More recently, solutions of the form \eqref{fintro} were derived from reductions of Egorov hydrodynamic chains in \cite{P}. The rational versions of solutions \eqref{fintro} play an important role in the theory of Frobenius manifolds, a geometric framework for the WDVV equations~\cite{D}. Thus solutions corresponding to the Coxeter root systems are almost dual to the Frobenius structures on the orbit spaces of f\/inite Coxeter groups \cite{D2}. In the trigonometric case such a duality is verif\/ied for the af\/f\/ine $A_n$ case in \cite{R,RS}. The study of general rational solutions of the form
\begin{gather}\label{fintro2}
F=\sum_{\a\in A} \a(x)^2 \log \a(x)
\end{gather}
was initiated by Veselov in \cite{V1}, where a geometric notion of the $\vee$-system equivalent to the WDVV equations for \eqref{fintro2} was introduced. It was shown in \cite{V3} that any generalized Calogero--Moser--Sutherland (CMS) operator admitting a factorized eigenfunction determines a $\vee$-system. In this note we are interested in the solutions \eqref{fintro} where the cubic terms involve an extra variable, as in the works \cite{H1,H2} on the solutions for the root systems. We derive geometric and algebraic conditions for a system of vectors with multiplicities so that the corresponding function \eqref{fintro} satisf\/ies the WDVV equations. These conditions should be thought of as a trigonometric analogue of the notion of the $\vee$-system. The conditions carry rather strong geometrical restrictions on the collection of vectors, formulated in terms of series of vectors parallel to a chosen one. We illustrate this by determining all trigonometric $\vee$-systems with up to f\/ive vectors in the plane. The trigonometric ansatz, in contrast to the rational one, allows one to def\/ine the generalized CMS operator corresponding to the solution~\eqref{fintro}. We show that this operator has a factorized eigenfunction. This statement inverts the one for the rational $\vee$-systems obtained in~\cite{V3}. In fact our arguments follow~\cite{V3} very closely. We also discuss the additional condition needed to obtain a trigonometric solution of the WDVV equations starting from a CMS operator with a factorized eigenfunction.
\section[Trigonometric $\vee$-systems]{Trigonometric $\boldsymbol{\vee}$-systems}
Consider a function $F$ of the form
\begin{gather}\label{F}
F=\frac13 y^3 + \sum_{\a\in A} c_{\a} \a(x)^2 y + \lambda \sum_{\a \in A} c_\a f(\a(x)),
\end{gather}
where $A$ is a f\/inite collection of covectors on $V\cong\mathbb C^n$, $x=(x_1,\ldots,x_n)$, $c_\a$, $\lambda$ are non-zero constants and the function $f(x)$ satisf\/ies $f'''(x)=\cot x$.
The last equation f\/ixes the function $f(x)$ up to second-order terms, which will not be important for the WDVV equations below. We may f\/ix a~choice of $f(x)$ by
\[ f(x)=\frac16 i x^3 + \frac14 {\rm Li}_3\big(e^{-2ix}\big). \]
The ansatz \eqref{F}, introducing the extra variable $y$, was proposed in \cite{H2} in the case of root systems $A=\mathcal R$. The form \eqref{F} guarantees that the matrix of third derivatives involving $y$ is constant, as we will explore below. {\it We will assume throughout the paper that the collection $A$ of covectors $\a$ belongs to an $n$-dimen\-sional lattice, and that the bilinear form
\begin{gather}\label{G}
(u,v):=\sum\limits_{\a\in A}c_\a \a(u) \a(v)
\end{gather}
is non-degenerate on~$V$.} The form $(\cdot,\cdot)$ identif\/ies $V$ and $V^*$ and, following \cite{V1}, we will denote by~$\gamma^\vee$ the vector dual to the covector $\gamma$. We will also denote by $(\cdot,\cdot)$ the corresponding inner product on $V^*$. We are interested in the conditions on $\{\a, c_{\a}, \lambda\}$ under which the function $F$ satisf\/ies the WDVV equations
\begin{gather}\label{wdvv}
F_i F_k^{-1} F_j = F_j F_k^{-1} F_i,
\end{gather}
$i,j,k=0, 1,\ldots,n$. Here $F_i$ are the $(n+1)\times (n+1)$ matrices of third derivatives in the coordinates $(x_0=y,x_1,\ldots,x_n)$, $(F_i)_{ab}=\frac{\partial^3F}{\partial x_i \partial x_a \partial x_b}$. It is suf\/f\/icient to f\/ix $k=0$; then $F_0=F_y$ is the following non-degenerate matrix:
\[ F_y=2 \left( \begin{array}{cc} 1 & 0\\ 0& \sum\limits_{\a \in A} c_\a \a \otimes \a \end{array} \right). \]
Similarly,
\[ F_i=\left( \begin{array}{cc} 0 & 2\sum\limits_{\a \in A} c_\a \a_i \a\vspace{2mm}\\ 2\sum\limits_{\a \in A} c_\a \a_i \a & \lambda\sum\limits_{\a \in A} c_\a \a_i \cot \a(x) \a \otimes \a \end{array} \right), \]
where we denote by $\a$ both the column and the row vector $\a=(\a_1,\ldots,\a_n)$. The WDVV conditions for a function $F$ can be reformulated partly in terms of the geometry of the system $A$. For any $\a \in A$ let us collect all the covectors from $A$ non-parallel to $\a$ into the disjoint union of {\it \bf $\pmb \a$-series} $\Gamma_\a^1, \ldots, \Gamma_\a^k$. These series are determined by the property that for any $s=1,\ldots, k$ and for any two covectors $\gamma_1, \gamma_2 \in \Gamma_\a^s$ one has either $\gamma_1-\gamma_2=n\alpha$ or $\gamma_1+\gamma_2=n\alpha$ for some {\it integer} $n$. We also assume that the series are maximal, that is, if $\b \in \Gamma_\a^i$ then $\Gamma_\a^i$ must contain all the vectors of the form $\pm \b +n \a \in A$ with $n \in \mathbb Z$. We note that the solution \eqref{F} is not af\/fected if some of the covectors $\a \in A$ are replaced with~$-\a$. By an appropriate choice of signs the vectors can be made to belong to a half-space; we will denote such systems by~$A_+$. Moreover, for any $\a\in A$ one can choose a positive system $A_+ \ni \a$ in such a way that the $\a$-series $\Gamma_\a^s$ will consist of vectors of the form $\b_s + n_i \a \in A_+$ for appropriate integer parameters $n_i$ with $\b_s \in A_+$.
\begin{definition}
Let $A \subset V^*\cong\mathbb C^n$ be a f\/inite collection of covectors $\a$ with multiplicities $c_\a$ such that the corresponding form \eqref{G} is non-degenerate and the covectors $\a$ belong to an $n$-dimensional lattice. We say that $A$ is a trigonometric $\vee$-system if for any $\a \in A$ and for any $\a$-series $\Gamma_\a^s$ one has
\begin{gather}\label{V1}
\sum_{\b \in \Gamma_\a^s} c_\b (\a,\b) \a\wedge \b=0.
\end{gather}
\end{definition}
Notice that $\a \wedge \b_1 = \pm \a\wedge \b_2$ if $\b_1$, $\b_2$ belong to the same $\a$-series $\Gamma_\a^s$, so identities \eqref{V1} may be simplif\/ied accordingly. Also, replacing some of the covectors by their opposites preserves the class of trigonometric $\vee$-systems. Note also that non-degenerate linear transformations act naturally on the trigonometric $\vee$-systems, and that the direct sum $A_1 \oplus A_2$ of the trigonometric $\vee$-systems $A_1 \subset V_1^*$, $A_2 \subset V_2^*$, considered as a set of covectors in $V_1\oplus V_2$, is again a trigonometric $\vee$-system. The systems obtained in this way will be called {\it reducible}. If such a decomposition is not possible then the (trigonometric $\vee$-)system is called irreducible.
\begin{theorem}\label{t1} The WDVV equations
\begin{gather*}
F_i F_y^{-1} F_j = F_j F_y^{-1} F_i,
\end{gather*}
$i,j=0, 1,\ldots,n$, for the function \eqref{F} are equivalent to the following two conditions:
\begin{enumerate}\itemsep=0pt
\item[$1)$] $A$ is a trigonometric $\vee$-system;
\item[$2)$] for a positive system $A_+$ and for any vectors $a,b,c,d \in V$
\begin{gather}\label{V2}
\sum_{\a,\b \in A_+} \left(\frac14 \lambda^2 (\a, \b) - 1\right) c_\a c_\b B_{\a,\b}(a,b) B_{\a,\b}(c,d) =0,
\end{gather}
where $B_{\a,\b}(a,b)=\a \wedge \b (a,b)= \a(a)\b(b)-\a(b)\b(a)$.
\end{enumerate}
\end{theorem}
\begin{proof} For a vector $a \in V$ we def\/ine $F_a^\vee = F_y^{-1} F_a$, where $F_a= \sum\limits_{i=1}^n a_i F_i$. The WDVV equations are equivalent to the commutativity $[F_a^\vee, F_b^\vee]=0$ for any $a,b \in V$. We have
\[ F_i^\vee=\left( \begin{array}{cc} 0 & \sum\limits_{\a \in A} c_\a \a_i \a\vspace{2mm}\\ \sum\limits_{\a \in A} c_\a \a_i \a^\vee & \frac{\lambda}{2} \sum\limits_{\a \in A} c_\a \a_i \cot \a(x) \a \otimes \a^\vee \end{array} \right), \]
where $\a^\vee$ is the (column) vector dual to the (row) covector $\a$ under the bilinear form $G=\sum\limits_{\a \in A} c_{\a} \a\otimes\a$. Therefore
\[ F_a^\vee=\left( \begin{array}{cc} 0 & \sum\limits_{\a \in A} c_\a \a(a) \a \vspace{2mm}\\ \sum\limits_{\a \in A} c_\a \a(a) \a^\vee & \frac{\lambda}{2} \sum\limits_{\a \in A} c_\a \a(a) \cot \a(x) \a \otimes \a^\vee \end{array} \right) \]
for any $a \in \mathbb C^n$. Now the product $F_a^\vee F_b^\vee$ equals
\[ \left( \begin{array}{cc} \sum\limits_{\a,\b \in A} c_\a c_\b \a(a) \b(b) \a(\b^\vee) & \frac{\lambda}{2} \sum\limits_{\a,\b \in A} c_\a c_\b \a(a) \b(b) \a(\b^\vee) \cot \b(x) \b \vspace{2mm}\\ \frac{\lambda}{2} \sum\limits_{\a, \b \in A} c_\b c_\a \a(a) \b(b) \a(\b^\vee) \cot \a(x) \a^\vee & \nad{\sum\limits_{\a,\b \in A} c_\a c_\b \a(a) \b(b) \a^\vee \otimes \b }{+\frac{\lambda^2}{4} \sum\limits_{\a, \b \in A} c_\a c_\b \a(a) \b(b) \a(\b^\vee) \cot \a(x) \cot \b(x) \b\otimes\a^\vee} \end{array} \right). \]
Therefore $[F_a^\vee, F_b^\vee]=0$ is equivalent to the identities
\begin{gather}\label{sing0} \sum_{\a,\b \in A} c_\a c_\b B_{\a,\b}(a,b) (\a, \b) \cot \a(x) \a^\vee = 0, \\ \label{sing} \sum_{\a,\b \in A} \left( \frac{\lambda^2}{4} c_\a c_\b (\a, \b) \cot \a(x) \cot \b(x) + c_\a c_\b \right) B_{\a,\b}(a,b) \a\wedge \b=0. \end{gather}
To cancel the singularities in \eqref{sing} one should have
\[ \sum_{\nad{\b \in A}{\b \nsim \a}} c_\b (\a, \b) \cot \b(x) B_{\a,\b}(a,b) \a\wedge \b=0 \]
when $\cot \a(x)=0$.
A linear combination of the functions $\cot \b(x)|_{\cot \a(x)=0}$ can vanish only if it vanishes for each $\a$-series:
\[ \sum_{\b\in \Gamma_\a^s} c_\b (\a, \b) \cot \b(x) B_{\a,\b}(a,b) \a\wedge \b=0 \]
for all $\a$-series $\Gamma_\a^s$ (see e.g.~\cite{F} for a more detailed explanation). The last relation can be simplif\/ied~as
\begin{gather}\label{V11}
\sum_{\b \in \Gamma_\a^s} c_\b (\a,\b) \a\wedge \b=0,
\end{gather}
which means that $A$ is a trigonometric $\vee$-system. Identities \eqref{V11} guarantee that the left-hand side of \eqref{sing} is non-singular. Since all the vectors from $A$ belong to an $n$-dimensional lattice with basis $e^1, \ldots, e^n$, the left-hand side of \eqref{sing} is a rational function in the exponential coordinates~$e^{e^i(x)}$. This rational function has degree zero and therefore it is a constant. We can assume that all covectors from $A$ belong to a half-space and hence form a positive system $A_+$, so in an appropriate limit $\cot \a(x) \to i$ for all $\a\in A_+$. Thus property \eqref{sing} is equivalent to \eqref{V11} together with the condition
\begin{gather*}
\sum_{\a,\b \in A_+} \left(\frac{\lambda^2}{4} c_\a c_\b (\a, \b) - c_\a c_\b \right) B_{\a,\b}(a,b) \a\wedge \b =0.
\end{gather*}
The remaining condition \eqref{sing0} is equivalent to the set of properties
\begin{gather}\label{V3}
\sum_{\b \in A} c_\b (\a, \b) B_{\a,\b}(a,b) = 0,
\end{gather}
for any $\a \in A$. Identities \eqref{V3} follow from the $\vee$-conditions \eqref{V11}; this completes the proof of the theorem.
\end{proof}
\begin{remark}\label{remark1} Let the trigonometric $\vee$-systems $A_1 \subset V_1^*$, $A_2\subset V_2^*$ def\/ine the solutions \eqref{F} of the WDVV equations for some $\lambda_1$, $\lambda_2$. Then the trigonometric $\vee$-system $A_1 \oplus A_2$ does not def\/ine a~solution. Indeed, let us take vectors $a,c \in V_1$ and $b,d \in V_2$. Then property \eqref{V2} implies that
\begin{gather}\label{dirsn}
(a,c)_1 (b,d)_2 =0,
\end{gather}
where $(\cdot,\cdot)_{1,2}$ are the $\vee$-forms \eqref{G} in the corresponding spaces $V_{1,2}$. Clearly, the relation \eqref{dirsn} does not hold for general vectors $a$, $b$, $c$, $d$.
\end{remark}
\begin{remark}\label{remark2} Not all the trigonometric solutions of the WDVV equations have the form~\eqref{F}. It is shown in \cite{BMMM} that trilogarithmic functions have to arise when the ansatz for $F$ is given by a summation of $g((\a,x))$ over the roots of a root system, $x\in V$.
\end{remark}
\begin{remark}\label{remark3} A slightly more general ansatz for the solutions $F$ can be considered when cubic terms in $x$ are added to $F$. Similarly to the proof of Theorem~\ref{t1}, it follows that $A$ still has to be a trigonometric $\vee$-system. The almost dual potentials corresponding to the $A_n$ af\/f\/ine Weyl group orbit spaces have such a form~\cite{RS}. The corresponding trigonometric $\vee$-system $A$ is the $A_n$ root system in this case.
\end{remark}
\begin{proposition}\label{proposition1} Let $A=\{\a, c_\a\}$ be a trigonometric $\vee$-system. Then the set of vectors $\{\sqrt{c_{\a}}\a\}$ is a $($rational$)$ $\vee$-system, that is, $F^{\rm rat}=\sum\limits_{\a\in A} c_\a \a(x)^2 \log \a(x)$ is a solution of the WDVV equations in the space $V$.
\end{proposition}
\begin{proof} By the def\/inition of the trigonometric $\vee$-system, relations \eqref{V1} hold for any $\a \in A$. Consider a two-dimensional plane $\pi \subset V^*$ and sum up relations \eqref{V1} over those $s$ for which the $\a$-series $\Gamma_\a^s$ belong to the plane $\pi \ni \a$.
We arrive at the relations
\[ \sum_{\b \in A\cap\pi} c_\b (\a,\b) \a\wedge \b=0, \]
or, equivalently,
\begin{gather}\label{rV}
\sum_{\b \in A\cap\pi} c_\b (\a,\b)\b \quad \mbox{ is proportional to } \a.
\end{gather}
Relation \eqref{rV} is the def\/inition of the (rational) $\vee$-system for the set of covectors $\{\sqrt{c_{\a}}\a\}$ (see~\cite{V1} and~\cite{FV2} for the complex space). It is equivalent to the property that $F^{\rm rat}$ satisf\/ies the WDVV equations in the space $V$ \cite{V1,FV2}. The proposition is proven.
\end{proof}
Due to the existence of the extra variable $y$ in the ansatz \eqref{F}, the WDVV equations are nontrivial already when $n=2$. Thus it is natural to study f\/irst the two-dimensional conf\/igurations $A$ def\/ining solutions of the WDVV equations. When $A$ consists of one vector the corresponding form~\eqref{G} is degenerate. If $A$ consists of two non-collinear vectors $\a$, $\b$ then it follows that $(\a,\b)=0$, therefore relation \eqref{V1} holds and $A$ is a trigonometric $\vee$-system. However, relation \eqref{V2} cannot then hold for any $\lambda$, and therefore a pair of vectors does not def\/ine a solution to the WDVV equations (see also Remark~\ref{remark1} above). The following propositions deal with the next simplest cases, when $A$ consists of~3,~4 and~5 vectors respectively. In fact all irreducible trigonometric $\vee$-systems with up to 5 covectors have to be two-dimensional.
\begin{proposition}\label{proposition2} Let the system $A$ consist of three vectors~$\a$,~$\b$,~$\gamma$ with nonzero multiplicities~$c_\a$, $c_\b$,~$c_\gamma$. Then $A$ is an irreducible trigonometric $\vee$-system if\/f $\a \pm \b \pm \gamma=0$ for some choice of signs. The non-degeneracy condition for the form \eqref{G} is then given by $c_\a c_\b + c_\a c_\gamma + c_\b c_\gamma\ne 0$. Any such system $A$ def\/ines the solution \eqref{F} of the WDVV equations with $\lambda=2(c_\a c_\b + c_\a c_\gamma + c_\b c_\gamma)(c_\a c_\b c_\gamma)^{-1/2}$.
\end{proposition}
\begin{proof} It follows from relations \eqref{V1} that $\gamma=\a+\b$ up to multiplication of some of the vectors by~$-1$. We take the basis $e^1=\a$, $e^2=\b$ in $\mathbb C^2$. The bilinear form \eqref{G} takes the form $G=c_\a x_1^2+c_\b x_2^2 + c_\gamma (x_1+x_2)^2$. This form is non-degenerate if\/f $c_\a c_\b + c_\a c_\gamma + c_\b c_\gamma \ne 0$. One can check that
\[ {e^1}^\vee=\frac{(c_\b+c_\gamma)e_1-{c_\gamma}e_2}{c_\a c_\b + c_\a c_\gamma + c_\b c_\gamma}, \qquad {e^2}^\vee=\frac{-c_\gamma e_1+(c_\a+c_\gamma)e_2}{c_\a c_\b + c_\a c_\gamma + c_\b c_\gamma}, \]
where $e_1$, $e_2$ is the basis dual to $e^1$, $e^2$, that is, $e^i(e_j)=\delta^i_j$. Relations \eqref{V1} look as follows:
\begin{gather*} \big({e^1}^\vee,{e^2}^\vee\big)(c_\b+c_\gamma)+\big({e^1}^\vee,{e^1}^\vee\big)c_\gamma=0, \\ \big({e^1}^\vee,{e^2}^\vee\big)(c_\a+c_\gamma)+\big({e^2}^\vee,{e^2}^\vee\big)c_\gamma=0, \\ \big({e^1}^\vee,{e^2}^\vee\big)(c_\a-c_\b)+\big({e^1}^\vee,{e^1}^\vee\big)c_\a-\big({e^2}^\vee,{e^2}^\vee\big)c_\b=0, \end{gather*}
and they are automatically satisf\/ied. Relation \eqref{V2} reduces to the single scalar equation
\[ \frac{\lambda^2}{4}\left(c_\a c_\b (\a^\vee,\b^\vee)+c_\a c_\gamma (\a^\vee,\gamma^\vee)+c_\b c_\gamma (\b^\vee,\gamma^\vee)\right)=c_\a c_\b+c_\a c_\gamma +c_\b c_\gamma, \]
which has the solution stated in the formulation.
\end{proof}
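The statement of Proposition \ref{proposition2} may also be verif\/ied numerically; the following small sketch (an illustration only, not part of the proof) checks the series condition \eqref{V1} and the stated value of $\lambda$ for random multiplicities, using the fact that in two dimensions relation \eqref{V2} reduces to a single scalar equation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
ca, cb, cg = rng.uniform(0.5, 2.0, 3)
covs = {"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0]),
        "g": np.array([1.0, 1.0])}          # gamma = alpha + beta
mult = {"a": ca, "b": cb, "g": cg}

G = sum(c * np.outer(covs[k], covs[k]) for k, c in mult.items())
Ginv = np.linalg.inv(G)
ip = lambda u, v: u @ Ginv @ v               # vee-product on covectors
wedge = lambda u, v: u[0] * v[1] - u[1] * v[0]

# Series condition (V1) for the single alpha-series {beta, gamma}:
lhs = sum(mult[k] * ip(covs["a"], covs[k]) * wedge(covs["a"], covs[k])
          for k in ("b", "g"))
print("series condition:", np.isclose(lhs, 0.0))

# Relation (V2) with lambda as in Proposition 2:
delta = ca * cb + ca * cg + cb * cg
lam = 2.0 * delta / np.sqrt(ca * cb * cg)
check = sum(mult[p] * mult[q]
            * (lam**2 / 4.0 * ip(covs[p], covs[q]) - 1.0)
            * wedge(covs[p], covs[q])**2
            for p, q in [("a", "b"), ("a", "g"), ("b", "g")])
print("lambda relation:", np.isclose(check, 0.0))
\end{verbatim}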
In the following proposition we study conf\/igurations consisting of four covectors.
\begin{proposition}\label{proposition3} Let the system $A$ consist of four vectors $\a$, $\b$, $\gamma$, $\delta$ with nonzero multiplicities $c_\a$, $c_\b$, $c_\gamma$, $c_\delta$. Then $A$ is an irreducible trigonometric $\vee$-system if\/f the vectors in $A_+$ take the form $e^1$, $e^2$, $e^1 \pm e^2$ in a suitable basis, and the corresponding multiplicities $c_1$, $c_2$, $c_\pm$ satisfy $c_1=c_2$. This property is equivalent to the orthogonality $(e^1+e^2, e^1-e^2)=0$ under the corresponding $\vee$-product. The non-degeneracy condition for the form \eqref{G} is then given by $\Delta=(c_1+2 c_+)(c_1+2c_-)\ne 0$. These systems $A$ def\/ine the solutions \eqref{F} of the WDVV equations with $\lambda=2\Delta c_1^{-1/2}(4c_+c_-+c_1(c_++c_-))^{-1/2}$, provided that $\lambda$ is f\/inite.
\end{proposition}
\begin{proof} It follows from the series relations \eqref{V1} that there is a vector $\a \in A$ such that all the remaining vectors $\b,\gamma, \delta \in A$ belong to the single $\a$-series $\Gamma^1_\a$. Indeed, otherwise, up to renaming the covectors and taking opposites, we have $\delta=\gamma+ n \a$, $n \in \mathbb N$, $(\a,\b)=(\gamma,\delta)=0$. Then consideration of the $\b$-series gives $2\gamma+n\a=m \b$ for some $m \in \mathbb Z$, and consideration of the $\gamma$-series gives $\a + p \gamma = \pm \b$ for some $p \in \mathbb Z$. Therefore $2 \gamma + n \a = \pm m(\a + p \gamma)$, hence $n=\pm m$ and $2 = \pm mp$, thus $m=\pm 1$ or $p=\pm 1$. In the case $m=\pm 1$ we have $n=1$, hence the $\gamma$-series contains $\delta$ together with $\a$ and $\b$. And in the case $p=\pm 1$ the $\a$-series contains~$\b$ together with $\gamma$ and $\delta$. So we can assume that there is only one $\a$-series, so that the remaining vectors take the form $\gamma=\b + n_1 \a$, $\delta= \b + n_2 \a$ with integer $n_2 > n_1 >0$. By considering the $\b$-series we conclude that $n_1=1$. Consider now the $\delta$-series. It is easy to see that the covector $\b$ has to form a series on its own, therefore $(\b,\delta)=0$ and the covectors $\b+\a$ and $\a$ belong to a common $\delta$-series. This is possible only if $n_2=2$. Taking now the basis vectors as $e^1=\a$, $e^2=\b + \a$, we conclude that the system $A$ consists of the covectors $e^1$, $e^2$, $e^1 \pm e^2$. The bilinear form \eqref{G} now takes the form
\begin{gather*} G=c_1 x_1^2+c_2 x_2^2 + c_+ (x_1+x_2)^2+ c_- (x_1-x_2)^2\\ \phantom{G}{}=(c_1+c_++c_-) x_1^2 + (c_2+c_+ + c_-) x_2^2 + 2 (c_+-c_-) x_1 x_2, \end{gather*}
which has determinant $\Delta= c_1 c_2 +(c_1+c_2)(c_++c_-)+4c_+c_-$. Therefore
\begin{gather} \big(e^1,e^1\big)=\Delta^{-1}(c_2+c_++c_-),\qquad \big(e^2,e^2\big)=\Delta^{-1}(c_1+c_++c_-),\nonumber\\ \big(e^1,e^2\big)=\Delta^{-1}(c_--c_+).\label{scpr} \end{gather}
Now we analyze the series relations \eqref{V1}. The orthogonality $(e^1-e^2, e^1+e^2)=0$ is clearly equivalent to the condition $c_1=c_2$. Then the remaining conditions \eqref{V1} on the $(e^1\pm e^2)$-series are automatically satisf\/ied. The condition \eqref{V1} for the $e^1$-series has the form
\[ c_-\big(-e^1+e^2,e^1\big)+ c_2 \big(e^2, e^1\big)+ c_+ \big(e^1+e^2,e^1\big)=0, \]
and it follows from the scalar products \eqref{scpr}. The condition on the $e^2$-series is also satisf\/ied. It is easy to check that relation \eqref{V2} holds if\/f $\lambda$ is as stated, hence the proposition is proven.
\end{proof}
\begin{proposition}\label{proposition4} Let an irreducible trigonometric $\vee$-system $A$ consist of five vectors with non-zero multiplicities.
Then in an appropriate basis $A_+$ takes the form $e^1$, $2e^1$, $e^2$, $e^1 \pm e^2$, and the corresponding multiplicities $c_1$, $\tilde c_1$, $c_2$, $c_{\pm}$ satisfy $c_+=c_-$ (equivalently, $(e^1,e^2)=0$) and $2\tilde c_1 c_2 = c_+(c_1-c_2)$. The form \eqref{G} is then non-degenerate when $\Delta=(c_1+4\tilde c_1 + 2 c_+)(c_2+2c_+)\ne 0$. The corresponding solution of the WDVV equations has the form \eqref{F} with $\lambda=\sqrt{2}\Delta (c_2+2c_+)^{-1/2}(c_1+4\tilde c_1)^{-1/2} c_+^{-1/2}$.
\end{proposition}
The proof is obtained by a simple analysis of the series conditions \eqref{V1}. One can f\/irst establish that $A$ is two-dimensional. Then it is easy to see that $A$ has to contain collinear vectors, and the required form follows.
To conclude this section, we present a few examples of trigonometric $\vee$-systems on the plane with a higher number of vectors. Recall f\/irst that the positive roots of the root system ${\cal G}_2$ can be written as $\a$, $\b$, $\b+\a$, $\b+n \a$, $\b+(n+1)\a$, $2\b+(n+1)\a$, where $n=2$. One can show that for integer $n>2$ the above vectors never form a trigonometric $\vee$-system, and that for $n=2$ the multiplicities have to satisfy $c_\a=c_{\b+\a}=c_{\b+n\a}$ and $c_\b=c_{2\b+(n+1)\a}=c_{\b+(n+1)\a}$, which is the case of the ${\cal G}_2$ system. There are, though, some possibilities to extend the ${\cal G}_2$ system. Firstly, one can show that ${\cal G}_2 \cup {\cal A}_2$, where the system ${\cal A}_2$ consists of the doubled short roots of ${\cal G}_2$, is a trigonometric $\vee$-system for appropriate multiplicities. Secondly, the following proposition takes place.
\begin{proposition}\label{proposition5} Let $A$ consist of the vectors $e_1$, $e_2$, $2e_2$, $\frac12 (e_1 \pm e_2)$, $\frac12 (e_1\pm 3 e_2)$ with the corresponding nonzero multiplicities $c_1$, $c_2$, $\tilde c_2$, $a$, $b$. Then $A$ is a trigonometric $\vee$-system if\/f the multiplicities satisfy the relations $a=3b$, $c_2=a+ 2 \tilde c_2$, $(2c_1+b)c_2=(c_1+2b)a$.
\end{proposition}
Note that in the limiting case $\tilde c_2=0$ we recover the system ${\cal G}_2$ with special multiplicities. An example of a trigonometric $\vee$-system with a yet higher number of vectors is given by the vectors $e_1$, $2e_1$, $e_2$, $2e_2$, $e_1 \pm e_2$, $e_1 \pm 2e_2$, $2e_1 \pm e_2$, where the multiplicities can be chosen appropriately.
\section[Relations with generalized Calogero-Moser-Sutherland systems]{Relations with generalized Calogero--Moser--Sutherland\\ systems}
The relation between $\vee$-systems and the property of a Schr\"odinger operator of CMS type to have a~factorized eigenfunction was observed by Veselov in \cite{V3}. Namely, it was shown in \cite{V3} that if an operator
\[ L=-\Delta+\sum_{\a\in A_+} \frac{m_{\a}(m_{\a}+1)(\a,\a)}{\sin^2 (\a,x)} \]
has a formal eigenfunction
\[ \psi=\prod_{\a \in A_+} \sin^{-m_{\a}}(\a,x), \qquad L \psi =\mu \psi, \]
then $F=\sum\limits_{\a\in A_+} m_\a(\a,x)^2 \log (\a,x)$ satisf\/ies the WDVV equations. The following theorem establishes the converse statement in the case of trigonometric $\vee$-systems.
\begin{theorem} \label{theorem2} Let $A$ be a trigonometric $\vee$-system consisting of pairwise non-collinear covectors $\a$ with multiplicities $c_\a$. Then the Schr\"odinger operator
\begin{gather}\label{sch}L=-\Delta+\sum_{\a\in A} \frac{c_{\a}(c_{\a}+1)(\a,\a)}{\sin^2 \a(x)}\end{gather}
constructed from the metric \eqref{G} has the formal eigenfunction
\begin{gather}\label{eig} \psi=\prod_{\a \in A} \sin^{-c_{\a}}\a(x), \qquad L \psi =\mu \psi.
\end{gather}
\end{theorem}
\begin{proof} The property $L\psi = \mu \psi$ is equivalent to the identity
\begin{gather}\label{iden}
\sum_{\a \ne \b} c_\a c_\b (\a,\b) \cot \a(x) \cot \b(x)={\rm const}.
\end{gather}
To establish the last identity it is suf\/f\/icient to show that the left-hand side of \eqref{iden} is non-singular. In other words, we need to show that
\begin{gather}\label{trt}
\sum_{\b, \b \ne \a} c_\b (\a, \b) \cot \b(x)=0
\end{gather}
if $\cot \a(x)=0$. It is suf\/f\/icient to check the last property when the summation is taken along an arbitrary $\a$-series, where it is guaranteed by relation \eqref{V1}. This proves the theorem.
\end{proof}
\begin{corollary} \label{corollary1} Assume that the function \eqref{F} constructed from a set of pairwise non-collinear covectors~$\a$ with multiplicities $c_\a$ satisf\/ies the WDVV equations~\eqref{wdvv}. Then relation \eqref{eig} holds for the Schr\"odinger operator~\eqref{sch}.
\end{corollary}
Conversely, the property of a Schr\"odinger operator to have a factorized eigenfunction implies that the corresponding vectors $\sqrt{c_\a}\a$ form a rational $\vee$-system \cite{V3}. This property is also suf\/f\/icient to obtain a trigonometric $\vee$-system, and the arguments are close to~\cite{V3}.
\begin{theorem}\label{theorem3} Assume that the Schr\"odinger operator \eqref{sch} has an eigenfunction~\eqref{eig}. Then the set $A$ of vectors $\a$ with the multiplicities $c_\a$ forms a trigonometric $\vee$-system.
\end{theorem}
\begin{proof} From equations \eqref{eig} and \eqref{sch} it follows that identity \eqref{trt} holds at $\cot \a(x)=0$. Therefore for each $\a$-series $\Gamma_\a^s$ we have
\begin{gather}\label{sersum}
\sum_{\b \in \Gamma_\a^s} c_\b (\a,\b) \a\wedge \b=0.
\end{gather}
Let $\b^\text{w}$ denote the vector dual to $\b$ with respect to the inner product $(\cdot,\cdot)$ involved in the Schr\"odinger equation. By summing identities \eqref{sersum} along all the $\a$-series we conclude that
\begin{gather}\label{rav}
\sum_{\b \in A} c_\b \b(\a^\text{w}) \b^\text{w} \quad \mbox{is proportional to } \a^\text{w}.
\end{gather}
Now we can decompose the space $V=V_1\oplus\cdots\oplus V_k$ so that the operator $\sum_{\b\in A} c_\b \b \otimes \b^\text{w}$ is equal to the constant $\mu_i$ on $V_i$. We can also assume that $(V_i, V_j)=0$ if $i\ne j$. It follows from \eqref{rav} that $G(\cdot,\cdot)|_{V_i}=\mu_i (\cdot,\cdot)|_{V_i}$. Therefore identities \eqref{sersum} imply
\[ \sum_{\b \in \Gamma_\a^s} c_\b \a(\b^\vee) \a\wedge \b=0, \]
which are identities \eqref{V1} from the def\/inition of the trigonometric $\vee$-systems.
\end{proof}
\begin{corollary}\label{corollary2} Assume that the Schr\"odinger operator \eqref{sch} has an eigenfunction \eqref{eig}. Assume also that the system $A$ is irreducible and that for some $\Lambda$ and any $a,b,c,d \in V$ the property
\begin{gather}\label{fin}
\sum_{\a,\b \in A_+} (\Lambda (\a, \b) - 1)c_\a c_\b B_{\a,\b}(a,b) B_{\a,\b}(c,d) =0
\end{gather}
holds. Then the corresponding function \eqref{F} with appropriate $\lambda$ satisf\/ies the WDVV equations~\eqref{wdvv}.
\end{corollary}
\begin{remark}\label{remqrk4} The previous corollary also holds for reducible systems $A$ if we replace the Schr\"odinger equation metric $(\a,\b)$ in \eqref{fin} by the $\vee$-product $\a(\b^\vee)$. In this case $\lambda=2 \sqrt{\Lambda}$.
\end{remark}
\section{Concluding remarks}
Trigonometric $\vee$-systems require further investigation.
\section{Concluding remarks} Trigonometric $\vee$-systems require further investigation. It would be interesting to obtain almost dual prepotentials for the Frobenius manifolds of the af\/f\/ine Weyl groups as well as for their discriminants (cf.\ the rational case~\cite{D2,FV1}). Comparison with the recent work on elliptic solutions~\cite{Str} might also be interesting. We also hope that the series conditions will allow an understanding and eventually a classif\/ication of the trigonometric $\vee$-systems. We hope to return to some of these questions soon. \subsection*{Acknowledgements} I am very grateful to L.~Hoevenaars, A.~Kirpichnikova, M.~Pavlov, I.~Strachan and A.P.~Veselov for useful and stimulating discussions. The work was partially supported by the EPSRC grant EP/F032889/1, by the European research network ENIGMA (contract MRTN-CT-2004-5652), and by the PMI2 Project funded by the UK Department for Innovation, Universities and Skills for the benef\/it of the Japanese Higher Education Sector and the UK Higher Education Sector. \pdfbookmark[1]{References}{ref}
{ "attr-fineweb-edu": 1.166016, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} The wire-tap channel was first analyzed by Wyner in \cite{wyner:wiretap}, where a wire-tapper has access to a degraded version of the intended receiver's signal in a single-user communications scenario. He measured the amount of ``secrecy'' using the conditional entropy of the transmitted message given the received signal at the wire-tapper, and determined the region of all achievable rate/wire-tapper-equivocation pairs. Wyner showed the existence of a \ital{secrecy capacity}, $C_s$, below which it is possible to communicate reliably while transmitting zero information to the wire-tapper. Carleial and Hellman, in \cite{hellman-carleial:wiretap}, showed that it is possible to transmit several low-rate messages, each with perfect secrecy, to achieve an overall rate closer to capacity. In \cite{leung-hellman:gaussianwiretap}, the authors extended Wyner's results to Gaussian channels and also showed that Carleial and Hellman's results hold for Gaussian channels as well. Csisz\'ar and K\"orner, in \cite{csiszar-korner:confbroadcast}, showed that Wyner's results can be extended to weaker, so-called ``less noisy'' and ``more capable'' channels. Furthermore, they analyzed the more general case of sending common information to both the receiver and the wire-tapper. More recently, the notion of the wire-tap channel was extended to parallel channels, \cite{yamamoto:secretsharing, yamamoto:secretsharinggaussian}, relay channels, \cite{oohama:relaywiretap}, and fading channels, \cite{barros:fadingwiretap}. Multiple-access channels were considered in \cite{tekin:ASILOMAR05, tekin:ISIT06, liang:genMACconf}. In \cite{tekin:ASILOMAR05, tekin:ISIT06}, the wire-tapper gets a degraded version of a GMAC uplink signal, and it is shown that the nature of the channel allows an improvement in the individual achievable rates over the single-user channel while the sum-rate is subject to the same limitation. In \cite{liang:genMACconf}, there is no external eavesdropper, but the two transmitters try to keep their messages secret from each other. In \cite{tekin:ASILOMAR05}, we considered the Gaussian Multiple Access Wire-Tap Channel (GMAC-WT) and defined two separate secrecy constraints: (i) the \ital{individual} secrecy constraints, the normalized entropy of any set of messages conditioned on the transmitted codewords of the other users and the received signal at the wire-tapper, and (ii) the \ital{collective} secrecy constraints, the normalized entropy of any set of messages conditioned on the wire-tapper's received signal. The first set of constraints is more conservative, ensuring the secrecy of any subset of users even when the remaining users are compromised. The second set of constraints ensures the collective secrecy of any set of users, utilizing the secrecy of the remaining users. In \cite{tekin:ASILOMAR05}, we considered a scenario where the wire-tapper received a physically degraded version of the receiver's signal and examined the \ital{perfect secrecy rate regions} for both sets of constraints. We generalized this to a pre-determined level of secrecy, $0 \le \delta \le 1$, in \cite{tekin:ISIT06, tekin:IT06a}.
In this paper, we utilize the collective secrecy constraints with perfect secrecy, and consider the more general case where the eavesdropper's\footnote{Henceforth, we refer to the adversary as the eavesdropper rather than the wire-tapper, since the situation modeled is more general and more appropriate for wireless communications.} signal is not necessarily degraded, but is at an overall disadvantage compared to the receiver, which we model as a set of received power constraints. Under these constraints, using random Gaussian codebooks, we find an achievable \ital{secure rate region}, where users can communicate with arbitrarily small probability of error with the intended receiver under perfect secrecy from the eavesdropper. For this achievable rate region, we find the sum-rate maximizing power allocation: users with ``good'' channels -- those with standardized channel gains below a certain threshold -- transmit with maximum power, while users with ``bad'' channels, whose gains lie above this threshold, do not transmit. Next, we show that a non-transmitting user can help increase the secrecy capacity for a transmitting user by effectively ``jamming'' the eavesdropper, or even enable secret communications that would not be possible in a single-user scenario. We term this scheme \ital{collaborative secrecy}. \section{Achievable Rates} Here, we present an achievable region using Gaussian codebooks. The proof is very similar to that of the achievable region presented in \cite{tekin:ISIT06}. Note that, when $h_1=\dotsc=h_K<1$, the region reduces to the special case examined in \cite{tekin:ASILOMAR05}, \cite{tekin:ISIT06}, \cite{tekin:IT06a}. \begin{theorem} \label{thm:achC} We can transmit with perfect secrecy using Gaussian codebooks at rates satisfying \begin{equation} \label{eqn:Gach} \ssum_{k \in \Ss} R_k \le \CM[\Ss] - \CWs[\Ss] \quad \forall \Ss \subseteq \Ks \end{equation} where $\v{P} \in \Ps$. The region containing all $\Rm$ satisfying these equations is denoted \Gsl. \end{theorem} \begin{proof} Let $\Rm=(R_1,\dotsc,R_K)$ satisfy \eqref{eqn:Gach}. For user $k \in \Ks$, consider the following scheme: \begin{enumerate} \item Let $M_k=\twon{R_k-\e'}$, where $0 \le \e' < \e$ is chosen to ensure that $M_k$ is an integer. \item Generate two codebooks, $\mathfrak{X}_k$ and $\mathfrak{X}_{kx}$. $\mathfrak{X}_k$ consists of $M_k$ codewords, each component of which is drawn $\isnormal{0,\lambda_k P_k -\varepsilon}$, where $\lambda_k \in [0,1]$ is the fraction of user $k$'s power allocated to the message-carrying codebook. Codebook $\mathfrak{X}_{kx}$ has $M_{kx}$ codewords, each component of which is drawn $\isnormal{0,(1-\lambda_k) P_k-\varepsilon}$, where $\varepsilon$ is arbitrarily small and ensures that the power constraints on the codewords are satisfied with high probability. Define $R_{kx}=\ninv \log M_{kx}$ and $M_{kt}=M_k M_{kx}$. Then $R_{kt}=\ninv \log M_{kt}=R_k+R_{kx}-\e'$. \item To transmit message $W_k \in \{1,\dotsc,M_k\}$, user $k$ finds the codeword corresponding to $W_k$ in $\mathfrak{X}_k$ and also uniformly chooses a codeword from $\mathfrak{X}_{kx}$. The two codewords are added, and the resulting codeword, $\Xm_k$, is sent, so that one of $M_{kt}$ codewords is actually transmitted.
\end{enumerate} The specific rates are chosen such that $\forall \Ss \subseteq \Ks$ the following are satisfied: \begin{align} \label{eqn:achR} \ssum_{k \in \Ss} R_k &\le \CM[\Ss] - \CWs[\Ss]\\ \label{eqn:achRx} \ssum_{k=1}^K R_{kx} &= \CW \\ \label{eqn:achRt} \ssum_{k \in \Ss} R_{kt} &\le \CM[\Ss] \end{align} From \eqref{eqn:achRt} and the GMAC coding theorem, the receiver can, with high probability, decode the codewords with arbitrarily low probability of error. We now need to show that the secrecy constraints are satisfied. Note that since the secrecy of the overall system ensures the secrecy of each subset, we only need to show that the coding scheme described achieves $\Delta_\Ks \ge 1-\e$. We concern ourselves only with the MAC sub-code $\{\mathfrak{X}_k\}_{k=1}^K$. From this point of view, the coding scheme described is equivalent to each user $k \in \Ks$ selecting one of $M_k$ messages, and sending a uniformly chosen codeword from among $M_{kx}$ codewords for each. Define $\Xm_\Sigma=\ssum_{k=1}^K \sqrt{h_k}\Xm_k$. Then \begin{align} H(\Wm_\Ks|\Zm) &=H(\Wm_\Ks,\Zm)-H(\Zm) \\ &= H(\Wm_\Ks,\Xm_\Sigma,\Zm)-H(\Xm_\Sigma|\Wm_\Ks,\Zm)-H(\Zm) \\ &=H(\Wm_\Ks)+H(\Zm|\Wm_\Ks,\Xm_\Sigma)-H(\Zm) \notag \\ &\hspace{.4in} +H(\Xm_\Sigma|\Wm_\Ks)-H(\Xm_\Sigma|\Wm_\Ks,\Zm) \\ &\label{eqn:achprf1}= H(\Wm_\Ks) - I(\Xm_\Sigma;\Zm)+I(\Xm_\Sigma;\Zm|\Wm_\Ks) \end{align} where we used the Markov chain $\Markov{\Wm_\Ks}{\Xm_\Sigma}{\Zm}$, which gives $H(\Zm|\Wm_\Ks,\Xm_\Sigma)=H(\Zm|\Xm_\Sigma)$, to obtain \eqref{eqn:achprf1}. We will consider the two terms individually. First, we have the trivial bound due to channel capacity: \begin{equation} \label{eqn:achprf3} I(\Xm_\Sigma;\Zm) \le n\CW \end{equation} Now write \begin{equation} I(\Xm_\Sigma;\Zm|\Wm_\Ks) = H(\Xm_\Sigma|\Wm_\Ks)-H(\Xm_\Sigma|\Wm_\Ks,\Zm) \end{equation} Since user $k$ sends one of $M_{kx}$ codewords for each message, \begin{align} H(\Xm_\Sigma|\Wm_\Ks) &= \log \prod_{k=1}^K M_{kx}\\ \label{eqn:achprf4a}&= n \ssum_{k=1}^K R_{kx} = n\CW \end{align} We can also write \begin{equation} \label{eqn:achprf4b} H(\Xm_\Sigma|\Wm_\Ks,\Zm) \le n\xi_n \end{equation} where $\xi_n \tozero$ as $n \toinf$ since, with high probability, the eavesdropper can decode $\Xm_\Sigma$ given $\Wm_\Ks$ due to \eqref{eqn:achRx}. Note that the individual rates are unimportant: as far as the eavesdropper is concerned, it is receiving one of $2^{n\CW}$ codewords with equal probability for each message vector $\Wm_\Ks$. Using \eqref{eqn:achR}, \eqref{eqn:achRx}, \eqref{eqn:achprf3}, \eqref{eqn:achprf4a} and \eqref{eqn:achprf4b} in \eqref{eqn:achprf1}, we get \begin{align} H(\Wm_\Ks|\Zm) &\ge H(\Wm_\Ks)-n\CW+n\CW-n\xi_n\\ \label{eqn:G1}&= H(\Wm_\Ks)-n\xi_n \end{align} and dividing both sides by $H(\Wm_\Ks)$ gives \begin{equation} \Delta^{(C)}_\Ks \ge 1- \frac{\xi_n}{\sum_{k=1}^K R_k} \end{equation} completing the proof. \end{proof} An intuitive way of looking at this scheme is as ``capacity stuffing with superfluous information'': for each message vector, the eavesdropper could decode the extra ``sum-codeword'' transmitted if it knew which messages were sent, but since this information arrives at its channel capacity, it cannot gain any information about the actual transmitted messages. For a single-user system consisting of user $k$ alone, if $h_k \ge 1$, the secrecy capacity would be $0$. However, the multi-access nature of the channel enables a different user $j$ with $h_j<1$ to ``help'' such a user achieve a non-zero rate with perfect secrecy.
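As a numerical illustration of this rate bookkeeping, consider the two-user parameters of Fig.~\ref{fig:ggmacwtreg1}, $h_1=0.1$, $h_2=0.2$, $P_1=P_2=10$, with rates in bits (base-$2$ logarithms; the numbers are for illustration only): \begin{gather*} \CW=g(h_1P_1+h_2P_2)=g(3)=1, \qquad R_{1x}+R_{2x}=1,\\ R_1+R_2 \le g(20)-g(3) \approx 1.20,\\ R_1 \le g(10)-g\paren{\tfrac{1}{3}} \approx 1.52, \qquad R_2 \le g(10)-g(1) \approx 1.23. \end{gather*} Thus, one bit per channel use of each transmitted codeword pair is ``spent'' on randomization that saturates the eavesdropper's channel, while the remaining rate, up to about $1.20$ bits in total, carries the secret messages subject to the individual constraints above.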
\begin{figure}[t] \centering \includegraphics[width=3.0in,angle=0]{GGMACregs2_h1=.1,h2=.2.eps} \caption{\small Achievable rate region for $h_1=.1, \, h_2=.2, \, P_{1,max}=10,\, P_{2,max}=10$} \label{fig:ggmacwtreg1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.0in,angle=0]{GGMACregs2_h1=.1,h2=1.4.eps} \caption{\small Achievable rate region for $h_1=.1, \, h_2=1.4, \, P_{1,max}=10,\, P_{2,max}=10$} \label{fig:ggmacwtreg2} \end{figure} \section{Secrecy Through Collaboration} As shown in the sum-rate maximization analysis (Theorem \ref{thm:sumrate}), the sum secrecy rate is maximized when users with $h_k \ge 1$ do not transmit. An interesting question is whether such a user can somehow help increase the secrecy capacity for another user that has $h_k<1$ and is transmitting at full power. We will show that this is possible in some cases, namely by using the fact that a user with $h_k \ge 1$ can have a ``more adverse'' effect on the eavesdropper than on the intended receiver. We consider the two-user scenario and examine two cases. \subsection{$h_1 < 1 \le h_2$} Consider first the case $h_1 < 1 \le h_2$, for which the sum-rate achievable with perfect secrecy was shown to be $C_s=g(P_{1,max})-g(h_1 P_{1,max})$ with $P_1=P_{1,max}$, $P_2=0$. User 2, rather than sitting idle, can help user 1 by generating white noise and sending it across the channel. This creates additional noise at the intended receiver, but even more additional noise at the eavesdropper's receiver. Since the secrecy capacity of the resulting single-user channel is known to be the difference of the channel capacities, this scheme may increase the secrecy capacity by reducing the eavesdropper's channel capacity more than it reduces the intended receiver's. The problem at hand can be written as: \begin{multline} \label{eqn:prob-summaxhelper} \max_{(P_1,P_2)} g \paren{\frac{P_1}{1+P_2}} - g \paren{\frac{h_1 P_1}{1+h_2 P_2}} \\ \suchthat \; 0 \le P_1 \le P_{1,max}, \; 0 \le P_2 \le P_{2,max} \end{multline} Start by writing the Lagrangian, using the monotonicity of $\log$: \begin{multline} \label{eqn:Lag-summaxhelper} \Lag(\v{P},\muv)= - \frac{(1+P_1+P_2)(1+h_2 P_2)}{(1+P_2)(1+h_1 P_1 + h_2 P_2)} \\ - \sum_{k=1}^2 \mu_{1k}P_k + \sum_{k=1}^2 \mu_{2k}(P_k-P_{k,max}) \end{multline} Consider user 1: \begin{multline} \frac{\del \Lag(\v{P},\muv)}{\del P_1}= \frac{\Psi_1 (P_2)}{(1+P_2)(1+h_1 P_1 + h_2 P_2)^2} \\ - \mu_{11} + \mu_{21} = 0 \end{multline} where \begin{equation} \Psi_1 (P_2) = -(1+h_2P_2) \bracket{(1-h_1)+(h_2-h_1)P_2} \end{equation} $\Psi_1(P_2)$ is always negative since $h_1 < 1 \le h_2$. Hence, we must have $\mu_{21}>0 \Rightarrow P_1=P_{1,max}$.
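As a quick numerical check, with the channel gains of Fig.~\ref{fig:ggmacwtcase1} ($h_1=0.4$, $h_2=1.4$) and, say, $P_2=1$ (an arbitrary illustrative value), \begin{equation*} \Psi_1(1) = -(1+1.4)\bracket{(1-0.4)+(1.4-0.4)\cdot 1} = -3.84 < 0, \end{equation*} consistent with user $1$ transmitting at full power regardless of the jamming power $P_2$.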
Now examine user 2: \begin{multline} \frac{\del \Lag(\v{P},\muv)}{\del P_2}= \frac{\Psi_2(P_1,P_2)}{(1+P_2)^2(1+h_1 P_1 + h_2 P_2)^2} \\ - \mu_{12} + \mu_{22} = 0 \end{multline} where \begin{align} \Psi_2(P_1,P_2) &= P_1 h_2 (h_2-h_1) (P_2-p^{(1)})(P_2-p^{(2)}), \\ p^{(1)} & =\frac{-h_2(1-h_1) + \sqrt{D}}{h_2(h_2-h_1)}, \\ p^{(2)} & =\frac{-h_2(1-h_1) - \sqrt{D}}{h_2(h_2-h_1)}, \\ D & =h_1h_2\bracket{(h_2-1)+(h_2-h_1)P_1}(h_2-1) \end{align} We already know that $P_1=P_{1,max}$. Note that if $\Psi_2(P_{1,max},P_2) > 0$, then we must have $\mu_{12}>0 \Rightarrow P_2=0$. On the other hand, if $\Psi_2(P_{1,max},P_2) < 0$, then $\mu_{22}>0 \Rightarrow P_2=P_{2,max}$. Only when $\Psi_2(P_{1,max},P_2)=0$ can we have $0<P_2<P_{2,max}$. It is easy to see that $\sqrt{D}\ge (h_2-1) \sqrt{h_1h_2} \ge 0$, and hence $p^{(2)} < 0$. Thus, $\Psi_2(P_{1,max},P_2)$ is an upright parabola in $P_2$ with at least one negative root. As a result, if the other root, $p^{(1)}$, is also negative, then $\Psi_2(P_{1,max},P_2)>0$ for all feasible $P_2$, and $P_2=0$; an example is $h_2=1$, in which case $p^{(1)}=p^{(2)}$. If $p^{(1)}$ is positive, then there are two possibilities: either $P_2$ lies between the roots, in which case $\Psi_2(P_{1,max},P_2) < 0 \Rightarrow P_2=P_{2,max}$, or $P_2 > p^{(1)}$, which would imply $\Psi_2(P_{1,max},P_2) >0$ and hence $P_2=0$, a contradiction. Thus, the optimal solution is \begin{align} P_1&=P_{1,max}, & P_2&= \begin{cases} 0, & \text{if}\; p^{(1)} \le 0 \\ p^{(1)}, & \text{if}\; 0< p^{(1)} \le P_{2,max} \\ P_{2,max}, & \text{if}\; p^{(1)} > P_{2,max} \end{cases} \end{align} The condition $p^{(1)} \le 0$ is equivalent to $P_{1,max} \le \frac{1-h_1h_2}{h_1(h_2-1)}$, which follows by noting that $p^{(1)} \le 0$ if and only if $\Psi_2(P_{1,max},0) \ge 0$. Note that if $h_1h_2 \ge 1$, then regardless of $P_{1,max}$, user $2$ can always help increase the secrecy capacity. \begin{figure}[t] \centering \includegraphics[width=3.0in,angle=0]{ggmacwtcase1.eps} \caption{\small Sum capacity as a function of $P_2$ with different $P_{1,max}$ for $P_{2,max}=10, \, h_1=.4, \, h_2=1.4$} \label{fig:ggmacwtcase1} \end{figure} \subsection{$1 \le h_1 < h_2$} Now consider the case where neither user can transmit under the sum-rate maximizing allocation of Theorem \ref{thm:sumrate}. We are motivated by the previous result to ask whether user $2$ can actually make it possible for user $1$ to transmit with perfect secrecy. The optimization problem and the Lagrangian are the same as in \eqref{eqn:prob-summaxhelper} and \eqref{eqn:Lag-summaxhelper}. This time $\Psi_1(P_2)$ has a single root, at $P_2=\frac{h_1-1}{h_2-h_1} \ge 0$, and is not necessarily negative, so the optimum $P_1$ depends on the value of $P_2$: \renewcommand{\theenumi}{(\roman{enumi})} \begin{enumerate} \item $P_2 < \frac{h_1-1}{h_2-h_1} \Rightarrow \Psi_1(P_2) > 0 \Rightarrow \mu_{11}>0 \Rightarrow P_1=0$. \item $P_2 = \frac{h_1-1}{h_2-h_1} \Rightarrow \Psi_1(P_2) = 0 \Rightarrow \mu_{11}=\mu_{21}=0$. \item $P_2 > \frac{h_1-1}{h_2-h_1} \Rightarrow \Psi_1(P_2) < 0 \Rightarrow \mu_{21}>0 \Rightarrow P_1=P_{1,max}$. \end{enumerate} Now look at user 2. Again, $D >0$ and $\Psi_2(P_1,P_2)$ is an upright parabola in $P_2$. However, this time we are guaranteed a positive root, since $p^{(1)}>0$, and the solution for $P_2$ depends on $p^{(1)}$. Consider each of the above cases. In (i) and (ii), $C_s=0$ regardless of $P_1$, so we are not interested in $P_2$. Consider case (iii): we then have $P_2 > p^{(2)}$. If $P_2< p^{(1)}$, then $\Psi_2(P_1,P_2)<0$, and $P_2=P_{2,max}$.
If $P_2 \ge p^{(1)}$, then $\Psi_2(P_1,P_2) \ge 0$. Since $\mu_{12}>0$ would force $P_2=0$, contradicting (iii), the only solution is $P_2=p^{(1)}$. Summarizing, we get \begin{equation} \v{P} = \begin{cases} (0,0), & \text{if}\; P_{2,max} \le \frac{h_1-1}{h_2-h_1} \\ (P_{1,max},P_{2,max}), & \text{if}\; \frac{h_1-1}{h_2-h_1} < P_{2,max} \le p^{(1)}\\ (P_{1,max},p^{(1)}), & \text{if}\; P_{2,max} > p^{(1)} \end{cases} \end{equation} Note that the solution has the same form as in the previous case. As long as user $2$ has enough power to make user $1$'s effective channel better than the eavesdropper's, user $1$ can transmit at full power, as in the previous setting. User $1$ could also have helped user $2$, but to maximize the sum rate it is better for the ``worse'' user to help the ``better'' user. \begin{figure}[!t] \centering \includegraphics[width=3.0in,angle=0]{ggmacwtcase2.eps} \caption{\small Sum capacity as a function of $P_2$ with different $P_{1,max}$ for $P_{2,max}=20, \, h_1=1.2,\, h_2=1.4$} \label{fig:ggmacwtcase2} \end{figure}
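To illustrate numerically, take the parameters of Fig.~\ref{fig:ggmacwtcase2}, $h_1=1.2$, $h_2=1.4$, $P_{2,max}=20$, together with the illustrative choice $P_{1,max}=10$, and measure rates in bits (base-$2$ logarithms). Then $\frac{h_1-1}{h_2-h_1}=1$ and $p^{(1)} \approx 5.54 < P_{2,max}$, so the optimal allocation is $(P_1,P_2)=(P_{1,max},\,p^{(1)})$, giving \begin{equation*} C_s = g\paren{\frac{10}{1+5.54}} - g\paren{\frac{12}{1+1.4\cdot 5.54}} \approx 0.047 \text{ bits}. \end{equation*} Although small, this secrecy rate is strictly positive, whereas neither user alone could achieve any secrecy, since $h_1, h_2 \ge 1$.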
\section{Sum Capacity Outer Bound} In this section, we find an upper bound on the achievable sum rate. To this end, we start with the following lemma: \begin{lemma} \label{lem:conv1} Let $\Xm_\Ks = \{\Xm_k\}_{k=1}^K$. Then, \begin{equation} R_\Ks \le \ninv I(\Xm_\Ks;\Ym|\Zm) +\nu_n' \end{equation} where $\nu_n' \tozero$ as $\e \tozero$ and $n \toinf$. \end{lemma} \begin{proof} Consider the two inequalities: \begin{gather} \label{eqn:lemconv11} H(\Wm_\Ks|\Zm) \ge H(\Wm_\Ks) -\e \ge n R_\Ks -(Kn+1) \e \\ \label{eqn:lemconv12} H(\Wm_\Ks|\Ym,\Zm) \le H(\Wm_\Ks|\Ym) \le n\nu_n \end{gather} where \eqref{eqn:lemconv12} follows from Fano's inequality, with $\nu_n \tozero$ as $n \toinf$. Using \eqref{eqn:lemconv11} and \eqref{eqn:lemconv12}, we can write \begin{align} n R_\Ks &\le H(\Wm_\Ks)+Kn\e\\ &\le H(\Wm_\Ks|\Zm)+Kn\e + \e +n\nu_n-H(\Wm_\Ks|\Ym,\Zm) \\ &=I(\Wm_\Ks;\Ym|\Zm)+ n\paren{K\e + \frac{\e}{n}+ \nu_n} \\ &\le I(\Xm_\Ks;\Ym|\Zm)+n \nu_n' \end{align} where $\nu_n' = \paren{K + \ninv} \e + \nu_n$, and we used $\Markov{W_\Ks}{\Xm_\Ks}{\Ym,\Zm}$ in the last step. Dividing by $n$ completes the proof. \end{proof} Using $\Markov{\Ym}{\Xm_{\Ks}}{\Zm}$, we can write \begin{align} I(\Xm_{\Ks};\Ym|\Zm) &= H(\Ym|\Zm) - H(\Ym|\Xm_{\Ks},\Zm) \\ &= H(\Ym|\Zm) - H(\Ym|\Xm_{\Ks}) \\ &= H(\Ym|\Zm) - \frac{n}{2} \log (2 \pi e) \\ &= H(\Ym) - I(\Ym;\Zm) - \frac{n}{2} \log(2 \pi e) \\ &= H(\Ym) - H(\Zm) + H(\Zm|\Ym) - \frac{n}{2} \log(2 \pi e) \end{align} \section{Maximization of Sum Rate} The achievable region given in Theorem \ref{thm:achC} depends on the transmit powers. We are naturally interested in the power allocation $\v{P}^*=(\Popt_1,\dotsc,\Popt_K)$ that maximizes the total throughput, i.e., the sum rate. The sum-rate maximization is, however, a non-trivial problem, since the powers are constrained to lie in $\Ps$. WLOG, assume that the users are ordered such that $h_1 \le h_2 \le \dotsc \le h_K$. We have \begin{align} \max_{\v{P} \in \Ps} \; \CM -\CW &= \max_{\v{P} \in \Ps} \; g\paren{\sum_{k=1}^K P_k} - g\paren{\sum_{k=1}^K h_k P_k} \\ &\equiv\min_{\v{P} \in \Ps} \; \onehalf \log \rho(\v{P}) \\ &\equiv \min_{\v{P} \in \Ps} \; \rho(\v{P}) \label{eqn:sumcapprob1} \end{align} where $\equiv$ denotes equivalence of the optimization problems (they share the same optimal $\v{P}$), which follows from the monotonicity of the $\log$ function, and \begin{equation} \label{eqn:rhodef} \rho(\v{P}) \triangleq \frac{1+\sum_{k=1}^K h_k P_k}{1+\sum_{k=1}^K P_k} \end{equation} We start by writing the Lagrangian to be minimized, \begin{multline} \label{eqn:Lag} \Lag(\v{P},\muv) = \rho(\v{P}) -\sum_{k=1}^K \mu_{1k}P_k + \sum_{k=1}^K \mu_{2k}(P_k-P_{k,max}) \\ -\sum_{\Ss \subseteq \Ks} \mu_{3\Ss} \phi_\Ss(\v{P}) \end{multline} Equating the derivative of the Lagrangian to zero, we get \begin{multline} \label{eqn:Lagder} \frac{\del \Lag(\v{P},\muv)}{\del P_j} = \rhodot^{(j)}(\v{P}) -\mu_{1j} + \mu_{2j} \\ -\sum_{\Ss \subseteq \Ks} \mu_{3\Ss} \dot{\phi}^{(j)}_\Ss(\v{P}) = 0 \end{multline} where \begin{align} \label{eqn:rhodotjdef} \rhodot^{(j)}(\v{P}) &\triangleq \frac{\del \rho(\v{P})}{\del P_j} =\frac{h_j - \rho(\v{P})}{1+\sum_{k=1}^K P_k} \\ \label{eqn:phidotjdef} \dot{\phi}^{(j)}_\Ss(\v{P}) &\triangleq \frac{\del \phi_\Ss(\v{P})}{\del P_j} \notag \\ &=\case {1 - \frac{h_j}{1+\sum_{k \in \Ss^c} h_k P_k}}{j \in \Ss} {\frac{h_j\sum_{k \in \Ss} h_k P_k}{\paren{1+\sum_{k \in \Ss^c} h_k P_k}^2}} {j \not \in \Ss} \end{align} We begin with the following lemma: \begin{lemma} \label{lem:zeropow} Let $\v{P}^*$ be the optimum power allocation. For a user $k \in \Ks$, if $h_k \ge 1$, then $\Popt_k=0$. \end{lemma} \begin{proof} Assume the contrary, i.e., let $\s{T}=\{k \in \Ks \colon h_k \ge 1, \, \Popt_k > 0\} \neq \emptyset$. Consider $\v{Q}$ such that $Q_k=\Popt_k$ for $k \not \in \s{T}$ and $Q_k=0$ for $k \in \s{T}$; in other words, $Q_k=\Popt_k$ if $h_k <1$ and $Q_k=0$ if $h_k \ge 1$. We first check whether $\v{Q} \in \Ps$. Since $\v{P}^* \in \Ps$, we have $\v{P}_{max} \succeq \v{Q} \succeq \v{0}$, so we only need to check that $\phi_\Ss(\v{Q}) \ge 0 \; \forall \Ss$: \begin{align} \phi_\Ss(\v{Q}) &=\sum_{j \in \Ss} Q_j - \frac{\sum_{j \in \Ss} h_j Q_j} {1+\sum_{j \in \Ss^c} h_j Q_j} \\ &=\sum_{j \in \Ss-\s{T}} Q_j + \sum_{j \in \Ss\cap\s{T}} Q_j \notag \\ &\qquad -\frac{\sum_{j \in \Ss-\s{T}} h_j Q_j + \sum_{j \in \Ss\cap\s{T}} h_jQ_j} {1+\sum_{j \in \Ss^c} h_j Q_j} \\ &=\sum_{j \in \Ss-\s{T}} \Popt_j -\frac{\sum_{j \in \Ss-\s{T}} h_j \Popt_j}{1+\sum_{j \in \Ss^c} h_j Q_j} \\ &\ge \sum_{j \in \Ss-\s{T}} \Popt_j - \sum_{j \in \Ss-\s{T}} h_j \Popt_j \\ &\ge 0 \end{align} since all users $k \in \Ss-\s{T}$ must have $h_k<1$. The proof is complete if we can show that this new power allocation does not decrease the achieved sum rate, or equivalently does not increase $\rho$. We write \begin{align} \rho(\v{Q}) &= \frac{1+\sum_{k=1}^K h_k Q_k}{1+\sum_{k=1}^K Q_k} \\ &= \frac{1+\sum_{k \in \s{T}} h_k Q_k+\sum_{k \in \s{T}^c} h_k Q_k} {1+\sum_{k \in \s{T}} Q_k+\sum_{k \in \s{T}^c} Q_k}\\ &= \frac{1+\sum_{k \in \s{T}^c} h_k \Popt_k}{1+\sum_{k \in \s{T}^c} \Popt_k}\\ &\le \frac{1+\sum_{k \in \s{T}^c} h_k \Popt_k+\sum_{k \in \s{T}} h_k \Popt_k} {1+\sum_{k \in \s{T}^c} \Popt_k+\sum_{k \in \s{T}} \Popt_k}\\ &=\rho(\v{P}^*) \end{align} where we have used the fact that $\frac{a}{b} \le \frac{a+c}{b+d}$ whenever $\frac{a}{b}\le 1$, $\frac{c}{d}\ge 1$ and $a,b,c,d \ge 0$. \end{proof}
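As a quick illustration of the lemma, take $K=2$ with the hypothetical values $h_1=0.5$, $h_2=2$ and $\v{P}^*=(10,5)$. Then \begin{equation*} \rho(\v{P}^*) = \frac{1+0.5\cdot 10+2\cdot 5}{1+15}=1, \end{equation*} so the secrecy sum rate is zero, whereas silencing user $2$ gives $\rho(\v{Q})=\frac{1+5}{1+10}=\frac{6}{11}$ and a secrecy sum rate of $\onehalf\log_2\frac{11}{6} \approx 0.44$ bits.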
This lemma states that, in order to maximize the sum rate, any user whose channel to the eavesdropper is better than or equivalent to its channel to the intended receiver must cease transmission. We now determine the optimum power allocation among the remaining users, given in the following theorem: \begin{theorem} \label{thm:sumrate} The optimum power allocation $\v{P}^*$ satisfies $\Popt_k=P_{k,max}$ for $k=1,\dotsc,l$ and $\Popt_k=0$ for $k=l+1,\dotsc,K$, where the limiting user $l$ is such that \begin{equation} \label{eqn:sumcaplimusr} h_l < \frac{1+\sum_{k=1}^l h_k P_{k,max}}{1+\sum_{k=1}^l P_{k,max}} \le h_{l+1} \end{equation} \end{theorem} \begin{proof} From Lemma \ref{lem:zeropow}, if $h_k \ge 1$ then $\Popt_k=0$. Thus, we have $\phi_\Ss(\v{P}^*)\ge0$, with equality if and only if $\Popt_k=0$ for all $k \in \Ss$. Then, from the complementary slackness conditions, we must have $\mu_{3\Ss}=0$ for every $\Ss$ containing a transmitting user, while every $\Ss$ not containing a transmitting user has $\sum_{k \in \Ss} \Popt_k=0$. As a result, for every user $j$ with $\Popt_j>0$, the terms $\mu_{3\Ss}\dot{\phi}^{(j)}_\Ss(\v{P}^*)$ vanish for all $\Ss \subseteq \Ks$. Then, it is easy to see from \eqref{eqn:Lagder} that $\Popt_j=P_{j,max}$ if $h_j<\rho(\v{P}^*)$ and $\Popt_j=0$ if $h_j > \rho(\v{P}^*)$. A user $j$ may have $0<\Popt_j<P_{j,max}$ only if $h_j=\rho(\v{P}^*)$; in that case, however, the sum rate is independent of that user's power, so we can set $\Popt_j=0$ without any loss in the achievable sum rate, and conserve power. The next step is to find the limiting user $l$. It is easy to see that this user must satisfy \eqref{eqn:sumcaplimusr}, and it can be found in at most $K$ steps. \end{proof}
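As an illustration of Theorem~\ref{thm:sumrate}, consider $K=3$ users with the hypothetical parameters $h=(0.2,\,0.8,\,1.5)$ and $P_{k,max}=10$ for all $k$, with rates in bits. Testing $l=1$ in \eqref{eqn:sumcaplimusr}: \begin{equation*} h_1=0.2 < \frac{1+0.2\cdot 10}{1+10}=\frac{3}{11}\approx 0.27 \le h_2=0.8, \end{equation*} so $l=1$, $\v{P}^*=(10,0,0)$, and the secrecy sum rate is $g(10)-g(2)\approx 0.94$ bits. Note that although $h_2=0.8<1$, letting user $2$ also transmit would raise $\rho$ to $\frac{11}{21}\approx 0.52$ and reduce the secrecy sum rate to about $0.47$ bits: a user can hurt the sum rate even with $h_k<1$ whenever $h_k$ lies above the threshold in \eqref{eqn:sumcaplimusr}.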
\section{System Model and Problem Statement} \label{sec:system} We consider $K$ users communicating with a receiver in the presence of an eavesdropper. Transmitter $k=1,\dotsc,K$ chooses a message $W_k$ from a set of equally likely messages $\s{W}_k=\{1, \dotsc, M_k\}$. The messages are encoded using $(2^{nR_k},n)$ codes into $\{\tilde X_k^n(W_k)\}$, where $R_k=\ninv \log_2 M_k$. The encoded messages $\{\tilde \Xm_k\}=\{\tilde X_k^n\}$ are then transmitted, and the intended receiver and the eavesdropper each get a copy, $\Ym=Y^n$ and $\Zm=Z^n$, respectively. The receiver decodes $\Ym$ to obtain an estimate of the transmitted messages, $\hat \Wm$. We would like to communicate with the receiver with arbitrarily low probability of error, while maintaining perfect secrecy, the exact definition of which will be made precise shortly. The signals at the intended receiver and the eavesdropper are given by \begin{align} \Ym &= \ssum_{k=1}^K \sqrt{h^{{\scriptscriptstyle (\channel{M})}}_k} \tilde \Xm_k + \tilde{\Nm}^{{\scriptscriptstyle (\channel{M})}} \\ \Zm &= \ssum_{k=1}^K \sqrt{h^{{\scriptscriptstyle (\channel{W})}}_k} \tilde \Xm_k + \tilde{\Nm}^{{\scriptscriptstyle (\channel{W})}} \end{align} where $\tilde{\Nm}^{{\scriptscriptstyle (\channel{M})}},\tilde{\Nm}^{{\scriptscriptstyle (\channel{W})}}$ are AWGN vectors, i.e., $\tilde{\Nm}^{{\scriptscriptstyle (\channel{M})}} \isnormal{\v{0},\nvar_\channel{M} \v{I}}$ and $\tilde{\Nm}^{{\scriptscriptstyle (\channel{W})}} \isnormal{\v{0},\nvar_\channel{W} \v{I}}$. We also assume the following transmit power constraints: \begin{equation} \ninv \sumton{\tilde X_{ki}^2} \le \tilde P_{k,max}, \quad k=1,\dotsc,K \end{equation} Similar to the scaling transformation used to put an interference channel in standard form, \cite{carleial:interference}, we can represent any GMAC-WT by an equivalent standard form as in \cite{tekin:IT06a}: \begin{subequations} \label{eqn:YZstd} \begin{align} \Ym &= \ssum_{k=1}^K \Xm_k + \Nm^{{\scriptscriptstyle (\channel{M})}} \\ \Zm &= \ssum_{k=1}^K \sqrt{h_k} \Xm_k + \Nm^{{\scriptscriptstyle (\channel{W})}} \end{align} \end{subequations} where \begin{itemize} \item the original codewords $\{\tilde \Xm_k\}$ are scaled to give $\Xm_k = \sqrt{\frac{h^{{\scriptscriptstyle (\channel{M})}}_k}{\nvar_\channel{M}}}\tilde \Xm_k$; \item the eavesdropper's new channel gains are given by $h_k = \frac{h^{{\scriptscriptstyle (\channel{W})}}_k \nvar_\channel{M}}{h^{{\scriptscriptstyle (\channel{M})}}_k \nvar_\channel{W}}$; \item the noise vectors are normalized to unit variance, $\Nm^{{\scriptscriptstyle (\channel{M})}} = \frac{1}{\sqrt{\nvar_\channel{M}}}\tilde{\Nm}^{{\scriptscriptstyle (\channel{M})}}$ and $\Nm^{{\scriptscriptstyle (\channel{W})}} = \frac{1}{\sqrt{\nvar_\channel{W}}}\tilde{\Nm}^{{\scriptscriptstyle (\channel{W})}}$. \end{itemize} In \cite{tekin:ASILOMAR05}, we examined the special case of the eavesdropper receiving a degraded version of the intended receiver's signal, which is equivalent to $h_1=h_2=\dotsc=h_K \equiv h < 1$. In this paper, we look at the more general case where this is not necessarily true. The model is illustrated in Figure \ref{fig:gmacwtsystem}. \begin{figure}[t] \centering \includegraphics[height=1.66in,angle=0]{systemK.eps} \caption{\small Equivalent General Gaussian Multiple-Access Wire-Tap Channel (GGMAC-WT) system model.} \label{fig:gmacwtsystem} \end{figure} We use the collective secrecy constraints defined in \cite{tekin:ASILOMAR05} to take the multi-access nature of the channel into account: \begin{equation} \Delta^{(C)}_\Ss \triangleq \frac{H(\Wm_\Ss|\Zm)}{H(\Wm_\Ss)} \quad \forall \Ss \subseteq \Ks \triangleq \{1,\dotsc,K\} \end{equation} We constrain each subset of users to maintain perfect secrecy, i.e., $\Delta^{(C)}_\Ss \ge 1-\e$ for all sets $\Ss$ such that $H(\Wm_\Ss) >0$. Since this must be true for all sets of users, the system collectively has perfect secrecy.
However, if a group of users is somehow compromised, the remaining users may still be vulnerable. Note that requiring $\Delta^{(C)}_\Ks \ge 1-\frac{\e}{r}$, where $r \ge \ssum_{k=1}^K R_k / \min_{k:R_k>0} R_k$, guarantees the perfect secrecy of all subsets, as seen from the following argument: \begin{align} H(\Wm_\Ks|\Zm) &\ge H(\Wm_\Ks) - \frac{\e}{r} H(\Wm_\Ks) \\ H(\Wm_\Ss|\Zm) &\ge H(\Wm_\Ss)+H(\Wm_{\Ss^c}|\Wm_\Ss) \notag \\ &\qquad- H(\Wm_{\Ss^c}|\Wm_\Ss,\Zm) -\frac{\e}{r} H(\Wm_\Ks) \\ &\ge H(\Wm_\Ss) -\frac{\e}{r} H(\Wm_\Ks) \\ \frac{H(\Wm_\Ss|\Zm)}{H(\Wm_\Ss)} &\ge 1 - \frac{H(\Wm_\Ks)}{H(\Wm_\Ss)} \frac{\e}{r} \\ \Delta^{(C)}_\Ss &\ge 1-\e \end{align} where the second inequality uses $H(\Wm_\Ss|\Zm) = H(\Wm_\Ks|\Zm) - H(\Wm_{\Ss^c}|\Wm_\Ss,\Zm)$ and $H(\Wm_\Ks) = H(\Wm_\Ss) + H(\Wm_{\Ss^c}|\Wm_\Ss)$, and the last step uses $H(\Wm_\Ks)/H(\Wm_\Ss) \le r$. \begin{definition}[Achievable rates] \label{def:achrate} The rate vector $\Rm=(R_1,\dotsc,R_K)$ is said to be \ital{achievable with perfect secrecy} if for any given $\e>0$ there exists a code of sufficient length \n such that \begin{align} \ninv \log_2 M_k &\ge R_k - \e \quad k=1,\dotsc,K\\ \Perr &\le \e \\ \Delta^{(C)}_\Ss &\ge 1-\e \quad \forall \Ss \subseteq \Ks=\{1,\dotsc,K\} \end{align} where user $k$ chooses one of $M_k$ messages to transmit according to the uniform distribution and \begin{equation} \Perr = \frac{1}{\prod_{k=1}^K M_k} \sum_{\Wm \in \s{W}_1 \times\s{W}_2 \times \dotsm \times \s{W}_K} \prob{\hat \Wm \neq \Wm \,|\, \Wm \text{ was sent}} \end{equation} is the average probability of error. We denote the set of all rates achievable with perfect secrecy by $\Csd$. \end{definition} Before we state our results, we define the following quantities for any $\Ss \subseteq \Ks$: \begin{gather*} \CM[\Ss] \triangleq g\paren{\ssum_{k \in \Ss} P_k}, \hspace{.3in} \CW[\Ss] \triangleq g\paren{\ssum_{k \in \Ss} h_k P_k} \\ \CMs[\Ss] \triangleq g\paren{\frac{\ssum_{k \in \Ss} P_k}{1+\ssum_{k \in \Ss^c} P_k}} \\ \CWs[\Ss] \triangleq g\paren{\frac{\ssum_{k \in \Ss} h_k P_k}{1+\ssum_{k \in \Ss^c} h_k P_k}} \end{gather*} where $g(\xi) \triangleq \onehalf \log (1+\xi)$ and $\Ss^c=\Ks \setminus \Ss$. The quantities with $\Ss=\Ks$ will sometimes also be used with the subscript \ital{sum}. Note that these quantities are functions of $\{P_k\}_{k=1}^K$. We also define the following set of allowable powers, chosen such that $\CM[\Ss] \ge \CWs[\Ss]$: \begin{align} \notag \Ps \triangleq \bigg \{\v{P}=(P_1,\dotsc,P_K) \colon\; & P_{k,max} \ge P_k \ge 0, \quad k=1,\dotsc,K, \\ \label{eqn:Pset} & \phi_\Ss(\v{P}) \ge 0 \quad \forall \Ss \subseteq \Ks \bigg \} \end{align} where \begin{equation} \label{eqn:phidef} \phi_\Ss(\v{P}) \triangleq \sum_{k \in \Ss} P_k - \frac{\sum_{k \in \Ss} h_k P_k}{1+\sum_{k \in \Ss^c} h_k P_k} \end{equation} Note that if $h_k \le 1 \, \forall k$, we are left with $\Ps \equiv \{\v{P} \colon P_{k,max} \ge P_k \ge 0, \; k=1,\dotsc,K\}$. On the other hand, if $h_k > 1 \, \forall k$, then the constraint for the set $\Ks$ forces $P_k=0 \, \forall k$. \section{Conclusions} In this paper, we found an achievable rate region for the General Gaussian Multiple-Access Wire-Tap Channel (GGMAC-WT), in which a second wireless receiver eavesdrops on the uplink of a GMAC. We also showed that the sum rate is maximized when only users whose channels to the intended receiver are ``better'' than their channels to the eavesdropper transmit, and that they do so using all their available power. Moreover, we explored the possibility of the users with worse channels to the intended receiver helping the transmitting users by jamming. This scheme, which we term \ital{collaborative secrecy}, was analyzed for the two-user case.
{ "attr-fineweb-edu": 1.844727, "attr-cc_en_topic": 12, "domain": "arxiv" }