TITLE: Simple bivectors in four dimensions QUESTION [1 upvotes]: I am trying to characterize simple bivectors in four dimensions, i.e. elements $B \in \bigwedge^2 \mathbb{R}^4$ such that $B = a \wedge b$ for two vectors $a, b \in \mathbb{R}^4$. In the book Clifford Algebras and Spinors by Pertti Lounesto, I found the following: I can see why the square of any simple bivector is real since we have the identity $(a \wedge b)^2 = -|a \wedge b |^2$. However, I cannot prove the second statement, i.e. if the square of a bivector is real, then it is simple. Writing $e_{ij} = e_i \wedge e_j$ and choosing $\{e_{14}, e_{24}, e_{34}, e_{23}, e_{31}, e_{12}\}$ as a basis of $\bigwedge^2\mathbb{R}^4$ (I have specific reasons to choose this slightly atypical basis), I find by direct computation that $B^2 = -|B|^2 + 2(B_{12}B_{34} + B_{14} B_{23} + B_{31}B_{24})e_{1234}$, where $e_{1234} = e_1 e_2 e_3 e_4$ denotes the pseudoscalar in the Clifford algebra of $\mathbb{R}^4$. Yet, I don't manage to conclude from that. As a more general approach, I thought of using the relationship between simple rotations of $\mathbb{R}^4$ and simple bivectors. In fact, the simple bivectors form a double cover of the simple rotations, so the geometry of the simple bivectors should be something like the choice of a plane in $\mathbb{R}^4$ and the choice of an angle $\theta \in [- \pi, +\pi]$, i.e. $$ \text{simple bivectors } \simeq Gr(2, 4) \times [- \pi, +\pi]. $$ Is the latter more or less correct? And how can this help me to characterize more precisely simple bivectors in $\mathbb{R}^4$? REPLY [0 votes]: $ \newcommand\form[1]{\langle#1\rangle} $It is much, much easier than the other answers are making it out to be. Note that $B^2$ must be an even element; so $$ B^2 = \form{B^2}_0 + \form{B^2}_2 + \form{B^2}_4 = B\cdot B + B\times B + B\wedge B = B\cdot B + B\wedge B, $$ where $\form{\cdot}_r$ is the grade $r$ projection and $X\times Y = \tfrac12(XY - YX)$ is the commutator product, so that $B\times B = \tfrac12(BB - BB) = 0$. (It is true for any $k$-vector $X$ that $\form{BX}_k = B\times X$.) If $B$ is simple, then clearly we're left with $B^2 = B\cdot B$, a scalar. Conversely, if $B^2$ is scalar, then $B\wedge B = 0$. We may write $B$ as a sum of two simple bivectors $$ B = B_1 + B_2. $$ Hence $$ 0 = B\wedge B = 2B_1\wedge B_2, $$ so the planes represented by $B_1$ and $B_2$ must intersect in a line (if they met only at the origin, their span would be all of $\mathbb{R}^4$ and $B_1\wedge B_2$ would be non-zero); let $b$ be a vector spanning that line. So $B$ lives in a 3D subspace and $B$ is factorable. More concretely, there are $b_1, b_2$ such that $B_1 = b\wedge b_1$ and $B_2 = b\wedge b_2$, so $$ B = b\wedge b_1 + b\wedge b_2 = b\wedge(b_1 + b_2). $$
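To make the criterion concrete, here is a quick numerical sanity check (a sketch in Python/NumPy; the component ordering and the helper names wedge and pfaffian are my own choices, not anything from the book). The vanishing of $B\wedge B$ amounts to $B_{12}B_{34} - B_{13}B_{24} + B_{14}B_{23} = 0$, which matches the combination $B_{12}B_{34} + B_{14}B_{23} + B_{31}B_{24}$ above since $B_{31} = -B_{13}$:

    import numpy as np

    rng = np.random.default_rng(0)

    # index pairs for the components of a bivector, in the order
    # (B12, B13, B14, B23, B24, B34)
    PAIRS = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

    def wedge(a, b):
        # components of a ^ b:  B_ij = a_i b_j - a_j b_i
        return np.array([a[i]*b[j] - a[j]*b[i] for (i, j) in PAIRS])

    def pfaffian(B):
        # coefficient of e_1234 in (1/2) B ^ B; zero iff B is simple
        B12, B13, B14, B23, B24, B34 = B
        return B12*B34 - B13*B24 + B14*B23

    a, b = rng.standard_normal(4), rng.standard_normal(4)
    print(pfaffian(wedge(a, b)))             # ~ 0: simple
    print(pfaffian(rng.standard_normal(6)))  # generically non-zero: not simple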
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 3730174, "subset_name": null }
TITLE: Noetherian descent extension for a given ring QUESTION [3 upvotes]: For a homomorphism of rings $R \to S$, the following are equivalent: a) $(-) \otimes_R S : \mathrm{Mod}(R) \to \mathrm{Mod}(S)$ reflects isomorphisms b) $R \to S$ satisfies effective descent with respect to modules. c) $R \to S$ is pure: For every $R$-module $M$ the natural map $M \to M \otimes_R S$ is a monomorphism. For a reference, see "Descent Theory for Schemes" by Bachuki Mesablishvili; this work generalizes Grothendieck's descent theory for quasi-coherent modules. Question. Let $R$ be a ring. Is there a noetherian ring $S$ together with a ring homomorphism $R \to S$ satisfying the equivalent properties above? I don't know which of (a) or (c) is easier to verify. I've also included (b) to illustrate the strength of the condition, but also the geometric significance: Is every (affine) scheme "built up" out of noetherian schemes with respect to the "topology" of effective descent morphisms for quasi-coherent modules? An affirmative answer would settle another problem in my work. On the other hand, I have no idea how to construct a reasonable noetherian ring $S$ over $R$ at all. If $R$ is a domain (which you may assume for convenience), then $S = \mathrm{Quot}(R)$ is noetherian, but of course it does not satisfy the condition. REPLY [8 votes]: I am afraid that this is not true for any non-noetherian ring. Let $R \to S$ be a descent extension, and assume that $S$ is noetherian. Let $I$ be an ideal of $R$; since $S$ is noetherian, the ideal $IS$ is finitely generated, so there is a finitely generated ideal $J \subseteq I$ of $R$ such that $JS = IS$. Hence the natural surjection $R/J \to R/I$ becomes an isomorphism after tensoring with $S$, so by (a) it was already an isomorphism of $R$-modules. So $I = J$; in particular, $I$ is finitely generated. Hence $R$ is noetherian.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 3, "question_id": 82014, "subset_name": null }
TITLE: An identity involving the Pochhammer symbol QUESTION [4 upvotes]: I need help proving the following identity: $$\frac{(6n)!}{(3n)!} = 1728^n \left(\frac{1}{6}\right)_n \left(\frac{1}{2}\right)_n \left(\frac{5}{6}\right)_n.$$ Here, $$(a)_n = a(a + 1)(a + 2) \cdots (a + n - 1), \quad n \geq 1, \quad (a)_0 = 1,$$ is the Pochhammer symbol. I do not really know how one converts expressions involving factorials to products of the Pochhammer symbols. Is there a general procedure? Any help would be appreciated. REPLY [4 votes]: By using the formula \begin{align} (a)_{kn} = k^{kn} \prod_{r=0}^{n-1} \left( \frac{a+r}{k} \right)_{n} \end{align} it is evident that the desired quantity, \begin{align} (1728)^{n} \left( \frac{1}{6} \right)_{n} \left( \frac{3}{6} \right)_{n} \left( \frac{5}{6} \right)_{n}, \end{align} can be seen as \begin{align} 2^{6n} 3^{3n} \left( \frac{1}{6} \right)_{n} \left( \frac{3}{6} \right)_{n} \left( \frac{5}{6} \right)_{n} = \frac{ 6^{6n} \left( \frac{1}{6} \right)_{n} \left( \frac{2}{6} \right)_{n} \left( \frac{3}{6} \right)_{n} \left( \frac{4}{6} \right)_{n} \left( \frac{5}{6} \right)_{n} }{ 3^{3n} \left( \frac{1}{3} \right)_{n} \left( \frac{2}{3} \right)_{n} } = \frac{(1)_{6n}}{(1)_{3n}} = \frac{(6n)!}{(3n)!}. \end{align}
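For what it's worth, the identity is easy to confirm in exact arithmetic (a small Python sketch; poch is my own helper, not a standard library function):

    from fractions import Fraction
    from math import factorial

    def poch(a, n):
        # Pochhammer symbol (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1
        out = Fraction(1)
        for k in range(n):
            out *= a + k
        return out

    for n in range(6):
        lhs = Fraction(factorial(6*n), factorial(3*n))
        rhs = (Fraction(1728)**n * poch(Fraction(1, 6), n)
               * poch(Fraction(1, 2), n) * poch(Fraction(5, 6), n))
        assert lhs == rhs
    print("identity holds for n = 0..5")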
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 4, "question_id": 656505, "subset_name": null }
TITLE: How can I construct a rotationally symmetric energy eigenstate for a particle on a ring? QUESTION [0 upvotes]: The energy eigenstates for a particle on a ring are $\psi_m(\phi) \propto e^{im\phi}$, which are two-fold degenerate. A general energy eigenstate is any linear combination of these, which I can write as $a\psi_m + b\psi_{-m}$. These eigenstates are in general not rotationally symmetric. How can I construct a rotationally symmetric energy eigenstate for an arbitrary energy? REPLY [4 votes]: You cannot. If $\psi(\theta)$ is independent of the angle $\theta$, then it is an eigenstate of $L= -i\partial_\theta$ with eigenvalue $0$, and so an eigenstate of $H=L^2= -\partial^2_\theta$ with eigenvalue $E=0$. Hence the only rotationally symmetric energy eigenstate is the constant one, with energy $E = 0$.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 625651, "subset_name": null }
\begin{document} \title[Fixed equilibria]{Equilibrium singularity distributions in the plane} \author[P.K. Newton \& V. Ostrovskyi]{Paul K. Newton and Vitalii Ostrovskyi} \affiliation{Department of Aerospace \& Mechanical Engineering and Department of Mathematics\\ University of Southern California, Los Angeles, CA 90089-1191} \label{firstpage} \maketitle \begin{abstract}{Singularity dynamics; Fixed equilibria; Singular values; Point vortex equilibria; Shannon entropy} We characterize all {\it fixed} equilibrium point singularity distributions in the plane of logarithmic type, allowing for real, imaginary, or complex singularity strengths $\vec{\Gamma}$. The dynamical system follows from the assumption that each of the $N$ singularities moves according to the flowfield generated by all the others at that point. For strength vector $\vec{\Gamma} \in {\Bbb R}^N$, the dynamical system is the classical point vortex system obtained from a singular discrete representation of the vorticity field from incompressible fluid flow. When $\vec{\Gamma} \in (i{\Bbb R})^N$ is purely imaginary, it corresponds to a system of sources and sinks, whereas when $\vec{\Gamma} \in {\Bbb C}^N$ the system consists of spiral sources and sinks discussed in Kochin et al. (1964). We formulate the equilibrium problem as one in linear algebra, $A \vec{\Gamma} = 0$, $A \in {\Bbb C}^{N \times N}$, $\vec{\Gamma} \in {\Bbb C}^N$, where $A$ is an $N \times N$ complex skew-symmetric configuration matrix which encodes the geometry of the system of interacting singularities. For an equilibrium to exist, $A$ must have a kernel. $\vec{\Gamma}$ must then be an element of the nullspace of $A$. We prove that when $N$ is odd, $A$ always has a kernel, hence there is a choice of $\vec{\Gamma}$ for which the system is a fixed equilibrium. When $N$ is even, there may or may not be a non-trivial nullspace of $A$, depending on the relative position of the points in the plane. We describe a method for classifying the equilibria in terms of the distribution of the non-zero singular values of $A$, or equivalently, the non-zero eigenvalues of the associated covariance matrix $A^{\dagger} A$, from which one can calculate the Shannon entropy of the configuration. \end{abstract} \section{Introduction} Consider the vector field at $z = 0$ governed by the complex dynamical system: \begin{eqnarray} \dot{z}^* = \frac{\Gamma}{2 \pi i} \frac{1}{z}, \quad z(t) \in {\Bbb C}, \quad \Gamma \in {\Bbb C}, \quad t > 0,\label{z} \end{eqnarray} where $z^*$ denotes the complex conjugate of $z(t)$. Letting $z(t) = r(t) \exp (i \theta(t))$, $\Gamma = \Gamma_r + i \Gamma_i$, gives: \begin{eqnarray} \dot{r} &=& \frac{\Gamma_i}{2 \pi r},\\ \dot{\theta} &=& \frac{\Gamma_r}{2 \pi r^2}, \end{eqnarray} from which it is easy to see that: \begin{eqnarray} r(t) &=& \sqrt{\left(\frac{\Gamma_i}{\pi}\right) t + r^2 (0)}, \end{eqnarray} \begin{equation} \theta(t) = \left\{ \begin{array}{ll} \theta (0) + \left(\frac{\Gamma_r}{2 \Gamma_i} \right) \ln \left( 1 + \left(\frac{\Gamma_i}{\pi r^2 (0)}\right) t \right) & \mbox{if $ \Gamma_i \ne 0$} \\\\ \theta (0) + \frac{\Gamma_r t}{2 \pi r^2 (0)} & \mbox{if $ \Gamma_i = 0$}. \end{array} \right. 
\end{equation} \noindent When $\Gamma_r \ne 0$, $\Gamma_i = 0$, the field is that of a classical point-vortex (figure \ref{fig1}(a),(b)); when $\Gamma_r = 0$, $\Gamma_i \ne 0$ it is a source ($\Gamma_i > 0$) or sink ($\Gamma_i < 0$) (figure \ref{fig1}(c),(d)), while when $\Gamma_r \ne 0$, $\Gamma_i \ne 0$, it is a spiral-source or sink (figure \ref{fig1}(e)-(h)). \begin{figure}[ht] \begin{tabular}{cccc} \includegraphics[scale=0.25,angle=0]{template1.pdf} & \includegraphics[scale=0.25,angle=0]{template2.pdf}& \includegraphics[scale=0.25,angle=0]{template3.pdf} & \includegraphics[scale=0.25,angle=0]{template4.pdf}\\ {\footnotesize (a) $\Gamma_r > 0, \Gamma_i = 0$} & {\footnotesize (b) $\Gamma_r < 0, \Gamma_i = 0$ } & {\footnotesize (c) $\Gamma_r = 0, \Gamma_i < 0$} & {\footnotesize (d) $\Gamma_r = 0, \Gamma_i > 0$}\\ \vspace{0.01cm}\\ \includegraphics[scale=0.25,angle=0]{template5.pdf} & \includegraphics[scale=0.25,angle=0]{template6.pdf}& \includegraphics[scale=0.25,angle=0]{template7.pdf} & \includegraphics[scale=0.25,angle=0]{template8.pdf}\\ {\footnotesize (e) $\Gamma_r < 0$, $ \Gamma_i < 0$} & {\footnotesize (f) $ \Gamma_r > 0$, $\Gamma_i > 0$} & {\footnotesize (g) $\Gamma_r > 0$, $ \Gamma_i < 0$ } & {\footnotesize (h) $\Gamma_r < 0$, $\Gamma_i > 0$}\\ \vspace{0.01cm}\\ \end{tabular} \caption{\footnotesize All possible flowfields at the singular point $z = 0$ associated with the dynamical system (\ref{z}).} \label{fig1} \end{figure} A collection of $N$ of these point singularities, each located at $z = z_{\beta}(t)$, $\beta = 1, ..., N$, by linear superposition, produces the field: \begin{equation} {\dot{z}^{*}} = \frac{1}{2 \pi i} \sum_{\beta = 1}^N \frac{\Gamma_{\beta}}{z - z_{\beta}}; \quad z(t) \equiv x(t) + iy(t) \in {\Bbb C},\quad \Gamma_{\beta} \in {\Bbb C}.\label{eqn1} \end{equation} Then, if we advect each by the velocity field generated by all the others\footnote{One might characterize this dynamical assumption by saying that each singularity `goes with the flow'.}, we arrive at the complex dynamical system: \begin{equation} {\dot{z}^{*}_{\alpha}} = \frac{1}{2 \pi i} \sum_{\beta = 1}^N{}' \frac{\Gamma_{\beta}}{z_{\alpha} - z_{\beta}}; \quad z_{\alpha}(t) \equiv x_{\alpha}(t) + iy_{\alpha}(t) \in {\Bbb C},\quad \Gamma_{\beta} \in {\Bbb C},\label{eqn1a} \end{equation} where $'$ indicates that $\beta \ne \alpha$. In this paper we characterize all {\it fixed} equilibria of (\ref{eqn1a}), namely solutions for which ${\dot{z}^{*}}_{\alpha}(t) = 0$. For this, we have the $N$ coupled equations: \begin{equation} \sum_{\beta = 1}^N{}' \frac{\Gamma_{\beta}}{z_{\alpha} - z_{\beta}} = 0, \quad (\alpha = 1, ... N),\label{eqn2} \end{equation} where we are interested in positions $z_{\alpha}$ and strengths $\Gamma_{\alpha}$ for which this nonlinear algebraic system is satisfied. Since Eqn (\ref{eqn2}) is {\it linear} in the $\Gamma$'s, it can more productively be written in matrix form \begin{equation} A \vec{\Gamma} = 0\label{eqn3} \end{equation} where $A \in {\Bbb C}^{N \times N}$ is evidently a skew-symmetric matrix $A = -A^T$, with entries $[ a_{\alpha \alpha } ] = 0$, $[ a_{\alpha \beta} ] = \frac{1}{z_\alpha - z_\beta} = - [ a_{\beta \alpha} ] $. We call $A$ the {\it configuration matrix} associated with the interacting particle system (\ref{eqn1a}). The collection of points $\{ z_1 (0), z_2 (0), ... , z_N (0) \}$ in the complex plane is called the {\it configuration}. 
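\medskip \noindent As a quick numerical illustration of (\ref{eqn1a})--(\ref{eqn3}) (a minimal sketch in Python/NumPy, not part of the original analysis; the function name \texttt{config\_matrix} is ours), one can assemble the configuration matrix for a random configuration and check that it is skew-symmetric and, for odd $N$, singular:
\begin{verbatim}
import numpy as np

def config_matrix(z):
    # skew-symmetric configuration matrix, a_{ab} = 1/(z_a - z_b)
    N = len(z)
    A = np.zeros((N, N), dtype=complex)
    for a in range(N):
        for b in range(N):
            if a != b:
                A[a, b] = 1.0 / (z[a] - z[b])
    return A

rng = np.random.default_rng(1)
z = rng.standard_normal(7) + 1j * rng.standard_normal(7)  # N = 7, odd
A = config_matrix(z)
print(np.allclose(A, -A.T))   # True: skew-symmetry
print(abs(np.linalg.det(A)))  # ~ 0: a kernel exists
\end{verbatim}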
From (\ref{eqn3}), we can conclude that the points $z_{\alpha}$ are in a fixed equilibrium configuration if $\det (A) = 0$, i.e. there is at least one zero eigenvalue of $A$. If the corresponding eigenvector is real, the configuration is made up of point-vortices. If it is imaginary, it is made up of sources and sinks. If it is complex, it is made up of spiral sources and sinks. Notice also that if $\frac{d{z}^{*}_{\alpha}}{dt} = 0$, then one can prove that $\frac{d^n{z}^{*}_{\alpha}}{dt^n} = 0$ for any $n$. It follows that: \medskip \noindent {\bf Theorem 1.} {\it For a given configuration of $N$ points $\{z_1 , z_2, ..., z_N \}$ in the complex plane, there exists a set of singularity strengths $\vec{\Gamma}$ for which the configuration is a fixed equilibrium solution of the dynamical system (\ref{eqn1a}) iff $A$ has a kernel, or equivalently, if there is at least one zero eigenvalue of $A$. If the nullspace dimension of $A$ is one, i.e. there is only one zero eigenvalue, the choice of $\vec{\Gamma}$ is unique (up to a multiplicative constant). If the nullspace dimension is greater than one, the choice of $\vec{\Gamma}$ is not unique and can be any linear combination of the basis elements of null($A$).} The equilibria we consider in this paper all have one-dimensional nullspaces and odd $N$. The more delicate cases of equilibria with higher dimensional nullspaces and even $N$ are deferred to a separate study. We mention here work of Campbell \& Kadtke (1987) and Kadtke \& Campbell (1987) in which a different technique is described to find stationary solutions to (\ref{eqn1a}). \section{General properties of the configuration matrix} Since $A$ is skew-symmetric, it follows that \begin{equation} \det (A) = \det (- A^T ) = (-1)^N \det (A^T ) = (-1)^N \det (A).\label{eqn4} \end{equation} Hence, for $N$ {\it odd}, we have $- \det (A) = \det (A)$, which implies $\det (A) = 0$. \medskip \noindent {\bf Theorem 2.} {\it When $N$ is odd, $A$ always has at least one zero eigenvalue, hence for any configuration there exists a choice $\vec{\Gamma} \in {\Bbb C}^N$ for which the system is a fixed equilibrium.} \medskip \noindent When $N$ is even, there may or may not be a fixed equilibrium, depending on whether or not $A$ has a non-trivial nullspace. In general, we would like to determine a basis set for the nullspace of $A$ for a given configuration, i.e. the set of all strengths for which a given configuration remains fixed. Other important general properties of skew-symmetric matrices are listed below: \begin{enumerate} \item The eigenvalues always come in pairs $\pm \lambda$. If $N$ is odd, there is one unpaired eigenvalue that is zero. \item If $N$ is even, $\det (A) = Pf(A) ^2 \ge 0$, where $Pf$ is the Pfaffian. \item Real skew-symmetric matrices have pure imaginary eigenvalues. \end{enumerate} Recall that every matrix can be written as the sum of a Hermitian matrix ($B = B^\dagger$) and a skew-Hermitian matrix ($C = -C^\dagger$). To see this, notice \begin{equation} A \equiv \frac{1}{2}(A + A^{\dagger}) + \frac{1}{2}(A - A^{\dagger}). \end{equation} Here, $B \equiv \frac{1}{2}(A + A^{\dagger}) = B^{\dagger}$ and $C \equiv \frac{1}{2}(A - A^{\dagger}) = -C^{\dagger}$. A matrix is {\it normal} if $AA^\dagger = A^\dagger A$, otherwise it is {\it non-normal}. If we calculate $AA^\dagger - A^\dagger A$, where $A = B + C$ as above, then it is easy to see that \begin{equation} AA^\dagger - A^\dagger A = 2(CB - BC). \end{equation} Therefore, if $B = 0$ or $C = 0$, $A$ is normal. 
\medskip \noindent {\bf Theorem 3.} {\it All Hermitian or skew-Hermitian matrices are normal.} \medskip The generic configuration matrix $A$ arising from (\ref{eqn3}) is, however, non-normal. \subsection{Spectral decomposition of normal and non-normal matrices} For normal matrices, the following spectral-decomposition holds: \medskip \noindent {\bf Theorem 4.} {\it $A$ is a normal matrix $\Leftrightarrow$ $A$ is unitarily diagonalizable, i.e. \begin{equation} A = Q \Lambda Q^\dagger \label{diag} \end{equation} where $Q$ is unitary.} \medskip \noindent Here, the columns of $Q$ are the $N$ linearly independent eigenvectors of $A$ that can be made mutually orthogonal. The matrix $\Lambda$ is a diagonal matrix with the $N$ eigenvalues down the diagonal. See Golub and Van Loan (1996) for details. In general, however, for the system of interacting particles governed by (\ref{eqn2}), (\ref{eqn3}), $A \in {\Bbb C}^{N \times N}$ will be a non-normal matrix. The most comprehensive decomposition of $A$ in this case is the singular value decomposition (Golub and Van Loan (1996), Trefethen and Bau (1997)). It is a factorization that greatly generalizes the spectral decomposition of a normal matrix, and it is available for any matrix. The $N$ singular values, $\sigma^{(i)}$ ($i = 1, \ldots N$), of $A$, are non-negative real numbers that satisfy \begin{align} A {\bf v}^{(i)} = \sigma^{(i)} {\bf u}^{(i)}; \quad A^\dagger {\bf u}^{(i)} = \sigma^{(i)} {\bf v}^{(i)},\label{singeqns} \end{align} where ${\bf u}^{(i)} \in {\Bbb C}^N$ and ${\bf v}^{(i)} \in {\Bbb C}^N$. The vector ${\bf u}^{(i)}$ is called the left-singular vector associated with $\sigma^{(i)}$, while ${\bf v}^{(i)}$ is the right-singular vector. In terms of these, the matrix $A$ has the factorization \begin{align} A = U\Sigma V^\dagger = \sum_{i=1}^k \sigma^{(i)} {\bf u}^{(i)} {\bf v}^{(i)}{}^\dagger, \quad (k \le N) \label{svddecomp} \end{align} where $U \in {\Bbb C}^{N \times N}$ is unitary, $V \in {\Bbb C}^{N \times N}$ is unitary, and $\Sigma \in {\Bbb R}^{N \times N}$ is diagonal. (\ref{svddecomp}) is the non-normal analogue of the spectral decomposition formula (\ref{diag}) where the summation term on the right hand side gives an (optimal) representation of $A$ as a linear combination of rank-one matrices with weightings governed by the singular values ordered from largest to smallest. Here, the rank of $A$ is $k$. The columns of $U$ are the left-singular vectors ${\bf u}^{(i)}$, while the columns of $V$ are the right-singular vectors ${\bf v}^{(i)}$. The matrix $\Sigma$ is given by: \begin{align} \Sigma = \left( \begin{array} {ccc}\sigma^{(1)} & \cdots & 0\\ & \ddots & \\ 0 & \cdots & \sigma^{(N)} \end{array} \right) \in {\Bbb R}^{N \times N}.\label{singmatrix} \end{align} The singular values can be ordered so that $\sigma^{(1)} \ge \sigma^{(2)} \ge \ldots \ge \sigma^{(N)} \ge 0$ and one or more may be zero. As is evident from multiplying the first equation in (\ref{singeqns}) by $A^\dagger$ and the second by $A$, \begin{align} (A^{\dagger} A - \sigma^{(i)}{}^2 ){\bf v}^{(i)} = 0; \quad (A A^{\dagger} - \sigma^{(i)}{}^2 ){\bf u}^{(i)} = 0,\label{eigs} \end{align} the singular values squared are the eigenvalues of the {\it covariance} matrices $A^{\dagger} A$ or $A A^{\dagger}$, which have the same eigenvalue structure, while the left-singular vectors ${\bf u}^{(i)}$ are the eigenvectors of $AA^\dagger$, and the right-singular vectors ${\bf v}^{(i)}$ are the eigenvectors of $A^{\dagger} A$. 
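\medskip \noindent The decomposition (\ref{svddecomp}) is also convenient computationally. A minimal numerical sketch (again Python/NumPy, ours, reusing \texttt{config\_matrix} from the Introduction) which extracts an equilibrium strength vector directly from the SVD:
\begin{verbatim}
rng = np.random.default_rng(2)
z = rng.standard_normal(7) + 1j * rng.standard_normal(7)
A = config_matrix(z)

U, s, Vh = np.linalg.svd(A)       # s in descending order
Gamma = Vh[-1].conj()             # right-singular vector for sigma ~ 0
print(s[-1])                      # ~ 0 since N is odd
print(np.max(np.abs(A @ Gamma)))  # ~ 0: all N velocities vanish
\end{verbatim}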
From (\ref{singeqns}), we also note that the right singular vectors ${\bf v}^{(i)}$ corresponding to $\sigma^{(i)} = 0$ form a basis for the nullspace of $A$. Because of (\ref{eqn3}), we seek configuration matrices with one or more singular values that are zero. \section{Collinear equilibria} For the special case in which all the particles lie on a straight line, there is no loss in assuming $z_{\alpha} = x_{\alpha} \in {\Bbb R}$. Then $A \in {\Bbb R}^{N \times N}$, $A$ is a normal skew-symmetric matrix, and the eigenvalues are pure imaginary. As an example, consider the collinear case $N = 3$. Let the particle positions be $x_1 < x_2 < x_3$, with corresponding strengths $\Gamma_1$, $\Gamma_2$, $\Gamma_3$. The $A$ matrix is then given by \begin{equation} A = \left[ \begin{array}{ccc} 0 & \frac{1}{x_1 - x_2} & \frac{1}{x_1 - x_3}\\ \frac{1}{x_2 - x_1} & 0 & \frac{1}{x_2 - x_3}\\ \frac{1}{x_3 - x_1} & \frac{1}{x_3 - x_2} & 0 \end{array} \right]. \end{equation} Since $N$ is odd, we have $\det(A) = 0$. The other two eigenvalues are given by: \begin{eqnarray} \lambda_{123} = \pm i\sqrt{\frac{1}{(x_2 - x_1 )^2} + \frac{1}{(x_3 - x_2 )^2} + \frac{1}{(x_3 - x_1 )^2}}, \end{eqnarray} which is invariant under cyclic permutations of the indices ($\lambda_{123} = \lambda_{231} = \lambda_{312}$). We can scale the length of the configuration so that the distance between $x_1$ and $x_3$ is one, hence without loss of generality, let $x_1 = 0, x_2 = x, x_3 = 1$. The other two eigenvalues are then given by the formula: \begin{eqnarray} \lambda = \pm i\sqrt{\frac{(1-x+x^2 )^2}{x^2 (1-x)^2} }. \end{eqnarray} It is easy to see that the numerator has no roots in the interval $(0,1)$, hence the nullspace dimension of $A$ is one. The nullspace vector is then given (uniquely up to multiplicative constant) by: \begin{equation} \vec{\Gamma} = \left[ \begin{array}{c} 1\\ -\left( \frac{x_3 - x_2}{x_3 - x_1}\right) \\ \left(\frac{x_3 - x_2}{x_2 - x_1}\right) \end{array} \right]. \end{equation} For the special symmetric case $x_3 - x_1 = 1$, $x_3 - x_2 = 1/2$, $x_2 - x_1 = 1/2$, we have $\Gamma_1 = 1, \Gamma_2 = -1/2, \Gamma_3 = 1$. We show this case in figure \ref{fig2} along with the separatrices associated with the corresponding flowfield generated by the singularities. Since the sum of the strengths of the three vortices is $\Gamma_1 + \Gamma_2 + \Gamma_3 = 1 - 1/2 + 1 = 3/2$, the far field is that of a point vortex of strength $\Gamma = 3/2$. Interestingly, for the collinear cases, since $A$ is real, the nullspace vector can be taken to be real, and multiplying it by $i$ produces a purely imaginary one. Hence, each collinear configuration of point vortices obtained with a given $\vec{\Gamma} \in {\Bbb R}^N$ is also a collinear configuration of sources/sinks with corresponding strengths given by $i\vec{\Gamma}$. The corresponding streamline pattern for the source/sink configuration, as shown in the dashed curves of figure \ref{fig2}, is everywhere orthogonal to the curves corresponding to the point vortex case. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{Figure2.pdf} \end{center} \caption{\footnotesize $N = 3$ evenly distributed point vortices on a line with strengths $\Gamma_1 = 1, \Gamma_2 = -\frac{1}{2}, \Gamma_3 = 1$, in equilibrium. The far field is that of a point vortex at the center-of-vorticity of the system. Solid streamline pattern is for point vortices, dashed streamline pattern is for source/sink system. 
The patterns are orthogonal.} \label{fig2} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{Figure3.pdf} \end{center} \caption{\footnotesize $N = 7$ evenly distributed point vortices on a line. The far field is that of a point vortex at the center-of-vorticity of the system. Because of the symmetry of the spacing, the vortex strengths are symmetric about the central point $x_4$ which also corresponds to the center-of-vorticity.} \label{fig3} \end{figure} \begin{figure}[h] \includegraphics[scale=0.6,angle=0]{Figure4.pdf} \caption{\footnotesize $N = 7$ randomly distributed point vortices on a line. The far field is that of a point vortex at the center-of-vorticity of the system.} \label{fig4} \end{figure} For $N$ even, we cannot say a priori whether or not $\det (A) = 0$ as the case for $N = 2$ shows. For this, the $A$ matrix is \begin{equation} A = \left[ \begin{array}{cc} 0 & \frac{1}{x_1 - x_2}\\ \frac{1}{x_2 - x_1} & 0 \end{array} \right] = \left[ \begin{array}{cc} 0 & \frac{1}{d}\\ - \frac{1}{d} & 0 \end{array} \right]. \end{equation} The eigenvalues are $\lambda = \pm i/d$, hence there is no equilibrium (except in the limit $d \rightarrow \infty$). We show in figures \ref{fig3} and \ref{fig4} two representative examples of collinear fixed point vortex equilibria for $N = 7$, along with their corresponding global streamline patterns. In figure \ref{fig3} we deposit seven evenly spaced points on a line and solve for the nullspace vector to obtain the singularity strengths (ordered from left to right) \begin{eqnarray} \vec{\Gamma} &=& (1.0000, -0.5536, 0.9212, -0.5797, 0.9212, -0.5536, 1.0000),\\ \sum_\alpha \Gamma_\alpha &=& 2.1555. \end{eqnarray} Because of the even spacing, the strengths are symmetric about the central point $x_4$ ($\Gamma_1 = \Gamma_7, \Gamma_2 = \Gamma_6 , \Gamma_3 = \Gamma_5$), which is also the location of the center-of-vorticity $\sum_{\alpha = 1}^7 \Gamma_\alpha x_\alpha / \sum_{\alpha = 1}^7 \Gamma_\alpha$. Figure \ref{fig4} shows a fixed equilibrium corresponding to seven points randomly placed on a line. The nullspace vector for this case is (ordered from left to right) \begin{eqnarray} \vec{\Gamma} &=& (1.0000, -0.5071, 0.5342, -0.4007, 0.2815, -0.2505, 1.0743),\\ \sum_\alpha \Gamma_\alpha &=& 1.7317. \end{eqnarray} In both cases, the singularities are all point vortices (or source/sink systems) hence are examples of collinear equilibria such as those discussed in Aref (2007a, 2007b, 2009) and Aref et al. (2003) where the strengths are typically chosen to be equal. The streamline pattern at infinity in both cases is that of a single point vortex of strength $\sum_{\alpha = 1}^7 \Gamma_\alpha \ne 0$ located at the center of vorticity $\sum_{\alpha = 1}^7 \Gamma_\alpha x_\alpha / \sum_{\alpha = 1}^7 \Gamma_\alpha$. \section{Triangular equilibria} The case $N = 3$ is somewhat special and worth treating separately. Given any three points $\{ z_1 , z_2 , z_3 \}$ in the complex plane, the corresponding configuration matrix $A$ is: \begin{equation} A = \left[ \begin{array}{ccc} 0 & \frac{1}{z_1 - z_2} & \frac{1}{z_1 - z_3}\\ \frac{1}{z_2 - z_1} & 0 & \frac{1}{z_2 - z_3}\\ \frac{1}{z_3 - z_1} & \frac{1}{z_3 - z_2} & 0 \end{array} \right]. \end{equation} There is no loss of generality in choosing two of the points along the real axis, one at the origin of our coordinate system, the other at $x = 1$. Hence we set $z_1 = 0$, $z_2 = 1$, and we let $z_3 \equiv z$. 
Then $A$ is written much more simply: \begin{equation} A = \left[ \begin{array}{ccc} 0 &-1& - \frac{1}{ z}\\ 1& 0 & \frac{1}{1 - z}\\ \frac{1}{z} & \frac{1}{z - 1} & 0 \end{array} \right]. \end{equation} Since $N$ is odd, one of the eigenvalues of $A$ is zero. The other two are given by: \begin{eqnarray} \lambda = \pm i\sqrt{\frac{1}{z^2} + \frac{1}{(1-z)^2} + 1} = \pm i\sqrt{\frac{ (1-z+z^2 )^2}{z^2 (1-z)^2 }} \end{eqnarray} When the numerator is not zero, the nullspace dimension is one and it is easy to see that the nullspace of $A$ is spanned by: \begin{equation} \vec{\Gamma} = \left[ \begin{array}{c} \frac{1}{z-1} \\ - \frac{1}{z}\\ 1 \end{array} \right].\label{onednulls} \end{equation} However, the numerator is zero at the points: \begin{eqnarray} z = \exp (\frac{\pi i}{3}), \exp (\frac{5\pi i}{3}), \end{eqnarray} at which $\Re z = \frac{1}{2}$, $\Im z = \pm \frac{\sqrt{3}}{2}$. Together with $z_1 = 0$ and $z_2 = 1$, this forms an equilateral triangle, in which case all three eigenvalues of $A$ vanish; since a non-zero skew-symmetric matrix has even rank, the nullspace nevertheless remains one-dimensional (the zero eigenvalue has algebraic multiplicity three but geometric multiplicity one). We have thus proven the following: \medskip \noindent {\bf Theorem 5.} {\it For three point vortices, or for three sources/sinks, the only fixed equilibria are collinear. In this case, the nullspace dimension of $A$ is one and is given by (\ref{onednulls}). For the equilateral triangle configuration, all three eigenvalues of $A$ vanish, while the nullspace remains one-dimensional.} \medskip We show a fixed equilibrium equilateral triangle state in figure \ref{fig5} along with the corresponding streamline pattern. Figures \ref{fig6}, \ref{fig7}, \ref{fig8} show examples of $N = 3$ triangular states that are not equilateral. \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{Figure5.pdf} \caption{\footnotesize $N = 3$ equilateral triangle configuration with corresponding streamline pattern. The strengths are given by $\Gamma_1 = 1.0000$, $\Gamma_2 = -0.5000 + 0.8660i$, $\Gamma_3 = -0.5000 + 0.8660i$. } \label{fig5} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{Figure6.pdf} \caption{\footnotesize $N = 3$ non-equilateral triangular state with corresponding streamline pattern. The strengths are given by $\Gamma_1 = 1.0000$, $\Gamma_2 = 0.3077 + 0.4615i$, $\Gamma_3 = -0.3200 - 0.2400i$. } \label{fig6} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.8,angle=0]{Figure7.pdf} \caption{\footnotesize $N = 3$ isosceles triangle state. The strengths are given by $\Gamma_1 = 1.0000$, $\Gamma_2 = -1.6000 + 0.8000i$, $\Gamma_3 = -1.6000 - 0.8000i$. } \label{fig7} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.6,angle=0]{Figure8.pdf} \caption{\footnotesize $N = 3$ isosceles triangle state. The strengths are given by $\Gamma_1 = 1.0000$, $\Gamma_2 = -0.0308 + 0.2462i$, $\Gamma_3 = -0.0308 - 0.2462i$. } \label{fig8} \end{center} \end{figure} \section{Equilibria along prescribed curves} We now ask a more general and interesting question. Given any curve in the complex plane, if we distribute points $\{z_{\alpha}\}$, ($\alpha = 1, ..., N$) along the curve, is it possible to find a strength vector $\vec{\Gamma}$ so that the configuration is fixed? The answer is yes, if $N$ is odd, and sometimes, if $N$ is even. Figures \ref{fig9} - \ref{fig15} show a collection of fixed equilibria along curves that we prescribe. First, figure \ref{fig9} shows $7$ points placed randomly in the plane, with the singularity strengths obtained from the nullspace of $A$ so that the system is an equilibrium. 
The strengths are given by: $\vec{\Gamma} = ( 1.0000, -0.7958 + 1.0089i, -1.3563 - 0.4012i, 0.0297 + 0.1594i, 0.9155 + 0.3458i, -2.0504 - 0.8776i, -0.1935 - 1.0802i)^T$ with the sum given by $ -2.4508 - 0.8449 i$. Thus, the far field is that of a spiral-sink configuration. Figure \ref{fig10} shows the case of $N = 7$ points distributed evenly around a circle. The nullspace vector is given by $\vec{\Gamma} = (1.0000, -0.9010 + 0.4339i, 0.6235 - 0.7818i, -0.2225 + 0.9749i, -0.2225 - 0.9749i, 0.6235 + 0.7818i, -0.9010 - 0.4339i)^T$. For this very symmetric case, the sum of the strengths is zero, hence in a sense, the far field vanishes. Figure \ref{fig11} shows the case of $N = 7$ points placed at random positions on a circle. Here, the nullspace vector is given by $\vec{\Gamma} = ( 1.0000, -0.6342 + 0.4086i, 0.3699 - 0.5929i, -0.1501 + 0.6135i, -0.2483 - 0.9884i, 0.2901 + 0.3056i, -0.3595 - 0.2686i)^T$. The random placement of points breaks the symmetry of the previous case and the sum of strengths is given by $0.2649 - 0.5222 i$ which corresponds to a spiral-sink. In figures \ref{fig12} and \ref{fig13} we show equilibrium distributions of points along a curve we call a `flower-petal', given by the formula $r(\theta ) = \cos (2\theta)$, $0 \le \theta \le 2 \pi$. In figure \ref{fig12} we distribute them evenly on the curve, while in figure \ref{fig13} we distribute them randomly. The particle strengths from the configuration in figure \ref{fig12} are $\vec{\Gamma} = (1.0000, 0.1824 + 0.1498i, -0.9892 - 0.9103i, -0.1378 - 0.5333i, -0.1378 + 0.5333i, -0.9892 + 0.9103i, 0.1824 - 0.1498i )^T $ with sum equaling $ -0.8892$ corresponding to a far field point vortex. Figure \ref{fig13} shows particles distributed randomly on the same flower-petal curve. Here, the particle strengths are $\vec{\Gamma} = (1.0000, 0.2094 - 0.4071i, -0.3009 + 0.3003i, 0.0404 - 0.2864i, -0.1779 + 0.2773i, 0.4236 + 0.8052i, -0.4702 - 0.3304i)^T$, with sum given by $0.7244 + 0.3589i$. Hence the far field corresponds to a source-spiral. The last two configurations, shown in figures \ref{fig14} and \ref{fig15}, are equilibria distributed along the figure eight curve, given by the formula $r(\theta ) = \cos^2 (\theta)$, $0 \le \theta \le 2 \pi$. In figure \ref{fig14} we distribute the points evenly around the curve, which gives rise to strengths $\vec{\Gamma} = (1.0000, -0.2734 + 0.5350i, 0.0239 - 0.2080i, 0.1063 - 0.0517i, 0.1063 + 0.0517i, 0.0239 + 0.2080i, -0.2734 - 0.5350i )^T $, whose sum is $0.7136$, thus a far field point vortex. In contrast, when the points are distributed randomly around the same curve, as in figure \ref{fig15}, the strengths are given by $\vec{\Gamma} = (1.0000, -0.1054 + 0.5724i, -0.0174 - 0.4587i, 0.9208 + 1.2450i, -0.0460 - 0.4577i, -0.5292 + 0.2371i, -0.2543 - 0.0921i)^T $, with sum equaling $0.9685 + 1.0460i$, hence a far field source-spiral. \begin{figure}[h] \begin{center} \includegraphics[scale=0.75,angle=0]{Figure9.pdf} \caption{\footnotesize Fixed equilibrium for seven points placed at random locations in the plane. The far field is a spiral-sink (figure 1(e)) since $\sum \Gamma_\alpha = -2.4508 - 0.8449 i$.} \label{fig9} \end{center} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure10.pdf} \caption{\footnotesize $N = 7$ evenly distributed points on a circle (dashed curve) in equilibrium. 
Because of the symmetry of the configuration, $\sum \Gamma_\alpha = 0$, hence the far-field vanishes.} \label{fig10} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure11.pdf} \caption{\footnotesize $N = 7$ randomly distributed particles on a circle (dashed curve) in equilibrium along with the corresponding streamline pattern. The far field streamline pattern is that of a spiral-sink (figure 1(g)) since $\sum \Gamma_\alpha = 0.2649 - 0.5222 i$.} \label{fig11} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure12.pdf} \caption{\footnotesize $N = 7$ evenly distributed particles in equilibrium on the curve $r(\theta ) = \cos (2\theta)$ (dashed curve) along with the corresponding streamline pattern. The far field corresponds to a point vortex since $\sum \Gamma_\alpha = -0.8892$.} \label{fig12} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure13.pdf} \caption{\footnotesize $N = 7$ randomly distributed particles in equilibrium on the curve $r(\theta ) = \cos (2 \theta)$ (dashed curve). The far field corresponds to a source-spiral (figure 1(f)) since $\sum \Gamma_\alpha = 0.7244 + 0.3589i$.} \label{fig13} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure14.pdf} \caption{\footnotesize $N = 7$ evenly distributed particles in equilibrium on the curve $r(\theta ) = \cos^2 (\theta)$ (dashed curve). The far field corresponds to a point vortex since $\sum \Gamma_\alpha = 0.7136$.} \label{fig14} \end{figure} \begin{figure}[ht] \includegraphics[scale=0.9,angle=0]{Figure15.pdf} \caption{\footnotesize $N = 7$ randomly distributed particles in equilibrium on the curve $r(\theta ) = \cos^2 (\theta)$ (dashed curve). The far field corresponds to a source-spiral (figure 1(f)) since $\sum \Gamma_\alpha = 0.9685 +1.0460i$.} \label{fig15} \end{figure} \section{Classification of equilibria in terms of the singular spectrum} \begin{figure}[ht] \includegraphics[scale=0.7,angle=0]{Figure16.pdf} \caption{\footnotesize Singular spectrum (normalized) for $N = 7$ particles placed randomly along a figure eight planar curve (i.e. the equilibrium configuration shown in figure \ref{fig15}). 
Singular values are grouped in pairs (except for the zero value) due to the skew-symmetry of the configuration matrix.} \label{fig16} \end{figure} \begin{table} \begin{center} \begin{minipage}{8.0cm} \begin{tabular}{||l|c|c|c||} \hline Configuration & $\sigma$ (unnormalized) & $\hat{\sigma}$ (normalized) & Shannon entropy \\\hline & 1.0000 & 0.5000 & 0.6931\\ Equilateral & 1.0000 & 0.5000 & \\ & 0.00 & 0.00 & \\\hline &1.0598 & 0.5000 & 0.6931 \\ Isosceles (acute)& 1.0598 & 0.5000 & \\ & 0.00 & 0.00 & \\\hline &2.7203 & 0.5000 & 0.6931 \\ Isosceles (obtuse)& 2.7203 & 0.5000 & \\ & 0.00 & 0.00 & \\\hline &1.2115 & 0.5000 & 0.6931 \\ Arbitrary triangle & 1.2115 & 0.5000 & \\ & 0.00 & 0.00 & \\\hline \end{tabular} \end{minipage} \end{center} \caption{Singular spectrum of triangular states ($N = 3$) \label{table1}} \end{table} \begin{table} \begin{center} \begin{minipage}{8.0cm} \begin{tabular}{||l|c|c|c||} \hline Configuration & $\sigma$ (unnormalized) & $\hat{\sigma}$ (normalized) & Shannon entropy \\\hline & 4.5000 & 0.5000 & 0.6931\\ $N=3$ & 4.5000 & 0.5000 & \\ & 0.00 & 0.00 & \\\hline & 2.5249 & 0.3214 & 1.5237 \\ $N = 7$ (even) & 2.5249 & 0.3214 & \\ & 1.6831 & 0.1428 & \\ & 1.6831 & 0.1428 &\\ & 0.8420 & 0.0357 & \\ & 0.8420 & 0.0357 & \\ & 0.00 & 0.00 & \\\hline & 6.3408 & 0.4457 & 1.0723 \\ $N = 7$ (random) & 6.3408 & 0.4457 & \\ & 2.0969 & 0.0487 & \\ & 2.0969 & 0.0487 &\\ & 0.7062 & 0.0055 & \\ & 0.7062 & 0.0055 & \\ & 0.0000 & 0.0000 & \\\hline \end{tabular} \end{minipage} \end{center} \caption{Singular spectrum of collinear states ($N = 3, 7$) \label{table2}} \end{table} \begin{table} \begin{center} \begin{minipage}{8.0cm} \begin{tabular}{||l|c|c|c||} \hline Configuration & $\sigma$ (unnormalized) & $\hat{\sigma}$ (normalized) & Shannon entropy \\\hline & 3.0000 & 0.3214 & 1.5236 \\ $N = 7$ (even) & 3.0000 & 0.3214 & \\ & 2.0000 & 0.1429 & \\ & 2.0000 & 0.1429 &\\ & 1.0000 & 0.0357 & \\ & 1.0000 & 0.0357 & \\ & 0.0000 & 0.0000 & \\\hline & 3.7954 & 0.3363 & 1.4700 \\ $N = 7$ (random) & 3.7954 & 0.3363 & \\ & 2.4250 & 0.1373 & \\ & 2.4250 & 0.1373 &\\ & 1.0631 & 0.0264 & \\ & 1.0631 & 0.0264 & \\ & 0.0000 & 0.0000 & \\\hline \end{tabular} \end{minipage} \end{center} \caption{Singular spectrum of circular states ($N = 7$) \label{table3}} \end{table} \begin{table} \begin{center} \begin{minipage}{8.0cm} \begin{tabular}{||l|c|c|c||} \hline Configuration & $\sigma$ (unnormalized) & $\hat{\sigma}$ (normalized) & Shannon entropy \\\hline & 11.9630 & 0.4664 & 0.9651 \\ $N = 7$ (even) & 11.9630 & 0.4664 & \\ & 3.0001 & 0.0293 & \\ & 3.0001 & 0.0293 &\\ & 1.1454 & 0.0043 & \\ & 1.1454 & 0.0043 & \\ & 0.0000 & 0.0000 & \\\hline & 6.9337 & 0.3465 & 1.3929 \\ $N = 7$ (random) & 6.9337 & 0.3465 & \\ & 4.4357 & 0.1418 & \\ & 4.4357 & 0.1418 &\\ & 1.2769 & 0.0117 & \\ & 1.2769 & 0.0117 & \\ & 0.0000 & 0.0000 & \\\hline \end{tabular} \end{minipage} \end{center} \caption{Singular spectrum of figure eight states ($N = 7$) \label{table4}} \end{table} \begin{table} \begin{center} \begin{minipage}{8.0cm} \begin{tabular}{||l|c|c|c||} \hline Configuration & $\sigma$ (unnormalized) & $\hat{\sigma}$ (normalized) & Shannon entropy \\\hline & 5.9438 & 0.4447 & 1.1034 \\ $N = 7$ (even) & 5.9438 & 0.4447 & \\ & 1.8115 & 0.0413 & \\ & 1.8115 & 0.0413 &\\ & 1.0538 & 0.0140 & \\ & 1.0538 & 0.0140 & \\ & 0.0000 & 0.0000 & \\\hline & 8.0780 & 0.3875 & 1.3393 \\ $N = 7$ (random) & 8.0780 & 0.3875 & \\ & 3.8900 & 0.0899 & \\ & 3.8900 & 0.0899 &\\ & 1.9523 & 0.0226 & \\ & 1.9523 & 0.0226 & \\ & 0.0000 & 0.0000 & \\\hline \end{tabular} \end{minipage} \end{center} 
\caption{Singular spectrum of flower states ($N = 7$) \label{table5}} \end{table} Here we describe how to use the non-zero singular spectrum of $A$ to classify the equilibrium when $A$ has a kernel. Let $\sigma^{(i)}$, $i = 1, ..., k < N$ denote the non-zero singular values of the configuration matrix $A$, arranged in descending order $\sigma^{(1)} \ge \sigma^{(2)} \ge ... \ge \sigma^{(k)} > 0$. First we normalize the squares of the singular values ({\it i.e.}, the non-zero eigenvalues of the covariance matrix $A^{\dagger} A$) so that they sum to one: \begin{eqnarray} \hat{\sigma}^{(i)} \equiv \sigma^{(i)}{}^2 /\sum_{j=1}^k \sigma^{(j)}{}^2 \end{eqnarray} Then \begin{eqnarray} \sum_{i=1}^k \hat{\sigma}^{(i)} =1, \end{eqnarray} and the string of $k$ numbers arranged from largest to smallest: $(\hat{\sigma}^{(1)} , \hat{\sigma}^{(2)}, ..., \hat{\sigma}^{(k)})$ is the `spectral representation' of the equilibrium. The rate at which they decay from largest to smallest is encoded in a scalar quantity called the Shannon entropy, $S$, of the matrix (see Shannon (1948) and more recent discussions associated with vortex lattices in Newton and Chamoun (2009)): \begin{eqnarray} S = - \sum_{i=1}^k \hat{\sigma}^{(i)} \log \hat{\sigma}^{(i)} . \end{eqnarray} With this representation, spectra that drop off rapidly from highest to lowest are `low-entropy equilibria', whereas those that drop off slowly (even distribution of normalized singular values) are `high-entropy equilibria'. Note that from the representation (\ref{svddecomp}), low-entropy equilibria have configuration matrix representations that are dominated in size by a small number of terms, whereas the configuration matrices of high-entropy equilibria have terms that are more equal in size. See Newton and Chamoun (2009) for more detailed discussions in the context of relative equilibrium configurations, and the original report of Shannon (1948) which has illuminating discussions of entropy, information content, and its interpretations with respect to randomness. As an example of the normalized spectral distribution associated with the figure eight equilibrium shown in figure \ref{fig15}, we show in figure \ref{fig16} the $7$ singular values (including the zero one). The fact that they are grouped in pairs follows from the skew-symmetry of $A$: the non-zero singular values of a complex skew-symmetric matrix always occur with even multiplicity (for a normal matrix this reduces to the eigenvalue pairing $\pm \lambda$, the singular values being the moduli $|\lambda|$). Tables \ref{table1} - \ref{table5} show the complete singular spectrum for all the equilibria considered in this paper. A common measure of `robustness' associated with the configuration matrix, hence the equilibrium, is the size of the `spectral gap' as measured by the size of the smallest non-zero singular value. From Table \ref{table2}, the collinear state with points distributed randomly and the figure-eight state with points distributed evenly (Table \ref{table4}) are the least robust in that their smallest non-zero normalized singular values are closest to zero. \section{Discussion} In this paper we describe a new method for finding and classifying fixed equilibrium distributions of point singularities of source/sink, point vortex, or spiral source/sink type in the complex plane under the dynamical assumption that each point `goes with the flow'. This includes configurations placed at random points in the plane, at prescribed points, or lying along prescribed curves. 
This last situation is reminiscent of a classical technique for enforcing boundary conditions along arbitrarily shaped boundaries embedded in fluid flows. These techniques are generically referred to as singularity distribution methods. See, for example, Katz and Plotkin (2001) and Cortez (1996, 2000) for applications and discussions of these methods in the context of potential flow, hence inviscid boundary conditions, and Cortez (2001) in the context of Stokes flow, hence viscous boundary conditions. For these problems, there is generally no associated evolution equation for the interacting singularities which discretize the boundary, as there is for us in (\ref{eqn1a}). Their positions are fixed to lie along the given boundary, and the strengths are then judiciously chosen to enforce the relevant inviscid or viscous boundary conditions. As in Cortez (2001), it would be of interest to `regularize' the point singularities (\ref{z}) and ask if the methods in this paper can be extended to smoothed-out singularities; a more complete analysis of the `pseudo-spectrum' associated with the configuration matrices $A$, as discussed in Trefethen and Bau (1997), would also be of interest.
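\medskip \noindent As a closing numerical check on Table \ref{table1}, the normalized spectrum and entropy of the equilateral triangle configuration can be reproduced directly (again a Python/NumPy sketch of ours, reusing \texttt{config\_matrix} from the Introduction; the entropy is scale-invariant, so the result is independent of the triangle's size):
\begin{verbatim}
import numpy as np

z = np.array([0, 1, np.exp(1j * np.pi / 3)])  # equilateral triangle
s = np.linalg.svd(config_matrix(z), compute_uv=False)
s2 = s[s > 1e-10] ** 2                 # non-zero squared singular values
shat = s2 / s2.sum()                   # normalized spectrum
S = -(shat * np.log(shat)).sum()       # Shannon entropy
print(shat, S)                         # (0.5, 0.5), S = log 2 = 0.6931
\end{verbatim}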
{ "config": "arxiv", "file": "1006.0543/Complex_equilibria-2.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: If $\lim\limits_{x \rightarrow \infty} f'(x)^2 + f^3(x) = 0$ , show that $\lim \limits_ {x\rightarrow \infty} f(x) = 0$ QUESTION [3 upvotes]: If $f : \mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function and $\lim\limits_{x \rightarrow \infty} f'(x)^2 + f^3(x) = 0$ , show that $\lim\limits_{x\rightarrow \infty} f(x) = 0$. I really have no clue how to start, I tried things like MVT and using definition of derivatives but I really can't figure this out. REPLY [1 votes]: Assume the claim is wrong, i.e., there exists $a>0$ and a sequence $\xi_n\to \infty$ with $|f(\xi_n)|>a$. There exists $x_0$ with $|f'(x)^2+f(x)^3|<\frac{a^3}8$ for $x\ge x_0$. Assume $x_0<x_1<x_2$ and $f'(x_{1,2})=0$. Then $|f(x_{1,2})|<\frac a2$. If some $\xi_n$ is in $[x_1,x_2]$ then there is a local extremum at some $\eta\in(x_1,x_2)$ where $f'(\eta)=0$ and $|f(\eta)|>|f(\xi_n)|>a$. Since this contradicts $|f'(\eta)^2+f(\eta)^3|<\frac{a^3}8$, we conclude that no $\xi_n$ is between two critical points, hence there is at most one critical point $>x_0$. Hence we may assume (by increasing $x_0$ if necessary) that $f$ is strictly monotonic on $[x_0,\infty)$. Assume $f$ is strictly increasing. Since $f(x)<\frac a2$ for $x>x_0$ (otherwise $f'(x)^2<\frac{a^3}8-f(x)^3\le 0$, which is impossible), $f$ is bounded from above, hence converges. Then $f'(x)^2$ converges as well and as it is never negative, $f'(x)$ converges, and must converge to $0$ (otherwise $f$ would be unbounded). Then also $f(x)\to 0$ and we are done. Hence we may assume that $f$ is strictly decreasing. If $f$ is bounded from below, the same argument as above applies and we are done again. Hence $f$ is strictly decreasing and not bounded from below. As $f(x)\to-\infty$, we may assume wlog. that $f(x)^3<-\frac{a^3}4$ for $x\ge x_0$. Hence $$\tag1 f'(x)^2+\frac12f(x)^3>0$$ Let $b\in\mathbb R $ and $g(x)=-8(x-b)^{-2}$ for $x<b$. Then $g'(x)=16(x-b)^{-3}$ so that $$\tag2g'(x)^2+\frac12g(x)^3=0.$$ Now adjust $b$ so that $x_0<b$ and $g(x_0)=f(x_0)$. That is, we let $$ b=x_0+\sqrt{-\frac8{f(x_0)}}$$ (Note that the radicand is positive). At any point $x\in[x_0,b)$ where $f(x)=g(x)$, we have $f'(x)<g'(x)<0$ by $(1)$ and $(2)$, so that $f(\xi)<g(\xi)$ in an interval $(x,x+\epsilon)$. We conclude that $f(x)\le g(x)$ for all $x\in[x_0,b)$. But $g(x)\to -\infty$ as $x\to b^-$ whereas $f(x)$ is bounded on $[x_0,b]$. Contradiction!
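As an aside, the key identity $(2)$ for the comparison function $g$ is easy to confirm symbolically (a small sketch assuming sympy is available):

    import sympy as sp

    x, b = sp.symbols('x b', real=True)
    g = -8 / (x - b)**2
    # g'(x)^2 + (1/2) g(x)^3 should simplify to 0
    print(sp.simplify(sp.diff(g, x)**2 + sp.Rational(1, 2) * g**3))  # 0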
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 3, "question_id": 1261736, "subset_name": null }
TITLE: Understanding the usage of Green's Theorem in this problem QUESTION [0 upvotes]: Use Green's theorem to find the area enclosed by the ellipse: $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ We parameterize the ellipse by the equations: $$x = a\cos{t}\\y = b\sin{t}$$ ... for $t \in [0, 2\pi]$. Green's theorem tells us that the area of $R$, the region enclosed by this simple closed curve, is: $$\text{area of R} = \oint_c x~ dy = \int _0 ^{2\pi} (a\cos{t})(b\cos{t}dt)$$ Green's theorem is: $$\oint_c Mdx + Ndy = \int \int_R \left (\frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right ) dA$$ I don't understand the usage of Green's theorem here. Where does the integral $\oint_c x~dy$ come from? I assume we're letting $N = x$, right? REPLY [0 votes]: Let $C$ be a closed, smooth, positively oriented curve and $R$ the region enclosed by $C$; then the area of $R$ is $$Area_R = \oint_C x\,dy\,\,\,\,\,\,\,\,\,\,(1)$$ To see this, note that the area of $R$ is $\iint_R dA$. If we apply Green's Theorem to (1) with $M=0$ and $N=x$, we get $\frac{\partial N}{\partial x}-\frac{\partial M}{\partial y}=1$, so $$Area_R =\oint_C x\,dy=\iint_R 1\,dA.$$ So yes, we are letting $N = x$ (and $M = 0$).
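If it helps, the full computation can be confirmed symbolically (a small sketch assuming sympy is available), recovering the familiar ellipse area $\pi a b$:

    import sympy as sp

    t, a, b = sp.symbols('t a b', positive=True)
    x = a * sp.cos(t)
    y = b * sp.sin(t)
    # area of R = closed line integral of x dy over C
    area = sp.integrate(x * sp.diff(y, t), (t, 0, 2 * sp.pi))
    print(area)  # pi*a*b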
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 2435282, "subset_name": null }
TITLE: Prove that $f(\overline A) \subseteq \overline{f(A)}$ QUESTION [2 upvotes]: Let $f : X \longrightarrow Y$ be continuous, and let $A \subseteq X$. Show that $f(\overline A) \subseteq \overline{f(A)}$. My attempt Let $y \in f(\overline A)$. Then there exists $x \in \overline A$ such that $f(x) = y$. Now let us take an open neighbourhood $V$ of $y$ in $Y$ arbitrarily. If we can show that $V \cap f(A) \neq \emptyset$ then our purpose will be served. Now since $f(x) = y \in V$, we have $x \in f^{-1} (V)$. Since $x \in \overline A$ and $f^{-1} (V)$ is open in $X$ (since $f$ is continuous), we have that $f^{-1} (V) \cap A \neq \emptyset$. Let $z \in f^{-1} (V) \cap A$. Then $f(z) \in V \cap f(A)$, which proves that $V \cap f(A) \neq \emptyset$ and we are done. Is my reasoning correct at all? Please verify it. REPLY [2 votes]: Your reasoning is correct. Alternative route: $f$ is continuous and $\overline{f(A)}$ is closed, so we conclude that $f^{-1}\left(\overline{f(A)}\right)$ is closed. Combining this with $$A\subseteq f^{-1}\left(f(A)\right)\subseteq f^{-1}\left(\overline{f(A)}\right)$$ allows us to conclude: $$\overline{A}\subseteq f^{-1}\left(\overline{f(A)}\right)$$ or equivalently: $$f(\overline{A})\subseteq\overline{f(A)}$$
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 2396558, "subset_name": null }
{\bf Problem.} The U.S. produces about 5.5 million tons of apples each year. Of the total, $20\%$ is mixed with other products, with $50\%$ of the remainder used for apple juice and the other $50\%$ sold fresh. How many million tons of apples are used for apple juice? Express your answer as a decimal to the nearest tenth. {\bf Level.} Level 4 {\bf Type.} Prealgebra {\bf Solution.} First, we wish to determine what percentage of the tons of apples are used for apple juice. After $20\%$ is mixed with other products, $80\%$ remains. Half of this is used for apple juice; therefore, $40\%$ of the tons of apples is used for apple juice. To calculate $40\%$ of $5.5$ million tons, we find $10\%$ and then multiply by four. $10\% = 0.55$, and $0.55 \cdot 4 = 2.2$. Thus, $\boxed{2.2}$ million tons are used for apple juice.
{ "config": null, "file": null, "set_name": "MATH", "score": null, "question_id": null, "subset_name": null }
TITLE: Find the solution to recurrence relation and initial conditions. Use an iterative approach. QUESTION [0 upvotes]: $$A(n)=3A(n-1)+1, A(0)=1$$ the solution from the textbook is $$A(n)=\dfrac{3^{n+1}-1}2$$ But I am having trouble understanding how we get this answer. REPLY [0 votes]: Start with $$ A(n)=3A(n-1)+1\quad\text{and}\quad A(0)=1 $$ Multiply by $3^{-n}$ $$ 3^{-n}A(n)=3^{1-n}A(n-1)+3^{-n} $$ Substitute $B(n)=3^{-n}A(n)$ $$ B(n)=B(n-1)+3^{-n}\quad\text{and}\quad B(0)=1 $$ Apply the sum of a geometric series $$ B(n)=\frac{1-3^{-n-1}}{1-3^{-1}} $$ Back substitute to get $$ A(n)=\frac{3^{n+1}-1}2 $$
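To see that the closed form and the recurrence agree, here is a quick iterative check (a small Python sketch; A_iter is my own helper):

    def A_iter(n):
        # iterate A(k) = 3 A(k-1) + 1 starting from A(0) = 1
        a = 1
        for _ in range(n):
            a = 3 * a + 1
        return a

    for n in range(10):
        assert A_iter(n) == (3**(n + 1) - 1) // 2
    print("closed form verified for n = 0..9")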
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3482367, "subset_name": null }
\begin{document} \title[Countably compact groups]{Countably compact groups without non-trivial convergent sequences} \author[Hru\v{s}\'{a}k]{M. Hru\v{s}\'{a}k} \address{Centro de Ciencias Matem\'aticas\\ Universidad Nacional Aut\'onoma de M\'exico\\ Campus Morelia\\Morelia, Michoac\'an\\ M\'exico 58089} \curraddr{} \email{[email protected]} \thanks{{The research of the first author was supported by a PAPIIT grant IN100317 and CONACyT grant A1-S-16164. The third named author was partially supported by the PAPIIT grants IA100517 and IN104419. Research of the fourth author was partially supported by European Research Council grant 338821. Paper 1173 on the fourth author's list}} \urladdr{http://www.matmor.unam.mx/~michael} \author[van Mill]{J. van Mill} \address{KdV Institute for Mathematics, University of Amsterdam, Science Park 105-107, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands} \curraddr{} \email{[email protected]} \thanks{} \author[Ramos-Garc\'{\i}a]{U. A. Ramos-Garc\'{\i}a} \address{Centro de Ciencias Matem\'aticas\\ Universidad Nacional Aut\'onoma de M\'exico\\ Campus Morelia\\Morelia, Michoac\'an\\ M\'exico 58089} \curraddr{} \email{[email protected]} \thanks{} \author[Shelah]{S. Shelah} \address{Einstein Institute of Mathematics, Edmond J. Safra Campus, The Hebrew University of Jerusalem, Givat Ram, Jerusalem, 91904, Israel and Department of Mathematics, Hill Center - Busch Campus, Rutgers, The State University of New Jersey, 110 Frelinghuysen Road, Piscataway, NJ 08854-8019, USA} \curraddr{} \email{[email protected]} \thanks{} \urladdr{http://shelah.logic.at} \subjclass[2010]{Primary 22A05, 03C20; Secondary 03E05, 54H11} \date{\today} \keywords{Products of countably compact groups, $p$-compact groups, ultrapowers, countably compact groups without convergent sequences} \begin{abstract} We construct, in {\sf ZFC}, a countably compact subgroup of $2^\mathfrak c$ without non-trivial convergent sequences, answering an old problem of van Douwen. As a consequence we also prove the existence of two countably compact groups $\mathbb G_0$ and $\mathbb G_1$ such that the product $\mathbb G_0 \times \mathbb G_1$ is not countably compact, thus answering a classical problem of Comfort. \end{abstract} \maketitle \section{ Introduction} The celebrated Comfort-Ross theorem \cite{MR207886,MR776643} states that any product of pseudo-compact topological groups is pseudo-compact, in stark contrast with the examples due to Nov\'ak \cite{MR0060212} and Terasaka \cite{MR0051500} who constructed pairs of countably compact spaces whose product is not even pseudo-compact. This motivated Comfort \cite{Letter-Comfort} (repeated in \cite{MR1078657}) to ask: \begin{question}[Comfort \cite{MR1078657}] Are there countably compact groups $\mathbb G_0, \mathbb G_1$ such that $\mathbb G_0\times\mathbb G_1$ is not countably compact? \end{question} The first consistent positive answer was given by van Douwen \cite{MR586725} under {\sf MA}, followed by Hart-van Mill \cite{MR982236} under {\sf MA${}_{ctble}$}. In his paper van Douwen showed that every Boolean countably compact group without non-trivial convergent sequences contains two countably compact subgroups whose product is not countably compact, and asked: \begin{question}[van Douwen \cite{MR586725}] Is there a countably compact group without non-trivial convergent sequences? \end{question} In fact, the first example of such a group was constructed by Hajnal and Juh\'asz \cite{MR0431086} a few years before van Douwen's \cite{MR586725} assuming {\sf CH}. 
Recall that every compact topological group contains a non-trivial convergent sequence, as an easy consequence of the classical and highly non-trivial theorem of Ivanovski\u{\i}-Vilenkin-Kuz'minov (see \cite{MR0104753}) that every compact topological group is \emph{dyadic}, \emph{i.e.,} a continuous image of $2^{\kappa}$ for some cardinal number $\kappa$. \smallskip Both questions have been studied extensively in recent decades, providing a large variety of sufficient conditions for the existence of examples to these questions, much work being done by Tomita and collaborators \cite{MR2029279,MR2113947,MR1925707,MR2921841,MR2519216,MR1426926,MR1664516,MR1974663,MR2107169,MR2163099,MR2133678,MR2080287}, but also others \cite{MR2284939,MR1719996,MR2139740,MR792239,MR1083312}. The questions are considered central in the theory of topological groups \cite{MR3241473,MR2433295,MR776643,MR1078657,dikranjan2007selected,MR1666795,MR1900269}. \smallskip Here we settle both problems by constructing in {\sf ZFC} a countably compact subgroup of $2^\mathfrak c$ without non-trivial convergent sequences. \smallskip The paper is organized as follows: In Section 2 we fix notation and review basic facts concerning ultrapowers, Fubini products of ultrafilters and Bohr topology. In Section 3 we study van Douwen's problem in the realm of $p$-compact groups. We show how iterated ultrapowers can be used to give interesting partial solutions to the problem. In particular, we show that an iterated ultrapower of the countable Boolean group endowed with the Bohr topology via a selective ultrafilter $p$ produces a $p$-compact subgroup of $2^{\mathfrak{c}}$ without non-trivial convergent sequences. This on the one hand raises interesting questions about ultrafilters, and on the other hand serves as a warm up for Section 4, where the main result of the paper is proved by constructing a countably compact subgroup of $2^{\mathfrak{c}}$ without non-trivial convergent sequences using not a single ultrafilter, but rather a carefully constructed $\mathfrak{c}$-sized family of ultrafilters. \section{Notation and terminology} Recall that an infinite topological space $X$ is \emph{countably compact} if every infinite subset of $X$ has an accumulation point. Given $p$ a nonprincipal ultrafilter on $\omega$ (for short, $p\in \omega^*$), a point $x\in X$ and a sequence $\{x_n: n\in\omega\}\subseteq X$ we say (following \cite{MR0251697}) that $x=p$-$\lim_{n \in \omega} x_n$ if for every open $U\subseteq X$ containing $x$ the set $\{n \in \omega \colon x_n\in U\}\in p$. It follows that a space $X$ is countably compact if and only if every sequence $\{x_n: n\in\omega\}\subseteq X$ has a $p$-limit in $X$ for some ultrafilter $p\in\omega^*$. Given an ultrafilter $p\in\omega^*$, a space $X$ is \emph{$p$-compact} if for every sequence $\{x_n: n\in\omega\}\subseteq X$ there is an $x\in X$ such that $x=p$-$\lim_{n \in \omega} x_n$. \medskip To introduce the following definition, we fix a bijection $\varphi: \omega\to\omega\times\omega$, and for a limit ordinal $\alpha<\omega_1$, we pick an increasing sequence $\{\alpha_n: n\in\omega\}$ of smaller ordinals with supremum $\alpha$. 
Given an ultrafilter $p\in\omega^*$, the \emph{iterated Fubini powers} or \emph{Frol\'{\i}k sums} \cite{MR0203676} of $p$ are defined recursively as follows: $$ p^1=p$$ $$p^{\alpha+1} =\{A\subseteq \omega: \{n:\{m: (n,m)\in \varphi[A]\}\in p^{\alpha}\}\in p\}\text{ and }$$ $$p^{\alpha} =\{A\subseteq \omega: \{n:\{m: (n,m)\in \varphi[A]\}\in p^{\alpha_n}\}\in p\}\text{ for } \alpha \text{ limit.}$$ The choice of the ultrafilter $p^\alpha$ depends on (the arbitrary) choice of $\varphi$ and the choice of the sequence $\{\alpha_n: n\in\omega\}$; however, the \emph{type} of $p^\alpha$ does not (see \emph{e.g.,} \cite{MR0203676,MR1227550}). \medskip For our purposes we give an alternative definition of the iterated Fubini powers of $p$: given $\alpha < \omega_{1}$ we fix a well-founded tree $T_{\alpha} \subset \omega^{<\omega}$ such that \begin{enumerate}[(i)] \item $\rho_{T_{\alpha}}(\varnothing)=\alpha$, where $\rho_{T_{\alpha}}$ denotes the rank function on $\langle T_{\alpha},\subseteq\rangle$; \item For every $t \in T_{\alpha}$, if $\rho_{T_{\alpha}}(t) > 0$ then $t^{\frown}n \in T_{\alpha}$ for all $n \in \omega$. \end{enumerate} For $\beta \leqslant \alpha$, let $\Omega_{\beta}(T_{\alpha})=\{t \in T_{\alpha} \colon \rho_{T_{\alpha}}(t)=\beta\}$ and $T_{\alpha}^{+}=\{t \in T_{\alpha} \colon \rho_{T_{\alpha}}(t) > 0\}$. \medskip If $p \in \omega^{*}$, then $\mathbb{L}_{p}(T_{\alpha})$ will be used to denote the collection of all trees $T \subseteq T_{\alpha}$ such that for every $t \in T \cap T_{\alpha}^{+}$ the set $\text{succ}_{T}(t)=\{n \in \omega \colon t^{\frown}n \in T\}$ belongs to $p$. Notice that each $T \in \mathbb{L}_{p}(T_{\alpha})$ is also a well-founded tree with $\rho_{T}(\varnothing)=\alpha$. Moreover, the family $\{\Omega_{0}(T) \colon T \in \mathbb{L}_{p}(T_{\alpha})\}$ forms a base of an ultrafilter on $\Omega_{0}(T_{\alpha})$ which has the same type as $p^{\alpha}$. If $T \in \mathbb{L}_{p}(T_{\alpha})$ and $U \in p$, $\restr{T}{U}$ denotes the tree in $\mathbb{L}_{p}(T_{\alpha})$ for which $\text{succ}_{\restr{T}{U}}(t)= \text{succ}_{T}(t) \cap U$ for all $t \in (\restr{T}{U})^{+}$. \medskip Next we recall the \emph{ultrapower} construction from model theory and algebra. Given a group $\mathbb G$ and an ultrafilter $p\in\omega^*$, let $$\mathsf{ult}_p(\mathbb G)=\mathbb G^\omega/\equiv \text{, where } f\equiv g \text{ iff } \{n: f(n)=g(n)\}\in p.$$ \emph{\L o\'s's theorem} \cite{MR0075156} states that for any formula $\phi$ with parameters $[f_0],\dots,[f_n]$, $\mathsf{ult}_p(\mathbb G)\models \phi ([f_0],\dots,[f_n])$ if and only if $\{k \colon \mathbb G\models\phi (f_0(k),\dots,f_n(k))\}\in p$. In particular, $\mathsf{ult}_p(\mathbb G)$ is a group with the same first order properties as $\mathbb G$. \medskip There is a natural embedding of $\mathbb G$ into $\mathsf{ult}_p(\mathbb G)$ sending each $g\in \mathbb G$ to the equivalence class of the constant function with value $g$. We shall therefore consider $\mathbb G$ as a subgroup of $\mathsf{ult}_p(\mathbb G)$. Also, without loss of generality, we can assume that $\text{dom}(f) \in p$ for every $[f] \in \mathsf{ult}_p(\mathbb G)$. \medskip Recall that the \emph{Bohr topology} on a group $\mathbb{G}$ is the weakest group topology making every homomorphism $\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})$ continuous, where the circle group $\mathbb{T}$ carries the usual compact topology. We let $(\mathbb{G},\tau_{\, \text{Bohr}})$ denote $\mathbb{G}$ equipped with the Bohr topology.
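\medskip It may be worth keeping in mind the following equivalent description, which we shall implicitly use later: $\tau_{\, \text{Bohr}}$ is the topology $\mathbb{G}$ inherits from the evaluation map \[ g \mapsto (\Phi(g))_{\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})} \in \mathbb{T}^{\text{Hom}(\mathbb{G},\mathbb{T})}, \] which is one-to-one precisely when $\text{Hom}(\mathbb{G},\mathbb{T})$ separates the points of $\mathbb{G}$ (as it does, \emph{e.g.,} for every abelian $\mathbb{G}$). In particular, for abelian $\mathbb{G}$ the group $(\mathbb{G},\tau_{\, \text{Bohr}})$ is, up to this identification, a subgroup of a compact group, and hence totally bounded.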
\medskip Finally, our set-theoretic notation is mostly standard and follows \cite{MR597342}. In particular, recall that an ultrafilter $p \in \omega^{*}$ is a \emph{P-point} if every function on $\omega$ is finite-to-one or constant when restricted to some set in the ultrafilter, and a \emph{Q-point} if every finite-to-one function on $\omega$ becomes one-to-one when restricted to a suitable set in the ultrafilter. The ultrafilters $p \in \omega^{*}$ which are both P-points and Q-points are called \emph{selective} ultrafilters. For more background on set-theoretic aspects of ultrafilters see \cite{MR2757533}. \medskip \section{Iterated ultrapowers as \texorpdfstring{$p$}{Lg}-compact groups}\label{p-comp} In this section we shall give a canonical construction of a $p$-compact group for every ultrafilter $p \in \omega^{*}$. This will be done by studying the iterated ultrapower construction. \medskip Fix a group $\mathbb{G}$ and put $\mathsf{ult}_{p}^{0}(\mathbb{G})=\mathbb{G}$. Given an ordinal $\alpha$ with $\alpha > 0$, let \[ \mathsf{ult}_{p}^{\alpha}(\mathbb{G})=\mathsf{ult}_{p}\left(\varinjlim_{\beta < \alpha}\mathsf{ult}_{p}^{\beta}(\mathbb{G})\right), \] where $\varinjlim_{\beta < \alpha}\mathsf{ult}_{p}^{\beta}(\mathbb{G})$ denotes the direct limit of the direct system $\langle \mathsf{ult}_{p}^{\beta}(\mathbb{G}), \varphi_{\delta\beta} \colon \delta \leqslant \beta < \alpha \rangle$ with the following properties: \begin{enumerate} \item $\varphi_{\delta\delta}$ is the identity function on $\mathsf{ult}_{p}^{\delta}(\mathbb{G})$, and \item $\varphi_{\delta\beta}\colon \mathsf{ult}_{p}^{\delta}(\mathbb{G}) \to \mathsf{ult}_{p}^{\beta}(\mathbb{G})$ is the canonical embedding of $\mathsf{ult}_{p}^{\delta}(\mathbb{G})$ into $\mathsf{ult}_{p}^{\beta}(\mathbb{G})$, defined recursively by letting $\varphi_{\delta, \gamma +1}([f])$ be the equivalence class of the constant function with value $[f]$, and, for a limit ordinal $\gamma$, letting $\varphi_{\delta, \gamma}([f])$ be the direct limit of $\varphi_{\delta, \beta }([f])$, $\beta<\gamma$. \end{enumerate} In what follows, we will write $\mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})$ for $\varinjlim_{\beta < \alpha}\mathsf{ult}_{p}^{\beta}(\mathbb{G})$. Moreover, we will treat $\mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})$ as $\bigcup_{\beta < \alpha}\mathsf{ult}_{p}^{\beta}(\mathbb{G})$ and, in such case, we put $ \text{ht}(a)=\min\{\beta < \alpha \colon a \in \mathsf{ult}_{p}^{\beta}(\mathbb{G})\} $ for every $a \in \mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})$. This is, of course, a formal abuse, but it is justified by our identification of $\mathbb{G}$ with a subgroup of $\mathsf{ult}_{p}(\mathbb{G})$. In this way we can avoid talking about direct limit constructions. \medskip We now consider $(\mathbb{G},\tau_{\, \text{Bohr}})$. Once an ultrafilter $p \in \omega^{*}$ has been fixed, this topology naturally \emph{lifts} to a topology on $\mathsf{ult}_{p}(\mathbb{G})$ as follows: Every $\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})$ naturally extends to a homomorphism $\overline{\Phi} \in \text{Hom}(\mathsf{ult}_{p}(\mathbb{G}),\mathbb{T})$ by letting \begin{equation}\label{e:p-lim} \overline{\Phi}([f])=p\text{ -}\lim_{n \in \omega}\Phi(f(n)). \end{equation} By {\L}o\'s's theorem, $\overline \Phi$ is indeed a homomorphism from $\mathsf{ult}_p(\mathbb{G})$ to $\mathbb{T}$ and hence the weakest topology making every $\overline{\Phi}$ continuous, where $\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})$, is a group topology on $\mathsf{ult}_p(\mathbb{G})$.
This topology will be denoted by $\tau_{\, \overline{\text{Bohr}}}$. \medskip The following is a trivial, yet fundamental fact: \begin{lemma}\label{p-lim} For every $f:\omega\to \mathbb{G}$, $[f]=p$-$\lim_{n \in \omega} f(n)$ in $\tau_{\, \overline{\text{Bohr}}}$. \end{lemma} \begin{proof} This follows directly from the definition of $\overline \Phi$ and the identification of $\mathbb G$ with a subgroup of $\mathsf{ult}_p(\mathbb{G})$. \end{proof} The group that will be relevant for us is the group $\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})$, endowed with the topology $\tau_{\, \overline{\text{Bohr}}}$ induced by the homomorphisms in $\text{Hom}(\mathbb{G},\mathbb{T})$ extended recursively all the way to $\mathsf{ult}_p^{\omega_{1}}(\mathbb{G})$ by the same formula (\ref{e:p-lim}). \medskip The (iterated) ultrapower with this topology is usually not Hausdorff (see \cite{MR2207497,MR698308}), so we identify the inseparable elements and denote by $(\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G}),\tau_{\, \overline{\text{Bohr}}})$ the resulting quotient. More explicitly, \[ \mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G}) = \mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})/K, \] where $K=\bigcap_{\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})}\text{Ker}(\overline{\Phi})$. The natural projection will be denoted by \[ \pi \colon \mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G}) \to \mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})/K. \] \medskip The main reason for considering the iterated Fubini powers here is the following simple and crucial fact: \begin{proposition}\label{p-compact} Let $p\in\omega^*$ be an ultrafilter. \begin{enumerate}[\upshape (1)] \item $\mathsf{ult}_{p}^{\alpha}(\mathbb{G}) \simeq \mathsf{ult}_{p^{\alpha}}(\mathbb{G})$ for $\alpha < \omega_{1}$, and \item $(\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G}),\tau_{\, \overline{\text{Bohr}}})$ is a Hausdorff $p$-compact topological group. \end{enumerate} \end{proposition} \begin{proof} To prove (1), fix $\alpha < \omega_{1}$. Given $[f] \in \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$, recursively define a tree $T_{f} \in \mathbb{L}_{p}(T_{\alpha})$ and a function $\hat{f} \colon T_{f} \to \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$ so that \begin{itemize} \item $\text{succ}_{T_{f}}(\varnothing)=\text{dom}(f_{\varnothing})$ and $\hat{f}(\varnothing)=[f_{\varnothing}]$, where $f_{\varnothing}=f$; \item if $\hat{f}(t)$ is defined, say $\hat{f}(t)=[f_{t}]$, then $\text{succ}_{T_{f}}(t)=\text{dom}(f_{t})$ and $\hat{f}(t^{\frown}n) = f_{t}(n)$ for every $n \in \text{succ}_{T_{f}}(t)$. \end{itemize} \medskip We define $\varphi \colon \mathsf{ult}_{p}^{\alpha}(\mathbb{G}) \to \mathsf{ult}_{p^{\alpha}}(\mathbb{G})$ given by \[ \varphi([f])= [\restr{\hat{f}}{\Omega_{0}(T_{f})}]. \] \begin{claim} $\varphi$ is an isomorphism. \end{claim} \begin{proofclaim} To see that $\varphi$ is a surjection, let $[f] \in \mathsf{ult}_{p^{\alpha}}(\mathbb{G})$ be such that $\text{dom}(f) = \Omega_{0}(T_{f})$ for some $T_{f} \in \mathbb{L}_{p}(T_{\alpha})$. Consider the function $\check{f} \colon T_{f} \to \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$ defined recursively by \begin{itemize} \item $\restr{\check{f}}{\Omega_{0}(T_{f})}=f$ and, \item if $ t \in T_{\alpha}^{+}$, then $\check{f}(t)=[\langle \check{f}(t^{\frown}n) \colon n \in \text{succ}_{T_{f}}(t)\rangle]$. \end{itemize} \medskip Notice that the function $\check{f}$ satisfies that $\check{f}(t) \in \mathsf{ult}_{p}^{\rho_{T_{f}}(t)}(\mathbb{G})$ for every $t \in T_{f}$.
In particular, $\check{f} (\varnothing) \in \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$, and a routine calculation shows that $\varphi(\check{f} (\varnothing)) = [f]$. \medskip To see that $\varphi$ is injective, suppose that $\varphi([f])=\varphi([g])$. Then there exists a tree $T \in \mathbb{L}_{p}(T_{\alpha})$ such that \[ \restr{\hat{f}}{\Omega_{0}(T)} = \restr{\hat{g}}{\Omega_{0}(T)}. \] If we set $h := \restr{\hat{f}}{\Omega_{0}(T)}$, then we can verify recursively that $\check{h}(\varnothing)=[f]=[g]$. Therefore, $\varphi$ is a one-to-one function. \medskip Finally, using again a recursive argument, one can check that $\varphi$ preserves the group structure. \end{proofclaim} \medskip To prove (2), note that by definition $\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G})$ is a Hausdorff topological group. To see that $\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G})$ is $p$-compact, note that it is a continuous image of $\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})$, so it suffices to check that $\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})$ is $p$-compact. Let $f \colon \omega \to \mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})$ be a sequence. For every $n \in \omega$ we have $f(n) \in \mathsf{ult}_{p}(\mathsf{ult}_{p}^{\omega_{1}^{-}}(\mathbb{G}))$, that is, there exists $f_{n} \colon \omega \to \bigcup_{\alpha < \omega_{1}}\mathsf{ult}_{p}^{\alpha}(\mathbb{G})$ such that $f(n)=[f_{n}]$. Thus, for every $n \in \omega$ there exists $\alpha_{n} < \omega_{1}$ such that $f(n) \in \mathsf{ult}_{p}^{\alpha_{n}}(\mathbb{G})$, and hence $[f] \in \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$ for $\alpha =\sup \{\alpha_n + 1: n\in \omega\} < \omega_{1}$. Then $[f]=p$-$\lim_{n \in \omega} f(n)$ in $\tau_{\, \overline{\text{Bohr}}}$, as by the construction $\overline{\Phi}([f])=p$-$\lim_{n \in \omega} \overline \Phi(f(n))$ for every $\Phi \in \text{Hom}(\mathbb{G},\mathbb{T})$. This gives us the $p$-compactness of $\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{G})$. \end{proof} The plan for our construction is as follows: fix an ultrafilter $p \in \omega^{*}$, find a suitable topological group $\mathbb{G}$ without non-trivial convergent sequences and consider $(\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G}),\tau_{\, \overline{\text{Bohr}}})$. The remaining issue is: Does $(\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{G}),\tau_{\, \overline{\text{Bohr}}})$ have non-trivial convergent sequences? \medskip While our approach is applicable to an arbitrary group $\mathbb{G}$, in the remainder of this paper we will be dealing exclusively with \emph{Boolean} groups, \emph{i.e.,} groups where each element is its own inverse.\footnote{The general case will be dealt with in a separate paper.} These groups are, in every infinite cardinality $\kappa$, isomorphic to the group $[\kappa]^{<\omega}$ with the symmetric difference $\triangle$ as the group operation and $\varnothing$ as the neutral element. Every Boolean group is a vector space over the $2$-element field, which we identify with $2=\{0,1\}$. Hence, we can talk, \emph{e.g.,} about \emph{linearly independent} subsets of a Boolean group. Also, since every homomorphism from a Boolean group into the torus $\mathbb T$ takes at most two values (in the unique subgroup of $\mathbb T$ of size $2$) we may and will identify $\text{Hom}([\omega]^{<\omega}, \mathbb T)$ with $\text{Hom}([\omega]^{<\omega}, 2)$ to highlight the fact that there are only two possible values. Hence also $\text{Hom}([\omega]^{<\omega}, 2)$ is a Boolean group and a vector space over the same field.
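\medskip Concretely, every $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ is determined by the set $A_{\Phi}=\{n \in \omega \colon \Phi(\{n\})=1\}$ via the formula \[ \Phi(a)=|a \cap A_{\Phi}| \bmod 2 \qquad (a \in [\omega]^{<\omega}), \] and conversely every $A \subseteq \omega$ determines such a homomorphism. Thus $\text{Hom}([\omega]^{<\omega},2)$ can be identified with $2^{\omega}$; in particular, it has cardinality $\mathfrak{c}$, which is why the groups constructed below embed into $2^{\mathfrak{c}}$.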
\medskip The following theorem is the main result of this section. \begin{theorem}\label{Th:SelectiveUltrafilter} Let $p \in \omega^{*}$ be a selective ultrafilter. Then $(\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$ is a Hausdorff $p$-compact topological Boolean group without non-trivial convergent sequences. \end{theorem} In order to prove this theorem, we apply the first step of our plan. \begin{proposition}\label{p:ground-group} The group $[\omega]^{<\omega}$ endowed with the topology $\tau_{\, \text{Bohr}}$ is a non-discrete Hausdorff topological group without non-trivial convergent sequences. \end{proposition} \begin{proof} It is well-known and easy to see that $\tau_{\, \text{Bohr}}$ is a non-discrete Hausdorff group topology (\emph{e.g.,} see \cite{MR2433295} Section 9.9). To see that $\tau_{\, \text{Bohr}}$ has no non-trivial convergent sequences, assume that $f \colon \omega \to [\omega]^{<\omega}$ is a non-trivial sequence. Since a convergent sequence with finite range in a Hausdorff space is eventually constant, we may assume that $\text{rng}(f)$ is an infinite set. Find an infinite linearly independent set $A \subseteq \text{rng}(f)$ and split it into two infinite pieces $A_{0}$ and $A_{1}$, and take $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ such that $A_{i} \subseteq \Phi^{-1}(i)$ for every $i < 2$. Therefore, $\Phi$ is a witness that the sequence $f$ does not converge. \end{proof} We say that a sequence $\langle [f_{n}] \colon n \in \omega \rangle \subset \mathsf{ult}_{p}([\omega]^{<\omega})$ is \emph{$p$-separated} if for every $n \ne m \in \omega$ there is a $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ such that $\overline{\Phi}([f_{n}]) \ne \overline{\Phi}([f_{m}])$. In other words, a sequence $\langle [f_{n}] \colon n \in \omega \rangle \subset \mathsf{ult}_{p}([\omega]^{<\omega})$ is $p$-separated if and only if its elements represent distinct elements of $$\mathsf{Ult}_{p}([\omega]^{<\omega})=\mathsf{ult}_{p}([\omega]^{<\omega})/K$$ where $K=\bigcap_{\Phi\in \text{Hom}([\omega]^{<\omega},2)} \text{Ker}(\overline{\Phi})$ and $\pi:\mathsf{ult}_{p}([\omega]^{<\omega})\to \mathsf{Ult}_{p}([\omega]^{<\omega}) $ is the corresponding projection. \medskip We next show that, in general, the plan does not work for all $p \in \omega^{*}$. \begin{lemma}\label{L:ConvergentSequences} The following are equivalent: \begin{enumerate}[\upshape (1)] \item There exists a $p \in \omega^{*}$ such that $(\mathsf{Ult}_{p}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$ has non-trivial convergent sequences. \item There exist a sequence $\langle \Phi_{n} \colon n \in \omega \rangle \subset \text{Hom}([\omega]^{<\omega},2)$ and a mapping \\ $H \colon\text{Hom}([\omega]^{<\omega},2)\to \omega$ such that for every $n \in \omega$ the family $$\{[\omega]^{<\omega}\setminus \text{Ker}(\Phi_{n})\} \cup \{\text{Ker}(\Phi)\colon \ H(\Phi) \leqslant n\}$$ is centered. \end{enumerate} \end{lemma} \begin{proof} Let us prove (1) implies (2). Let $\tilde{f} \colon \omega \to \mathsf{Ult}_{p}([\omega]^{<\omega})$ be a non-trivial sequence, say $\tilde{f}(n)=\pi(f(n))$ ($n \in \omega$) where $f \colon \omega \to \mathsf{ult}_{p}([\omega]^{<\omega})$. Without loss of generality we can assume that $\tilde{f}$ is a one-to-one function converging to $\pi([\langle \varnothing\rangle])$; here $\langle \varnothing\rangle$ denotes the constant sequence each of whose terms is $\varnothing$.
So $\langle [f_{n}] \colon n \in \omega \rangle$ is a $p$-separated sequence $\tau_{\, \overline{\text{Bohr}}}$-converging to $[\langle \varnothing\rangle]$, where $[f_{n}]=f(n)$ for $n \in \omega$. By taking a subsequence if necessary, we may assume that for every $n \in \omega$ there is a $\Phi_{n} \in \text{Hom}([\omega]^{<\omega},2)$ such that $\overline{\Phi}_{n}([f_{n}])=1$. Now, by $\tau_{\, \overline{\text{Bohr}}}$-convergence of $\langle [f_{n}] \colon n \in \omega \rangle$, there is a mapping $H \colon \text{Hom}([\omega]^{<\omega},2) \to \omega$ such that for each $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ and each $n \geqslant H(\Phi)$ it follows that $\overline{\Phi}([f_{n}])=0$. Now we will check that for every $n \in \omega$ the family $\{\text{Ker}(\Phi_{n})^{c}\} \cup \{\text{Ker}(\Phi)\colon$ $H(\Phi) \leqslant n\}$ is centered.\footnote{For a subset $A$ of the group $[\omega]^{<\omega}$, $A^{c}=[\omega]^{<\omega}\setminus A$.} For this, since $([\omega]^{<\omega},\tau_{\, \text{Bohr}})$ has no non-trivial convergent sequences and $[f_{n}] \xrightarrow{\tau_{\, \overline{\text{Bohr}}}} [\langle \varnothing\rangle]$, we may assume that $[f_{n}]\ne [\langle a\rangle]$ for every $\langle n,a \rangle \in \omega \times [\omega]^{<\omega}$, that is, $f_{n}[U]$ is infinite for all $\langle n,U\rangle \in \omega \times p$. Now, fix $n \in \omega$ and let $F \subset \text{Hom}([\omega]^{<\omega},2)$ be a finite set such that $H(\Phi) \leqslant n$ for every $\Phi \in F$. Then $\overline{\Phi}([f_{n}])=0$ for every $\Phi \in F$ and hence there exists $U_{F} \in p$ such that $\Phi(f_{n}(k))=0$ for every $\langle k,\Phi\rangle \in U_{F}\times F$. Since $\overline{\Phi}_{n}([f_{n}])=1$, there exists $U_{n} \in p$ such that $\Phi_{n}(f_{n}(k))=1$ for every $k \in U_{n}$. Put $U=U_{F}\cap U_{n} \in p$. Then $f_{n}[U] \subset \text{Ker}(\Phi_{n})^{c} \cap \bigcap_{\Phi \in F}\text{Ker}(\Phi)$, so we are done. \medskip To prove (2) implies (1), first we observe that there is a sequence $\langle f_{n} \colon n \in \omega \rangle \subset ([\omega]^{<\omega})^{\omega}$ such that for each $F \in [\omega]^{<\omega}$ and every $\sigma \colon F \to [\omega]^{<\omega}$ there exists $k \in \omega$ such that $f_{i}(k) = \sigma(i)$ for all $i \in F$. Now, define $A_{\Phi,n}^{0}=\{k \in \omega \colon \Phi(f_{n}(k))=0\}$ and $A_{\Phi,n}^{1}=\{k \in \omega \colon \Phi(f_{n}(k))=1\}$ for all $(\Phi,n) \in \text{Hom}([\omega]^{<\omega},2)\times \omega$. \medskip Fix $\langle \Phi_{n} \colon n \in \omega \rangle \subset \text{Hom}([\omega]^{<\omega},2)$ and $H \colon\text{Hom}([\omega]^{<\omega},2)\to \omega$ as in (2). \begin{claim}\label{Claim:CenteredFamily} The collection $\bigcup_{n \in \omega}\{A_{\Phi_{n},n}^{1}\} \cup \{A_{\Phi,n}^{0} \colon H(\Phi)\leqslant n\}$ forms a centered family which generates a free filter $\mathcal{F}$. \end{claim} \begin{proofclaim} To show that this family is centered, let $m >0$ and for every $i<m$ fix a finite set $\{\Phi^{j} \colon j < m_{i}\} \subset H^{-1}[i+1]$. Then, considering all choice functions \[ \sigma \in \prod_{i<m}\left(\text{Ker}(\Phi_{i})^{c} \cap \bigcap_{j<m_{i}} \text{Ker}(\Phi^{j})\right) \] (each factor being non-empty by the assumption of clause (2)), we can ensure that \[ \bigcap_{i < m}\left( A_{\Phi_{i},i}^{1} \cap \bigcap_{j < m_{i}}A_{\Phi^{j},i}^{0}\right) \] is an infinite set. To see that the filter $\mathcal{F}$ is free, let $k \in \omega$. If there is an $n \in \omega$ such that $f_{n}(k)=\varnothing$, then $k \notin A_{\Phi_{n}, n}^{1} \in \mathcal{F}$.
Otherwise, since the sequence $\langle f_{n}(k) \colon n \in \omega\rangle$ does not $\tau_{\, \text{Bohr}}$-converge to $\varnothing$, there exists $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ such that $\Phi(f_{n}(k))=1$ for infinitely many $n$. Picking such an $n$ with $H(\Phi)\leqslant n$, we get $k \notin A_{\Phi,n}^{0} \in \mathcal{F}$. \end{proofclaim} \medskip Let $p \in \omega^{*}$ extend $\mathcal{F}$. By Claim \ref{Claim:CenteredFamily}, it follows that \begin{enumerate}[(i)] \item $\overline{\Phi}_{n}([f_{n}])=1$, for every $n \in \omega$. \item The sequence $\langle \overline{\Phi}([f_{n}]) \colon n \in \omega\rangle$ converges to $0$, for every $\Phi \in \text{Hom}([\omega]^{<\omega},2)$, \emph{i.e.,} $\langle [f_{n}] \colon n \in \omega\rangle$ $\tau_{\, \overline{\text{Bohr}}}$-converges to $[\langle \varnothing\rangle]$. \end{enumerate} Finally, taking a subsequence if necessary, we can assume that $\langle [f_{n}] \colon n \in \omega\rangle$ is $p$-separated, and hence $\langle \pi([f_{n}]) \colon n \in \omega\rangle$ is a non-trivial convergent sequence in $(\mathsf{Ult}_{p}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$. \end{proof} \medskip \begin{remark} Note that the filter $\mathcal{F}$ is actually an $F_{\sigma}$-filter. \end{remark} \medskip \begin{theorem} There exists a $p \in \omega^{*}$ such that $(\mathsf{Ult}_{p}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$ has non-trivial convergent sequences. \end{theorem} \begin{proof} We will show that the second clause of Lemma \ref{L:ConvergentSequences} holds. To see this, choose any countable linearly independent set $\{\Phi_{n} \colon n \in \omega\} \subset \text{Hom}([\omega]^{<\omega},2)$. Let $W$ be a vector subspace of $\text{Hom}([\omega]^{<\omega},2)$ such that $\text{Hom}([\omega]^{<\omega},2)=\text{span}\{\Phi_{n} \colon n \in \omega\} \oplus W$. We define the mapping $H \colon\text{Hom}([\omega]^{<\omega},2)\to \omega$ as follows: \[ H(\Phi)=\min\{n \colon \Phi \in \text{span}\{\Phi_{i} \colon i<n\} \oplus W\}. \] Now, let $n \in \omega$ and fix a finite set $\{\Phi^{j} \colon j<m\} \subset H^{-1}[n+1]$. In order to show that \[ \text{Ker}(\Phi_{n})^{c} \cap \bigcap_{j<m} \text{Ker}(\Phi^{j}) \] is infinite, we shall need a fact concerning linear functionals on a vector space. \begin{fact}[\cite{MR1741419}, p. 124] Let $V$ be a vector space and $\Phi, \Phi^{0}, \dots, \Phi^{m-1}$ linear functionals on $V$. Then the following statements are equivalent: \begin{enumerate}[(1)] \item $\bigcap_{j<m} \text{Ker}(\Phi^{j}) \subset \text{Ker}(\Phi)$. \item $\Phi \in \text{span}\{\Phi^{j} \colon j<m\}$.\hfill $\square$ \end{enumerate} \end{fact} Using this fact, and noting that $\Phi_{n} \notin \text{span}\{\Phi^{j} \colon j<m\}$, one sees that \[ \text{Ker}(\Phi_{n})^{c} \cap \bigcap_{j<m} \text{Ker}(\Phi^{j}) \ne \emptyset. \] Pick an arbitrary $a \in \text{Ker}(\Phi_{n})^{c} \cap \bigcap_{j<m} \text{Ker}(\Phi^{j})$ and put \[ K = \text{Ker}(\Phi_{n}) \cap \bigcap_{j<m} \text{Ker}(\Phi^{j}). \] Then $K$, being a subgroup of finite index of the infinite group $[\omega]^{<\omega}$, is an infinite set, and hence $a + K$ is an infinite set too. But \[ a+K \subset \text{Ker}(\Phi_{n})^{c} \cap \bigcap_{j<m} \text{Ker}(\Phi^{j}), \] so we are done. \end{proof} \medskip \begin{corollary}[$\CH$]\label{Cor:P-point} There is a P-point $p \in \omega^{*}$ such that $(\mathsf{Ult}_{p}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$ has non-trivial convergent sequences.
\end{corollary} \begin{proof} It is well-known (\emph{e.g.,} see \cite{MR3692233}) that assuming $\CH$ every $F_{\sigma}$-filter can be extended to a P-point. \end{proof} As $(\mathsf{Ult}_{p}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$ is a topological subgroup of $(\mathsf{Ult}_{p}^{\omega_1}([\omega]^{<\omega}),\tau_{\, \overline{\text{Bohr}}})$, there are ultrafilters (even P-points, assuming {\sf CH}) such that $\mathsf{Ult}_{p}^{\omega_1}([\omega]^{<\omega})$ has a non-trivial convergent sequence. \medskip Selective ultrafilters and Q-points have immediate combinatorial reformulations relevant in our context. Given a non-empty set $I$ and a Boolean group $\mathbb{G}$, we shall call a set $\{ f_{i} \colon i \in I \}$ of functions $f_{i} \colon \omega \to \mathbb{G}$ \emph{$p$-independent} if \[ \left\{n \colon a + \sum_{i \in E}f_{i}(n) = \varnothing\right\} \notin p \] for every non-empty finite set $E \subset I$ and every $a \in \mathbb{G}$. Note that, in particular, a function $f \colon \omega \to \mathbb{G}$ is not constant on an element of $p$ if and only if $\{f\}$ is $p$-independent. Now, we will say that a function $f \colon I \to \mathbb{G}$ is \emph{linearly independent} if $f$ is one-to-one and $\{f(i) \colon i \in I\}$ is a linearly independent set, and that a function $f \colon I \to \mathsf{ult}_{p}(\mathbb{G})$ is \emph{$p$-independent} if $f$ is one-to-one and $\{f_{i} \colon i \in I\}$ is a $p$-independent set, where $f(i)=[f_{i}]$ for $i \in I$. \medskip \begin{proposition}\label{selective} Let $p\in\omega^*$ be an ultrafilter. Then: \begin{enumerate}[\upshape (1)] \item $p$ is a Q-point if and only if for every finite-to-one function $f \colon \omega\to [\omega]^{<\omega}$ there is a set $U\in p$ such that $\restr{f}{U}$ is linearly independent. \item The following are equivalent \begin{enumerate}[\upshape (a)] \item $p$ is selective; \item for every function $f \colon \omega \to [\omega]^{<\omega}$ which is not constant on an element of $p$ there is a set $U\in p$ such that $\restr{f}{U}$ is linearly independent; \item for every $p$-independent set $\{f_n \colon n\in\omega \}$ of functions $f_n \colon \omega\to [\omega]^{<\omega}$, there is a set $U \in p$ and a function $g \colon \omega \to \omega$ so that $\restr{f_{n}}{U \setminus g(n)}$ is one-to-one for $n \in \omega$, $f_{n}[U \setminus g(n)] \cap f_{m}[U \setminus g(m)] = \varnothing$ if $n \ne m$, and \[ \bigsqcup_{n \in \omega} f_{n}[U \setminus g(n)] \] is linearly independent.\footnote{Here $\sqcup$ denotes the disjoint union.} \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Let us prove (1). Suppose first that $p$ is a Q-point. Let $f \colon \omega\to [\omega]^{<\omega}$ be a finite-to-one function. Recursively define a strictly increasing sequence $\langle n_{k} \colon k \in \omega\rangle$ of elements of $\omega$ and a strictly increasing sequence of finite subgroups $\langle H_{n} \colon n \in \omega\rangle$ of $[\omega]^{<\omega}$ so that \begin{enumerate}[\upshape (i)] \item $H_{n} \cap \text{rng}(f) \ne \emptyset$ for all $n \in \omega$, and \item $n_{k}=\max f^{-1}[H_{k}]\ \& \ f^{\prime\prime}[0,n_{k}] \subset H_{k+1}$, for all $k \in \omega$. \end{enumerate} Then, partitioning $\omega$ into the union of the even-indexed intervals $[n_{2i},n_{2i+1})$ and the union of the odd-indexed ones, one of these two sets is in $p$, say \[ A=\bigcup_{i \in \omega}[n_{2i},n_{2i+1}) \in p.
\] Applying Q-pointness, we can assume that there exists a $U \in p$ such that \[ |[n_{2i},n_{2i+1}) \cap U|=1 \text{ for every } i \in \omega, \] and $U \subseteq A$. By item (ii) and since $\langle H_{n} \colon n \in \omega\rangle$ is a strictly increasing sequence, it follows that $\restr{f}{U}$ is one-to-one and $\{f(n) \colon n\in U \}$ is linearly independent. Suppose now that for every finite-to-one function $f \colon \omega\to [\omega]^{<\omega}$ there is a $U\in p$ such that $\restr{f}{U}$ is one-to-one and $\{f(n) \colon n\in U \}$ is linearly independent. Let $\langle I_{n} \colon n \in \omega\rangle$ be a partition of $\omega$ into finite sets. Define a finite-to-one function $f \colon \omega \to [\omega]^{<\omega}$ by putting $f(k)=\{n\}$ for each $k \in I_{n}$. Then there is a $U \in p$ such that $\restr{f}{U}$ is one-to-one and $\{f(n) \colon n\in U \}$ is linearly independent. Note that necessarily $|I_{n} \cap U| \leqslant 1$ for every $n \in \omega$, and therefore $p$ is a Q-point. \medskip (2) To see that (a) implies (b), let $f \colon \omega \to [\omega]^{<\omega}$ be a function which is not constant on an element of $p$. Using P-pointness, we may assume without loss of generality that $f$ is a finite-to-one function. So, by item (1), there is a $U \in p$ such that $\restr{f}{U}$ is one-to-one and $\{f(n) \colon n\in U \}$ is linearly independent. \medskip To see that (b) implies (a), note first that $p$ is a P-point: given $g \colon \omega \to \omega$ which is not constant on an element of $p$, apply (b) to the function $n \mapsto \{g(n)\}$ to obtain a $U \in p$ on which $g$ is one-to-one. To verify that $p$ is a Q-point, notice that every finite-to-one function $f \colon \omega \to [\omega]^{<\omega}$ is not constant on an element of $p$; thus, by (b) and clause (1), we get the desired conclusion. \medskip To prove that (a) implies (c), assume that $\{f_{n} \colon n \in \omega\}$ is a $p$-independent set of functions $f_n \colon \omega\to [\omega]^{<\omega}$. \begin{fact}\label{F:p-independent} Given a finite $p$-independent set $\{f_{i} \colon i <n\}$ and a finite linearly independent set $A \subset [\omega]^{<\omega}$, the set of all $m \in \omega$ such that $A \sqcup \{f_{i}(m) \colon i <n\}$ is linearly independent belongs to $p$.\hfill $\square$ \end{fact} Using Fact \ref{F:p-independent}, we can recursively construct a $p$-branching tree $T \subset \omega^{<\omega}$ such that for every $t \in T$, \[ \text{succ}_{T}(t)=\{m \colon A_{t} \sqcup \{f_{i}(m) \colon i \leqslant |t|\} \text{ is linearly independent}\}, \] where $A_{t}=\{f_{i}(t(j)) \colon i < |t| \ \& \ j \in [i,|t|)\}$. \medskip By the Galvin-Shelah theorem (\cite[Theorem 4.5.3]{MR1350295}), there is a branch $x \in [T]$ such that $\text{rng}(x) \in p$. Thus, if we put $U = \text{rng}(x)$ and $g(n) = \max(\restr{x}{n})$ for $n \in \omega$, we get the properties required in (c). \medskip Finally, notice that (b) is a particular instance of (c) when $\{f_{n} \colon n \in \omega\} = \{f\}$. Therefore, (c) implies (b). \end{proof} \medskip \begin{remark}\label{R:selective} In the previous proposition, the group $[\omega]^{<\omega}$ can be replaced by an arbitrary Boolean group, and the conclusions remain true. \end{remark} \medskip For technical reasons, it will be necessary to reformulate the notion of $p$-indepen\-dence. \begin{lemma}\label{L:p-independence} Let $\mathbb{G}$ be a Boolean group and $0 < \alpha < \omega_{1}$.
Then: \begin{enumerate}[\upshape (1)] \item A set $\{ f_{i} \colon i \in I \}$ of functions $f_{i} \colon \omega \to \mathbb{G}$ is $p$-independent if and only if the function \[ \tilde{f}\colon I \to \mathsf{ult}_{p}^{1}(\mathbb{G})/\mathsf{ult}_{p}^{0}(\mathbb{G}) \] defined by $\tilde{f}(i)=\pi_{0}^{1}([f_{i}])$ for $i \in I$ is linearly independent, where $\pi_{0}^{1} \colon \mathsf{ult}_{p}^{1}(\mathbb{G}) \to \mathsf{ult}_{p}^{1}(\mathbb{G})/\mathsf{ult}_{p}^{0}(\mathbb{G})$ denotes the natural projection. \item A set $\{ f_{i} \colon i \in I \}$ of functions $f_{i} \colon \omega \to \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$ is $p$-independent if and only if the set $\{\tilde{f}_{i} \colon i \in I\}$ of functions $\tilde{f}_{i} \colon \omega \to \mathsf{ult}_{p}^{\alpha}(\mathbb{G})/\mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})$ is a $p$-independent set, where each $\tilde{f}_{i}$ is defined by $\tilde{f}_{i}(n)=\pi_{\alpha^{-}}^{\alpha}(f_{i}(n))$ for $n \in \omega$ and \[ \pi^{\alpha}_{\alpha^{-}} \colon \mathsf{ult}_{p}^{\alpha}(\mathbb{G}) \to \mathsf{ult}_{p}^{\alpha}(\mathbb{G})/ \mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G}) \] denotes the natural projection. \end{enumerate} \end{lemma} \begin{proof} To see (1), note that \[ \sum_{i \in E}[f_{i}]=[\langle a \rangle] \] iff \[ \left\{n \colon a + \sum_{i \in E}f_{i}(n) = \varnothing\right\} \in p, \] for every non-empty finite set $E \subset I$ and every $a \in \mathbb{G}$. \medskip To see (2), let $E \subseteq I$ be a non-empty finite set and $a \in \mathsf{ult}_{p}^{\alpha}(\mathbb{G})$, and notice that \[ \left\{n \colon \sum_{i \in E}\tilde{f}_{i}(n) = \pi^{\alpha}_{\alpha^{-}}(a) \right\} \in p \] iff \[ \left\{ n \colon a + \sum_{i \in E} f_{i}(n) \in \mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})\right\}\in p \] iff \[ \left\{ n \colon (a + [f])+ \sum_{i \in E} f_{i}(n)=\varnothing\right\}\in p, \] where for some $U \in p$ we have that $f(n)=a + \sum_{i \in E} f_{i}(n)\in \mathsf{ult}_{p}^{\alpha^{-}}(\mathbb{G})$ for $n \in U$. \end{proof} \medskip Note also that if $\text{ht}([f])=\alpha$ for $\alpha > 0$, then $f$ is not constant on an element of $p$ (equivalently, $\{f\}$ is $p$-independent). \medskip \begin{lemma}\label{L:main} Let $p$ be a selective ultrafilter, let $0 < \alpha < \omega_{1}$, and let $[f] \in \mathsf{ult}_{p}^{\alpha}([\omega]^{<\omega})$. If $f$ is not constant on an element of $p$, then there is a tree $T \in \mathbb{L}_{p}(T_{\alpha})$ with $T \subseteq T_{f}$ such that $\restr{\hat{f}}{\Omega_{0}(T)}$ is linearly independent.\footnote{Here, we are using the notation from the proof of Proposition \ref{p-compact} (1).} \end{lemma} \begin{proof} First, if $\alpha = 1$, then the conclusion of the lemma follows from Proposition \ref{selective} (2) (b). Thus, we may assume that $\alpha \geqslant 2$. \medskip We plan to construct a tree $T \in \mathbb{L}_{p}(T_{\alpha})$ with $T \subseteq T_{f}$, so that the following hold for any $\beta \leqslant \alpha$: \begin{itemize} \item if $\beta > 0$, then $\langle \hat{f}(t) \colon t \in \Omega_{\beta}(T)\rangle$ forms a $p$-independent sequence; \item if $\beta = 0$, then $\langle \hat{f}(t) \colon t \in \Omega_{0}(T)\rangle$ forms a linearly independent sequence.
\end{itemize} \medskip In order to do this, we first recursively construct a tree $T^{*} \in \mathbb{L}_{p}(T_{\alpha})$ with $T^{*} \subseteq T_{f}$, so that the following hold for any $t \in T^{*}$ with $\rho_{T^{*}}(t) \geqslant 1$: \begin{itemize} \item if $\text{ht}(\hat{f}(t))=1$, then $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T^{*}}(t)\rangle \subset [\omega]^{<\omega}$ forms a linearly independent sequence; \item if $\text{ht}(\hat{f}(t))=\beta + 1$ with $\beta \geqslant 1$, then $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T^{*}}(t)\rangle \subset \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega})$ forms a $p$-independent sequence; \item if $\text{ht}(\hat{f}(t))$ is a limit ordinal, then $\langle \text{ht}(\hat{f}(t^{\frown}n)) \colon n \in \text{succ}_{T^{*}}(t)\rangle$ is a strictly increasing sequence of non-zero ordinals. \end{itemize} \medskip At step $t$. If $\text{ht}(\hat{f}(t))=1$ and $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T_{f}}(t)\rangle$ is not constant on an element of $p$, then $\rho_{T_{f}}(t)=1$ and applying Proposition \ref{selective} (2) (b) there exists $U \in p$ with $U \subseteq \text{succ}_{T_{f}}(t)$ such that $\langle \hat{f}(t^{\frown}n) \colon n \in U\rangle$ is linearly independent. Therefore, in this case we put $\text{succ}_{T^{*}}(t)=U$. \medskip If $\text{ht}(\hat{f}(t))=\beta + 1$ with $\beta \geqslant 1$ and $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T_{f}}(t)\rangle$ is not constant on an element of $p$, then consider the sequence \[ \tilde{f}_{t} \colon \text{succ}_{T_{f}}(t) \to \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega})/ \mathsf{ult}_{p}^{\beta^{-}}([\omega]^{<\omega}) \] defined by $\tilde{f}_{t}(n)= \pi^{\beta}_{\beta^{-}}(\hat{f}(t^{\frown}n))$ for $n \in \text{succ}_{T_{f}}(t)$. Since $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T_{f}}(t)\rangle$ is not constant on an element of $p$, by Lemma \ref{L:p-independence} (2), the sequence $\tilde{f}_{t}$ is not constant on an element of $p$. Therefore, applying Proposition \ref{selective} (2) (b) and Remark \ref{R:selective}, we can find an element $U \in p$ with $U \subseteq \text{succ}_{T_{f}}(t)$ such that $\restr{\tilde{f}_{t}}{U}$ is linearly independent. Thus, by Lemma \ref{L:p-independence} (1), putting $\text{succ}_{T^{*}}(t)=U$ we can conclude that $\langle \hat{f}(t^{\frown}n) \colon n \in \text{succ}_{T^{*}}(t)\rangle$ forms a $p$-independent sequence. \medskip If $\text{ht}(\hat{f}(t))=\beta$ is a limit ordinal, then for every $\delta < \beta$ we set $U_{\delta}=\{n \in \text{succ}_{T_{f}}(t) \colon \text{ht}(\hat{f}(t^{\frown}n))=\delta\}$. Then \[ \bigsqcup_{\delta < \beta}U_{\delta}=\text{succ}_{T_{f}}(t), \] where each $U_{\delta} \notin p$. The selectiveness of $p$ implies that there is a $U \in p$ such that $|U \cap U_{\delta}|\leqslant 1$ for every $\delta < \beta$; moreover, since selective ultrafilters are Ramsey, shrinking $U$ if necessary we may assume that $\langle \text{ht}(\hat{f}(t^{\frown}n)) \colon n \in U\rangle$ is strictly increasing. Thus, in this case put $\text{succ}_{T^{*}}(t)=U \setminus U_{0}$. This concludes the recursive construction of $T^{*}$. \medskip Notice that $\rho_{T^{*}}(t)=\text{ht}(\hat{f}(t))$ for every $t \in T^{*}$.
Now given a tree $T^{\prime} \in \mathbb{L}_{p}(T_{\alpha})$ with $T^{\prime} \subseteq T^{*}$, we can canonically list its members as $\{t_{k}^{T^{\prime}} \colon k <\omega\}$ so that \begin{itemize} \item $t_{k}^{T^{\prime}} \subset t_{l}^{T^{\prime}}$ entails $k < l$; \item $t_{k}^{T^{\prime}}=t^{\frown} n$, $t_{l}^{T^{\prime}}=t^{\frown} m$, $\text{ht}(\hat{f}(t))$ is a limit ordinal, and $\text{ht}(\hat{f}(t^{\frown} n))<\text{ht}(\hat{f}(t^{\frown} m))$ entails $k < l$; \item $t_{k}^{T^{\prime}}=t^{\frown} n$, $t_{l}^{T^{\prime}}=t^{\frown} m$, $\text{ht}(\hat{f}(t))$ is a successor ordinal, and $n < m$ entails $k < l$. \end{itemize} Choose a sufficiently large regular cardinal $\theta$ and a countable elementary submodel $M$ of $\langle H(\theta), \in \rangle$ containing all the relevant objects such as $p$ and $T^{*}$. Fix $U \in p$ so that $U$ is a pseudo-intersection of $p \cap M$ (such a $U$ exists in $p$, as $p$ is a P-point). Put $T^{**}=\restr{T^{*}}{U}$ and $V_{t} = \text{succ}_{T^{**}}(t)$ for $t \in (T^{**})^{+}$. \medskip We unfix $t$, and construct by recursion on $k$ the required tree $T=\{t_{k}^{T} \colon k \in \omega\} \in \mathbb{L}_{p}(T_{\alpha})$ with $T \subseteq T^{**}$, as well as an auxiliary function $g \colon T^{+} \to \omega$ and sets $W_{t} \subseteq V_{t}$ for $t \in T^{+}$ such that the following are satisfied: \begin{enumerate}[\upshape (a)] \item $W_{t}= V_{t} \setminus g(t)=\text{succ}_{T}(t)$ for all $t \in T^{+}$ (by definition). \item For all $k$, \begin{itemize} \item if $\rho_{T}(t^{T}_{k})= 1$, then \[ \left\langle \hat{f}(t_{l}^{T}{}^{\frown}n) \colon \exists \, l \leqslant k \left( n \in W_{t_{l}^{T}} \ \& \ \rho_{T}(t_{l}^{T}{}^{\frown}n)= 0 \right)\right\rangle \subseteq [\omega]^{<\omega} \] forms a linearly independent sequence; \item if $\rho_{T}(t^{T}_{k})= \beta +1$ with $\beta \geqslant 1$, then \[ \left\langle \hat{f}(t_{l}^{T}{}^{\frown}n) \colon \exists \, l \leqslant k \left( n \in W_{t_{l}^{T}} \ \& \ \rho_{T}(t_{l}^{T}{}^{\frown}n)=\beta\right)\right\rangle \subset \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega}) \] forms a $p$-independent sequence; \item if $\rho_{T}(t^{T}_{k})=\beta$ is a limit ordinal, then \[ \left\langle \text{ht}(\hat{f}(t_{l}^{T}{}^{\frown}n)) \colon \exists \, l \leqslant k \left( n \in W_{t_{l}^{T}} \ \& \ \rho_{T}(t_{l}^{T})=\beta\right)\right\rangle \] forms a one-to-one sequence, and \begin{align*} &\sup\left\{\text{ht}(\hat{f}(t_{l}^{T}{}^{\frown}n)) \colon \exists \, l < k \left( \rho_{T}(t_{l}^{T}) \ne \beta \ \& \ n \in W_{t_{l}^{T}} \ \& \ \rho_{T}(t_{l}^{T}{}^{\frown}n) < \beta \right)\right\}\\ &< \min\left\{\text{ht}(\hat{f}(t_{k}^{T}{}^{\frown}n)) \colon n \in W_{t_{k}^{T}}\right\}. \end{align*} \end{itemize} \end{enumerate} Before describing the construction, let us recall a simple fact from linear algebra: \begin{fact}\label{F:linearly-independent} Let $A$ and $B$ be linearly independent sets in a Boolean group with $A$ a finite set. Then there is $A'\subseteq B$ such that $|A'|\leqslant |A|$ and $A\sqcup (B\setminus A')$ is linearly independent. \end{fact} \begin{prooffact} Let $\pi$ denote the quotient map from $\text{span}(A \cup B)$ onto $\text{span}(A \cup B)/\text{span}(A)$. Choose $B^{\prime} \subseteq B$ such that $\pi$ maps $B^{\prime}$ bijectively onto a basis of $\pi[\text{span}(B)]$, and put $A^{\prime}=B \setminus B^{\prime}$. Then $A \sqcup (B \setminus A^{\prime})=A \sqcup B^{\prime}$ is linearly independent. Moreover, every $b \in A^{\prime}$ can be written as $b=a_{b}+c_{b}$ with $a_{b} \in \text{span}(A)$ and $c_{b} \in \text{span}(B^{\prime})$, and the linear independence of $B$ easily implies that $\{a_{b} \colon b \in A^{\prime}\}$ is linearly independent; hence $|A^{\prime}| \leqslant \dim \text{span}(A) = |A|$. \end{prooffact} \medskip \textsf{Basic step} $k=0$. So $t_{0}^{T}=\varnothing$.
We put $g(t_{0}^{T})=0$ and hence $W_{t_{0}^{T}}=V_{t_{0}^{T}}$. The conditions (a) and (b) are immediate. \medskip \textsf{Recursion step} $k > 0$. Assume $W_{t^{T}_{l}}$ (for $l<k$) as well as $\restr{g}{k}$ have been defined so as to satisfy (a) and (b). In particular, we already know $t_{k}^{T}$, for it is of the form $t_{l}^{T}{}^{\frown}n$ for some $n \in W_{t_{l}^{T}}$ where $l < k$. Put $\rho_{T}(t^{T}_{k})=\gamma$ and assume $\gamma \geqslant 1$ (if $\gamma = 0$, there is nothing to define). Note that, since (b) is satisfied for $l$, we must have $\rho_{T}(t_{l}^{T})=\gamma + 1$ and \[ \left\langle \hat{f}(t_{j}^{T}{}^{\frown}m) \colon \exists \, j \leqslant l \left( m \in W_{t_{j}^{T}} \ \& \ \rho_{T}(t_{j}^{T}{}^{\frown}m) = \gamma \right)\right\rangle \subset \mathsf{ult}_{p}^{\gamma}([\omega]^{<\omega}) \] is a $p$-independent sequence. Put \begin{align*} A_{l} &= \{t_{l^{\prime}}^{T} \colon l^{\prime} \leqslant k \ \& \ \rho_{T}(t_{l^{\prime}}^{T}) = \gamma\} \\ &\subset \left\{t_{j}^{T}{}^{\frown}m \colon \exists \, j \leqslant l \left( m \in W_{t_{j}^{T}} \ \& \ \rho_{T}(t_{j}^{T}{}^{\frown}m)= \gamma \right) \right\} \end{align*} and $ A_{l}^{-}=A_{l}\setminus \{t_{k}^{T}\}$. \medskip If $\gamma= 1$, then applying Proposition \ref{selective} (2) (c) there exist $V \in p$ and a function $g_{l} \colon A_{l} \to \omega$ such that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l} \ \& \ m \in V \setminus g_{l}(t) \right \rangle \subseteq [\omega]^{<\omega} \] is a linearly independent sequence. Using the elementarity of $M$ and our assumption about $U$, we conclude that there exists a function $g_{l,U} \colon A_{l} \to \omega$ such that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l} \ \& \ m \in U \setminus g_{l,U}(t) \right \rangle \subseteq [\omega]^{<\omega} \] is a linearly independent sequence. Note that $V_{t_{k}^{T}} \setminus g_{l,U}(t_{k}^{T}) \subseteq U \setminus g_{l,U}(t_{k}^{T})$ and $W_{t} \setminus g_{l,U}(t) \subseteq U \setminus g_{l,U}(t)$ for $t \in A_{l}^{-}$. Since $A_{l}$ is a finite set, using Fact \ref{F:linearly-independent}, we can find a natural number $g(t_{k}^{T}) \geqslant g_{l,U}(t_{k}^{T})$ so that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l}^{-} \ \& \ m \in W_{t} \right \rangle \cup \left \langle \hat{f}(t_{k}^{T}{}^{\frown}m) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T}) \right \rangle \] forms a linearly independent sequence, as required. \medskip For the case $\gamma= \beta +1$ with $\beta \geqslant 1$, we proceed similarly to the previous case. Given $t\in A_{l}$, let \[ \tilde{f}_{t} \colon V_{t} \to \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega})/\mathsf{ult}_{p}^{\beta^{-}}([\omega]^{<\omega}) \] be defined by $\tilde{f}_{t}(m)= \pi^{\beta}_{\beta^{-}}(\hat{f}(t^{\frown}m))$ for $m \in V_{t}$. By Lemma \ref{L:p-independence} (2), $\{\tilde{f}_{t} \colon t \in A_{l}\}$ is a $p$-independent set. Thus, applying Proposition \ref{selective} (2) (c) and Remark \ref{R:selective}, we can find an element $V \in p$ and a function $g_{l} \colon A_{l} \to \omega$ such that \[ \left \langle \tilde{f}_{t}(m) \colon t \in A_{l} \ \& \ m \in V \setminus g_{l}(t) \right \rangle \subseteq \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega})/\mathsf{ult}_{p}^{\beta^{-}}([\omega]^{<\omega}) \] is a linearly independent sequence.
By the elementarity of $M$ and the choice of $U$, there exists a function $g_{l,U} \colon A_{l} \to \omega$ such that \[ \left \langle \tilde{f}_{t}(m) \colon t \in A_{l} \ \& \ m \in U \setminus g_{l,U}(t) \right \rangle \] is a linearly independent sequence. Since $A_{l}$ is a finite set, $V_{t_{k}^{T}} \setminus g_{l,U}(t_{k}^{T}) \subseteq U \setminus g_{l,U}(t_{k}^{T})$ and $W_{t} \setminus g_{l,U}(t) \subseteq U \setminus g_{l,U}(t)$ for $t \in A_{l}^{-}$, using Fact \ref{F:linearly-independent}, we can find a natural number $g(t_{k}^{T}) \geqslant g_{l,U}(t_{k}^{T})$ so that \[ \left \langle \tilde{f}_{t}(m) \colon t \in A_{l}^{-} \ \& \ m \in W_{t} \right \rangle \cup \left \langle \tilde{f}_{t_{k}^{T}}(m) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T}) \right \rangle \] forms a linearly independent sequence and, by Lemma \ref{L:p-independence} (1), this means that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l}^{-} \ \& \ m \in W_{t} \right \rangle \cup \left \langle \hat{f}(t_{k}^{T}{}^{\frown}m) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T}) \right \rangle \subset \mathsf{ult}_{p}^{\beta}([\omega]^{<\omega}) \] forms a $p$-independent sequence, as required. \medskip If $\gamma$ is a limit ordinal, then applying Proposition \ref{selective} (2) (c) there exist $V \in p$ and a function $g_{l} \colon A_{l} \to \omega$ such that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l} \ \& \ m \in V \setminus g_{l}(t) \right \rangle \subset \mathsf{ult}_{p}^{\gamma^{-}}([\omega]^{<\omega}) \] is a linearly independent sequence. Thus, proceeding as in the previous cases, it is possible to find a function $g_{l,U} \colon A_{l} \to \omega$ and a natural number $g(t_{k}^{T}) \geqslant g_{l,U}(t_{k}^{T})$ so that \[ \left \langle \hat{f}(t^{\frown}m) \colon t \in A_{l}^{-} \ \& \ m \in W_{t} \right \rangle \cup \left \langle \hat{f}(t_{k}^{T}{}^{\frown}m) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T}) \right \rangle \] forms a linearly independent sequence. In particular, \[ \left \langle \text{ht}(\hat{f}(t^{\frown}m)) \colon t \in A_{l}^{-} \ \& \ m \in W_{t} \right \rangle \cup \left \langle \text{ht}(\hat{f}(t_{k}^{T}{}^{\frown}m)) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T}) \right \rangle \] forms a one-to-one sequence and, since $\gamma$ is a limit ordinal, we may assume without loss of generality that \begin{align*} &\sup\left\{\text{ht}(\hat{f}(t_{l}^{T}{}^{\frown}m)) \colon \exists \, l < k \left( \rho_{T}(t_{l}^{T}) \ne \gamma \ \& \ m \in W_{t_{l}^{T}} \ \& \ \rho_{T}(t_{l}^{T}{}^{\frown}m) < \gamma \right)\right\}\\ &< \min\left\{\text{ht}(\hat{f}(t_{k}^{T}{}^{\frown}m)) \colon m \in V_{t_{k}^{T}} \setminus g(t_{k}^{T})\right\}, \end{align*} as required. \end{proof} \medskip Now we are ready to prove the main theorem of this section. \medskip \begin{proofmain}{\ref{Th:SelectiveUltrafilter}} According to Proposition \ref{p-compact}, $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ is a Hausdorff $p$-compact topological group. Since a topological group is homogeneous, it therefore remains only to show that $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ contains no non-trivial sequences converging to $\pi([\langle \varnothing\rangle])$. To see this, let $\tilde{f} \colon \omega \to \mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ be a non-trivial sequence, say $\tilde{f}(n)=\pi(f(n))$ ($n \in \omega$) where $f \colon \omega \to \mathsf{ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$. Without loss of generality we can assume that $\tilde{f}$ is a one-to-one function.
Thus, since \[ \mathsf{ult}_{p}^{\omega_{1}}([\omega]^{<\omega})=\mathsf{ult}_{p}\left(\bigcup_{\alpha < \omega_{1}}\mathsf{ult}_{p}^{\alpha}([\omega]^{<\omega})\right), \] there exists $0 < \alpha < \omega_{1}$ so that $[f] \in \mathsf{ult}_{p}^{\alpha}([\omega]^{<\omega})$ and, as $\tilde{f}$ is one-to-one, $f$ is not constant on an element of $p$. By Lemma \ref{L:main}, there is a tree $T \in \mathbb{L}_{p}(T_{\alpha})$ with $T \subseteq T_{f}$ such that $\restr{\hat{f}}{\Omega_{0}(T)}$ is linearly independent. Note that $\hat{f}[\Omega_{0}(T)] \subseteq [\omega]^{<\omega}$. Take $\Phi \in \text{Hom}([\omega]^{<\omega},2)$ so that $\hat{f}[\Omega_{0}(T)] \subseteq \Phi^{-1}(1)$; such a $\Phi$ exists since $\hat{f}[\Omega_{0}(T)]$ is linearly independent. Since $\text{succ}_{T}(t) \in p$ for every $t \in T^{+}$, an easy induction on the rank shows that $\overline{\Phi}(\hat{f}(t))=1$ for every $t \in T$, and hence $\overline{\Phi}([f])=\overline{\Phi}(\hat{f}(\varnothing))=1$. Thus, $\overline{\Phi}$ is a witness that the sequence $f$ does not $\tau_{\, \overline{\text{Bohr}}}$-converge to $[\langle \varnothing\rangle]$ and, since $\tilde{f}$ is one-to-one, in fact $\tilde{f}$ does not converge to $\pi([\langle \varnothing\rangle])$. \end{proofmain} \medskip \section{Countably compact group without convergent sequences} \label{cc-group} In this section we develop the ideas introduced in the previous section into a {\sf ZFC} construction of a countably compact subgroup of $2^\mathfrak c$ without non-trivial convergent sequences. Recall that any Boolean group of size $\mathfrak c$ (in particular $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$) is isomorphic to $[\mathfrak c]^{<\omega}$. In fact, the extension of homomorphisms produces a (topological and algebraic) embedding $h$ of $(\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega}), \tau_{\, \overline{\text{Bohr}}})$ into $2 ^\mathfrak c \simeq 2^{\text{Hom}([\omega]^{<\omega}, 2)}$ defined by $$h([f])(\Phi)= \overline\Phi ([f]).$$ Similarly to the ultrapower construction, we shall extend the Bohr topology $\tau_{\, \text{Bohr}}$ on $[\omega]^{<\omega}$ to a group topology $\tau_{\, \overline{\text{Bohr}}}$ on $[\mathfrak c]^{<\omega}$ to obtain the result. The difference is that rather than using a single ultrafilter, we shall use a carefully constructed $\mathfrak c$-sized family of ultrafilters. \begin{theorem}\label{Th: main} There is a Hausdorff countably compact topological Boolean group without non-trivial convergent sequences. \end{theorem} \begin{proof} We shall construct a countably compact topology on $[\mathfrak c]^{<\omega}$ starting from $([\omega]^{<\omega}, \tau_{\, \text{Bohr}})$ as follows: \medskip Fix an indexed family $\{f_\alpha \colon \alpha\in [\omega,\mathfrak c)\}\subset ([\mathfrak c]^{<\omega})^\omega$ of one-to-one sequences such that \begin{enumerate} \item for every infinite $X\subseteq[\mathfrak c]^{<\omega}$ there is an $\alpha\in [\omega,\mathfrak c)$ with $\text{rng}(f_\alpha)\subseteq X$, \item each $f_\alpha$ is a sequence of linearly independent elements, and \item $\text{rng}(f_\alpha)\subset [\alpha]^{<\omega}$ for every $\alpha\in [\omega,\mathfrak c)$.
\end{enumerate} (Such a family exists: every infinite subset of $[\mathfrak c]^{<\omega}$ contains a countably infinite linearly independent subset, there are only $\mathfrak c$ many countable subsets of $[\mathfrak c]^{<\omega}$ and, since $\operatorname{cf}(\mathfrak c)>\omega$, each of them is contained in some $[\gamma]^{<\omega}$ with $\gamma<\mathfrak c$; a standard bookkeeping argument then yields the required enumeration.) \medskip Given a sequence $\{p_\alpha \colon \alpha\in [\omega,\mathfrak c)\} \subset \omega^*$, define for every $\Phi\in \text{Hom}([\omega]^{<\omega},2)$ its extension $\overline{\Phi}\in \text{Hom}([\mathfrak c]^{<\omega},2)$ recursively by putting $$\overline{\Phi}(\{\alpha\})=p_\alpha\text{-}\lim_{n \in \omega} \overline \Phi (f_\alpha(n));$$ the recursion is well-founded by condition (3). Note that $[\omega]^{<\omega}$ together with the independent set $\{\{\alpha\}: \alpha \in [\omega,\mathfrak c)\}$ generate the group $[\mathfrak c]^{<\omega}$, so the above definition uniquely extends $\Phi$ to a homomorphism $\overline \Phi\colon [\mathfrak c]^{<\omega}\to 2$. \medskip This allows us to define the topology $\tau_{\, \overline{\text{Bohr}}}$ induced by $\{\overline{\Phi}: \Phi \in \text{Hom}([\omega]^{<\omega},2)\}$ on $[\mathfrak c]^{<\omega}$ as the weakest topology making all $\overline{\Phi}$ continuous (for $\Phi \in \text{Hom}([\omega]^{<\omega},2)$), or equivalently, the group topology having $\{\text{Ker}(\overline{\Phi}): \Phi \in \text{Hom}([\omega]^{<\omega},2)\}$ as a subbasis of the filter of neighbourhoods of the neutral element $\varnothing$. It follows directly from the above observation that, independently of the choice of the ultrafilters, $\tau_{\, \overline{\text{Bohr}}}$ is a countably compact group topology on $[\mathfrak c]^{<\omega}$. Indeed, $\{\alpha\} \in \overline{\{f_{\alpha}(n) \colon n \in \omega\}}^{\tau_{\, \overline{\text{Bohr}}}}$ for every $\alpha\in [\omega,\mathfrak c)$; in fact $\{\alpha\}=p_\alpha\text{-}\lim_{n \in \omega}f_\alpha(n)$. \medskip Call a set $D\in [\mathfrak c]^{\omega}$ \emph{suitably closed} if $\omega\subseteq D$ and $\bigcup_{n\in\omega} f_\alpha(n)\subseteq D$ for every $\alpha\in D\setminus\omega$. The following claim shows that the construction is locally countable. \begin{claim}\label{key} The topology $\tau_{\, \overline{\text{Bohr}}}$ contains no non-trivial convergent sequences if and only if for every suitably closed $D\in [\mathfrak c]^{\omega}$ and every $\alpha\in D\setminus\omega$ there exists $\Psi\in \text{Hom}([D]^{<\omega},2)$ such that \begin{enumerate} \item $\forall \beta \in D\setminus\omega \ \Psi (\{\beta\})=p_\beta\text{-}\lim_{n \in \omega} \Psi(f_\beta(n))$; \item $\forall i\in 2 \ |\{ n: \Psi(f_\alpha (n))=i\}|=\omega$. \end{enumerate} \end{claim} \begin{proofclaim} Given an infinite $X\subseteq [\mathfrak c]^{<\omega}$ there is an $\alpha\in [\omega,\mathfrak c)$ such that $\text{rng}(f_\alpha)\subseteq X$. Let $D$ be suitably closed with $\alpha\in D$, and let $\Psi$ be the homomorphism provided for $D$ and $\alpha$. It follows directly from the definition and property (1) of $\Psi$ that if $\Phi=\Psi \restriction [\omega]^{<\omega}$, then $\Psi=\overline{\Phi} \restriction [D]^{<\omega}$. This implies that $\langle f_\alpha(n) \colon n \in \omega \rangle$ is not a convergent sequence (and hence $X$ is not the range of a non-trivial convergent sequence either), as $\overline{\Phi}$ takes both values $0$ and $1$ infinitely often on the set $\{ f_\alpha(n) \colon n \in \omega\}$. The reverse implication is even easier (and is not needed for the proof). \end{proofclaim} \medskip Note that if this happens then, in particular, $$K=\bigcap_{\Phi \in \text{Hom}([\omega]^{<\omega},2)} \text{Ker}(\overline \Phi)$$ is finite (if $K$ were infinite, there would be an $\alpha$ with $\text{rng}(f_\alpha)\subseteq K$, while the claim provides a $\overline{\Phi}$ taking the value $1$ somewhere on $\text{rng}(f_\alpha)\subseteq K$, a contradiction), and $[\mathfrak c]^{<\omega}/K$ with the quotient topology is the Hausdorff countably compact group without non-trivial convergent sequences we want. \medskip Hence to finish the proof it suffices to produce a suitable family of ultrafilters.
\begin{claim}\label{ult-fam} There is a family $\{p_\alpha \colon \alpha < \mathfrak{c}\}$ of free ultrafilters on $\omega $ such that for every $D\in [\mathfrak c]^\omega$ and every family $\{f_\alpha \colon \alpha \in D\}$ of one-to-one enumerations of linearly independent subsets of $[\mathfrak c]^{<\omega}$ there is a sequence $\langle U_\alpha \colon \alpha\in D\rangle$ such that \begin{enumerate} \item $\{U_\alpha:\alpha\in D\}$ is a family of pairwise disjoint subsets of $\omega$, \item $U_\alpha \in p_\alpha$ for every $ \alpha\in D $, and \item $\{ f_\alpha (n): \alpha\in D \ \& \ n \in U_\alpha\}$ is a linearly independent subset of $[\mathfrak c]^{<\omega}$. \end{enumerate} \end{claim} \begin{proofclaim} Fix a partition $\{I_n: n\in\omega\}$ of $\omega$ into finite sets such that $$ |I_n|>n \cdot \sum_{m<n}|I_m|,$$ and let $$\mathcal B=\{ B\subseteq \omega : \ \forall n\in \omega \ |I_n\setminus B| \leqslant \sum_{m<n}|I_m|\}.$$ Note that $\mathcal B$ is a centered family (given $B_{0},\dots,B_{k-1}\in\mathcal B$, for every $n \geqslant k$ we have $|I_{n}\setminus \bigcap_{i<k}B_{i}| \leqslant k\cdot \sum_{m<n}|I_m| < |I_n|$), and denote by $\mathcal F$ the filter it generates. Note also that if $A$ is an infinite subset of $\omega$ then $\bigcup_{n \in A}I_n\in \mathcal F^+$. \medskip Let $\{ A_\alpha:\alpha\in \mathfrak{c}\}$ be any almost disjoint family of size $\mathfrak c$ of infinite subsets of $\omega$, and let, for every $\alpha<\mathfrak c$, $p_\alpha$ be any ultrafilter on $\omega$ extending $\mathcal F\restriction \bigcup_{n \in A_\alpha}I_n$. \medskip To see that this works, let $D=\{\alpha_n:n\in\omega\}$ and a family $\{f_\alpha:\alpha\in D\}$ of one-to-one sequences of linearly independent elements of $[\mathfrak{c}]^{<\omega}$ be given. Let $\{B_n: n\in\omega\}$ be a partition of $\omega$ such that $B_n=^* A_{\alpha_n}$ for every $n\in\omega$, and recursively define a set $B$ such that $I_0\subseteq B$, $$ |I_n\setminus B| \leqslant \sum_{m<n}|I_m|$$ for every $n >0$, and $$\{f_{\alpha_n}(m): m\in B\cap I_l, \, l\in B_n\text{ and }n \in \omega\}\text{ is linearly independent}.$$ \smallskip In order to obtain the set $B$ we recursively use Fact \ref{F:linearly-independent} to construct a sequence $\{C_{l} \colon l \in \omega\}$ of finite sets such that: \begin{itemize} \item $C_{0}=\varnothing$; \item If $l>0$, then \begin{enumerate}[(i)] \item $C_{l} \subseteq I_{l}$, \item $|C_{l}|\leqslant \sum_{i<l} |I_{i} \setminus C_{i}|$, and \item $\bigsqcup_{i<l}f_{\alpha_{n_{i}}}[I_{i}\setminus C_{i}]$ is linearly independent, where $n_{i}$ is such that $i \in B_{n_{i}}$. \end{enumerate} \end{itemize} Put $B=\bigcup_{l \in \omega}(I_{l}\setminus C_{l})$. By (ii), it follows that $B\in \mathcal{B}\subseteq \mathcal{F}$. Since $B_{n}=^{*}A_{\alpha_{n}}$ and $B \in \mathcal{F}$, it is clear that $U_{n}:=B \cap \bigcup_{l\in B_{n}}I_{l}\in p_{\alpha_{n}}$. By (iii), it follows that $\{f_{\alpha_{n}}(m) \colon m \in U_{n} \text{ and } n \in \omega\}$ is linearly independent. Therefore, the sequence $\langle U_{n} \colon n \in \omega\rangle$ is as required. \end{proofclaim} \medskip Now, use this family of ultrafilters as the parameter in the construction of the topology described above. By Claim \ref{key} it suffices to show that given a suitably closed $D\in [\mathfrak c]^{\omega}$ and $\alpha\in D\setminus \omega$ there is a homomorphism $\Psi: [D]^{<\omega}\to 2$ such that \begin{enumerate} \item $\forall \beta \in D\setminus\omega \ \Psi (\{\beta\})=p_\beta\text{-}\lim_{n \in \omega} \Psi(f_\beta(n))$ \item $\forall i\in 2 \ |\{ n: \Psi(f_\alpha (n))=i\}|=\omega$.
\end{enumerate} By Claim \ref{ult-fam}, there is a sequence $\langle U_\alpha:\alpha\in D\setminus \omega \rangle$ such that \begin{enumerate} \item $\{U_\alpha:\alpha\in D\setminus\omega \}$ is a family of pairwise disjoint subsets of $\omega$, \item $U_\alpha \in p_\alpha$ for every $ \alpha\in D\setminus \omega $, and \item $\{ f_\alpha (n): \alpha\in D\setminus \omega \ \& \ n \in U_\alpha\}$ is a linearly independent subset of $[\mathfrak c]^{<\omega}$. \end{enumerate} Enumerate $D\setminus \omega$ as $\{\alpha_n:n\in\omega\}$ so that $\alpha=\alpha_0$. Recursively define a function $h: \{ f_\alpha (n): \alpha\in D\setminus \omega \ \& \ n \in U_\alpha\}\to 2$ so that \begin{enumerate} \item $h$ takes both values $0$ and $1$ infinitely often on $\{ f_{\alpha_0} (n): \ n \in U_{\alpha_0}\setminus \{\alpha_0\}\}$, \item $\Psi_0(\{\alpha_0\})=p_{\alpha_0}$-$\lim_{k\in U_{\alpha_0}} \Psi_0(f_{\alpha_0}(k))$, and \item if $\{\alpha_n\}$ is in the subgroup generated by $\{ f_{\alpha_m} (n): m<n\ \& \ n \in U_{\alpha_m}\}$ then $\Psi_n(\{\alpha_n\})=p_{\alpha_n}\text{-}\lim_{k\in U_{\alpha_n}} \Psi_n(f_{\alpha_n}(k))$, and making sure that \item $\Psi_n (\{\alpha_n\})=p_{\alpha_n}\text{-}\lim_{k\in U_{\alpha_n}}\Psi_n( f_{\alpha_n}(k)).$ \end{enumerate} Here $\Psi_n$ is a homomorphism defined on the subgroup generated by $$\{ f_{\alpha_m} (n): m<n\ \& \ n \in U_{\alpha_m}\}\cup\{\{\alpha_m\}:m<n\}$$ extending $h\restriction \{ f_{\alpha_m} (n): m<n\ \& \ n \in U_{\alpha_m}\}$. Then let $\Psi$ be any homomorphism extending $ \bigcup_{m\in\omega} \Psi_m $. Doing this is straightforward given that the set $$\{ f_\alpha (n): \alpha\in D\setminus \omega \ \& \ n \in U_\alpha\}$$ is linearly independent. \medskip Finally, note that if we, for $a\in [\mathfrak c]^{<\omega }$, let $$H(a)(\Phi)= \overline{\Phi}(a) $$ then $H$ is a continuous homomorphism from $[\mathfrak c]^{<\omega}$ to $2^{\text{Hom}([\omega]^{<\omega}, 2)}$ whose kernel is the same group $K=\bigcap_{\Phi \in \text{Hom}([\omega]^{<\omega},2)} \text{Ker}(\overline \Phi)$, which defines a homeomorphism (and isomorphism) of $[\mathfrak c]^{<\omega}/K$ onto a subgroup of $2^{\text{Hom}([\omega]^{<\omega},2)}\simeq 2^{\mathfrak c}$. \end{proof} \section{Concluding remarks and questions} While the results of the paper solve longstanding open problems, they also open up very interesting new research possibilities. In Theorem \ref{Th:SelectiveUltrafilter} we showed that if $p$ is a selective ultrafilter then $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ is a $p$-compact group without non-trivial convergent sequences. This raises the following two interesting questions, the first of which is the analogue of van Douwen's problem for $p$-compact groups. \begin{question} Is there in {\sf ZFC} a Hausdorff $p$-compact topological group without a non-trivial convergent sequence? \end{question} A closely related problem asks how much the property of being selective can be weakened in Theorem \ref{Th:SelectiveUltrafilter}. Recall that by Corollary \ref{Cor:P-point} it is consistent that there is a P-point $p$ for which $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ does contain a non-trivial convergent sequence. On the other hand, $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})\simeq \mathsf{Ult}_{p^\alpha}^{\omega_{1}}([\omega]^{<\omega})$ for every $\alpha<\omega_1$, so there are consistently non-P-points for which $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ contains no non-trivial convergent sequences. 
\begin{question} Is the existence of an ultrafilter $p$ such that $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ contains no non-trivial convergent sequences equivalent to the existence of a selective ultrafilter? \end{question} \begin{question} Is it consistent with {\sf ZFC} that $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ contains a non-trivial convergent sequence for every ultrafilter $p\in\omega^*$? \end{question} Assuming $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ contains no non-trivial convergent sequences, it is easy to construct for every $n\in\omega$ a subgroup $\mathbb H$ of $\mathsf{Ult}_{p}^{\omega_{1}}([\omega]^{<\omega})$ such that $\mathbb H^n$ is countably compact while $\mathbb H^{n+1}$ is not. It should be possible to modify the construction in Theorem \ref{Th: main} to construct such groups in {\sf ZFC}. These issues will be dealt with in a separate paper. \smallskip Another interesting question is: \begin{question} Is it consistent with {\sf ZFC} that there is a Hausdorff countably compact topological group without non-trivial convergent sequences of weight $< \mathfrak{c}$? \end{question} Finally, let us recall a 1955 problem of Wallace: \begin{question}[Wallace \cite{MR67907}] Is every both-sided cancellative countably compact topological semigroup necessarily a group? \end{question} It is well known that a counterexample can be recursively constructed inside any non-torsion countably compact topological group without non-trivial convergent sequences \cite{MR1328373, MR1426694}. The fact that we do not know how to modify (in {\sf ZFC}) the construction in Theorem \ref{Th: main} to get a non-torsion example of a countably compact group without non-trivial convergent sequences seems surprising. Also the proof of Theorem \ref{Th:SelectiveUltrafilter} does not seem to easily generalize to non-torsion groups. Hence: \begin{question} Is there, in {\sf ZFC }, a non-torsion countably compact topological group without non-trivial convergent sequences? \end{question} \begin{question} Assume $p\in\omega^*$ is a selective ultrafilter. Does $(\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb Z), \tau_{\, \overline{\text{Bohr}}})$ contain no non-trivial convergent sequence? \end{question} Here $\tau_{\, \overline{\text{Bohr}}}$ is defined as before as the weakest topology on $\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{Z})$ which makes all extensions of homomorphisms from $\mathbb{Z}$ to $\mathbb{T}$ continuous, and the group $\mathsf{Ult}_{p}^{\omega_{1}}(\mathbb{Z})=\mathsf{ult}_{p}^{\omega_{1}}(\mathbb{Z})/K$ with $K$ being the intersection of all kernels of the extended homomorphisms. \bigskip {\bf Acknowledgments.} The authors would like to thank Alan Dow and Osvaldo Guzm\'an for stimulating conversations. The authors also wish to thank the anonymous referee for a thorough reading of the text and for helpful suggestions. \bibliographystyle{amsplain} \bibliography{References} \end{document}
{ "config": "arxiv", "file": "2006.12675/Countably_compact_groups_without_non_trivial_convergent_sequences.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: Limit with log in exponent QUESTION [2 upvotes]: How would one prove that $\lim_{n\rightarrow\infty} \log(n) \left( 1 - \left(\frac{n}{n-1}\right)^{\log(n)+1}\right) = 0$? I can see from Mathematica that the limit is zero. REPLY [0 votes]: It is easy to see that $$1\lt\left(\frac{n}{n-1}\right)^{\ln(n)+1}\lt e^{\frac{\ln(n)+1}{n-1}},$$ on the other hand we have $$\lim_{n\rightarrow \infty}\ln(n)\left(e^{\frac{\ln(n)+1}{n-1}}-1\right)=\lim_{n\rightarrow\infty}\frac{\ln(n)(\ln(n)+1)}{n-1}\frac{e^{\frac{\ln(n)+1}{n-1}}-1}{\frac{\ln(n)+1}{n-1}}=0,$$ and we are done: the original expression is negative, and in absolute value it is squeezed between $0$ and $\ln(n)\left(e^{\frac{\ln(n)+1}{n-1}}-1\right)$, which tends to $0$.
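As a quick numerical sanity check (a Python sketch, not part of the proof), one can watch the expression shrink roughly like $\log(n)^2/n$:

    from math import log

    for n in (10**2, 10**4, 10**6, 10**8):
        val = log(n) * (1 - (n / (n - 1)) ** (log(n) + 1))
        print(n, val)   # small negative values tending to 0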
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 888255, "subset_name": null }
TITLE: Help with the proof QUESTION [0 upvotes]: I am trying to prove the following claim: The union of a finite or countable number of sets each of power $c$ is itself of power $c$. My idea is to use induction, but I cannot finish the proof when there are two sets. Here is what I thought: if $I_1$ and $I_2$ both have continuum cardinality and $I_1\cap I_2=\emptyset$ then it is straightforward to prove. I cannot figure out how to prove it when their intersection is non-empty. Any help would be great! REPLY [0 votes]: I don't know, but I suspect this may get at your problem: Lemma: Let $\mathcal T$ be a set of sets. Then $$\left|\bigcup\mathcal T\right|\le\left|\bigsqcup\mathcal T\right|\le\left|\mathcal T\times\max\bigl\{|T|: T\in\mathcal T\bigr\}\right|,$$ where $\bigsqcup$ represents disjoint union.
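For the two-set case that blocks the induction, one standard way to finish (a sketch, assuming only $|I_1|=|I_2|=c$) is to disjointify first: write $I_1\cup I_2=I_1\sqcup(I_2\setminus I_1)$, so that $$c=|I_1|\le|I_1\cup I_2|\le|I_1|+|I_2\setminus I_1|\le c+c=c,$$ and conclude $|I_1\cup I_2|=c$ by Cantor-Schröder-Bernstein. The countable case is the same estimate with $\aleph_0\cdot c=c$ in place of $c+c=c$.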
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 531393, "subset_name": null }
\begin{document} \maketitle \begin{abstract} We are interested in numerically approximating the solution $\BU (t)$ of the large dimensional semilinear matrix differential equation $\dot{\BU}(t) = { \bm A}\BU (t) + \BU (t){ \bm B} + {\cal F}(\BU,t)$, with appropriate starting and boundary conditions, and $ t \in [0, T_f]$. In the framework of the Proper Orthogonal Decomposition (POD) methodology and the Discrete Empirical Interpolation Method (DEIM), we derive a novel matrix-oriented reduction process leading to an effective, structure aware low order approximation of the original problem. The reduction of the nonlinear term is also performed by means of a fully matricial interpolation using left and right projections onto two distinct reduction spaces, giving rise to a new two-sided version of DEIM. By maintaining a matrix-oriented reduction, we are able to employ first order exponential integrators at negligible costs. Numerical experiments on benchmark problems illustrate the effectiveness of the new setting. \end{abstract} \begin{keywords} Proper orthogonal decomposition, Discrete empirical interpolation method, Semilinear matrix differential equations, Exponential integrators. \end{keywords} \begin{AMS} 37M99, 15A24, 65N06, 65F30 \end{AMS} \section{Problem description}\label{sec:intro} We are interested in numerically approximating the solution $\BU (t) \in {\cal S}$ to the following semilinear matrix differential equation \begin{equation} \label{Realproblem} \dot{\BU}(t) = {\color{black} \bm A}\BU (t) + \BU (t){\color{black} \bm B} + {\cal F}(\BU,t) , \quad \BU(0) = \BU_{0}, \end{equation} where ${\color{black} \bm A} \in \mathbb{R}^{n_{x} \times n_{x}}, {\color{black} \bm B} \in \mathbb{R}^{n_{y} \times n_{y}}$, and $ t \in [0, T_f] = {\cal T} \subset \RR$, equipped with appropriate boundary conditions. The function {${\cal F} : {\cal S} \times {\cal T} \rightarrow \mathbb{R}^{n_x \times n_y}$} is a sufficiently regular nonlinear function that can be evaluated elementwise, and ${\cal S}$ is a functional space containing the sought after solution. The problem (\ref{Realproblem}) arises for instance in the discretization of two-dimensional partial differential equations of the form \begin{equation} \label{nonlinpde} u_{t} = \ell(u) + f(u,t) , \quad u = u(x,y, t) \quad \mbox{with} \,\, (x,y) \in \Omega \subset \mathbb{R}^2,\, t\in {\cal T}, \end{equation} and given initial condition $u(x, y, 0) = u_{0}(x,y)$, for certain choices of the physical domain $\Omega$. The differential operator $\ell$ is linear in $u$, typically a second order operator in the space variables, while {$f: S \times {\cal T} \rightarrow \RR$} is a nonlinear function, where $S$ is an appropriate space with $u\in S$. Time dependent equations of type (\ref{nonlinpde}) arise in biology and ecology, chemistry and physics, where the interest is in monitoring the time evolution of a complex phenomenon; see, e.g., \cite{Mainietal.01}, \cite{Malchowetal.08}, \cite{Quarteroni.17}, \cite{Tveitoetal.10}, and references therein. We develop a matrix-oriented \podeim\ order reduction strategy for the problem (\ref{Realproblem}) that leads to a semilinear {\it matrix} differential equation with the same structure as (\ref{Realproblem}), but of significantly reduced dimension. 
{More precisely, we determine an approximation to $\BU(t)$ of the type \begin{equation}\label{eqn:Uapprox} {\bm V}_{\ell,U}{\bm Y}_k(t){\bm W}_{r,U}^{\top} , \quad t \in [0, T_f] , \end{equation} where ${\bm V}_{\ell,U}\in \mathbb{R}^{n \times k_1}$ and ${\bm W}_{r,U}\in \mathbb{R}^{n \times k_2}$ are matrices to be determined, independent of time. Here $k_1, k_2 \ll n$ and we let $k=(k_1,k_2)$. The function ${\bm Y}_k(t)$ is determined as the numerical solution to the following {\it reduced} semilinear matrix differential problem \begin{equation} \label{Realproblemsmall0} \begin{split} \dot{{\bm Y}}_k(t) &= {\bm A}_{k}{\bm Y}_k(t) + {\bm Y}_k(t){\bm B}_{k} + \reallywidehat{{\cal F}_{k}({\bm Y}_k,t)} \\ {\bm Y}_k(0) &= {\bm Y}_k^{(0)} := {\bm V}_{\ell,U}^{\top}\BU_{0}{\bm W}_{r,U} , \end{split} \end{equation} with ${\bm A}_{k} = {\bm V}_{\ell,U}^{\top}{\color{black} \bm A}{\bm V}_{\ell,U}$, ${\bm B}_{k} = {\bm W}_{r,U}^{\top} {\color{black} \bm B}{\bm W}_{r,U},$ and $\reallywidehat{{\cal F}_{k}({\bm Y}_k,t)}$ is a matrix-oriented DEIM approximation to \begin{equation}\label{eqn:F} {\cal F}_{k}({\bm Y}_k,t)= {\bm V}_{\ell,U}^{\top}{\cal F}({\bm V}_{\ell,U}{\bm Y}_k{\bm W}_{r,U}^{\top},t){\bm W}_{r,U}. \end{equation} Standard procedures for (\ref{Realproblem}) employ a vector-oriented approach: semi-discretization of \cref{nonlinpde} in space leads to the following system of ordinary differential equations (ODEs) \begin{equation} \label{vecode} \dot{\bu}(t) = {\bm L}\bu (t) + {\bm f}(\bu,t) , \quad \bu(0) = \bu_{0}. \end{equation} For $t>0$, the vector $\bu (t)$ contains the representation coefficients of the sought after solution in the chosen discrete space, ${\bm L} \in\RR^{N\times N}$ accounts for the discretization of the linear differential operator $\ell$ and {${\bm f}$} is evaluated componentwise at ${\bm u}(t)$. The discretization of (\ref{nonlinpde}) can directly lead to the form \cref{Realproblem} whenever $\ell$ is a second order differential operator with separable coefficients and a tensor basis is used explicitly or implicitly for its discretization, such as finite differences on structured grids, certain spectral methods, isogeometric analysis, etc. In a tensor space discretization, for instance, the discretization of $\ell(u)$ yields the operator $\BU \mapsto {\bm A} \BU + \BU {\bm B}$, where the matrices ${\color{black} \bm A}$ and ${\color{black} \bm B}$ approximate the second order derivatives in the $x-$ and $y-$directions, respectively, using $n_x$ and $n_y$ discretization nodes\footnote{Here we display the discretized Laplace operator; more general operators can be treated, see, e.g., \cite[Section 3]{Simoncini2017}.}. We aim to show that by sticking to the matrix formulation of the problem throughout the computation - including the reduced model - major advantages in terms of memory allocations, computational costs and structure preservation can be obtained. For ${\cal F}\equiv 0$, the equation (\ref{Realproblem}) simplifies to the differential Sylvester equation, for which reduction methods have been shown to be competitive; see, e.g., \cite{Behr.Benner.Heiland.19} and references therein. Our approach generalizes to the semilinear case a matrix-oriented projection methodology successfully employed in the linear and quadratic contexts \cite{Kirsten2019}. A challenging difference lies in the selection and construction of the two matrices ${\bm V}_{\ell,U}, {\bm W}_{r,U}$, so as to effectively handle the nonlinear term in (\ref{Realproblem}). 
In addition, ${\cal F}$ itself needs to be approximated for efficiency. Order reduction of the vector problem (\ref{vecode}) is a well established procedure. Among various methods, the Proper Orthogonal Decomposition (POD) methodology has been widely employed, as it mainly relies on solution samples, rather than the a-priori generation of an appropriate basis \cite{benner2015review},\cite{Benner.05},\cite{hinze2005},\cite{kunisch1999}. Other approaches include reduced basis methods, see, e.g., \cite{patera2007reduced}, and rational interpolation strategies \cite{ABG.20}; see, e.g., \cite{BCOW.17} for an overview of the most common reduction strategies. {The overall effectiveness of the POD procedure is largely influenced by the capability of evaluating the nonlinear term within the reduced space, motivating a considerable amount of work towards this estimation, including quadratic bilinear approximation \cite{gu2011,kramer2019nonlinear, benner2015} and trajectory piecewise-linear approximation \cite{white2003}. Alternatively, several approaches consider interpolating the nonlinear function, such as missing point estimation \cite{astrid2008} and the best points interpolation method \cite{nguyen2008}. One very successful approach is the Discrete Empirical Interpolation Method (DEIM) \cite{chaturantabut2010nonlinear}, which is based on the Empirical Interpolation Method originally introduced in \cite{barrault2004}. We devise a matrix-oriented POD approach tailored towards the construction of the matrix reduced problem formulation (\ref{Realproblemsmall0}). {An adaptive procedure is also developed to limit the number of snapshots contributing to the generation of the approximation spaces.} The reduction of the nonlinear term is then performed by means of a fully matricial interpolation using left and right projections onto two distinct reduction spaces, giving rise to a new two-sided version of DEIM. The idea of using left and right POD-type bases in a matrix-oriented setting is not new in the general context of semilinear differential equations (see section~\ref{sec:others}). Nonetheless, after reduction these strategies resume the vector form of (\ref{Realproblemsmall0}) for integration purposes, thus losing the structural and computational benefits of the matrix formulation. We claim that once (\ref{Realproblemsmall0}) is obtained, matrix-oriented integrators should be employed. In other words, by combining matrix-oriented versions of POD, DEIM and ODE integrators, we are able to carry out the whole approximation with explicit reference to the two-dimensional computational domain. As a result, a fast (offline) reduction phase, where a significant decrease in the problem size is carried out, is followed by a light (online) phase where the reduced ordinary differential matrix equation is integrated over time with the preferred matrix-oriented method. Our construction focuses on the two-dimensional problem. The advantages of our methodology become even more apparent in the three-dimensional (3D) case. A simplified version of our framework in the 3D case is experimentally explored in the companion manuscript \cite{Kirsten.21}, where the application to systems of differential equations is also discussed. Here we deepen the analysis of all the ingredients of this new methodology, and emphasize its advantages over the vector-based approaches with a selection of numerical results. 
More extensive experimental evidence can be found in the previous version of this paper \cite{Kirsten.Simoncini.arxiv2020}. The paper is organized as follows. In \cref{sec:POD_DEIM} we review the standard \podeim\ algorithm for systems of the form \cref{vecode}, whereas our new two-sided proper orthogonal decomposition is derived in \cref{sec:matpod}. In \cref{sec:others} we discuss the relation to other matrix-based interpolation strategies and in \cref{sec:dynamic} we present a dynamic procedure for selecting the snapshots. Section~\ref{sec:matdeim} is devoted to the crucial approximation of the nonlinear function by the new two-sided discrete empirical interpolation method. The overall new procedure with the numerical treatment of the reduced differential problem is summarized in \cref{extend}. Numerical experiments are reported in \cref{sec:exp} to illustrate the effectiveness of the proposed procedure. Technical implementation details and computational costs are discussed in \cref{sec:alg}. {\it Notation.} ${\bm I}_n$ denotes the $n \times n$ identity matrix{; the subscript is omitted whenever clear from the context}. For a matrix ${\bm A}$, $\|{\bm A}\|$ denotes the matrix norm induced by the Euclidean vector norm, and $\|{\bm A}\|_F$ the Frobenius norm. Scalar quantities are denoted by lower case letters, while vectors (matrices) are denoted by bold face lower (capital) case letters. As an exception, matrix quantities of interest in the {\it vector} POD-DEIM approximation are denoted in sans-serif, instead of bold face, i.e., ${\sf M}$. All reported experiments were performed using MATLAB 9.6 (R2020b) (\cite{matlab2013}) on a MacBook Pro with 8-GB memory and a 2.3-GHz Intel core i5 processor. \section{The standard POD method and DEIM in the vector framework}\label{sec:POD_DEIM} We review the standard \podeim\ method and its application to the dynamical system \cref{vecode}. The proper orthogonal decomposition is a technique for reducing the dimensionality of a given dynamical system, by projecting it onto a space spanned by the orthonormal columns of a matrix ${\sf V}_k$. To this end, we consider a set of \emph{snapshot solutions} $\bu_j = \bu(t_j)$ of the system \cref{vecode} at $n_s$ different time instances ($0 \leq t_1 < \cdots < t_{n_s} \leq T_f$). Let \begin{equation} \label{linsvd} \mathsf{S} = [\bu_1, \cdots, \bu_{n_s}] \in \mathbb{R}^{N \times n_s}, \end{equation} and $\mathcal{S} = \mbox{\tt Range}({\sf S})$ of dimension $d$. A POD basis of dimension $k < d$ is a set of orthonormal vectors whose linear span gives the best approximation, according to some criterion, of the space ${\cal S}$. In the 2-norm, this basis can be obtained through the singular value decomposition (SVD) of the matrix ${\sf S}$, ${\sf S} = {\sf V}{\sf {\bm \Sigma}}{\sf W}^{\top}$, with ${\sf V}$ and ${\sf W}$ orthogonal matrices and ${\sf {\bm \Sigma}}={\rm diag}(\sigma_1, \ldots, \sigma_{n_s})$ diagonal with non-increasing positive diagonal elements. If the diagonal elements of ${\sf {\bm \Sigma}}$ have a rapid decay, the first $k$ columns of ${\sf V}$ (left singular vectors) are the most dominant in the approximation of ${\sf S}$. Denoting by ${\sf S}_k = {\sf V}_k{\sf {\bm \Sigma}}_k{\sf W}_k^{\top}$ the reduced SVD where only the $k\times k$ top left portion of ${\sf {\bm \Sigma}}$ is retained and ${\sf V}, {\sf W}$ are truncated accordingly, we have $\|{\sf S} - {\sf S}_k\| = \sigma_{k+1}$ \cite{golub13}. 
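For illustration, a minimal Python/NumPy sketch of this truncation (all variable names are ours, and the snapshot matrix is synthetic):
\begin{verbatim}
import numpy as np

def pod_basis(S, k):
    # thin SVD of the snapshot matrix S (N x n_s)
    V, s, Wt = np.linalg.svd(S, full_matrices=False)
    return V[:, :k], s

S = np.random.rand(1000, 30)   # N = 1000 dofs, n_s = 30 snapshots
Vk, s = pod_basis(S, 5)
# || S - S_k ||_2 equals the first discarded singular value:
print(np.linalg.norm(S - Vk @ (Vk.T @ S), 2), s[5])
\end{verbatim}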
Once the matrix ${\sf V}_k$ is obtained, for $t \in [0, T_f]$ the vector $\bu (t)$ is approximated as $\bu (t) \approx {\sf V}_k{\bm y}_k(t)$, where the vector ${\bm y}_k(t) \in \RR^k$ solves the {\em reduced} problem \begin{equation} \dot{{\bm y}}_k(t) = {\bm L}_k{\bm y}_k(t) + {\bm f}_k({\bm y}_k,t) , \quad {\bm y}_k(0) = {\sf V}_k^{\top}\bu_{0}. \label{vecodesmall} \end{equation} Here ${\bm L}_k = {\sf V}_k^{\top}{\bm L}{\sf V}_k$ and ${\bm f}_k({\bm y}_k,t) = {\sf V}_k^{\top}{\bm f}({\sf V}_k{\bm y}_k,t)$. Although for $k \ll N$ problem \cref{vecodesmall} is cheaper to solve than the original one, the definition of ${\bm f}_k$ above requires the evaluation of ${\bm f}({\sf V}_k{\bm y}_k,t)$ at each timestep and at all $N$ entries, thus still depending on the original system size. One way to overcome this problem is by means of DEIM. The DEIM procedure, originally introduced in \cite{chaturantabut2010nonlinear}, is utilized to approximate a nonlinear vector function ${\bm f}: {\cal T} \rightarrow \RR^N$ by interpolating it onto an empirical basis, that is, ${\bm f}(t) \approx \In {\color{black}{\bm c}}(t),$ where $\{{\bm \varphi}_1,\dots,{\bm\varphi}_p\} \subset \RR^{N}$ is a low dimensional basis, $\In = [{\bm\varphi}_{1}, \dots, {\bm\varphi}_p] \in \mathbb{R}^{N \times p}$ and ${\color{black}{\bm c}}(t) \in \mathbb{R}^{p}$ is the vector of time-dependent coefficients to be determined. Let $\mathsf{P} = [{\color{black} \bm e}_{\rho_{1}}, \dots, {\color{black} \bm e}_{\rho_{p}}] \in \mathbb{R}^{N \times p}$ be a subset of columns of the identity matrix, named the ``selection matrix''. If $\mathsf{P}^{\top}\In$ is invertible, in \cite{chaturantabut2010nonlinear} the coefficient vector ${\color{black}{\bm c}}(t)$ is uniquely determined by solving the linear system $\mathsf{P}^{\top}\In {\color{black}{\bm c}}(t) = \mathsf{P}^{\top}{\bm f}(t)$, so that \begin{equation} \label{deimsetup} {\bm f}(t) \approx \In {\color{black}{\bm c}}(t) = \In(\mathsf{P}^{\top}\In)^{-1}\mathsf{P}^{\top}{\bm f}(t). \end{equation} The nonlinear term in the reduced model \cref{vecodesmall} is then approximated by \begin{equation} \label{fkapprox} {\bm f}_k({\bm y}_k,t) \approx {\sf V}_k^{\top}\In(\mathsf{P}^{\top}\In)^{-1}\mathsf{P}^{\top}{\bm f}({\sf V}_k{\bm y}_k,t). \end{equation} The accuracy of DEIM depends greatly on the basis choice, and to a lesser extent on the choice of $\sf P$. In most applications the interpolation basis $\{{\bm\varphi}_1,\dots,{\bm\varphi}_p\}$ is selected as the POD basis of the set of snapshots $\{{\bm f}(\bu_1,t_1), \dots, {\bm f}(\bu_{n_s},t_{n_s})\}$, as described earlier in this section, that is, given the matrix \begin{equation} \mathsf{N} = [{\bm f}(\bu_1,t_1), \dots, {\bm f}(\bu_{n_s},t_{n_s})] \in \mathbb{R}^{N \times n_s}, \label{nonsvd} \end{equation} the columns of the matrix $\In = [{\bm\varphi}_{1}, \dots, {\bm\varphi}_p]$ are determined as the first $p\le n_s$ dominant left singular vectors in the SVD of $\mathsf{N}$. The matrix $\sf P$ for DEIM is selected by a greedy algorithm based on the system residual; see \cite[Algorithm 3.1]{chaturantabut2010nonlinear}. In \cite{gugercin2018} the authors showed that a pivoted QR-factorization of $\In^{\top}$ may lead to better accuracy and stability properties of the computed matrix ${\sf P}$. {The resulting approach, called Q-DEIM, will be used in the sequel and is implemented as algorithm {\tt q-deim} in \cite{gugercin2018}}. 
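A compact Python/SciPy sketch of the resulting interpolation operator, with the indices obtained by a column-pivoted QR of $\In^{\top}$ in the spirit of {\tt q-deim} (the names are ours; this is an illustration, not the reference implementation of \cite{gugercin2018}):
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

def deim_operator(Phi):
    # Phi: N x p basis of nonlinear snapshots (orthonormal columns)
    _, _, piv = qr(Phi.T, pivoting=True)   # q-deim-style index selection
    rho = piv[:Phi.shape[1]]               # p interpolation indices
    M = Phi @ np.linalg.inv(Phi[rho, :])   # Phi (P^T Phi)^{-1}
    return rho, M

# f(t) is then approximated from p sampled entries: f_tilde = M @ f[rho]
\end{verbatim}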
DEIM is particularly advantageous when the function ${\bm f}$ is evaluated componentwise, in which case it holds that $\mathsf{P}^{\top}{\bm f}({\sf V}_k{\bm y}_k,t) = {\bm f}({\sf P}^{\top}{\sf V}_k{\bm y}_k,t)$, so that the nonlinear function is evaluated only at $p\ll N$ components. In general, this occurs whenever finite differences are the underlying discretization method. If more versatile discretizations are required, then different procedures need to be devised. A discussion of these approaches is deferred to section~\ref{sec:others}. \section{A new two-sided proper orthogonal decomposition}\label{sec:matpod} We derive a \podeim\ algorithm that fully lives in the matrix setting, without requiring a mapping from $\RR^{n_x \times n_y}$ to $\RR^N$, so that no vectors of length $N$ need to be processed or stored. We determine the left and right reduced space bases that approximate the space of the given snapshots ${\bm \Xi}(t)$ (either the nonlinear functions or the approximate solutions), {so that ${\bm \Xi}(t) \approx {\bm V}_\ell {\bm \Theta}(t) {\bm W}_{r}^{\top}$, where ${\bm V}_\ell, \bm{W}_r$ have $\nu_\ell$ and $\nu_r$ orthonormal columns, respectively. Their ranges approximate the column (left) and row (right) spaces of the function ${\bm \Xi}(t)$, independently of the time $t$. In practice, we wish to have $\nu_\ell \ll n_x, \nu_r\ll n_y$ so that ${\bm \Theta}(t)$ will have a reduced dimension. A simple way to proceed would be to collect all snapshot matrices in a single large (or tall) matrix and generate the two orthonormal bases corresponding to the row and column spaces. This is far too expensive, both in terms of computational costs and memory requirements. Instead, we present a two-step procedure that avoids explicit computations with all snapshots simultaneously. The first step sequentially selects the most important information of each snapshot matrix, relative to the other snapshots, while the second step prunes the generated spaces by building the corresponding orthonormal bases. These two steps can be summarized as follows: \begin{enumerate} \item {\it Dynamic selection}. Assume $i$ snapshots have been processed and dominant SVD information retained. For the next snapshot ${\bm \Xi}(t_{i+1})$ perform a reduced SVD and retain the leading singular triplets so that the retained singular values are at least as large as those already kept from previous iterations. Make sure that at most $\kappa$ SVD components are retained overall, with $\kappa$ selected a-priori; \item {\it Bases pruning}. Ensure that the matrices spanning the reduced right and left spaces have orthonormal columns. Reduce the space dimension if needed. \end{enumerate} In the following we provide the details for this two-step procedure. The strategy that leads to the selection of the actual time instances used for this construction will be discussed in section~\ref{sec:dynamic}. To simplify the presentation, and without loss of generality, we assume $n_x=n_y\equiv n$. \vskip 0.1in {\it First step.} Let $\bm\Xi_i = {\bm \Xi}(t_i)$. For the collection of snapshots, we wish to determine a (left) reduced basis for the range of the matrix $\bm{H}\{{\bm \Xi} \} := ({\bm\Xi}_1, \, \ldots, \, {\bm\Xi}_{n_s} ) \in \mathbb{R}^{n \times (n\cdot n_s)}$ and a (right) reduced basis for the range of the matrix $\bm{Z}\{ {\bm \Xi} \} = ( {\bm\Xi}_1^\top, \ldots, {\bm\Xi}_{n_s}^\top)^\top = ( {\bm\Xi}_1; \ldots; {\bm\Xi}_{n_s})$. 
This is performed by incrementally including leading components of the snapshots ${\bm\Xi}_i$, so as to determine the approximations \begin{eqnarray*} {\bm H}\{\bm \Xi\} &\approx& \widetilde{\bm H}\{\bm \Xi\} := \widetilde{\bm V}_{n_s}\widetilde{\bm \Sigma}_{n_s}\widetilde{\bm W}_{n_s}^{\top}, \qquad \widetilde{\bm V}_{n_s} \in \mathbb{R}^{n \times \kappa}, \widetilde{\bm W}_{n_s} \in \RR^{n\cdot n_s \times \kappa} \\ {\bm Z}\{ {\bm \Xi} \} &\approx& \widehat{\bm Z}\{ {\bm \Xi} \} := \widehat{\bm V}_{n_s} \widetilde{\bm \Sigma}_{n_s}\widehat{\bm W}_{n_s}^{\top}, \qquad \widehat{\bm V}_{n_s} \in \mathbb{R}^{n\cdot n_s \times \kappa}, \widehat{\bm W}_{n_s} \in \RR^{n\times \kappa}. \end{eqnarray*} A rank reduction of the matrices $\widetilde{\bm V}_{n_s}$ and $\widehat{\bm W}_{n_s}$ will provide the sought after bases, to be used for time instances other than those of the snapshots. Let $\kappa\le n$ be the chosen maximum admissible dimension for the reduced left and right spaces. For $i \in \{1, \ldots, n_s\}$, let \begin{eqnarray}\label{eqn:Xi_i} {\bm \Xi}_i \approx {\bm V}_i{\bm \Sigma}_i{\bm W}_i^\top, \quad {\bm V}_i, {\bm W}_i \in \RR^{n \times \kappa}, \quad {\bm \Sigma}_i = {\rm diag}({\sigma_1^{(i)}, \ldots, \sigma_{\kappa}^{(i)}}) \end{eqnarray} be the reduced SVD of ${\bm \Xi}_i$ corresponding to its dominant $\kappa$ singular triplets, with singular values sorted decreasingly. Let $\widetilde {\bm V}_1={\bm V}_1$, $\widetilde {\bm \Sigma}_1={\bm \Sigma}_1$ and $\widetilde {\bm W}_1={\bm W}_1$. For each subsequent $i=2, \ldots, n_s$, the leading singular triplets of ${\bm \Xi}_i$ are {\it appended} to the previous matrices $\widetilde{\bm \Sigma}_{i-1}$, $\widetilde{\bm V}_{i-1}$ and $\widetilde{\bm W}_{i-1}$, that is \begin{equation}\label{htilde1} \widetilde{ {\bm H}}_{i}\{{ \bm \Xi} \} = ( \widetilde{\bm V}_{i-1}, \, {\bm V}_{i} ) \begin{pmatrix} \widetilde{{\bm \Sigma}}_{i-1} & \\ & {{\bm \Sigma}}_{i} \end{pmatrix} \begin{pmatrix} \widetilde{\bm W}_{i-1}^{\top} & \\ & {\bm W}_{i}^{\top} \end{pmatrix} \equiv \widetilde{\bm V}_{i} {\widetilde{\bm \Sigma}}_{i} \widetilde{\bm W}_{i}^{\top} . \end{equation} After at most $n_s$ iterations we will have the final $\widetilde{ {\bm H}}\{{ \bm \Xi} \}$, with no subscript. The expansion is performed ensuring that the appended singular values are at least as large as those already present, so that the retained directions are the leading ones among all snapshots. Then the three matrices $\widetilde{\bm V}_{i}$, $\widetilde{\bm W}_{i}$ and $\widetilde{\bm\Sigma}_{i}$ are truncated so that the retained diagonal entries of $\widetilde{\bm \Sigma}_i$ are its $\kappa$ largest diagonal elements. The diagonal elements of $\widetilde {\bm \Sigma}_{i}$ are not the singular values of ${\bm H}_{i}\{{ \bm \Xi}\}$; however, the adopted truncation strategy ensures that the error committed is not larger than $\sigma_{\kappa+1}$, which we define as the largest singular value discarded during the whole accumulation process. Since each column of $\widetilde {\bm V}_{n_s}$ has unit norm, it follows that $\|\widetilde {\bm V}_{n_s}\| \le \kappa$. Moreover, the columns of $\widetilde{{\bm W}}_{n_s}$ are orthonormal. Hence $\|{\bm H}\{{ \bm \Xi}\} - \widetilde{\bm H}\{{ \bm \Xi}\}\| \le \kappa\sigma_{\kappa+1}$. 
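A minimal Python/NumPy sketch of one such append-and-truncate pass (anticipating Algorithm~\ref{alg:snapstep}; the names are ours and the retained singular values are stored as a vector rather than a diagonal matrix):
\begin{verbatim}
import numpy as np

def append_and_truncate(Vt, st, Wh, Xi, kappa):
    # append the leading kappa singular triplets of the new snapshot Xi
    V, s, Wt = np.linalg.svd(Xi, full_matrices=False)
    Vt = np.hstack([Vt, V[:, :kappa]])
    Wh = np.hstack([Wh, Wt[:kappa, :].T])
    st = np.concatenate([st, s[:kappa]])
    # keep only the kappa largest retained values overall
    keep = np.argsort(st)[::-1][:kappa]
    return Vt[:, keep], st[keep], Wh[:, keep]
\end{verbatim}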
In particular, this procedure is not the same as taking the leading singular triplets of each snapshot per se: this first step allows us to retain the leading triplets of each snapshot {\it when compared with all snapshots}, using the magnitude of all singular values as a quality measure.} A similar strategy is adopted to construct the right basis. Formally, \begin{equation} \label{ztilde} \widehat{ {\bm Z}}_{i}\{{ \bm \Xi} \} = \begin{pmatrix} \widehat{\bm V}_{i-1} & \\ & {\bm V}_{i} \end{pmatrix} \begin{pmatrix} \widetilde{{\bm \Sigma}}_{i-1} & \\ & {{\bm \Sigma}}_{i} \end{pmatrix} \begin{pmatrix} \widehat{\bm W}_{i-1}^\top \\ {\bm W}_{i}^\top \end{pmatrix} = \widehat{\bm V}_{i} {\widetilde {\bm \Sigma}}_{i} \widehat{\bm W}_{i}^{\top} ; \end{equation} notice that ${\widetilde {\bm \Sigma}}_{i}$ is the same for both $\widetilde {\bm H}$ and $\widehat {\bm Z}$, and that the large matrices $\widetilde {\bm W}_i$ and $\widehat{\bm V}_i$ are not stored explicitly in the actual implementation. At completion, the following two matrices are candidate bases for the left and right spaces: \begin{equation}\label{eq:basis} {\rm Left:} \quad \widetilde{\bm V}_{n_s} {\widetilde {\bm \Sigma}}_{n_s}^{\frac 1 2}, \qquad {\rm Right:}\quad {\widetilde {\bm \Sigma}}_{n_s}^{\frac 1 2} \widehat {\bm W}_{n_s}^{\top} , \end{equation} where the singular value matrices keep track of the relevance of each collected singular vector, and the square root allows us to maintain the order of magnitude of the snapshot matrices, when the product of the two left and right matrices is carried out. Here $n_s$ is the total number of snapshots included in the whole procedure, using the dynamic procedure discussed in section~\ref{sec:dynamic}. The procedure is described in Algorithm~\ref{alg:snapstep}. \begin{algorithm} \caption{\sc Dynamic selection procedure} \label{alg:snapstep} \begin{algorithmic}[1] \STATE{\textbf{INPUT:} ${\bm \Xi}_i$, $\widetilde{\bm V}_{i-1} \in \RR^{n \times \kappa}$, $\widetilde{\bm \Sigma}_{i-1} \in \RR^{\kappa \times \kappa}$, ${\widehat{\bm W}}_{i-1} \in \RR^{n \times \kappa}$, $\kappa$} \STATE{\textbf{OUTPUT:} $\widetilde{\bm V}_{i} \in \RR^{n \times \kappa}$, $\widetilde{\bm \Sigma}_{i} \in \RR^{\kappa \times \kappa}$, ${\widehat{\bm W}}_{i} \in \RR^{n \times \kappa}$.} \STATE{ Compute $[{\bm V}_i, {\bm \Sigma}_i, {\bm W}_i] = \mbox{\tt svds}\left({\bm \Xi}_i, \kappa \right)$;} \STATE{Append $\widetilde{\bm V}_{i} \leftarrow (\widetilde{\bm V}_{i-1}, {\bm V}_i)$, ${\widehat{\bm W}}_{i} \leftarrow ({\widehat{\bm W}}_{i-1},{\bm W}_i)$, $\widetilde{\bm \Sigma}_{i} \leftarrow \mbox{\tt blkdiag} (\widetilde{\bm \Sigma}_{i-1}, {\bm \Sigma}_i)$;} \STATE{Decreasingly order the entries of (diagonal) $\widetilde{\bm \Sigma}_{i}$ and keep the first $\kappa$;} \STATE{Order $\widetilde{\bm V}_{i}$ and ${\widehat{\bm W}}_{i}$ accordingly and keep the first $\kappa$ vectors of each;} \end{algorithmic} \end{algorithm} \vskip 0.1in {\it Second step.} We complete the two-sided approximation of the snapshot functions by pruning the two orthonormal bases associated with the representation (\ref{eq:basis}). Let \begin{equation} \label{eqn:Wtilde} \widetilde{\bm V}_{n_s}{\widetilde{\bm \Sigma}_{n_s}}^{\frac 1 2} = \overline{{\bm V}}\,\overline{{\bm \Sigma}} \overline{\bm W}^\top \quad {\rm and} \quad {\widetilde{\bm \Sigma}_{n_s}}^{\frac 1 2}\widehat{\bm W}_{n_s}^{\top} = \breve{\bm V} \breve{\bm \Sigma} \breve{\bm W}^{\top} \end{equation} be the singular value decompositions of the given matrices. 
If the matrices $\overline{\bm \Sigma}$ and $\breve{\bm \Sigma}$ have rapidly decaying singular values, we can further reduce the low rank approximation of each ${\bm \Xi}_i$. More precisely, let \begin{equation}\label{eq:partitioning} \overline{\bm V}\, \overline{{\bm \Sigma}} = [{\bm V}_\ell, {\bm V}_{\cal E}] \begin{pmatrix} \overline{{\bm \Sigma}}_{\ell} & \\ & \overline{{\bm \Sigma}}_{{\cal E}}\end{pmatrix}, \quad {\rm and} \quad \breve{{\bm \Sigma}} \breve{\bm W}^\top = \begin{pmatrix} \breve{\bm \Sigma}_r & \\ & \breve{\bm \Sigma}_{\cal E} \end{pmatrix} \begin{bmatrix}{\bm W}_r^\top \\ {\bm W}_{\cal E}^\top \end{bmatrix}, \end{equation} with ${\bm{V}}_{\ell} \in \RR^{n \times \nu_{\ell}}, {\bm W}_r \in \RR^{n \times \nu_{r}}$. The final reduced dimensions $\nu_\ell$ and $\nu_r$, that is the numbers of columns to be retained in the matrices ${{\bm V}}_{\ell}$ and ${\bm W}_r$, respectively, are determined by the following criterion: \begin{equation} \label{kselect} \|\overline{\bm \Sigma}_{\cal E}\|_F \le \frac{\tau}{\sqrt{n_{\max}}} \|\overline{\bm \Sigma}\|_F \quad \mbox{and} \quad \|\breve{\bm \Sigma}_{\cal E}\|_F \le \frac{\tau}{\sqrt{n_{\max}}} \|\breve{\bm \Sigma}\|_F , \end{equation} for some chosen tolerance $\tau\in (0,1)$, where $n_{\max}$ is the maximum number of available snapshots of the considered function. We have assumed so far that at least one singular triplet is retained for all ${ \bm \Xi}$'s. In practice, it might happen that for some $i$ none of the singular values is large enough to be retained. In this case, ${ \bm \Xi}_i$ does not contribute to the two-sided basis. \begin{remark} \label{cor:symmetry} If ${\bm \Xi}_i={\bm \Xi}(t_i)$ is symmetric for $t_i \in [0, T_f]$, the reduction process can preserve this structure. Indeed, since ${\bm \Xi}_i$ is symmetric it holds that ${ \bm \Xi}_i={{\bm V}_i}{{\bm \Sigma}}_i{{\bm W}_i}^{\top} = {{\bm V}_i}{{\bm \Sigma}}_i{\bm D}_i{{\bm V}_i}^{\top}$, with ${\bm D}_i$ a diagonal matrix of ones and minus ones. As a consequence, ${\bm W}_r={\bm V}_\ell$. Positive definiteness can also be preserved, with ${\bm D}_i$ the identity matrix. \end{remark} In the following we use the pair $({\bm V}_{\ell}, {\bm {W}}_{r})$, hereafter denoted as {\it two-sided proper orthogonal decomposition} ({\sc 2s-pod}), to approximate the function ${\color{black} \bm \Xi}(t)$ for some $t\ne t_i$: \begin{equation}\label{eqn:approx_Xi} {\bm \Xi}(t) \approx {\bm V}_{\ell,{\bm \Xi}} {\bm \Theta}(t) {\bm {W}}_{r,{\bm \Xi}}^{\top} \end{equation} with $\bm \Theta$ depending on $t$ and of reduced dimension, while ${\bm V}_{\ell,{\bm \Xi}}$ and ${\bm W}_{r,{\bm \Xi}}$ play the roles of ${\bm V}_\ell$ and ${\bm W}_{r}$, respectively. \section{Connections to other matrix-based interpolation POD strategies}\label{sec:others} The approximation discussed in the previous section is not restricted to problems of the form (\ref{Realproblem}), but rather it can be applied to any POD function approximation where the snapshot vectors are transformed into matrices, giving rise to a matrix DEIM methodology. This class of approximation has been explored in the recent literature, where different approaches have been discussed, especially in connection with parameter-dependent problems and Jacobian matrix approximation; see, e.g., \cite{wirtz2014},\cite{carlberg2015}, and the thorough discussion in \cite{benner2015review}. 
In the former case, the setting is particularly appealing whenever the operator has a parameter-based affine function formulation, while in the Jacobian case the problem is naturally stated in matrix terms, possibly with a sparse structure \cite{bonomi2017},\cite{sandu2017}. In the nonaffine case, in \cite{bonomi2017},\cite{negri2015} an affine matrix approximation (MDEIM) was proposed by writing appropriate (local) sparse representations of the POD basis, as is the case for finite element methods. As an alternative in this context, it was shown in \cite{dedden2012} that DEIM can be applied locally to functions defined on the unassembled finite element mesh (UDEIM); we refer the reader to \cite{tiso2013} for more details and to \cite{antil2014} for a detailed experimental analysis. In our approach we consider the approximation in (\ref{eqn:approx_Xi}). If ${\bm \Theta}(t)$ were diagonal, then this approximation could be thought of as an MDEIM approach, since then ${\color{black} \bm \Xi}(t)$ would be approximated by a sum of rank-one matrices with the time-dependent diagonal elements of ${\bm \Theta}(t)$ as coefficients. Instead, in our setting ${\bm \Theta}(t)$ is far from diagonal, hence our approach yields a more general memory-saving approximation based on the low rank representation given by ${\bm V}_{\ell,\Xi}, {{\bm W}}_{r,\Xi}$. Another crucial novelty of our approach is the following. While methods such as MDEIM aim at creating a linear combination of matrices, they still rely on the vector DEIM for computing these matrices, thus only detecting the leading portion of the left range space. In our construction, the left and right approximation spaces spanned by ${\bm V}_{\ell,\Xi}, {{\bm W}}_{r,\Xi}$, respectively, stem from a subspace selection of the range spaces of the whole snapshot matrix ${\bm H}\{{\color{black} \bm \Xi}\}$ (left space) and ${\bm Z}\{{\color{black} \bm \Xi}\}$ (right space); here both spaces are {\it matrix} spaces. In this way, the leading components of both spaces can be captured. In particular, specific space directions taken by the approximated function during the time evolution can be more easily tracked; we refer to the previous version of this manuscript for an experimental illustration \cite[section 8.1]{Kirsten.Simoncini.arxiv2020}. In light of the discussion above, our approach might also be interpreted in terms of the ``local basis'' POD framework, see, e.g., \cite{amsallem2011}, where the generality of the bases is ensured by interpolation onto matrix manifolds. For a presentation of this methodology we also refer the reader to the insightful survey \cite[section 4.2]{benner2015review}. In this context, the matrices ${\bm V}_\ell, {{\bm W}}_{r}$ may represent a new {\it truncated} interpolation of the matrices in ${\bm H}\{{\color{black} \bm \Xi}\}$ and ${\bm Z}\{{\color{black} \bm \Xi}\}$, in a completely algebraic setting. \section{A dynamic algorithm for creating the \MPOD\ approximation space} \label{sec:dynamic} We describe an adaptive procedure for selecting the time instances employed in the first step of the basis construction of section~\ref{sec:matpod}. This procedure will be used for the selection of both the solution and the nonlinear function snapshots. The dynamic procedure starts with a coarse discretization of the time interval (using one fourth of the available nodes), and then continues with two successive refinements if needed. 
Let $n_{\max}$ be the maximum number of available snapshots of the considered function ${\bm \Xi}(t)$, $t \in [t_0, T_f]$ with $t_0=0$. A first set ${\cal I}_1$ of $n_{\max}/4$ equispaced time instances in $[0, T_f]$ is considered (symbol `$\star$' in \cref{line}). If needed, a second set ${\cal I}_2$ of $n_{\max}/4$ equispaced time instances is considered (symbol `$\times$'), whereas the remaining $n_{\max}/2$ time instances (symbol `$\square$' and set ${\cal I}_3$) are considered in the third phase, if needed at all. \begin{figure}[htb] \begin{center} \includegraphics[width=.35\textwidth]{timeline2} \caption{The three evaluation phases of the refinement procedure. \label{line}} \end{center} \end{figure} The initial \MPOD\ basis matrices of dimension $\kappa$, i.e. $\widetilde{\bm V}_1$ and $\widehat{\bm W}_1$ from \cref{eq:basis}, are constructed by processing the snapshot ${\bm \Xi}(t_0)$. For all other time instances $t_i$ in each phase, we use the following inclusion criterion \begin{equation} {\rm if} \quad \epsilon_i := \frac{\|{\bm \Xi}(t_i) - \Pi_\ell {\bm \Xi}(t_i) \Pi_r\|}{\|{\bm \Xi}(t_i)\|} > {\tt tol} \quad {\rm then \, include} \label{error} \end{equation} where $\Pi_\ell$ and $\Pi_r$ are orthogonal projectors onto the left and right spaces (these are implicitly constructed by performing a reduced QR decomposition of the current matrices $\widetilde{\bm V}_i$ and $\widehat{\bm W}_i$ on the fly). If a snapshot is selected for inclusion, the leading singular triplets of ${\bm \Xi}(t_i)$ are appended to the current bases, and the leading $\kappa$ components are retained as in Algorithm~\ref{alg:snapstep}; then the next time instance in the phase is investigated. If, by the end of the phase, the {arithmetic} mean of the errors $\epsilon_i$ in \cref{error} is above {{\tt tol}}, it means that the bases are not sufficiently good and we move on to the next refinement phase. Otherwise, the snapshot selection procedure ends and the matrices $\overline{\bm V}$ and $\breve{\bm W}$ are computed and pruned by the second step in section~\ref{sec:matpod} to form the final {\sc 2s-pod} basis matrices ${\bm V}_{\ell,{\bm \Xi}} \in \RR^{n \times \nu_\ell}$ and ${\bm W}_{r,{\bm \Xi}} \in \RR^{n \times \nu_r}$. 
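A possible Python/NumPy sketch of the test (\ref{error}), with the projectors realized through thin QR factorizations as indicated above (the Frobenius norm is used here for concreteness; the names are ours):
\begin{verbatim}
import numpy as np

def inclusion_error(Xi, Vt, Wh):
    Ql, _ = np.linalg.qr(Vt)   # orthonormal basis of the left space
    Qr, _ = np.linalg.qr(Wh)   # orthonormal basis of the right space
    E = Xi - Ql @ (Ql.T @ Xi @ Qr) @ Qr.T
    return np.linalg.norm(E) / np.linalg.norm(Xi)

# the snapshot Xi is included whenever inclusion_error(Xi, Vt, Wh) > tol
\end{verbatim}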
The full {\sc dynamic} procedure to create the \MPOD\ approximation space is presented in \cref{alg:snapadap}.} \begin{algorithm} \caption{\texttt{{\sc dynamic} \MPOD\ }} \label{alg:snapadap} \begin{algorithmic}[1] \STATE{\textbf{INPUT:} Function ${\bm \Xi}\,:\, {\cal T} \mapsto \RR^{n \times n}$, $n_{\max}$, {\tt tol}, $\kappa$, phase sets ${\cal I}_{1,2,3}$} \STATE{\textbf{OUTPUT:} ${\bm V}_{\ell,{\bm \Xi}} \in \RR^{n \times \nu_{\ell}}$ and ${\bm W}_{r,{\bm \Xi}} \in \RR^{n \times \nu_{r}}$} \STATE{ Compute $[{\bm V}_1, {\bm \Sigma}_1, {\bm W}_1] = \mbox{\tt svds}\left({\bm \Xi}(t_0), \kappa\right)$;} \STATE{ Let $\widetilde{\bm V}_1 = {\bm V}_1$, $\widehat{\bm W}_1 = {\bm W}_1, \widetilde{\bm \Sigma}_1 = {\bm \Sigma}_1$;} \vskip 0.05in \STATE{\hspace{0.1cm} {\it Dynamic selection step :}} \vskip 0.05in \FOR{{\sc phase} $= 1,2,3$} \FOR{all $t_i \in {\cal I}_{\footnotesize\mbox{\sc phase}}$} \IF{\cref{error} satisfied} \STATE{Process snapshot ${\bm \Xi}(t_i)$ using Dynamic selection (\cref{alg:snapstep});} \STATE{Update {$\widetilde{\bm V}_i$ and $\widehat{\bm W}_i$};} \ENDIF \ENDFOR \IF{$ \sum_{t_i\in {\cal I}_{\tiny\mbox{\sc phase}}} \epsilon_i \le {\tt tol} |{\cal I}_{\tiny\mbox{\sc phase}}|$} \STATE{{\bf break} and go to 17;} \ENDIF \ENDFOR \vskip 0.05in \STATE{\hspace{0.1cm} {\it Bases Pruning:}} \vskip 0.05in \STATE{Determine the reduced SVD of $\widetilde{\bm V}_{n_s}\widetilde{\bm\Sigma}_{n_s}^{\frac 1 2}$ and $\widetilde{\bm\Sigma}_{n_s}^{\frac 1 2} \widehat {\bm W}_{n_s}^\top$ in (\ref{eqn:Wtilde});} \STATE{Determine the final reduced ${\bm V}_{\ell,{\bm \Xi}}$ and ${\bm W}_{r,{\bm \Xi}}$ as in (\ref{eq:partitioning}) using the criterion (\ref{kselect});} \STATE{\bf Stop} \end{algorithmic} \end{algorithm} \section{Approximation of the nonlinear function ${\cal F}_k$ in the reduced model} \label{sec:matdeim} To complete the reduction of the original problem to the small size problem (\ref{Realproblemsmall0}), we need to discuss the derivation of the approximation $\reallywidehat{{\cal F}_{k}({\bm Y}_k,t)}$. Let $\{{\cal F}(t_j)\}_{j=1}^{n_s}$ be a set of snapshots of the nonlinear function ${\cal F}$ at times $t_j$, $j=1, \ldots, n_s$. Using Algorithm~\ref{alg:snapadap} we compute the two matrices ${\bm V}_{\ell,{\cal F}}\in\RR^{n\times p_1}$, ${\bm W}_{r,{\cal F}}\in\RR^{n\times p_2}$ so as to approximate ${\cal F}(t)$ as \begin{equation}\label{eq:Ftilde} {\cal F}(t) \approx {\bm V}_{\ell,{\cal F}}{\bm C}(t){\bm W}_{r,{\cal F}}^{\top}, \end{equation} with ${\bm C}(t)$ to be determined. Here $p_1, p_2$ play the role of $\nu_{\ell}, \nu_r$ in the general description, and they will be used throughout as basis truncation parameters for the nonlinear snapshots. By adapting the DEIM idea to a two-sided perspective, the coefficient matrix ${\bm C}(t)$ is determined by selecting independent rows from the matrices ${\bm V}_{\ell,{\cal F}}$ and ${\bm W}_{r,{\cal F}}$, so that $$ {\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}}{\bm C}(t){\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}} = {\bm P}_{\ell, {\cal F}}^{\top}{\cal F}(t){\bm P}_{r,{\cal F}} , $$ where ${\bm P}_{\ell, {\cal F}} = [e_{\pi_1}, \cdots,e_{\pi_{p_1}}] \in \mathbb{R}^{n \times p_1}$ and ${\bm P}_{r,{\cal F}} = [e_{\gamma_1}, \cdots,e_{\gamma_{p_2}}]\in \mathbb{R}^{n \times p_2}$ are columns of the identity matrix of size $n$. Both matrices are defined similarly to the selection matrix ${\sf P}$ from section~\ref{sec:POD_DEIM}, and they act on ${\bm V}_{\ell,{\cal F}}, {\bm W}_{r,{\cal F}}$, respectively. 
If $ {\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}}$ and ${\bm P}_{r,{\cal F}}^{\top}{\bm W}_{r,{\cal F}}$ are nonsingular, then the coefficient matrix ${\bm C}(t)$ is determined by $$ {\bm C}(t) = ({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1}{\bm P}_{\ell, {\cal F}}^{\top}{\cal F}(t){\bm P}_{r,{\cal F}}({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1}. $$ With this coefficient matrix ${\bm C}(t)$, the final approximation (\ref{eq:Ftilde}) becomes\footnote{If the nonlinear function ${\cal F}(t)$ is symmetric for all $t \in [0, T_f]$, thanks to \cref{cor:symmetry} this approximation will preserve the symmetry of the nonlinear function.} \begin{equation} \label{deimapprox} \widetilde{\cal F}(t) = {\bm V}_{\ell,{\cal F}}({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1}{\bm P}_{\ell, {\cal F}}^{\top}{\cal F}(t) {\bm P}_{r,{\cal F}}({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1} {\bm W}_{r,{\cal F}}^{\top} =: {\color{black} \bm Q}_{\ell,{\cal F}}{\cal F}(t){\color{black} \bm Q}_{r,{\cal F}}^{\top} . \end{equation} Note that ${\color{black} \bm Q}_{*,{\cal F}}$ are oblique projectors. {A similar approximation can be found in \cite{sorensen2016}}. In addition to that of the two spaces, an important role is played by the choice of the interpolation indices contained in ${\bm P}_{\ell, {\cal F}}$ and ${\bm P}_{r,{\cal F}}$. We suggest determining {\color{black}these indices for the matrices ${\bm P}_{\ell, {\cal F}}$ and ${\bm P}_{r,{\cal F}}$ as the output of {\tt q-deim} (\cite{gugercin2018})} with inputs ${\bm V}_{\ell,{\cal F}}$ and ${\bm W}_{r,{\cal F}}$, respectively. We next provide a bound measuring the distance between the error obtained with the proposed oblique projection (\ref{deimapprox}) and the best approximation error of ${\cal F}$ in the same range spaces, where we recall that ${\bm V}_{\ell,{\cal F}}$ and ${\bm W}_{r,{\cal F}}$ have orthonormal columns. This bound is a direct extension to the matrix setting of \cite[Lemma 3.2]{chaturantabut2010nonlinear}. \begin{proposition} \label{errprop} Let ${\cal F} \in \RR^{n \times n}$ be an arbitrary matrix, and let $ \widetilde{\cal F} = {\color{black} \bm Q}_{\ell,{\cal F}}{\cal F}{\color{black} \bm Q}_{r,{\cal F}}^{\top}, $ as in \cref{deimapprox}. Then \begin{equation} \label{errbound} \| {\cal F} - \widetilde{\cal F} \|_F \le {\color{black} c_{\ell}}{\color{black} c_{r}}\, \|{\cal F} - {\bm V}_{\ell,{\cal F}}{\bm V}_{\ell,{\cal F}}^{\top} {\cal F}{\bm W}_{r,{\cal F}}{\bm W}_{r,{\cal F}}^{\top} \|_F \end{equation} where ${\color{black} c_{\ell}} = \left \| ({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1} \right \|_2$ and ${\color{black} c_{r}} = \left \| ({\bm P}_{r,{\cal F}}^{\top}{\bm W}_{r,{\cal F}})^{-1} \right \|_2$. \end{proposition} {\it Proof.} Recall that ${\bm f} = \mbox{\tt vec}({\cal F})$. 
Then, by the properties of the Kronecker product {\small \begin{equation*} \begin{split} \| {\cal F} - \widetilde{\cal F} \|_F &= \| \mbox{\tt vec}({\cal F}) - \mbox{\tt vec}(\widetilde{\cal F}) \|_2 = \| {\bm f} - ({\color{black} \bm Q}_{r,{\cal F}} \otimes {\color{black} \bm Q}_{\ell,{\cal F}}){\bm f} \|_2 \\ &= \left \|{\bm f} - ({\bm W}_{r,{\cal F}} \otimes {\bm V}_{\ell,{\cal F}})\left(({\bm P}_{r,{\cal F}} \otimes {\bm P}_{\ell, {\cal F}})^{\top}({\bm W}_{r,{\cal F}} \otimes {\bm V}_{\ell,{\cal F}})\right)^{-1}({\bm P}_{r,{\cal F}} \otimes {\bm P}_{\ell, {\cal F}})^{\top}{\bm f}\right\|_2 \end{split} \end{equation*}} Therefore, by \cite[Lemma 3.2]{chaturantabut2010nonlinear}, {\footnotesize \begin{eqnarray*} {\| {\cal F} - \widetilde{\cal F} \|_F} &\le& { \left\|\left(({\bm P}_{r,{\cal F}} \otimes {\bm P}_{\ell, {\cal F}})^{\top}({\bm W}_{r,{\cal F}} \otimes {\bm V}_{\ell,{\cal F}})\right)^{-1}\right\|_2 \left\|{\bm f} - ({\bm W}_{r,{\cal F}} \otimes {\bm V}_{\ell,{\cal F}})({\bm W}_{r,{\cal F}} \otimes {\bm V}_{\ell,{\cal F}})^{\top}{\bm f}\right\|_2 } \\ &=& { \left\|({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1}\right\|_2 \left\|({\bm P}_{r,{\cal F}}^{\top}{\bm W}_{r,{\cal F}})^{-1}\right\|_2 \left \|{\cal F} - {\bm V}_{\ell,{\cal F}}{\bm V}_{\ell,{\cal F}}^{\top} {\cal F}{\bm W}_{r,{\cal F}}{\bm W}_{r,{\cal F}}^{\top} \right \|_F.}\,\, \square \end{eqnarray*} } We emphasize that ${\color{black} c_{\ell}}, {\color{black} c_{r}}$ do not depend on time, in case $\cal F$ does. As has been discussed in \cite{chaturantabut2010nonlinear},\cite{gugercin2018}, it is clear from \cref{errbound} that minimizing these amplification factors will minimize the error norm with respect to the best approximation onto the spaces $\mbox{\tt Range}({\bm V}_{\ell,{\cal F}})$ and $ \mbox{\tt Range}({\bm W}_{r,{\cal F}})$. The quantities ${\color{black} c_{\ell}}$, ${\color{black} c_{r}}$ depend on the interpolation indices. If the indices are selected greedily, as in \cite{chaturantabut2010nonlinear}, then \begin{equation}\label{eqn:bound_P} {\color{black} c_{\ell}} \le \frac{(1 + \sqrt{2n})^{p_1 - 1}}{\|e_1^T {\bm V}_{\ell,{\cal F}}\|_{\infty}}, \quad {\color{black} c_{r}} \le \frac{(1 + \sqrt{2n})^{p_2 - 1}}{\|e_1^T {\bm W}_{r,{\cal F}}\|_{\infty}}. \end{equation} If the indices are selected by a pivoted QR factorization as in {\color{black}\tt q-deim}, then \begin{eqnarray*} {\color{black} c_{\ell}}\le \sqrt{n-p_1 + 1}\frac{\sqrt{4^{p_1} + 6p_1 -1}}{3}, \quad {\color{black} c_{r}} \le \sqrt{n-p_2 + 1}\frac{\sqrt{4^{p_2} + 6p_2 -1}}{3} , \end{eqnarray*} which are better bounds than those in (\ref{eqn:bound_P}), though still rather pessimistic; see \cite{gugercin2018}. To complete the efficient derivation of the reduced model in \cref{Realproblemsmall0} we are left with the final approximation of ${\cal F}_k$ in (\ref{deimapprox}). 
If ${\cal F}$ is evaluated componentwise, as we assume throughout\footnote{For general nonlinear functions the theory from \cite[Section 3.5]{chaturantabut2010nonlinear} can be extended to both matrices ${\bm V}_{\ell,U}$ and ${\bm W}_{r,U}$.}, then ${\bm P}_{\ell, {\cal F}}^{\top} {\cal F}({\bm V}_{\ell,U}{\bm Y}(t) {\bm W}_{r,U}^{\top},t){\bm P}_{r,{\cal F}} = {\cal F}({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,U}{\bm Y}(t){\bm W}_{r,U}^{\top}{\bm P}_{r,{\cal F}},t)$, so that {\small \begin{align} \label{nonlinapprox} {\cal F}_k({\bm Y}_k, t) &\approx {\bm V}_{\ell,U}^{\top}{\bm V}_{\ell,{\cal F}}({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1}{\bm P}_{\ell, {\cal F}}^{\top} {\cal F}({\bm V}_{\ell,U}{\bm Y}_k(t) {\bm W}_{r,U}^{\top},t){\bm P}_{r,{\cal F}}({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1} {\bm W}_{r,{\cal F}}^{\top}{\bm W}_{r,U} \nonumber\\ & ={\bm V}_{\ell,U}^{\top}{\bm V}_{\ell,{\cal F}}({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,{\cal F}})^{-1}{\cal F} ({\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,U}{\bm Y}_k(t){\bm W}_{r,U}^{\top}{\bm P}_{r,{\cal F}},t) ({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1}{\bm W}_{r,{\cal F}}^{\top}{\bm W}_{r,U}\nonumber \\ & =: \reallywidehat{{\cal F}_k({\bm Y}_k, t)}. \end{align} } The matrices ${{\bm V}_{\ell,U}^{\top}{\bm V}_{\ell,{\cal F}}({\bm P}_{\ell, {\cal F}}^{\top} {\bm V}_{\ell,{\cal F}})^{-1}} \in \RR^{k_1 \times p_1}$ and ${({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1} {\bm W}_{r,{\cal F}}^{\top} {\bm W}_{r,U}} \in \RR^{p_2 \times k_2}$ are independent of $t$, therefore they can be precomputed and stored once and for all. Similarly for the products ${\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,U} \in \RR^{p_1 \times k_1}$ and ${\bm W}_{r,U}^{\top}{\bm P}_{r,{\cal F}} \in \RR^{k_2 \times p_2}$. Note that products involving the selection matrices ${\bm P}$ are not explicitly carried out: the operation simply requires selecting corresponding rows or columns in the other matrix factor. Finally, we remark that in some cases the full space approximation matrix may not be involved. For instance, if ${\cal F}$ is a matrix function (\cite{higham2008}) and $\BU(t)$ is symmetric for all $t \in [0,T_f]$, {\color{black}so that $\BU(t) \approx {\bm V}_{\ell,U}{\bm Y}_k(t) {\bm V}_{\ell,U}^{\top}$, then, recalling \cref{eqn:F}, it holds that $$ {\cal F}_{k}({\bm Y}_k,t) = {\bm V}_{\ell,U}^{\top}{\cal F}({\bm V}_{\ell,U}{\bm Y}_k{\bm V}_{\ell,U}^{\top},t){\bm V}_{\ell,U} \overset{\star}{=} {\cal F}({\bm V}_{\ell,U}^{\top}{\bm V}_{\ell,U}{\bm Y}_k,t) = {\cal F}({\bm Y}_k,t), $$ where the equality $\overset{\star}{=}$ is due to \cite[Corollary 1.34]{higham2008}.} \section{Two-sided POD-DEIM for nonlinear matrix-valued ODEs} \label{extend} To complete the derivation of the numerical method, we need to determine the time-dependent matrix ${\bm Y}_k(t)$, $t \in [0,T_f]$, in the approximation ${\bm V}_{\ell,U}{\bm Y}_k(t){\bm W}_{r,U}^{\top} \approx \BU(t)$, where ${\bm V}_{\ell,U}\in \mathbb{R}^{n \times k_1}$ and ${\bm W}_{r,U}\in \mathbb{R}^{n \times k_2}$, $k_1, k_2 \ll n$ and we let $k=(k_1,k_2)$. The function ${\bm Y}_k(t)$ is computed as the numerical solution to the reduced problem (\ref{Realproblemsmall0}) with $\reallywidehat{{\cal F}_k({\bm Y}_k, t)}$ defined in (\ref{nonlinapprox}). {To integrate the reduced order model \cref{Realproblemsmall0} as time $t$ varies, several alternatives can be considered. 
The vectorized form of the semilinear problem can be treated with classical first or second order semi-implicit methods such as IMEX methods (see, e.g., \cite{ascher1995}), which appropriately handle the stiff and non-stiff parts of the equation. Several of these methods were originally constructed as rational approximations to exponential integrators, which have long been regarded as too expensive for practical purposes. Recent advances in numerical linear algebra have prompted a renewed interest in these powerful methods, see, e.g., \cite{brachet2020comparaison, garcia2014, Grooms2011}. One of the advantages of our matrix setting is that exponential integrators can be applied far more cheaply than in the vector case, thus allowing for a better treatment of the stiff component in the solution; see, e.g., \cite{Autilia2019matri}. Let $\{{\mathfrak t}_i\}_{i=0, \ldots, n_{\mathfrak t}}$ be the nodes discretizing the time interval $[0, T_f]$ with meshsize $h$. Given the (vector) differential equation $\dot\by={\BL}\by + f(t,\by)$, the (first order) exponential time differencing (ETD) Euler method is given by the following recurrence, $$ \by^{(i)} = e^{h {\BL}} \by^{(i-1)} + h \varphi_1(h {\BL}) f({\mathfrak t}_{i-1},\by^{(i-1)}), $$ where $\varphi_1(z) = (e^z -1)/z$. In our setting, ${\BL} = {\bm B}_{k}^\top \otimes \BI_{k_1} + \BI_{k_2}\otimes {\bm A}_{k}$, for which it holds that $e^{h {\BL}} = e^{h {\bm B}_{k}^\top}\otimes e^{h {\bm A}_{k}}$ \cite[Th.10.9]{higham2008}, so that the computation of the exponential of the large matrix ${\BL}$ reduces to $e^{h{\BL}} {\tt vec}(\BY_k) = {\tt vec}(e^{h {\bm A}_k} \BY_k e^{h {\bm B}_k})$. The two matrices appearing in the exponential functions now have small dimensions, so that the computation of the matrix exponential is fully affordable. As a consequence, the integration step can be performed entirely at the matrix level, without resorting to the vectorized form. More precisely (see also \cite{Autilia2019matri}), let ${\mathfrak f}({\bm Y}_k^{(i-1)}) = \reallywidehat{{\cal F}_k({\bm Y}_k^{(i-1)}, {\mathfrak t}_{i-1})}$. Then we can compute $\BY_k^{(i)}\approx \BY_k({\mathfrak t}_{i})$ as $$ \BY_k^{(i)} = e^{h {\bm A}_{k}} \BY_k^{(i-1)} e^{h {\bm B}_{k}} + h\BPhi^{(i-1)}, $$ where the matrix $\BPhi^{(i-1)}$ solves the following linear (Sylvester) matrix equation \begin{equation}\label{eq:Sylvester} {\bm A}_{k} \BPhi + \BPhi {\bm B}_{k} = \frac{1}{h}\left( e^{h{\bm A}_{k}} {\mathfrak f}({\bm Y}_k^{(i-1)}) e^{h {\bm B}_{k}} - {\mathfrak f}({\bm Y}_k^{(i-1)}) \right), \end{equation} so that $h\BPhi^{(i-1)}$ matches the term $h \varphi_1(h {\BL}) f$ in the recurrence above. At each iteration, the application of the matrix exponentials in ${\bm A}_{k}$ and ${\bm B}_{k}$ is required, together with the solution of the small dimensional Sylvester equation. This linear equation has a unique solution if and only if the spectra of ${\bm A}_{k}$ and $-{\bm B}_{k}$ are disjoint, a hypothesis that is satisfied in our setting. The solution $\BPhi^{(i-1)}$ can be obtained by using the Bartels-Stewart method \cite{bartels1972}. In the case of symmetric ${\bm A}_{k}$ and ${\bm B}_{k}$, significant computational savings can be obtained by computing the eigendecomposition of these two matrices at the beginning of the online phase, and then computing both the exponential and the solution to the Sylvester equation explicitly; we refer the reader to \cite{Autilia2019matri} for these implementation details. Matrix-oriented first order IMEX schemes could also be employed (see \cite{Autilia2019matri}); however, in our experiments ETD provided smaller errors at comparable computational costs.
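A minimal sketch of one such ETD step at the matrix level, written in Python with generic SciPy routines, may read as follows; it is only meant to illustrate the structure of the iteration, and is not the diagonalization-based implementation of \cite{Autilia2019matri}.
\begin{verbatim}
# One matrix-oriented ETD Euler step (illustrative sketch only).
import numpy as np
from scipy.linalg import expm, solve_sylvester

def etd_euler_step(Y, Ak, Bk, h, f):
    # f(Y) returns the reduced nonlinear term at the current node.
    EA, EB = expm(h * Ak), expm(h * Bk)   # small k1 x k1 and k2 x k2
    F = f(Y)
    # Sylvester equation: Ak*Phi + Phi*Bk = (EA @ F @ EB - F) / h
    Phi = solve_sylvester(Ak, Bk, (EA @ F @ EB - F) / h)
    return EA @ Y @ EB + h * Phi
\end{verbatim}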
{Concerning the quality of our approximation, error estimates for the full \podeim\ approximation of systems of the form \cref{vecode} have been derived in \cite{wirtz2014, chat2012}, which also take into account the error incurred in the numerical solution of the reduced problem. A crucial hypothesis in the available literature is that ${\bm f}(\bu, t) = \mbox{\tt vec}({\cal F}(\BU,t))$ be Lipschitz continuous with respect to the first argument, and this is also required for exponential integrators. This condition is satisfied for the nonlinear function of our reduced problem. Indeed, consider the vectorized approximation space $\mathbb{V}_{\color{black} \bu} = {\bm W}_{r,{\BU}} \otimes {\bm V}_{\ell,{\BU}}$ and the oblique projector $\mathbb{ Q}_{\bm f} = {\bm Q}_{r,{\cal F}} \otimes {\bm Q}_{\ell,{\cal F}}$ from (\ref{deimapprox}). If we denote by $\widehat{\bm Y}_k(t)$ the approximate solution of \cref{Realproblemsmall0} with ${\cal F}_k$ approximated as above, then \begin{equation} \label{errorequal} \|{\bm U}(t) - {\bm V}_{\ell,\BU}\widehat{\bm Y}_k(t){\bm W}_{r,{\BU}}^{\top}\|_F = \|\bu(t) - \mathbb{V}_{\color{black} \bu}\widehat{\bm y}_k(t)\|_2, \end{equation} where $\widehat{\bm y}_k(t) = \mbox{\tt vec}(\widehat{\bm Y}_k(t))$ solves the reduced problem $\dot{\widehat{\bm y}}_k(t) = \mathbb{V}_{\color{black}\bu}^{\top}{\bm L}\mathbb{V}_{\color{black} \bu}\widehat{\bm y}_k(t) + \mathbb{V}_{\color{black} \bu}^{\top}\mathbb{Q}_{\color{black} \bm f}{\bm f}(\mathbb{V}_{\color{black} \bu}\widehat{\bm y}_k, t)$. For ETD applied to semilinear differential equations the additional requirement is that $\BL$ be sectorial, which can also be assumed for the discretization of the considered operator $\ell$; see \cite[Th.4 for order $s=1$]{hochbruck2005}. The error in \cref{errorequal} can therefore be approximated by applying the a priori \cite{chat2012} or a posteriori \cite{wirtz2014} error estimates to the vectorized system associated with $\widehat{\bm y}_k$. Moreover, if ${\bm f}(\bu, t)$ is Lipschitz continuous, this property is preserved by the reduced order vector model, since $\mathbb{V}_{\color{black} \bu}$ has orthonormal columns and $\|\mathbb{Q}_{\color{black} \bm f}\|$ is a bounded constant, as shown in \cref{errprop}; see, e.g., \cite{chat2012}. The complete \MPDEIM\ method for the semilinear differential problem \cref{Realproblem} is presented in Algorithm \MPDEIM. In Table~\ref{tableparameters} we summarize the key dimensions and parameters of the whole procedure. A technical discussion of the algorithm and its computational complexity is postponed to \cref{sec:alg}. \vskip 0.1in \begin{algorithm} { {\bf Algorithm} \MPDEIM\ } \hrule \vskip 0.1in {\bf INPUT:} Coefficient matrices of \cref{Realproblem}, ${\cal F}: \RR^{n \times n} \times [0,T_f] \rightarrow \RR^{n \times n}$ (or its snapshots), $n_{\max}$, $\kappa$, and $\tau$, $n_{\mathfrak t}$, $\{{\mathfrak t}_i\}_{i=0, \ldots, n_{\mathfrak t}}$.
{\bf OUTPUT:} $\BV_{\ell,\BU}, \BW_{r,\BU}$ and ${\bm Y}_k^{(i)}$, $i=0, \ldots, n_{\mathfrak t}$ to implicitly form the approximation $\BV_{\ell,\BU} {\bm Y}_k^{(i)} \BW_{r,\BU}^\top \approx \BU({\mathfrak t}_i)$ \vskip 0.1in \hspace{1.2cm}\emph{Offline:} \begin{enumerate} \item Determine $\BV_{\ell,\BU}, \BW_{r,\BU}$ for {$\{\BU(t_i)\}_{i=1}^{n_{\max}}$} and $\BV_{\ell,{\cal F}}, \BW_{r,{\cal F}}$ for $\{{\cal F}(t_i)\}_{i=1}^{n_{\max}}$ via \cref{alg:snapadap} ({\sc dynamic} \MPOD) using at most $n_s$ of the $n_{\max}$ time instances (if not available, this includes approximating the snapshots $\{{\cal F}(t_i)\}_{i=1}^{n_{\max}}$, $\{{\BU}(t_i)\}_{i=1}^{n_{\max}}$ as the time interval is spanned); \item Compute ${\bm Y}_k^{(0)} = {\bm V}_{\ell,U}^{\top}\BU_{0}{\bm W}_{r,U}$, ${\bm A}_{k} = {\bm V}_{\ell,U}^{\top}{\color{black} \bm A}{\bm V}_{\ell,U}$, ${\bm B}_{k} = {\bm W}_{r,U}^{\top} {\color{black} \bm B}{\bm W}_{r,U};$ \item Determine ${\bm P}_{\ell, {\cal F}}, {\bm P}_{r,{\cal F}}$ using {\color{black}\tt q-deim} ({\sc 2s-deim}); \item {Compute ${{\bm V}_{\ell,U}^{\top}{\bm V}_{\ell,{\cal F}}({\bm P}_{\ell, {\cal F}}^{\top} {\bm V}_{\ell,{\cal F}})^{-1}}$, ${({\bm W}_{r,{\cal F}}^{\top}{\bm P}_{r,{\cal F}})^{-1} {\bm W}_{r,{\cal F}}^{\top} {\bm W}_{r,U}}$, ${\bm P}_{\ell, {\cal F}}^{\top}{\bm V}_{\ell,U}$ and ${\bm W}_{r,U}^{\top}{\bm P}_{r,{\cal F}}$};\\ \emph{Online:} \item For each $i=1, \ldots, n_{\mathfrak t}$ \begin{itemize} {\item[(i)] Evaluate $ \reallywidehat{{\cal F}_k({\bm Y}_k^{(i-1)},{\mathfrak t}_{i-1})}$ as in \cref{nonlinapprox} using the matrices computed above;} \item[(ii)] Numerically solve the matrix equation (\ref{eq:Sylvester}) and compute $$ \BY_k^{(i)} = e^{h {\bm A}_{k}} \BY_k^{(i-1)} e^{h {\bm B}_{k}} + h\BPhi^{(i-1)} ; $$ \end{itemize} \end{enumerate} \end{algorithm} \begin{table}[hbt] \caption{Summary of leading dimensions and parameters of Algorithm \MPDEIM. \label{tableparameters}} \centering \begin{tabular}{|l|l|} \hline Par. & Description \\ \hline {\scriptsize $n_s$ } & {\scriptsize Employed number of snapshots} \\ {\scriptsize $k$ } & {\scriptsize Dimension of vector POD subspace} \\ {\scriptsize $p$} & {\scriptsize Dimension of vector DEIM approx. space} \\ {\scriptsize $N$} & {\scriptsize Length of ${\bm u}(t)$, $N = n^2$} \\ {\scriptsize $\kappa$} & {\scriptsize Dimension of the snapshot space approximation } \\ {\scriptsize $k_i$ } & {\scriptsize Dimension of left ($i = 1$) and right ($i = 2$) \MPOD\ subspaces}\\ {\scriptsize $p_i$ } & {\scriptsize Dimension of left ($i = 1$) and right ($i = 2$) \MODEIM\ subspaces}\\ {\scriptsize $n$} & {\scriptsize Dimension of square ${\bm U}(t)$, for $n=n_x=n_y$} \\ \hline \end{tabular} \end{table} \section{Numerical experiments} \label{sec:exp} {In this section we illustrate the performance of our matrix-oriented \MPDEIM\ integrator. In section~\ref{sec:expesF} we analyze the quality of the approximation space created by the {\sc dynamic} algorithm on three nonlinear functions {with different characteristics}. Then in section~\ref{sec:full} we focus on the ODE setting by comparing the new \MPDEIM\ procedure to the standard \podeim. \subsection{Approximation of a nonlinear function ${\cal F}$}\label{sec:expesF} We investigate the effectiveness of the proposed {\sc dynamic} \MPOD\ procedure for determining the two-sided approximation space of a nonlinear function. { As a reference comparison, we consider the vector form of the DEIM approximation (hereafter {\sc vector}) in section~\ref{sec:POD_DEIM}.
We also include comparisons with a simple two-sided matrix reduction strategy that uses a sequential evaluation of all available snapshots, together with the updating of the bases ${\bm V}_{\ell,{\cal F}}$ and ${\bm W}_{r,{\cal F}}$ during the snapshot processing. In particular, if $[{\bm V}_j,{\bm \Sigma}_j, {\bm W}_j] = \mbox{\tt svds}\left({\cal F}_j, \kappa \right)$ is the singular value decomposition of ${\cal F}_j$ limited to the leading $\kappa$ singular triplets, then in this simple approach the basis matrices ${\bm V}_{\ell,{\cal F}}$ and ${\bm W}_{r,{\cal F}}$ are directly updated by orthogonal reduction of the augmented matrices \begin{equation}\label{truncsvds} \begin{pmatrix} {\bm V}_{\ell,{\cal F}}, {\bm V}_j{\bm \Sigma}_j^{\frac 1 2} \end{pmatrix} \in \RR^{n \times {\kappa}_1}\quad \mbox{and} \quad \begin{pmatrix} {\bm W}_{r,{\cal F}}, {\bm W}_j{\bm \Sigma}_j^{\frac 1 2} \end{pmatrix} \in \RR^{n \times {\kappa}_2} \end{equation} respectively, where ${\kappa}_1, {\kappa}_2 \ge \kappa$. To make this procedure comparable in terms of memory to \cref{alg:snapstep}, we enforce that the final dimension $\nu_j$ of each basis satisfies {$\nu_j \le \kappa$, for $j = \ell , r$.} We will refer to this as the {\sc vanilla} procedure for adding a snapshot to the approximation space; see, for instance, \cite{Oxberryetal.17}, \cite{Kirsten.21} for additional details. \begin{example}\label{ex:1} {\rm Consider the nonlinear functions $\phi_i \, : \Omega \times [0,T_f] \, \to \RR$, $\Omega \subset \RR^2$, $i=1,2,3$ defined as \begin{equation*} \footnotesize \begin{cases} \phi_1(x_1,x_2,t) &= \frac{x_2}{\sqrt{ (x_1+x_2 - t)^2 + (2x_1 - 3t)^2 + 0.01^2 }}, \,\, \Omega=[0,2]\times [0,2], T_f=2,\\ \phi_2(x_1,x_2,t) &= \frac{x_1x_2}{(x_2 t+0.1)^2}+ \frac{2^{(x_1+x_2)}}{\sqrt{ (x_1+x_2 - t)^2 + (x_2^2+x_1^2 - t^2)^2 + 0.01^2 }}, \,\, \Omega=[0,1]\times [0,1.5], T_f=3,\\ \phi_3(x_1,x_2,t) &= \frac{x_1(0.1+t)}{(x_2 t+0.1)^2}+ \frac{t2^{(x_1+x_2)}}{\sqrt{ (x_1+x_2 - t)^2 + (x_2^2+x_1^2 - 3t)^2 + 0.01^2 }}, \,\, \Omega=[0,3]\times [0,3], T_f=5. \end{cases} \end{equation*} Each function is discretized with $n=2000$ nodes in each spatial direction to form three matrix-valued functions ${\cal F}^{(i)}: [0, T_f] \rightarrow \RR^{n \times n}$, for $i = 1,\ldots,3$, respectively. {In the truncation criterion (\ref{kselect}), for all functions we set $n_{\max} = 60$ and ${\tau} = 10^{-3}$}. The function $\phi_1$ shows significant variations at the beginning of the time window, $\phi_3$ varies more towards the end of the time span, while $\phi_2$ is somewhere in between. The approximations obtained with the considered methods are reported in Table~\ref{tablekappa50-70} for $\kappa = 50$ and $\kappa = 70$, with the following information: For the adaptive snapshot selection procedure, we indicate the required number of {\sc phases} and the final total number $n_s$ of snapshots used, the CPU time to construct the basis vectors (time for \cref{alg:snapstep} plus time for the SVDs in \cref{truncsvds} or {\sc vanilla}), the final dimensions $\nu_{\ell}$ and $\nu_{r}$ and finally, the arithmetic mean of the errors $ \|{\cal F}(t_j) - {\bm V}_{\ell,{\cal F}}{\bm V}_{\ell,{\cal F}}^{\top}{\cal F}(t_j){\bm W}_{r,{\cal F}}{\bm W}_{r,{\cal F}}^{\top}\|/\|{\cal F}(t_j)\| $ over 300 equispaced timesteps $t_j$, for each ${\cal F} = {\cal F}^{(i)}$, $i = 1,2,3$.
For the vector approach, where we used $n_s=\kappa$, the reported time consists of the CPU time needed to perform the SVD of the long snapshots, while the error is measured using the vector form corresponding to the formula above; see the description at the beginning of section~\ref{sec:full}. \begin{table}[htb] \centering {\footnotesize \begin{tabular}{|c|r|r|r|r|r|r|r|r|r|} \hline & & \multicolumn{4}{c|}{$\kappa=50$} & \multicolumn{4}{c|}{$\kappa=70$} \\ \hline & & { phases} & { time} & $\nu_{\ell}$/$\nu_{r}$ & { error} & {phases} & { time} & $\nu_{\ell}$/$\nu_{r}$ & { error} \\ ${\bm \Xi}$ &{ alg.} & {($n_s$)} & sec.& & & {($n_s$)} & sec.& & \\ \hline \multirow{3}{*}{$\phi_1$} & {\sc dynamic} & 2(9)& 3.5 & 33/39 & $6\cdot10^{-4}$ & 1(7) & 4.7 & 40/50 & $3\cdot10^{-4}$ \\ & {\sc vanilla} & -(60) & 27.6 & 42/50 & $6\cdot10^{-4}$ & -(60) & 38.8 & 42/60 & $3\cdot10^{-4}$ \\ & {\sc vector} & -(50) & 35.9 & 41 & $1\cdot10^{-3}$ & -(70) & 77.3 & 56 & $3\cdot10^{-4}$\\ \hline \multirow{3}{*}{$\phi_2$} & {\sc dynamic} & 3(21) & 8.6 & 45/26 & $8\cdot10^{-4}$ & 2(10) & 6.1 & 48/30 & $6\cdot10^{-4}$ \\ & {\sc vanilla} & -(60) & 25.6 & 50/37 & $4\cdot10^{-4}$ & -(60) & 39.6 & 58/37 & $2\cdot10^{-4}$ \\ & {\sc vector} & -(50) & 42.7 & 36 & $6\cdot10^{-3}$ & -(70) & 91.8 & 47 & $2\cdot10^{-3}$ \\ \hline \multirow{3}{*}{$\phi_3$} & {\sc dynamic} & 2(11) & 4.4 & 34/33 & $1\cdot10^{-3}$ & 1(10) & 5.9 & 39/39 & $3\cdot10^{-4}$ \\ & {\sc vanilla} &-(60) & 25.4 & 46/46 & $2\cdot10^{-4}$ &-(60) & 38.6 & 46/46 & $2\cdot10^{-4}$ \\ & {\sc vector} & -(50) & 47.1 & 50 & $2\cdot10^{-3}$ & -(70) & 92.5 & 64 & $8\cdot10^{-4}$ \\ \hline \end{tabular} \caption{Example~\ref{ex:1}. Performance of {\sc dynamic}, {\sc vanilla} and {\sc vector} algorithms for $n = 2000$. \label{tablekappa50-70}} } \end{table} Of the two matrix-oriented procedures, the dynamic one outperforms the simplified one, both in terms of space dimensions $n_s$, $\nu_\ell$ and $\nu_r$, and in terms of CPU time, especially when not all time selection phases are needed. The error is comparable for the two methods. Not unexpectedly, increasing $\kappa$ allows one to save on the number of snapshots $n_s$, though a slightly larger reduced dimension $\nu_{\ell}/\nu_{r}$ may occur. The vector method is not competitive for any of the observed parameters, taking into account that vectors of length $n^2$ need to be stored. } $\square$ \end{example} \subsection{Solution approximation of the full problem}\label{sec:full} We report on a selection of numerical experiments with the {\sc dynamic} \MPDEIM\ procedure on matrix semilinear differential equations of the form \cref{Realproblem}. Once again, we compare the results with a standard vector procedure that applies standard {\sc pod-deim} to the vectorized solution and nonlinear function snapshots. In particular, we apply the (vectorized) adaptive procedure from \cref{sec:dynamic} using the error $\| {\pmb \xi} - {\sf V}_k{\sf V}_k^{\top}{\pmb \xi}\| /\|{\pmb \xi}\|$ for the selection criterion, where ${\pmb \xi}$ is the vectorized snapshot and ${\sf V}_k$ is the existing {\sc pod} basis. {Notice that in the vector setting we need to process as many nodes as the final space dimension, which depends on the tolerance $\tau$. This is due to the fact that no reduction takes place.} In all experiments CPU times are in seconds, and all bases are truncated using the criterion in \cref{kselect}.
To illustrate the quality of the obtained numerical solution, we also report on the evaluation of the following average relative error norm \begin{equation}\label{errormeasure} \bar{\cal E}({\bm U})= \frac{1}{n_\mathfrak{t}} \sum_{j = 1}^{n_\mathfrak{t}}\frac{\| {\bm U}^{(j)} - \widetilde{\BU}^{(j)}\|_F}{\|{\bm U}^{(j)}\|_F}, \end{equation} where $\widetilde{\BU}^{(j)}$ represents either the {\sc dynamic} approximation, or the matricization of the {\sc vector} approximation and ${\bm U}^{(j)}$ is determined by means of the exponential Euler method applied to the original problem. \cref{tableode1} shows the key numbers for the basis construction for both methods. For either ${\bm U}$ or ${\cal F}$ we report the number of {\sc phases}, the number $n_s$ of included snapshots, and the final space dimensions after the reduction procedure. We first describe the considered model problem and then comment on the numbers in \cref{tableode1} and on the detailed analysis illustrated in \cref{breakdowntable}. We stress that these may be considered typical benchmark problems for our problem formulation. We also remark that for some of the experimental settings the problem becomes symmetric, so that thanks to Remark~\ref{cor:symmetry} the two bases could be taken to be the same with further computational and memory savings. Nonetheless, we do not exploit this extra feature in the reported experiments. For all examples the full matrix problem has dimension $n = 1000$, while the selected values for $\kappa$, $n_{\mbox{\tiny max}}$ and $n_{\mathfrak t}$ are displayed in Table \ref{tableode1} and Table \ref{breakdowntable}, respectively. \begin{example}{\rm {\it The 2D Allen-Cahn equation \cite{allen1979}.} \label{ex2} Consider the equation\footnote{Note that the linear term $-u$ is kept in the nonlinear part of the operator.} \begin{equation} \label{ac} u_{t} = \epsilon_1 \Delta u - \frac{1}{\epsilon_2^2}\left(u^3-u\right), \quad \Omega = [a,b]^2, \quad t \in [0,T_f], \end{equation} with initial condition $u(x,y,0) = u_0$. The first example is referred to as {\sc ac 1}. As in \cite{song2016}, we use the following problem parameters: $\epsilon_1 = 10^{-2}$, $\epsilon_2 = 1$, $a= 0$, $b=2\pi$ and $T_f = 5$. Finally, we set $u_0 = 0.05\sin x\cos y$ and impose homogeneous Dirichlet boundary conditions. The second problem (hereafter {\sc ac 2}) is a mean curvature flow problem (\cite{evans1991}) of the form \cref{ac} with data as suggested, e.g., in \cite[Section 5.2.2]{ju2015}, that is periodic boundary conditions, $\epsilon_1 = 1$, $a = -0.5$, $b= 0.5$, $T_f = 0.075$, with $u_0 = \tanh \left(\frac{0.4 - \sqrt{x^2 + y^2}}{\sqrt{2}\epsilon_2}\right)$. Following \cite[Section 5.2.2]{ju2015} we consider $\epsilon_2 \in \{0.01, 0.02, 0.04\}$. As $\epsilon_2$ decreases, stability reasons require the use of finer time discretizations $n_{\mathfrak t}$, also leading to larger values of $n_{\mbox{\tiny max}}$ and $\kappa$, as indicated in \cref{tableode1} and Table~\ref{breakdowntable}, where we report our experimental results. } $\square$ \end{example} \begin{example}{\rm {\it Reaction-convection-diffusion equation.} \label{ex3} We consider the following reaction-convection-diffusion (hereafter {\sc rcd}) problem, also presented in \cite{caliari2009}, \begin{equation} \label{rda} u_{t} = \epsilon_1 \Delta u + (u_x + u_y) + u(u-0.5)(1-u), \quad \Omega = [0,1]^2, \quad t \in [0,0.3] .
\end{equation} The initial condition is given by $u_0 = 0.3 + 256\left(x(1-x)y(1-y)\right)^2$, while homogeneous Neumann boundary conditions are imposed. In \cref{tableode1} and Table~\ref{breakdowntable} we present results for $\epsilon_1 \in \{0.5, 0.05\}$. } $\square$ \end{example} \begin{table}[htb] \caption{\cref{ex2,ex3}: Performance of {\sc dynamic} and {\sc vector} algorithms for $n = 1000$. \label{tableode1}} \begin{center} {\footnotesize \begin{tabular}{|c|r|c|r|rrr|} \hline {\sc pb.} &$n_{\mbox{\tiny max}}$/$\kappa$&${\bm \Xi}$ &{\sc algorithm} & {\sc phases} & {$n_s$} & $\nu_{\ell}$/$\nu_{r}$ \\ \hline \multirow{4}{*}{{\sc ac 1}}& \multirow{4}{*}{40/50} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &8& 9/2 \\ && & {\sc vector} & 2 &9 & 9 \\ && \multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 7& 10/3 \\ & && {\sc vector} & 2 &9 & 9 \\ \hline \multirow{4}{*}{\shortstack{{\sc ac 2} \\ $\epsilon_2 = 0.04$}}&\multirow{4}{*}{400/50} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &2& 15/15 \\ & & &{\sc vector} & 2 &25& 25 \\ & &\multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 3& 27/27 \\ & && {\sc vector} & 2 &40 & 40 \\ \hline \multirow{4}{*}{\shortstack{{\sc ac 2} \\ $\epsilon_2 = 0.02$}}&\multirow{4}{*}{1200/70} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &3& 30/30 \\ & & &{\sc vector} & 1 &28& 28 \\ & &\multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 4& 39/39 \\ & && {\sc vector} & 2 &53& 53 \\ \hline \multirow{4}{*}{\shortstack{{\sc ac 2} \\ $\epsilon_2 = 0.01$}}&\multirow{4}{*}{5000/150} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &3& 62/62 \\ & & &{\sc vector} & 1 &43& 43 \\ & &\multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 5& 73/73 \\ & && {\sc vector} & 2 &92 & 92 \\ \hline \multirow{4}{*}{\shortstack{ {\sc rcd} \\$\epsilon_1 = 0.5$} }&\multirow{4}{*}{60/50} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &3& 10/10 \\ & & &{\sc vector} & 1 &7 & 7 \\ & &\multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 3& 13/13 \\ & && {\sc vector} & 2 &11 & 11 \\ \hline \multirow{4}{*}{\shortstack{ {\sc rcd} \\$\epsilon_1 = 0.05$} }&\multirow{4}{*}{60/50} &\multirow{2}{*}{${\bm U}$} & {\sc dynamic} & 1 &4& 14/14 \\ & & &{\sc vector} & 3 &14 & 14 \\ & &\multirow{2}{*}{${\cal F}$} & {\sc dynamic} &1 & 3& 17/17 \\ & && {\sc vector} & 3 &34& 34 \\ \hline \end{tabular} } \end{center} \end{table} \begin{table}[htb] \caption{Computational time and storage requirements of \MPDEIM\ and standard vector \podeim.
CPU times are in seconds and $n= 1000$.\label{breakdowntable}} \centering \begin{tabular}{|c|r|rrc|rc|r|} \hline & & \multicolumn{3}{|c|}{\sc offline}& \multicolumn{2}{|c|}{\sc online}& \\ \cline{3-5}\cline{6-7} & & {\sc basis} & {\sc deim} & && &{\sc rel.} \\ {\sc pb.} & {\small \sc method} & {\sc time} &{\sc time}& {\sc memory} & {\sc time} ($n_{\mathfrak t}$)& {\sc memory} & {\sc error}\\ \hline \multirow{2}{*}{\sc ac 1} & \sc dynamic & 1.8 & 0.001 & $200n$ &0.009 (300)& $24n$& $1 \cdot 10^{-4}$ \\ &\sc vector & 0.6 & {0.228} &$18n^2$&0.010 (300)& $18n^2$& $1\cdot 10^{-4}$ \\ \hline \multirow{2}{*}{\shortstack{{\sc ac 2}\\$0.04$}} & \sc dynamic & 0.8 & 0.005 & $200n$ &0.010 (300)& $84n$& $3 \cdot 10^{-4}$ \\ &\sc vector & {8.4} & {3.745} &$65n^2$&0.020 (300)& $65n^2$& $2\cdot 10^{-4}$ \\ \hline \multirow{2}{*}{\shortstack{{\sc ac 2}\\$0.02$}} & \sc dynamic & 1.8 & 0.004 & $280n$ &0.140 (1000)& $138n$& $2 \cdot 10^{-4}$ \\ &\sc vector & 14.56 & {5.273} &$81n^2$&0.120 (1000)& $81n^2$& $3\cdot 10^{-5}$ \\ \hline \multirow{2}{*}{\shortstack{{\sc ac 2}\\$0.01$}} & \sc dynamic & 5.3 & 0.008 & $600n$ &0.820 (2000)& $270n$& $5 \cdot 10^{-4}$ \\ &\sc vector & 46.2 & {13.820} &$135n^2$&0.420 (2000)& $135n^2$& $2\cdot 10^{-4}$ \\ \hline \multirow{2}{*}{\shortstack{{\sc rcd}\\$0.5$}} & \sc dynamic & 0.8 & 0.001 & $200n$ &0.008 (300)& $46n$& $2 \cdot 10^{-4}$ \\ &\sc vector & 0.6 & {0.277} &$18n^2$&0.010 (300)& $18n^2$& $1\cdot 10^{-4}$ \\ \hline \multirow{2}{*}{\shortstack{{\sc rcd}\\$0.05$}} & \sc dynamic & 0.9 & 0.001 & $200n$ &0.010 (300)& $62n$& $2 \cdot 10^{-4}$ \\ &\sc vector & 4.1 & {2.297} &$48n^2$&0.010 (300)& $48n^2$& $1\cdot 10^{-4}$ \\ \hline \end{tabular} \end{table} {\cref{tableode1} shows that for both the solution and the nonlinear function snapshots, the {\sc dynamic} procedure requires merely one phase, and it retains snapshots at only a few of the time instances. On the other hand, the vector approach typically requires two or even three phases to complete the procedure. The dimension of the bases is not comparable for the matrix and vector approaches, since these are subspaces of spaces of significantly different dimensions, namely $\RR^n$ and $\RR^{n^2}$ in the matrix and vector cases, respectively. Nonetheless, it is clear that the memory requirements are largely in favor of the matrix approach, as shown in \cref{breakdowntable}, where computational and memory details are reported. In particular, in \cref{breakdowntable} the offline timings are broken down into two main parts. The column {\sc basis time} collects the cost of the Gram-Schmidt orthogonalization in the {\sc vector} setting, and the cumulative cost of \cref{alg:snapstep} (for both the solution and nonlinear function snapshots) for the {\sc dynamic} procedure. Column {\sc deim time} reports the time required to determine the interpolation indices by {\tt q-deim}. The ``online times'' report the cost to simulate both reduced order models at $n_{\mathfrak t}$ timesteps. The relative approximation error in \cref{errormeasure} is also displayed, together with the total memory requirements for the offline and online parts. In terms of memory, for the {\sc vector} setting this includes the storage of all the processed snapshots, while for \MPDEIM\ that of $\widetilde{\bm V}_i$ and $\widehat{\bm W}_i$ from \cref{alg:snapstep}, for both the solution and nonlinear function snapshots. This quantity is always equal to $4\kappa\cdot n$ for the {\sc dynamic} setting.
We point out the large gain in basis construction time for the {\sc dynamic} procedure, mainly related to the low number of snapshots employed (cf. \cref{tableode1}). Furthermore, the {\sc dynamic} procedure enjoys a massive gain in memory requirements, at very comparable online time and final average errors. \begin{figure}[htb!] \centering \includegraphics[width=0.5\linewidth]{matrdcerr2_005} \caption{\cref{ex3}. $n = 1000$. Number of retained snapshots with respect to $\tau$, for $\epsilon_1=0.05$. \label{snapshotdependence}} \end{figure} For the reaction-convection-diffusion example we also analyze the dependence of the number $n_s$ of retained snapshots on the threshold $\tau$ for the two procedures, having fixed $n_{\max} = 60$. For the vector procedure $n_s$ increases as the tolerance $\tau$ decreases, whereas for the dynamic procedure $n_s$ remains nearly constant as $\tau$ changes. This ultimately indicates that the offline cost will increase for the {\sc vector} procedure if a richer basis is required; see the appendix and Table~\ref{tablecomplexity} for a detailed discussion. } \section{Conclusions and future work} \label{sec:conc} We have proposed a matrix-oriented \podeim-type order reduction strategy to efficiently handle the numerical solution of semilinear matrix differential equations in two space variables. By introducing a novel interpretation of the proper orthogonal decomposition when applied to functions in two variables, we devised a new two-sided discrete interpolation strategy that is also able to preserve the symmetric structure in the original nonlinear function and in the approximate solution. The numerical treatment of the matrix reduced order differential problem can take full advantage of both the small dimension and the matrix setting, by exploiting effective exponential integrators. Our very encouraging numerical experiments show that the new procedure can dramatically decrease memory and CPU time requirements for the function reduction procedure in the so-called offline phase. Moreover, we illustrated that the reduced low-dimensional matrix differential equation can be numerically solved in a rapid online phase, without sacrificing too much accuracy. This work can be expanded in various directions. In particular, the companion paper \cite{Kirsten.21} presents a first experimental exploration of the three-dimensional case, which takes advantage of the tensor setting, and uses recently developed tensor linear equation solvers to advance in time in the (tensor) reduced differential equation; a dynamic approach could enhance the implementation in \cite{Kirsten.21}. Generalizations to the multidimensional case and to multiple parameters suggest themselves. \section*{Acknowledgments} The authors are members of INdAM-GNCS, whose support is gratefully acknowledged. Part of this work was also supported by the Grant AlmaIdea 2017-2020 - Universit\`a di Bologna. \appendix \section{Discussion of \MPDEIM\ algorithm and computational complexity} \label{sec:alg} { {We compare the computational complexity of the new \MPDEIM\ method applied to \cref{Realproblem} with that of standard \podeim. All discussions refer to Algorithm \MPDEIM\ in \cref{extend}. \vskip 0.1in {\it The offline phase.} The first part of the presented algorithm defines the offline phase.
For the $\BU$-set, we considered the semi-implicit Euler scheme applied directly to the differential equation in {\it matrix} form; see, e.g., \cite{Autilia2019matri}, whereas the snapshot selection is done via the adaptive procedure discussed in \cref{sec:dynamic}. Notice that moving from one {\sc phase} to the next in \cref{alg:snapadap} does not require recomputing any quantities. Indeed, if $h^{\ast}$ is the stepsize of the new phase, we determine ${\bm U}(h^{\ast})$ from ${\bm U}(0)$ and initialize the semi-implicit Euler scheme from there; see also \cref{line}. {The computational complexity of approximating the $\kappa$ leading singular triplets, as required by \cref{alg:snapstep}, is that of the implicitly restarted Lanczos bidiagonalization implemented in the MATLAB function {\tt svds}. For each $i$ this cost is mainly given by matrix-vector multiplications with the dense $n\times n$ matrix; one Arnoldi cycle involves at most $2\kappa$ such products, together with $2\kappa$ basis orthogonalizations, leading to ${\cal O}(n^2\kappa + n\kappa)$ operations per cycle \cite{bag2005}.} { The final SVDs for {\it basis pruning} at the end of \cref{alg:snapadap} are performed with a dense solver, and each has complexity ${\cal O}(11\kappa^3)$ \cite[p.493]{golub13}. Furthermore, each skinny $QR$-factorization required for the projections ${\Pi_\ell}$ and $\Pi_r$ in \cref{error} has complexity ${\cal O}(2n\kappa^2)$ \cite[p.255]{golub13}. For the standard {\sc pod-deim} algorithm, the reported SVD complexity is the total cost of orthogonalizing all selected snapshots by means of Gram-Schmidt, which is ${\cal O}(Nn_s^2)$. The projected coefficient matrices ${\bm A}_k$, ${\bm B}_k$, and ${\bm Y}_0$ are computed once and for all and stored in step 2 of Algorithm \MPDEIM, with a total complexity of approximately {${\cal O}(n^2(k_1 + k_2) +n(k_1 + k_2 + k_1^2 + k_2^2))$}, assuming that ${\color{black} \bm A}$ and ${\color{black} \bm B}$ are sparse and ${\BU}_0$ is dense. This step is called POD projection in \cref{tablecomplexity}. Step 3 in the Algorithm {\sc 2s-pod-deim} has a computational complexity of ${\cal O}(n(p_1^2 + p_2^2))$ \cite{gugercin2018}, {while the matrices in step 4 need to be computed and stored with computational complexity {${\cal O}(n(k_1 + k_2)(p_1 + p_2) + (p_1^2 + p_2^2)n + p_1^3 + p_2^3)$}} in total \cite{chaturantabut2010nonlinear}. This step is called DEIM projection in \cref{tablecomplexity}. We recall that the products involving the selection matrices ${\bm P}_{\ell,{\cal F}}$ and ${\bm P}_{r,{\cal F}}$ do not entail any computation. Finally, for the ETD we report the costs for the reduced procedure described in \cite[Section 3.3]{Autilia2019matri}, since we did not experience any stability issues with the diagonalization in our experiments. To this end, an a priori spectral decomposition of each of the reduced matrices ${\bm A}_k$ and ${\bm B}_k$ is performed once and for all, which has complexity ${\cal O}(9k_1^3 + 9k_2^3)$ for dense symmetric\footnote{If the coefficient matrices were nonsymmetric, this would be more expensive (still of cubic order), but determining the exact cost is still an open problem \cite{golub13}.} matrices \cite{golub13}. This makes the cost of evaluating the matrix exponentials negligible, since the computations at each time iteration online will be performed within the eigenbases, following \cite[Section 3.3]{Autilia2019matri}.
Furthermore, thanks to the small dimension of the matrices we also explicitly compute the inverse of the eigenvector matrices at a cost of ${\cal O}(k_1^3 + k_2^3)$, as required by the online computation. All these costs are summarized in \cref{tablecomplexity} and compared with those of the standard \podeim\ offline phase applied to \cref{vecode}, as indicated in \cite{chaturantabut2010nonlinear}, with dimension $N = n^2$. All coefficient matrices are assumed to be sparse and it is assumed that both methods select $n_s$ snapshots via the adaptive procedure. In practice, however, it appears that the two-sided procedure requires far fewer snapshots than the vectorization procedure, as indicated by \cref{tableode1}. The table also includes the memory requirements for the snapshots and the basis matrices. \begin{table}[bht] \caption{Offline phase: Computational costs (flops) for standard \podeim\ applied to \cref{vecode}, and \MPDEIM\ applied to \cref{Realproblem}, { and principal memory requirements. Here $N=n^2$.} \label{tablecomplexity}} \centering \begin{tabular}{|l|l|l|} \hline Procedure & \podeim\ & {\sc dynamic} \MPDEIM\ \\ \hline {\scriptsize SVD} & {\scriptsize${\cal O}(Nn_s^2)$ } & {\scriptsize${\cal O}(n^2\kappa n_s + n\kappa n_s + 6n\kappa^2 + 11\kappa^3)$ } \\ {\scriptsize QR} & -- & {\scriptsize${\cal O}(n\kappa^2n_s)$ }\\ {\scriptsize DEIM} & {\scriptsize${\cal O}(Np^2)$} & {\scriptsize${\cal O}(n(p_1^2 + p_2^2))$ } \\ {\scriptsize POD projection} & {\scriptsize${\cal O}(Nk + Nk^2)$ } & {\scriptsize${\cal O}(n^2(k_1 + k_2) +n(k_1 + k_2 + k_1^2 + k_2^2))$ } \\ {\scriptsize DEIM projection} & {\scriptsize${\cal O}(Nkp + p^2N + p^3 )$} & {\scriptsize${\cal O}(n(k_1 + k_2)(p_1 + p_2) + (p_1^2 + p_2^2)n + p_1^3 + p_2^3)$}\\ {\scriptsize Snapshot Storage} & {\scriptsize${\cal O}(Nn_s)$} & {\scriptsize${\cal O}(n\kappa)$} \\ {\scriptsize Basis Storage} & {\scriptsize${\cal O}(N(k + p))$} & {\scriptsize${\cal O}(n(k_1 + k_2 + p_1 + p_2))$} \\ \hline \end{tabular} \end{table} {\it The online phase.} The total cost of performing step 5.(i) is ${\cal O}(\omega(p_1p_2) + k_1p_1p_2 + k_1k_2p_2)$, where $\omega(p_1p_2)$ is the cost of evaluating the nonlinear function at $p_1p_2$ entries. Step 5.(ii) requires a matrix--matrix product and the solution of the Sylvester equation \cref{eq:Sylvester} in the eigenspace. The latter demands only matrix--matrix products, which come at a cost of ${\cal O}(k_1^2k_2 + k_1k_2^2)$ and two Hadamard products with complexity ${\cal O}(k_1k_2)$; see \cite{Autilia2019matri} for more details. This brings the total complexity of one time iteration online to {${\cal O}(\omega(p_1p_2) + k_1p_1p_2 + k_1k_2p_2 + k_1^2k_2 + k_1k_2^2 + k_1k_2)$, which is independent of the original problem size $n$. }} } \bibliographystyle{siam} \bibliography{deimbib} \end{document}
{ "config": "arxiv", "file": "2006.13289/DEIMV25.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
\section{Approximate Model-CoSaMP} \label{sec:amcosamp} In this section, we propose a second algorithm for model-based compressive sensing with approximate projection oracles. Our algorithm is a generalization of model-based CoSaMP, which was initially developed in~\cite{modelcs}. We call our variant \emph{Approximate Model-CoSaMP} (or AM-CoSaMP); see Algorithm~\ref{alg:approxmodelcosamp} for a complete description. Algorithm~\ref{alg:approxmodelcosamp} closely resembles the {\em Signal-Space CoSaMP} (or SSCoSaMP) algorithm proposed and analyzed in~\cite{davenport_redundant,GN13}. Like our approach, SSCoSaMP also makes assumptions about the existence of head- and tail-approximation oracles. However, there are some important technical differences in our development. SSCoSaMP was introduced in the context of recovering signals that are sparse in overcomplete and incoherent dictionaries. In contrast, we focus on recovering signals from structured sparsity models. Moreover, the authors of~\cite{davenport_redundant,GN13} assume that a \emph{single} oracle simultaneously achieves the conditions specified in Definitions \ref{def:headapproxoracle} and \ref{def:tailapproxoracle}. In contrast, our approach assumes the existence of two separate head- and tail-approximation oracles and consequently is somewhat more general. Finally, our analysis is simpler and more concise than that provided in~\cite{davenport_redundant,GN13} and follows directly from the results in Section~\ref{sec:amiht}. \begin{algorithm}[!t] \caption{Approximate Model-CoSaMP} \label{alg:approxmodelcosamp} \begin{algorithmic}[1] \Function{AM-CoSaMP}{$y,A,t$} \State $x^0 \gets 0$ \For{$i \gets 0, \ldots, t$} \State $b^i \gets A^T (y - A x^i)$ \State $\Gamma \gets \mathrm{supp}(H(b^i))$ \State $S \gets \Gamma \cup \mathrm{supp}(x^i)$ \label{line:cosampsupportupdate} \State $z \vert_S \gets A_S\pinv y, \quad z \vert_{S^c} \gets 0$ \State $x^{i+1} \gets T(z)$ \EndFor \State \textbf{return} $x^{t+1}$ \EndFunction \end{algorithmic} \end{algorithm} We prove that AM-CoSaMP (Alg.\ \ref{alg:approxmodelcosamp}) exhibits robust signal recovery. We make the same assumptions as in Section \ref{sec:amiht}: (i) $x \in \R^n$ and $x \in \Mmodel$. (ii) $y = A x + e$ for an arbitrary $e \in \R^m$ (the measurement noise). (iii) $T$ is a $(c_T, \Msupports, \Msupports_T, 2)$-tail-approximation oracle. (iv) $H$ is a $(c_H, \Msupports_T \supportplus \Msupports, \Msupports_H, 2)$-head-approximation oracle. (v) $A$ has the $(\delta, \Msupports \supportplus \Msupports_T \supportplus \Msupports_H)$-model-RIP. Our main result in this section is the following: \begin{theorem}[Geometric convergence of AM-CoSaMP] \label{thm:amcosamp} Let $r^i = x-x^i$, where $x^i$ is the signal estimate computed by AM-CoSaMP in iteration $i$. Then, \begin{equation*} \norm{r^{i+1}}_2 \leq \alpha \norm{r^i}_2 + \beta \norm{e}_2 \, , \end{equation*} where \begin{align*} \alpha &= (1+c_T) \sqrt{ \frac{1+\delta}{1-\delta}} \sqrt{1 -\alpha_0^2} \, , \\ \beta &= (1+c_T) \left[\sqrt{ \frac{1+\delta}{1-\delta}} \left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0 \beta_0}{\sqrt{1-\alpha_0^2}}\right) + \frac{2}{\sqrt{1-\delta}} \right] \, , \\ \alpha_0 &= c_H (1 - \delta) - \delta \, , \\ \beta_0 &= (1+c_H)\sqrt{1+\delta} \, .
\end{align*} \end{theorem} \begin{proof} We can bound the error $\norm*{r^{i+1}}_2$ as follows: \begin{align*} \norm{r^{i+1}}_2 &= \norm{x - x^{i+1}}_2 \\ &\leq \norm{x^{i+1} - z}_2 + \norm{x - z}_2 \\ &\leq c_T \norm{x - z}_2 + \norm{x - z}_2 \\ &= (1 + c_T) \norm{x - z}_2 \\ &\leq (1 + c_T) \frac{\norm{A (x - z )}_2}{\sqrt{1 - \delta}} \\ &= (1 + c_T) \frac{\norm{ Ax - A z }_2}{\sqrt{1 - \delta}} \, . \end{align*} Most of these inequalities follow the same steps as the proof provided in~\cite{cosamp}. The second relation above follows from the triangle inequality, the third relation follows from the tail approximation property, and the fifth relation follows from the $(\delta, \Msupports \supportplus \Msupports_T \supportplus \Msupports_H)$-model-RIP of $A$. We also have $Ax = y - e$ and $Az = A_S z_S$. Substituting, we get: \begin{align*} \norm{r^{i+1}}_2 & \leq (1 + c_T) \left( \frac{\norm{ y - A_S z_S }_2}{\sqrt{1 - \delta}} + \frac{\norm{e}_2}{\sqrt{1 -\delta}} \right) \\ & \leq (1 + c_T) \left( \frac{\norm{ y - A_S x_S }_2}{\sqrt{1 - \delta}} + \frac{\norm{e}_2}{\sqrt{1 -\delta}} \right) \, . \numberthis \label{eq:cosamperror} \end{align*} The first inequality follows from the triangle inequality and the second from the fact that $z_S$ is the least squares estimate $A_S\pinv y $ (in particular, it is at least as good as $x_S$). Now, observe that $y = Ax +e = A_S x_S + A_{S^c} x_{S^c} + e$. Therefore, we can further simplify inequality \eqref{eq:cosamperror} as \begin{align*} \norm{r^{i+1}}_2 & \leq (1 + c_T) \frac{\norm{ A_{S^c} x_{S^c}}_2}{\sqrt{1 - \delta}} + (1+c_T) \frac{2\norm{e}_2}{\sqrt{1-\delta}} \\ &\leq (1 + c_T) \frac{\sqrt{1 + \delta}}{\sqrt{1 - \delta}} \norm{ x_{S^c} }_2 + (1+c_T) \frac{2\norm{e}_2}{\sqrt{1 -\delta}} \\ &= (1 + c_T) \sqrt{\frac{{1 + \delta}}{{1 - \delta}}} \norm{ (x - x^i)_{S^c} }_2 + (1+c_T) \frac{2\norm{e}_2}{\sqrt{1 -\delta}} \\ &\leq (1 + c_T) \sqrt{\frac{{1 + \delta}}{{1 - \delta}}} \norm{ r^i_{\Gamma^c} }_2 + (1+c_T) \frac{2\norm{e}_2}{\sqrt{1 -\delta}} \numberthis \label{eq:ineq1} \, . \end{align*} The first relation once again follows from the triangle inequality. The second relation follows from the fact that $\supp(x_{S^c}) \in \Msupports^+$ (since $\supp(x) \in \Msupports^+$), and therefore, $A_{S^c} x_{S^c}$ can be upper-bounded using the model-RIP. The third follows from the fact that $x^i$ restricted to $S^c$ is zero because $S$ fully subsumes the support of $x^i$. The final relation follows from the fact that $S^c \subseteq \Gamma^c$ (see line \ref{line:cosampsupportupdate} in the algorithm). Note that the support $\Gamma$ is defined as in Lemma \ref{lemma:headiht_rip}. Therefore, we can use \eqref{eq:rlambdac} and bound $\norm{r^i_{\Gamma^c}}_2$ in terms of $\norm{r^i}_2$, $c_H$, and $\delta$. Substituting into \eqref{eq:ineq1} and rearranging terms, we obtain the stated theorem. \end{proof} As in the analysis of AM-IHT, suppose that $e = 0$ and $\delta$ is very small. Then, we achieve geometric convergence, i.e., $\alpha < 1$, if the approximation factors $c_T$ and $c_H$ satisfy \begin{equation} (1 + c_T) \sqrt{1 - c_H^2} < 1 \, , \qquad \textnormal{or equivalently, } \qquad c_H^2 > 1 - \frac{1}{(1 + c_T)^2}\, . \label{eq:headtail_cosamp} \end{equation} Therefore, the conditions for convergence of AM-IHT and AM-CoSaMP are identical in this regime. As for AM-IHT, we relax this condition for AM-CoSaMP in Section \ref{sec:boosting} and show that geometric convergence is possible for \emph{any} constants $c_T$ and $c_H$.
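For concreteness, the loop of Algorithm~\ref{alg:approxmodelcosamp} can be rendered in a few lines of Python. In the sketch below the two oracles are passed in as callables, and plain hard thresholding is included only as a stand-in oracle; this illustrates the control flow, not any particular model projection.
\begin{verbatim}
# Illustrative sketch of AM-CoSaMP with user-supplied oracles.
import numpy as np

def am_cosamp(y, A, t, head, tail):
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(t + 1):
        b = A.T @ (y - A @ x)                  # proxy b^i
        S = np.union1d(np.flatnonzero(head(b)),
                       np.flatnonzero(x))      # support merge
        z = np.zeros(n)
        z[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        x = tail(z)                            # tail projection
    return x

def hard_thresholding(k):                      # stand-in oracle
    def H(v):
        w = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        w[idx] = v[idx]
        return w
    return H
\end{verbatim}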
{ "config": "arxiv", "file": "1406.1579/approxmodelcosamp.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: Eigenvalues of a product of matrices with very large elements QUESTION [1 upvotes]: Consider a set of matrices $M_i = \begin{bmatrix} e^{-b_i}(1+a_i) & a_i \\ a_i & e^{b_i} (1-a_i) \end{bmatrix}$ where $a_i$ and $b_i$ are of order $O(1)$. (This is an example of the type of matrices I am dealing with, with negative and positive exponentials on the diagonal and determinant $1$). Consider now their product $M = \prod_{i=1}^L M_i$, where $L = O(100)$. I am trying to compute $P=\Im \log \det [I +M]$. The issue: If I simply compute $M$ numerically it doesn't behave well, because its elements are exponentially large. For example, both eigenvalues I obtain are exponentially large, so $\det M \neq 1$, which is wrong. My approach: I can obtain $P$ if I compute the eigenvalues of the matrix $M$. I figured that $\log M$ would be better behaved, and I can use it to compute the eigenvalues of $M$ accurately. I know I can compute $\log M$ using the Baker-Campbell-Hausdorff formula. However, it would be a very messy and impractical approach. My question: Is there any way of computing $P$, $\log M$ or the eigenvalues of $M$ directly from the $M_i$ matrices without having to directly compute $M$? REPLY [1 votes]: Here's an approach to deal with the numerical instability: Let $A_i = M_i/\|M_i\|$, where $\|\cdot \|$ denotes the (submultiplicative) matrix norm of your choice (for instance, the Frobenius norm). Let $A = \prod_{i=1}^L A_i$, and let $k = \prod_{i=1}^L \|M_i\|$. We have $$ \begin{align} \log \det(I + M) &= \log \det \left(\frac{k}{k}(I + M)\right) \\ & = \log \left[k^2\det(k^{-1}I + A)\right] = 2 \log(k) + \log \det(k^{-1} I + A). \end{align} $$ I suspect that it will be possible to compute $A$ directly without significant numerical error.
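Here is a short numpy sketch of this idea (the names are mine, and the product order should be adapted to your convention); it accumulates $\log k$ instead of $k$ so that nothing overflows:

import numpy as np

def logdet_I_plus_M(Ms):
    # Ms: iterable of 2x2 factors; accumulate the normalized product
    # A = prod(M_i / ||M_i||) and log k = sum of log ||M_i||.
    A = np.eye(2)
    logk = 0.0
    for Mi in Ms:
        nrm = np.linalg.norm(Mi)    # Frobenius norm
        A = A @ (Mi / nrm)          # normalized product stays O(1)
        logk += np.log(nrm)
    # log det(I + M) = 2 log k + log det(k^{-1} I + A)  (2x2 case)
    sign, ld = np.linalg.slogdet(np.exp(-logk) * np.eye(2) + A)
    return 2.0 * logk + ld + np.log(complex(sign))

# P = np.imag(logdet_I_plus_M(list_of_matrices))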
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 4094551, "subset_name": null }
TITLE: If I am decelerating forwards am I accelerating backwards? QUESTION [1 upvotes]: If I am in a car and I put the brakes on so that I am slowing down (decelerating), am I then accelerating backwards? e.g. If I am decelerating in this car at -5ms⁻², am I accelerating backwards at 5ms⁻²? REPLY [0 votes]: I am slowing down (decelerating) am I then accelerating backwards? The implication here is that the direction of the acceleration is opposite to the direction of the velocity. I think that you should avoid the word decelerate, as it can be taken to mean a reduction in the speed of the car, but this might not always be so. Suppose a car is moving at $\rm 10 \, m\,s^{-1}$ up a hill and suddenly the engine stops working and the brakes are not applied. The car then decelerates at $\rm 2 \,m\,s^{-2}$, i.e. it has an acceleration of $\rm -2 \,m\,s^{-2}$ up the hill. So after $1\,\rm s$ the velocity of the car is $\rm 8 \, m\,s^{-1}$ up the hill and after $5\,\rm s$ the car is at rest. All is fine so far, as the speed of the car has decreased from $\rm 10 \, m\,s^{-1}$ to rest. However, what happens after $6\, \rm s$? The velocity of the car is $\rm 2 \, m\,s^{-1}$ down the hill, i.e. the speed of the car is now increasing. If I am decelerating in this car at -5ms⁻² am I accelerating backwards at 5ms⁻²? Let the car be moving in the positive x-direction. A deceleration implies that the acceleration is in the negative x-direction. A negative deceleration implies that the acceleration is in the positive x-direction. With the car moving in the positive x-direction, a deceleration of 5ms⁻² implies that the car has an acceleration of 5ms⁻² in the negative x-direction. With the car moving in the positive x-direction, a deceleration of -5ms⁻² implies that the car has an acceleration of +5ms⁻² in the positive x-direction.
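To make the sign bookkeeping concrete, here is a tiny Python check of the hill example (taking up the hill as the positive direction):

v0, a = 10.0, -2.0           # m/s and m/s^2
for t in (1, 5, 6):
    print(t, v0 + a * t)     # 8.0, 0.0, -2.0 -> moving down the hill after 5 s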
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 525328, "subset_name": null }
TITLE: Value of $2^{\sqrt{2}}$ QUESTION [0 upvotes]: Can you evaluate the value of $2^{\sqrt{2}}$? I am not sure if there is some trick to evaluate its value, because it was asked in an interview. But can we at least comment on whether it is rational or irrational? Obviously a rational raised to the power of an irrational can be either a rational or an irrational number. So unless we do something, we cannot simply comment on its nature. I am not getting any idea of how to proceed further. REPLY [3 votes]: There isn't, no, you just need to use a calculator. It's an interesting number, but not one you can really write in another way. By the Gelfond–Schneider theorem, it's a transcendental number. It's also famously used in a much less advanced proof that some irrational $a,\,b$ satisfy $a^b\in\Bbb Q$. Take either $a=\sqrt{2},\,b=2\sqrt{2}$ if $2^{\sqrt{2}}$ is rational, or $a=2^{\sqrt{2}},\,b=\sqrt{2}$ if it's not. (Actually, we usually use $b=\sqrt{2}$ in the first case or $a=\sqrt{2}^{\sqrt{2}}$ in the second, but it's a similar argument either way.)
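If you just want the numerical value, a one-line computation does it (the closed form is simply $e^{\sqrt{2}\ln 2}$):

import math
print(2 ** math.sqrt(2))   # ~ 2.6651441426902... (transcendental by Gelfond-Schneider)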
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3440644, "subset_name": null }
TITLE: When does regularity of the base scheme imply regularity of the top scheme? QUESTION [1 upvotes]: My problem is the following: I have a finite surjective morphism $f: X\rightarrow Y$ of noetherian schemes and know that $Y$ is a regular scheme. (Indeed, in my situation, the two schemes are topologically the same and the arrow is topologically the identity.) I don't know if $f$ is étale or smooth. But I know that $f$ has a section. I want to conclude somehow that $X$ is also a regular scheme. What further minimal conditions (on $f$ or $X$) would imply this? I would be glad for just some hints about what one could do here. REPLY [3 votes]: I may be misunderstanding the question, but it seems rather straightforward to me. a) As stated, without assuming that $X$ is, say, reduced, it is certainly false: Let $X=\mathrm{Spec}\, k[\varepsilon]$, where $k[\varepsilon]=k[x]/(x^2)$, let $Y=\mathrm{Spec}\, k$, and let $f:X\to Y$ be the structure map of $X$ as a $Y$-scheme. This is obviously a homeomorphism topologically and it has a section that maps $Y$ isomorphically to $X_{\mathrm{red}}$. b) It seems to me that pretty much this is the only thing that can go wrong: If $f$ has a section, then by definition of a section, $Y$ is isomorphic to the image of the section. As $Y$ is regular, it is necessarily reduced, so if $f$ is one-to-one, then $f$ and the section induce an isomorphism between $Y$ and $X_{\mathrm{red}}$. So, if $X$ is reduced, the statement is true; if $X$ is not reduced, then the question is whether there exists a morphism $X\to X_{\mathrm{red}}$ that's an isomorphism on $X_{\mathrm{red}}$. The above example shows that this can happen, so some assumption is required to rule out this possibility. If $f$ is not one-to-one, something similar would still happen. The section would still induce an isomorphism between $Y$ and the image of the section, which would have to be an irreducible component of $X_{\mathrm{red}}$. On the other hand, in this case you would probably want to assume that $X$ and $Y$ are irreducible, since otherwise you can just add an additional component that screws things up. However, then (assuming that $X$ is irreducible) this implies that $Y\simeq X_{\mathrm{red}}$ and you end up in the previous case again.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 87538, "subset_name": null }
TITLE: Apart from Temperature / Speed, are there other changes when a change of state of matter occurs? QUESTION [0 upvotes]: Apart from Temperature / Speed, are there other changes when a change of state of matter occurs? Link to Wikipedia State of matter I want to add a Zero (0) tag, but I don't have a high enough score to create it. This tag is not related to the number of changes. REPLY [0 votes]: What about the change from normal conduction to superconduction? Or the changes occurring in the formation of a Bose-Einstein condensate? It's the rule, not the exception, that changes other than speed and temperature occur.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 724050, "subset_name": null }
TITLE: A matrix equation $Ax=0$ has infinitely many solutions, does $A^Tx = 0$ have infinitely many solutions? QUESTION [4 upvotes]: I'm wondering whether a system with the transpose of a matrix has the same type of solution as the original matrix system. If the equation $Ax=0$ has a unique solution, would a system with $A$ transpose instead of $A$ also have a unique solution? And what about no solution, or infinitely many solutions? REPLY [2 votes]: If you are dealing with square matrices, then the answer is yes. You are essentially asking whether the matrix is invertible or not, and $A$ is invertible iff $\det(A)\ne0$ and $\det(A)=\det\!\left(A^T\right)$. However, $$ \begin{bmatrix} 3&1&2\\ 2&0&1 \end{bmatrix} \begin{bmatrix}1\\1\\-2 \end{bmatrix} =\begin{bmatrix}0\\0 \end{bmatrix} $$ and any real multiple of this solution is a solution. Yet $$ \begin{bmatrix} 3&2\\ 1&0\\ 2&1 \end{bmatrix} x =\begin{bmatrix}0\\0\\0 \end{bmatrix} $$ requires $x=\begin{bmatrix}0\\0\end{bmatrix}$.
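A quick numerical check of the example above (a small numpy sketch):

import numpy as np
A = np.array([[3., 1., 2.],
              [2., 0., 1.]])
x = np.array([1., 1., -2.])
print(A @ x)                        # [0. 0.] -> Ax = 0 has nonzero solutions
print(np.linalg.matrix_rank(A.T))   # 2 -> A^T has full column rank,
                                    # so A^T x = 0 forces x = 0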
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 4, "question_id": 4014818, "subset_name": null }
TITLE: Relationship between null space and invertibility of linear transformations QUESTION [2 upvotes]: Is there a relationship between the null space $N(T)$ of a linear transformation $T$ and whether or not it is invertible? For example, if you know $N(T) \neq \{0\}$, can you be sure it's not an invertible transformation? REPLY [0 votes]: A linear map $A : X \to Y$ between finite-dimensional vector spaces of the same dimension is invertible iff $\ker A = \{ 0\}$.
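A small numerical illustration (for a matrix representing $T$ in some basis):

import numpy as np
T = np.array([[1., 2.],
              [2., 4.]])            # second row is twice the first
print(np.linalg.matrix_rank(T))     # 1 < 2, so N(T) != {0}
print(np.linalg.det(T))             # 0.0 (up to rounding) -> T is not invertible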
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 254753, "subset_name": null }
\begin{document} \maketitle \begin{abstract} In thin ferromagnetic films, the predominance of the magnetic shape anisotropy leads to in-plane magnetizations. The simplest domain wall in this geometry is the one-dimensional N{\'e}el wall that connects two magnetizations of opposite sign by a planar $180^\circ$ rotation. In this paper, we perturb the static N{\'e}el wall profile in order to construct time-periodic N{\'e}el wall motions governed by the Landau-Lifshitz-Gilbert equation. Our construction works within a certain parameter regime and requires the restriction to external magnetic fields with small amplitudes and suitable time averages. \end{abstract} \section{Introduction} The theory of micromagnetism deals with the multiple magnetic phenomena observed in ferromagnetic materials (see for example Brown \cite{brown}, Hubert and Sch\"afer \cite{HS}) like the formation of so-called magnetic domains (regions where the magnetization is almost constant) and the appearance of so-called domain walls (thin transition layers separating the domains). It is a well-known fact that the interaction and characteristics of the magnetic structures heavily depend on a variety of parameters (e.g. size and shape of the ferromagnetic sample, material properties, ...). Due to the resulting complexity of the general micromagnetic problem, the mathematical theory of micromagnetism aims to provide appropriate approximations for various parameter regimes (see DeSimone, Kohn, M\"uller, and Otto \cite{dkmo} and references therein). While most of the known mathematical theory has focused on the static case, we studied in \cite{alex_small} (see also \cite{alex_diss}) qualitative properties of time-dependent magnetization patterns modeled by the Landau-Lifshitz-Gilbert equation (LLG). More precisely, we showed the existence of time-periodic solutions for the full three-dimensional LLG in the regime of soft and small ferromagnetic particles satisfying a certain shape condition. The paper at hand is concerned with time-periodic motions of one-dimensional domain walls which separate two domains with magnetizations of opposite directions by a one-dimensional $180^\circ$ transition. These walls are part of the building blocks for more complicated wall structures and can be classified in the following way: \begin{itemize} \item The Bloch wall rotates perpendicular to the transition axis. It is observed in bulk materials. \item The N{\'e}el wall rotates in the plane spanned by the transition axis and the end states. It is observed in thin films. \end{itemize} For one-dimensional domain walls modeled by $m=(m_1,m_2,m_3):\setR \to S^2$, it is possible to derive an energy functional from the full three-dimensional micromagnetic energy by means of dimension reduction. Such a reduction was carried out by Aharoni \cite{aharoni1}, \cite{aharoni2} (see also Garc{\'{\i}}a-Cervera \cite{Garcia} and Melcher \cite{Melcher}), and the resulting energy functional for a ferromagnetic layer of thickness $\delta>0$ reads as follows: \begin{align*} E(m) = d^2 \int_\setR \abs{m'}^2 \, dx + Q \int_\setR (m_1^2 + m_3^2) \, dx + \int_\setR \mathcal{S}_\delta [m_1] m_1 \, dx + \int_\setR ( m_3^2 - \mathcal{S}_\delta [m_3] m_3 ) \, dx \, . \end{align*} On the right hand side, the first two terms are called exchange energy and anisotropy energy, respectively.
The exchange energy models the tendency towards parallel alignment, where the positive material constant \nolinebreak $d$ is called the exchange length or Bloch line width. Here we assume that the material has a single easy axis along $e_2=(0,1,0)$, leading to the anisotropy energy with positive quality factor $Q$. Moreover, the magnetization $m$ induces a magnetic field -- the so-called stray field -- causing an energy contribution given by the last two terms on the right hand side, where $ \mathcal{S}_\delta$ is the reduced stray field operator, a Fourier multiplication operator defined by \begin{align*} f \mapsto \mathcal{S}_\delta [f] = \mathcal{F}^{-1}\big( \sigma_\delta \, \widehat{f}\,\big) \end{align*} with real-valued, nonnegative, and bounded symbol \begin{align*} \sigma_\delta (\xi) = 1 - \frac{1- e^{- \delta \abs{\xi}}}{\delta \abs{\xi}} \,. \end{align*} The assumption that $m$ represents a domain wall, that is $m$ connects end states of opposite directions, is reflected by the additional requirement \begin{align*} \lim_{x \to \pm \infty} m(x) = \pm e_2 \,. \end{align*} In the model introduced above, the Bloch and N{\'e}el walls correspond to minimizers of $E$ in the cases $m_1\equiv0$ and $m_3\equiv0$, respectively. In an infinitely extended layer, that is $\delta = \infty$, the Bloch wall path completely avoids the occurrence of magnetic volume charges, and as a result, the stray field energy is equal to zero and therefore minimal. For this reason, the energy functional reduces to two terms, and it is possible to compute the Bloch wall profile explicitly. Indeed, with the help of a scaling argument one can restrict oneself to the computation of a single reference profile, and it turns out that the Bloch wall exhibits an exponential decay beyond a transition zone of order unity (see \cite{HS}). The N{\'e}el wall is mathematically more delicate due to the nonvanishing contribution of the stray field energy and the resulting dependence on multiple scales. In soft and thin films, the characteristic properties of the N{\'e}el wall are the very long logarithmic tail of the transition profile and the logarithmic energy scaling (see \cite{dkmo}, \cite{Garcia}, \cite{Melcher}, and \cite{Melcher2}). For one-dimensional domain walls $m=m(t,x): \setR \times \setR \to S^2$, LLG with respect to a time-dependent external magnetic field $h_\text{ext}$ is given by \begin{align*} m_t = \alpha \, m \times H_{\text{eff}} - m \times ( m \times H_{\text{eff}}) \, , \quad \abs{m}=1 \,, \quad \lim_{x \to \pm \infty} m(\cdot,x) = \pm e_2 \,, \end{align*} where \begin{align*} H_{\text{eff}} = d^2 m'' - Q (m_1,0,m_3)^T - (\mathcal{S}_\delta [m_1],0,m_3 - \mathcal{S}_\delta [m_3])^T +(0,h_\text{ext},0)^T \end{align*} is the effective magnetic field and \textquotedblleft$\times$\textquotedblright{} denotes the usual cross product in $\setR^3$ (we assume that the external magnetic field is parallel to $e_2$). The so-called ``gyromagnetic'' term $\alpha \, m \times H_{\text{eff}}$ describes a precession around $H_{\text{eff}}$, whereas the ``damping'' term $- m \times \big( m \times H_{\text{eff}}\big) $ tries to align $m$ with $H_{\text{eff}}$. The assumption of in-plane magnetizations in the thin film geometry is incompatible with the dynamics as described by LLG since gyromagnetic precession generates an energetically unfavorable out-of-plane component.
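In numerical experiments, the nonlocal term $\mathcal{S}_\delta$ is the only nonstandard ingredient of $H_{\text{eff}}$; for orientation, a possible discrete realization of the multiplier on a uniform grid via the FFT (which tacitly truncates and periodizes $\setR$) is sketched below in Python. This is merely an illustration of the definition and not a scheme used in this paper.
\begin{verbatim}
# Sketch: applying S_delta[f] on a uniform periodic grid.
import numpy as np

def S_delta(f, dx, delta):
    n = f.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)    # dual variable
    sym = np.zeros(n)
    nz = xi != 0.0
    a = delta * np.abs(xi[nz])
    sym[nz] = 1.0 - (1.0 - np.exp(-a)) / a        # sigma_delta(xi)
    # sigma_delta(0) = 0, the limiting value as xi -> 0
    return np.real(np.fft.ifft(sym * np.fft.fft(f)))
\end{verbatim}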
Certain reduced models for the in-plane components have been considered in \cite{GarciaE}, \cite{KohnSlastikov}, and \cite{CMO} as the film thickness goes to zero. In \cite{GarciaE} and \cite{KohnSlastikov}, the so-called Gilbert damping factor is kept fixed and of order one, leading to an overdamped limit of LLG for the in-plane components. In \cite{CMO} a different parameter regime is treated. There, the Gilbert damping factor is assumed to be comparable to a certain relative thickness, and the resulting LLG is a damped geometric wave equation for the in-plane magnetization components. In the presence of a constant external magnetic field, one expects the N{\'e}el wall to move towards the less preferred end state. This is true for the reduced LLG derived in \cite{CMO}, where the authors perturb the static N{\'e}el wall profile in order to construct traveling wave solutions. Their proof relies mainly on the spectral properties of the linearized problem and the implicit function theorem. The aim of this paper is the construction of time-periodic solutions for LLG in the thin film geometry when the external magnetic field is time-periodic. For this we investigate the full LLG and allow an out-of-plane component for the magnetization; hence no further reduction is performed. We assume as in \cite{CMO} that the material is soft ($Q \ll 1$) and that $\kappa = \delta^{-2} d^2 Q$ is bounded from below. The main difference compared to our work in \cite{alex_small} is that the linearization of the Euler-Lagrange equation possesses a nontrivial kernel. This stems from the translation invariance of the energy functional and requires the introduction of an additional parameter, which can be interpreted as the time average of the external magnetic field. Starting from the static N{\'e}el wall profile, we then use the continuation method to construct time-periodic solutions in the case of external magnetic fields with small amplitudes \nolinebreak $\lambda$ and certain time averages $\gamma(\lambda)$ (see Theorem \ref{thm:3}). The paper is organized as follows: In Section \ref{sec:Neel Wall} we recall properties of static N{\'e}el walls and rewrite LLG in a suitable coordinate system. The linearization $\mathcal{L}^\epsilon_0$ of LLG is composed of two linear operators, which we analyze separately in Sections \ref{sec:NL1} and \ref{sec:NL2}. The results obtained there are used in Section \ref{sec:NL0} to study the spectral properties of the analytic semigroup generated by $\mathcal{L}^\epsilon_0$. These properties are the crucial ingredients for our perturbation argument in the final section. \paragraph{Notation.} Before we start, we introduce and recall some shorthand notation and definitions. We write $L^p = L^p(\setR)$ and $H^k=H^k(\setR)$ for the Lebesgue and Sobolev spaces on $\setR$, and ``$\int$'' means integration over the whole real line with respect to the one-dimensional Lebesgue measure. We write $u \perp v$ to indicate that the $L^2$-functions $u$ and $v$ are perpendicular in $L^2$, that is $(u,v)_{L^2} = 0$. Furthermore, we use the abbreviations \begin{align*} \cdot ' = \frac{d}{dx} \qquad \text{and} \qquad \cdot'' = \Delta = \frac{d^2}{dx^2} \end{align*} for the first and second derivatives with respect to a one-dimensional space variable $x$.
For the Fourier transform, we make the convention \begin{align*} \mathcal{F} u (\xi) = \widehat{u}(\xi) = \frac{1}{\sqrt{2 \pi}} \int e^{-\dot{\imath} x \xi } u(x) \, dx \end{align*} for functions $u:\setR \to \setC$ whenever this is well-defined. In particular, we have that $(\hat{u},\hat{v})_{L^2} = (u,v)_{L^2}$ for all $u,v \in L^2$. For a Banach space $X$ and a given $0<\beta<1$, we denote by $C^{0,\beta}_\beta(]0,T],X)$ the set of all bounded functions $f:\,]0,T] \to X$ such that \begin{align*} [f]_{C^{0,\beta}_\beta(]0,T],X)} = \sup_{0<\epsilon<T} \epsilon^\beta [f]_{C^{0,\beta}([\epsilon,T],X)} < \infty \,. \end{align*} This forms a Banach space with norm defined by \begin{align*} \norm{f}_{C^{0,\beta}_\beta(]0,T],X)} = \norm{f}_{C(]0,T],X)} + [f]_{C^{0,\beta}_\beta(]0,T],X)} \,. \end{align*} Moreover, we write $C^{1,\beta}_\beta (]0,T],X)$ for the set of all bounded and differentiable functions $f :\,]0,T] \to X$ with derivative $f'$ belonging to $C^{0,\beta}_\beta(]0,T],X)$. Again, this is a Banach space with norm defined by \begin{align*} \norm{f}_{C^{1,\beta}_\beta (]0,T],X)} = \norm{f}_{C(]0,T],X)} + \norm{f'}_{C^{0,\beta}_\beta(]0,T],X)}\,. \end{align*} Given a Banach space $Y$ and a linear operator $A:D(A) \subset Y \to Y$, we say that $A$ is sectorial if there are constants $\omega \in \setR$, $\theta \in (\pi/2,\pi)$, and $M>0$ such that the resolvent set $\rho(A)$ contains the sector $S_{\theta,\omega} = \set{ \lambda \in \setC \, | \, \lambda \not= \omega, \, \abs{\arg(\lambda - \omega)} < \theta}$, and the resolvent estimate \begin{align*} \norm{\text{R}(\lambda,A)} \le \frac{M}{\abs{\lambda - \omega}} \end{align*} is satisfied for all $\lambda \in S_{\theta,\omega}$. Moreover, we denote by $(e^{tA})_{t \ge 0}$ the analytic semigroup generated by the sectorial operator $A$ (see for example the book by Lunardi \cite{lunardi} for a self-contained presentation of the theory of sectorial operators and analytic semigroups). \section{N{\'e}el walls and LLG for N{\'e}el walls} \label{sec:Neel Wall} \paragraph{Parameter reduction and N{\'e}el walls.} We consider the micromagnetic energy functional for static and planar magnetizations $m = (m_1,m_2):\setR \to S^1$. If we rescale space by \begin{align*} x \mapsto \frac{\delta}{Q} x \,, \end{align*} then we obtain -- after a further renormalization of the energy by $\delta$ -- the rescaled energy functional \begin{align*} {E_\text{res}^\epsilon}(m) = \kappa \int \abs{m'}^2 + \int m_1^2 + \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [m_1] m_1 \end{align*} with parameters $\kappa = \delta^{-2}d^2 Q$ and $\epsilon = Q$. Moreover, we have the following lemma for $ \mathcal{S}_\epsilon$: \begin{lem} \label{lem:reduced stray field} The operator $\mathcal{S}_\epsilon$ defines a linear and bounded mapping from $H^k$ to $H^k$ for every $k \in \setN \cup \set{0}$. Furthermore, $\mathcal{S}_\epsilon$ satisfies \begin{align*} (\mathcal{S}_\epsilon[u],v)_{L^2} = (u,\mathcal{S}_\epsilon[v])_{L^2} \qquad \text{and} \qquad (\mathcal{S}_\epsilon[u], u)_{L^2} \ge 0 \end{align*} for every $u,v \in L^2$. If $u \in L^2$ is a real-valued function, then $\mathcal{S}_\epsilon[u]$ is real-valued as well. \end{lem} \begin{proof} The boundedness of $\sigma_\epsilon$ combined with the Fourier characterization of the Sobolev spaces implies that $\mathcal{S}_\epsilon:H^k \to H^k$ is well-defined and bounded.
Since $\sigma_\epsilon$ is real-valued, we see that $\mathcal{S}_\epsilon$ is symmetric on $L^2$, and because of $\sigma_\epsilon \ge 0$, we obtain that $\mathcal{S}_\epsilon$ is positive semidefinite on $L^2$. The remaining statement follows since the symbol $\sigma_\epsilon$ is real-valued and even, that is $\sigma_\epsilon(\xi) = \sigma_\epsilon(-\xi)$ for every $\xi \in \setR$: a real-valued $u \in L^2$ satisfies $\overline{\widehat{u}(\xi)} = \widehat{u}(-\xi)$, and this symmetry is preserved under multiplication with $\sigma_\epsilon$. The lemma is proved. \end{proof} As already announced, we assume in the sequel that $\kappa>0$ is fixed (or bounded from below) and vary the (small) parameter $\epsilon>0$. In particular, we are in the regime of soft thin films, where $\frac{\delta}{d} \sim \sqrt{Q}$. It can easily be seen that ${E_\text{res}^\epsilon}$ admits a minimizer $m_\epsilon$ in the set of admissible functions defined by \begin{align*} \Bigset{m: \setR \to S^1 \, | \, m' \in L^2,\, m_1 \in L^2, \, m_1(0) = 1,\, \lim_{x \to \pm \infty} m_2(x) = \pm 1} \,. \end{align*} In particular, minimizers are centered in the sense that $m^\epsilon_1(0)=1$ and carry out a $180^\circ$ in-plane rotation between the end states $m_\epsilon(-\infty) = (0,-1)$, $m_\epsilon(\infty) = (0,1)$. We call them rescaled N{\'e}el walls. Moreover, minimizers are weak solutions of the Euler-Lagrange equation \begin{align*} -\kappa \, m'' + \frac{1}{\epsilon} (\mathcal{S}_\epsilon [m_1],0)^T + (m_1,0)^T = \Big( \kappa \abs{m'}^2 + \abs{m_1}^2 + \frac{1}{\epsilon} m_1 \mathcal{S}_\epsilon [m_1] \Big) m \,. \end{align*} From here it follows that rescaled N{\'e}el walls $m_\epsilon$ are smooth, and $m^\epsilon_1$, $\frac{d}{dx}m^\epsilon_2$ belong to $H^k$ for all $k \in \setN$. See for example \cite{Melcher} for the derivation of the Euler-Lagrange equation and a proof of the regularity statement. Moreover, using the notion of rearrangement, it can be seen that rescaled N{\'e}el wall profiles $m^\epsilon_1$ are nonnegative and symmetrically decreasing (see \cite{Melcher}). \paragraph{The phase function of a rescaled N{\'e}el wall.} For the purpose of spectral analysis, we introduce for rescaled N{\'e}el walls $m_\epsilon$ as in \cite{CMO} the smooth phase function $\theta_\epsilon:\setR \to \setR$ such that $\theta_\epsilon(0)=0$ and \begin{align*} m_\epsilon = (m^\epsilon_1,m^\epsilon_2) = (\cos \theta_\epsilon, \sin \theta_\epsilon) \,. \end{align*} The rescaled energy functional ${E_\text{res}^\epsilon}$ and the Euler-Lagrange equation in these coordinates read as \begin{align*} {E_\text{res}^\epsilon} (\theta) = \kappa \int \abs{\theta'}^2 + \int \cos^2 \theta + \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [\cos \theta] \cos \theta \end{align*} and \begin{align} \label{eq:Euler-Lagrange-N} \tag*{$(EL)$} \kappa \theta'' + \frac{1}{2} \sin (2\theta) + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta] \sin \theta =0 \, , \end{align} respectively. From the regularity properties stated above for rescaled N{\'e}el walls $m_\epsilon = (\cos \theta_\epsilon , \sin \theta_\epsilon)$ (or directly from the Euler-Lagrange equation for the phase function), we obtain $\theta'_\epsilon \in H^k$ for all $k \in \setN$. Since $m^\epsilon_1 = \cos \theta_\epsilon$ is nonnegative and $\theta_\epsilon(0)=0$, we find $- \frac{\pi}{2} \le \theta_\epsilon \le \frac{\pi}{2}$. We now see that \begin{align*} \lim_{x \to \pm \infty} \theta_\epsilon(x) = \pm \frac{\pi}{2} \, , \end{align*} and since $m^\epsilon_1 = \cos \theta_\epsilon$ is symmetrically decreasing, we obtain that $\theta_\epsilon$ is nondecreasing.
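Before proceeding, we mention an instructive comparison (the explicit formula is not used in the sequel): if the nonlocal term is dropped from the Euler-Lagrange equation \ref{eq:Euler-Lagrange-N}, the resulting equation $\kappa \theta'' + \frac{1}{2} \sin(2\theta) = 0$ together with $\theta(0)=0$ and $\lim_{x \to \pm \infty} \theta(x) = \pm \frac{\pi}{2}$ admits the explicit solution
\begin{align*}
\theta(x) = \arcsin \big( \tanh (x/\sqrt{\kappa}\,) \big) \,,
\end{align*}
which converges exponentially to its end states. The long logarithmic tail of the N{\'e}el wall profile mentioned in the introduction can thus be attributed to the nonlocal stray field term $\frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta] \sin \theta$.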
As in \cite{CMO} we show that $\theta'_\epsilon(0)>0$: Set $b(x) = \mathcal{S}_\epsilon [\cos \theta_\epsilon](x)$ and observe that $b$ is continuous and bounded. Now assume that $\theta'_\epsilon(0)=0$. Then $\theta_\epsilon$ solves the ODE \begin{align*} \kappa \theta'' + \frac{1}{2} \sin (2\theta) + \frac{1}{\epsilon} b(x) \sin \theta =0, \quad \theta(0)=0, \quad \theta'(0)=0, \end{align*} and the uniqueness theorem for this ODE implies $\theta_\epsilon\equiv0$, contradicting the boundary conditions $\lim_{x \to \pm \infty} \theta_\epsilon(x) = \pm \frac{\pi}{2}$. We summarize the properties of the phase function $\theta_\epsilon$ of a rescaled N{\'e}el wall in the next lemma. \begin{lem} \label{lem:Nphasefunction} The phase function $\theta_\epsilon$ of a rescaled N{\'e}el wall is smooth and satisfies the following properties: \begin{enumerate} \item[(i)] $\theta_\epsilon(0)=0$, $\theta'_\epsilon \ge 0$, and $\theta'_\epsilon(0)>0$. \item[(ii)] $\theta'_\epsilon$, $\cos \theta_\epsilon \in H^k$ for all $k \in \setN$. \item[(iii)] $-\frac{\pi}{2} \le \theta_\epsilon \le \frac{\pi}{2}$ and $\lim_{x \to \pm \infty} \theta_\epsilon(x) = \pm \frac{\pi}{2}$. \end{enumerate} \end{lem} \paragraph{LLG for N{\'e}el walls and choice of coordinates.} Now we consider LLG for one-dimensional domain walls $m=m(t,x):\setR\times\setR \to S^2$. If we rescale space and time by \begin{align*} x \mapsto \frac{\delta}{Q} x \qquad \text{and} \qquad t \mapsto \frac{1}{Q} t \, , \end{align*} respectively, then we obtain the rescaled LLG given by \begin{align*} m_t = \alpha \, m \times H_{\text{eff}}^{\text{res}} - m \times ( m \times H_{\text{eff}}^{\text{res}} ) \, , \,\, \abs{m}=1 \, , \,\, \lim_{x \to \pm \infty} m(\cdot,x) = \pm e_2 \,, \end{align*} with rescaled effective field \begin{align*} H_{\text{eff}}^{\text{res}} = \kappa m'' - (m_1,0,m_3)^T - \frac{1}{\epsilon} (\mathcal{S}_\epsilon [m_1],0, m_3 - \mathcal{S}_\epsilon [m_3])^T + \frac{1}{\epsilon} (0, h_\text{ext},0)^T\, . \end{align*} If the external magnetic field $h_\text{ext}$ is zero, then the rescaled N{\'e}el wall is a stationary solution for the rescaled LLG with $m_3 \equiv 0$. For $h_\text{ext} \not= 0$ and $\alpha \not= 0$, the assumption $m_3 \equiv 0$ becomes incompatible with LLG since the precession term leads to an out-of-plane component $m_3 \not= 0$. In view of the saturation constraint $\abs{m}=1$, we introduce a spherical coordinate system and write \begin{align*} m_1 = \cos \varphi \cos \theta, \quad m_2 = \cos \varphi \sin \theta, \quad m_3 = \sin \varphi, \end{align*} with angles $\varphi,\theta$. In particular, we have $\varphi=0$ and $\theta = \theta_\epsilon$ for the rescaled N{\'e}el wall with phase function \nolinebreak $\theta_\epsilon$. In order to rewrite the rescaled LLG in spherical coordinates, we introduce the matrix \begin{align*} M(m) = \begin{pmatrix} - \sin \varphi \cos \theta & - \sin \varphi \sin \theta & \cos \varphi \\ -\sec \varphi \sin \theta & \hphantom{-} \sec \varphi \cos \theta & 0 \\ \hphantom{-}\cos \varphi \cos \theta & \hphantom{-}\cos \varphi \sin \theta & \sin \varphi \end{pmatrix} \end{align*} with determinant given by $\det M(m) = - \sec \varphi$. For $m$ close to the rescaled N{\'e}el wall, we have $\varphi \approx 0$, and the matrix $M(m)$ is invertible since its determinant $-\sec \varphi$ does not vanish. Multiplication of the rescaled LLG with $M(m)$ leads to an equivalent equation in terms of $\varphi,\theta$.
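The particular form of $M(m)$ can be understood as follows (a short computation, recorded only for the reader's convenience): the rows of $M(m)$ are $\partial_\varphi m$, $\sec^2 \varphi \, \partial_\theta m$, and $m$, so that the chain rule $m_t = \varphi_t \, \partial_\varphi m + \theta_t \, \partial_\theta m$ together with $\abs{\partial_\varphi m} = 1$, $\partial_\varphi m \perp \partial_\theta m$, $\abs{\partial_\theta m}^2 = \cos^2 \varphi$, and $m \cdot m_t = 0$ yields
\begin{align*}
M(m) \, m_t = (\varphi_t, \theta_t, 0)^T \,.
\end{align*}
In other words, multiplication with $M(m)$ extracts the angular velocities from the left hand side of the rescaled LLG.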
A rather long but straightforward calculation shows: \begin{align*} \tag*{$(LLG)_\epsilon$} \begin{array}{ll} \varphi_t = R^\epsilon_1(t,\varphi,\theta,h_\text{ext})\, , & \quad \lim_{x \to \pm \infty} \varphi(\cdot,x) = 0 \, , \\ \theta_t = R^\epsilon_2(t,\varphi,\theta,h_\text{ext})\, , & \quad \lim_{x \to \pm \infty} \theta(\cdot,x) = \pm \frac{\pi}{2} \, , \end{array} \end{align*} where \begin{align*} &R^\epsilon_1(t,\varphi,\theta,h_\text{ext}) \\ =& \frac{\alpha}{\epsilon} h_\text{ext} \cos \theta + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\sin \varphi] \cos \varphi + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \varphi \cos \theta] \sin \varphi \cos \theta + \frac{\alpha}{\epsilon} \mathcal{S}_\epsilon [\cos \varphi \cos \theta] \sin \theta - \frac{1}{\epsilon} h_\text{ext} \sin \varphi \sin \theta \\ &- 2 \alpha \kappa \sin \varphi \, \varphi' \theta' + \frac{1}{4 \epsilon} \sin (2\varphi) (-2 -\epsilon + \epsilon \cos (2\theta) + 2 \epsilon \kappa (\theta')^2) + \kappa \varphi'' + \frac{\alpha}{2} \cos \varphi \sin (2\theta) + \alpha \kappa \cos \varphi \, \theta'' \\ \intertext{and} &R^\epsilon_2(t,\varphi,\theta,h_\text{ext}) \\ =& -\frac{\alpha}{\epsilon} \mathcal{S}_\epsilon [\sin \varphi] + \frac{1}{\epsilon} h_\text{ext} \cos \theta \sec \varphi + \frac{\alpha}{2 \epsilon} \sin \varphi \big(2+ \epsilon - \epsilon \cos (2\theta) \big) + \frac{1}{2} \sin (2\theta) + \frac{\alpha}{\epsilon} h_\text{ext} \tan \varphi \sin \theta \\ &+\frac{1}{\epsilon} \mathcal{S}_\epsilon [ \cos \varphi \cos \theta ] \sec \varphi \sin \theta - \frac{\alpha}{\epsilon} \mathcal{S}_\epsilon [ \cos \varphi \cos \theta ] \tan \varphi \cos \theta - 2 \kappa \tan \varphi \, \varphi' \theta' - \alpha \kappa \sin \varphi (\theta')^2 \\ &- \alpha \kappa \sec \varphi \, \varphi'' + \kappa \theta'' \, . \end{align*} In the following we investigate $(LLG)_\epsilon$ and construct time-periodic solutions close to the rescaled N{\'e}el wall for time-periodic external magnetic fields $h_\text{ext}$. \paragraph{Linearization of the rescaled LLG.} As in \cite{alex_small}, the linearization of $(LLG)_\epsilon$ at the stationary solution is of crucial importance for our arguments. If we set $h_\text{ext}=0$, then the linearization of the right hand side with respect to $(\varphi,\theta)$ at $(\varphi,\theta)=(0,\theta_\epsilon)$ is given by \begin{align*} \mathcal{L}_0^\epsilon = \begin{pmatrix} \hphantom{-\alpha} \mathcal{L}_1^\epsilon & \alpha \mathcal{L}_2^\epsilon \\ -\alpha \mathcal{L}_1^\epsilon & \hphantom{\alpha}\mathcal{L}_2^\epsilon \end{pmatrix} : H^2 \times H^2 \subset L^2 \times L^2 \to L^2 \times L^2\, , \end{align*} where \begin{align*} \mathcal{L}_1^\epsilon u = \hspace{-0.1cm}\kappa u'' - \frac{1}{\epsilon} u - \frac{1}{2} u + \frac{1}{2} \cos(2 \theta_\epsilon) u + \hspace{-0.1cm}\kappa (\theta'_\epsilon)^2 u + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta_\epsilon ] \cos \theta_\epsilon \, u + \frac{1}{\epsilon} \mathcal{S}_\epsilon [u] \end{align*} and \begin{align*} \mathcal{L}_2^\epsilon v = \hspace{-0.1cm}\kappa v'' + \cos(2\theta_\epsilon) v - \frac{1}{\epsilon} \mathcal{S}_\epsilon [\sin \theta_\epsilon \, v] \sin \theta_\epsilon + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta_\epsilon ] \cos \theta_\epsilon \, v \end{align*} for $u,v \in H^2$. We remark that $\mathcal{L}_2^\epsilon$ is the linearization of the Euler-Lagrange equation \ref{eq:Euler-Lagrange-N} for the phase function at $\theta_\epsilon$. 
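For the reader's convenience, we indicate the computation behind this remark: replacing $\theta$ by $\theta_\epsilon + \tau v$ in the left hand side of \ref{eq:Euler-Lagrange-N} and differentiating at $\tau = 0$, we obtain from the linearity of $\mathcal{S}_\epsilon$ that
\begin{align*}
\frac{d}{d \tau} \bigg|_{\tau = 0} \Big( \kappa (\theta_\epsilon + \tau v)'' + \frac{1}{2} \sin (2 \theta_\epsilon + 2 \tau v) + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos (\theta_\epsilon + \tau v)] \sin (\theta_\epsilon + \tau v) \Big) = \mathcal{L}_2^\epsilon v
\end{align*}
for all $v \in H^2$; the middle term contributes $\cos(2 \theta_\epsilon) v$, and the stray field term contributes $- \frac{1}{\epsilon} \mathcal{S}_\epsilon [\sin \theta_\epsilon \, v] \sin \theta_\epsilon + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon \, v$.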
In the following two sections, we collect properties of $\mathcal{L}_1^\epsilon$ and $\mathcal{L}_2^\epsilon$ in order to analyze the spectrum of $\mathcal{L}_0^\epsilon$ in Section \ref{sec:NL0}. To be more precise, we show that $0$ is an isolated point in $\sigma(\mathcal{L}_0^\epsilon)$ with one-dimensional eigenspace spanned by $(0,\theta'_\epsilon)$ and $\sigma(\mathcal{L}_0^\epsilon) \cap \dot{\imath}\setR = \set{0}$, provided the parameter $\epsilon>0$ is small enough. \section{The linear operator $\mathcal{L}_1^\epsilon$} \label{sec:NL1} In this section we prove that $\mathcal{L}_1^\epsilon$ is self-adjoint and invertible for $\epsilon$ small enough. For this we need a priori estimates for the phase function $\theta_\epsilon$ of a rescaled N{\'e}el wall independent of the parameter $\epsilon$. We first state an elementary lemma (without proof) for the symbol of $\mathcal{S}_\epsilon$: \begin{lem} \label{lem:multiplier} We have $\frac{1}{\epsilon} \sigma_\epsilon(\xi) \le \abs{\xi}$ for all $\xi \in \setR$ and $\epsilon >0$. \end{lem} Next, we use the rescaled energy functional for the phase function and the corresponding Euler-Lagrange equation to obtain the required a priori estimates. \begin{lem} \label{lem:apriori} For the phase function $\theta_\epsilon$ of a rescaled N{\'e}el wall, the following a priori estimates are satisfied with a constant $C>0$ independent of $\epsilon>0$: \begin{enumerate} \item[(i)] $\norm{\theta'_\epsilon}_{L^2}$, $\norm{\cos \theta_\epsilon}_{H^1} \le C$ \item[(ii)] $\norm{\theta'_\epsilon}_{H^1}$, $\norm{\theta'_\epsilon}_{L^\infty} \le C$ \item[(iii)] $\frac{1}{\epsilon}\norm{\mathcal{S}_\epsilon[\cos \theta_\epsilon] \cos \theta_\epsilon}_{L^\infty} \le C$ \end{enumerate} \end{lem} \begin{proof} For (i) we choose a smooth and admissible comparison function $\theta$ to find \begin{align*} \kappa \int \abs{\theta'_\epsilon}^2 + \int \cos^2 \theta_\epsilon \le {E_\text{res}^\epsilon} (\theta_\epsilon) \le {E_\text{res}^\epsilon} (\theta) =\kappa \int \abs{\theta'}^2 + \int \cos^2 \theta + \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [\cos \theta] \cos \theta \,. \end{align*} The definition of $\mathcal{S}_\epsilon$ and Lemma \ref{lem:multiplier} lead to \begin{align*} \kappa \int \abs{\theta'_\epsilon}^2 + \int \cos^2 \theta_\epsilon \le\kappa \int \abs{\theta'}^2 + \int \cos^2 \theta + \int \abs{\xi} \abs{\widehat{\cos \theta}}^2 \le \kappa \norm{\theta'}_{L^2}^2 + 2 \norm{\cos \theta}_{H^1}^2 \le C \end{align*} with some constant $C>0$ independent of $\epsilon>0$. Hence, we obtain the estimates $\norm{\theta'_\epsilon}_{L^2}$, $\norm{\cos \theta_\epsilon}_{L^2} \le C$. Since $(\cos \theta_\epsilon)' = - \theta'_\epsilon \sin \theta_\epsilon$, we also find $\norm{\cos \theta_\epsilon}_{H^1} \le C$. For (ii) we recall that $\theta_\epsilon$ solves the Euler-Lagrange equation \ref{eq:Euler-Lagrange-N}: \begin{align*} \kappa \theta''_\epsilon = - \sin \theta_\epsilon \cos \theta_\epsilon - \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta_\epsilon] \sin \theta_\epsilon \,. \end{align*} From the first part, we get $\norm{\sin \theta_\epsilon \cos \theta_\epsilon}_{L^2} \le \norm{\cos \theta_\epsilon}_{L^2} \le C$. 
Moreover, we can estimate the remaining term on the right hand side with the help of Lemma \nolinebreak \ref{lem:multiplier} and (i) as follows: \begin{align*} \frac{1}{\epsilon^2} \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon] \sin \theta_\epsilon}_{L^2}^2 \le \frac{1}{\epsilon^2} \int \mathcal{S}_\epsilon [\cos \theta_\epsilon]^2 \le \int \abs{\xi}^2 \abs{\widehat{\cos \theta_\epsilon}}^2 \le \norm{\cos \theta_\epsilon}_{H^1}^2 \le C \,. \end{align*} We conclude $\norm{\theta''_\epsilon}_{L^2} \le C$. This combined with (i) and the embedding $H^1 \hookrightarrow L^\infty$ implies (ii). For (iii) we first remark that $\,(\cos \theta_\epsilon)'' = (- \theta'_\epsilon \sin \theta_\epsilon )' = - \theta''_\epsilon \sin \theta_\epsilon - (\theta'_\epsilon)^2 \cos \theta_\epsilon\,$ and find the estimate $\norm{(\cos \theta_\epsilon)''}_{L^2} \le \norm{\theta''_\epsilon}_{L^2} + \norm{\theta'_\epsilon}_{L^\infty}^2 \norm{\cos \theta_\epsilon}_{L^2} \le C$ thanks to (i) and (ii). In particular, we have $\norm{\cos \theta_\epsilon}_{H^2} \le C$. We use this and Lemma \ref{lem:multiplier} to see \begin{align*} \frac{1}{\epsilon^2} \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon]}_{H^1}^2 &= \frac{1}{\epsilon^2} \int (1+ \abs{\xi}^2 ) \, \abs{\sigma_\epsilon (\xi) \, \widehat{\cos \theta_\epsilon}(\xi)}^2 \le \int (1+ \abs{\xi}^2)^2 \, \abs{\widehat{\cos \theta_\epsilon}(\xi)}^2 = \norm{\cos \theta_\epsilon}_{H^2}^2 \, , \end{align*} hence $\frac{1}{\epsilon} \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon]}_{H^1} \le C$. It follows $\frac{1}{\epsilon} \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon}_{L^2} \le C$ and \begin{align*} \frac{1}{\epsilon} \norm{(\mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon)'}_{L^2} &\le \frac{1}{\epsilon} \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon]}_{H^1} + \frac{1}{\epsilon} \norm{\mathcal{S}_{\epsilon}[\cos \theta_\epsilon] \theta'_\epsilon \sin \theta_\epsilon}_{L^2} \\ &\le C + \frac{1}{\epsilon} \norm{\mathcal{S}_\epsilon[\cos \theta_\epsilon]}_{L^2} \norm{\theta'_\epsilon}_{L^\infty} \\ &\le C \,. \end{align*} We end up with $\frac{1}{\epsilon}\norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon}_{L^\infty} \le \frac{C}{\epsilon} \, \norm{\mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon}_{H^1} \le C$. This proves (iii) and the lemma. \end{proof} With the help of Lemma \ref{lem:apriori}, we can show that $\mathcal{L}_1^\epsilon$ is invertible for $\epsilon$ small enough. \begin{lem} \label{lem:L1 is sectorial} The linear operator $\mathcal{L}_1^\epsilon : H^2 \subset L^2 \to L^2$ is sectorial and self-adjoint. Furthermore, $\mathcal{L}_1^\epsilon$ is invertible for $\epsilon>0$ small enough. \end{lem} \begin{proof} Because of the decomposition $\mathcal{L}_1^\epsilon = \kappa \Delta + \mathcal{B}^\epsilon$ with a linear and bounded operator $\mathcal{B}^\epsilon:L^2 \to L^2$, we obtain from \cite[Proposition 2.4.1 (i)]{lunardi} that $\mathcal{L}_1^\epsilon$ is sectorial. In particular, there are $\lambda_1$ and $\lambda_2$ in $\rho(\mathcal{L}_1^\epsilon)$ such that $\text{Im}\lambda_1>0$ and $\text{Im}\lambda_2<0$. This combined with the fact that $\mathcal{L}_1^\epsilon$ is $L^2$-symmetric implies that $\mathcal{L}_1^\epsilon$ is self-adjoint.
To prove the remaining statement, we define \begin{align*} \mathcal{G}(u,v) = \langle-\mathcal{L}_1^\epsilon u , v\rangle =& \kappa \int u'v' + \frac{1}{\epsilon} \int uv + \frac{1}{2} \int uv - \frac{1}{2} \int \cos(2\theta_\epsilon) uv \\ & - \kappa \int (\theta'_\epsilon)^2 u v - \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon \,u v - \frac{1}{\epsilon} \int\mathcal{S}_\epsilon [u] v \end{align*} for $u,v \in H^1$. Then $\mathcal{G} : H^1 \times H^1 \to \setR$ is well-defined, bilinear, and bounded. Moreover, we can estimate with the help of Lemma \ref{lem:apriori} as follows: \begin{align*} \mathcal{G}(u,u) \ge& \kappa \int\abs{u'}^2 + \frac{1}{\epsilon} \int \abs{u}^2 - C \int \abs{u}^2 - \frac{1}{\epsilon} \int \sigma_\epsilon \abs{\widehat{u}}^2 \, . \end{align*} We obtain by applying Lemma \ref{lem:multiplier} and the Young inequality that \begin{align*} \mathcal{G}(u,u) \ge \kappa \int\abs{u'}^2 + \frac{1}{\epsilon} \int \abs{u}^2 - C \int \abs{u}^2 - \frac{1}{2\kappa} \int \abs{u}^2 - \frac{\kappa}{2} \int \abs{u'}^2 = \frac{\kappa}{2} \int\abs{u'}^2 + \frac{1}{\epsilon} \int \abs{u}^2 - C \int \abs{u}^2 \end{align*} with some constant $C>0$ independent of $\epsilon>0$. In particular, $\mathcal{G}$ becomes coercive for $\epsilon$ small enough. Thanks to the Lax-Milgram theorem, we find for every $f \in L^2$ a unique element $u \in H^1$ such that $\mathcal{G}(u,v) = -(f,v)_{L^2}$ for all $v \in H^1$. This means that $u$ is the unique weak solution of $\mathcal{L}_1^\epsilon u = f$, and from here we directly obtain $u \in H^2$, thus $\mathcal{L}_1^\epsilon$ is invertible. The lemma is proved. \end{proof} \section{The linear operator $\mathcal{L}_2^\epsilon$} \label{sec:NL2} In this section we analyze the operator $\mathcal{L}_2^\epsilon$. As already remarked, $\mathcal{L}_2^\epsilon$ is the linearization of the rescaled Euler-Lagrange equation for the phase function at $\theta_\epsilon$. Due to the translation invariance of the energy functional, we expect $\mathcal{L}_2^\epsilon$ to have a kernel of at least dimension one. This is true as shown in the next lemma: \begin{lem} \label{lem:L2 is sectorial} The linear operator $\mathcal{L}_2^\epsilon$ is sectorial and self-adjoint. Moreover, the function $\theta'_\epsilon$ belongs to the kernel of $\mathcal{L}_2^\epsilon$. \end{lem} \begin{proof} As in Lemma \ref{lem:L1 is sectorial}, we see that $\mathcal{L}_2^\epsilon$ is sectorial and self-adjoint. We also know that $\theta_\epsilon$ is smooth and solves the Euler-Lagrange equation \ref{eq:Euler-Lagrange-N}. Furthermore, we have thanks to Lemma \ref{lem:Nphasefunction} that $\cos \theta_\epsilon \in H^k$ for all $k \in \setN$, hence $\mathcal{S}_\epsilon[\cos \theta_\epsilon] \in H^k$ for all $k \in \setN$. The Sobolev embedding theorem implies smoothness of $\mathcal{S}_\epsilon[\cos \theta_\epsilon]$, and with the help of the Fourier transform, we obtain the identity \begin{align*} \big( \mathcal{S}_\epsilon [\cos \theta_\epsilon] \big) ' = \mathcal{S}_\epsilon [ (\cos \theta_\epsilon)' ] = -\mathcal{S}_\epsilon [ \sin \theta_\epsilon \, \theta'_\epsilon ] \, . 
\end{align*} We now differentiate the Euler-Lagrange equation \ref{eq:Euler-Lagrange-N} with respect to the space variable and obtain \begin{align*} 0 &= \kappa \theta'''_\epsilon + \cos(2 \theta_\epsilon) \theta'_\epsilon -\frac{1}{\epsilon} \mathcal{S}_\epsilon [\sin \theta_\epsilon \, \theta'_\epsilon] \sin \theta_\epsilon + \frac{1}{\epsilon} \mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon \, \theta'_\epsilon = \mathcal{L}_2^\epsilon \theta'_\epsilon \, . \end{align*} We already know that $\theta'_\epsilon \in H^2$, hence $\theta'_\epsilon \in N(\mathcal{L}_2^\epsilon)$. The lemma is proved. \end{proof} In the sequel we show that the kernel of $\mathcal{L}_2^\epsilon$ is actually one-dimensional and therefore given by $\text{span}\set{\theta'_\epsilon}$. For this we prove a spectral gap estimate for $\mathcal{L}_2^\epsilon$ and follow the arguments presented in \cite{CMO}. Due to the reduction made for LLG in \cite{CMO}, the symbol of $\mathcal{S}_\epsilon$ differs from the one we have in our situation. However, this requires only minor changes. To keep our presentation self-contained, we have decided to repeat the proof here. We start by defining \begin{align*} \mathcal{G}(u,v) = \langle-\mathcal{L}_2^\epsilon u,v\rangle = \kappa \int u' v' -\int \cos (2 \theta_\epsilon) u v + \frac{1}{\epsilon} \int \mathcal{S}_\epsilon[ \sin \theta_\epsilon \, u] \sin \theta_\epsilon \, v - \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [\cos \theta_\epsilon] \cos \theta_\epsilon \, uv \end{align*} and $\,\mathcal{H}(u,v) = \int \big( 1 + \epsilon^{-1} \sigma_\epsilon \big) \widehat{u} \, \overline{ \widehat{v}}\,$ for all $u,v \in H^1$. The next lemma is the major step towards the spectral gap estimate and uses the rescaled Euler-Lagrange equation \ref{eq:Euler-Lagrange-N} together with a cleverly chosen test function. \begin{lem} \label{lem:estimate for G} For all $u \in H^1$ with $u(0)=0$, we have the estimate \begin{align*} \mathcal{G}(u,u) \ge \kappa \norm{u \theta'_\epsilon}_{L^2}^2 + \mathcal{H}(u \sin \theta_\epsilon , u \sin \theta_\epsilon) \, . \end{align*} \end{lem} \begin{proof} First, we assume that $u=0$ in a neighborhood of $0$ and rewrite $\mathcal{G}(u,u)$ with the help of the identity $\cos(2\theta_\epsilon) = \cos^2 \theta_\epsilon - \sin^2 \theta_\epsilon$ in the following way: \begin{align*} \mathcal{G}(u,u) =& \kappa \int \abs{u'}^2 - \int \cos \theta_\epsilon \, \abs{u}^2 \big(\cos \theta_\epsilon + \epsilon^{-1} \mathcal{S}_\epsilon [\cos \theta_\epsilon] \big) + \int \sin \theta_\epsilon \, u \big( \sin \theta_\epsilon \, u + \epsilon^{-1} \mathcal{S}_\epsilon [\sin \theta_\epsilon \, u] \big) \\ =& \kappa \int \abs{u'}^2 - \mathcal{H}(\cos \theta_\epsilon \, \abs{u}^2, \cos \theta_\epsilon) + \mathcal{H}( \sin \theta_\epsilon \, u, \sin \theta_\epsilon \, u) \, . \end{align*} In order to estimate the first two terms on the right hand side, we rewrite the weak form of the rescaled Euler-Lagrange equation \ref{eq:Euler-Lagrange-N} for the phase function in a similar way as above. To be more precise, we have \begin{align*} 0&= -\kappa \int \theta'_\epsilon v' + \frac{1}{2} \int \sin (2 \theta_\epsilon) v + \frac{1}{\epsilon} \int \mathcal{S}_\epsilon [ \cos \theta_\epsilon ] \sin \theta_\epsilon \, v = - \kappa \int \theta'_\epsilon v' + \mathcal{H}( \sin \theta_\epsilon \, v ,\cos \theta_\epsilon ) \, \end{align*} for all $v \in H^1$.
Since $\theta_\epsilon (x) = 0$ if and only if $x = 0$ (see Lemma \ref{lem:Nphasefunction}), we can introduce the test function $v = u^2 \cot \theta_\epsilon \in H^1$. Inserting this test function leads to \begin{align*} \mathcal{H}(\cos \theta_\epsilon \, u^2, \cos \theta_\epsilon ) &= 2 \kappa \int u u' \cot \theta_\epsilon \, \theta'_\epsilon - \kappa \int \abs{\theta'_\epsilon}^2 u^2 ( 1 + \cot^2 \theta_\epsilon) \,. \end{align*} This together with the Young inequality implies \begin{align*} \kappa \hspace{-0.1cm}\int \abs{u'}^2 - \hspace{-0.1cm}\mathcal{H}(\cos \theta_\epsilon \, \abs{u}^2, \cos \theta_\epsilon ) \ge& \kappa \hspace{-0.1cm}\int \abs{u'}^2 + \hspace{-0.1cm}\kappa \hspace{-0.1cm}\int \abs{\theta'_\epsilon}^2 \abs{u}^2 + \hspace{-0.1cm}\kappa \hspace{-0.1cm}\int \abs{\theta'_\epsilon}^2 \abs{u}^2 \cot^2 \theta_\epsilon - \hspace{-0.1cm}\kappa \hspace{-0.1cm}\int \abs{u'}^2 - \hspace{-0.1cm} \kappa \hspace{-0.1cm}\int \abs{u}^2 \abs{\theta'_\epsilon}^2 \cot^2 \theta_\epsilon \\ =& \kappa \hspace{-0.1cm}\int \abs{\theta'_\epsilon}^2 \abs{u}^2 \, . \end{align*} A combination of the above estimates yields \begin{align*} \mathcal{G}(u,u) \ge \kappa \int \abs{\theta'_\epsilon}^2 \abs{u}^2 + \mathcal{H} (\sin \theta_\epsilon \, u,\sin \theta_\epsilon \, u) \end{align*} for all $u \in H^1$ with $u=0$ in a neighborhood of $0$. To complete the proof, let $u \in H^1$ with $u(0)=0$ be given and define for $\delta >0$ the truncated function $u_\delta$ by $u_\delta(x)=u(x- \delta)$ if $x \ge \delta$, $u_\delta (x)=0$ if $-\delta <x< \delta$, and $u_\delta (x) = u(x+\delta)$ if $x \le -\delta$. Since $u_\delta \to u$ in $H^1$ for $\delta \to 0$, the lemma follows from the previous inequality by means of approximation. \end{proof} We are now in a position to prove the spectral gap estimate for $\mathcal{L}_2^\epsilon$. \begin{lem} \label{lem:L2 is coerciv} There is a constant $C=C(\epsilon)>0$ such that $\mathcal{G}(u,u) \ge C \norm{u}_{L^2}^2$ for all $u \in H^1$ with $u\perp \theta'_\epsilon$. \end{lem} \begin{proof} For $u \in H^1$ with $u\perp \theta'_\epsilon$, we consider $v = u - t \theta'_\epsilon \in H^1$ where $t = u(0)/\theta'_\epsilon(0)$ and remark that \begin{align*} \mathcal{G}(v,v) = \mathcal{G}(u,u) + 2 t (\mathcal{L}_2^\epsilon \theta'_\epsilon,u)_{L^2} - t^2 (\mathcal{L}_2^\epsilon \theta'_\epsilon, \theta'_\epsilon)_{L^2} = \mathcal{G}(u,u) \end{align*} thanks to Lemma \ref{lem:L2 is sectorial}. Moreover, Lemma \ref{lem:estimate for G} together with the facts $v(0)=0$ and $\sigma_\epsilon \ge 0$ implies \begin{align*} \mathcal{G}(u,u) \ge \kappa \int \abs{v}^2 \abs{\theta'_\epsilon}^2 + \int \abs{v}^2 \sin^2 \theta_\epsilon \ge \min \set{\kappa,1} \int \abs{v}^2 \big( \abs{\theta'_\epsilon}^2 + \sin^2 \theta_\epsilon \big) \ge C(\epsilon) \int \abs{v}^2 \, . \end{align*} In the previous line, we have used that $\theta_\epsilon$ is nondecreasing and $\theta'_\epsilon (0)>0$ (see Lemma \ref{lem:Nphasefunction}). Since $u\perp \theta'_\epsilon$, we moreover have $\norm{v}_{L^2}^2 = \norm{u}_{L^2}^2 + t^2 \norm{\theta'_\epsilon}_{L^2}^2 \ge \norm{u}_{L^2}^2$, and the statement of the lemma follows. \end{proof} Next, the spectral gap estimate is used to determine the range of $\mathcal{L}_2^\epsilon$. \begin{lem} \label{lem:L2 is invertible on the complement of y} For all $f \in L^2$ with $f \perp \theta'_\epsilon$ there exists a unique $u \in H^2$ with $u \perp \theta'_\epsilon$ such that $\mathcal{L}_2^\epsilon u = f$. Moreover, we have the a priori estimate $\norm{u}_{H^2} \le C \, \norm{f}_{L^2}$ with a constant $C=C(\epsilon)>0$. \end{lem} \begin{proof} First, we show uniqueness.
Let therefore $u_1,u_2 \perp \theta'_\epsilon$ be given such that $\mathcal{L}_2^\epsilon u_1 = \mathcal{L}_2^\epsilon u_2 =f$. We find $\mathcal{G}(u_1 - u_2,v) = 0$ for all $v \in H^1$, and thanks to Lemma \ref{lem:L2 is coerciv}, we get \begin{align*} 0= \mathcal{G}(u_1 - u_2, u_1 - u_2 ) \ge C \norm{u_1 - u_2}_{L^2}^2 \, , \end{align*} hence $u_1 = u_2$. Next, we show the existence of solutions for a given $f \in L^2$ with $f \perp \theta'_\epsilon$ and define the space $H^1_{\perp} = \set{u \in H^1 \, | \, u \perp \theta'_\epsilon \text{ in } L^2}$. We remark that $H^1_{\perp}$ is a closed subspace of $H^1$ and therefore a Hilbert space. We consider the restriction of $\mathcal{G}$ on $H^1_{\perp} \times H^1_{\perp}$, which is again bilinear, symmetric, and bounded. Moreover, we obtain with Lemma \ref{lem:L2 is coerciv} the estimate \begin{align*} (1+\delta) \mathcal{G}(u,u) \ge C \norm{u}_{L^2}^2 + \delta \kappa \norm{u'}_{L^2}^2 - \delta \mathcal{H}(\cos \theta_\epsilon \, \abs{u}^2, \cos \theta_\epsilon) + \delta \mathcal{H}( \sin \theta_\epsilon \, u , \sin \theta_\epsilon \, u) \end{align*} for all $u \in H^1_{\perp}$ and some $\delta>0$ to be chosen below. Because of Lemmas \ref{lem:reduced stray field}, \ref{lem:Nphasefunction}, and \ref{lem:apriori}, we have that \begin{align*} \mathcal{H}(\cos \theta_\epsilon \, \abs{u}^2, \cos \theta_\epsilon) = \int \cos^2 \theta_\epsilon \, \abs{u}^2 + \frac{1}{\epsilon} \int \cos \theta_\epsilon \, \abs{u}^2 \mathcal{S}_\epsilon[\cos \theta_\epsilon] \le C \norm{u}_{L^2}^2 \,. \end{align*} Since $\sigma_\epsilon \ge 0$, we see $\mathcal{H}(\sin \theta_\epsilon \, u, \sin \theta_\epsilon \,u) \ge 0$ and therefore $\mathcal{G}(u,u) \ge c \norm{u}_{H^1}^2$ for all $u \in H^1_{\perp}$, provided $\delta$ is chosen small enough. This means that $\mathcal{G}$ is coercive on $H^1_{\perp}$, and with the help of the Lax-Milgram theorem, we find a unique $u \in H^1_{\perp}$ such that $\mathcal{G}(u,v) = \langle-\mathcal{L}_2^\epsilon u,v\rangle = (-f,v)_{L^2}$ for all $v \in H^1_{\perp}$ and $\norm{u}_{H^1}\le C\, \norm{f}_{L^2}$. In the sequel we prove that this actually holds for all $v \in H^1$; to this end, we define the projection $P$ by $Pv = v - (v,\theta'_\epsilon)_{L^2}/\norm{\theta'_\epsilon}_{L^2}^2 \, \theta'_\epsilon = v - t(v) \theta'_\epsilon$ for all $v \in L^2$. The projection $P$ has the following properties: $Pv \perp \theta'_\epsilon$, $v \perp \theta'_\epsilon$ implies $Pv=v$, and $(Pv,w)_{L^2} = (v,Pw)_{L^2}$ for all $v,w \in L^2$. Furthermore, we have $\mathcal{G}(v,Pw) = \mathcal{G}(v,w)$ for all $v,w \in H^1$ since \begin{align*} \mathcal{G}(v,Pw) &= \mathcal{G}(v,w) - t(w) \mathcal{G}(v,\theta'_\epsilon) = \mathcal{G}(v,w) + t(w) (\mathcal{L}_2^\epsilon \theta'_\epsilon,v)_{L^2} = \mathcal{G}(v,w) \, . \end{align*} For $v \in H^1$ we have $Pv \in H^1_{\perp}$ and therefore \begin{align*} \mathcal{G}(u,v) = \mathcal{G}(u,Pv) = (-f,Pv)_{L^2} = (-Pf,v)_{L^2} = (-f,v)_{L^2} \end{align*} for all $v \in H^1$. We conclude $u \in H^2$, $\mathcal{L}_2^\epsilon u =f$, and $\norm{u}_{H^2} \le C \, \norm{f}_{L^2}$. The lemma is proved. \end{proof} Finally, we summarize the properties of the self-adjoint operator $\mathcal{L}_2^\epsilon$ in the following lemma.
\begin{lem} \label{lem:properties of L2} The following statements for $\mathcal{L}_2^\epsilon$ hold true: \begin{enumerate} \item[(i)] $N(\mathcal{L}_2^\epsilon) = \text{span} \set{\theta'_\epsilon}$ \item[(ii)] $R(\mathcal{L}_2^\epsilon) = L^2_{\perp} = \set{ f \in L^2 \, | \, f \perp \theta'_\epsilon}$ \item[(iii)] $\mathcal{L}_2^\epsilon : H^2_{\perp} \to L^2_{\perp}$ is an isomorphism, where $H^2_\perp = \set{u \in H^2 \, | \, u \perp \theta'_\epsilon}$. \end{enumerate} \end{lem} \section{Spectral analysis for $\mathcal{L}_0^\epsilon$} \label{sec:NL0} In this section we combine the results of Sections \ref{sec:NL1} and \ref{sec:NL2} to study the properties of the linearization \nolinebreak $\mathcal{L}_0^\epsilon$. In particular, we show that $0$ is an isolated point in $\sigma(\mathcal{L}_0^\epsilon)$ (in fact we show that $0$ is semisimple) and $\sigma(\mathcal{L}_0^\epsilon) \cap \dot{\imath}\setR = \set{0}$. This has important consequences for the analytic semigroup generated by the sectorial operator $\mathcal{L}_0^\epsilon$ and is the crucial ingredient for our perturbation argument in Section \ref{sec:Nperturbation}. We start by showing that the leading order term of $\mathcal{L}_0^\epsilon$ defines a sectorial operator. \begin{lem} \label{lem:Laplacian is sectorial} For all $\alpha \in \setR$, the linear operator $\mathcal{L}$ defined by \begin{align*} \mathcal{L} = \begin{pmatrix} \hphantom{-\alpha} \Delta & \alpha \Delta \\ - \alpha \Delta & \hphantom{\alpha} \Delta \end{pmatrix} : H^2 \times H^2 \subset L^2 \times L^2 \to L^2 \times L^2 \end{align*} is sectorial, and the graph norm of $\mathcal{L}$ is equivalent to the $H^2$-norm. \end{lem} \begin{proof} For a given $\lambda \in \setC$ with $\text{Re}\lambda >0$, we define $\mathcal{G}$ by \begin{align*} \mathcal{G}(u,v) =& \langle\lambda u - \mathcal{L}u,v\rangle = \lambda \int u_1 \overline{v_1} + \lambda \int u_2 \overline{v_2} + \int u_1' \overline {v_1'} + \alpha \int u_2' \overline{v_1'} - \alpha \int u_1' \overline{v_2'} + \int u_2' \overline{v_2'} \end{align*} for $u=(u_1,u_2),v=(v_1,v_2) \in H^1 \times H^1$. We see that $\mathcal{G}$ is well-defined, sesquilinear, and bounded. Moreover, $\mathcal{G}$ is coercive since \begin{align*} \text{Re} \,\mathcal{G} (u,u) =& \text{Re} \lambda \norm{u_1}_{L^2}^2 + \text{Re} \lambda \norm{u_2}_{L^2}^2 + \norm{u_1'}_{L^2}^2 + \norm{u_2'}_{L^2}^2 = \text{Re} \lambda \norm{u}_{L^2}^2 + \norm{u'}_{L^2}^2 \end{align*} and $\text{Re} \lambda >0$. With the help of the classical Lax-Milgram theorem, we find for every $f = (f_1,f_2) \in L^2 \times L^2$ a unique $u = (u_1,u_2) \in H^1 \times H^1$ such that $\mathcal{G}(u,v) = (f,v)_{L^2}$ for all $v=(v_1,v_2) \in H^1 \times H^1$. In particular, $u$ is the unique weak solution of the resolvent equation $\lambda u - \mathcal{L} u = f$. This also shows that $u_1$ and $u_2$ satisfy \begin{align*} \lambda (u_1 - \alpha u_2) - (1 + \alpha^2)\Delta u_1 &= f_1 -\alpha f_2 \\ \lambda (u_2 + \alpha u_1) - (1 + \alpha^2)\Delta u_2 &= \alpha f_1 +f_2 \end{align*} in the sense of distributions, hence $u_1,u_2 \in H^2$. We conclude that the resolvent equation admits a unique solution $u \in H^2 \times H^2$ for every right hand side $f \in L^2 \times L^2$. For the resolvent estimate, let $u=(u_1,u_2) \in H^2 \times H^2$ and $f = (f_1,f_2) \in L^2 \times L^2$ with $\lambda u - \mathcal{L} u = f$ be given. 
We find \begin{align*} (f,\Delta u)_{L^2} =& (\lambda u - \mathcal{L} u ,\Delta u )_{L^2} \\ =& - \lambda \norm{u'_1}_{L^2}^2 - \norm{\Delta u_1}_{L^2}^2 - \alpha (\Delta u_2, \Delta u_1)_{L^2} - \lambda \norm{u'_2}_{L^2}^2 + \alpha (\Delta u_1,\Delta u_2)_{L^2} - \norm{\Delta u_2}_{L^2}^2 \,. \end{align*} From here we obtain by considering only the real part that \begin{align*} \text{Re}\lambda \, \norm{u'}_{L^2}^2 + \norm{\Delta u}_{L^2}^2 = - \text{Re} (f,\Delta u)_{L^2} \le \norm{f}_{L^2} \, \norm{\Delta u}_{L^2} \, , \end{align*} hence $\norm{\Delta u}_{L^2} \le \norm{f}_{L^2}$. In particular, we have \begin{align*} \abs{\lambda} \, \norm{u}_{L^2} \le \norm{f}_{L^2} + \norm{\mathcal{L} u}_{L^2} \le \norm{f}_{L^2} + C_\alpha \norm{\Delta u}_{L^2} \le C_\alpha \, \norm{f}_{L^2} \, , \end{align*} thus $\norm{\lambda \text{R}(\lambda,\mathcal{L})} \le C_\alpha$ for all $\text{Re}\lambda >0$. Proposition \ref{prop:sectorial} implies that $\mathcal{L}$ is sectorial. Furthermore, we see with the help of the open mapping theorem that the graph norm of $\mathcal{L}$ is equivalent to the $H^2$-norm. The lemma is proved. \end{proof} Above we made use of the following proposition, which is taken from \cite[Proposition 2.1.11]{lunardi}. \begin{proposition} \label{prop:sectorial} Let $A:D(A) \subset X \to X$ be a linear operator on a Banach space $X$ such that the resolvent set $\rho(A)$ contains the half-plane $\set{\lambda \in \setC \, | \, \text{Re} \lambda \ge \omega}$ and $\norm{\lambda \text{R}(\lambda,A)} \le M$ for all $\text{Re} \lambda \ge \omega$, where $\omega \in \setR$ and $M>0$. Then $A$ is sectorial. \end{proposition} Since we have the decomposition $\mathcal{L}_0^\epsilon =\kappa \mathcal{L} + \mathcal{B}^\epsilon$ where $\mathcal{L}$ is sectorial and $\mathcal{B}^\epsilon:L^2\times L^2 \to L^2\times L^2$ is bounded, we immediately obtain that $\mathcal{L}_0^\epsilon$ is sectorial. In the sequel we show that \nolinebreak $0$ is an isolated point in $\sigma(\mathcal{L}_0^\epsilon)$. To this end, we first identify the kernel and range of $\mathcal{L}_0^\epsilon$. \begin{lem} \label{lem:L0 splitting} Let $\epsilon>0$ be small enough. Then the following statements for the linear operator $\mathcal{L}_0^\epsilon$ hold true: \begin{enumerate} \item[(i)] $N(\mathcal{L}_0^\epsilon) = \set{0} \times N(\mathcal{L}_2^\epsilon) = \set{0} \times \text{span}\set{\theta'_\epsilon}$. \item[(ii)] $R(\mathcal{L}_0^\epsilon)$ is closed and $R(\mathcal{L}_0^\epsilon) = \set{ (f_1,f_2) \in L^2 \times L^2 \, | \, (\alpha f_1 +f_2 , \theta'_\epsilon)_{L^2} = 0}$. \item[(iii)] $L^2 \times L^2 = N(\mathcal{L}_0^\epsilon) \oplus R(\mathcal{L}_0^\epsilon)$. \end{enumerate} \end{lem} \begin{proof} Statement (i) follows directly from the equivalences \begin{align*} (u_1,u_2) \in N(\mathcal{L}_0^\epsilon) \quad &\Leftrightarrow \quad 0= \mathcal{L}_1^\epsilon u_1 + \alpha \mathcal{L}_2^\epsilon u_2 \quad \text{and} \quad 0= -\alpha \mathcal{L}_1^\epsilon u_1 + \mathcal{L}_2^\epsilon u_2 \\ \quad&\Leftrightarrow\quad 0= (1+\alpha^2) \mathcal{L}_1^\epsilon u_1 \quad \text{and} \quad 0= (1+\alpha^2) \mathcal{L}_2^\epsilon u_2 \\ \quad&\Leftrightarrow\quad u_1 = 0 \quad \text{and} \quad u_2 \in N(\mathcal{L}_2^\epsilon) = \text{span}\set{\theta'_\epsilon} \, , \end{align*} where we have used Lemmas \ref{lem:L1 is sectorial} and \ref{lem:properties of L2}. For statement (ii), let first the couple $(f_1,f_2) \in R(\mathcal{L}_0^\epsilon)$ be given.
This means \begin{align*} f_1 &=\mathcal{L}_1^\epsilon u_1 + \alpha \mathcal{L}_2^\epsilon u_2 \,, \qquad f_2 =-\alpha \mathcal{L}_1^\epsilon u_1 + \mathcal{L}_2^\epsilon u_2 \end{align*} for some $u_1, u_2 \in H^2$. We can equivalently rewrite this as \begin{align*} f_1 - \alpha f_2 &= (1+\alpha^2)\mathcal{L}_1^\epsilon u_1 \, , \qquad \alpha f_1 + f_2 = (1+\alpha^2)\mathcal{L}_2^\epsilon u_2 \, , \end{align*} hence $(\alpha f_1 + f_2, \theta'_\epsilon)_{L^2}=0$. For the converse inclusion, let now $f_1,f_2 \in L^2$ be such that $(\alpha f_1 + f_2, \theta'_\epsilon)_{L^2}=0$. Thanks to Lemma \ref{lem:properties of L2}, there exists a $u_2 \in H^2$ such that $\alpha f_1 + f_2 = (1+\alpha^2)\mathcal{L}_2^\epsilon u_2$, and because of Lemma \nolinebreak \ref{lem:L1 is sectorial}, there exists a unique $u_1 \in H^2$ such that $f_1 - \alpha f_2 = (1+\alpha^2)\mathcal{L}_1^\epsilon u_1$. In particular, we have $\mathcal{L}_0^\epsilon (u_1,u_2) = (f_1,f_2)$, hence $(f_1,f_2) \in R(\mathcal{L}_0^\epsilon)$. This proves (ii). We now show that the sum $N(\mathcal{L}_0^\epsilon) + R(\mathcal{L}_0^\epsilon)$ is direct. Let therefore $(f_1,f_2)$ be an element of the intersection $N(\mathcal{L}_0^\epsilon) \cap R(\mathcal{L}_0^\epsilon)$. We find $(f_1,f_2) = (0, \lambda \theta'_\epsilon)$ and $0 = (\alpha f_1 + f_2 , \theta'_\epsilon)_{L^2} = \lambda \norm{\theta'_\epsilon}_{L^2}^2$, hence $(f_1,f_2)=(0,0)$ and the sum is direct. Let now $(f_1,f_2) \in L^2 \times L^2$ be arbitrary. We can decompose $(f_1,f_2)$ as follows: \begin{align*} (f_1,f_2) = (0,\lambda \theta'_\epsilon) + (f_1,f_2 - \lambda \theta'_\epsilon) \in N(\mathcal{L}_0^\epsilon) \oplus R(\mathcal{L}_0^\epsilon) \, , \end{align*} where $\lambda = (\alpha f_1 +f_2,\theta'_\epsilon)_{L^2}/\norm{\theta'_\epsilon}_{L^2}^2$. This proves (iii) and the lemma. \end{proof} We can now prove: \begin{lem} \label{lem:0 is isolated} Let $\epsilon>0$ be small enough. Then 0 is an isolated point in the spectrum of $\mathcal{L}_0^\epsilon$. \end{lem} \begin{proof} Lemma \ref{lem:L0 splitting} implies that $0$ belongs to the resolvent set of the operator \begin{align*} \mathcal{L}_0^\epsilon : H^2 \times H^2 \cap R(\mathcal{L}_0^\epsilon) \subset R(\mathcal{L}_0^\epsilon) \to R(\mathcal{L}_0^\epsilon) \,. \end{align*} Since the resolvent set is always open, we find $r>0$ such that \begin{align*} \lambda I - \mathcal{L}_0^\epsilon : H^2 \times H^2 \cap R(\mathcal{L}_0^\epsilon) \subset R(\mathcal{L}_0^\epsilon) \to R(\mathcal{L}_0^\epsilon) \end{align*} is invertible for all $\abs{\lambda} < r$. Let now $0<\abs{\lambda} < r$ be given. We prove that the mapping \begin{align*} \lambda I - \mathcal{L}_0^\epsilon:H^2 \times H^2 \subset L^2 \times L^2 \to L^2 \times L^2 \end{align*} is invertible and thus $\lambda \in \rho(\mathcal{L}_0^\epsilon)$. First, we show that $\lambda I - \mathcal{L}_0^\epsilon$ is injective. So assume $\lambda u - \mathcal{L}_0^\epsilon u =0$. We decompose $u = v + w \in N(\mathcal{L}_0^\epsilon) \oplus \big(H^2 \times H^2 \cap R(\mathcal{L}_0^\epsilon) \big)$ to find \begin{align*} 0= \lambda v + \big( \lambda w - \mathcal{L}_0^\epsilon w \big) \in N(\mathcal{L}_0^\epsilon) \oplus R(\mathcal{L}_0^\epsilon) \, , \end{align*} hence $\lambda v=0$ and $\lambda w - \mathcal{L}_0^\epsilon w =0$. Since $0<\abs{\lambda} < r$, we obtain $v=w=0$ and $\lambda I - \mathcal{L}_0^\epsilon$ is injective. Next, we show that $\lambda I - \mathcal{L}_0^\epsilon$ is surjective: Let therefore $f \in L^2 \times L^2$ be given.
Again, we use the decomposition \begin{align*} f = g + h \in N(\mathcal{L}_0^\epsilon) \oplus R(\mathcal{L}_0^\epsilon) \end{align*} and find $w \in H^2 \times H^2 \cap R(\mathcal{L}_0^\epsilon)$ such that $\lambda w - \mathcal{L}_0^\epsilon w = h$. Moreover, we choose $v = \lambda^{-1} g \in N(\mathcal{L}_0^\epsilon)$ and set $u = v + w$ to see $\lambda u - \mathcal{L}_0^\epsilon u = f$. It follows $\lambda \in \rho(\mathcal{L}_0^\epsilon)$ for $0<\abs{\lambda}<r$. In particular, 0 is an isolated point in the spectrum $\sigma(\mathcal{L}_0^\epsilon)$. The lemma is proved. \end{proof} Lemmas \ref{lem:L0 splitting} and \ref{lem:0 is isolated} have the following consequences for the analytic semigroup generated by $\mathcal{L}_0^\epsilon$: \begin{lem} \label{lem:analytic semigroup of L0} Let $\epsilon >0$ be small enough. Then the following statements for the analytic semigroup $ e^{t \mathcal{L}_0^\epsilon}$ generated by $\mathcal{L}_0^\epsilon $ hold true: \begin{enumerate} \item[(i)] $e^{t \mathcal{L}_0^\epsilon} u = u$ for all $u \in N(\mathcal{L}_0^\epsilon)$ \item[(ii)] $e^{t \mathcal{L}_0^\epsilon} \big( R(\mathcal{L}_0^\epsilon) \big) \subset R(\mathcal{L}_0^\epsilon)$ \item[(iii)] $\sigma\big({e^{t \mathcal{L}_0^\epsilon}}_{|R(\mathcal{L}_0^\epsilon)} \big) \setminus \set{0} = e^{t (\sigma(\mathcal{L}_0^\epsilon)\setminus \set{0})}$ for all $t>0$. \end{enumerate} \end{lem} \begin{proof} Because of Lemma \ref{lem:0 is isolated}, the point 0 is isolated in $\sigma(\mathcal{L}_0^\epsilon)$, hence the sets $\sigma_1 = \set{0}$ and $ \sigma_2 = \sigma(\mathcal{L}_0^\epsilon)\setminus\set{0}$ are closed. Moreover, we can find $r >0$ such that $\sigma_1 \subset B_r(0)$ and $\sigma_2 \cap \overline{B_r(0)} = \emptyset$. We parameterize the boundary of $B_r(0)$ by a curve $\gamma$, oriented counterclockwise, and define \begin{align*} P = \frac{1}{2 \pi \dot{\imath}} \int_\gamma \text{R}(\xi,\mathcal{L}_0^\epsilon) \, d \xi \,. \end{align*} From \cite[Proposition A.1.2]{lunardi} we know that $P$ is a projection such that $P(L^2 \times L^2) \subset H^2 \times H^2$. Moreover, we can decompose $L^2\times L^2$ by \begin{align*} L^2\times L^2 = X_1 \oplus X_2 \, , \quad X_1 = P(L^2 \times L^2)\,, \quad X_2 = (I-P)(L^2 \times L^2) \,, \end{align*} and this induces a splitting of the operator $\mathcal{L}_0^\epsilon$ as follows: \begin{align*} &\mathcal{A}_1:X_1 \to X_1: u \mapsto \mathcal{L}_0^\epsilon u \, , \\ &\mathcal{A}_2:H^2 \times H^2 \cap X_2 \to X_2: u \mapsto \mathcal{L}_0^\epsilon u\,. \end{align*} Again from \cite[Proposition A.1.2]{lunardi} we get $\sigma(\mathcal{A}_1) = \sigma_1$, $\sigma(\mathcal{A}_2) = \sigma_2$, and $\text{R}(\lambda,\mathcal{A}_1) = \text{R}(\lambda, \mathcal{L}_0^\epsilon)_{|X_1}$, $\text{R}(\lambda,\mathcal{A}_2) = \text{R}(\lambda, \mathcal{L}_0^\epsilon)_{|X_2}$ for all $\lambda \in \rho(\mathcal{L}_0^\epsilon)$. In particular, $\mathcal{A}_1$ and $\mathcal{A}_2$ generate analytic semigroups on $X_1$ and $X_2$, respectively, and we have \begin{align*} e^{t \mathcal{A}_1} = {e^{t \mathcal{L}_0^\epsilon}}_{|X_1} \, , \quad e^{t \mathcal{A}_2} = {e^{t \mathcal{L}_0^\epsilon}}_{|X_2} \,. \end{align*} From Lemma \ref{lem:L0 splitting} and \cite[Proposition A.2.2]{lunardi}, we obtain $X_1 = N(\mathcal{L}_0^\epsilon)$ and $X_2 = R(\mathcal{L}_0^\epsilon)$. This proves (i) and (ii).
To see (iii), we apply the spectral mapping theorem (see \cite[Corollary 2.3.7]{lunardi}) to $\mathcal{A}_2$ and find \begin{align*} \sigma({e^{t \mathcal{L}_0^\epsilon}}_{|R(\mathcal{L}_0^\epsilon)}) \setminus \set{0}= \sigma(e^{t \mathcal{A}_2}) \setminus \set{0} = e^{t \sigma(\mathcal{A}_2)} = e^{t (\sigma(\mathcal{L}_0^\epsilon)\setminus \set{0})} \end{align*} for every $t>0$. The lemma is proved. \end{proof} For the existence of $T$-periodic solutions, it is important to know whether or not the real number $1$ belongs to the spectrum of $e^{T\mathcal{L}_0^\epsilon}$. The general spectral mapping theorem for analytic semigroups (see for example \cite[Corollary 2.3.7]{lunardi}) includes the following equivalence: \begin{align*} 1 \not\in \sigma(e^{T \mathcal{L}_0^\epsilon}) \quad \Leftrightarrow \quad \frac{2 k \pi \dot{\imath}}{T} \not\in \sigma(\mathcal{L}_0^\epsilon) \text{ for all } k \in \setZ\,. \end{align*} We already know that $2 k \pi \dot{\imath}/T \in \sigma(\mathcal{L}_0^\epsilon)$ for $k=0$ because $\mathcal{L}_0^\epsilon$ has a nontrivial kernel. In view of Lemma \nolinebreak \ref{lem:analytic semigroup of L0}, it remains to analyze the cases where $k \not= 0$. Since the operators $\mathcal{L}_1^\epsilon$ and $\mathcal{L}_2^\epsilon$ do not commute, we cannot use the spectral theorem for commuting self-adjoint operators to answer this question. It turns out that the self-adjointness of the operators $\mathcal{L}_1^\epsilon$ and $\mathcal{L}_2^\epsilon$ already suffices to completely rule out the cases where $k \not= 0$. This is the statement of the next lemma: \begin{lem} \label{lem:Spectrum of T} Let $H$ be a Hilbert space and $D \subset H$ be a dense subspace. Furthermore, let $\mathcal{A},\mathcal{B}$ be linear and self-adjoint operators from $D$ to $H$. Consider for $\alpha \in \setR$ the linear operator $ \mathcal{T}$ defined by \begin{align*} \mathcal{T} = \begin{pmatrix} \hphantom{-\alpha} \mathcal{A} & \alpha \mathcal{B} \\ -\alpha \mathcal{A} & \hphantom{\alpha} \mathcal{B} \end{pmatrix} : D \times D \subset H \times H \to H \times H \,. \end{align*} Then $\sigma(\mathcal{T}) \cap \dot{\imath}\setR \subset \set{0}$. \end{lem} \begin{proof} We divide the proof into several steps and start with \\ Claim 1: $\mathcal{T}$ is closed. \\ Proof of Claim 1: We have to show that $(u_n) \subset D \times D, \,\, u_n \to u , \,\, \mathcal{T} u_n \to f$ implies $u \in D \times D, \,\, \mathcal{T} u = f$. For $(u_n)$ and $f$ given as above, we find \begin{align*} \mathcal{A} u_n^1 + \alpha \mathcal{B}u_n^2 \to f^1 \qquad \text{and} \qquad -\alpha \mathcal{A}u_n^1 + \mathcal{B}u_n^2 \to f^2 \,, \end{align*} hence \begin{align*} u_n^1 \to u^1, \quad (1+\alpha^2) \mathcal{A}u_n^1 \to f^1 - \alpha f^2 \qquad \text{and} \qquad u_n^2 \to u^2, \quad (1+\alpha^2) \mathcal{B}u_n^2 \to \alpha f^1 + f^2 \, . \end{align*} Since $\mathcal{A}$ and $\mathcal{B}$ are closed, we obtain $u^1,u^2 \in D$ and \begin{align*} (1+\alpha^2) \mathcal{A}u^1 = f^1 - \alpha f^2 , \quad (1+\alpha^2) \mathcal{B}u^2 = \alpha f^1 + f^2 \, , \end{align*} which, rewritten, means that $u \in D \times D$ and $\mathcal{T} u = f$. Claim 1 is proved. Let now $t \in \setR$ with $t \not =0$ be given.
For the statement of the lemma, we have to show that for every $f=(f^1,f^2) \in H \times H$ there exists a unique element $u = (u^1,u^2) \in D \times D$ such that \begin{align*} \dot{\imath} t u^1 + \mathcal{A}u^1 + \alpha \mathcal{B}u^2 = f^1 \qquad \text{and} \qquad \dot{\imath} t u^2 - \alpha \mathcal{A}u^1 + \mathcal{B}u^2 = f^2 \,. \end{align*} This is equivalent to \begin{align*} \dot{\imath} t (u^1 - \alpha u^2) + (1+\alpha^2) \mathcal{A}u^1 = f^1 - \alpha f^2 \qquad \text{and} \qquad \dot{\imath} t (\alpha u^1 +u^2) + (1+\alpha^2) \mathcal{B}u^2 = \alpha f^1 + f^2 \,. \end{align*} Multiplying both equations by $-\dot{\imath}$ leads to \begin{align*} \mathcal{T}_0 {u^1 \choose u^2} = {- \dot{\imath} f^1 + \dot{\imath} \alpha f^2 \choose -\dot{\imath} \alpha f^1 - \dot{\imath} f^2} \, , \end{align*} where the linear operator $\mathcal{T}_0 : D \times D \subset H \times H \to H \times H$ is defined by \begin{align*} \mathcal{T}_0 = \begin{pmatrix} t - \dot{\imath} (1 + \alpha ^2) \mathcal{A} & - \alpha t \\ \alpha t & t - \dot{\imath}(1+\alpha^2) \mathcal{B} \end{pmatrix} \,. \end{align*} In particular, we have to show that $\mathcal{T}_0$ is bijective. To see this, we first notice that $\mathcal{T}_0$ is closed, which can be proved as in Claim 1. Next, we determine the adjoint of $\mathcal{T}_0$: \\ Claim 2: The adjoint $\mathcal{T}_0^*$ of $\mathcal{T}_0$ is given by \begin{align*} \mathcal{T}_0^* = \begin{pmatrix} t + \dot{\imath} (1 + \alpha ^2) \mathcal{A} & \alpha t \\ - \alpha t & t + \dot{\imath}(1+\alpha^2) \mathcal{B} \end{pmatrix} \end{align*} with domain of definition $D(\mathcal{T}_0^*) = D \times D$. \\ Proof of Claim 2: Recall that the set $D(\mathcal{T}_0^*)$ is defined by \begin{align*} D(\mathcal{T}_0^*) = \left\{ u \in H \times H \, | \, v \mapsto (\mathcal{T}_0v,u)_H \text{ is continuous on } D \times D \text{ with respect to }\norm{\cdot}_{H} \right\} \, , \end{align*} and $\mathcal{T}_0^* u$ for $u \in D(\mathcal{T}_0^*)$ is the unique vector such that $(\mathcal{T}_0v,u)_H = (v, \mathcal{T}_0^* u)_H$ for all $v \in D \times D$. Let now $u \in D \times D$ be given. Since $\mathcal{A}$ and $\mathcal{B}$ are self-adjoint, we find for all elements $v \in D \times D$ the expression \begin{align*} (\mathcal{T}_0 v ,u)_H =&(v^1, t u^1 + \dot{\imath} (1+ \alpha^2) \mathcal{A}u^1 + \alpha t u^2)_H + (v^2,- \alpha t u^1 + t u^2 + \dot{\imath} (1+\alpha^2) \mathcal{B}u^2)_H \, . \end{align*} This implies $u \in D(\mathcal{T}_0^*)$ and \begin{align*} \mathcal{T}_0^* u = \begin{pmatrix} t + \dot{\imath} (1 + \alpha ^2) \mathcal{A} & \alpha t \\ - \alpha t & t + \dot{\imath}(1+\alpha^2) \mathcal{B} \end{pmatrix} { u^1 \choose u^2 } \, . \end{align*} Let now $u \in D(\mathcal{T}_0^*)$ be given. In particular, the mappings \begin{align*} v^1 &\mapsto t(v^1,u^1)_H - \dot{\imath} (1+\alpha^2) ( \mathcal{A} v^1 ,u^1 )_H + \alpha t (v^1,u^2)_H \\ v^2 &\mapsto -\alpha t (v^2,u^1)_H + t(v^2,u^2)_H - \dot{\imath} (1+\alpha^2) (\mathcal{B} v^2,u^2)_H \end{align*} are continuous with respect to $\norm{\cdot}_H$. We obtain $u^1 \in D(\mathcal{A}^*)= D(\mathcal{A}) = D$ and $u^2 \in D(\mathcal{B}^*) = D(\mathcal{B}) = D$, hence $u \in D \times D$. Claim 2 is proved. We now show that $\mathcal{T}_0$ and $\mathcal{T}_0^*$ are injective: \\ Claim 3: For all $u \in D \times D$ we have $\text{Re} (\mathcal{T}_0 u,u)_H = t \, \norm{u}^2_H$ and $\text{Re} (\mathcal{T}_0^* u,u)_H = t \, \norm{u}^2_H$. 
\\ Proof of Claim 3: For $u \in D \times D$ we find \begin{align*} (\mathcal{T}_0u,u)_H =& t \, \norm{u}^2_H + \alpha t \big( (u^1,u^2)_H - \overline{(u^1,u^2)_H} \, \big) - \dot{\imath} (1+ \alpha^2)(\mathcal{A}u^1,u^1)_H - \dot{\imath} (1+ \alpha^2) (\mathcal{B}u^2,u^2)_H \,. \end{align*} Since $\mathcal{A}$ and $\mathcal{B}$ are self-adjoint, we obtain $\text{Re} (\mathcal{T}_0 u,u)_H = t \, \norm{u}^2_H$. The same argument gives the result for $\mathcal{T}_0^*$. Claim 3 is proved. \\ Claim 4: $R(\mathcal{T}_0)$ is a closed subspace of $H \times H$. \\ Proof of Claim 4: Let $(f_n) = (\mathcal{T}_0 u_n) \subset R(\mathcal{T}_0)$ be a sequence with $f_n \to f$. From Claim 3 it follows \begin{align*} \abs{t} \, \norm{u_n - u_m}^2_H &= \abs{\text{Re} (\mathcal{T}_0 u_n -\mathcal{T}_0 u_m, u_n-u_m)_H} \le \norm{f_n - f_m}_H \, \norm{u_n - u_m}_H \, , \end{align*} thus $(u_n)$ is a Cauchy sequence and $u_n \to u$ for some $u \in H \times H$. Since $\mathcal{T}_0$ is closed, we obtain $u \in D \times D$ and $\mathcal{T}_0 u=f$. Claim 4 is proved. Summarizing, we know that $\mathcal{T}_0$ is densely defined, closed, injective with closed range, and $\mathcal{T}_0^*$ is injective. With the help of the closed range theorem, we find $R(\mathcal{T}_0) = N(\mathcal{T}_0^*)^\perp = \set{0}^\perp = H\times H$. In particular, $\mathcal{T}_0$ is a bijection and the lemma is proved. \end{proof} A combination of Lemmas \ref{lem:L1 is sectorial}, \ref{lem:L2 is sectorial}, \ref{lem:analytic semigroup of L0}, and \ref{lem:Spectrum of T} yields: \begin{corollary} \label{lem:analytic semigroup of L0 continued} Let $\epsilon >0$ be small enough. Then the spectrum of $\mathcal{L}_0^\epsilon$ satisfies $\sigma(\mathcal{L}_0^\epsilon) \cap \dot{\imath}\setR =\set{0}$. Moreover, the linear mapping $e^{t \mathcal{L}_0^\epsilon} - I : R(\mathcal{L}_0^\epsilon) \to R(\mathcal{L}_0^\epsilon)$ is a bijection for every $t>0$. \end{corollary} \section{The perturbation argument} \label{sec:Nperturbation} As already announced, we introduce (compared to \cite{alex_small}) an additional parameter in our evolution equation and replace the external magnetic field $h_\text{ext}$ by $\lambda h + \gamma$. We assume that \begin{align*} h \in C^{0,\beta}(\setR,L^2) + C^{0,\beta}(\setR,L^\infty)\qquad (0<\beta<1) \end{align*} is a (fixed) $T$-periodic function and that $\lambda$, $\gamma$ are real parameters. If for example the function $h$ belongs to $C^{0,\beta}(\setR)$ and $\int_0^T h(t) \, dt = 0$, then $\gamma$ represents the time average of the external magnetic field. Since $\theta_\epsilon \not\in H^2$, we decompose the angle $\theta$ by $\theta = \theta_\epsilon + \vartheta$ with $\vartheta(t,\cdot) \in H^2$. Then $(LLG)_\epsilon$ reads as \begin{align*} \begin{array}{ll} \varphi_t = R^\epsilon_1(t,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) \, , & \quad \lim_{x \to \pm \infty} \varphi(\cdot,x)= 0 \, , \\ \vartheta_t = R^\epsilon_2(t,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) \, , & \quad \lim_{x \to \pm \infty} \vartheta(\cdot,x)= 0 \, , \end{array} \end{align*} for $(\varphi,\vartheta)$. For $0<\beta<1$, we define the function spaces \begin{align*} X = C^1([0,T],L^2) \cap C([0,T],H^2) \cap C^{0,\beta}_\beta(]0,T],H^2) \cap C^{1,\beta}_{\beta}(]0,T],L^2) \end{align*} and $Y = C([0,T],L^2) \cap C^{0,\beta}_\beta(]0,T],L^2)$. Similarly to \cite[Lemma 3.3]{alex_small}, we find the following statement: \begin{lem} \label{lem:Nexistence} Let $\theta_\epsilon$ be the phase function of a rescaled N{\'e}el wall. 
Then there exist an open neighborhood $U_\epsilon$ of $(0,0)$ in $H^2\times H^2$ and an open neighborhood $V_\epsilon$ of $(0,0)$ in $\setR \times \setR$ such that \begin{align*} \begin{array}{ll} \varphi_t = R^\epsilon_1(t,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) \, , & \quad \varphi(0,\cdot)={\varphi_0} \, , \\ \vartheta_t = R^\epsilon_2(t,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) \, , & \quad \vartheta(0,\cdot)={\vartheta_0} \, , \end{array} \end{align*} possesses a unique solution $(\varphi,\vartheta)=\big(\varphi(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),\vartheta(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda)\big)$ in $X \times X$ close to $(0,0)$ for all $({\varphi_0},{\vartheta_0}) \in U_\epsilon$ and $(\gamma,\lambda) \in V_\epsilon$. Moreover, the mapping \begin{align*} ({\varphi_0},{\vartheta_0},\gamma,\lambda) \mapsto \big(\varphi(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),\vartheta(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda)\big) \end{align*} is smooth and \begin{gather*} {D_{({\varphi_0},{\vartheta_0})} \varphi(T,0,0,0,0) \choose D_{({\varphi_0},{\vartheta_0})} \vartheta(T,0,0,0,0)}{h_1 \choose h_2} = e^{T \mathcal{L}_0^\epsilon} {h_1 \choose h_2} \, , \\ {D_\gamma \varphi (T,0,0,0,0) \choose D_\gamma \vartheta (T,0,0,0,0)} h_3= \frac{h_3}{\epsilon} \int_0^T e^{(T-s) \mathcal{L}_0^\epsilon} {\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon} \, ds \end{gather*} for all $h_1,h_2 \in H^2$, $h_3 \in \setR$. \end{lem} \begin{proof} Thanks to the embedding $H^1 \hookrightarrow L^\infty$, we can choose an open neighborhood $X_0$ of $0$ in X such that $\sup_{0 \le t \le T} \norm{\varphi(t,\cdot)}_{L^\infty} \le \frac{\pi}{4}$ for all $\varphi \in X_0$. We now define $F:X_0\times X \times \setR \times \setR \to Y\times Y$ by \begin{align*} F(\varphi,\vartheta,\gamma,\lambda) = \big( \varphi_t - R^\epsilon_1(\cdot,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) , \vartheta_t - R^\epsilon_2(\cdot,\varphi,\theta_\epsilon + \vartheta,\gamma,\lambda) \big) \end{align*} and $G:X_0\times X \times H^2\times H^2 \times \setR \times \setR \to Y\times Y \times H^2 \times H^2$ by \begin{align*} G(\varphi,\vartheta,{\varphi_0},{\vartheta_0},\gamma,\lambda) = \big( F(\varphi,\vartheta,\gamma,\lambda) , \varphi(0)-{\varphi_0},\vartheta(0)-{\vartheta_0}\big) \,. \end{align*} The embedding $H^1 \hookrightarrow L^\infty$, the choice of $X_0$, and Lemma \ref{lem:Nphasefunction} imply that $F$ and $G$ are well-defined and smooth. For example, we use the fact $\cos \theta_\epsilon \in L^2$ to see that \begin{align*} \abs{\cos(\theta_\epsilon + \vartheta)} \le \abs{\cos \theta_\epsilon \cos \vartheta} + \abs{\sin \theta_\epsilon \sin \vartheta} \le \abs{\cos \theta_\epsilon} + \abs{\vartheta} \quad \in L^2 \end{align*} for every $\vartheta \in L^2$. We already know that $G(0,0,0,0,0,0) = (0,0,0,0)$ since the rescaled N{\'e}el wall is a stationary solution for LLG with $h_\text{ext}=0$. Moreover, the equation \begin{align*} D_{(\varphi,\vartheta)}G(0,0,0,0,0,0) [\varphi,\vartheta] = (f_1,f_2,g_1,g_2) \end{align*} is equivalent to \begin{align*} { \varphi_t \choose \vartheta_t } = \mathcal{L}^\epsilon_0 { \varphi(t) \choose \vartheta(t)} + { f_1(t) \choose f_2(t) } \, , \quad {\varphi(0) \choose \vartheta(0) } = { g_1 \choose g_2 } \,. 
\end{align*} Since $\mathcal{L}^\epsilon_0$ is sectorial, we obtain together with the optimal regularity result \cite[Corollary 4.3.6]{lunardi} that \begin{align*} D_{(\varphi,\vartheta)}G(0,0,0,0,0,0): X \times X \to Y \times Y \times H^2 \times H^2 \end{align*} is invertible. With the help of the implicit function theorem, we find open neighborhoods $U_\epsilon$ of $(0,0)$ in $H^2\times H^2$, $V_\epsilon$ of $(0,0)$ in $\setR \times \setR$, and a smooth mapping $(\varphi,\vartheta) : U_\epsilon \times V_\epsilon \to X_0 \times X$ such that \begin{align*} \big(\varphi(\cdot,0,0,0,0),\vartheta(\cdot,0,0,0,0)\big) = (0,0) \end{align*} and \begin{align*} G\big(\varphi(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),\vartheta(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),{\varphi_0},{\vartheta_0},\gamma,\lambda\big) = (0,0,0,0) \end{align*} for all $({\varphi_0},{\vartheta_0}) \in U_\epsilon$ and $(\gamma,\lambda) \in V_\epsilon$. This in particular implies that \begin{align*} \big(\varphi(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),\vartheta(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda)\big) \quad \in X \times X \end{align*} is the desired solution. It remains to calculate the derivatives. With the help of the chain rule, we find \begin{align*} (0,0,0,0) =& D_\varphi G(0) \circ D_{({\varphi_0},{\vartheta_0})} \varphi(0) [h_1,h_2] + D_\vartheta G(0) \circ D_{({\varphi_0},{\vartheta_0})} \vartheta(0) [h_1,h_2] + D_{\varphi_0} G(0) [h_1] \\ &+ D_{\vartheta_0} G(0) [h_2] \end{align*} for all $h_1,h_2 \in H^2$. In particular, the function $v$ defined by \begin{align*} v = { D_{({\varphi_0},{\vartheta_0})}\varphi(\cdot,0,0,0,0) \choose D_{({\varphi_0},{\vartheta_0})}\vartheta(\cdot,0,0,0,0)}{h_1 \choose h_2} \end{align*} is a solution of $v_t = \mathcal{L}^\epsilon_0 v$ and $v(0) = (h_1, h_2)^T$, hence $v(t) = e^{t \mathcal{L}^\epsilon_0} {h_1 \choose h_2}$ for all $0\le t \le T$. Similarly, we see that \begin{align*} w = {D_\gamma \varphi (\cdot,0,0,0,0) \choose D_\gamma \vartheta (\cdot,0,0,0,0)} h_3 \end{align*} is a solution of \begin{align*} w_t = \mathcal{L}^\epsilon_0 w + \frac{h_3}{\epsilon}{\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon } \qquad \text{and} \qquad w(0)={ 0 \choose 0} \, . \end{align*} The variation of constants formula yields \begin{align*} w(t) = \frac{h_3}{\epsilon} \int_0^t e^{(t-s) \mathcal{L}^\epsilon_0} { \alpha \, \cos \theta_\epsilon \choose \cos \theta_\epsilon } \, ds \end{align*} for all $0\le t \le T$. The lemma is proved. \end{proof} For $(\varphi,\vartheta) = \big(\varphi(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda),\vartheta(\cdot,{\varphi_0},{\vartheta_0},\gamma,\lambda)\big)$, we make use of the following equivalence: {\itshape{ \begin{center} $(\varphi,\vartheta)$ defines a $T$-periodic solution for $(LLG)_\epsilon$ with $h_\text{ext} = \lambda h + \gamma$ \\ if and only if $\big(\varphi(T),\vartheta(T)\big) = ({\varphi_0},{\vartheta_0})$. \end{center} }} \noindent Because of that, we define the smooth function \begin{align*} F_\epsilon: \, &U_\epsilon \times V_\epsilon \subset H^2 \times H^2 \times \setR \times \setR \to H^2 \times H^2 \times \setR : \\ &({\varphi_0},{\vartheta_0},\gamma,\lambda) \mapsto \big( \varphi(T,{\varphi_0},{\vartheta_0},\gamma,\lambda) - {\varphi_0}, \,\vartheta(T,{\varphi_0},{\vartheta_0},\gamma,\lambda) - {\vartheta_0}, {\vartheta_0}(0)\big) \end{align*} and remark that $F_\epsilon(0,0,0,0) = (0,0,0)$. 
To solve the equation $F_\epsilon({\varphi_0},{\vartheta_0},\gamma,\lambda)=(0,0,0)$ for $\lambda \not=0$, we use the implicit function theorem and the statement of the next lemma. \begin{lem} The linear operator $D_{({\varphi_0},{\vartheta_0},\gamma)}F_\epsilon(0,0,0,0): H^2 \times H^2 \times \setR \to H^2 \times H^2 \times \setR$ is invertible, provided $\epsilon>0$ is small enough. \end{lem} \begin{proof} With the help of Lemma \ref{lem:Nexistence}, we obtain the identity \begin{align*} D_{({\varphi_0},{\vartheta_0},\gamma)}F_\epsilon(0,0,0,0)[h_1,h_2,h_3] =&\bigg( e^{T \mathcal{L}_0^\epsilon} {h_1 \choose h_2} - {h_1 \choose h_2} + \frac{h_3}{\epsilon} \int_0^T e^{(T-s) \mathcal{L}_0^\epsilon} {\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon} \, ds \, , \, h_2(0) \bigg) \end{align*} for all $h_1,h_2 \in H^2$, $h_3 \in \setR$. Moreover, Lemma \ref{lem:L0 splitting} induces the splitting \begin{align*} H^2 \times H^2 = X_1 \oplus X_2 = N(\mathcal{L}_0^\epsilon) \oplus (H^2 \times H^2) \cap R(\mathcal{L}_0^\epsilon) \end{align*} with projections $P_1$ and $P_2$ defined by \begin{align*} P_1 &: H^2 \times H^2 \to X_1: u={u_1 \choose u_2} \mapsto {0 \choose t(u) \theta'_\epsilon}\,,\qquad P_2=I-P_1 \,, \end{align*} where $t(u) = (\alpha u_1 + u_2,\theta'_\epsilon)_{L^2}/\norm{\theta'_\epsilon}_{L^2}^2$. From Corollary \ref{lem:analytic semigroup of L0 continued} we know that $e^{T \mathcal{L}_0^\epsilon} -I : R(\mathcal{L}_0^\epsilon) \to R(\mathcal{L}_0^\epsilon)$ is invertible, and thanks to the smoothing property of $e^{T \mathcal{L}_0^\epsilon}$, that is $e^{T \mathcal{L}_0^\epsilon}(L^2\times L^2) \subset H^2\times H^2$, we obtain that $e^{T \mathcal{L}_0^\epsilon} -I : X_2 \to X_2$ is an isomorphism. Moreover, we have $e^{T \mathcal{L}_0^\epsilon} u = u$ for all $u \in X_1$. For a given element $(f_1,f_2,r) \in H^2 \times H^2 \times \setR$, the equation $D_{({\varphi_0},{\vartheta_0},\gamma)}F_\epsilon(0,0,0,0)[h_1,h_2,h_3] = (f_1,f_2,r)$ is equivalent to \begin{align*} P_1 { f_1 \choose f_2 } &= \frac{h_3}{\epsilon} \int_0^T e^{(T-s) \mathcal{L}_0^\epsilon} P_1 {\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon } \, ds \, , \\ P_2 { f_1 \choose f_2 } &= \big( e^{T \mathcal{L}_0^\epsilon} - I \big) P_2 { h_1 \choose h_2} + \frac{h_3}{\epsilon} \int_0^T e^{(T-s) \mathcal{L}_0^\epsilon} P_2 {\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon } \, , \\ h_2(0)&= r \,. \end{align*} The properties of $\theta_\epsilon$ (see Lemma \ref{lem:Nphasefunction}) imply that $t(\alpha \cos \theta_\epsilon, \cos \theta_\epsilon) >0$ and we find \begin{align*} \frac{h_3}{\epsilon} \int_0^T e^{(T-s) \mathcal{L}_0^\epsilon} P_1 {\alpha \cos \theta_\epsilon \choose \cos \theta_\epsilon } \, ds = \frac{T h_3}{\epsilon} { 0 \choose t(\alpha \cos \theta_\epsilon, \cos \theta_\epsilon ) \theta'_\epsilon } \,. \end{align*} In particular, $h_3$ is uniquely determined by $P_1{f_1\choose f_2}$. We now also obtain a unique $P_2 { h_1 \choose h_2 }$. The requirement $h_2(0) = r$ fixes $P_1 {h_1 \choose h_2}$ since \begin{align*} { h_1 \choose h_2 } = P_1 {h_1 \choose h_2} + P_2 {h_1 \choose h_2} = {0 \choose t \theta'_\epsilon} + P_2 {h_1 \choose h_2} \end{align*} and $\theta'_\epsilon(0) >0$. The lemma is proved. \end{proof} Finally, we can state the main result of this paper. \begin{theorem} \label{thm:3} Let $h \in C^{0,\beta}(\setR,L^2) + C^{0,\beta}(\setR,L^\infty)$ $(0<\beta<1)$ be a $T$-periodic function and $\epsilon >0$ be small enough. 
Then there exist an open ball $B \subset \setR$ centered at $0$ and smooth functions ${\varphi_0},{\vartheta_0}: B \to H^2$, $\gamma : B \to \setR$ such that \begin{align*} (\varphi,\theta) = \big(\varphi(\cdot,{\varphi_0}(\lambda),{\vartheta_0}(\lambda),\gamma(\lambda),\lambda), \theta_\epsilon + \vartheta(\cdot,{\varphi_0}(\lambda),{\vartheta_0}(\lambda),\gamma(\lambda),\lambda) \big) \end{align*} is a $T$-periodic solution for $(LLG)_\epsilon$ with external magnetic field $h_\text{ext} = \lambda h+ \gamma(\lambda)$ for every $\lambda \in B$. In particular, $m=(\cos \varphi \cos \theta, \cos \varphi \sin \theta , \sin \varphi)$ is a $T$-periodic solution for the rescaled LLG with external magnetic field $h_\text{ext}$. By scaling, the statement carries over to the original LLG. \end{theorem} \noindent {\bf Remarks.} \begin{enumerate} \item[(i)] {\itshape The ``correction term'' $\gamma(\lambda)$ in $h_\text{ext} = \lambda h + \gamma(\lambda)$ corresponds to a compatibility condition.} \item[(ii)] {\itshape A similar result is true for Bloch walls in thick layers. We plan to present this in a future paper.} \end{enumerate} \medskip \noindent {\bf Acknowledgments.} This work is part of the author's PhD thesis prepared at the Max Planck Institute for Mathematics in the Sciences (MPIMiS) and submitted in June 2009 at the University of Leipzig, Germany. The author would like to thank his supervisor Stefan M{\"u}ller for the opportunity to work at MPIMiS and for having chosen an interesting problem to work on. Furthermore, the author would like to thank Helmut Abels for helpful discussions and hints on the subject. Financial support from the International Max Planck Research School `Mathematics in the Sciences' (IMPRS) is also acknowledged. \bibliographystyle{abbrv} \bibliography{article-NeelWall} \bigskip\small \noindent{\sc NWF I-Mathematik, Universit\"at Regensburg, 93040 Regensburg}\\ {\it E-mail address}: {\tt [email protected]} \end{document}
{ "config": "arxiv", "file": "1006.4768/article-NeelWall.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: Fundamental solution of $\Delta$ on a torus. QUESTION [1 upvotes]: Does anyone know how the fundamental solution of the Laplacian reads on a flat torus $T^2=R^2/Z^2$? I have found somewhere a statement involving Fourier series that looks like this: if $-\Delta u=f$ for $f\in C^\infty(T^2)$, then there exists a solution $u\in C^\infty(T^2)$ iff $\int f =0$. Moreover \begin{equation} u(x)=\sum_{k\neq 0}e^{2\pi ik\cdot x}(4\pi^2|k|^2)^{-1}\hat{f}(k). \end{equation} Is there any interpretation (an idea to obtain this from the usual fundamental solution on the plane)? Is it necessary to write this using Fourier series, or is there an alternative description? REPLY [1 votes]: The formula simply follows from the Fourier inversion theorem. The zero-average restriction simply follows from the fact that the range of $\Delta$ on the torus $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ consists only of such functions. As for the fundamental solution, the above observation makes it impossible to solve $\Delta \Psi = \delta$ on $\mathbb{T}^2$. A natural substitute is to consider the following compensated equation $$ \Delta \Psi = \delta - 1 \tag{1}$$ where $1$ represents the constant function with the value $1$. Then this paper constructs such a solution using elliptic functions. Specifically, if $\vartheta_1$ denotes the first Jacobi theta function, then with $z = x+iy$, $$ \Psi(z) = - \frac{y^2}{8} + \frac{1}{2\pi} \log \left| \vartheta_1\left(\frac{\pi z}{2} \middle|i\right) \right|. \tag{2} $$ Perhaps a more transparent formula is $$ \begin{aligned} \Psi(z) &= - \frac{|z|^2}{4} + \lim_{N\to\infty} \sum_{\substack{k \in \mathbb{Z}^2 \\ \|k\|\leq N}} \frac{1}{2\pi} \left( \log|z + k| - \log\max\{|k|,1\} \right) \\ &= - \frac{|z|^2}{4} + \frac{1}{2\pi} \log |z| + \frac{1}{8\pi} \sum_{k \in \mathbb{Z}^2\setminus\{0\}} \log\left| 1 - \frac{z^4}{k^4} \right|. \end{aligned} \tag{3} $$ (In the last sum, $k$'s are regarded as elements of $\mathbb{C}$ so that division makes sense.) This formula shows how this $\Psi$ is cooked up; it is a periodic sum of translates of the fundamental solution of $\Delta$ on $\mathbb{R}^2$. Despite their looks, it turns out that both $\text{(2)}$ and $\text{(3)}$ are $\mathbb{Z}^2$-periodic.
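A quick numerical sanity check of the Fourier-series solution formula (a sketch assuming NumPy; the grid size and the zero-mean test function are arbitrary choices): build $u$ by dividing the Fourier coefficients by $4\pi^2|k|^2$, then verify $-\Delta u \approx f$ with a periodic finite-difference Laplacian.

```python
import numpy as np

# Solve -Laplace(u) = f on the unit torus spectrally, then check against a
# second-order periodic finite-difference Laplacian.
N = 128
x = np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.cos(2 * np.pi * X) * np.sin(4 * np.pi * Y)        # zero-mean test function

k = np.fft.fftfreq(N, d=1.0 / N)                         # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
fhat = np.fft.fft2(f)
uhat = np.zeros_like(fhat)
uhat[k2 > 0] = fhat[k2 > 0] / (4 * np.pi**2 * k2[k2 > 0])  # drop the k = 0 mode
u = np.fft.ifft2(uhat).real

h = 1.0 / N
lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
print(np.max(np.abs(-lap - f)))                          # small, O(h^2)
```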
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 3397748, "subset_name": null }
TITLE: Recurrence Relation Models QUESTION [0 upvotes]: Find a recurrence relation for $a_n$, the number of ways to give away $1$, $2$, or $3$ for $n$ days with the constraint that there is an even number of days when $1$ is given away. REPLY [2 votes]: $a_{n+1}=2a_{n}+(3^{n}-a_{n})$, which simplifies to $a_{n+1}=a_n+3^n$. The intuition behind this is that if you have a list of $n$ numbers with an even number of $1$'s, then you can add $2$ or $3$ at the end and it still has an even number of $1$'s. Since the number of lists of $n$ numbers with all elements equal to $1$, $2$ or $3$ is $3^n$, there are $3^n -a_n$ lists that don't have an even number of $1$'s. If we add $1$ at the end of such a list, it will have an even number of $1$'s.
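A brute-force check of the recurrence against direct enumeration (a Python sketch; the cutoff $n \le 7$ is arbitrary):

```python
from itertools import product

# Verify a_{n+1} = 2*a_n + (3^n - a_n), i.e. a_{n+1} = a_n + 3^n,
# against direct enumeration of sequences over {1, 2, 3}.
def a_direct(n):
    return sum(1 for seq in product((1, 2, 3), repeat=n)
               if seq.count(1) % 2 == 0)

a = 1            # a_0 = 1: the empty sequence has an even number (zero) of 1s
for n in range(8):
    assert a == a_direct(n)
    a = 2 * a + (3**n - a)
print("recurrence matches enumeration for n = 0..7")
```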
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 548637, "subset_name": null }
TITLE: Prove/Disprove: Let $f : X \rightarrow Y$ be a function. Do we have $f(A^c) = (f(A))^c$? QUESTION [0 upvotes]: Let $X$ and $Y$ be two sets where $A \subset X$. Let $f : X \rightarrow Y$ be a function. Do we have $f(A^c) = (f(A))^c$? My attempt: I think yes. I take $A= [0, \infty)$, $X=Y= \mathbb{R}$ and $f(x) = x$. Now $f( \mathbb{R} \setminus A ) = (f(A))^c$. I think I'm not on the right track. Any hints/solution will be appreciated. REPLY [1 votes]: Let $X=\Bbb R$, $Y=[0,+\infty)$, $f(x)=x^2$, $A = Y$; then $f[A^c]=(0, +\infty)$ and $f[A]=Y$, so $(f[A])^c = \emptyset$. Theorem: Suppose $f:X \to Y$ obeys $\forall A \subseteq X: f[A^\complement]= f[A]^\complement$. Then $f$ is a bijection. First note that $f$ must be onto: $$f[X]=f[\emptyset^\complement] = f[\emptyset]^\complement= \emptyset^\complement= Y$$ by plugging in $A=\emptyset$. Suppose $x_1 \neq x_2$ exist in $X$ such that $f(x_1)=f(x_2)$. Call the common image point $p$. Then $p \in f[\{x_1\}^\complement]$, as witnessed by $x_2$, while $p \notin f[\{x_1\}]^\complement = \{p\}^\complement$, contradicting the property for $A=\{x_1\}$. So any non-bijection will give a counterexample, and for a bijection $f$, the property $\forall A \subseteq X: f[A^\complement]= f[A]^\complement$ does hold.
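Because the theorem concerns arbitrary maps, it can be checked exhaustively on small finite sets. A Python sketch (the particular three-element sets are an arbitrary choice; on equal-size finite sets every non-bijection is non-injective, which is the case the proof rules out):

```python
from itertools import product, combinations

X = [0, 1, 2]
Y = {'a', 'b', 'c'}
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def has_property(f):
    # Does f[A^c] == (f[A])^c hold for every subset A of X?
    return all({f[x] for x in set(X) - A} == Y - {f[x] for x in A}
               for A in subsets)

for values in product(sorted(Y), repeat=len(X)):   # all 27 maps X -> Y
    f = dict(zip(X, values))
    assert has_property(f) == (len(set(values)) == len(Y))
print("the property holds exactly for the bijections on these sets")
```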
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3482688, "subset_name": null }
TITLE: Is this equation valid: $\gamma b^{e \log_b n+e} = \gamma b^e + n^e$? QUESTION [2 upvotes]: While reading a script I found this equation: $\gamma b^{e \log_b n+e} = \gamma b^e + n^e$, and I cannot figure out how the author did this. I'd appreciate a step-by-step derivation of this equation. REPLY [4 votes]: By the definition of the logarithm $b^{\log_b n}=n$, so following the usual rules of powers of powers: $b^{e\log_b n}=(b^{\log_b n})^e=n^e$. Another rule about powers is $b^{x+y}=b^x\cdot b^y$, so altogether we get $b^{e\log_b n+e}=b^{e\log_bn}\cdot b^e=n^e\cdot b^e$. Therefore it should read $$\gamma b^{e\log_b n+e}=\gamma b^e\cdot n^e,$$ and it looks like a multiplication sign was accidentally converted to a plus sign.
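A numerical spot-check (a sketch; the parameter values are arbitrary) that the identity holds with multiplication and fails with the plus sign:

```python
import math

# Check gamma * b**(e*log_b(n) + e) == gamma * b**e * n**e (multiplication),
# and that the '+' version from the script differs.
for gamma, b, e, n in [(2.0, 3.0, 1.7, 5.0), (0.5, 10.0, 2.0, 7.0)]:
    lhs = gamma * b ** (e * math.log(n, b) + e)
    rhs = gamma * b**e * n**e
    plus_version = gamma * b**e + n**e
    assert math.isclose(lhs, rhs)
    assert not math.isclose(lhs, plus_version)
print("the identity holds with multiplication, not addition")
```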
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 308282, "subset_name": null }
TITLE: Necklace combinations with three groups of beads QUESTION [4 upvotes]: I have a hard question about how many different necklaces can be made. Suppose that we have the following restrictions: We have 3 groups of beads: 4 triangle beads, 6 square beads, 8 circle beads. All the beads in one group are completely identical. This means that if you put two triangle beads next to each other and then switch their positions, this counts as one necklace, because the beads are identical. Necklaces are identical if they coincide under symmetry operations such as rotating them or turning them over. So if we have a necklace ordered in one way and we rotate it 180 degrees or just flip it over, this counts as one necklace. We need to use all 18 beads in each and every new necklace. We cannot create a necklace from 17, 16 or fewer than 18 beads. I read all the topics here but could not find a question about groups of identical beads. I also read about Burnside's lemma, the Pólya enumeration theorem and Necklace (combinatorics) on Wikipedia, but could not find a way to solve this or what the correct answer is. From Burnside's lemma, I found that the answer should be 57, but is this correct? I used the formula from Burnside's lemma directly, but it does not seem quite right to me, because I do not take into account that the three groups have different numbers of beads. $$\frac{1}{24} * (n^6 + 3 * n^4 + 12 * n^3 + 8 * n^2)$$ where $n$ is 3, for the three groups. $$\frac{1}{24} * (3^6 + 3 * 3^4 + 12 * 3^3 + 8 * 3^2) = 57$$ However, as I said earlier, despite the fact that the result looks somewhat realistic, I am not sure that this is the right answer, because I do not use in the formula the fact that we have 4 triangle, 6 square and 8 circle beads. It looks like the weighted version of the Pólya enumeration theorem is what I need. However, I am not sure how to get to the right answer. Thanks in advance. REPLY [1 votes]: I managed to answer the question, and this is the process that I followed: I consider the 18-bead necklace in the first part of the problem. Here are the eighteen rotations expressed in cycle form, where we assume that the slots are numbered from 1 to 18 in clockwise order.
The first is the identity ($e$: no rotation) and the second is the generator $g$, a rotation by a single position which, when repeated, generates all the elements of the group:

$e = g^0 \text{ = (1) (2) (3) (4) (5) (6) (7) (8) (9) (10) (11) (12) (13) (14) (15) (16) (17) (18)}$
$g^1 \text{ = (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18) }$
$g^2 \text{= (1 3 5 7 9 11 13 15 17) (2 4 6 8 10 12 14 16 18)}$
$g^3 \text{= (1 4 7 10 13 16) (2 5 8 11 14 17) (3 6 9 12 15 18)}$
$g^4 \text{= (1 5 9 13 17 3 7 11 15) (2 6 10 14 18 4 8 12 16)}$
$g^5 \text{= (1 6 11 16 3 8 13 18 5 10 15 2 7 12 17 4 9 14)}$
$g^6 \text{= (1 7 13) (2 8 14) (3 9 15) (4 10 16) (5 11 17) (6 12 18)}$
$g^7 \text{= (1 8 15 4 11 18 7 14 3 10 17 6 13 2 9 16 5 12)}$
$g^8 \text{= (1 9 17 7 15 5 13 3 11) (2 10 18 8 16 6 14 4 12)} $
$g^9 \text{= (1 10) (2 11) (3 12) (4 13) (5 14) (6 15) (7 16) (8 17) (9 18)} $
$g^{10} \text{= (1 11 3 13 5 15 7 17 9) (2 12 4 14 6 16 8 18 10)} $
$g^{11} \text{= (1 12 5 16 9 2 13 6 17 10 3 14 7 18 11 4 15 8)} $
$g^{12} \text{= (1 13 7) (2 14 8) (3 15 9) (4 16 10) (5 17 11) (6 18 12)} $
$g^{13} \text{= (1 14 9 4 17 12 7 2 15 10 5 18 13 8 3 16 11 6)} $
$g^{14} \text{= (1 15 11 7 3 17 13 9 5) (2 16 12 8 4 18 14 10 6)} $
$g^{15} \text{= (1 16 13 10 7 4) (2 17 14 11 8 5) (3 18 15 12 9 6)} $
$g^{16} \text{= (1 17 15 13 11 9 7 5 3) (2 18 16 14 12 10 8 6 4)} $
$g^{17} \text{= (1 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2)} $

After that I computed the GCD of each exponent with 18 and grouped the permutations by cycle length in a table:

| Cycle length | Permutations | GCD with 18 |
| --- | --- | --- |
| 1 | $g^0$ | GCD(0, 18)=18 |
| 2 | $g^9$ | GCD(9, 18)=9 |
| 3 | $g^6$, $g^{12}$ | GCD(6, 18)=GCD(12, 18)=6 |
| 6 | $g^3$, $g^{15}$ | GCD(3, 18)=GCD(15, 18)=3 |
| 9 | $g^2$, $g^4$, $g^8$, $g^{10}$, $g^{14}$, $g^{16}$ | GCD(2, 18)=GCD(4, 18)=GCD(8, 18)=GCD(10, 18)=GCD(14, 18)=GCD(16, 18)=2 |
| 18 | $g^1$, $g^5$, $g^7$, $g^{11}$, $g^{13}$, $g^{17}$ | GCD(1, 18)=GCD(5, 18)=GCD(7, 18)=GCD(11, 18)=GCD(13, 18)=GCD(17, 18)=1 |

We have 18 rotation permutations. Let us write $f_k$ for each cycle of length $k$ in a permutation. Then the cycle index is: $$\frac{f_1^{18} + f_2^9 + 2f_3^6 + 2f_6^3 + 6f_9^2 + 6f_{18}^1}{18}$$ Counting all possible necklaces with three colors (for the moment without using the fact that the three groups contain 4, 6 and 8 beads) gives: $$\frac{3^{18} + 3^9 + 2*3^6 + 2*3^3 + 6*3^2 + 6*3^1}{18} = \text{21 524 542}$$ Because turning the necklace over is allowed, we are really counting bracelets, and since the number of beads is even we must also add the reflection permutations: 9 reflections through two opposite beads (cycle type $f_1^2f_2^8$) and 9 through two opposite gaps (cycle type $f_2^9$): $$\frac{f_1^{18} + f_2^9 + 2f_3^6 + 2f_6^3 + 6f_9^2 + 6f_{18}^1 + 9f_1^2f_2^8 + 9f_2^9}{2 * 18}$$ and again for three colors, still without the weights: $$\frac{3^{18} + 3^9 + 2*3^6 + 2*3^3 + 6*3^2 + 6*3^1 + 9 * 3^{10} + 9 * 3^9}{2 * 18} = \text{10 781 954}$$ For the moment we have all the possible necklaces and bracelets with three colors.
However, in order to find the necklaces and bracelets with three colors (4 red, 6 green and 8 blue, standing for the triangle, square and circle beads), we need to substitute: $$f_1 = (x + y + z)$$ $$f_2 = (x^2 + y^2 + z^2)$$ $$f_3 = (x^3 + y^3 + z^3)$$ $$f_6 = (x^6 + y^6 + z^6)$$ $$f_9 = (x^9 + y^9 + z^9)$$ $$f_{18} = (x^{18} + y^{18} + z^{18})$$ and if we substitute in the formula it becomes: $$\frac{(x + y + z)^{18} + (x^2 + y^2 + z^2)^9 + 2(x^3 + y^3 + z^3)^6 + 2(x^6 + y^6 + z^6)^3 + 6(x^9 + y^9 + z^9)^2 + 6(x^{18} + y^{18} + z^{18}) + 9(x + y + z)^2(x^2 + y^2 + z^2)^8 + 9(x^2 + y^2 + z^2)^9}{36}$$ Then we need to find which terms contribute to the coefficient of $x^4y^6z^8$. Using multinomial coefficients, I calculated the following contributions:

- from $(x + y + z)^{18}$: $\frac{18!}{4!\,6!\,8!} = 9\,189\,180$
- from $(x^2 + y^2 + z^2)^9$: $\frac{9!}{2!\,3!\,4!} = 1260$
- from $9(x + y + z)^2(x^2 + y^2 + z^2)^8$: only the square terms $x^2$, $y^2$, $z^2$ of $(x+y+z)^2$ can contribute (a cross term would leave an odd exponent), giving $9\left(\frac{8!}{1!\,3!\,4!} + \frac{8!}{2!\,2!\,4!} + \frac{8!}{2!\,3!\,3!}\right) = 9(280+420+560) = 11\,340$
- from $9(x^2 + y^2 + z^2)^9$: $9 \cdot 1260 = 11\,340$

The $f_3$, $f_6$, $f_9$ and $f_{18}$ terms contribute nothing, since the exponents 4, 6 and 8 are not all multiples of 3, 6, 9 or 18. Summing these contributions gives $9\,213\,120$, and dividing by 36 gives 255 920, which answers the question: we can create 255 920 bracelets with 4 red, 6 green and 8 blue beads.
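The count can be double-checked with a direct Burnside computation over the dihedral group $D_{18}$ (a Python verification sketch: a colouring is fixed by a permutation exactly when it is constant on each cycle, so a small dynamic program over the cycle lengths replaces enumerating all colourings):

```python
n = 18

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list: i -> perm[i]."""
    seen, lengths = set(), []
    for i in range(n):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j, c = perm[j], c + 1
            lengths.append(c)
    return lengths

def fixed_colourings(perm):
    """Colourings with 4 triangles, 6 squares, 8 circles fixed by perm."""
    ways = {(0, 0): 1}            # (triangles used, squares used) -> count
    for ln in cycle_lengths(perm):
        new = {}
        for (t, s), w in ways.items():
            for dt, ds in ((ln, 0), (0, ln), (0, 0)):  # cycle gets T, S or C
                if t + dt <= 4 and s + ds <= 6:
                    key = (t + dt, s + ds)
                    new[key] = new.get(key, 0) + w
        ways = new
    return ways.get((4, 6), 0)    # circles fill the remaining 8 positions

rotations = [[(i + k) % n for i in range(n)] for k in range(n)]
reflections = [[(k - i) % n for i in range(n)] for k in range(n)]
group = rotations + reflections                    # the 36 elements of D_18
print(sum(fixed_colourings(p) for p in group) // len(group))  # -> 255920
```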
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 4, "question_id": 3661917, "subset_name": null }
TITLE: DTFT of an impulse train is equal to 0 through my equation. QUESTION [1 upvotes]: Suppose I have an impulse train function as below, $$ x[n] = \sum_{m=-\infty}^{\infty} {\delta[n-f_0 m]} $$ where $f_0 \in \textbf{Z}$. Now I am trying to calculate its DTFT, so I put it into the DTFT equation as below, $$ X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} {\sum_{m=-\infty}^{\infty} {\delta[n-f_0 m]} e^{-j\omega n}} . $$ Substituting $n-f_0 m = p$, the equation can be rewritten as $$ X(e^{j\omega}) = \sum_{m=-\infty}^{\infty} { \left(\sum_{p=-\infty}^{\infty} {\delta[p]} e^{-j\omega p} \right) e^{-j\omega f_0 m}}. $$ The expression in parentheses equals 1, so $X(e^{j\omega})$ only has the term $$ X(e^{j\omega}) = \sum_{m=-\infty}^{\infty} {e^{-j\omega f_0 m}}. $$ I divided this sum into two parts: $$ X(e^{j\omega}) = \sum_{m=0}^{\infty} {e^{-j\omega f_0 m}}+\sum_{m=0}^{\infty} {e^{j\omega f_0 m}}-1. $$ I calculated this and got an answer equal to 0: $$ X(e^{j\omega}) = \frac{1}{1-e^{-j\omega f_0}} + \frac{1}{1-e^{j\omega f_0}} -1 = 0.$$ At this moment I am stuck, because I cannot find what is incorrect. Please give me 1) some comments for fixing this issue, and 2) how can I get the DTFT of the impulse train? Is this approach correct? Thank you very much in advance! @Matt L. The two infinite sums can each be considered as a geometric series. Consider the first part of the above equation, i.e. $ \sum_{m=0}^{\infty} {e^{-j\omega f_0 m}} $. Its common ratio is $e^{-j\omega f_0}$ and its first term is 1. We get the above fraction, and we can calculate the second part in the same manner. From your answer, I tried to calculate the coefficients as below. If you don't mind, please let me know whether this procedure is correct. $$c_k = \frac{1}{N} \sum_{n=0}^{N-1} {\sum_{m=-\infty}^{\infty} {\delta[n-mN] e^{-j \frac{2\pi}{N} nk}} }.$$ The above equation can be written as below, and except for the case $m=0$ all the terms are 0 by the definition of the (Kronecker) delta sequence. $$\sum_{n=0}^{N-1} {\sum_{m=-\infty}^{\infty} {\delta[n-mN] e^{-j \frac{2\pi}{N} nk}} } = \sum_{m=-\infty}^{\infty} {\left( \delta[0-mN] e^{-j \frac{2\pi}{N} 0k}+\delta[1-mN] e^{-j \frac{2\pi}{N} 1k} +...+ \delta[(N-1)-mN] e^{-j \frac{2\pi}{N} (N-1)k}\right)}.$$ In other words, $ \delta[1-mN]=\delta[2-mN]=...=\delta[(N-1)-mN]$ equal 0 whatever $m$ is. So $c_k=1/N$. I understood up to here, and from now on I should understand the DTFT of $x[n]$, i.e. $X(e^{j\omega})$. Thank you for your comment. @Matt L. Thank you always for your comment. Now I am struggling with the step from $x[n]$ to $X(e^{j\omega})$. What I am doing is as below; I am stuck at the infinite sum. Because $x[n]$ is $$x[n] = \frac{1}{N}\sum_{k=0}^{N-1}{e^{-j\frac{2\pi}{N}nk}},$$ the DTFT can be written as $$ X(e^{j\omega}) = \frac{1}{N} \sum_{k=0}^{N-1} { \sum_{n=-\infty}^{\infty}{e^{-j \left( \frac{2\pi}{N}k - \omega \right)n} } }.$$ I think this can be converted as below, $$ X(e^{j\omega}) = \frac{1}{N} \sum_{k=0}^{N-1} {2\pi \delta \left(\omega-\frac{2\pi}{N}k \right)}, $$ since the DTFT of $e^{j2\pi kn/N}$ is $2\pi\delta(\omega-2\pi k /N)$, as you mentioned previously. Does the DTFT mean the expression below? I think this follows from the definition of the DTFT. If so, I have no idea why in the DTFT of the impulse train the sum over $k$ runs from $-\infty$ to $\infty$.
$$\sum_{n=-\infty}^{\infty}{e^{-j \left( \frac{2\pi}{N}k - \omega \right)n} }.$$ So the above $X(e^{j\omega})$ could be rewritten as $$ X(e^{j\omega}) = \frac{1}{N} \sum_{k=0}^{N-1} {\operatorname{DTFT}(e^{j2\pi kn/N})}.$$ Please give me some comments for fixing this and understanding the DTFT of the impulse train. Thank you always! REPLY [6 votes]: A simple approach to show that the DTFT of an impulse train is an impulse train in the frequency domain is to represent the periodic impulse train by its Fourier series: $$x[n]=\sum_{m=-\infty}^{\infty}\delta[n-mN]=\sum_{k=0}^{N-1}c_ke^{j2\pi kn/N}\tag{1}$$ where the Fourier coefficients $c_k$ are given by $$c_k=\frac{1}{N}\sum_{n=0}^{N-1}x[n]e^{-j2\pi nk/N}$$ For the given $x[n]$, we have $c_k=1/N$ and from (1) $$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}e^{j2\pi kn/N}\tag{2}$$ Since the DTFT of $e^{j2\pi kn/N}$ is $2\pi\delta(\omega-2\pi k /N)$ (periodically continued with period $2\pi$), the DTFT of $x[n]$ is $$X(e^{j\omega})=\frac{2\pi}{N}\sum_{k=-\infty}^{\infty}\delta(\omega-2\pi k/N)$$
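A small numerical confirmation of the two key steps, $c_k = 1/N$ and the reconstruction $(2)$ (a sketch assuming NumPy; $N = 6$ is an arbitrary choice):

```python
import numpy as np

# One period of the impulse train sum_m delta[n - mN].
N = 6
x = np.zeros(N)
x[0] = 1.0

# Fourier series coefficients c_k = (1/N) sum_n x[n] exp(-j 2 pi n k / N).
n = np.arange(N)
c = np.array([x @ np.exp(-2j * np.pi * n * k / N) / N for k in range(N)])
print(np.allclose(c, 1.0 / N))            # True: all coefficients equal 1/N

# Reconstruction (2): x[n] = (1/N) sum_k exp(j 2 pi k n / N).
recon = sum(np.exp(2j * np.pi * k * n / N) for k in range(N)) / N
print(np.allclose(recon, x))              # True on one period
```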
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 1146272, "subset_name": null }
TITLE: Three different definitions of Modular Tensor Categories QUESTION [2 upvotes]: I have found three different definitions of Modular Tensor Categories. I want to know if anybody can give a sketch of proof of their equivalence (some parts are easy, of course). A Modular Tensor Category $\mathscr{C}$ is a semisimple ribbon category with a finite number of isomorphism classes of simple objects, such that (all three conditions are apparently equivalent) either (Turaev) The matrix $\tilde{s}$ with components $\tilde{s}_{ij}=\mathrm{tr}(\sigma_{{X_i}X_j}\circ \sigma_{{X_j}X_i})$ is invertible (the trace is the quantum trace, $\sigma$ is the braiding isomorphism and the $X_i$ are simple) (Bruguieres, Muger) Multiples of the unit $\mathbf{1}$ are the only transparent objects of the category. (An object $X$ is called transparent if $\sigma_{XY}\circ\sigma_{YX}=\mathrm{id}_{Y\otimes X}$ for all objects $Y$.) Our premodular category $\mathscr{C}$ is factorizable: the functor $\mathscr{C}\boxtimes \mathscr{C}^\text{rev}\to \mathcal{Z}(\mathscr{C})$ is an equivalence of categories. The first two definitions I understand very well. The last one, though, I have no idea what it even means: what are these symbols, and what is this functor? But in any case I'm mostly interested in a sketch of proof of the equivalence of these definitions (and maybe an explanation of the meaning of the last one). For example, it is quite trivial to show that Turaev's condition implies the second one; it is basically just some linear algebra and some graphical calculus. The converse, though, I couldn't figure out. REPLY [0 votes]: In his paper Non-degeneracy conditions for braided finite tensor categories, Shimizu shows the equivalence of these definitions in the non-semisimple case. Of course, the $s$-matrix doesn't work there, but it is not hard to show (and I believe he does that) that in the semisimple setting, non-degeneracy of the $s$-matrix is equivalent to non-degeneracy of the Hopf pairing of Lyubashenko's coend.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 1675241, "subset_name": null }
TITLE: How to calculate the velocity vector from a scalar angular velocity and a position vector in 2D? QUESTION [1 upvotes]: I would like to know, if I have the angular velocity as a scalar, how I can calculate the velocity vector. I know that the product of the angular velocity and the distance from the axis gives the speed, but I would like to know the vector version of that. I tried solving this problem many times, but honestly I don't even know what I did; I'm a complete beginner and couldn't really find a solution or answer. REPLY [1 votes]: I assume you want to find the linear velocity if you know the angular speed. You need to realize that all the points on a rotating rigid body have the same angular speed and angular velocity but different linear speeds and velocities. A point on a rigid body that describes a circle of twice the radius of another point will have twice the linear speed, because it covers twice as much distance in the same amount of time. The linear speed $v$ is related to the angular speed $\omega$ by $v=\omega r$. The corresponding vector equation is the cross product $\vec v=\vec \omega \times \vec r$. The direction of the velocity vector is perpendicular to the plane defined by $\vec \omega$ and $\vec r$ using the right-hand rule. Put the thumb of your right hand perpendicular to the plane of rotation and match the circulation of your four fingers to the circulation of the rotating body. The velocity vector at a point is tangent to the circle at that point. Example: The plane of rotation is $xy$, with position vector $\vec r$ in that plane. The angular velocity is perpendicular to the plane. $\vec \omega=\{0,0,\omega\}$ and $\vec r=\{r \cos\theta,r\sin\theta,0\}$. Then $$\vec v=\vec \omega\times \vec r=\{0,0,\omega\}\times \{r \cos\theta,r\sin\theta,0\}=\omega r\{-\sin\theta,\cos\theta,0\}.$$ Both the velocity and position vectors are 2D: they lie in the plane of rotation.
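A worked numerical instance of $\vec v=\vec \omega \times \vec r$ (a sketch assuming NumPy; the numbers are arbitrary):

```python
import numpy as np

# omega = 2 rad/s about the z-axis; point at radius 1.5 m, angle 30 degrees.
omega = np.array([0.0, 0.0, 2.0])
theta = np.pi / 6
r = 1.5 * np.array([np.cos(theta), np.sin(theta), 0.0])

v = np.cross(omega, r)
print(v)                                  # omega*r*(-sin, cos, 0) = [-1.5, 2.598, 0]
print(np.dot(v, r))                       # ~0: v is tangent to the circle
print(np.isclose(np.linalg.norm(v), 2.0 * 1.5))   # speed equals omega * r
```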
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 703506, "subset_name": null }
\begin{document} \keywords{Local maximal operator, fractional Sobolev space, Hardy inequality} \subjclass[2010]{42B25, 46E35, 47H99} \begin{abstract} In this note we establish the boundedness properties of local maximal operators $M_G$ on the fractional Sobolev spaces $W^{s,p}(G)$ whenever $G$ is an open set in $\R^n$, $0<s<1$ and $1<p<\infty$. As an application, we characterize the fractional $(s,p)$-Hardy inequality on a bounded open set $G$ by a Maz'ya-type testing condition localized to Whitney cubes. \end{abstract} \maketitle \markboth{\textsc{H. Luiro and A. V. V\"ah\"akangas}} {\textsc{Local maximal operators}} \section{Introduction} The local Hardy--Littlewood maximal operator $M_G=f\mapsto M_G f$ is defined for an open set $\emptyset\not=G\subsetneq\R^n$ and a function $f\in L^1_{\textup{loc}}(G)$ by \[ M_G f(x)=\sup_{r}\intav_{B(x,r)}\lvert f(y)\rvert\,dy\,,\qquad x\in G\,, \] where the supremum ranges over $0<r<\dist(x,\partial G)$. Whereas the (local) Hardy--Littlewood maximal operator is often used to estimate the absolute size, its Sobolev mapping properties are perhaps less known. The classical Sobolev regularity of $M_G$ is established by Kinnunen and Lindqvist in \cite{MR1650343}; we also refer to \cite{MR2041705,MR1469106,MR1979008,MR1951818,MR2280193}. Concerning smoothness of fractional order, the first author established in \cite{MR2579688} the boundedness and continuity properties of $M_G$ on the Triebel--Lizorkin spaces $F^{s}_{pq}(G)$ whenever $G$ is an open set in $\R^n$, $0<s<1$ and $1<p,q<\infty$. Our main focus lies in the mapping properties of $M_G$ on a fractional Sobolev space $W^{s,p}(G)$ with $0<s<1$ and $1<p<\infty$, cf. Section \ref{s.notation} for the definition or \cite{MR2944369} for a survey of this space. The intrinsically defined function space $W^{s,p}(G)$ on a given domain $G$ coincides with the trace space $F^{s}_{pp}(G)$ if and only if $G$ is regular, i.e., \[ \lvert B(x,r)\cap G\rvert \simeq r^n \] whenever $x\in G$ and $0<r<1$, see \cite[Theorem 1.1]{Z} and \cite[pp. 6--7]{MR1163193}. As a consequence, if $G$ is a regular domain then $M_G$ is bounded on $W^{s,p}(G)$. Moreover, the following question arises: is $M_G$ a bounded operator on $W^{s,p}(G)$ even if $G$ is not regular, e.g., if $G$ has an exterior cusp ? Our main result provides an affirmative answer to the last question: \begin{theorem}\label{t.m.bounded} Let $\emptyset\not=G\subsetneq \R^n$ be an open set, $0<s < 1$ and $1<p < \infty$. Then, there is a constant $C=C(n,p,s)>0$ such that inequality \begin{equation}\label{e.max_bdd} \int_G \int_G \frac{\lvert M_G f(x)-M_G f(y)\rvert^p}{\lvert x-y\rvert^{n+sp}}\,dy\,dx \le C \int_G \int_G \frac{ \lvert f(x)-f(y)\rvert^p}{\lvert x-y\rvert^{n+sp}}\,dy\,dx \end{equation} holds for every $f\in L^p(G)$. In particular, the local Hardy--Littlewood maximal operator $M_G$ is bounded on the fractional Sobolev space $W^{s,p}(G)$. \end{theorem} The relatively simple proof of Theorem \ref{t.m.bounded} is based on a pointwise inequality in $\R^{2n}$, see Proposition \ref{p.maximal}. That is, for $f\in L^p(G)$ we define an auxiliary function $S(f):\R^{2n}\to \R$ \[ S(f)(x,y)=\frac{ \chi_G(x)\chi_G(y) \lvert f(x)-f(y)\rvert}{\lvert x-y\rvert^{\frac{n}{p}+s}}\,,\qquad \text{a.e. }(x,y)\in \R^{2n}\,. \] Observe that the $L^p(\R^{2n})$-norm of $S(f)$ coincides with $\lvert f\rvert_{W^{s,p}(G)}$, compare to definition \eqref{e.semi}. 
The key step is to show that $S(M_G f)(x,y)$ is pointwise almost everywhere dominated by \[C(n,p,s)\sum_{i,j,k,l\in \{0,1\}}\big(M_{ij}(M_{kl}(Sf))(x,y)+ M_{ij}(M_{kl}(Sf))(y,x)\big)\,,\] where each $M_{ij}$ and $M_{kl}$ is either $F\mapsto \lvert F\rvert$ or a $V$-directional maximal operator in $\R^{2n}$ that is defined in terms of a fixed $n$-dimensional subspace $V\subset \R^{2n}$, we refer to Definition \eqref{e.lower_dim}. The geometry of the open set $G$ does not have a pivotal role, hence, we are able to prove the pointwise domination without imposing additional restrictions on $G$. Theorem \ref{t.m.bounded} is then a consequence of the fact that the compositions $M_{ij}M_{kl}$ are bounded on $L^p(\R^{2n})$ if $1<p<\infty$. The described transference of the problem to the $2n$-dimensional Euclidean space is a typical step when dealing with norm estimates for the spaces $W^{s,p}(G)$, we refer to \cite{E-HSV,ihnatsyeva3,Z} for other examples. We plan to adapt the transference method to norm estimates on intrinsically defined Triebel--Lizorkin and Besov function spaces on open sets, \cite{MR1163193}. As an application of our main result, Theorem \ref{t.m.bounded}, we study fractional Hardy inequalities. Let us recall that an open set $\emptyset\not=G\subsetneq\R^n$ admits an $(s,p)$-Hardy inequality, for $0<s<1$ and $1<p<\infty$, if there exists a constant $C>0$ such that inequality \begin{equation}\label{e.hardy} \int_{G} \frac{\lvert f(x)\rvert^p}{\dist(x,\partial G)^{sp}}\,dx \le C \int_{G} \int_{G} \frac{\lvert f(x)-f(y)\rvert ^p}{\lvert x-y\rvert ^{n+sp}}\,dy\,dx \end{equation} holds for all functions $f\in C_c(G)$. These inequalities have attracted some interest recently, we refer to \cite{Dyda3,Dyda2,E-HSV,ihnatsyeva3,ihnatsyeva2,ihnatsyeva1} and the references therein. In Theorem \ref{t.second} we answer a question from \cite{Dyda3}, i.e., we characterize those bounded open sets which admit an $(s,p)$-Hardy inequality. The characterization is given in terms of a localized Maz'ya-type testing condition, where a lower bound $\ell(Q)^{n-sp}\lesssim \mathrm{cap}_{s,p}(Q,G)$ for the fractional $(s,p)$-capacities of all Whitney cubes $Q\in\mathcal{W}(G)$ is required and a quasiadditivity property of the same capacity is assumed with respect to all finite families of Whitney cubes. Aside from inequality \eqref{e.max_bdd} an important ingredient in the proof of Theorem \ref{t.second} is the estimate \begin{equation}\label{e.harnack} \intav_{2^{-1}Q} f\,dx \le C\, \inf_Q M_Gf\,, \end{equation} which holds for a constant $C>0$ that is independent of both $Q\in\mathcal{W}(G)$ and $f\in C_c(G)$. Inequality~\eqref{e.harnack} allows us to circumvent the (apparently unknown) weak Harnack inequalities for the minimizers that are associated with the $(s,p)$-capacities. The weak Harnack based approach is taken up in \cite{MR3189220}; therein the counterpart of Theorem \ref{t.second} is obtained in case of the classical Hardy inequality, i.e., for the gradient instead of the fractional Sobolev seminorm. The structure of this paper is as follows. In Section \ref{s.notation} we present the notation and recall various maximal operators. The proof of Theorem \ref{t.m.bounded} is taken up in Section \ref{s.estimates}. Finally, in Section \ref{s.application}, we give an application of our main result by characterizing fractional $(s,p)$-Hardy inequalities on bounded open sets. 
\section{Notation and preliminaries}\label{s.notation} \subsection*{Notation} The open ball centered at $x\in \R^n$ and with radius $r>0$ is written as $B(x,r)$. The Euclidean distance from $x\in\R^n$ to a set $E$ in $\R^n$ is written as $\dist(x,E)$. The Euclidean diameter of $E$ is $\mathrm{diam}(E)$. The Lebesgue $n$-measure of a measurable set $E$ is denoted by $\vert E\vert.$ The characteristic function of a set $E$ is written as $\chi_E$. We write $f\in C_c(G)$ if $f:G\to \R$ is a continuous function with compact support in an open set $G$. We let $C(\star,\dotsb,\star)$ denote a positive constant which depends on the quantities appearing in the parentheses only. For an open set $\emptyset\not=G\subsetneq \R^n$ in $\R^n$, we let $\mathcal{W}(G)$ be its Whitney decomposition. For the properties of Whitney cubes we refer to \cite[VI.1]{MR0290095}. In particular, we need the inequalities \begin{equation}\label{dist_est} \mathrm{diam}(Q)\le \mathrm{dist}(Q,\partial G)\le 4 \mathrm{diam}(Q)\,,\quad Q\in \mathcal{W}(G)\,. \end{equation} The center of a cube $Q\in\mathcal{W}(G)$ is written as $x_Q$ and $\ell(Q)$ is its side length. By $tQ$, $t>0$, we mean a cube whose sides are parallel to those of $Q$ and that is centered at $x_Q$ and whose side length is $t\ell(Q)$. Let $G$ be an open set in $\R^n$. Let $1< p<\infty$ and $0<s<1$ be given. We write \begin{equation}\label{e.semi} \lvert f \rvert_{W^{s,p}(G)} = \bigg( \int_G\int_{G}\frac{\lvert f(x)-f(y)\rvert^p}{\lvert x-y\rvert^{n+s p}}\, dy\,dx\,\bigg)^{1/p} \end{equation} for measurable functions $f$ on $G$ that are finite almost everywhere. By $W^{s,p}(G)$ we mean the fractional Sobolev space of functions $f$ in $L^p({G})$ with \[\lVert f\rVert_{W^{s,p}({G})} =\lVert f\rVert_{L^p({G})}+|f|_{W^{s,p}({G})}<\infty\,.\] \subsection*{Maximal operators} Let $\emptyset\not=G\subsetneq \R^n$ be an open set. The local Hardy--Littlewood maximal function of $f\in L^1_{\textup{loc}}(G)$ is defined as follows. For every $x\in G$, we write \begin{equation}\label{d.max} M_G f(x) = \sup_r \intav_{B(x,r)} \lvert f(y)\rvert\,dy\,, \end{equation} where the supremum ranges over $0< r< \dist(x,\partial G)$. For notational convenience, we write \begin{equation}\label{e.zero} \intav_{B(x,0)} \lvert f(y)\rvert \,dy = \lvert f(x)\rvert \end{equation} whenever $x\in G$ is a Lebesgue point of $\lvert f\rvert$. It is clear that, at the Lebesgue points of $\lvert f\rvert$, the supremum in \eqref{d.max} can equivalently be taken over $0\le r\le \dist(x,\partial G)$. The following lemma is from \cite[Lemma 2.3]{Dyda3}. \begin{lemma}\label{l.continuity} Let $\emptyset\not=G\subsetneq \R^n$ be an open set and $f\in C_c(G)$. Then $M_G f$ is continuous on $G$. \end{lemma} Let us fix $i,j\in \{0,1\}$ and $1<p<\infty$. For a function $F\in L^p(\R^{2n})$ we write \begin{equation}\label{e.lower_dim} M_{ij}(F)(x,y) = \sup_{r>0}\intav_{B(0,r)} \lvert F(x+iz,y+jz)\rvert \,dz \end{equation} for almost every $(x,y)\in \R^{2n}$. Observe that $M_{00}(F)=\lvert F\rvert$. By applying Fubini's theorem in suitable coordinates and boundedness of the centred Hardy--Littlewood maximal operator in $L^p(\R^n)$ we find that $M_{ij}=F\mapsto M_{ij}(F)$ is a bounded operator on $L^p(\R^{2n})$; let us remark that the measurability of $M_{ij}(F)$ for a given $F\in L^p(\R^{2n})$ can be checked by first noting that the supremum in \eqref{e.lower_dim} can be restricted to the rational numbers $r>0$ and then adapting the proof of \cite[Theorem 8.14]{MR924157} with each $r$ separately. 
\section{The proof of Theorem \ref{t.m.bounded}}\label{s.estimates} Within this section we prove our main result, namely Theorem \ref{t.m.bounded} that is stated in the Introduction. Let us first recall a convenient notation. Namely, for $f\in L^p(G)$ we write \[ S(f)(x,y) = S_{G,n,s,p}(f)(x,y)=\frac{ \chi_G(x)\chi_G(y) \lvert f(x)-f(y)\rvert}{\lvert x-y\rvert^{\frac{n}{p}+s}} \] for almost every $(x,y)\in \R^{2n}$. The main tool for proving Theorem \ref{t.m.bounded} is a pointwise inequality, stated in Proposition \ref{p.maximal}, which might be of independent interest. \begin{proposition}\label{p.maximal} Let $\emptyset\not=G\subsetneq\R^n$ be an open set, $0<s<1$ and $1<p<\infty$. Then there exists a constant $C=C(n,p,s)>0$ such that, for almost every $(x,y)\in\R^{2n}$, inequality \begin{equation}\label{e.dom} S(M_G f)(x,y) \le C\sum_{i,j,k,l\in \{0,1\}} \big( M_{ij}(M_{kl}(S f))(x,y) +M_{ij}(M_{kl}(S f))(y,x)\big) \end{equation} holds whenever $f\in L^p(G)$ and $S f\in L^p(\R^{2n})$. \end{proposition} By postponing the proof of Proposition \ref{p.maximal} for a while, we can prove Theorem \ref{t.m.bounded}. \begin{proof}[Proof of Theorem \ref{t.m.bounded}] Fix $f\in L^p(G)$. Without loss of generality, we may assume that the right hand side of inequality \eqref{e.max_bdd} is finite. Hence $S f \in L^p(\R^{2n})$ and inequality \eqref{e.max_bdd} is a consequence of Proposition \ref{p.maximal} and the boundedness of maximal operators $M_{ij}$ on $L^p(\R^{2n})$. \end{proof} We proceed to the postponed proof that is motivated by that of \cite[Theorem 3.2]{MR2579688}. \begin{proof}[Proof of Proposition \ref{p.maximal}] By replacing the function $f$ with $\lvert f\rvert$ we may assume that $f\ge 0$. Since $f\in L^p(G)$ and, hence, $M_G f\in L^p(G)$ we may restrict ourselves to points $(x,y)\in G\times G$ for which both $x$ and $y$ are Lebesgue points of $f$ and both $M_Gf(x)$ and $M_Gf(y)$ are finite. Moreover, by symmetry, we may further assume that $M_G f(x)>M_G f(y)$. These reductions allow us to find $0\le r(x)\le \dist(x,\partial G)$ and $0\le r(y)\le \dist(y,\partial G)$ such that the estimate \begin{align*} S(M_G f)(x,y)&=\frac{\lvert M_G f(x)-M_G f(y)\rvert}{\lvert x-y\rvert^{\frac{n}{p}+s}}\\ &=\frac{\lvert \intav_{B(x,r(x))}f\,-\intav_{B(y,r(y))}f\,\rvert}{\lvert x-y\rvert^{\frac{n}{p}+s}} \leq\frac{\lvert \intav_{B(x,r(x))}f\,-\intav_{B(y,r_2)}f\,\rvert}{\lvert x-y\rvert^{\frac{n}{p}+s}} \end{align*} is valid for any given number \[0\le r_2\le \dist(y,\partial G)\,;\] this number will be chosen in a convenient manner in the two case studies below. \smallskip \noindent {\bf{Case $r(x)\le \lvert x-y\rvert$}.} Let us denote $r_1=r(x)$ and choose \begin{equation}\label{e.case} r_2=0\,. \end{equation} If $r_1=0$, then we get from \eqref{e.case} and our notational convention \eqref{e.zero} that \[S(M_G f)(x,y)\leq S(f)(x,y)\,.\] Suppose then that $r_1>0$. Now \begin{align*} S(M_G f)(x,y)&\le \frac{1}{\lvert x-y\rvert^{\frac{n}{p}+s}}\bigg{|}\intav_{B(x,r_1)}f(z)\,dz\,-\intav_{B(y,r_2)}f(z)\,dz\,\bigg{|} \\&=\frac{1}{\lvert x-y\rvert^{\frac{n}{p}+s}} \bigg{|}\intav_{B(x,r_1)} f(z)-f(y)\,dz\,\,\bigg{|}\\ & \lesssim \intav_{B(0,r_1)} \frac{\chi_G(x+z)\chi_G(y)\lvert f(x+z)-f(y)\rvert}{\lvert x+z-y\rvert^{\frac{n}{p}+s}}\,dz\le M_{10}(S f)(x,y)\,. 
\end{align*} We have shown that \begin{align*} S(M_G f)(x,y) \lesssim S (f)(x,y) + M_{10}(S f)(x,y) \end{align*} and it is clear that inequality \eqref{e.dom} follows (recall that $M_{00}$ is the identity operator when restricted to non-negative functions). \smallskip \noindent {\bf{Case $r(x)>\lvert x-y\rvert$}.} Let us denote $r_1=r(x)>0$ and choose \[ r_2 = r(x) - \lvert x-y\rvert>0\,. \] We then have \begin{align*} &\bigg{|}\intav_{B(x,r_1)}f(z)\,dz\,-\intav_{B(y,r_2)}f(z)\,dz\,\bigg{|} =\bigg{|}\intav_{B(0,r_1)}f(x+z)-f(y+\frac{r_2}{r_1}z)\,dz\,\bigg{|}\\ &=\bigg{|}\intav_{B(0,r_1)}\bigg(f(x+z)-\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G}f(a)\,da\bigg)\\ &\qquad \qquad+\bigg(\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G}f(a)\,da -f(y+\frac{r_2}{r_1}z)\bigg)\,dz\,\bigg{|}\\ &\leq A_1+A_2\,, \end{align*} where we have written \begin{align*} A_1 &= \intav_{B(0,r_1)}\bigg(\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G}|f(x+z)-f(a)|\,da\,\bigg)\,dz\,,\\ A_2&=\intav_{B(0,r_1)}\bigg(\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G}|f(y+\frac{r_2}{r_1}z)-f(a)|\,da\,\bigg)\,dz\,. \end{align*} We estimate both of these terms separately, but first we need certain auxiliary estimates. \smallskip Recall that $r_2=r_1-\lvert x-y\rvert$. Hence, for every $z\in B(0,r_1)$, \begin{align*} \lvert y+\frac{r_2}{r_1}z-(x+z)\rvert&=\lvert y-x+\frac{(r_2-r_1)}{r_1} z\rvert\\ &\leq \lvert y-x\rvert+\frac{\lvert x-y\rvert}{r_1}\lvert z\rvert\leq 2\lvert y-x\rvert\,. \end{align*} This, in turn, implies that \begin{equation}\label{e.inclusion} B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\subset B(x+z,4\lvert x-y\rvert) \end{equation} whenever $z\in B(0,r_1)$. Moreover, since $r_1> \lvert x-y\rvert$ and $\{y+\frac{r_2}{r_1}z, x+z\}\subset B(x,r_1)\subset G$ if $\lvert z\rvert< r_1$, we obtain the two equivalences \begin{equation}\label{e.equivalence} \lvert B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G\rvert \simeq \lvert x-y\rvert^n\simeq \lvert B(x+z,4\lvert x-y\rvert)\cap G\rvert \end{equation} for every $z\in B(0,r_1)$. Here the implied constants depend only on $n$. \smallskip \noindent {\bf An estimate for $A_1$}. The inclusion \eqref{e.inclusion} and inequalities \eqref{e.equivalence} show that, in the definition of $A_1$, we can replace the domain of integration in the inner integral by $B(x+z,4\lvert x-y\rvert)\cap G$ and, at the same time, control the error term while integrating on average. That is to say, \begin{align*} A_1&\lesssim \intav_{B(0,r_1)}\bigg(\intav_{B(x+z,4\lvert x-y\rvert)\cap G}|f(x+z)-f(a)|\,da\,\bigg)\,dz\,. \end{align*} By observing that both $x+z$ and $a$ in the last double integral belong to $G$ and using \eqref{e.equivalence} again, we can continue as follows: \begin{align*} \frac{A_1}{\lvert x-y\rvert^{\frac{n}{p}+s}} &\lesssim \intav_{B(0,r_1)}\bigg(\intav_{B(x+z,4\lvert x-y\rvert)} \frac{ \chi_G(x+z)\chi_G(a)\lvert f(x+z)-f(a)\rvert}{\lvert x+z-a\rvert^{\frac{n}{p}+s}}\,da\bigg)dz\\ &\lesssim\intav_{B(0,r_1)}\bigg(\intav_{B(y+z,5\lvert x-y\rvert)} S(f)(x+z,a)\,da\,\bigg)\,dz\,. \end{align*} Applying the maximal operators defined in Section \ref{s.notation} we find that \begin{align*} \frac{A_1}{\lvert x-y\rvert^{\frac{n}{p}+s}}&\lesssim \intav_{B(0,r_1)} M_{01}(Sf)(x+z,y+z)\,dz\leq M_{11}(M_{01}(Sf))(x,y)\,. 
\end{align*} \smallskip \noindent {\bf An estimate for $A_2$.} We use the inclusion $y+\frac{r_2}{r_1}z\in G$ for all $z\in B(0,r_1)$ and then apply the first equivalence in \eqref{e.equivalence} to obtain \begin{align*} A_2 &= \intav_{B(0,r_1)}\bigg(\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)\cap G} \chi_G(y+\frac{r_2}{r_1}z)\chi_G(a)\lvert f(y+\frac{r_2}{r_1}z)-f(a)\rvert \,da\,\bigg)\,dz\\ &\lesssim \intav_{B(0,r_1)}\bigg(\intav_{B(y+\frac{r_2}{r_1}z,2\lvert x-y\rvert)} \chi_G(y+\frac{r_2}{r_1}z)\chi_G(a)\lvert f(y+\frac{r_2}{r_1}z)-f(a)\rvert \,da\,\bigg)\,dz\,. \end{align*} Hence, a change of variables yields \begin{align*} \frac{A_2}{\lvert x-y\rvert^{\frac{n}{p}+s}} &\lesssim \intav_{B(0,r_2)}\bigg(\intav_{B(y+z,2\lvert x-y\rvert)}\frac{\chi_G(y+z)\chi_G(a)\lvert f(y+z)-f(a)\rvert}{\lvert y+z-a\rvert^{\frac{n}{p}+s}}\,d a\,\bigg)\,dz\\ &\lesssim \intav_{B(0,r_2)}\bigg(\intav_{B(x+z,3\lvert x-y\rvert)}S(f)(y+z,a)\,da\,\bigg)\,dz\,. \end{align*} Applying operators $M_{01}$ and $M_{11}$ from Section \ref{s.notation}, we can proceed as follows \begin{align*} \frac{A_2}{\lvert x-y\rvert^{\frac{n}{p}+s}}&\lesssim\intav_{B(0,r_2)} M_{01}(Sf)(y+z,x+z)\,dz\leq M_{11}(M_{01}(Sf))(y,x)\,. \end{align*} \smallskip Combining the above estimates for $A_1$ and $A_2$ we end up with \begin{align*} S(M_Gf)(x,y)\le\frac{A_1+A_2}{\lvert x-y\rvert^{\frac{n}{p}+s}} \lesssim M_{11}(M_{01}(S f))(x,y)+M_{11}(M_{01}(S f))(y,x) \end{align*} and inequality \eqref{e.dom} follows. \end{proof} \section{Application to fractional Hardy inequalities}\label{s.application} We apply Theorem \ref{t.m.bounded} by solving a certain localisation problem for $(s,p)$-Hardy inequalities and our result is formulated in Theorem \ref{t.second} below. Recall that an open set $\emptyset\not=G\subsetneq\R^n$ admits an $(s,p)$-Hardy inequality, for $0<s<1$ and $1<p<\infty$, if there is a constant $C>0$ such that inequality \begin{equation} \int_{G} \frac{\lvert f(x)\rvert^p}{\dist(x,\partial G)^{sp}}\,dx \le C \int_{G} \int_{G} \frac{\lvert f(x)-f(y)\rvert ^p}{\lvert x-y\rvert ^{n+sp}}\,dy\,dx \end{equation} holds for all functions $f\in C_c(G)$. We need a characterization of $(s,p)$-Hardy inequality in terms of the following $(s,p)$-capacities of compact sets $K\subset G$\,; we write \[ \mathrm{cap}_{s,p}(K,G) = \inf_u \lvert u\rvert_{W^{s,p}(G)}^p\,, \] where the infimum is taken over all real-valued functions $u\in C_c(G)$ such that $u(x)\ge 1$ for every $x\in K$. The `Maz'ya-type characterization' stated in Theorem \ref{t.maz'ya} is \cite[Theorem 1.1]{Dyda3} and it extends to the case $0<p<\infty$. For information on characterizations of this type, we refer to \cite[Section 2]{MR817985} and \cite{MR2723821}. \begin{theorem}\label{t.maz'ya} Let $0<s<1$ and $1<p<\infty$. Then an open set $\emptyset\not=G\subsetneq \R^n$ admits an $(s,p)$-Hardy inequality if and only if there is a constant $C>0$ such that \begin{equation}\label{e.Mazya} \int_K \mathrm{dist}(x,\partial G)^{-sp}\,dx \le C\,\mathrm{cap}_{s,p}(K,G) \end{equation} for every compact set $K\subset G$. \end{theorem} We solve a `localisation problem of the testing condition \eqref{e.Mazya}', which is stated as a question in \cite[p. 2]{Dyda3}. Roughly speaking, we prove that if $\mathrm{cap}_{s,p}(\cdot,G)$ satisfies a quasiadditivity property, see Definition \ref{d.quasi}, then $G$ admits an $(s,p)$-Hardy inequality if and only if inequality \eqref{e.Mazya} holds for all Whitney cubes $K=Q\in\mathcal{W}(G)$. 
\begin{definition}\label{d.quasi} The $(s,p)$-capacity $\mathrm{cap}_{s,p}(\cdot,G)$ is weakly $\mathcal{W}(G)$-quasiadditive, if there exists a constant $N>0$ such that \begin{equation}\label{e.quasi} \sum_{Q\in\mathcal{W}(G)} \mathrm{cap}_{s,p}(K\cap Q,G) \le N\,\mathrm{cap}_{s,p}(K, G) \end{equation} whenever $K=\bigcup_{Q\in\mathcal{E}}Q$ and $\mathcal{E}\subset\mathcal{W}(G)$ is a finite family of Whitney cubes. \end{definition} More precisely, we prove the following characterization. \begin{theorem}\label{t.second} Let $0<s<1$ and $1<p<\infty$ be such that $sp<n$. Suppose that $G\not=\emptyset$ is a bounded open set in $\R^n$. Then the following conditions (A) and (B) are equivalent. \begin{itemize} \item[(A)] $G$ admits an $(s,p)$-Hardy inequality; \item[(B)] $\mathrm{cap}_{s,p}(\cdot,G)$ is weakly $\mathcal{W}(G)$-quasiadditive and there exists a constant $c>0$ such that \begin{equation}\label{e.testing} \ell(Q)^{n-sp}\le c\,\mathrm{cap}_{s,p}(Q,G) \end{equation} for every $Q\in\mathcal{W}(G)$. \end{itemize} \end{theorem} Before the proof of Theorem \ref{t.second}, let us make a remark concerning condition (B). \begin{remark} The counterexamples in \cite[Section 6]{Dyda3} show that neither one of the two conditions (i.e., weak $\mathcal{W}(G)$-quasiadditivity of the capacity and the lower bound \eqref{e.testing} for the capacities of Whitney cubes) appearing in Theorem \ref{t.second}(B) is implied by the other one. Accordingly, both of these conditions are needed for the characterization. \end{remark} \begin{proof}[Proof of Theorem \ref{t.second}] The implication from (A) to (B) follows from \cite[Proposition 4.1]{Dyda3} in combination with \cite[Lemma 2.1]{Dyda3}. In the following proof of the implication from (B) to (A) we adapt the argument given in \cite[Proposition 5.1]{Dyda3}. By Theorem \ref{t.maz'ya}, it suffices to show that \begin{equation}\label{e.wish} \int_K \mathrm{dist}(x,\partial G)^{-sp}\, dx \lesssim \mathrm{cap}_{s,p}(K,G)\,, \end{equation} whenever $K\subset G$ is compact. Let us fix a compact set $K\subset G$ and an admissible test function $u$ for $\mathrm{cap}_{s,p}(K,G)$. We partition $\mathcal{W}(G)$ as $\mathcal{W}_1\cup \mathcal{W}_2$, where \begin{align*} \mathcal{W}_1 = \{Q\in\mathcal{W}(G)\,:\, \langle u\rangle_{2^{-1}Q} :=\intav_{2^{-1}Q} u < 1/2\}\,,\qquad \mathcal{W}_2 = \mathcal{W}(G)\setminus \mathcal{W}_1\,. \end{align*} Write the left-hand side of \eqref{e.wish} as \begin{equation}\label{e.w_split} \bigg\{\sum_{Q\in\mathcal{W}_1} + \sum_{Q\in\mathcal{W}_2}\bigg\} \int_{K\cap Q} \mathrm{dist}(x,\partial G)^{-sp}\,dx\,. \end{equation} To estimate the first series we observe that, for every $Q\in\mathcal{W}_1$ and every $x\in K\cap Q$, \[ \tfrac 12=1-\tfrac12 < u(x) - \langle u\rangle_{2^{-1}Q} = \lvert u(x)-\langle u\rangle_{2^{-1}Q}\rvert\,. \] Thus, by Jensen's inequality and \eqref{dist_est}, \begin{align*} \sum_{Q\in\mathcal{W}_1} \int_{K\cap Q} \mathrm{dist}(x,\partial G)^{-sp}\,dx &\lesssim \sum_{Q\in\mathcal{W}_1} \ell(Q)^{-sp} \int_{Q} \lvert u(x)-\langle u\rangle_{2^{-1}Q}\rvert^p\,dx\\ &\lesssim \sum_{Q\in\mathcal{W}_1} \ell(Q)^{-n-sp} \int_{Q}\int_{Q} \lvert u(x)-u(y)\rvert^p\,dy\,dx\\ &\lesssim \sum_{Q\in\mathcal{W}_1} \int_{Q}\int_{Q} \frac{\lvert u(x)-u(y)\rvert^p}{\lvert x-y\rvert^{n+sp}}\,dy\,dx \\&\lesssim \lvert u\rvert_{W^{s,p}(G)}^p\,. \end{align*} Let us then focus on the remaining series in \eqref{e.w_split}. Let us consider $Q\in\mathcal{W}_2$ and $x\in Q$. Observe that $2^{-1}Q\subset B(x,\tfrac 45\mathrm{diam}(Q))$. 
Hence, by inequalities \eqref{dist_est}, \begin{equation}\label{e.m_est} \begin{split} M_G u(x) \gtrsim \intav_{2^{-1}Q} u(y)\,dy \geq \tfrac 12\,. \end{split} \end{equation} The support of $M_G u$ is a compact set in $G$ by the boundedness of $G$ and the fact that $u\in C_c(G)$. By Lemma \ref{l.continuity}, we find that $M_G u$ is continuous. From these remarks we conclude that there is $\rho>0$, depending only on $n$, such that $\rho M_G u$ is an admissible test function for $\mathrm{cap}_{s,p}(\cup_{Q\in\mathcal{W}_2} Q,G)$. The family $\mathcal{W}_2$ is finite, as $u\in C_c(G)$. Hence, by condition (B) and the inequality \eqref{e.m_est}, \begin{align*} \sum_{Q\in\mathcal{W}_2} \int_{K\cap Q} \mathrm{dist}(x,\partial G)^{-sp}\,dx &\lesssim \sum_{Q\in\mathcal{W}_2} \ell(Q)^{n-sp}\\ &\le c \sum_{Q\in\mathcal{W}_2} \mathrm{cap}_{s,p}(Q,G) \\ &\le cN \mathrm{cap}_{s,p}\Big(\bigcup_{Q\in\mathcal{W}_2} Q,G\Big)\\ &\le cN\rho^p \int_G \int_G \frac{\lvert M_G u(x)-M_G u(y)\rvert^p}{\lvert x-y\rvert^{n+sp}}\,dy\,dx\,. \end{align*} By Theorem \ref{t.m.bounded}, the last term is dominated by \[ C(n,s,p,N,c,\rho)\lvert u\rvert_{W^{s,p}(G)}^p\,. \] The desired inequality \eqref{e.wish} follows from the considerations above. \end{proof} \def\cprime{$'$}
{ "config": "arxiv", "file": "1406.1637.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: Averaging over random walk on binary lattice QUESTION [4 upvotes]: I have a function $f$ defined over a bit vector of length $n$. Equivalently, this is a function defined on the set of integers $\{0,\ldots,2^n-1\}$. I would like to compute the mean or variance or some other statistics of $f$ over the entire domain: $$\overline{f} = \frac{1}{2^n}\sum_{i=0}^{2^n-1}f(i)$$ It turns out that $f$ is quite difficult to evaluate, and so I can't possibly evaluate the entire sum. Instead, I sample the bit vector space with a random walk, treating each bit as a binary dimension. Each step of the random walk flips a bit at random. What is the proper way to compute statistics if $f$ is sampled using such a random walk? In other words, what kind of weight factor or metric should be used for the sum? My gut feeling tells me that the space will not be sampled uniformly, so simply averaging all the samples on a walk will not be correct. REPLY [1 votes]: I'm not sure, but I think what you meant to ask was a practical question: how many random walk samples do I need in order to get a certain precision in estimating $\mathbb{E}(f)$? This question has been studied for many Markov chains, but ultimately, to get an answer you must know something about $f$. Here is a reference which gives you tail bounds in terms of $\max(|f|)$, $\mathrm{Var}(f)$ and the spectral gap of the Markov chain. There are probably more accessible papers, specifically for the case of the hypercube (but I don't know any specific one).
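One point worth adding, which settles the worry at the end of the question (standard Markov chain theory, not specific to this thread): the bit-flip walk is doubly stochastic, so its stationary distribution is exactly uniform on $\{0,1\}^n$, and by the ergodic theorem for irreducible positive-recurrent chains the plain, unweighted time-average of $f$ along the walk converges to $\overline{f}$ — no weight factor is needed; correlation between consecutive samples only slows convergence, it does not bias the estimate. A minimal sketch, with a cheap stand-in for the expensive $f$:

import random

def f(x):
    # Stand-in for the expensive function; any f on {0,...,2^n - 1} works here.
    return bin(x).count("1") ** 2

n = 12

# Exact mean by enumeration (feasible only for small n; this is the ground truth).
exact = sum(f(x) for x in range(2 ** n)) / 2 ** n

# Plain time-average along the bit-flip random walk.  The chain is doubly
# stochastic, so its stationary distribution is uniform, and the unweighted
# average converges to the true mean.
random.seed(0)
x = random.getrandbits(n)
total, steps = 0.0, 200_000
for _ in range(steps):
    x ^= 1 << random.randrange(n)   # flip one uniformly chosen bit
    total += f(x)

print(exact, total / steps)   # the two numbers agree to a few decimal places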
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 4, "question_id": 71944, "subset_name": null }
TITLE: $\lim\limits_{x\to \infty} xf(x)$ if $f(x)$ is a positive monotonically decreasing function. QUESTION [2 upvotes]: There might be several simple solutions to the following problem (I would be interested in knowing them too). However, the main intention of this post is verification. I wanted to know if the argument made here is flawless. $\textbf{Question.}$ Let $f: \mathbb{R}^+ \to \mathbb{R}^+$ be a monotonically decreasing function. If $xf(x)$ does not converge to $0$, then there exist $\epsilon > 0$ and $M>0$ such that for all $x\geq M$, \begin{equation*} xf(x) \geq \epsilon. \end{equation*} $\textbf{Attempt.}$ Let us suppose the contrary. That is $\textbf{Statement 1:}$ Given $\epsilon >0$, whatever the real number $M>0$ we choose, it is possible to find $x>M$ such that $xf(x) < \epsilon$. From the given condition that $\lim\limits_{x\to\infty}xf(x) \neq 0$, we should have an $\epsilon_0 > 0$ for which $\textbf{Statement 2.}$ Whatever $M>0$ is chosen, it is possible to find $x>M$, such that $xf(x) > \epsilon_0$. Before moving further, we shall establish a fact for the given situation. Statement $2$ implies that it is possible to construct an increasing divergent sequence $\{x_k\}$ such that $x_k f(x_k) \geq \epsilon_0$. We shall show that for any such sequence $\{x_k\}$, the corresponding series $\sum f(x_k)$ diverges. Given any $x_k$ it is possible to choose a natural number $n_k$ such that $x_k < n_k$. We can thus assume to have constructed an increasing sequence $\{n_k\}$ such that $n_1 < n_2< \dots$ and $x_k < n_k$. Construct now a sequence $b_n$ such that $b_{n_k} = f(x_k)$ and $b_m = 0$ for all other $m$. Then \begin{equation*} \sum b_n = \sum f(x_n). \end{equation*} (That is, if one of them diverges, so does the other; if one converges, so does the other, to the same value.) Clearly \begin{equation*} n_kb_{n_k} \geq x_kf(x_k) \geq \epsilon_0. \end{equation*} This implies $\sum b_n$ does not converge (because if $\sum a_n$ converges, with $a_n >0$, then $na_n \to 0$ as $n\to \infty$). Hence $\sum f(x_n)$ diverges. We now see what happens if Statement 1 is also true. Statement 1 and Statement 2 allow us to construct two sequences $\{x_n\}$ and $\{y_n\}$ as follows: \begin{equation*} x_1 < y_1 \quad \text{and} \quad y_{n-1} < x_n < y_n \quad \text{for all $n \geq 2$} , \end{equation*} \begin{equation*} y_n > n^2 \end{equation*} and \begin{equation*} x_nf(x_n) \geq \epsilon_0 \quad \text{and} \quad y_nf(y_n) < \epsilon_0. \end{equation*} Clearly \begin{equation*} f(y_n) < \frac{\epsilon_0}{y_n}< \frac{\epsilon_0}{n^2}. \end{equation*} This implies that $\sum f(y_n)$ converges. Now $y_{n-1} < x_n$ for all $n\geq 2$. Since $f$ is decreasing, this implies that $f(x_n) \leq f(y_{n-1})$. This implies that $\sum f(x_n)$ converges, which cannot happen in the given situation. Hence Statement 1 cannot hold. $\blacksquare$ REPLY [1 votes]: The claim is false. Consider your favourite strictly increasing sequence $a_n\nearrow \infty$ such that $\lim_{n\to\infty}\frac{a_{n}}{a_{n+1}}=0$ (for example, $a_n=n^n$). Set $a_0=0$ for convenience, and let $n(x)=\min\{n\in \Bbb N\,:\, a_n\ge x\}$.
Then call $$f(x)=\frac1{a_{n(x)}}$$ Notice that eventually $a_{n+1}> a_n+n^{-1}$ and $a_n-n^{-1}>a_{n-1}$, and therefore $$\lim_{n\to\infty}(a_n+n^{-1})f(a_n+n^{-1})=\lim_{n\to\infty}\frac{a_n}{a_{n+1}}+\frac1{na_{n+1}}=0\\ \lim_{n\to\infty}(a_n-n^{-1})f(a_n-n^{-1})=\lim_{n\to\infty}1-\frac{1}{na_n}=1$$ This function can be adapted into a $C^\infty$ function by smoothing out the jumps, without changing any of the relevant features. You can also make it strictly decreasing at any time, by adding $e^{-x^2}$.
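A quick numerical sanity check of this counterexample (a sketch; it evaluates the step function $f$ just below and just above each $a_n$, with $a_n = n^n$ as suggested):

def a(n):
    return 0 if n == 0 else n ** n

def f(x):
    # f(x) = 1 / a(n(x)) with n(x) = min{ n : a(n) >= x }
    n = 0
    while a(n) < x:
        n += 1
    return 1.0 / a(n)

for n in range(2, 9):
    left  = (a(n) - 1.0 / n) * f(a(n) - 1.0 / n)   # -> 1 as n grows
    right = (a(n) + 1.0 / n) * f(a(n) + 1.0 / n)   # -> 0 as n grows
    print(n, left, right)

# left tends to 1 while right tends to 0, so x*f(x) oscillates and no
# epsilon/M as in the question can exist -- exactly as the answer argues.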
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 4246561, "subset_name": null }
TITLE: Triangle equilateral proof QUESTION [0 upvotes]: Let $D, E$, and $F$ be points on the sides $BC, CA$, and $AB$ respectively of triangle $ABC$ such that $BD=CE=AF$ and $\angle BDF=\angle AFE$. Prove that triangle $ABC$ is equilateral. It looks really difficult to me. I tried using the sine rule and Ceva's theorem, but it doesn't help. REPLY [1 votes]: Here's a picture of the counterexample, where $BD=CE=AF$ and $\angle BDE = \angle AFE$. As you can notice in the picture, the angles are slightly different, as I couldn't adjust them manually. However, that can be taken care of. As you can notice, in the first picture below $\angle BDE < \angle AFE$, while in the second one $\angle BDE > \angle AFE$. As the change in the angle is continuous, by the Intermediate Value Theorem they have to be the same at some point.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 2878592, "subset_name": null }
TITLE: Prove that there are no integers $a\ge 2\:$ and $n\ge 1$ such that $a^2+3=3^n$ QUESTION [0 upvotes]: Prove that there are no integers $a\ge 2\:$ and $n\ge 1$ such that $a^2+3=3^n$. I cannot figure out which method to use (proof by contradiction, contraposition, proof by cases, etc.). Can you help me with how to start? REPLY [2 votes]: $a^2 = 3(3^{n-1} - 1)\implies 3 \mid a^2\implies 3 \mid a\implies a = 3m\implies 9m^2= 3(3^{n-1}-1)\implies 3m^2=3^{n-1}-1$. If $n=1$, this forces $m=0$, i.e. $a = 0$, contradicting $a \ge 2$. So $n > 1$, but then $1 = 3^{n-1} - 3m^2 = 3(3^{n-2} - m^2)\implies 3 \mid 1$. Contradiction either way, so there is no solution!
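A quick brute-force sanity check (a sketch, independent of the proof; the cutoff of 40 is arbitrary): test whether $3^n-3$ is ever a perfect square $\geq 4$ for small $n$.

from math import isqrt

solutions = []
for n in range(1, 41):
    a2 = 3 ** n - 3          # a^2 would have to equal 3^n - 3
    a = isqrt(a2)
    if a >= 2 and a * a == a2:
        solutions.append((a, n))
print(solutions)  # prints [] -- no solutions with n <= 40, matching the proof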
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 4273098, "subset_name": null }
TITLE: Does the infinite power tower $x^{x^{x^{x^{x^{x^{x^{.{^{.^{.}}}}}}}}}}$ converge for $x < e^{-e}$? QUESTION [0 upvotes]: As per the answer at https://math.stackexchange.com/a/573040/23890, the infinite power tower $x^{x^{x^{x^{x^{x^{x^{.{^{.^{.}}}}}}}}}}$ converges if and only if $ x \in [e^{-e}, e^\frac{1}{e} ] $. Is $ e^{-e} = 0.065988... $ really the lower bound for $ x $ for this power tower to converge? I wrote a Python program to check how this power tower behaves for $ x < e^{-e} $, for example, $ x = 0.01 $. It seems for $ x = 0.01 $, the power tower converges to $ 0.941... $.

from math import *

def power_tower_iterations(x, iterations):
    result = 1
    for i in range(iterations):
        result = x ** result
    return result

def power_tower(x):
    for iterations in [10, 100, 1000, 10000, 100000, 1000000]:
        print(power_tower_iterations(x, iterations))

power_tower(0.01)

Output:

0.9415490240371601
0.9414883685756696
0.9414883685756696
0.9414883685756696
0.9414883685756696
0.9414883685756696

So does the power tower converge for $ x < e^{-e} $ or does it not? If it does not, what error have I made above that led me to the false conclusion that the power tower converged to $0.941...$ for $x = 0.01$? REPLY [1 votes]: Hint: a nice answer is given here by @Simply. For $0<x<e^{-e}$, the tower diverges. Note that for such small values of $x$, we get $t<x^{x^t}$ when $t>y$ and $t>x^{x^t}$ when $t<y$, where $y=x^y$. In other words, adding two more powers of $x$ ends up pushing the iterate farther and farther away from $y$ instead of closer, so the even- and odd-indexed iterates approach two different limits (tending to $1$ and $0$ respectively as $x\to 0^+$). In particular, for $0<x<e^{-e}$ the limit of the odd iterates is not the same as the limit of the even iterates, as @Rob Bland pointed out in the comments. That is also the error in your experiment: every iteration count you print ($10, 100, 1000, \ldots$) is even, so your code only ever samples the even-indexed subsequence, which does converge (to $0.9414\ldots$), while the full sequence of iterates oscillates.
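To make both subsequences visible, it suffices to print consecutive iteration counts instead of only even ones — a minimal variation of the question's script (the iteration counts here are arbitrary):

x = 0.01
result = 1.0
for i in range(1, 2001):
    result = x ** result
    if i > 1994:            # show the last few iterates, odd and even i alike
        print(i, result)

# odd i settle near 0.01311..., even i near 0.94148... -- two different
# limits, so the tower oscillates (diverges) for this x < e^{-e}.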
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3695694, "subset_name": null }
TITLE: Definition of $(\exists_1x)\mathscr B(x)$ QUESTION [1 upvotes]: This question is from Introduction to Mathematical Logic by Elliot Mendelson , forth edition , page 99 about the definition of $(\exists_1x)\mathscr B(x)$. In the book , the definition is written like this: $(\exists_1x)\mathscr B(x)$ as for $$(\exists x)\mathscr B(x) \land (\forall x)(\forall y)(\mathscr B(x) \land \mathscr B(y) \to x = y)$$ In this definition, the new variable $y$ is assumed to be the first variable that does not occur in $\mathscr B(x)$. A similar convention is to be made in all other definitions where new variables are introduced. My question is , what do they exactly mean by "...assumed to be the first variable..." ? REPLY [2 votes]: The important point is that $y$ should not clash with the free variables of $\mathscr{B}(x)$. Requiring $y$ to be the first variable in the sequence of all variables $x_1, x_2, \ldots$ that does not occur in $\mathscr{B}$ is just a definite way of ensuring that $y$ does not clash. To see why this is necessary, take $\mathscr{B}(x) \equiv x > y$ so that, in $(\Bbb{Q}, >)$, for example, $(\exists_1 x) \mathscr{B}(x)$ is false. Then if we allow $y$ in the definition to clash with the free variable $y$ of $\mathscr{B}(x)$, the definition would give us: $$(\exists x)x > y \land (\forall x)(\forall y)(x > y \land y > y \to x = y)$$ which, unlike $(\exists_1 x) \mathscr{B}(x)$ is true, because $y > y$ is always false. If we avoid the clash, by picking a variable $z$ that is different from $x$ and $y$, we get the correct equivalent of $(\exists_1 x) \mathscr{B}(x)$: $$(\exists x)x > y \land (\forall x)(\forall z)(x > y \land z > y \to x = z)$$ which is false, as we would expect.
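To see the clash concretely, here is a small sanity check (mine, not Mendelson's) that evaluates both formulas in the finite model $(\{0,1,2\}, >)$ with the free variable $y=0$: since both $1>0$ and $2>0$, the statement $(\exists_1 x)\mathscr B(x)$ should come out false, and only the clash-free version does.

D = [0, 1, 2]   # a small model: the domain {0, 1, 2} with the usual order
y = 0           # value of the free variable y in B(x) := x > y

def implies(p, q):
    return (not p) or q

# Correct definition: the bound variable z is fresh, so the free y survives.
correct = any(x > y for x in D) and all(
    implies(x > y and z > y, x == z) for x in D for z in D
)

# Clashed definition: the bound variable is also named y, capturing the free y;
# B(y) then reads y > y, which is always false, so the clause holds vacuously.
clashed = any(x > y for x in D) and all(
    implies(x > yb and yb > yb, x == yb) for x in D for yb in D
)

print(correct, clashed)   # False True: the clash makes a false statement "true"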
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 4160168, "subset_name": null }
TITLE: Is this statement about roots of polynomials well-known? QUESTION [7 upvotes]: Here is the statement : Let $P$ be a non constant polynomial of $\mathbb{C}[X]$ which has at least two distinct roots. If $P''$ divides $P$ hence all the roots of $P$ belong to the same (real) line. The ingredients used are : $P$ has in fact only simple roots, the Gauss-Lucas theorem and the extreme points of a convex set. It seems slightly similar to Jensen's theorem. I was wondering if it was a well-known fact about the roots of a non constant complex polynomial ? If so, are there any references about it ? Thanks in advance ! REPLY [3 votes]: My answer here proves that any polynomial of degree $\,n\ge 3\,$ which satisfies $\,P''(x) \mid P(x)\,$ and has at least two distinct roots must have all $\,n\,$ roots distinct. Let $\,P(x) = \lambda(x-a)(x-b)P''(x)\,$ with $\,a \ne b\,$ and $\,c_i \big|_{i=1,2,\dots,n-2}\,$ the roots of $\,P''\,$, all distinct and different from $\,a, b\,$. If $\,d_j \big|_{j=1,2,\dots,k \,\le\,n-2}\,$ are the extreme points of the convex hull $\,C_{P''} =\text{Conv}(c_i)\,$ then $\,\text{Conv}(d_i) = C_{P''}\,$ by the Krein–Milman theorem. Let $\,C_{P'}\,$ be the convex hull of the roots of $\,P'\,$, and $\,C_P = \text{Conv}(a, b, c_i)\,$, then $\,C_{P''} \subseteq C_{P'} \subseteq C_P\,$ by the Gauss-Lucas theorem. Root $\,d_1\,$ of $\,P'' \mid P\,$ is in both $\,C_P\,$ and $\,C_{P''}\,$, so it must be in $\,C_{P'}\,$. However, $\,d_1\,$ cannot be an extreme point of $\,C_{P'}\,$ since those are among the roots of $\,P'\,$, and $\,P'\,$ has no roots in common with $\,P\,$ because all roots are simple. Therefore $\,d_1\,$ cannot be an extreme point of $\,C_P\,$, either, since an extreme point of a convex set is an extreme point of any convex subset that contains it. By symmetry, none of the $\,d_j\,$ can be extreme points of $\,C_P\,$, either. Since $\,C_P\,$ contains $\,a,b\,$ and, by convexity, the entire segment $\,\overline{ab}\,$, it follows that the only extreme points are $\,a,b\,$, and all other roots $\,c_i\,$ must lie within segment $\,\overline{ab}\,$.
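As a quick illustration of the statement (a sanity check with sympy, not part of the argument above): for $P = x^3 - x$ we get $P'' = 6x$, which divides $P$, and the roots $-1, 0, 1$ lie on the real axis; for $P = x^3 + x$ the roots $0, \pm i$ lie on the imaginary axis.

import sympy as sp

x = sp.symbols('x')
for P in (x**3 - x, x**3 + x):
    P2 = sp.diff(P, x, 2)                      # P''
    quotient, remainder = sp.div(P, P2, x)     # P = quotient * P'' + remainder
    print(P, "| P'' divides P:", remainder == 0, "| roots:", sp.solve(P, x))

# Output: remainder 0 in both cases, with roots [-1, 0, 1] and [0, -I, I] --
# each time the roots of P are collinear, in line with the statement.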
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 7, "question_id": 4458632, "subset_name": null }
TITLE: How Wilson's theorem implies the existence of an infinitude of composite numbers of the form $n! + 1$? QUESTION [1 upvotes]: This is a paragraph in David M. Burton, "Elementary Number Theory", seventh edition: But I do not understand: 1- How does Wilson's theorem imply the existence of an infinitude of composite numbers of the form $n! + 1$? This is the statement of Wilson's theorem that I know: $p$ is a prime iff $(p-1)! \equiv -1 \pmod p$. Could anyone explain this for me please? 2- I do not understand the second statement in the paragraph, especially in comparison to the first statement. Do they mean that the form $n! +1$ can give us both prime and composite numbers? REPLY [1 votes]: 1) This follows if you also use Euclid's result that there are infinitely many primes: for each prime $p\ge 5$, Wilson's theorem gives $p \mid (p-1)!+1$, and since $(p-1)!+1 > p$, the number $(p-1)!+1$ is composite. 2) Indeed, this implies that while we are sure there are infinitely many $n$ such that $n!+1$ is composite, we don't know if there are infinitely many $n$ for which it's prime. In other words, if $n+1$ is not prime, this doesn't necessarily imply that $n!+1$ is prime.
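A short computational illustration (a sketch using trial division; the range is arbitrary, and this is not from Burton): whenever $n+1$ is a prime $\ge 5$, Wilson's theorem forces $(n+1) \mid n!+1$ and hence compositeness, while for other $n$ either outcome occurs — e.g. $n=3$ gives the prime $7$, and $n=5$ gives the composite $121$.

from math import factorial

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(2, 13):
    v = factorial(n) + 1
    print(n, v,
          "prime" if is_prime(v) else "composite",
          "| n+1 prime:", is_prime(n + 1),
          "| (n+1) divides n!+1:", v % (n + 1) == 0)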
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 3237764, "subset_name": null }
\begin{document} \allowdisplaybreaks \renewcommand{\thefootnote}{$\star$} \renewcommand{\PaperNumber}{059} \FirstPageHeading \ShortArticleName{Projections of Singular Vectors of Verma Modules} \ArticleName{Projections of Singular Vectors of Verma Modules\\ over Rank 2 Kac--Moody Lie Algebras\footnote{This paper is a contribution to the Special Issue on Kac--Moody Algebras and Applications. The full collection is available at \href{http://www.emis.de/journals/SIGMA/Kac-Moody_algebras.html}{http://www.emis.de/journals/SIGMA/Kac-Moody{\_}algebras.html}}} \Author{Dmitry FUCHS and Constance WILMARTH} \AuthorNameForHeading{D. Fuchs and C. Wilmarth} \Address{Department of Mathematics, University of California, One Shields Ave., Davis CA 95616, USA} \Email{\href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}} \ArticleDates{Received June 29, 2008, in f\/inal form August 24, 2008; Published online August 27, 2008} \Abstract{We prove an explicit formula for a projection of singular vectors in the Verma module over a rank~2 Kac--Moody Lie algebra onto the universal enveloping algebra of the Heisenberg Lie algebra and of $sl_{2}$ (Theorem~\ref{theorem3}). The formula is derived from a more general but less explicit formula due to Feigin, Fuchs and Malikov [{\it Funct. Anal. Appl.} {\bf 20} (1986), no.~2, 103--113]. In the simpler case of $ \mathcal{A}_{1}^{1}$ the formula was obtained in [Fuchs D., {\it Funct. Anal. Appl.} {\bf 23} (1989), no.~2, 154--156].} \Keywords{Kac--Moody algebras; Verma modules; singular vectors} \Classification{17B67} \section{Introduction}\label{section1} Let $ \mathcal{G}(A)$ be the complex Kac--Moody Lie algebra corresponding to an $n \times n$ symmetrizable Cartan matrix $A$, let $N_{-}, H, N_{ + } \subset\mathcal{G}(A)$ be subalgebras generated by the groups of standard generators: $f_{ i }$, $h_i$, $ e_{ i }$, $ i = 1, \ldots, n$. Then $ \mathcal{G}(A) = N_{ - } \oplus H \oplus N_{ + }$ (as a vector space), the Lie algebras $N_{ - }$ and $N_{ + }$ are virtually nilpotent, and $H$ is commutative. Let $ \la: H \longrightarrow \mathbb{C}$ be a linear functional and let $M$ be a $ \mathcal{G}(A)$-module. A non-zero vector $w \in M$ is called a singular vector of type $ \la$ if $gw = 0$ for $g \in N_{ + }$ and $hw = \la (h) w$ for $h \in H.$ Let \begin{gather*} J_{ \la } = \{ \alpha \in \mathcal{U}(N_{ - } ) | \, \exists \textrm{ a } \mathcal{G}(A)\textrm{-module }M \textrm{ and a singular vector } w \in M\\ \phantom{J_{ \la } =\{}{}\textrm{ of type } \la \textrm{ such that } \alpha w = 0 \}. \end{gather*} Obviously $J_{ \la }$ is a left ideal of $ \mathcal{U}(N_{ - })$. It has a description in terms of Verma modules $M( \la ).$ Let $I_{ \la }$ be a one-dimensional $ (H \oplus N_{ + })$-module with $hu = \la (h)u$, $gu = 0$ for $g \in N_{ + }$ and arbitrary $u \in I_{ \la }.$ The Verma module $M( \la )$ is def\/ined as the $ \mathcal{G}(A)$-module induced by $I_{ \la }$; as a~$ \mathcal{U}(N_{ - })$-module, $M( \la )$ is a free module with one generator $u$; this ``vacuum vector'' $u$ is, with respect to the $ \mathcal{G}(A)$-module structure, a singular vector of type $ \la.$ It is easy to see that $M( \la )$ has a unique maximal proper submodule and this submodule $L( \la )$ is, actually, $J_{ \la }u.$ This observation demonstrates the fundamental importance of the following two problems. \begin{enumerate}\itemsep=0pt \item For which $ \la $ is the module $M( \la )$ reducible, that is, $L( \la ) \ne 0 $? 
\item If $M( \la )$ is reducible, then what are generators of $L( \la )$ (equivalently, what are generators of $J_{ \la }$)? \end{enumerate} Problem~1 is solved, in a very exhaustive way, by Kac and Kazhdan~\cite{kk:struct}. They describe a~subset $ \mathcal{S} \subseteq H^ {* }$ such that the module $M( \la )$ is reducible if and only if $ \la \in \mathcal{S}$; this subset is a~countable union of hyperplanes. (See a precise statement in Section~\ref{section2} below.) Actually, $\lambda\in\mathcal{S}$ if and only if $M(\lambda)$ contains a singular vector not proportional to $u$. A formula for such a singular vector in a wide variety of cases is given in the work of Feigin, Fuchs and Malikov~\cite{Feigin}. This formula is short and simple, but it involves the generators $f_{ i }$ raised to complex exponents; when reduced to the classical basis of $ \mathcal{U}(N_{ - } )$ the formula becomes very complicated (as shown in~\cite{Feigin} in the example $ \mathcal{G}(A) = sl_{n}).$ There remains a hope that the projection of these singular vectors onto reasonable quotients of $ \mathcal{U}(N_{ - })$ will unveil formulas that possess a more intelligible algebraic meaning, and this was shown to be the case by Fuchs with the projection over the algebra $ \mathcal{A}_{1}^{1}$ into $ \mathcal{U}(sl_{2})$ and $\mathcal{U}(\mathcal{H}),$ where $\mathcal H$ is the Heisenberg algebra~\cite{Fuchs}, work which took its inspiration from the earlier investigation of Verma modules over the Virasoro algebra by Feigin and Fuchs~\cite{FeigF}. In this note we extend these results by providing projections to $\mathcal{U}(sl_{2})$ and $\mathcal{U}(\mathcal{H})$ of the singular vectors over the family of Kac--Moody Lie algebras $ \mathcal{G}(A)$ of rank 2 (see Theorem~\ref{theorem3} in Section~\ref{section4} and a discussion in Section~\ref{section5}). As in \cite{Fuchs} and \cite{FeigF}, our formulas express the result in the form of an explicit product of polynomials of degree~2 in ${\mathcal U}({\mathcal H})$ and ${\mathcal U}(sl_2)$. It is unlikely that this work can be extended to algebras of larger rank. \section{Preliminaries}\label{section2} Let $A = (a_{ij} )$ be an integral $n \times n$ matrix with $a_{ij}=2$ for $i=j$ and $a_{ij}\le0$ for $i\ne j$. We assume that $ A $ is symmetrizable, that is, $DA = A^{\rm sym},$ where $D = [d_1, \ldots , d_{n} ]$ is diagonal, $d_{i} \ne 0,$ and $ A^{\rm sym} $ is symmetric. To $A$ is associated a \emph{Kac--Moody} Lie algebra $\mathcal{G}(A)$ def\/ined in the following way. $\mathcal{G}(A)$ is a complex Lie algebra with the generators $e_i$, $h_i$, $f_i$, $i=1,\dots,n$ and the relations $[h_i,h_j]=0$, $[h_i,e_j]=a_{ij}e_j$, $[h_i,f_j]=-a_{ij}f_j$, $[e_i,f_j]=\delta_{ij}h_i$, $(\ad e_i)^{-a_{ij}+1}e_j=0$, $(\ad f_i)^{-a_{ij}+1}f_j=0$. There is a vector space direct sum decomposition $\mathcal{G}(A)=N_-\oplus H\oplus N_+$ where $N_-,H,N_+\subset\mathcal{G}(A)$ are subalgebras generated separately by $\{f_i\}$, $\{h_i\}$, $\{e_i\}$. Actually, $H$~is a commutative Lie algebra with the basis $\{h_i\}$. We introduce in $H$ a (possibly, degenerate) inner product by the formula $\langle h_i,h_j\rangle=d_ia_{ij}$. Fix an auxiliary $n$-dimensional complex vector space $T$ with a basis $\alpha_1,\dots,\alpha_n$; let $\Gamma$ denote a lattice generated by $\alpha_1,\dots,\alpha_n$, and let $\Gamma_+$ be the intersection of $\Gamma$ with the (closed) positive octant.
For an integral linear combination $\alpha=\sum_{i=1}^nm_i\alpha_i$, denote by $G_\alpha$ the subspace of $\mathcal{G}(A)$ spanned by monomials in $e_i$, $h_i$, $f_i$ such that for every $i$, the dif\/ference between the number of occurrences of $e_i$ and $f_i$ equals $m_i$. If $\alpha\ne 0$ and $G_\alpha\ne0$, then $\alpha$ is called a {\em root} of $\mathcal{G}(A)$. Every root is a positive, or a negative, integral linear combination of $\alpha_i$; accordingly the root is called positive or negative (and we write $\alpha>0$ or $\alpha<0$). Obviously, $N_+=\oplus_{\alpha>0}G_\alpha$, $N_-=\oplus_{\alpha<0}G_\alpha$. Remark that Verma modules have a natural grading by the semigroup $\Gamma_+$. For $ \alpha = \sum k_{i} \alpha_{i},$ let $h_\alpha = \sum k_{i} d_{i}^{ -1} h_{i}.$ We can carry the inner product from $H$ to $T$ using the formula $\langle\alpha,\beta\rangle=\langle h_\alpha,h_\beta\rangle$. If $\langle\alpha,\alpha\rangle\ne0$, then we def\/ine a ref\/lection $s_\alpha\colon H^\ast\to H^\ast$ by the formula \[ (s_\alpha\lambda)(h)=\lambda(h)-\frac{2\lambda(h_\alpha)}{\langle\alpha,\alpha\rangle}\langle h_\alpha,h\rangle. \] The similar formula \[ s_\alpha\beta=\beta-\frac{2\langle\alpha,\beta\rangle}{\langle\alpha,\alpha\rangle}\alpha \] def\/ines a ref\/lection $s_\alpha\colon T\to T$. ``Elementary ref\/lections'' $s_i=s_{\alpha_i}$ generate the action of the {\em Weyl group} $W(A)$ of $\mathcal{G}(A)$ in $H^\ast$ and in $H$. In $H^\ast$ we consider, besides the ref\/lections $s_\alpha$ the ref\/lections $s_\alpha^\rho$, $s_\alpha^\rho(\la)=s_\alpha(\lambda+\rho)-\rho$ where $\rho\in H^\ast$ is def\/ined by the formula $\rho(h_i)=1$, $1\le i\le n$. The Kac--Kazhdan criterion for reducibility of Verma modules $M( \la )$, mentioned above, has precise statement: \begin{theorem}\label{theorem1} $M( \la )$ is reducible if and only if for some positive root $\alpha$ and some positive integer~$m$, \begin{equation} ( \la + \rho )(h_{ \alpha }) -\frac{m}{2} \langle\alpha, \alpha\rangle=0. \label{eq1} \end{equation} Moreover, if $\lambda$ satisfies this equation for a unique pair $\alpha$, $m$, then all non-trivial singular vectors of $M(\lambda)$ are contained in $M(\lambda)_{m\alpha}$. \end{theorem} For $m$ and $ \alpha$ satisfying this criterion, Feigin, Fuchs and Malikov \cite{Feigin} give a description for the singular vector of degree $m\alpha$ in $M( \la ).$ In the case when $\alpha$ is a real root, that is, $\langle\alpha,\alpha\rangle\ne0$, their description is as follows. Let $s_\alpha=s_{i_N}\cdots s_{i_1}$ be a presentation of $s_\alpha\in W(A)$ as a product of elementary ref\/lections. For $\lambda\in H^\ast$, set $\lambda_0=\lambda$, $\lambda_j=s_{i_j}(\la_{j-1}+\rho)-\rho$ for $0<j\le N$. Obviously, the vector $\overrightarrow{\la_{j-1}\la_j}$ is collinear to $\alpha_{i_j}$ (or, rather, to $\langle\alpha_{i_j},\ \rangle$); let $\overrightarrow{\la_{j-1}\la_j}=\gamma_j$. Let $\alpha$ satisf\/ies (for some $m$) the equation~\eqref{eq1}. Then \begin{equation*} F(s_\alpha;\lambda)u\qquad \mbox{where}\quad F(s_\alpha;\la)=f_{i_N}^{-\gamma_N}\cdots f_{i_1}^{-\gamma_1} \end{equation*} is a singular vector in $M(\lambda)_{m\alpha}.$ Notice that the exponents in the last formula are, in general, complex numbers. It is explained in \cite{Feigin} why the expression for $F(s_\alpha;\la)$ still makes sense. 
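For orientation, note that the simplest instance of this formula is classical: if $s_\alpha=s_i$ is a single elementary ref\/lection, then $N=1$ and $\gamma_1=\la_1-\la_0=s_i(\la+\rho)-(\la+\rho)=-(\la(h_i)+1)\alpha_i$, so the formula returns $F(s_{\alpha_i};\la)=f_i^{\la(h_i)+1}$. When $m=\la(h_i)+1$ is a positive integer, \eqref{eq1} holds for $\alpha=\alpha_i$ with this $m$, and one recovers the familiar singular vector $f_i^{m}u\in M(\la)$.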
\section{The case of rank two}\label{section3} In the case $n=2$, a (symmetrizable) Cartan matrix is given by \[ A =\left( \begin{array}{cc} 2 & {-q} \\ {-p} & 2 \end{array} \right), \] where $p>0$, $q>0$. Since for $pq\le3$, the algebra $\mathcal{G}(A)$ is f\/inite-dimensional, we consider below the case when $pq\ge4$. Simple calculations show that $s_{\alpha_{1}}(\alpha_1) = -\alpha_1$, $s_{\alpha_{1}}(\alpha_2) = p\alpha_1 + \alpha_2$, $s_{\alpha_2}(\alpha_1) = \alpha_1+ q \alpha_2$, and $ s_{\alpha_2}(\alpha_2) = -\alpha_2$, and it is easy to check that the orbit of the root $(1,0)$ lies in the curve $qx^2 -pqxy + py^2= q$ and the orbit of $(0,1)$ lies in $qx^2 - pqxy + py^2 = p.$ (If $pq>4$, these two curves are hyperbolas sharing asymptotes; in the (degenerate) case of $pq=4$, they are pairs of parallel lines with a slope of $\frac q2$.) Def\/ine a sequence recursively by $a_0 = 0$, $a_1 = 1,$ and $a_{n} = sa_{n-1} - a_{n-2}$ where $s^2 = pq.$ Then for $\sigma^2 = \frac{q}{p}$ we can calculate \begin{gather*} (1,0) = (a_1, \sigma a_0), \\ s_2((1,0)) = (1,q) = (a_1, \sigma a_2), \\ s_1s_2((1,0)) = (pq-1,q)= (a_3, \sigma a_2), \\ s_2 s_1 s_2((1,0)) = (pq-1, q(pq-2)) = (a_3, \sigma a_4). \end{gather*} More generally, the following is true. \begin{proposition}\label{proposition1} The orbit of $(1,0)$ consists of points \[ \cdots (a_{2n-1}, \sigma a_{2n-2}), (a_{2n-1}, \sigma a_{2n}), (a_{2n+1}, \sigma a_{2n}), (a_{2n+1}, \sigma a_{2n+2}) \cdots \] determined by the sequence $ \{a_{n} \}$ above; while the orbit of $(0,1)$ consists of points \[ \cdots(\sigma^{-1} a_{2n-2}, a_{2n-1}), (\sigma^{-1} a_{2n}, a_{2n-1}), (\sigma^{-1} a_{2n}, a_{2n+1}), (\sigma^{-1} a_{2n+2}, a_{2n+1}) \cdots \] for $n \ge 1.$ \end{proposition} \begin{proof} The proof is by induction on $n$. \end{proof} Obtaining explicit coordinates for the real orbits is straightforward in the af\/f\/ine case, because of the simpler geometry. For $pq > 4$ an explicit description of the sequence $ \{ a_{n} \}$ is possible using an argument familiar to Fibonacci enthusiasts: \begin{proposition}\label{proposition2} The $n$th term is \begin{gather*} a_{n} = \frac {1}{ \sqrt {pq - 4}} \left( \frac{ \sqrt {pq} + \sqrt {pq - 4}}{2} \,\right) ^{n} - \frac{1}{ \sqrt{pq - 4}} \left( \frac{ \sqrt{pq} - \sqrt{pq - 4}}{2} \,\right) ^{n}. \end{gather*} \end{proposition} \begin{proof} Direct computation. \end{proof} With these real roots now labelled by the sequence $ \{ a_{n} \} ,$ we present the singular vectors indexed by them in the Verma modules over $\mathcal{G}(A)$. Write $ \lambda = x \lambda_1 + y \lambda_2$ where $ \lambda_{i} (h_{j}) = \delta_{ij},$ so that $ \lambda (h_{1}) = x$ and $ \lambda (h_{2}) = y.$ Let us def\/ine the numbers $\Ga^k_1$, $\Ga^k_2$ by the formulas: \begin{gather*} \Ga^{2m}_1 = {q\sum_{i=0}^{m-1} (-1)^{i} {2m-i-1 \choose 2m-2i-1} (pq)^{m-i-1}}, \\ \Ga^{2m}_2= {\sum_{i=0}^{m-1} (-1)^{i} {2(m-1)-i \choose 2(m-1)-2i} (pq)^{m-i-1} }, \\ \Ga^{2m+1}_1 = {\sum_{i=0}^{m} (-1)^{i} {2m-i \choose 2m-2i} (pq)^{m-i}},\\ \Ga^{2m+1}_2 ={p\sum_{i=0}^{m-1} (-1)^{i} {2m-i-1 \choose 2m-2i-1} (pq)^{m-i-1}}. \end{gather*} (Note that $\Ga^0_1=\Ga^0_2=0$.) The formula from \cite{Feigin} takes in our case the following form. \begin{theorem}\label{theorem2} For the algebra $ \mathcal{G} ( A)$ with Cartan matrix \begin{gather*} A = \left( \begin{array}{cc} 2 & -p \\ -q & 2 \end{array} \right) \end{gather*} the singular vectors are as follows: 1.
For the root $\alpha=(a_{2n-1},\sigma a_{2n-2}),$ with $m \in \mathbb{N},$ and $ t \in \mathbb{C}$ arbitrary, \begin{gather*} F(s_{ \alpha}; \lambda)= f_1^{ \frac{ \Ga_1^{4n - 3}m\strut}{a_{2n-1}} + \Ga_2^{2n - 1}t}f_2^{ \frac{ \Ga_1^{4n - 4}m\strut}{a_{2n-1}} + \Ga_2^{2n - 2}t} f_1^{ \frac{ \Ga_1^{4n - 5}m\strut}{a_{2n-1}} + \Ga_2^{2n - 3}t}\\ \phantom{F(s_{ \alpha}; \lambda)=}{} \cdots f_2^{ \frac{ \Ga_1^{2n}m\strut}{a_{2n-1}} + \Ga_2^2t} f_1^{m} f_2^{ \frac{ \Ga_1^{2n-2}m\strut}{a_{2n-1}} - \Ga_2^2t} \cdots f_2^{ \frac{ \Ga_1^{2}m\strut}{a_{2n-1}} - \Ga_2^{2n-2}t} f_1^{ \frac{ \Ga_1^{1}m\strut}{a_{2n-1}} - \Ga_2^{2n-1}t} \end{gather*} and the vector $F(s_\alpha,\lambda)u$ is singular in $M(\la)= {M\left( \frac{m - \Ga_1^{ 2n - 1 }}{ \Ga_1^{2n-1}} - \Ga_2^{2n-1}t , \Ga_1^{2n-1} t - 1\right)}$. 2. For $ \alpha = (a_{2n-1}, \sigma a_{2n})$,{\samepage \begin{gather*} F(s_{ \alpha} ; \lambda) = f_2^{ \frac{ \Ga_2^{4n}m\strut}{\sigma^{-1} a_{2n}} + \Ga_2^{2n}t}f_1^{ \frac{ \Ga_2^{4n-1}m\strut}{\sigma^{-1}a_{2n}} + \Ga_2^{2n-1}t} f_2^{ \frac{ \Ga_2^{4n-2}m\strut}{\sigma ^{-1}a_{2n}} + \Ga_2^{2n-2}t}\\ \phantom{F(s_{ \alpha} ; \lambda) =}{} \cdots f_2^{ \frac{ \Ga_2^{2n + 2}m\strut}{\sigma^{ -1} a_{2n}} + \Ga_2^{2}t}f_1^{ m} f_2^{ \frac{ \Ga_2^{2n}m\strut}{\sigma^{-1}a_{2n}} - \Ga_2^{2}t} \cdots f_1^{ \frac{ \Ga_2^3m\strut}{\sigma^{-1}a_{2n}} - \Ga_2^{2n-1}t}f_2^{ \frac { \Ga_2^2m\strut}{\sigma^{-1}a_{2n}} - \Ga_2^{2n}t} \end{gather*} and the vector $F(s_\alpha,\lambda)u$ is singular in $M(\la)= {M \left( \Ga_2^{2n} t - 1, \frac{m - \Ga_2^{2n}}{ \Ga_2^{2n}} - \Ga_1^{2n} t\right)}$.} 3. For $ \alpha = (\sigma^{-1}a_{2n-2}, a_{2n-1})$, \begin{gather*} F( s_{ \alpha}, \lambda)= f_2^{ \frac{ \Ga_2^{4n-2}m\strut}{a_{2n-1}} + \Ga_1^{2n-2}t}f_1^{ \frac{ \Ga_2^{4n-3}m\strut}{a_{2n-1}} + \Ga_1^{2n-3}t} f_2^{ \frac{ \Ga_2^{4n-4}m\strut}{a_{2n-1}} + \Ga_1^{2n-4}t}\\ \phantom{F( s_{ \alpha}, \lambda)= }{} \cdots f_1^{ \frac{ \Ga_2^{ 2n + 1 }m\strut}{a_{2n-1}} + \Ga_1^{1}t}f_2^{ m} f_1^{ \frac{ \Ga_2^{ 2n - 1}m\strut}{a_{2n-1}} - \Ga_1^{1}t} \cdots f_1^{ \frac{ \Ga_2^3m\strut}{a_{2n-1}} - \Ga_1^{2n-3}t}f_2^{ \frac{ \Ga_2^2m\strut}{a_{2n-1}} - \Ga_1^{2n-2}t} \end{gather*} and the vector $F(s_\alpha,\lambda)u$ is singular in $M(\la)= {M\left( \Ga_2^{2n-1} t - 1, \frac{m - \Ga_2^{ 2n - 1 }}{ \Ga_2^{2n-1}} - \Ga_1^{2n-1} t\right)}.$ 4. For $ \alpha = (\sigma^{-1}a_{2n}, a_{2n-1})$, \begin{gather*} F( s_{ \alpha} ; \lambda) = f_1^{ \frac{ \Ga_1^{4n-1}m\strut}{\sigma a_{2n}} + \Ga_1^{2n-1}t}f_2^{ \frac{ \Ga_1^{4n-2}m\strut}{\sigma a_{2n}} + \Ga_1^{2n - 2}t} f_1^{ \frac{ \Ga_1^{4n-3}m\strut}{\sigma a_{2n}} + \Ga_1^{2n-3}t}\\ \phantom{F( s_{ \alpha} ; \lambda) =}{} \cdots f_1^{ \frac{ \Ga_1^{2n + 1}m\strut}{\sigma a_{2n}} + \Ga_1^{1}t}f_2^{m}f_1^{ \frac{ \Ga_1^{ 2n - 1}m\strut}{\sigma a_{2n}} - \Ga_1^{1}t} \cdots f_2^{ \frac{ \Ga_1^2m\strut}{\sigma a_{2n}} - \Ga_1^{2n-2}t} f_1^{ \frac{ \Ga_1^1m\strut}{\sigma a_{2n}} - \Ga_1^{2n-1}t} \end{gather*} and the vector $F(s_\alpha,\lambda)u$ is singular in $M( \lambda) = M\left( \frac{m - \Ga_1^{2n}}{ \Ga_1^{2n}} - \Ga_2^{2n} t, \Ga_1^{2n} t - 1\right).$ \end{theorem} \begin{proof} It must be checked that the vectors given above actually correspond to the Feigin--Fuchs--Malikov (FFM) procedure for obtaining singular vectors, and also that the Kac--Kazhdan criterion for reducibility is satisf\/ied.
For $ \lambda = x \lambda_1 + y \lambda_2$ and the ref\/lection $s_{ \alpha} = s_{ i_{N}} \cdots s_{ i_{1}} $ (pro\-duct of simple ref\/lections) the algorithm requires successive application of the transformations $s_{1}^{ \rho} := s_1 (\lambda + \rho) - \rho$ and $s_2^{ \rho} := s_2 (\lambda + \rho) - \rho.$ One generates the list \begin{gather*} \la^0 = x \la_1 + y \la_2, \qquad \la^{j} = s_{ i_{ j } } ( \la^{ j - 1 } + \rho ) - \rho \end{gather*} and the auxiliary sequence $ \{ \overrightarrow{ \la^{ j - 1 } \la^{ j }} \}_{ j \ge 1}. $ The algorithm then gives \[ F(s_{ \alpha }; \la ) = f_{ i_{N}}^{ \theta_{N}} \cdots f_{ i_{ 1 }}^{ \theta_{ 1 }}, \] where $ \overrightarrow{ \la^{ j - 1 } \la^{ j }} = - \theta_{ j } \alpha_{ i_{ j }}$ (here $\alpha_{ i_{ j }}$ is the functional $\langle h_{ \alpha_{ i_{ j }}}, \cdot\rangle).$ So we f\/irst need to know the decomposition of $s_{ \alpha } $ into elementary ref\/lections for $ \alpha$ in the orbit of $(1,0)$ or $(0,1).$ Let $S_{i} (m)$ denote the word in $H^{*}$ beginning and ending with $s_{i}$, and containing $m$ $s_{i}$'s. For example, $S_{1}(3) = s_{1} s_{2} s_{1} s_{2} s_{1}.$ \begin{lemma}\label{lemma1} For real $ \alpha$ as above, $s_{\alpha}$ is the word \begin{gather*} (a_{2n-1}, \sigma a_{2n-2}) \longleftrightarrow S_1(2n-1), \\ (a_{2n-1}, \sigma a_{2n}) \longleftrightarrow S_2 (2n), \\ (\sigma^{-1}a_{2n}, a_{2n-1}) \longleftrightarrow S_1(2n), \\ (\sigma^{-1}a_{2n-2}, a_{2n-1}) \longleftrightarrow S_2 (2n-1). \end{gather*} \end{lemma} \begin{proof} This is an easy induction on $n.$ \end{proof} The coef\/f\/icients of collinearity $ \theta_{ j }$ have the following description. \begin{lemma}\label{lemma2} For $ \la^{k} = \La_1^{k} \la_1 + \La_2^{k} \la_2$ we have \begin{gather*} (i) \ \overrightarrow{ \la^{2n+1} \la^{2n+2}}= -( \La_2^{2n+1}+1)\langle h_{ \alpha_2}, \cdot\rangle,\\ (ii) \ \overrightarrow{ \la^{2n} \la^{2n+1}} = -( \La_1^{2n}+1)\langle h_{ \alpha_1}, \cdot\rangle. \end{gather*} \end{lemma} \begin{proof} One easily computes that $\langle h_{ \alpha_1}, \cdot\rangle = 2 \lambda_1 - q \lambda_2$, $\langle h_{ \alpha_2}, \cdot\rangle = -p \lambda_1 + 2 \lambda_2,$ and further that \begin{gather*} s_1^{ \rho} (x \la_1 + y \la_2) = (-x-2) \la_1 + (y + q(x+1)) \la_2, \\ s_2^{ \rho} (x \la_1 + y \la_2) = (x + p(y+1)) \la_1 + (-y-2) \la_2. \end{gather*} Then for $(i)$ it is verif\/ied that $\overrightarrow{ \la^1 \la^2} = -(y+qx+q+1)(-p \la_1 + 2 \la_2) = -( \La_2^1 +1)\langle h_{ \alpha_2}, \cdot\rangle.$ For $n>0,$ \begin{gather*} \la^{2n+2} = s_2^{ \rho} (s_1^{ \rho} ( \la^{2n})) =s_2^{ \rho}( (- \La_1^{2n}-2) \la_1 + ( \La_2^{2n} + q \La_1^{2n}+q) \la_2) \\ \phantom{\la^{2n+2}}{} =(- \La_1^{2n}-2+p \La_2^{2n}+pq \La_1^{2n}+pq+p) \la_1+ (- \La_2^{2n} - q \La_1^{2n} - q -2) \la_2 \end{gather*} while \begin{gather*} \la^{2n+1} = s_1^{ \rho}( \la^{2n})=(- \La_1^{2n}-2) \la_1+ ( \La_2^{2n}+q \La_1^{2n} +q) \la_2. \end{gather*} So $ \overrightarrow{ \la^{2n+1} \la^{2n+2}}= -( \La_2^{2n} + q \La_1^{2n} +q+1)( -p \la_1+2 \la_2). $ Since $ \la^{2n+1}= s_1^{ \rho}( \la^{2n})=( - \La_1^{2n}-2) \la_1+( \La_2^{2n}+q \La_1^{2n} +q) \la_2,$ we have $ - \La_2^{2n+1}-1= -( \La_2^{2n}+q \La_1^{2n}+q+1)$ as desired. The argument for~$(ii)$ is similar. \end{proof} Let us put $\Ga^k=\Ga^k_1x+\Ga^k_2y$. We will also need \begin{lemma}\label{lemma3} \begin{gather*} (i) \ \Ga^{2n+1} = p \Ga^{2n}- \Ga^{2n-1},\\ (ii) \ \Ga^{2n+2}=q \Ga^{2n+1}- \Ga^{2n}. \end{gather*} \end{lemma} \begin{proof} These can be verif\/ied directly.
\end{proof} We are now in a position to show by induction that the FFM-exponents correspond to the~$ \Ga^{k}$ in the statement of Theorem~\ref{theorem2}. It suf\/f\/ices by the second lemma to show that \begin{gather*} \Ga^{2n+1} = \La_1^{2n} + 1 \qquad \mbox{and}\qquad \Ga^{2n+2} = \La_2^{2n+1} + 1. \end{gather*} Making a change of variable $x+1 \rightarrow x$ and $y+1 \rightarrow y$ one can calculate that \begin{gather*} \overrightarrow{\la^{0} \la^{1}} = -x\langle h_{\alpha_1}, \cdot\rangle, \\ \overrightarrow{ \la^{1} \la^{2}} = -(qx+y)\langle h_{ \alpha_2}, \cdot\rangle, \\ \overrightarrow{ \la^{2} \la^{3}} = -((pq-1)x + py)\langle h_{ \alpha_{1}}, \cdot\rangle, \\ \dots\dots\dots \dots\dots\dots\dots\dots\dots \end{gather*} The FFM exponents are just the coef\/f\/icients of these functionals with reversed sign. For the base case of our induction, $n=0,$ observe that $ \la^{0} = (x-1) \la_1 + (y-1) \la_2,$ hence $ \La_1^{0} + 1=x= \Ga^{1}$ (using the binomial def\/inition of $ \Ga^{1}),$ while $ \La_2^{1} +1 =y+qx$ since $ s_1^{ \rho} ( \la^{0} ) = (-x-1) \la_1 + (y-1+qx) \la_2,$ agreeing with the binomial sum $ \Ga^{2} = y+qx.$ Inductively assume that for some $(n-1) > 0$ \begin{gather*} \Ga^{2(n-1)+1} = \La_1^{2n-2} + 1 \qquad \mbox{and}\qquad \Ga^{2(n-1)+2} = \La_2^{2n-1} +1. \end{gather*} We need to show that \begin{gather*} \Ga^{2n+1}= \La_1^{2n} +1 \qquad \mbox{and}\qquad \Ga^{2n+2} = \La_2^{2n+1} +1. \end{gather*} Just observe that \begin{gather*} \La_1^{2n} +1 = \La_1^{2n-1} + p( \La_2^{2n-1} +1) +1 = ( \La_1^{2n-1} +1) + p( \La_2^{2n-1} +1) \\ \phantom{\La_1^{2n} +1 }{} = (- \La_1^{2n-2} -2 +1) + p \left[ \La_2^{2n-2} + q( \La_1^{2n-2} +1) +1 \right] \\ \phantom{\La_1^{2n} +1 }{} = -( \La_1^{2n-2} +1) +p \left[ ( \La_2^{2n-2} +1) +q( \La_1^{2n-2} +1) \right] \\ \phantom{\La_1^{2n} +1 }{} = - \Ga^{2n-1} + p( \La_2^{2n-1} +1) = - \Ga^{2n-1} +p \Ga^{2n} = \Ga^{2n+1}, \end{gather*} where the last equality comes from Lemma~\ref{lemma3} and the inductive hypothesis is used in the preceding two lines. The same tack proves that $ \Ga^{2n+2}= \La_2^{2n+1} +1.$ We now know that the FFM-exponents are as given in Theorem~\ref{theorem2}. It only remains to check that the Kac--Kazhdan criterion (Theorem~\ref{theorem1}) is satisf\/ied. For $m$ and $ \alpha$ satisfying this criterion, \cite{Feigin}~give the prescription for the singular vector $F(s_{ \alpha }; \la )u$ of degree $m \alpha$ in $M( \la )$; so we need to verify the existence of such $ \alpha$ and $m.$ For $ \alpha = a \alpha_1 + b \alpha_2$ and $ \la = x \la_1 + y \la_2$, $h_{ \alpha}= ad_1^{-1} h_1 + bd_2^{-1}h_2= \frac{a}{p} h_1 + \frac{b}{q} h_2$, so the criterion can be restated as $2(x \la_1 + y \la_2 + \rho) {\left( \frac{a}{p}h_1 + \frac{b}{q}h_2\right)} =m\langle a \alpha_1 + b \alpha_2, a \alpha_1+b \alpha_2\rangle. $ After the calculations this is \[2\left(x \frac{a}{p} +y \frac{b}{q} + \frac{a}{p} + \frac{b}{q}\right) =m\left(\frac{2a^2}{p}-2ab+\frac{2b^2}{q}\right)\] or \[(x+1) \frac{a}{p}+(y+1) \frac{b}{q} = m\left( \frac{a^2}{p} -ab+ \frac{b^2}{q}\right).\] So after change of variable $ x+1 \rightarrow x$ and $ y+1 \rightarrow y$ the Kac--Kazhdan criterion becomes \[x \frac{a}{p} + y \frac{b}{q} = m\left( \frac{a^2}{p} - ab+ \frac{b^2}{q}\right).\] We show that the integral exponent of the centermost element in the singular vectors in the statement of the theorem precisely meets the integrality requirement of the criterion.
This is a case by case check, and somewhat tedious and technical; let us verify it for roots of type $(a_{2n+1}, \sigma a_{2n}),$ whose singular vector comprises $2n+1$ $f_1$'s and $2n$ $f_2$'s raised to appropriate powers; the centermost exponent is then the $2n+1$-st coef\/f\/icient of collinearity in the FFM procedure, or what we have called $ \Ga^{2n + 1}.$ A remark, a lemma, and a corollary will show that $\Ga^{2n + 1}$ does what it is supposed to. \begin{remark} $ q\Ga_2^{2n+1} = p\Ga_1^{2n},$ as is transparent from the def\/initions of $ \Ga^{2n}$ and $ \Ga^{2n+1}.$ \end{remark} The next lemma will relate the root sequence $ \{ a_{n} \}$ to the exponents of the singular vectors. \begin{lemma}\label{lemma4} The following is true for $ n \ge 0$: \begin{gather*} \Ga_1^{2n+1} = a_{2n+1}, \qquad \Ga_1^{2n+2} = \sigma a_{2n+2}. \end{gather*} \end{lemma} \begin{proof} Induction on $n.$ \end{proof} \begin{corollary}\label{corollary1} $ \Ga^{2n+1} = a_{2n+1}x + \frac{p}{q} \sigma a_{2n}y.$ \end{corollary} \begin{proof} $ \Ga^{2n+1} = \Ga_1^{2n+1}x + \Ga_2^{2n+1}y = a_{2n+1}x + \Ga_2^{2n+1}y = a_{2n+1}x + \frac{p}{q} \Ga_1^{2n} y = a_{2n+1}x + \frac{p}{q}ua_{2n}y.$ \end{proof} Finally the Kac--Kazhdan criterion for $( a_{2n+1}, ua_{2n})$ is \begin{gather*} a_{2n+1} \frac{x}{p} + ua_{2n} \frac{y}{q} = m\left(\frac{(a_{2n+1})^2}{p} - ua_{2n}a_{2n+1} + \frac{ (ua_{2n})^2}{q}\right) \end{gather*} or equivalently \begin{gather*} a_{2n+1}x+ \frac{p}{q}ua_{2n}y = m((a_{2n+1})^2 -pua_{2n}a_{2n+1}+ \frac{p}{q}(ua_{2n})^2) \\ \phantom{a_{2n+1}x+ \frac{p}{q}ua_{2n}y}{} = m( (a_{2n+1})^2 - pua_{2n}a_{2n+1} + \frac{p}{q} \frac{q}{p}(a_{2n})^2) = m' \in \mathbb{N} \end{gather*} since $ a_{2n+1}$ and $ua_{2n}$ are integral (polynomial in $p$ and $q$). But the left-hand side here is by the corollary exactly the exponent of the centermost letter in the singular vector, which by the formula given in the theorem is an integer; so the Kac--Kazhdan criterion is indeed satisf\/ied in this case. One can check in similar fashion that the remaining three cases also f\/it the integrality requirement. The singular vectors in the statement of the theorem appear as follows. The exponent of the centermost vector in all four cases must be integral: setting this expression in $x$ and $y$ equal to~$m$ one then solves for $x$ (or $y$) in terms of $m$ and $y$ (respectively, $x$); $t$ is then introduced as a~scalar multiple of $y$ (resp., $x$) to minimize notational clutter. This completes the proof of the theorem. \end{proof} \section{Projections}\label{section4} We next obtain projections of the singular vectors into the Heisenberg algebra, where they factor as products. While the theorem gives a simple and perhaps the most natural expression for the singular vectors in terms of the $ \Ga^{k}$ a change of variable is advantageous in the projection and factoring of these vectors in the Heisenberg algebra. In each case this involves setting the exponent of the vector immediately to the left of the centermost letter equal to a complex variab\-le~$ \alpha$ (which will then depend on $n$). For example the root $\gamma = (u^{-1}a_{4}, a_{3}) = (p(pq-2), pq-1)$ has from the theorem the corresponding singular vector: \begin{gather*} f_1^{ \frac{((pq)^3-5(pq)^2+6pq-1)m}{q(pq-2)}+(pq-1)t} f_2^{ \frac{q((pq)^2-4pq+3)m}{q(pq-2)}+qt} f_1^{ \frac{(pq)^2-3pq+1)m}{q(pq-2)}+t}f_2^{m}\\ \qquad{}\times f_1^{ \frac{(pq-1)m}{q(pq-2)}-t}f_2^{ \frac{qm}{q(pq-2)}-qt}f_1^{ \frac{m}{q(pq-2)}-(pq-1)t}. 
\end{gather*} Taking \begin{gather*} \alpha = \frac{( (pq)^2-3pq+1)m}{q(pq-2)} + t \end{gather*} the singular vector becomes \begin{gather*} f_1^{(pq-1) \alpha -pm}f_2^{q \alpha -m}f_1^{ \alpha}f_2^{m}f_1^{pm- \alpha}f_2^{q(pm - \alpha) -m}f_1^{(pq-1)(pm- \alpha)-pm} \end{gather*} which can in turn be rewritten in terms of the $ \Ga^{k} $ as: \begin{gather*} f_1^{ \Ga_2^{4} \alpha - \Ga_2^{3}m}f_2^{ \Ga_1^{2} \alpha - \Ga_1^{1}m}f_1^{ \Ga_2^{2} \alpha - \Ga_2^{1}m}f_2^{m}f_1^{- \Ga_2^{2} \alpha + \Ga_2^{3}m}f_2^{- \Ga_1^{2} \alpha + \Ga_1^{3}m}f_1^{ - \Ga_2^{4} \alpha + \Ga_2^{5}m}. \end{gather*} \begin{proposition}\label{proposition3} Under this change of variable the singular vectors take the following form: 1. For $\alpha=(a_{2n-1}, \sigma a_{2n-2})$ the corresponding $F( s_\alpha; \la)$ is \begin{gather*} f_1^{ \Ga_2^{ 2n - 1} \alpha - \Ga_2^{ 2n - 2 }m } f_2^{ \Ga_1^{ 2n - 3} \alpha - \Ga_1^{ 2n - 4}m}f_1^{ \Ga_2^{2n - 3} \alpha - \Ga_2^{2n - 4}m}f_2^{ \Ga_1^{2n - 5} \alpha - \Ga_1^{2n - 6}m}f_1^{ \Ga_2^{2n - 5} \alpha - \Ga_2^{2n - 6}m}\\ \qquad{}\cdots f_2^{ \Ga_1^{3} \alpha - \Ga_1^{2} m}f_1^{ \Ga_2^{3} \alpha - \Ga_2^{2}m}f_2^{ \Ga_1^{1} \alpha - \Ga_1^{0}m}f_1^{m}f_2^{ - \Ga_1^{1} \alpha + \Ga_1^{2}m}f_1^{- \Ga_2^{3} \alpha + \Ga_2^{4}m}f_2^{ -\Ga_1^{3} \alpha + \Ga_1^{4}m} \\ \qquad{}\cdots f_1^{ - \Ga_2^{2n-3} \alpha + \Ga_2^{2n-2}m}f_2^{ - \Ga_1^{2n-3} \alpha + \Ga_1^{2n-2}m}f_1^{ - \Ga_2^{2n-1} \alpha + \Ga_2^{2n}m}. \end{gather*} 2. For $ \gamma = ( a_{2n-1}, ua_{2n})$ the corresponding singular vector $F(s_{\gamma}; \la)$ is \begin{gather*} f_2^{ \Ga_1^{2n-1} \alpha - \Ga_1^{2n-2}m} \big( F(s_{(a_{2n-1}, ua_{2n-2})} ; \la \big) f_2^{- \Ga_1^{2n-1} \alpha + \Ga_1^{2n}m}. \end{gather*} 3. For $ \gamma = (u^{-1}a_{2n-2}, a_{2n-1})$ the singular vector $ F(s_{ \gamma}; \la)$ becomes \begin{gather*} f_2^{ \Ga_1^{2n-2} \alpha - \Ga_1^{2n-3}m}f_1^{ \Ga_2^{2n-2} \alpha - \Ga_1^{2n-3}m}f_2^{ \Ga_1^{2n-4} \alpha - \Ga_1^{2n-5}m}f_1^{ \Ga_2^{2n-4} \alpha - \Ga_2^{2n-5}m} \cdots \\ \qquad{} f_2^{ \Ga_1^{2} \alpha - \Ga_1^{1}m}f_1^{ \Ga_2^{2} \alpha - \Ga_2^{1}m}f_2^{m}f_1^{ - \Ga_2^{2} \alpha + \Ga_2^{3}m}f_2^{ - \Ga_1^{2} \alpha + \Ga_1^{3}m} \cdots f_1^{ -\Ga_2^{2n-2} \alpha + \Ga_2^{2n-1}m}f_2^{- \Ga_1^{2n-2} \alpha + \Ga_1^{2n-1}m}. \end{gather*} 4. For $ \gamma = (u^{-1}a_{2n}, a_{2n-1})$ the singular vector is \begin{gather*} f_1^{ \Ga_2^{2n} \alpha - \Ga_2^{2n-1}m} \big( F(s_{(u^{-1}a_{2n-2}, a_{2n-1)}} ; \la)\big) f_1^{- \Ga_2^{2n} \alpha + \Ga_2^{2n+1}m}. \end{gather*} \end{proposition} \begin{proof} This can be established by induction on the number of pairs transformed. \end{proof} We now project the singular vectors into the universal enveloping algebra of the three-dimensional Heisenberg algebra $\mathcal H$. Recall that this is generated by $f_1$, $f_2,$ with $[f_1, f_2] =: h$, $[f_1, h] = [f_2, h] = 0.$ Thus, the projection ${\mathcal U}(N_-)\to{\mathcal U}({\mathcal H})$ is the factorization over the (two-sided) ideal generated by $[f_1,h]$, $[f_2,h]$. Let $H_{u} := f_{2}f_{1} + uh$ for $u \in \mathbb{C}.$ Observe that $ H_{u} H_{v} = H_{v} H_{u}$, $u,v \in \mathbb{C}.$ The following relations also hold in the Heisenberg (for positive integers, and hence for arbitrary complex numbers $\alpha$, $\beta$, $u$). 
\begin{lemma}For $ \alpha, \beta, u \in \mathbb{C}$, \begin{enumerate}\itemsep=0pt \item[1)] $f_2^{ \alpha} H_{u} = H_{u - \alpha} f_2^{ \alpha}$; \item[2)] $f_1^{ \beta} H_{u} = H_{u+ \beta} f_1^{ \beta}$; \item[3)] $f_1^{ \alpha} f_2^{n} f_1^{n - \alpha} = H_{ \alpha } H_{ \alpha - 1 } \cdots H_{ \alpha - (n-1) }$; \item[4)] $f_2^{ \alpha} f_1^{n} f_2^{n - \alpha} = H_{1 - \alpha} H_{2 - \alpha} \cdots H_{n - \alpha}$. \end{enumerate} \end{lemma} \begin{proof} The calculations follow readily from the complex binomial formula given in \cite{Feigin}: for $ g_{1}, g_{2} \in \mathcal{G}$ a Lie algebra, $ \gamma_{1}, \gamma_{2} \in \mathbb{C},$ we have \begin{gather*} g_1^{ \gamma_1} g_2^{ \gamma_2} = g_2^{ \gamma_2} g_1^{ \gamma_1} + \sum_{j_1 = 1}^{ \infty} \sum_{j_2 = 1}^{ \infty} \binom{ \gamma_1}{j_1} \binom{ \gamma_2}{j_2} Q_{j_1 j_2}( g_1, g_2) g_2^{ \gamma_2 - j_2} g_1^{ \gamma_1 - j_1}, \end{gather*} where $ {\binom{ \gamma}{j} = \frac{ \gamma^{(j)} }{j!}}$ with $ \gamma^{(j)} = \gamma ( \gamma - 1) \cdots (\gamma - j + 1)$ and the Lie polynomials $Q_{j_1 j_2}$ can be calculated explicitly, for example using the recursion \begin{gather*} Q_{j_1 j_2}(g_1,g_2) = [g_1, Q_{j_1 - 1, j_2}(g_1, g_2)] + \sum_{v = 0}^{ j_2 - 1} \binom{j_2}{v} Q_{j_1 - 1,v}(g_1,g_2) [ \underbrace{ g_2, \dots , [g_2, [g_2 }_{j_2 - v}, g_1]] \dots ] \end{gather*} with $Q_{00} = 1$ and $Q_{0,v} = 0$ for $v > 0.$ \end{proof} Now set, for $r,s \in \mathbb{N}$ \begin{gather*} \mathcal{H}_0^{r,s} = 1 \qquad \textrm{(the empty product)}, \\ \mathcal{H}_1^{r,s} = \prod_{k=1}^{( \Ga_2^{2} - \Ga_1^{0})m}H_{( - \Ga_2^{2}+ \Ga_2^{3}- \cdots \pm\Ga_2^{r})\alpha+ ( -\Ga_1^{1}+ \Ga_1^{2}- \cdots \pm \Ga_1^{s})m \, +k}. \end{gather*} The for $j \ge 1$ def\/ine \begin{gather*} \mathcal{H}_{2j}^{r,s} = \prod_{k=1}^{( \Ga_1^{2j} - \Ga_2^{2j})m} H_{ ( \Ga_2^{2j+1}- \Ga_2^{2j+2}+ \cdots \pm \Ga_2^{r}) \alpha + ( -\Ga_1^{2j-1} + \Ga_1^{2j}- \cdots \pm \Ga_1^{s})m - \, (k-1)}, \\ \mathcal{H}_{2j+1}^{r,s} = \prod_{k=1}^{( \Ga_2^{2j+2}- \Ga_1^{2j})m}H_{(- \Ga_2^{2j+2}+ \Ga_2^{2j+3}- \cdots \pm \Ga_2^{r}) \alpha + ( \Ga_1^{2j} - \Ga_1^{2j+1}+ \cdots \pm \Ga_1^{s})m \, +k}. \end{gather*} We will also need the following, not dissimilar, but warranting its own notation: \begin{gather*} \mathcal{ \tilde{H}}_0^{r,s} = 1, \\ \mathcal{ \tilde{H}}_1^{r,s} = \prod_{k=1}^{( \Ga_2^{2} - \Ga_2^{1})m} \mathcal{H}_{( \Ga_1^{1}- \Ga_1^{2} + \cdots \pm \Ga_1^{r}) \alpha + ( \Ga_2^{2} - \Ga_2^{3} + \cdots \pm \Ga_2^{s})m - (k-1)}. \end{gather*} For $j \ge 1$ def\/ine \begin{gather*} \mathcal{ \tilde{H}}_{2j}^{r,s} = \prod_{k=1}^{( \Ga_2^{ 2j + 1 } - \Ga_2^{2j})m} H_{( - \Ga_1^{2j} + \Ga_1^{2j+1} - \cdots \pm \Ga_1^{r}) \alpha + ( \Ga_2^{2j} - \Ga_2^{2j+1} + \cdots \pm \Ga_2^{s})m \, +k}, \\ \mathcal{ \tilde{H}}_{ 2j + 1 }^{r,s} = \prod_{k=1}^{ ( \Ga_2^{ 2j + 2 } - \Ga_2^{ 2j + 1 }) m} H_{ ( \Ga_1^{2j+1} - \Ga_1^{2j+2} + \cdots \pm \Ga_1^{r}) \alpha + ( - \Ga_2^{2j+1} + \Ga_2^{2j+2} - \cdots \pm \Ga_2^{s})m - (k-1)}. \end{gather*} \begin{theorem}\label{theorem3} The singular vectors whose words $F(s_{ \alpha}; \la)$ were given in the preceding theorem project to the Heisenberg algebra as the following products: \begin{enumerate}\itemsep=0pt \item The singular vector corresponding to $(a_{2n-1}, ua_{2n-2})$ projects to \begin{gather*} \prod_{w=1}^{2n-2} \mathcal{H}_{w}^{2n-1, 2n-3}\, f_1^{( \Ga_1^{2n-1}- \Ga_1^{2n-2})m}. 
\end{gather*} \item The singular vector corresponding to $(a_{2n-1}, ua_{2n})$ projects to \begin{gather*} \prod_{w=1}^{2n-1} \mathcal{H}_{w}^{2n, 2n-2} \, f_2^{( \Ga_1^{2n} - \Ga_1^{2n-1})m}. \end{gather*} \item The singular vector for $(u^{-1}a_{2n-2}, a_{2n-1})$ projects to \begin{gather*} \prod_{w=1}^{2n-2} \mathcal{ \tilde{H}}_{w}^{2n-2, 2n-2} f_2^{( \Ga_2^{2n} - \Ga_2^{2n-1})m}. \end{gather*} \item The singular vector for $( u^{-1}a_{2n}, a_{2n-1})$ projects to \begin{gather*} \prod_{w=1}^{2n-1} \mathcal{ \tilde{H}}_{w}^{2n-1, 2n-1} f_1^{( \Ga_2^{2n+1} - \Ga_2^{2n})m}. \end{gather*} \end{enumerate} \end{theorem} \begin{proof} Induction on $n.$ This is completely straightforward using the change of variables for the singular vectors given in the proposition above. \end{proof} \section{Other projections }\label{section5} One might ask whether projections into other algebras, for instance into $ \mathcal{U}(sl_{2}),$ are equally possible. A homomorphism from the universal enveloping algebra of $N_{-}$ to $ \mathcal{U}(sl_2)$ is def\/ined when $p>1$ and $q>1$. It is the factorization over the (two-sided) ideal generated by $[h,f_1]-f_1$ and $[h,f_2]+f_2$ (where, as before, $h=[f_1,f_2]$). Set, for $ u \in \mathbb{C},$ \begin{gather*} J_{u} = f_2f_1 + uh - \frac{u(u - 1)}{2}. \end{gather*} It can then be checked that for $ \beta \in \mathbb{C}$ we have the following in $ \mathcal{U}(sl_2)$ (putting $e = f_1$, $f = f_2):$ \begin{gather*} f_1^{ \beta} J_{u} = J_{u + \beta} f_1^ { \beta},\qquad f_2^{ \beta}J_{u} = J_{ u - \beta}f_2^{ \beta} \end{gather*} as well as, for $n \in \mathbb{N},$ \begin{gather*} f_1^{ \beta} f_2^{n}f_1^{ n - \beta} = J_{ \beta} J_{ \beta - 1} \cdots J_{ \beta - (n - 1)}, \\ f_2^{ \beta} f_1^{n} f_2^{ n - \beta} = J_{ 1 - \beta} J_{ 2 - \beta} \cdots J_{ n - \beta}, \\ f_1^{n} f_2^{ n} = J_{1} \cdots J_{n}. \end{gather*} These properties permit the formal manipulations that af\/ford the factorization results we have already detailed. Simply substitute $J$ for $H$ in the statement of the projection theorem. \section{Concluding remarks }\label{section6} In \cite{Fuchs} it is observed that information about singular vectors of Verma modules can be used to obtain information about the homologies of nilpotent Lie algebras. Namely, the dif\/ferentials of the Bernstein--Gel'fand--Gel'fand resolution $BGG$ of $\mathbb C$ over $N_-$ (see~\cite{Bern}) are presented by matrices whose entries are singular vectors in the Verma modules. Thus, if $V$ is an $N_{ - }$-module that is trivial over the kernel of the projection of $ \mathcal{U}(N^{ - })$ onto $ \mathcal{U}(H)$ or $\mathcal{U}(sl_{2})$ considered above, then our formulas give an explicit description of $BGG \otimes V$ and $\Hom(BGG,V)$. \pdfbookmark[1]{References}{ref}
{ "config": "arxiv", "file": "0806.1976/sigma08-059.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: The amount of time using exponential distribution? QUESTION [0 upvotes]: I have no idea how to approach this kind of question... There is train A which arrives according to an Exponential distribution with parameter $\lambda$, and train B which arrives according to an Exponential distribution with parameter $\mu$. Solve for the distribution of T, the amount of time a person will wait before either A or B arrives. John wanted to take train A to go to the party, but he realized that he would arrive too early. So he decides that he will wait for the second arrival of train A. Find the distribution of Z = $X_1 + X_2$, the amount of time that John will wait to take the train. REPLY [0 votes]: The time waiting before either A or B arrives is the minimum of the two waiting times for each train. If $X_{1}$ and $X_{2}$ are independent and exponentially distributed, then $T = \min\{X_{1},X_{2}\}$ is also exponentially distributed (with the sum of the two rates). If you get stuck on showing this, check out this link: How to Prove that the minimum of two exponential random variables is another. The distribution of the sum of independent exponentials can be found using the convolution formula $$f_{Z}(z) = \int_{0}^{z}f_{X_{1}}(s)f_{X_{2}}(z-s)ds$$ Then plug in the density function for the exponential and evaluate the integral. (I am not sure whether you are defining parameter $\lambda$ as $f(x) = \lambda e^{-\lambda x}$ or $f(x) = \frac{1}{\lambda}e^{-x/\lambda}$.)
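A quick Monte Carlo sanity check of both claims (a sketch using numpy, adopting the rate convention $f(x)=\lambda e^{-\lambda x}$; the rates and sample size are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
lam, mu, N = 1.5, 0.5, 1_000_000

A = rng.exponential(1 / lam, N)      # numpy parametrizes by the mean 1/rate
B = rng.exponential(1 / mu, N)

T = np.minimum(A, B)                 # waiting time until the first of A, B
print(T.mean(), 1 / (lam + mu))      # both ~ 0.5, consistent with T ~ Exp(lam + mu)

Z = rng.exponential(1 / lam, N) + rng.exponential(1 / lam, N)
print(Z.mean(), Z.var(), 2 / lam, 2 / lam**2)   # Z ~ Gamma(2, 1/lam), the Erlang
                                                # density lam^2 z e^{-lam z} from
                                                # the convolution integral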
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3316796, "subset_name": null }
\begin{document} \title{Invariance of KMS states on graph $C^{\ast}$-algebras under classical and quantum symmetry} \author{Soumalya Joardar, Arnab Mandal} \maketitle \begin{abstract} We study invariance of KMS states on graph $C^{\ast}$-algebras coming from strongly connected and circulant graphs under the classical and quantum symmetry of the graphs. We show that the unique KMS state for strongly connected graphs is invariant under the quantum automorphism group of the graph. For circulant graphs, it is shown that the actions of the classical and quantum automorphism groups preserve only one of the KMS states occurring at the critical inverse temperature. We also give an example of a graph $C^{\ast}$-algebra having more than one KMS state such that all of them are invariant under the action of the classical automorphism group of the graph, but there is a unique KMS state which is invariant under the action of the quantum automorphism group of the graph. \end{abstract} {\bf Keywords}: KMS states, Graph $C^{\ast}$-algebra, Quantum automorphism group. \\ {\bf Mathematics subject classification}: 05C50, 46L30, 46L89. \section {Introduction} For a $C^{\ast}$-algebra $\cla$, $(\cla,\sigma)$ is called a $C^{\ast}$-dynamical system if there is a strongly continuous map $\sigma:\mathbb{R}\raro{\rm Aut}(\cla)$. When $\clh$ is a finite dimensional Hilbert space, a $C^{\ast}$-dynamical system $(\clb(\clh),\sigma)$ is given by a self adjoint operator $H\in\clb(\clh)$ in the sense that $\sigma_{t}(A)=e^{itH}Ae^{-itH}$. For such a $C^{\ast}$-dynamical system, it is well known that at any inverse temperature $\beta\in\mathbb{R}$, the unique thermal equilibrium state is given by the Gibbs state \begin{displaymath} \omega_{\beta}(A)=\frac{{\rm Tr}(e^{-\beta H}A)}{{\rm Tr}(e^{-\beta H})}, \quad A\in\clb(\clh). \end{displaymath} For a general $C^{\ast}$-dynamical system $(\cla,\sigma)$ the generalization of the Gibbs states are the KMS (Kubo-Martin-Schwinger) states. A KMS state for a $C^{\ast}$-dynamical system $(\cla,\sigma)$ at an inverse temperature $\beta\in\mathbb{R}$ is a state $\tau\in\cla^{\ast}$ that satisfies the KMS condition given by \begin{displaymath} \tau(ab)=\tau(b\sigma_{i\beta}(a)), \end{displaymath} for $a,b$ in a dense subalgebra of $\cla$ called the algebra of analytic elements of $(\cla,\sigma)$. For a general $C^{\ast}$-dynamical system, unlike the finite dimensional case, a KMS state need not exist at every inverse temperature. Even when KMS states exist, generally nothing can be said about their uniqueness at a given temperature. It is worth mentioning that in the physics literature, uniqueness of the KMS state is often related to phase transition and symmetry breaking.\\ \indent One of the mathematically well studied classes of KMS states are the KMS states for the $C^{\ast}$-dynamical systems on graph $C^{\ast}$-algebras (see \cite{Laca}, \cite{watatani}). For a finite directed graph $\Gamma$ the dynamical system is given by the $C^{\ast}$-algebra $C^{\ast}(\Gamma)$, with $\sigma$ the natural lift of the canonical gauge action on $C^{\ast}(\Gamma)$. In \cite{Laca} it is shown that there is a KMS state at the inverse critical temperature ${\rm ln}(\rho(D))$ if and only if $\rho(D)$ is an eigenvalue of $D$ with an eigenvector having all entries nonnegative, where $\rho(D)$ is the spectral radius of the vertex matrix $D$ of the graph. For a general graph $C^{\ast}$-algebra, we cannot say anything about uniqueness of KMS states. In \cite{Laca}, for strongly connected graphs, such a uniqueness result has been obtained.
In fact, for strongly connected graphs, there is a unique KMS state, occurring only at the critical inverse temperature. However, for another class called the circulant graphs, one can show that the KMS states at the critical inverse temperature are not unique. In this context, it is interesting to study the invariance of KMS states under some natural additional internal symmetry (apart from the gauge symmetry) of the graph $C^{\ast}$-algebra, and to see whether such invariance can force the KMS states to be unique in certain cases. It is shown in \cite{Web} that for a graph $\Gamma$, the graph $C^{\ast}$-algebra has a natural generalized symmetry coming from the quantum automorphism group $Q^{\rm aut}_{\rm Ban}(\Gamma)$ (see \cite{Ban}) of the graph itself. This symmetry is generalized in the sense that it contains the classical automorphism group ${\rm Aut}(\Gamma)$ of the graph. In this paper we study the invariance of the KMS states under this generalized symmetry.\\ \indent For a strongly connected graph $\Gamma$, we show that the already unique KMS state is preserved by the quantum automorphism group $Q_{\rm Ban}^{\rm aut}(\Gamma)$. This result has a rather interesting consequence for the ergodicity of the action of $Q^{\rm aut}_{\rm Ban}(\Gamma)$ on the graph. It is shown that if the row sums of the vertex matrix of a strongly connected graph are not all equal, then $Q^{\rm aut}_{\rm Ban}(\Gamma)$ cannot act ergodically. Then we study another class of graphs called the circulant graphs. Circulant graphs admit KMS states at the inverse critical temperature, but these are not necessarily unique. However, due to the transitivity of the action of the automorphism group, it is shown that there exists in fact a unique KMS state which is invariant under the classical or quantum symmetry of the system. We also show that the only inverse temperature where a KMS state can occur is the inverse critical temperature. Finally, we show by an example that we genuinely need the invariance of the KMS state under the action of the quantum symmetry group to force the KMS state to be unique. More precisely, we give an example of a graph with 48 vertices coming from a linear binary constraint system (LBCS, see \cite{nonlocal1}) where the corresponding graph $C^{\ast}$-algebra has more than one KMS state, all of which are preserved by the action of the classical automorphism group of the graph. However, it has a unique $Q^{\rm aut}_{\rm Ban}(\Gamma)$-invariant KMS state. In this example too, the only possible inverse temperature at which a KMS state can occur is the inverse critical temperature. This shows that indeed, where the classical symmetry fails to fix the KMS state, the richer `genuine' quantum symmetry of the system plays a crucial role in fixing it. \section{Preliminaries} \subsection{KMS states on graph $C^{\ast}$-algebras without sink} \label{KMSgraph} A {\bf finite} directed graph is a collection of finitely many edges and vertices. If we denote the edge set of a graph $\Gamma$ by $E=(e_{1},\ldots,e_{n})$ and the set of vertices of $\Gamma$ by $V=(v_{1},\ldots,v_{m})$, then recall the maps $s,t:E\raro V$ and the vertex matrix $D$, which is an $m\times m$ matrix whose $ij$-th entry is $k$ if there are $k$ edges from $v_{i}$ to $v_{j}$. We denote the space of paths by $E^{\ast}$ (see \cite{Laca}). $vE^{\ast}w$ will denote the set of paths between two vertices $v$ and $w$. \bdfn $\Gamma$ is said to be without sink if the map $s:E\raro V$ is surjective.
Furthermore, $\Gamma$ is said to be without multiple edges if the vertex matrix $D$ has entries either $1$ or $0$. \edfn \brmrk Note that the graph $C^{\ast}$-algebra corresponding to a graph without sink is a Cuntz-Krieger algebra. Readers may see \cite{Cuntz-Krieger} for more details on Cuntz-Krieger algebras. \ermrk Now we recall some basic facts about graph $C^{\ast}$-algebras. Readers may consult \cite{Pask} for details on graph $C^{\ast}$-algebras. Let $\Gamma=\{E=(e_{1},...,e_{n}),V=(v_{1},...,v_{m})\}$ be a finite, directed graph without sink. In this paper all the graphs are {\bf finite}, without {\bf sink} and without {\bf multiple edges}. We assign a partial isometry $S_{i}$ to the edge $e_{i}$ for each $i=1,...,n$ and a projection $p_{v_{i}}$ to the vertex $v_{i}$ for each $i=1,...,m$. \bdfn \label{Graph} The graph $C^{\ast}$-algebra $C^{\ast}(\Gamma)$ is defined as the universal $C^{\ast}$-algebra generated by partial isometries $\{S_{i}\}_{i=1,\ldots,n}$ and mutually orthogonal projections $\{p_{v_{k}}\}_{k=1,\ldots,m}$ satisfying the following relations: \begin{displaymath} S_{i}^{\ast}S_{i}=p_{t(e_{i})}, \sum_{s(e_{j})=v_{l}}S_{j}S_{j}^{\ast}=p_{v_{l}}. \end{displaymath} \edfn In a graph $C^{\ast}$-algebra $C^{\ast}(\Gamma)$, we have the following (see Subsection 2.1 of \cite{Pask}):\\ 1. $\sum_{k=1}^{m}p_{v_{k}}=1$ and $S_{i}^{\ast}S_{j}=0$ for $i\neq j$. \vspace{0.05in}\\ 2. $S_{\mu}=S_{1}S_{2}\ldots S_{l}$ is non-zero if and only if $\mu=e_{1}e_{2}\ldots e_{l}$ is a path, i.e. $t(e_{i})=s(e_{i+1})$ for $i=1,\ldots,(l-1)$. \vspace{0.05in}\\ 3. $C^{\ast}(\Gamma)={\overline{\rm Sp}}\{S_{\mu}S_{\nu}^{\ast}:t(\mu)=t(\nu)\}$. \vspace{0.1in}\\ \indent Now we shall briefly discuss KMS states on graph $C^{\ast}$-algebras coming from graphs without sink. For that we recall the Toeplitz algebra $\clt C^{\ast}(\Gamma)$. Readers are referred to \cite{Laca} for details. Our convention, though, is opposite to that of \cite{Laca}, in the sense that we interchange source projections and target projections. Also, we shall modify the results of \cite{Laca} according to our needs. Suppose that $\Gamma$ is a directed graph as before. A Toeplitz-Cuntz-Krieger $\Gamma$-family consists of mutually orthogonal projections $\{p_{v_{i}}:v_{i}\in V\}$ and partial isometries $\{S_{i}:e_{i}\in E\}$ such that $S_{i}^{\ast}S_{i}=p_{t(e_{i})}$ and \begin{displaymath} p_{v_{l}}\geq \sum_{s(e_{i})=v_{l}}S_{i}S_{i}^{\ast}. \end{displaymath} The Toeplitz algebra $\clt C^{\ast}(\Gamma)$ is defined to be the universal $C^{\ast}$-algebra generated by a Toeplitz-Cuntz-Krieger $\Gamma$-family. It is clear from the definition that $C^{\ast}(\Gamma)$ is the quotient of $\clt C^{\ast}(\Gamma)$ by the ideal $\clj$ generated by \begin{displaymath} P:=\{p_{v_{l}}-\sum_{s(e_{i})=v_{l}}S_{i}S_{i}^{\ast}\}. \end{displaymath} Standard arguments give $\clt C^{\ast}(\Gamma)=\overline{\rm Sp} \{S_{\mu}S_{\nu}^{\ast}:t(\mu)=t(\nu)\}$. $\clt C^{\ast}(\Gamma)$ admits the usual gauge action $\gamma$ of $\mathbb{T}$, which descends to the usual gauge action on $C^{\ast}(\Gamma)$, given on the generators by $\gamma_{z}(S_{\mu}S_{\nu}^{\ast})=z^{(|\mu|-|\nu|)}S_{\mu}S_{\nu}^{\ast}$. Consequently it has a dynamics $\alpha:\mathbb{R}\raro {\rm Aut} \ C^{\ast}(\Gamma)$ which is lifted from $\gamma$ via the map $t\raro e^{it}$. We recall the following from \cite{Laca} (Proposition 2.1).
\bppsn \label{exist_KMS} Let $\Gamma$ be a finite, directed, connected graph without sink and let $\gamma:\mathbb{T}\raro {\rm Aut} \ \clt C^{\ast}(\Gamma)$ be the gauge action with the corresponding dynamics $\alpha:\mathbb{R}\raro {\rm Aut} \ \clt C^{\ast}(\Gamma)$. Let $\beta\in\mathbb{R}$.\\ (a) A state $\tau$ is a ${\rm KMS}_{\beta}$ state of $(\clt C^{\ast}(\Gamma),\alpha)$ if and only if \begin{displaymath} \tau(S_{\mu}S_{\nu}^{\ast})=\delta_{\mu,\nu}e^{-\beta|\mu|}\tau(p_{t(\mu)}). \end{displaymath} (b) Suppose that $\tau$ is a ${\rm KMS}_{\beta}$ state of $(\clt C^{\ast}(\Gamma),\alpha)$, and define $\cln^{\tau}=(\cln^{\tau}_{i})\in[0,\infty)^{m}$ by $\cln^{\tau}_{i}=\tau(p_{v_{i}})$. Then $\cln^{\tau}$ is a probability measure on $V$ satisfying the subinvariance condition $D\cln^{\tau}\leq e^{\beta}\cln^{\tau}$.\\ (c) A ${\rm KMS}_{\beta}$ state factors through $C^{\ast}(\Gamma)$ if and only if $(D\cln^{\tau})_{i}=e^{\beta}\cln^{\tau}_{i}$ for all $i=1,\ldots, m$, i.e. $\cln^{\tau}$ is an eigenvector of $D$ with eigenvalue $e^{\beta}$. \eppsn \subsubsection{KMS states at the critical inverse temperature} \label{KMS} In this subsection we collect a few results on the existence of KMS states at the inverse critical temperature on graph $C^{\ast}$-algebras coming from graphs {\bf without} sink. For that we continue to assume $\Gamma$ to be a finite, connected graph without sink and with vertex matrix $D$. We denote the spectral radius of $D$ by $\rho(D)$. With these notations, combining Proposition 4.1 and Corollary 4.2 of \cite{Laca}, we have the following \bppsn \label{exist1} The graph $C^{\ast}$-algebra $C^{\ast}(\Gamma)$ has a ${\rm KMS}_{{\rm ln}(\rho(D))}$ state if and only if $\rho(D)$ is an eigenvalue of $D$ admitting an eigenvector with all entries non-negative. \eppsn \blmma \label{onetemp} Suppose $\Gamma$ is a finite directed graph without sink with vertex matrix $D$. If $\rho(D)$ is an eigenvalue of $D$ admitting a left eigenvector ${\bf v}$ whose entries are strictly positive, i.e. ${\bf v}^{T}D=\rho(D){\bf v}^{T}$, then the only possible inverse temperature where a KMS state can occur is ${\rm ln}(\rho(D))$. \elmma {\it Proof}:\\ Suppose $\beta\in\mathbb{R}$ is another possible inverse temperature where a KMS state, say $\phi$, could occur. Since we have assumed our graph to be without sink, by (c) of Proposition \ref{exist_KMS}, $e^{\beta}$ is an eigenvalue of $D$. Let us denote an eigenvector corresponding to $e^{\beta}$ by ${\bf w}=(w_{1},\ldots,w_{m})$, so that $w_{i}=\phi(p_{v_{i}})$. Since $\phi$ is a state, $w_{i}\geq 0$ for all $i=1,\ldots,m$ with at least one entry strictly positive. We have \begin{eqnarray*} && D{\bf w}=e^{\beta}{\bf w}\\ &\Rightarrow& {\bf v}^{T}D{\bf w}={\bf v}^{T}e^{\beta}{\bf w}\\ & \Rightarrow& (\rho(D)-e^{\beta}){\bf v}^{T}{\bf w}=0\\ \end{eqnarray*} By assumption, all the entries of ${\bf v}$ are strictly positive and $w_{i}\geq 0$ for all $i$ with at least one entry strictly positive, which implies that ${\bf v}^{T}{\bf w}\neq0$ and hence $e^{\beta}=\rho(D)$, i.e. $\beta={\rm ln}(\rho(D))$.\qed\vspace{0.1in}\\ We discuss examples of two classes of graphs which are {\bf without sink} and which admit KMS states only at the inverse critical temperature. We shall use them later in this paper. \bxmpl \rm{{\bf Strongly connected graphs}: \bdfn A graph is said to be strongly connected if $vE^{\ast}w$ is non-empty for all $v,w\in V$.
\edfn \bdfn An $m\times m$ matrix $D$ is said to be irreducible if for each pair $i,j\in\{1,\ldots,m\}$, there is some $k>0$ such that $D^{k}(i,j)>0$. \edfn \bppsn A graph is strongly connected if and only if its vertex matrix is irreducible. \eppsn \bppsn An irreducible matrix $D$ has its spectral radius $\rho(D)$ as an eigenvalue with a one-dimensional eigenspace spanned by a vector with all its entries strictly positive (called the Perron-Frobenius eigenvector). \eppsn As a corollary we have \bcrlre Let $\Gamma$ be a strongly connected graph. Then the graph $C^{\ast}$-algebra $C^{\ast}(\Gamma)$ has a unique ${\rm KMS}_{{\rm ln}(\rho(D))}$ state. In fact, by (b) of Theorem 4.3 of \cite{Laca}, this is the only KMS state. \ecrlre}\exmpl \bxmpl\label{circulant}\rm{{\bf Circulant graphs} \bdfn A graph with $m$ vertices is said to be circulant if its automorphism group contains the cyclic group $\mathbb{Z}_{m}.$ \edfn It is easy to see that if a graph is circulant, then its vertex matrix is determined by its first row, say $(d_{0},\ldots,d_{m-1})$. More precisely, the vertex matrix $D$ of a circulant graph is given by \begin{center} $\begin{bmatrix} d_{0} & d_{1} & \cdots & d_{m-1}\\ d_{m-1} & d_{0} & \cdots & d_{m-2}\\ \vdots & \vdots & \ddots & \vdots\\ d_{1} & d_{2} & \cdots & d_{0} \end{bmatrix}.$ \end{center} \brmrk \label{cws} Note that a circulant graph is always without sink, except in the trivial case where it has no edges at all. This is because if the $i$-th vertex of a circulant graph were a sink, then the $i$-th row of the vertex matrix would be zero, forcing all the rows to be identically zero. For this reason, in this paper we study circulant graphs {\bf without} sink. \ermrk Let $\epsilon$ be a primitive $m$-th root of unity. The following is well known (see \cite{note}): \bppsn For a circulant graph with vertex matrix as above, the eigenvalues are given by \begin{displaymath} \lambda_{l}=d_{0}+\epsilon^{l}d_{1}+\ldots+\epsilon^{(m-1)l}d_{m-1}, \ l=0,\ldots,(m-1). \end{displaymath} \eppsn It is easy to see that $\lambda=\sum_{i=0}^{m-1}d_{i}$ is an eigenvalue of $D$ and that it has a normalized eigenvector given by $(\frac{1}{m},\ldots,\frac{1}{m})$. Since $|\lambda_{l}|\leq \lambda$, we have \bcrlre For a circulant graph $\Gamma$ with vertex matrix $D$, $D$ has its spectral radius $\lambda$ as an eigenvalue with a normalized eigenvector (not necessarily unique) having all its entries non-negative. \ecrlre Combining the above corollary with Proposition \ref{exist1}, we have \bcrlre \label{not_unique} For a circulant graph $\Gamma$, $C^{\ast}(\Gamma)$ has a ${\rm KMS}_{{\rm ln}(\lambda)}$ state. \ecrlre \blmma \label{circonetemp} For a circulant graph $\Gamma$, the only possible inverse temperature where a KMS state can occur is the inverse critical temperature. \elmma {\it Proof}:\\ As we have assumed the circulant graphs to be without sink, it is enough to show that the vertex matrix $D$ of $\Gamma$ satisfies the conditions of Lemma \ref{onetemp}. It has already been observed that the eigenvalue $\lambda$ has an eigenvector with all its entries positive (the column vector with all its entries $1$, to be precise). Also, since the row sums are equal to the column sums, all being equal to $\lambda$, we have $(1,\ldots,1)D=\lambda(1,\ldots,1)$.
Hence an application of Lemma \ref{onetemp} finishes the proof.\qed\vspace{0.1in}\\ Note that the KMS state at the inverse critical temperature is not necessarily unique, since the dimension of the eigenspace of the eigenvalue $\lambda$ could be strictly larger than 1, as the following example illustrates:\\ We take the graph whose vertex matrix is given by \begin{center} $\begin{bmatrix} 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 \end{bmatrix}. $ \end{center} This graph is circulant, and its spectral radius $2$ is an eigenvalue of multiplicity $2$. So the dimension of the corresponding eigenspace is $2$, violating the uniqueness of the KMS state at the inverse critical temperature ${\rm ln}(2)$. }\exmpl \subsection{Quantum automorphism group of graphs as symmetry of graph $C^{\ast}$-algebra} \subsubsection{Compact quantum groups and quantum automorphism groups} \label{qaut} In this subsection we recall the basics of compact quantum groups and their actions on $C^{\ast}$-algebras. The facts collected in this subsection are well known, and we refer the readers to \cite{Van}, \cite{Woro}, \cite{Wang} for details. All the tensor products in this paper are minimal. \bdfn A compact quantum group (CQG) is a pair $(\clq,\Delta)$ such that $\clq$ is a unital $C^{\ast}$-algebra and $\Delta:\clq\raro\clq\ot\clq$ is a unital $C^{\ast}$-homomorphism satisfying\\ (i) $({\rm id}\ot\Delta)\circ\Delta=(\Delta\ot{\rm id})\circ\Delta$.\\ (ii) {\rm Span}$\{\Delta(\clq)(1\ot\clq)\}$ and {\rm Span}$\{\Delta(\clq)(\clq\ot 1)\}$ are dense in $\clq\ot\clq$. \edfn Given a CQG $\clq$, there is a canonical dense Hopf $\ast$-algebra $\clq_{0}$ in $\clq$ on which an antipode $\kappa$ and a counit $\epsilon$ are defined. Given two CQGs $(\clq_{1},\Delta_{1})$ and $(\clq_{2},\Delta_{2})$, a CQG morphism between them is a $C^{\ast}$-homomorphism $\pi:\clq_{1}\raro\clq_{2}$ such that $(\pi\ot\pi)\circ\Delta_{1}=\Delta_{2}\circ\pi$. \bdfn Given a (unital) $C^{\ast}$-algebra $\clc$, a CQG $(\clq,\Delta)$ is said to act faithfully on $\clc$ if there is a unital $C^{\ast}$-homomorphism $\alpha:\clc\raro\clc\ot\clq$ satisfying\\ (i) $(\alpha\ot {\rm id})\circ\alpha=({\rm id}\ot \Delta)\circ\alpha$.\\ (ii) {\rm Span}$\{\alpha(\clc)(1\ot\clq)\}$ is dense in $\clc\ot\clq$.\\ (iii) The $\ast$-algebra generated by the set $\{(\omega\ot{\rm id})\circ\alpha(\clc): \omega\in\clc^{\ast}\}$ is norm-dense in $\clq$. \edfn \bdfn An action $\alpha:\clc\raro\clc\ot\clq$ is said to be ergodic if $\alpha(c)=c\ot 1$ implies $c\in\mathbb{C}1$. \edfn \bdfn \label{statepres} Given an action $\alpha$ of a CQG $\clq$ on a $C^{\ast}$-algebra $\clc$, $\alpha$ is said to preserve a state $\tau$ on $\clc$ if $(\tau\ot{\rm id})\circ\alpha(a)=\tau(a)1$ for all $a\in\clc$. \edfn For a faithful action of a CQG $(\clq,\Delta)$ on a unital $C^{\ast}$-algebra $\clc$, there is a norm-dense $\ast$-subalgebra $\clc_{0}$ of $\clc$ such that the canonical Hopf algebra $\clq_{0}$ coacts on $\clc_{0}$. \bdfn (Def 2.1 of \cite{Bichon}) Given a unital $C^{\ast}$-algebra $\clc$, the quantum automorphism group of $\clc$ is a CQG $(\clq,\Delta)$ acting faithfully on $\clc$ and satisfying the following universal property:\\ \indent If $(\clb,\Delta_{\clb})$ is any CQG acting faithfully on $\clc$, there is a surjective CQG morphism $\pi:\clq\raro\clb$ such that $({\rm id}\ot \pi)\circ\alpha=\beta$, where $\beta:\clc\raro\clc\ot\clb$ is the corresponding action of $(\clb,\Delta_{\clb})$ on $\clc$ and $\alpha$ is the action of $(\clq,\Delta)$ on $\clc$.
\edfn \brmrk In general the universal object might fail to exist in the above category. To ensure existence, one generally assumes that the action preserves some fixed state on the $C^{\ast}$-algebra in the sense of Definition \ref{statepres}. We will not go into further details, as we are not going to use this in this paper. For further details, readers might consult \cite{Wang}. \ermrk \bxmpl \label{S} \rm{If we take a space of $n$ points $X_{n}$, then the quantum automorphism group of the $C^{\ast}$-algebra $C(X_{n})$ is given by the CQG (denoted by $S_{n}^{+}$) which, as a $C^{\ast}$-algebra, is the universal $C^{\ast}$-algebra generated by $\{u_{ij}\}_{i,j=1,\ldots,n}$ satisfying the following relations (see Theorem 3.1 of \cite{Wang}): \begin{displaymath} u_{ij}^{2}=u_{ij},u_{ij}^{\ast}=u_{ij},\sum_{k=1}^{n}u_{ik}=\sum_{k=1}^{n}u_{ki}=1, \ i,j=1,\ldots,n. \end{displaymath} The coproduct on the generators is given by $\Delta(u_{ij})=\sum_{k=1}^{n}u_{ik}\ot u_{kj}$.}\exmpl \subsubsection{Quantum automorphism group of finite graphs and graph $C^{\ast}$-algebras} Recall the definition of a finite, directed graph $\Gamma=((V=v_{1},\ldots,v_{m}),( E= e_{1},\ldots, e_{n}))$ without multiple edges from Subsection \ref{KMSgraph} and the Example \ref{S} of the CQG $S_{n}^{+}$. \bdfn \label{qsymban} $(Q^{\rm aut}_{\rm Ban}(\Gamma),\Delta)$ for a graph $\Gamma$ without multiple edges is defined to be the quotient $S^{+}_{m}/(AD-DA)$, where $A=((u_{ij}))_{i,j=1,\ldots,m}$ and $D$ is the vertex matrix of $\Gamma$. The coproduct on the generators is given by $\Delta(u_{ij})=\sum_{k=1}^{m}u_{ik}\ot u_{kj}$. \edfn For the classical automorphism group ${\rm Aut}(\Gamma)$, the commutative CQG $C({\rm Aut}(\Gamma))$ is generated by the elements $u_{ij}$, where $u_{ij}$ is the function on ${\rm Aut}(\Gamma)$ taking the value $1$ on a permutation which sends the $i$-th vertex to the $j$-th vertex, and the value zero on the other elements of the group. It is a quantum subgroup of $Q^{\rm aut}_{\rm Ban}(\Gamma)$. The surjective CQG morphism $\pi:Q^{\rm aut}_{\rm Ban}(\Gamma)\raro C({\rm Aut}(\Gamma))$ sends generators to generators. \bthm (Lemma 3.1.1 of \cite{Fulton}) The quantum automorphism group $(Q^{\rm aut}_{\rm Ban}(\Gamma),\Delta)$ of a finite graph $\Gamma$ with $n$ edges and $m$ vertices (without multiple edges) is the universal $C^{\ast}$-algebra generated by $(u_{ij})_{i,j=1,\ldots,m}$ satisfying the following relations: \begin{eqnarray} && u_{ij}^{\ast}=u_{ij}, u_{ij}u_{ik}=\delta_{jk}u_{ij}, u_{ji}u_{ki}=\delta_{jk}u_{ji}, \sum_{l=1}^{m}u_{il}=\sum_{l=1}^{m}u_{li}=1, 1\leq i,j,k\leq m \label{1}\\ && u_{s(e_{j})i}u_{t(e_{j})k}=u_{t(e_{j})k}u_{s(e_{j})i}=u_{is(e_{j})}u_{kt(e_{j})}=u_{kt(e_{j})}u_{is(e_{j})}=0, e_{j}\in E, (i,k)\not\in E \label{2} \end{eqnarray} where the coproduct on the generators is given by $\Delta(u_{ij})=\sum_{k=1}^{m}u_{ik}\ot u_{kj}$. The $C^{\ast}$-action on the graph is given by $\alpha(p_{i})=\sum_{j}p_{j}\ot u_{ji}$, where $p_{i}$ is the function which takes the value $1$ at the $i$-th vertex and zero elsewhere. \ethm \brmrk \label{transpose} Since $(Q^{\rm aut}_{\rm Ban}(\Gamma),\Delta)$ is a quantum subgroup of $S_{m}^{+}$, it is a Kac algebra and hence $\kappa(u_{ij})=u_{ji}^{\ast}=u_{ji}$. Applying $\kappa$ to the equation $AD=DA$, we get $A^{T}D=DA^{T}$. \ermrk In analogy with the vertex transitive action of the automorphism group of a graph, we have the following \bdfn A graph $\Gamma$ is said to be quantum vertex transitive if the generators $u_{ij}$ of $Q^{\rm aut}_{\rm Ban}(\Gamma)$ are all non-zero.
\edfn \brmrk \label{qvertex} It is easy to see that if a graph is vertex transitive, then it is quantum vertex transitive. \ermrk \bppsn (Corollary 3.7 of \cite{nonlocal1}) \label{ergodic} The action of $Q_{\rm Ban}^{\rm aut}(\Gamma)$ on $C(V)$ is ergodic if and only if the action is quantum vertex transitive. \eppsn \brmrk For a graph $\Gamma=(V,E)$, when we talk about an ergodic action, we always take the corresponding $C^{\ast}$-algebra to be $C(V)$. \ermrk In the next proposition we shall see that in fact, for a finite, connected graph $\Gamma$ without multiple edges, the CQG $Q_{\rm Ban}^{\rm aut}(\Gamma)$ has a $C^{\ast}$-action on the infinite dimensional $C^{\ast}$-algebra $C^{\ast}(\Gamma)$. \bppsn\label{aut} (see Theorem 4.1 of \cite{Web}) Given a directed graph $\Gamma$ without multiple edges, $Q^{\rm aut}_{\rm Ban}(\Gamma)$ has a $C^{\ast}$-action on $C^{\ast}(\Gamma)$. The action is given by \begin{eqnarray*} &&\alpha(p_{v_{i}})=\sum_{k=1}^{m}p_{v_{k}}\ot u_{ki}\\ && \alpha(S_{j})=\sum_{l=1}^{n}S_{l}\ot u_{s(e_{l})s(e_{j})}u_{t(e_{l})t(e_{j})}. \end{eqnarray*} \eppsn \bppsn \label{stateprojection} Suppose $\Gamma=(V=(v_{1},\ldots,v_{m}),E=(e_{1},\ldots,e_{n}))$ is a finite, directed graph without multiple edges as before. For a ${\rm KMS}_{\beta}$ state $\tau$ on the graph $C^{\ast}$-algebra $C^{\ast}(\Gamma)$, $Q_{\rm Ban}^{\rm aut}(\Gamma)$ preserves $\tau$ if and only if $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$. \eppsn We start by proving the following lemma, which will be used to prove the proposition. In the following lemma, $\Gamma=(V,E)$ is again a finite, directed graph without multiple edges. \blmma \label{vertextransitive} Given any linear functional $\tau$ on $C(V)$, if $Q^{\rm aut}_{\rm Ban}(\Gamma)$ preserves $\tau$, then $\tau(p_{v_{i}})\neq\tau(p_{v_{j}})\Rightarrow u_{ij}=0$. \elmma {\it Proof}:\\ Let $i,j$ be such that $\tau(p_{v_{i}})\neq \tau(p_{v_{j}})$. By the assumption, \begin{eqnarray*} &&(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1\\ &\Rightarrow& \sum_{k}\tau(p_{v_{k}})u_{ki}=\tau(p_{v_{i}})1. \end{eqnarray*} Multiplying both sides of the last equation by $u_{ji}$ and using the orthogonality relations, we get $\tau(p_{v_{j}})u_{ji}=\tau(p_{v_{i}})u_{ji}$, i.e. $(\tau(p_{v_{j}})-\tau(p_{v_{i}}))u_{ji}=0$, and hence $u_{ji}=0$ as $\tau(p_{v_{i}})\neq \tau(p_{v_{j}})$. Applying $\kappa$, we get $u_{ij}=u_{ji}=0$.\qed\vspace{0.2in}\\ {\it Proof of Proposition \ref{stateprojection}}:\\ If $Q^{\rm aut}_{\rm Ban}(\Gamma)$ preserves $\tau$, then trivially $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$. For the converse, given $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$, we need to show that $(\tau\ot{\rm id})\circ\alpha(S_{\mu}S_{\nu}^{\ast})=\tau(S_{\mu}S_{\nu}^{\ast})1$ for all $\mu,\nu\in E^{\ast}$. The proof is similar to that of Theorem 3.5 of \cite{sou_arn2}. It is easy to see that for $|\mu|\neq|\nu|$, $(\tau\ot{\rm id})\circ\alpha(S_{\mu}S_{\nu}^{\ast})=0=\tau(S_{\mu}S_{\nu}^{\ast})$. So let $|\mu|=|\nu|$. For $\mu=\nu=e_{i_{1}}e_{i_{2}}\ldots e_{i_{p}}$, we have $S_{\mu}S_{\mu}^{\ast}=S_{i_{1}}\ldots S_{i_{p}}S_{i_{p}}^{\ast}\ldots S_{i_{1}}^{\ast}$.
So \begin{eqnarray*} (\tau\ot{\rm id})\circ\alpha(S_{\mu}S_{\mu}^{\ast})&=&(\tau\ot{\rm id})(\sum S_{j_{1}}\ldots S_{j_{p}}S_{j_{p}}^{\ast}\ldots S_{j_{1}}^{\ast}\ot u_{s(e_{j_{1}})s(e_{i_{1}})}u_{t(e_{j_{1}})t(e_{i_{1}})}\\ &&\ldots u_{s(e_{j_{p}})s(e_{i_{p}})}u_{t(e_{j_{p}})t(e_{i_{p}})}u_{s(e_{j_{p}})s(e_{i_{p}})} \ldots u_{t(e_{j_{1}})t(e_{i_{1}})}u_{s(e_{j_{1}})s(e_{i_{1}})}) \end{eqnarray*} By the same argument as given in the proof of Theorem 3.5 of \cite{sou_arn2}, for $S_{j_{1}}\ldots S_{j_{p}}=0$, $u_{s(e_{j_{1}})s(e_{i_{1}})}u_{t(e_{j_{1}})t(e_{i_{1}})}\ldots u_{s(e_{j_{p}})s(e_{i_{p}})}u_{t(e_{j_{p}})t(e_{i_{p}})}=0$, and hence the last expression equals \begin{eqnarray*} && \sum e^{-\beta|\mu|}\tau(p_{t(e_{j_{p}})}) u_{s(e_{j_{1}})s(e_{i_{1}})}u_{t(e_{j_{1}})t(e_{i_{1}})}\ldots u_{s(e_{j_{p}})s(e_{i_{p}})}u_{t(e_{j_{p}})t(e_{i_{p}})}\\ &&u_{s(e_{j_{p}})s(e_{i_{p}})}\ldots u_{t(e_{j_{1}})t(e_{i_{1}})}u_{s(e_{j_{1}})s(e_{i_{1}})} \end{eqnarray*} Observe that any ${\rm KMS}_{\beta}$ state restricts to a state on $C(V)$, so that by Lemma \ref{vertextransitive}, for $\tau(p_{t(e_{j_{p}})})\neq\tau(p_{t(e_{i_{p}})})$ we have $u_{t(e_{j_{p}})t(e_{i_{p}})}=0$. Using this, the last summation reduces to \begin{eqnarray*} && e^{-\beta|\mu|}\sum\tau(p_{t(e_{i_{p}})})u_{s(e_{j_{1}})s(e_{i_{1}})}u_{t(e_{j_{1}})t(e_{i_{1}})}\ldots u_{s(e_{j_{p}})s(e_{i_{p}})}u_{t(e_{j_{p}})t(e_{i_{p}})}\\ && u_{s(e_{j_{p}})s(e_{i_{p}})}\ldots u_{t(e_{j_{1}})t(e_{i_{1}})}u_{s(e_{j_{1}})s(e_{i_{1}})}. \end{eqnarray*} Using the arguments from the proof of Theorem 3.13 in \cite{sou_arn} repeatedly, it can be shown that the last summation actually equals $e^{-\beta|\mu|}\tau(p_{t(e_{i_{p}})})=\tau(S_{\mu}S_{\mu}^{\ast})$. Hence \begin{displaymath} (\tau\ot{\rm id})\circ\alpha(S_{\mu}S_{\mu}^{\ast})=\tau(S_{\mu}S_{\mu}^{\ast})1. \end{displaymath} With similar reasoning it can easily be verified that for $\mu\neq\nu$, $(\tau\ot{\rm id})\circ\alpha(S_{\mu}S_{\nu}^{\ast})=0=\tau(S_{\mu}S_{\nu}^{\ast})1$. Hence, by the linearity and continuity of $\tau$, for any $a\in C^{\ast}(\Gamma)$, $(\tau\ot{\rm id})\circ\alpha(a)=\tau(a)1$.\qed\vspace{0.1in}\\ If we apply Proposition \ref{stateprojection} to the action of the classical automorphism group of a graph on the corresponding graph $C^{\ast}$-algebra, we get the following \blmma \label{inv_1} Given a ${\rm KMS}_{\beta}$ state $\tau$ on $C^{\ast}(\Gamma)$, we denote the vector $(\tau(p_{v_{1}}),\ldots,\tau(p_{v_{m}}))$ by $\cln^{\tau}$. If we denote the permutation matrix corresponding to an element $g\in{\rm Aut}(\Gamma)$ by $B$, then ${\rm Aut}(\Gamma)$ preserves $\tau$ if and only if $B\cln^{\tau}=\cln^{\tau}$ for every $g\in{\rm Aut}(\Gamma)$. \elmma {\it Proof}:\\ This follows from the easy observation that $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ implies $B\cln^{\tau}=\cln^{\tau}$ for the classical automorphism group of the graph.\qed \section{Invariance of KMS states under the symmetry of graphs} In this section we shall study the invariance of KMS states of certain classes of graph $C^{\ast}$-algebras under classical and quantum symmetry, as mentioned in the introduction. \subsection{Strongly connected graphs} Recall the unique KMS state on $C^{\ast}(\Gamma)$ for a strongly connected graph $\Gamma$ with vertex matrix $D$. We denote the $ij$-th entry of $D$ by $d_{ij}$. The unique KMS state at the inverse critical temperature ${\rm ln}(\rho(D))$ is determined by the unique normalized Perron-Frobenius eigenvector of $D$ corresponding to the eigenvalue $\rho(D)$.
If the state is denoted by $\tau$, the eigenvector is given by $((\tau(p_{v_{i}})))_{i=1,\ldots,m}$, where $m$ is the number of vertices. Now recall the action of $Q^{\rm aut}_{\rm Ban}(\Gamma)$ on $C^{\ast}(\Gamma)$. We continue to denote the matrix $((u_{ij}))$ by $A$. \blmma \label{0state} $(\tau\ot {\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$. \elmma {\it Proof}:\\ For a state $\phi$ on $Q^{\rm aut}_{\rm Ban}(\Gamma)$, we denote by ${\bf v}_{\phi}$ the vector whose $i$-th entry is $(\tau\ot\phi)\circ\alpha(p_{v_{i}})$. Then \begin{eqnarray*} (D{\bf v}_{\phi})_{i}&=&\sum_{j}d_{ij}(\tau\ot\phi)\circ\alpha(p_{v_{j}})\\ &=& \sum_{j,k}\tau(p_{v_{k}})\phi(d_{ij}u_{kj})\\ &=& \sum_{k}\tau(p_{v_{k}})\phi(\sum_{j}d_{ij}u_{kj})\\ &=& \sum_{k}\tau(p_{v_{k}})\phi(\sum_{j}u_{ji}d_{jk}) \ \ (DA^{T}=A^{T}D \mbox{ by Remark \ref{transpose}})\\ &=& \sum_{j,k}d_{jk}\tau(p_{v_{k}})\phi(u_{ji})\\ &=&\rho(D)\sum_{j}\tau(p_{v_{j}})\phi(u_{ji}) \ \ (D((\tau(p_{v_{i}})))^{T}=\rho(D)((\tau(p_{v_{i}})))^{T})\\ &=& \rho(D) (\tau\ot\phi)\circ\alpha(p_{v_{i}}). \end{eqnarray*} Hence ${\bf v}_{\phi}$ is an eigenvector of $D$ corresponding to the eigenvalue $\rho(D)$. By the one-dimensionality of the eigenspace, there is a constant $C_{\phi}$ such that $(\tau\ot \phi)\circ\alpha(p_{v_{i}})=C_{\phi}\tau(p_{v_{i}})$ for all $i=1,\ldots,m$. To determine the constant $C_{\phi}$, we take the summation over $i$ on both sides and get \begin{eqnarray*} &&\sum_{i}(\tau\ot\phi)\circ\alpha(p_{v_{i}})=C_{\phi}\sum_{i}\tau(p_{v_{i}})\\ &\Rightarrow& \sum_{i,j}\tau(p_{v_{j}})\phi(u_{ji})=C_{\phi}\\ &\Rightarrow& \sum_{j}\tau(p_{v_{j}})\sum_{i}\phi(u_{ji})=C_{\phi}\\ &\Rightarrow& C_{\phi}=1. \end{eqnarray*} From the above calculation it is clear that $C_{\phi}=1$ for all states $\phi$ on $Q^{\rm aut}_{\rm Ban}(\Gamma)$. Hence $\phi((\tau\ot{\rm id})\circ\alpha(p_{v_{i}}))=\phi(\tau(p_{v_{i}})1)$ for all $i$ and all states $\phi$, which implies that $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$.\qed \bppsn \label{statepreserve} For a strongly connected graph $\Gamma$, $Q^{\rm aut}_{\rm Ban}(\Gamma)$ preserves the unique KMS state of $C^{\ast}(\Gamma)$. \eppsn {\it Proof}:\\ This follows from Lemma \ref{0state} and Proposition \ref{stateprojection}.\qed \brmrk In light of the above Proposition \ref{statepreserve}, for strongly connected graphs we can relax the condition on the graph in \cite{sou_arn2}. In that paper it was assumed that all the row sums of the vertex matrix are equal, to ensure that $Q^{\rm aut}_{\rm Ban}(\Gamma)$ belongs to the category $\clc^{\Gamma}_{\tau}$ (see \cite{sou_arn2} for the notation) for the unique KMS state $\tau$ on a strongly connected graph $\Gamma$. Now we have that for a strongly connected graph $\Gamma$ with its unique KMS state $\tau$, the category $\clc_{\tau}^{\Gamma}$ contains $Q^{\rm aut}_{\rm Ban}(\Gamma)$, and for this one does not require all the row sums to be equal. \ermrk We end this subsection with a proposition about the non-ergodicity of the action of $Q_{\rm Ban}^{\rm aut}(\Gamma)$ on $C(V)$ for a strongly connected graph $\Gamma=(V,E)$. Note that the following proposition does not deal with states on the infinite dimensional $C^{\ast}$-algebra $C^{\ast}(\Gamma)$. \bppsn \label{erg} For a strongly connected graph $\Gamma$, if the Perron-Frobenius eigenspace is not spanned by $(1,\ldots,1)$, then the action of $Q^{\rm aut}_{\rm Ban}(\Gamma)$ is non-ergodic.
\eppsn {\it Proof}:\\ It follows from Lemma \ref{vertextransitive}, Lemma \ref{0state} and Proposition \ref{ergodic}.\qed \subsection{Circulant graphs} Consider a finite graph $\Gamma=(V,E)$ with $m$ vertices $v_{1},\ldots,v_{m}$. Recall the notation $\cln^{\tau}$ for a ${\rm KMS}_{\beta}$ state $\tau$ on $C^{\ast}(\Gamma)$. \blmma \label{trans} Given a graph $\Gamma$ such that ${\rm Aut}(\Gamma)$ acts transitively on its vertices, $B\cln^{\tau}=\cln^{\tau}$ for all $B\in{\rm Aut}(\Gamma)$ if and only if $\cln^{\tau}_{i}=\cln^{\tau}_{j}$ for all $i,j=1,\ldots,m$. \elmma {\it Proof}:\\ If $\cln^{\tau}_{i}=\cln^{\tau}_{j}$ for all $i,j=1,\ldots,m$, then trivially $B\cln^{\tau}=\cln^{\tau}$ for all $B\in{\rm Aut}(\Gamma)$. For the converse, let $\cln^{\tau}_{i}\neq \cln^{\tau}_{j}$ for some $i,j$. Since the action of the automorphism group is transitive, there is some $B\in{\rm Aut}(\Gamma)$ such that $B(v_{i})=v_{j}$, and hence $B\cln^{\tau}\neq \cln^{\tau}$.\qed\vspace{0.1in}\\ Now recall from the discussion following Corollary \ref{not_unique} and from Lemma \ref{circonetemp} that for a circulant graph, KMS states exist only at the critical inverse temperature, but they are not necessarily unique. We shall prove that if we further assume the invariance of such a state under the action of the automorphism group of the graph, then it is unique. \bppsn For a circulant graph $\Gamma$ there exists a unique ${\rm Aut}(\Gamma)$-invariant KMS state on $C^{\ast}(\Gamma)$. \eppsn {\it Proof}:\\ Since for a circulant graph the automorphism group acts transitively on the set of vertices, by Lemma \ref{trans} and Lemma \ref{inv_1} a ${\rm KMS}_{\beta}$ state $\tau$ is ${\rm Aut}(\Gamma)$-invariant if and only if $\tau(p_{v_{i}})=\frac{1}{m}$ for all $i$. This, coupled with the fact that $(\frac{1}{m},\ldots,\frac{1}{m})$ is an eigenvector corresponding to the eigenvalue $\lambda$ ($=$ the spectral radius), finishes the proof of the proposition.\qed \brmrk \label{future} The group-invariant KMS state is also invariant under the action of the quantum automorphism group of the underlying graph. Since $\tau(p_{v_{i}})=\tau(p_{v_{j}})$ for all $i,j$, it is easy to see that for the action of $Q^{\rm aut}_{\rm Ban}(\Gamma)$, $(\tau\ot{\rm id})\circ\alpha(p_{v_{i}})=\tau(p_{v_{i}})1$ for all $i=1,\ldots,m$. Hence an application of Proposition \ref{stateprojection} finishes the proof of the claim. \ermrk \subsection{Graph of Mermin-Peres magic square game} We start this subsection by clarifying a few notations to be used in it. Given an undirected graph $\Gamma$, we make it directed by declaring that both $(i,j)$ and $(j,i)$ are in the edge set whenever there is an edge between the two vertices $v_{i}$ and $v_{j}$. The vertex matrix of such a directed graph is symmetric by definition. In this subsection we use the notation $\overrightarrow{\Gamma}$ for the directed graph coming from an undirected graph $\Gamma$ in this way. $\Gamma$ will always denote an undirected graph. \brmrk \label{arrow} By definition, $Q^{\rm aut}_{\rm Ban}(\overrightarrow{\Gamma})\cong Q^{\rm aut}_{\rm Ban}(\Gamma)$ and hence ${\rm Aut}(\overrightarrow{\Gamma})\cong{\rm Aut}(\Gamma)$ (see \cite{Ban}). \ermrk Given two graphs $\Gamma_{1}=(V_{1},E_{1}),\Gamma_{2}=(V_{2},E_{2})$, their disjoint union $\Gamma_{1}\cup\Gamma_{2}$ is defined to be the graph $\Gamma=(V,E)$ such that $V=V_{1}\cup V_{2}$.
There is an edge between two vertices $v_{i}, v_{j}\in V_{1}\cup V_{2}$ if both vertices belong to either $\Gamma_{1}$ or $\Gamma_{2}$ and they have an edge in the corresponding graph. \bppsn \label{iso} Let $\Gamma_{1}, \Gamma_{2}$ be two non-isomorphic connected graphs. Then the automorphism group of $\overrightarrow{\Gamma_{1}\cup\Gamma_{2}}$ is given by ${\rm Aut}(\overrightarrow{\Gamma_{1}})\times{\rm Aut}(\overrightarrow{\Gamma_{2}})$. \eppsn {\it Proof}:\\ Combining Theorem 2.5 of \cite{thesis} and Remark \ref{arrow}, the result follows. \qed \bppsn \label{infKMS} Let $\Gamma_{1}$ and $\Gamma_{2}$ be two non-isomorphic connected graphs such that $\overrightarrow{\Gamma_{1}}$ and $\overrightarrow{\Gamma_{2}}$ have symmetric vertex matrices $D_{1}$ and $D_{2}$ with equal spectral radius, say $\lambda$. Then for the graph $\Gamma=\Gamma_{1}\cup \Gamma_{2}$, $C^{\ast}(\overrightarrow{\Gamma})$ has infinitely many KMS states at the inverse critical temperature ${\rm ln}(\lambda)$, all of which are invariant under the action of ${\rm Aut}(\overrightarrow{\Gamma})\cong{\rm Aut}(\overrightarrow{\Gamma_{1}})\times {\rm Aut}(\overrightarrow{\Gamma_{2}})$. \eppsn To prove the proposition we require the following \blmma \label{joint} Let $A\in M_{n}(\mathbb{C})$ and $B\in M_{m}(\mathbb{C})$. Then the spectral radius of the matrix $\begin{bmatrix} A & 0_{n\times m}\\ 0_{m\times n} & B \end{bmatrix}$ is equal to ${\rm max}\{\rho(A),\rho(B)\}$. \elmma {\it Proof}:\\ It follows from the simple observation that any eigenvalue of the matrix $\begin{bmatrix} A & 0_{n\times m}\\ 0_{m\times n} & B \end{bmatrix}$ is either an eigenvalue of $A$ or an eigenvalue of $B$.\qed\vspace{0.2in}\\ {\it Proof of Proposition \ref{infKMS}}:\\ We assume that $\overrightarrow{\Gamma_{1}}$ has $n$ vertices and $\overrightarrow{\Gamma_{2}}$ has $m$ vertices. Let us denote the vertex matrix of $\overrightarrow{\Gamma}$ by $D$. $D$ is given by the matrix \begin{center}$\begin{bmatrix} D_{1} & 0_{n\times m}\\ 0_{m\times n} & D_{2} \end{bmatrix}$.\end{center} Then the spectral radius of $D$ is equal to $\lambda$ by Lemma \ref{joint}. Also, it is easy to see that the spectral radius is an eigenvalue of the matrix $D$. Now $\lambda$ has a one-dimensional eigenspace for $D_{1}$ spanned by, say, $w_{1}$, and a one-dimensional eigenspace for $D_{2}$ spanned by, say, $w_{2}$, as both graphs are connected and hence strongly connected as directed graphs. We take both eigenvectors normalized for convenience. Then for $D$, the eigenspace corresponding to $\lambda$ is two-dimensional, spanned by the vectors ${\bf w_{1}}=(w_{1},0_{m})$ and ${\bf w_{2}}=(0_{n},w_{2})$, where $0_{k}$ is the zero $k$-tuple. So the set of eigenvectors of $D$ corresponding to the eigenvalue $\lambda$ is given by $\{\xi{\bf w_{1}}+\eta{\bf w_{2}}:(\xi,\eta)\in\mathbb{C}^{2}\setminus\{(0,0)\}\}$. For any $(\xi,\eta)\in\mathbb{C}^{2}$, $\xi{\bf w_{1}}+\eta{\bf w_{2}}=(\xi w_{1},\eta w_{2})$. It is easy to see that there are infinitely many choices of $\xi,\eta$ such that the corresponding eigenvector is normalized with all its entries non-negative, which in turn give rise to infinitely many KMS states. The set of such normalized vectors is given by $\{(\xi w_{1},(1-\xi)w_{2}):0\leq\xi\leq 1\}$. We shall show that any $B\in{\rm Aut}(\overrightarrow{\Gamma})$ keeps such a normalized eigenvector invariant. Let ${\bf w}=(\xi w_{1},(1-\xi)w_{2})$ be one such choice.
By Proposition \ref{iso}, any $B\in {\rm Aut}(\overrightarrow{\Gamma})$ can be written in the matrix form $\begin{bmatrix} B_{1} & 0_{n\times m}\\ 0_{m\times n} & B_{2}\end{bmatrix}$ for $B_{i}\in {\rm Aut}(\overrightarrow{\Gamma_{i}})$, $i=1,2$. Then $B{\bf w}=(\xi B_{1}w_{1},(1-\xi) B_{2}w_{2})$. Since $w_{1}, w_{2}$ are the Perron-Frobenius eigenvectors of $D_{1}, D_{2}$ respectively, with both graphs $\overrightarrow{\Gamma_{1}}, \overrightarrow{\Gamma_{2}}$ strongly connected, by Lemma \ref{0state} we have $B_{i}(w_{i})=w_{i}$ for $i=1,2$. So $B{\bf w}=(\xi w_{1},(1-\xi)w_{2})={\bf w}$. Now an application of Lemma \ref{inv_1} completes the proof of the proposition.\qed\vspace{0.1in}\\ Now we turn to the main object of study of this subsection. \subsubsection{Linear Binary Constraint System (LBCS)} A linear binary constraint system (LBCS) $\clf$ consists of a family of binary variables $x_{1},\ldots,x_{n}$ and constraints $C_{1},\ldots,C_{m}$, where each $C_{l}$ is a linear equation over $\mathbb{F}_{2}$ in some subset of the variables, i.e. each $C_{l}$ is of the form $\sum_{x_{i}\in S_{l}}x_{i}=b_{l}$ for some $S_{l}\subset\{x_{1},\ldots,x_{n}\}$. An LBCS is said to be {\it satisfiable} if there is an assignment of values from $\mathbb{F}_{2}$ to the variables $x_{i}$ such that every constraint $C_{l}$ is satisfied. To every LBCS there is an associated LBCS game. For details on LBCS games readers are referred to \cite{nonlocal1} and \cite{nonlocal2}. For the following definition we need the concept of a perfect quantum strategy for nonlocal games. We shall not discuss it here, and readers are again referred to \cite{nonlocal1}, \cite{nonlocal2} for details on quantum strategies. \bdfn An LBCS is said to be quantum satisfiable if there exists a perfect quantum strategy for the corresponding LBCS game. \edfn Now we shall give an example of an LBCS which is quantum satisfiable but not satisfiable. Consider the following LBCS: \begin{eqnarray*} && x_{1}+x_{2}+x_{3}=0 \ \ \ \ \ \ \ \ \ \ \ x_{1}+x_{4}+x_{7}=0\\ && x_{4}+x_{5}+x_{6}=0 \ \ \ \ \ \ \ \ \ \ \ x_{2}+x_{5}+x_{8}=0\\ && x_{7}+x_{8}+x_{9}=0 \ \ \ \ \ \ \ \ \ \ \ x_{3}+x_{6}+x_{9}=1, \end{eqnarray*} where the addition is over $\mathbb{F}_{2}$. It is easy to see that the above LBCS is not satisfiable: each variable appears in exactly two of the constraints, so summing up all the equations modulo $2$ gives $0=1$. The fact that it is quantum satisfiable is more non-trivial. For that we refer the reader to \cite{nonlocal2}. The game corresponding to the above LBCS is called the Mermin-Peres magic square game. To every LBCS $\clf$ one can associate a graph (see Section 6.2 of \cite{nonlocal2}), to be denoted by $\clg(\clf)$. \bdfn \label{homo} Given an LBCS $\clf$, its homogenization $\clf_{0}$ is defined to be the LBCS obtained by assigning zero to the right hand side of every constraint $C_{l}$. \edfn We have the following (Theorems 6.2 and 6.3 of \cite{nonlocal2}) \bthm \label{quantum} Given an LBCS $\clf$, $\clf$ is (quantum) satisfiable if and only if the graphs $\clg(\clf)$ and $\clg(\clf_{0})$ are (quantum) isomorphic. \ethm From now on, $\clf$ will always mean the LBCS corresponding to the Mermin-Peres magic square game. In light of Theorem \ref{quantum} and the discussion just before Definition \ref{homo}, we readily see that for the Mermin-Peres magic square game the corresponding graphs $\clg(\clf)$ and $\clg(\clf_{0})$ are quantum isomorphic, but not isomorphic.
Both the graphs are vertex transitive (in fact they are Cayley as mentioned in \cite{nonlocal1}) and hence quantum vertex transitive by Remark \ref{qvertex}. Combining the facts that $\clg(\clf)$ and $\clg(\clf_{0})$ are quantum isomorphic and quantum vertex transitive with Lemma 4.11 of \cite{nonlocal1}, we get \blmma \label{qvt} For the LBCS $\clf$ corresponding to the Mermin Peres magic square game, the disjoint union of $\clg(\clf)$ and $\clg(\clf_{0})$ is quantum vertex transitive. \elmma By Remark \ref{arrow}, \bcrlre \label{qvt1} $\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})}$ is quantum vertex transitive. \ecrlre \begin{tikzpicture}[scale=0.5, transform shape] \tikzmath{\x1 = 0; \y1 =2; \y2=-10;} \path (0,2) node[circle,draw](1){000}; \path (2,2) node[circle,draw](2){011}; \path (4,2) node[circle,draw](3){101}; \path (6,2) node[circle,draw](4){110}; \path (10,2) node[circle,draw](5){000}; \path (12,2) node[circle,draw](6){011}; \path (14,2) node[circle,draw](7){101}; \path (16,2) node[circle,draw](8){110}; \path (20,2) node[circle,draw](9){000}; \path (22,2) node[circle,draw](10){011}; \path (24,2) node[circle,draw](11){101}; \path (26,2) node[circle,draw](12){110}; \path (0,-10) node[circle,draw](13){000}; \path (2,-10) node[circle,draw](14){011}; \path (4,-10) node[circle,draw](15){101}; \path (6,-10) node[circle,draw](16){110}; \path (10,-10) node[circle,draw](17){000}; \path (12,-10) node[circle,draw](18){011}; \path (14,-10) node[circle,draw](19){101}; \path (16,-10) node[circle,draw](20){110}; \path (20,-10) node[circle,draw](21){111}; \path (22,-10) node[circle,draw](22){100}; \path (24,-10) node[circle,draw](23){010}; \path (26,-10) node[circle,draw](24){001}; \draw (1) .. controls (1,3) and (3,3) .. (3); \draw (1) .. controls (1,3.5) and (5,3.5) .. (4); \draw (2) .. controls (3,3) and (5,3) .. (4); \draw (5) .. controls (11,3) and (13,3) .. (7); \draw (5) .. controls (11,3.5) and (15,3.5) .. (8); \draw (6) .. controls (13,3) and (15,3) .. (8); \draw (9) .. controls (21,3) and (23,3) .. (11); \draw (9) .. controls (21,3.5) and (25,3.5) .. (12); \draw (10) .. controls (23,3) and (25,3) .. (12); \draw (13) .. controls (1,-11) and (3,-11) .. (15); \draw (13) .. controls (1,-11.5) and (5,-11.5) .. (16); \draw (14) .. controls (3,-11) and (5,-11) .. (16); \draw (17) .. controls (11,-11) and (13,-11) .. (19); \draw (17) .. controls (11,-11.5) and (15,-11.5) .. (20); \draw (18) .. controls (13,-11) and (15,-11) .. (20); \draw (21) .. controls (21,-11) and (23,-11) .. (23); \draw (21) .. controls (21,-11.5) and (25,-11.5) .. (24); \draw (22) .. controls (23,-11) and (25,-11) .. 
(24); \draw (1) -- (2); \draw (2) -- (3); \draw (3) -- (4); \draw (5) -- (6); \draw (6) -- (7); \draw (7) -- (8); \draw (9) -- (10); \draw (10) -- (11); \draw (11) -- (12); \draw (13) -- (14); \draw (14) -- (15); \draw (15) -- (16); \draw (17) -- (18); \draw (18) -- (19); \draw (19) -- (20); \draw (21) -- (22); \draw (22) -- (23); \draw (23) -- (24); \foreach \x in {15,16,19,20,21,22} \draw (1) -- (\x); \foreach \x in {15,16,17,18,23,24} \draw (2) -- (\x); \foreach \x in {13,14,19,20,23,24} \draw (3) -- (\x); \foreach \x in {13,14,17,18,21,22} \draw (4) -- (\x); \foreach \x in {14,16,18,20,21,23} \draw (5) -- (\x); \foreach \x in {14,16,17,19,22,24} \draw (6) -- (\x); \foreach \x in {13,15,18,20,22,24} \draw (7) -- (\x); \foreach \x in {13,15,17,19,21,23} \draw (8) -- (\x); \foreach \x in {14,15,18,19,21,24} \draw (9) -- (\x); \foreach \x in {14,16,17,20,22,23} \draw (10) -- (\x); \foreach \x in {13,16,18,19,22,23} \draw (11) -- (\x); \foreach \x in {13,16,17,20,21,24} \draw (12) -- (\x); \node at (3,3.8) {\Large{$x_{1}+x_2+x_3=0$}}; \node at (13,3.8) {\Large{$x_{4}+x_5+x_6=0$}}; \node at (23,3.8) {\Large{$x_{7}+x_8+x_9=0$}}; \node at (3,-11.8) {\Large{$x_{1}+x_4+x_7=0$}}; \node at (13,-11.8) {\Large{$x_{2}+x_5+x_8=0$}}; \node at (23,-11.8) {\Large{$x_{3}+x_6+x_9=1$}}; \end{tikzpicture} \begin{center} {Figure 1: Graph $\clg(\clf)$} \end{center} \begin{tikzpicture}[scale=0.5, transform shape] \tikzmath{\x1 = 0; \y1 =2; \y2=-10;} \path (0,2) node[circle,draw](1){000}; \path (2,2) node[circle,draw](2){011}; \path (4,2) node[circle,draw](3){101}; \path (6,2) node[circle,draw](4){110}; \path (10,2) node[circle,draw](5){000}; \path (12,2) node[circle,draw](6){011}; \path (14,2) node[circle,draw](7){101}; \path (16,2) node[circle,draw](8){110}; \path (20,2) node[circle,draw](9){000}; \path (22,2) node[circle,draw](10){011}; \path (24,2) node[circle,draw](11){101}; \path (26,2) node[circle,draw](12){110}; \path (0,-10) node[circle,draw](13){000}; \path (2,-10) node[circle,draw](14){011}; \path (4,-10) node[circle,draw](15){101}; \path (6,-10) node[circle,draw](16){110}; \path (10,-10) node[circle,draw](17){000}; \path (12,-10) node[circle,draw](18){011}; \path (14,-10) node[circle,draw](19){101}; \path (16,-10) node[circle,draw](20){110}; \path (20,-10) node[circle,draw](21){111}; \path (22,-10) node[circle,draw](22){100}; \path (24,-10) node[circle,draw](23){010}; \path (26,-10) node[circle,draw](24){001}; \draw (1) .. controls (1,3) and (3,3) .. (3); \draw (1) .. controls (1,3.5) and (5,3.5) .. (4); \draw (2) .. controls (3,3) and (5,3) .. (4); \draw (5) .. controls (11,3) and (13,3) .. (7); \draw (5) .. controls (11,3.5) and (15,3.5) .. (8); \draw (6) .. controls (13,3) and (15,3) .. (8); \draw (9) .. controls (21,3) and (23,3) .. (11); \draw (9) .. controls (21,3.5) and (25,3.5) .. (12); \draw (10) .. controls (23,3) and (25,3) .. (12); \draw (13) .. controls (1,-11) and (3,-11) .. (15); \draw (13) .. controls (1,-11.5) and (5,-11.5) .. (16); \draw (14) .. controls (3,-11) and (5,-11) .. (16); \draw (17) .. controls (11,-11) and (13,-11) .. (19); \draw (17) .. controls (11,-11.5) and (15,-11.5) .. (20); \draw (18) .. controls (13,-11) and (15,-11) .. (20); \draw (21) .. controls (21,-11) and (23,-11) .. (23); \draw (21) .. controls (21,-11.5) and (25,-11.5) .. (24); \draw (22) .. controls (23,-11) and (25,-11) .. 
(24); \draw (1) -- (2); \draw (2) -- (3); \draw (3) -- (4); \draw (5) -- (6); \draw (6) -- (7); \draw (7) -- (8); \draw (9) -- (10); \draw (10) -- (11); \draw (11) -- (12); \draw (13) -- (14); \draw (14) -- (15); \draw (15) -- (16); \draw (17) -- (18); \draw (18) -- (19); \draw (19) -- (20); \draw (21) -- (22); \draw (22) -- (23); \draw (23) -- (24); \foreach \x in {15,16,19,20,23,24} \draw (1) -- (\x); \foreach \x in {15,16,17,18,20,21} \draw (2) -- (\x); \foreach \x in {13,14,19,20,21,22} \draw (3) -- (\x); \foreach \x in {13,14,17,18,23,24} \draw (4) -- (\x); \foreach \x in {14,16,18,20,22,24} \draw (5) -- (\x); \foreach \x in {14,16,17,19,21,23} \draw (6) -- (\x); \foreach \x in {13,15,18,20,21,23} \draw (7) -- (\x); \foreach \x in {13,15,17,19,22,24} \draw (8) -- (\x); \foreach \x in {14,15,18,19,22,23} \draw (9) -- (\x); \foreach \x in {14,15,17,20,21,24} \draw (10) -- (\x); \foreach \x in {13,16,18,19,21,24} \draw (11) -- (\x); \foreach \x in {13,16,17,20,22,23} \draw (12) -- (\x); \node at (3,3.8) {\Large{$x_{1}+x_2+x_3=0$}}; \node at (13,3.8) {\Large{$x_{4}+x_5+x_6=0$}}; \node at (23,3.8) {\Large{$x_{7}+x_8+x_9=0$}}; \node at (3,-11.8) {\Large{$x_{1}+x_4+x_7=0$}}; \node at (13,-11.8) {\Large{$x_{2}+x_5+x_8=0$}}; \node at (23,-11.8) {\Large{$x_{3}+x_6+x_9=0$}}; \end{tikzpicture} \begin{center} {Figure 2: Graph $\clg(\clf_{0})$} \end{center} It can be verified that both the graphs $\clg(\clf)$ and $\clg(\clf_{0})$ are connected, with $24$ vertices each, and that the vertex matrices of the graphs $\overrightarrow{\clg(\clf)}$ and $\overrightarrow{\clg(\clf_{0})}$ have spectral radius $9$. Then, since they are non-isomorphic, by Proposition \ref{infKMS}, $C^{\ast}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$ has infinitely many KMS states at the inverse critical temperature ${\rm ln}(9)$, all of which are invariant under the action of the classical automorphism group of $\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})}$. But as mentioned earlier, if we further assume that a KMS state at the critical inverse temperature is invariant under the action of $Q^{\rm aut}_{\rm Ban}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$, then it is necessarily unique. We prove this in the next theorem. \bthm For the LBCS $\clf$ corresponding to the Mermin-Peres magic square game, \\ $C^{\ast}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$ has a unique $Q^{\rm aut}_{\rm Ban}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$-invariant KMS state $\tau$ given by \begin{displaymath} \tau(S_{\mu}S_{\nu}^{\ast})=\delta_{\mu,\nu}\frac{1}{9^{|\mu|}\cdot 48}. \end{displaymath} \ethm {\it Proof}:\\ Since the graph $\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})}$ is quantum vertex transitive by Corollary \ref{qvt1}, for any KMS state $\tau$ on $C^{\ast}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$ at the critical inverse temperature ${\rm ln}(9)$ which is preserved by $Q^{\rm aut}_{\rm Ban}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$, we have $\tau(p_{v_{i}})=\tau(p_{v_{j}})$ for all $i,j$ (see Lemma \ref{vertextransitive}). That forces $\tau(p_{v_{i}})$ to be $\frac{1}{48}$ for all $i=1,\ldots,48$. Since $(\frac{1}{48},\ldots,\frac{1}{48})$ is an eigenvector corresponding to the eigenvalue $9$, there is a unique KMS state on $C^{\ast}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$ at the inverse critical temperature ${\rm ln}(9)$, satisfying (see (a) of Proposition \ref{exist_KMS}) \begin{displaymath} \tau(S_{\mu}S_{\nu}^{\ast})=\delta_{\mu,\nu}\frac{1}{9^{|\mu|}\cdot 48}.
\end{displaymath} $Q^{\rm aut}_{\rm Ban}(\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})})$ preserves the above KMS state by the same line of argument as given in Remark \ref{future}. To complete the proof, we need to show that the only possible inverse temperature where a KMS state can occur is the critical inverse temperature. For that, first notice that the graph $\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})}$ is without sink. We have already observed that the spectral radius $9$ has an eigenvector with all entries strictly positive (the column vector with all its entries $\frac{1}{48}$). Also, the vertex matrix of the graph $\overrightarrow{\clg(\clf)\cup\clg(\clf_{0})}$ is symmetric, implying that $(\frac{1}{48},\ldots,\frac{1}{48})D=9(\frac{1}{48},\ldots,\frac{1}{48})$. Hence an application of Lemma \ref{onetemp} finishes the proof of the theorem. \qed \section*{Concluding remarks} 1. We conjecture that the converse of Proposition \ref{erg} is true, i.e. a strongly connected graph $\Gamma$ is quantum vertex transitive if and only if the Perron-Frobenius eigenspace is spanned by the vector $(1,\ldots,1)$. Even if the conjecture is false, it seems hard to find an example of a strongly connected graph with a `small' number of vertices whose Perron-Frobenius eigenspace is spanned by $(1,\ldots,1)$ but which fails to be quantum vertex transitive. \vspace{0.1in}\\ 2. In all the examples considered in this paper, KMS states occur only at the inverse critical temperature. But in general it might be interesting to see if some natural symmetry could also fix the inverse temperature. In this context, one can possibly look at graphs with sink, which have a richer supply of KMS states (see \cite{watatani}).\vspace{0.1in}\\ {\bf Acknowledgement}: The first author acknowledges support from the Department of Science and Technology, India (DST/INSPIRE/04/2016/002469). The second author acknowledges support from the Science and Engineering Research Board, India (PDF/2017/001795). Both the authors would like to thank Malay Ranjan Biswal and Sruthi C. K. for helping us to draw the figures as well as to find the eigenvalues of the vertex matrices coming from the Mermin-Peres magic square game using Python.
{ "config": "arxiv", "file": "1806.09167.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: Differential and Integral calculus. QUESTION [0 upvotes]: Can anyone here explain to me why we take the centre of mass of a conical shell using the slant height and $dl$, whereas the centre of mass of a solid cone is calculated using the vertical height and $dh$? When we try to derive the formula for the surface area of a cone, we can take the slant height into consideration, but this is not the case when we try to find its volume. Don't you think that the amount of approximation in both cases is the same? I discussed this a lot with my teachers, but the discussion went completely against my opinion, so please help. What I think is that the volume of a cone must be somewhere between $\frac{1}{3}\pi r^2h$ and $\frac{1}{3}\pi r^2l$. REPLY [1 votes]: You have to understand that the mass of a hollow cone is only at the surface (we consider its surface to be of thickness $t$ where $t\to0$, because we take the product of two infinitesimals $dx\,dy\to0$), so we can't take the vertical height and neglect the surface mass (when taking the vertical height, what is neglected is a thin cone whose thickness $t\to0$). We can do this in a solid cone, where even if we remove a thin hollow cone, the centre of mass remains the same (note that the thickness of the removed shell is $t\to0$). Crude explanation: when you take the vertical height, the mass on the surface is neglected; but when you take the vertical height in a solid cone of the same dimensions, what is neglected is very small in comparison to the rest of the cone.
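To make the size of the two errors quantitative, here is a sketch in my own framing and notation (not the original answerer's): put the apex at the origin, measure $z$ down the axis, and let $r(z)=\frac{r}{h}z$ be the radius of the cross-section at depth $z$. Slicing the solid cone into disks of thickness $dz$ gives $$V=\int_{0}^{h}\pi r(z)^{2}\,dz=\frac{\pi r^{2}}{h^{2}}\int_{0}^{h}z^{2}\,dz=\frac{1}{3}\pi r^{2}h,$$ and the true slice between $z$ and $z+dz$ (a thin frustum) differs from the disk $\pi r(z)^{2}\,dz$ only by a rim of volume $O(dz^{2})$; summing about $h/dz$ slices, the total error is $O(dz)$ and vanishes in the limit, so the vertical element $dh$ is exact for volume. For the lateral surface, the analogous cylinder-band approximation $2\pi r(z)\,dz$ is off by a constant factor at every slice, because the true band is a frustum of area $2\pi r(z)\,dl$ with $dl=\frac{l}{h}\,dz$: a first-order error that does not disappear in the limit. That is why surface area (and the centre of mass of a shell, which is weighted by area) must be computed with $dl$, while volume (and the centre of mass of a solid, weighted by volume) may be computed with $dh$. In particular the volume is exactly $\frac{1}{3}\pi r^2h$, not something strictly between $\frac{1}{3}\pi r^2h$ and $\frac{1}{3}\pi r^2l$.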
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 932183, "subset_name": null }
TITLE: How are real particles created? QUESTION [1 upvotes]: The textbooks on quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of the vacuum is wrong. Instead, according to QFT, those virtual particles are unobservable and are just a mathematical picture of the perturbation expansion of the propagator. What I have been wondering is: how did the real particles, which are observable, get created? How does QFT describe pair production, in particular starting with the vacuum and ending with a real, on-shell particle-antiparticle pair? Can anybody explain this to me and point me to some textbooks or articles elaborating on this question (no popular science, please)? REPLY [1 votes]: ''The textbooks on quantum field theory I have seen so far say that all talk in popular science literature about particles being created spontaneously out of the vacuum is wrong.'' And they are right to do so. See also my essay https://www.physicsforums.com/insights/physics-virtual-particles/ ''How does QFT describe pair production, in particular starting with the vacuum and ending with a real, on-shell particle-antiparticle pair?'' It doesn't. There are no such processes. Pair production is always from other particles, never from the vacuum or from a single stable particle. ''I cannot find a calculation for an amplitude <0|e+e-> or something like that.'' Because this amplitude always vanishes. All nonzero amplitudes must respect the conservation of 4-momentum, which is impossible for <0|e+e->. You can see this from the delta-function which appears in the S-matrix elements: it forces the requested amplitude to vanish, since delta(q)=0 when q is nonzero.
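To make the delta-function argument explicit (the following is only the standard textbook structure of S-matrix elements, filled in by me since the formula referred to above is not reproduced here): translation invariance forces every connected S-matrix element to carry an overall factor $$\langle f|S|i\rangle \;\propto\; \delta^{(4)}(p_f-p_i)\,\mathcal{M}_{fi},$$ where $p_i,p_f$ are the total four-momenta of the in and out states. For a $\langle 0|e^+e^-\rangle$-type amplitude, $p_i=0$ while the energy component of $p_f=p_{e^+}+p_{e^-}$ is at least $2m_e>0$ (natural units) for on-shell particles, so the factor $\delta^{(4)}(p_f)$ vanishes identically. This is also why observed pair creation always has a source supplying the required energy and momentum, e.g. a photon converting in the Coulomb field of a nucleus, or two colliding photons.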
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 1, "question_id": 402612, "subset_name": null }
TITLE: Showing KL divergence is $0$ iff two distributions are equal almost everywhere QUESTION [0 upvotes]: It is clear that the KL-divergence $D_\text{KL}(P\parallel Q) = 0$ if $P = Q$ almost everywhere*, for two distributions $P$ and $Q$. However, I don't know how to show the converse, i.e. that if $D_\text{KL}(P\parallel Q) = 0$, then the density functions are equal, i.e. $p(x) = q(x)$ ($P$-)almost everywhere. Since $$D_\text{KL}(P\parallel Q) = \int \ln\frac{p(x)}{q(x)}\, p(x)\,dx $$ and $\ln \frac{p(x)}{q(x)}$ could take on any real value, isn't it possible that the integral could be zero by the cancellation of some negative and positive contributions of the integrand? What would be the correct approach to showing the converse statement? *(and should this be "with respect to the distribution $P$"?) REPLY [3 votes]: Let $f(t)=-\ln t$. Then $f$ is strictly convex. We have $0=\int f(\frac {q(x)} {p(x)})p(x)\,dx \geq f(\int \frac {q(x)} {p(x)}p(x)\,dx)=f(1)=0$. Since equality holds in Jensen's inequality and $f$ is strictly convex, it follows that $\frac {q(x)} {p(x)}$ is a constant, hence $=1$, almost everywhere. Hence $P=Q$.
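Two details worth spelling out (my elaboration of the answer above): the evaluation $f(\int \frac{q(x)}{p(x)}p(x)\,dx)=f(1)$ uses $$\int \frac{q(x)}{p(x)}\,p(x)\,dx=\int_{\{p>0\}} q(x)\,dx = Q(\{p>0\})\leq 1,$$ and since $f(t)=-\ln t$ is decreasing with $f(1)=0$, the chain $0\geq f(Q(\{p>0\}))\geq 0$ forces $Q(\{p>0\})=1$ together with equality in Jensen's inequality. Strict convexity then gives $\frac{q}{p}=c$ for some constant $c$, $P$-almost everywhere, and the same integral shows $c=Q(\{p>0\})=1$, i.e. $p=q$ holds $P$-a.e. This also answers the asterisked question: the right qualifier is indeed "almost everywhere with respect to $P$" (equivalently, almost everywhere on $\{p>0\}$); on $\{p=0\}$ the measures $P$ and $Q$ agree anyway, since both assign it zero mass.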
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 0, "question_id": 3592532, "subset_name": null }
\section{The Vanishing Theorem} \label{sec:Proof} To state the vanishing theorem which implies Theorem~\ref{thm:GenType}, we must introduce some notation. We think of the Seiberg-Witten invariant of a smooth, oriented, closed four-manifold $X$ (with a ``homology orientation'' -- an orientation on $(H^{0}\oplus H^{1}\oplus H^{+})(X;\R)$) and $\SpinC$ structure $\spinc$ as a homogeneous polynomial map $$\SW_{(X,\spinc)}\colon \Alg(X)\longrightarrow \Z$$ of degree $$d(\spinc)=\frac{c_{1}(\spinc)^{2}-(2\chi+3\sigma)}{4}$$ on the algebra $$\Alg(X)=\Z[U]\otimes_{\Z}\Wedge^{*}H_{1}(X;\Z),$$ where $U$ is a two-dimensional generator, and $\Wedge^* H_1(X;\Z)$ is the exterior algebra on the first homology of $X$ (graded in the obvious manner). This algebra maps surjectively to the cohomology ring of the irreducible configuration space $\SWConfigIrr(X,\spinc)$ of pairs $[A,\Phi]$ of $\SpinC$ connections $A$ and somewhere non-vanishing spinors $\Phi$ modulo gauge. (We denote the full configuration space of pairs modulo gauge, i.e. where $\Phi$ is allowed to vanish, by $\SWConfig(X,\spinc)$.) As usual, the Seiberg-Witten invariant is obtained by cohomological pairings of these cohomology classes with the fundamental cycle of the moduli space $\ModSW(X,\spinc)$ of solutions to the Seiberg-Witten equations, which is naturally induced from the homology orientation. As in Section~\ref{sec:Introduction}, let $Y=Y(n,g)$ be the circle bundle with Euler number $n$ over a Riemann surface $\Sigma$ of genus $g>0$. Throughout this section, we assume that $$|n|\geq 2g-1.$$ Recall that $H^{2}(Y;\Z)\cong \Z^{2g}\oplus (\Zmod{n})$, where the $\Zmod{n}$ factor is generated by multiples of the pull-back $\pi^{*}$ of the orientation class of $\Sigma$. Indeed, there is a canonical $\SpinC$ structure $\spinct_{0}$ over $Y$ associated to the two-plane field orthogonal to the circle directions. Thus, forming the tensor product with $\spinct_{0}$ gives a canonical identification $$\SpinC(Y)\cong H^{2}(Y;\Z).$$ In particular, there are $n$ distinguished $\SpinC$ structures $\spinct_{e}$ over $Y$, indexed by $e\in \Zmod{n}$ (thought of as a subset of $H^{2}(Y;\Z)$). \begin{theorem} \label{thm:VanishingTheorem} Let $X$ be a smooth, closed, oriented four-manifold which splits along an embedded copy of $Y=Y(n,g)$ with $|n|>2g-1$, so that $X=X_{1}\#_{Y}X_{2}$ with $b_{2}^{+}(X_{i})>0$ for $i=1,2$. Fix a $\SpinC$ structure $\spinc$ on $X$, and let $\spinc|_{Y}=\spinct$. If $\spinct$ is not one of the $n$ distinguished $\SpinC$ structures on $Y$, then $\SW_{(X,\spinc)}\equiv 0$. Similarly, if $\spinct=\spinct_{\mult}$ for $2g-2<\mult<n$, then $\SW_{(X,\spinc)}\equiv 0$. Otherwise, if $\spinct=\spinct_{\mult}$ for $\mult=0,\ldots,2g-2$, we have that \begin{equation} \label{eq:VanishingTheorem} \sum_{\{\spinc'|\spinc'|_{X_{1}}=\spinc|_{X_{1}}, \spinc'|_{X_{2}}=\spinc|_{X_{2}}\}}\SW_{(X,\spinc')}\equiv 0. \end{equation} \end{theorem} Note that the inclusion $Y\subset X$ gives rise to a coboundary map $\delta\colon H^{1}(Y;\Z)\rightarrow H^{2}(X;\Z)$, whose image we denote by $\delta H^{1}(Y;\Z)$. Another way of stating Equation~\eqref{eq:VanishingTheorem} is: $$\sum_{\eta\in \delta H^{1}(Y;\Z)}\SW_{(X,\spinc+\eta)}\equiv 0.$$ The above theorem is proved by considering the ends of certain moduli spaces over cylindrical-end manifolds. In general, these ends are described in terms of the moduli spaces of the boundary $Y$, and the moduli spaces of solutions on the cylinder $\R\times Y$ (using a product metric and perturbation). 
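(Before entering the proof, a brief aside on the computation of $H^{2}(Y;\Z)$ quoted above; this sketch of standard material is ours and is not needed for the argument. The isomorphism $H^{2}(Y;\Z)\cong \Z^{2g}\oplus (\Zmod{n})$ can be read off from the Gysin sequence of the circle bundle $\pi\colon Y\rightarrow \Sigma$, $$H^{0}(\Sigma;\Z)\longrightarrow H^{2}(\Sigma;\Z)\stackrel{\pi^{*}}{\longrightarrow} H^{2}(Y;\Z)\longrightarrow H^{1}(\Sigma;\Z)\longrightarrow H^{3}(\Sigma;\Z)=0,$$ in which the first map is cup product with the Euler class, i.e. multiplication by $n$. This exhibits $H^{2}(Y;\Z)$ as an extension of the free group $H^{1}(\Sigma;\Z)\cong \Z^{2g}$ by $\pi^{*}H^{2}(\Sigma;\Z)\cong \Zmod{n}$, and the extension splits since $\Z^{2g}$ is free; in particular the torsion summand is generated by $\pi^{*}$ of the orientation class, as claimed.)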
Specifically, let $Y$ be a three-manifold, and let $\ModSWThree_{Y}(\spinct)$ denote the moduli space of solutions to the three-dimensional Seiberg-Witten equations over $Y$ in the $\SpinC$ structure $\spinct$. Given a pair of components $C_{1}$, $C_{2}$ in $\ModSWThree_{Y}(\spinct)$, let $\ModFlow(C_{1},C_{2})$ denote the moduli space of solutions $[A,\Phi]$ to the Seiberg-Witten equations on $\R\times Y$ for which \begin{eqnarray*} \lim_{t\goesto -\infty}[A,\Phi]|_{\{t\}\times Y}\in C_{1} &{\text{and}}& \lim_{t\goesto \infty}[A,\Phi]|_{\{t\}\times Y}\in C_{2}. \end{eqnarray*} The theory of~\cite{MMR} can be adapted to give the moduli space $\ModFlow(C_{1},C_{2})$ a Fredholm deformation theory, and a pair of continuous ``boundary value maps'' for $i=1,2$ $$\rho_{_{C_{i}}}\colon \ModFlow(C_{1},C_{2})\longrightarrow C_{i}$$ characterized by \begin{eqnarray*} \rho_{_{C_{1}}}[A,\Phi]=\lim_{t\goesto -\infty}[A,\Phi]|_{\{t\}\times Y} &{\text{and}}& \rho_{_{C_{2}}}[A,\Phi]=\lim_{t\goesto +\infty}[A,\Phi]|_{\{t\}\times Y}. \end{eqnarray*} The moduli space $\ModFlow(C_{1},C_{2})$ admits a translation action by $\R$, and we let $\UnparModFlow(C_{1},C_{2})$ denote the quotient of this space by this action. The boundary value maps are $\R$-invariant, and hence induce boundary value maps on the quotient $$\rho_{_{C_{i}}}\colon \UnparModFlow(C_{1},C_{2})\longrightarrow C_{i}.$$ As in~\cite{KMthom} (by analogy with the cases considered by Floer, see for instance~\cite{Floer}), the solutions to the three-dimensional Seiberg-Witten equations are the critical points of a ``Chern-Simons-Dirac'' functional $\CSD$ defined on the configuration space $\SWConfig(Y,\spinct)$. The Seiberg-Witten equations on $\R\times Y$ can be naturally identified with upward gradient flowlines for this functional. (Strictly speaking, the functional $\CSD$ is real-valued only when the first Chern class $c_{1}(\spinct)$ is torsion; otherwise it is circle-valued.) Solutions in $\ModSWThree_{Y}(\spinct)$ whose spinor vanishes identically correspond to flat connections on the determinant line bundle for $\spinct$. By analogy with the Donaldson-Floer theory, these solutions are usually called {\em reducibles}, and those with somewhere non-vanishing spinor are called {\em irreducibles}. In the case where $Y$ is a non-trivial circle bundle over a Riemann surface, these moduli spaces were studied in~\cite{MOY}, see also~\cite{HigherType} (where $Y$ is endowed with a circle-invariant metric and the Seiberg-Witten equations over it are suitably perturbed). Specifically, there is the following result: \begin{theorem} \label{thm:ModY} Let $Y$ be a circle bundle over a Riemann surface of genus $g>0$, with Euler number $|n|>2g-2$ (oriented as a circle bundle with negative Euler number). Then, the moduli space $\ModSWThree_{Y}(\spinct)$ is empty unless $\spinct$ corresponds to a torsion class in $H^{2}(Y;\Z)$. Suppose that $\spinct=\spinct_{e}$ for $e\in\Zmod{n}\subset H^{2}(Y;\Z)$. Then \begin{list} {(\arabic{bean})}{\usecounter{bean}\setlength{\rightmargin}{\leftmargin}} \item \label{item:SmallK} If $0\leq \mult \leq g-1$ then $\ModSWThree_Y(\spinct)$ contains two components, a reducible one $\Jac$, identified with the Jacobian torus $H^1(\Sigma;S^{1})$, and a smooth irreducible component $\CritMan$ diffeomorphic to $\Sym^\mult(\Sigma)$. Both of these components are non-degenerate in the sense of Morse-Bott. There is an inequality $\CSD(\Jac)>\CSD(\CritMan)$, so the space $\UnparModFlow(\Jac,\CritMan)$ is empty. 
The space $\UnparModFlow(\CritMan,\Jac)$ is smooth of expected dimension $2\mult$; indeed it is diffeomorphic to $\Sym^\mult(\Sigma)$ under the boundary value map $$\rho_{_{\CritMan}}\colon\UnparModFlow(\CritMan,\Jac)\longrightarrow \CritMan\cong \Sym^{\mult}(\Sigma).$$ \newline \item If $g-1< \mult\leq 2g-2$, the Seiberg-Witten moduli spaces over both $Y$ and $\R\times Y$ in this $\SpinC$ structure are naturally identified with the corresponding moduli spaces in the $\SpinC$ structure $\spinct_{2g-2-\mult}$, which we just described. \newline \item For all other $\mult$, $\ModSWThree_Y(\spinct)$ contains only reducibles. Furthermore, it is smoothly identified with the Jacobian torus $\Jac$. \end{list} \end{theorem} When $\mult\neq g-1$, the above theorem is a special case of Theorems~1 and 2 of~\cite{MOY} (see especially Corollary~1.5 of~\cite{MOY}). When $\mult=g-1$, the case considered in that paper is not ``generic''. In fact, there is a natural perturbation (by some small multiple of the connection $1$-form of the Seifert fibration) which achieves the genericity stated above. This perturbation was used in~\cite{HigherType} to prove strong ``adjunction inequalities'' for manifolds which are not of simple type, and the above theorem in the case where $\mult=g-1$ is precisely Theorem~8.1 of~\cite{HigherType}. Note that the hypothesis $|n|>2g-2$ is required to separate the irreducible manifolds into distinct $\SpinC$ structures. Note also that if the orientation on $Y$ is reversed, the flow-lines reverse direction. The proof of Theorem~\ref{thm:VanishingTheorem} is obtained by considering the ends of the moduli spaces $\ModSW(X_{1},\spinc_{1},\Jac)$ of Seiberg-Witten monopoles over the cylindrical-end manifold $$\XExt_{1}=X_{1}\cup_{\partial X_{1}=\{0\}\times Y}[0,\infty)\times Y$$ in the $\SpinC$ structure $\spinc_{1}$, whose boundary values are reducible. We will assume, as in that theorem, that $b_2^+(X_1)>0$. In general, moduli spaces of finite energy solutions to the Seiberg-Witten equations on a manifold with cylindrical ends are not compact. (The ``finite energy condition'' in this context is equivalent to the hypothesis that the pair $[A,\Phi]$ has a well-defined boundary value.) They do, however, have ``broken flowline'' compactifications (see~\cite{MMR} and \cite{Floer}). In particular, if $C$ is a component of $\ModSWThree(Y,\spinc_{1}|_{Y})$, then for generic perturbations, the moduli space $\ModSW(X_{1},\spinc_{1},C)$ is a smooth manifold with finitely many ends indexed by chains of components $C_{1},\ldots,C_{n}$ in the moduli space $\ModSWThree(Y,\spinc_{1}|_{Y})$, with $\CSD(C_{1})<\CSD(C_{2})<\ldots<\CSD(C_{n})<\CSD(C)$. When all the $C_{i}$ are non-degenerate in the sense of Morse-Bott, and consist of irreducibles, a neighborhood of the corresponding end is diffeomorphic to the fibered product $$\ModSW(X_{1},\spinc_{1},C_{1})\times_{C_1} \ModFlow(C_{1},C_{2})\times_{C_2}\ldots\times_{C_n} \ModFlow(C_{n},C),$$ under a certain gluing map (provided that this space is a manifold -- i.e. provided that the various boundary value maps are transverse). In particular, suppose $X_{1}$ is an oriented four-manifold whose boundary $\partial X_{1}$ is identified with $Y=Y(n,g)$, with the orientation described in Theorem~\ref{thm:ModY}. 
Then, it follows from that theorem that if $\spinc_{1}|_{Y}=\spinct_{\mult}$ for $0\leq \mult\leq 2g-2$, then $\ModSW(X_{1},\spinc_{1},\CritMan)$ is compact (since there are no ``intermediate'' critical manifolds to be added), and $\ModSW(X_{1},\spinc_{1},\Jac)$ has a single end whose neighborhood is diffeomorphic to $$\ModSW(X_{1},\spinc_{1},\CritMan)\times (0,\infty).$$ (We use here the fact that the restriction map $\rho_{_{\CritMan}}\colon\UnparModFlow(\CritMan,\Jac)\longrightarrow \CritMan$ is a diffeomorphism.) This gluing map $$\gamma\colon \ModSW(X_{1},\spinc_{1},\CritMan)\times (0,\infty)\longrightarrow \ModSW(X_{1},\spinc_{1},\Jac)$$ is compatible with restriction to compact subsets of $\XExt_{1}$; e.g. if we consider the compact subset $X_{1}\subset \XExt_{1}$, then $$\lim_{T\goesto \infty}\gamma([A,\Phi],T)|_{X_{1}}=[A,\Phi]|_{X_{1}}.$$ We make use of the end of $\ModSW(X_{1},\spinc_{1},\Jac)$ in the following proposition. Recall that the moduli space $\ModSW(X_{1},\spinc_{1},\CritMan)$ is a smooth, compact submanifold of the irreducible configuration space of $\XExt_{1}$. It has a canonical top-dimensional homology class, denoted $[\ModSW(X_{1},\spinc_{1},\CritMan)]$, induced from the ``homology orientation'' of $X_{1}$. It inherits cohomology classes by pulling back via the boundary value map $$\Restrict{\CritMan}\colon \ModSW(X_{1},\spinc_{1},\CritMan)\longrightarrow \CritMan$$ and via the natural map $$i_{X_{1}}\colon \ModSW(X_{1},\spinc_{1},\CritMan)\longrightarrow \SWConfigIrr(X_{1},\spinc_{1})$$ given by restricting the pair $[A,\Phi]$ to the compact subset $X_{1}\subset\XExt_{1}$ (this restriction is irreducible, by the unique continuation theorem for the Dirac operator). The pairings with these classes can be thought of as a ``relative Seiberg-Witten'' invariant. \begin{prop} \label{prop:TrivialRelativeInvariant} Suppose $b_2^+(X_1)>0$. Given any cohomology classes $a\in H^{*}(\SWConfigIrr(X_{1},\spinc_{1}))$ and $b\in H^{*}(\CritMan)$, the homology class $[\ModSW(X_{1},\spinc_{1},\CritMan)]$ pairs trivially with the class $i_{X_{1}}^{*}(a)\cup \rho_{_{C}}^{*}(b)$. \end{prop} \begin{proof} First, we reduce to the case where $b$ is absent (i.e. zero-dimensional). This is done in two steps, first establishing an inclusion \begin{equation} \label{eq:Inclusion} (i_{Y}\circ \Restrict{C})^{*}H^{*}(\SWConfigIrr(Y,\spinct)) \subset i_{X_{1}}^{*}H^{*}(\SWConfigIrr(X_{1},\spinc_{1})), \end{equation} where both are thought of as subsets of $H^{*}(\ModSW(X_{1},\spinc_{1},\CritMan))$, and then seeing that the map $$i_{Y}^*\colon H^{*}(\SWConfigIrr(Y,\spinct))\longrightarrow H^{*}(C) $$ is surjective. To see Inclusion~\eqref{eq:Inclusion}, we describe the geometric representatives for the generators of the cohomology ring $$H^{*}(\SWConfigIrr(Y,\spinct))\cong \Z[U]\otimes_{\Z} \Wedge^{*}(H_{1}(Y;\Z)).$$ Given a point $y\in Y$ and a line $\Lambda_{y}\subset W_{y}$ in the fiber of the spinor bundle over $y$, the class $U$ is Poincar\'e dual to the locus $V_{(y,\Lambda_{y})}$ of pairs $[B,\Psi]$ with $\Psi_{y}\in\Lambda_{y}$. Moreover, given a curve $\gamma\colon S^{1}\rightarrow Y$, the corresponding one-dimensional cohomology class is determined by the homotopy type of the map $$h_{\gamma}\colon \SWConfigIrr(Y,\spinct)\longrightarrow S^{1}$$ given by measuring the holonomy of $B$ (relative to some fixed reference connection $B_{0}$) around $\gamma$; i.e. it is Poincar\'e dual to the preimage $V_{\gamma}$ of a regular value of $h_{\gamma}$. This cohomology class is denoted $\mu[\gamma]\in H^{1}(\SWConfigIrr(Y,\spinct))$. 
(Note that geometric representatives of cohomology classes in the configuration spaces of four-manifolds are constructed in an analogous manner.) Now, fix a curve $\gamma\subset Y$ and consider the one-parameter family of maps $$h_{t}\colon \ModSW(X_{1},\spinc_{1},\CritMan)\longrightarrow S^{1}$$ indexed by $t\in (0,1]$, defined by measuring the holonomy of $A$ around the curve $\{1/t\}\times \gamma\subset \XExt_{1}$. Since the configurations $[A,\Phi]\in \ModSW(X_{1},\spinc_{1},\CritMan)$ converge exponentially to a stationary solution (see~\cite{MMR}), $h_{t}$ extends continuously to $t=0$. Now, $h_{0}$ represents $(i_{Y}\circ\Restrict{C})^{*}$ of the one-dimensional class $\mu[\gamma]\in H^{1}(\SWConfigIrr(Y,\spinct))$, while $h_{1}$ represents the restriction (to the moduli space) of the one-dimensional class $\mu[\gamma_{1}]\in H^{*}(\SWConfigIrr(X_{1},\spinc_{1}))$, where $\gamma_{1}=\{1\}\times \gamma\subset \XExt_{1}$. A similar discussion applies to the two-dimensional class to show that $\Restrict{\CritMan}^{*}U=i_{X_{1}}^{*}U$ (now we use the connection $A$ to identify the fiber ``at infinity'' with the fiber at some point inside $X$). This completes the verification of Inclusion~\eqref{eq:Inclusion}. Surjectivity of $$i_{Y}^{*}\colon H^{*}(\SWConfigIrr(Y,\spinct);\Z)\cong \Alg(Y)\rightarrow H^{*}(\CritMan;\Z)$$ follows from classical properties of the cohomology of symmetric products $\Sym^{\mult}(\Sigma)$ (see~\cite{MacDonald}), according to which the cohomology ring is generated by ``symmetrizations'' of the cohomology of $\Sigma$. It is then a straightforward verification (which is spelled out in Proposition~6.10 of~\cite{HigherType}) using the geometric interpretations of the cohomology classes given above to see that $i_{Y}^{*}\mu[\gamma]$ corresponds to the symmetrization of $\pi(\gamma)$, while $i_{Y}^{*}U$ corresponds to the symmetrization of the point $\pi(x)$ on $\Sigma$ (where we think of $U$ as the Poincar\'e dual of $V_{(x,\Lambda)}$ for some choice of line $\Lambda\subset W_{x}$). Thus, it remains to prove that $[\ModSW(X_{1},\spinc_{1},\CritMan)]$ pairs trivially with classes in $$i_{X_{1}}^{*}H^{*}(\SWConfigIrr(X_{1},\spinc_{1})).$$ We can think of the cohomological pairing $\langle i_{X_{1}}^{*}(a), [\ModSW(X_{1},\spinc_{1},\CritMan)]\rangle$ as counting the (signed) number of solutions to the Seiberg-Witten equations which satisfy constraints in the compact subset $X_{1}\subset\XExt_{1}$; i.e. if $a=U^{d}\cdot [\mu_{1}]\cdot\ldots\cdot[\mu_{\ell}]$, where the $\mu_{i}$ are curves in $X_{1}$, and $x_{1},\ldots,x_{d}$ are generic points in $X_{1}$, and $\Lambda_{1},\ldots,\Lambda_{d}$ are generic lines $\Lambda_{i}\subset W^{+}|_{\{x_{i}\}}$, then we have geometric representatives $V_{(x_{i},\Lambda_{i})}$ and $V_{\mu_{i}}$ for these cohomology classes, so that $$\langle [\ModSW(X_{1},\spinc_{1},\CritMan)],i_{X_{1}}^{*}(a)\rangle = \#\ModSW(X_{1},\spinc_{1},\CritMan)\cap V,$$ where $V=V_{(x_{1},\Lambda_{1})}\cap\ldots\cap V_{(x_{d},\Lambda_{d})}\cap V_{\mu_{1}}\cap \ldots \cap V_{\mu_{\ell}}$. In fact, if we consider the solutions $\ModSW(X_{1},\spinc_{1},\Jac)$ which satisfy these same constraints, then we get a one-dimensional manifold with ends corresponding to $\ModSW(X_{1},\spinc_{1},\CritMan) \cap V$. 
Thus, counting boundary points with sign, we see that $$\#\ModSW(X_{1},\spinc_{1},\CritMan)\cap V=0.$$ \end{proof} \vskip.5cm \noindent{\bf{Proof of Theorem~\ref{thm:VanishingTheorem}}.} In the splitting $X=X_{1}\cup_{Y}X_{2}$, we can number the sides so that the boundary of $X_{1}$ is $Y$ oriented as in Theorem~\ref{thm:ModY}. Let $\spinc$ be a $\SpinC$ structure over $X$, let $\spinc_{i}\in\SpinC(X_{i})$ denote the restriction $\spinc_{i}=\spinc|_{X_{i}}$ for $i=1,2$, and let $\spinct\in\SpinC(Y)$ denote the restriction $\spinct=\spinc|_{Y}$. Let $X(T)$ denote the Riemannian structure on $X$ obtained by inserting a cylinder $[-T,T]\times Y$ between $X_{1}$ and $X_{2}$ (but keeping the metrics on these two pieces fixed, and product-like near the boundary). If the Seiberg-Witten invariant for a $\SpinC$ structure $\spinc$ over $X$ is non-trivial, then for any unbounded, increasing sequence of real numbers $\{T_{i}\}$, there must be a sequence of Seiberg-Witten monopoles $[A_{i},\Phi_{i}]\in\ModSW(X(T_{i}),\spinc)$. The uniform bound on the energy, together with local compactness (see~\cite{KMthom}), allows one to find a sequence $\{t_{i}\}$ with $t_{i}\leq T_{i}$, so that, after passing to a subsequence if necessary, $[A_{i},\Phi_{i}]|_{\{t_{i}\}\times Y}$ converges to a stationary solution; i.e. it converges to a point in $\ModSWThree(Y,\spinct)$. Thus, from Theorem~\ref{thm:ModY}, it follows that the Seiberg-Witten invariant for $\spinc$ vanishes unless $\spinct$ is one of the $n$ distinguished $\SpinC$ structures $\spinct_{e}$ over $Y$. Suppose that $\spinct=\spinct_{e}$, for $e=0,\ldots,2g-2$. Note that the excision principle for the index gives that $$d(\spinc)=\dim \ModSW(X_{1},\spinc_{1},\CritMan)+\dim\ModSW(X_{2},\spinc_{2},\CritMan) - \dim(\CritMan)$$ for any $\SpinC$ structure $\spinc\in \SpinC(X)$ with $\spinc|_{X_{i}}=\spinc_{i}$ for $i=1,2$ (and generic, compactly supported perturbations of the equations over $X_{i}$). We fix an integer $\ell\geq 0$ and homology classes $a_{1},\ldots,a_{m}\in H_{1}(X_{1};\Z)$, $b_{1},\ldots,b_{n}\in H_{1}(X_{2};\Z)$ with $$2\ell+m+n=d(\spinc).$$ Let $\spinc_{1}\#\spinc_{2}\subset \SpinC(X)$ denote the subset of $\SpinC$ structures on $X$: $$\spinc_{1}\#\spinc_{2}=\{\spinc'\in\SpinC(X)\big| \spinc'|_{X_{1}}=\spinc_{1}, \spinc'|_{X_{2}}=\spinc_{2}\},$$ and let $\ModSW(X(T),\spinc_{1}\#\spinc_{2})$ denote the union $$\ModSW(X(T),\spinc_{1}\#\spinc_{2})= \coprod_{\ProdSpinCs} \ModSW(X(T),\spinc').$$ Clearly, we have that $$\# \ModSW(X(T),\spinc_{1}\#\spinc_{2})\cap V_{1}\cap V_{2} = \sum_{\ProdSpinCs} \SW_{(X,\spinc')}(U^{\ell}\cdot[a_{1}]\cdot\ldots\cdot[a_{m}] \cdot[b_{1}]\cdot\ldots\cdot[b_{n}]), $$ where $V_{i}$ denotes the intersection of the constraints from the $X_{i}$ side; e.g. $$V_{1}=V_{(x_{1},\Lambda_{1})}\cap\ldots\cap V_{(x_{\ell},\Lambda_{\ell})} \cap V_{a_{1}}\cap\ldots\cap V_{a_{m}}$$ and $$V_{2}=V_{b_{1}}\cap\ldots\cap V_{b_{n}}.$$ Thus, our aim is to prove that the total signed number of points in the cut-down moduli space $\ModSW(X(T),\spinc_{1}\#\spinc_{2})\cap V_{1}\cap V_{2}$ is zero. Given pre-compact sets $K_{i}\subset \ModSW(X_{i},\spinc_{i},\CritMan)$ for $i=1,2$, there are gluing maps, defined for all sufficiently large $T$, $$ \glue_{\CritMan;T}\colon K_{1}\#_{\CritMan}K_{2} \longrightarrow \ModSW(X(T),\spinc_{1}\#\spinc_{2}), $$ where the domain is the fibered product of $K_{1}$ and $K_{2}$ over $\rho_{1}$ and $\rho_{2}$, i.e. 
the set of pairs $[A_{1},\Phi_{1}]\in K_{1}$, $[A_{2},\Phi_{2}]\in K_{2}$ with $$\rho_{1}([A_{1},\Phi_{1}])= \rho_{2}([A_{2},\Phi_{2}]),$$ and the range consists of all configurations $[A,\Phi]$ whose restrictions to $X_{1}$ and $X_{2}$ are sufficiently close to the restrictions (to $X_{1}$ and $X_{2}$) of configurations $[A_{1},\Phi_{1}]\times [A_{2},\Phi_{2}]$ in the fibered product. We claim that for all sufficiently large $T$, the cut-down moduli space lies in the range of this map. Specifically, if we had a sequence $[A_{i},\Phi_{i}]\in \ModSW(X(T_{i}),\spinc_{1}\#\spinc_{2})$ for an increasing, unbounded sequence $\{T_{i}\}_{i=1}^{\infty}$ of real numbers, the sequence would converge $\Cinfty$ locally (after passing to a subsequence) to give a pair of Seiberg-Witten monopoles $[A_{1},\Phi_{1}]\in\ModSW(X_{1},\spinc_{1})$ and $[A_{2},\Phi_{2}]\in \ModSW(X_{2},\spinc_{2})$. These monopoles have finite energy (since the total variation of $\CSD$ is bounded in the limit), so they have boundary values, which must lie in either $\CritMan$ or $\Jac$. We exclude all but one of the four cases as follows: \begin{list} {(C-\arabic{bean})}{\usecounter{bean}\setlength{\rightmargin}{\leftmargin}} \item The case where $\Restrict{1}[A_{1},\Phi_{1}]\in\Jac$ and $\Restrict{2}[A_{2},\Phi_{2}]\in\CritMan$ is excluded since $\CSD(\CritMan)<\CSD(\Jac)$. \item \label{case:RedToRed} The case where $\Restrict{1}[A_{1},\Phi_{1}]\in\Jac$ and $\Restrict{2}[A_{2},\Phi_{2}]\in \Jac$ is excluded by a dimension count. Specifically, we must have that $$\Restrict{1}[A_{1},\Phi_{1}]=\Restrict{2}[A_{2},\Phi_{2}]$$ and $[A_{1},\Phi_{1}]\in \ModSW(X_{1},\Jac)\cap V_{1}$ and $[A_{2},\Phi_{2}]\in\ModSW(X_{2},\Jac)\cap V_{2}$, i.e. the pair $[A_{1},\Phi_{1}]\times [A_{2},\Phi_{2}]$ lies in the fibered product $\ModSW(X_{1},\spinc_{1},\Jac)\times_{\Jac}\ModSW(X_{2},\spinc_{2},\Jac)$, a space whose dimension is one less than the expected dimension $d(\spinc)$ of the moduli space. Thus, for generic representatives $V_{1}$ and $V_{2}$, this intersection is empty. \item \label{case:IrrToRed} The case where $\Restrict{1}[A_{1},\Phi_{1}]\in\CritMan$ and $\Restrict{2}[A_{2},\Phi_{2}]\in\Jac$ is excluded by a similar dimension count. We have that $[A_{1},\Phi_{1}]\in\ModSW(X_{1},\spinc_{1},\CritMan)\cap V_{1}$, $[A_{2},\Phi_{2}]\in\ModSW(X_{2},\spinc_{2},\Jac)\cap V_{2}$, and $\Restrict{1}[A_{1},\Phi_{1}]$ is connected to $\Restrict{2}[A_{2},\Phi_{2}]$ by a (uniquely determined) flow in $\UnparModFlow(\CritMan,\Jac)$. This set has expected dimension $-2$. \end{list} The remaining case is that $[A_{1},\Phi_{1}]\in\ModSW(X_{1},\spinc_{1},\CritMan)$, and $[A_{2},\Phi_{2}]\in\ModSW(X_{2},\spinc_{2},\CritMan)$, with $$\Restrict{1}[A_{1},\Phi_{1}]=\Restrict{2}[A_{2},\Phi_{2}].$$ In particular, $[A_{1},\Phi_{1}]$ lies in the compact set $\ModSW(X_{1},\spinc_{1},\CritMan)$, while $[A_{2},\Phi_{2}]$ lies in the set $\Restrict{2}^{-1}(\Restrict{1}\ModSW(X_{1},\spinc_{1},\CritMan)\cap V_{1})\cap V_{2}$, which is also compact (according to the dimension count used to exclude Case~(C-\ref{case:IrrToRed}) above). Thus, for all sufficiently large $T$, the cut-down moduli space lies in the image of the gluing map $\glue_{\CritMan;T}$. 
On compact subsets of $X(T)$ away from the ``neck'', gluing is a $\Cinfty$ small perturbation, which goes to zero as the neck-length is increased; in particular, for $i=1,2$, $$\lim_{T\goesto\infty}\glue_{\CritMan;T}([A_{1},\Phi_{1}]\#[A_{2},\Phi_{2}]) |_{X_{i}}= [A_{i},\Phi_{i}]|_{X_{i}}.$$ It follows from this that \begin{equation} \label{eq:SWInvariant} \#\ModSW(X(T),\spinc_{1}\#\spinc_{2})\cap V_{1}\cap V_{2} = \#\left(\left(\ModSW(X_{1},\spinc_{1},\CritMan)\cap V_{1}\right)\times_{\CritMan}\left(\ModSW(X_{2},\spinc_{2},\CritMan)\cap V_{2}\right)\right). \end{equation} The latter quantity can be thought of as a cohomological pairing in $\ModSW(X_{1},\spinc_{1},\CritMan)$ as follows. Fix an oriented, $v$-dimensional submanifold $V\subset \ModSW(X_{2},\spinc_{2},\CritMan)$, and consider the function which assigns to each smooth map $$f\colon Z\longrightarrow \CritMan$$ (where $Z$ is a smooth, oriented, compact manifold whose dimension equals the codimension of $V$) the number of points in the fibered product $\#(Z\times_{\CritMan}V)$ (counting with sign, after arranging $f$ to be transverse to $V$). This is the pairing of the fundamental cycle of $Z$ with an induced cohomology class in $H^{d_{2}-v}(\CritMan,\Z)$. Indeed, this class can be thought of as the ``push-forward'' of the Poincar\'e dual to $V$, under a map $$(\Restrict{2})_{*}\colon H^{i}(\ModSW(X_{2},\spinc_{2},\CritMan);\Z)\longrightarrow H^{i+\dim(\CritMan)-d_{2}}(\CritMan;\Z).$$ Thus, the count in Equation~\eqref{eq:SWInvariant} can be thought of as the pairing $$\#\ModSW(X(T),\spinc_{1}\#\spinc_{2})\cap V_{1}\cap V_{2} = \langle [\ModSW(X_{1},\spinc_{1},\CritMan)], \PD(V_{1})\cup \Restrict{1}^{*}(\Restrict{2})_{*}\PD(V_{2})\rangle.$$ This pairing vanishes, according to Proposition~\ref{prop:TrivialRelativeInvariant}. This completes the proof of Theorem~\ref{thm:VanishingTheorem} in the case when $\spinct=\spinct_{e}$ for $e=0,\ldots,2g-2$. In the case when $\spinct=\spinct_{e}$ for $2g-2<e<n$, the vanishing of the Seiberg-Witten invariant for any $\SpinC$ structure $\spinc$ with $\spinc|_{Y}=\spinct$ is guaranteed by the same dimension count which we used to exclude Case~(C-\ref{case:RedToRed}) above. \qed The proof of Theorem~\ref{thm:GenType} follows from an application of Theorem~\ref{thm:VanishingTheorem}, together with the known properties of Seiberg-Witten invariants for complex surfaces of general type (see for instance~\cite{Brussee} or \cite{FriedMorg}, and also~\cite{FS}), according to which a minimal surface of general type has only two ``basic classes'' ($\SpinC$ structures for which the Seiberg-Witten invariant is non-zero): the ``canonical'' $\SpinC$ structure $\spinc_{0}$ (whose first Chern class is given by $c_{1}(\spinc_{0})=-K_{X}$, where $K_{X}\in H^{2}(X;\Z)$ is the first Chern class of the complex cotangent bundle of $X$), and its conjugate. Moreover, the basic classes of the $n$-fold blow-up ${\widehat X}=X\#n\CPbar$ are those $\SpinC$ structures $\spinc$ whose restriction away from the exceptional spheres agrees with $\spinc_{0}$ or its conjugate, and whose first Chern class evaluated on each exceptional sphere $E_i$ satisfies $$\langle c_{1}(\spinc),[E_{i}]\rangle = \pm 1.$$ In fact, since $K_{X}^{2}>0$ for a minimal surface of general type, the basic classes are in one-to-one correspondence with their first Chern classes. In view of this fact, throughout the following proof, we label the basic classes of ${\widehat X}$ by their first Chern classes. 
\vskip.5cm \noindent{\bf{Proof of Theorem~\ref{thm:GenType}}.} The subgroup $\delta H^{1}(Y;\Z)$ partitions $\SpinC(X)$ into orbits, and Theorem~\ref{thm:VanishingTheorem} states that if $X$ could be decomposed, then the sum of invariants over each orbit would vanish. Note moreover that if $Y$ separates $X$, then the intersection form restricted to the subgroup $\delta H^{1}(Y;\Z)$ is trivial: this is true because we can represent cohomology classes $[\omega],[\eta]\in \delta H^{1}(Y;\R)$ by differential form representatives $\omega$ and $\eta$, with $\omega|_{X_{1}}\equiv 0$ and $\eta|_{X_{2}}\equiv 0$, so that the representative $\omega\wedge\eta$ for $[\omega]\cup[\eta]$ vanishes identically. It follows from this that each orbit can contain at most two basic classes, for if we had two basic classes with the same coefficient of $K_{X}$, then their difference would have negative square, contradicting the triviality of the intersection form on $\delta H^{1}(Y;\Z)$. Now, suppose that $K_{X}-E_{1}-\ldots-E_{n}$ had another basic class in its orbit. We know that the other basic class would be of the form $$-K_{X}+E_{1}+\ldots+E_{a}-E_{a+1}-\ldots-E_{n},$$ after renumbering the exceptional curves if necessary. The difference $\Delta$ is $2(K_{X}-E_{1}-\ldots-E_{a})$, which must have square zero (being in $\delta H^{1}(Y;\Z)$), which forces $a>0$ (recall that $K_X^2>0$ for a minimal surface of general type). Now, consider the basic class $K_{X}+E_{1}+\ldots+E_{n}$. It, too, can have at most one other basic class in its orbit, and the difference has the form $$\Delta'=2(K_{X}+\epsilon_{1}E_{1}+\ldots+\epsilon_{n}E_{n}),$$ where we know that $\epsilon_{1},\ldots,\epsilon_{n}\geq 0$; in particular $\Delta-\Delta'$ is a non-zero class, which is easily seen to have negative square. But this contradicts the fact that $\Delta-\Delta'\in \delta H^{1}(Y;\Z)$. Thus, it follows that either the basic class $K_{X}-E_{1}-\ldots-E_{n}$ or the basic class $K_{X}+E_{1}+\ldots+E_{n}$ is alone in its $\delta H^{1}(Y;\Z)$ orbit. But this contradicts the conclusion of Theorem~\ref{thm:VanishingTheorem}. \qed \subsection{Final Remarks} It is suggestive to compare the formal framework adopted here with that of equivariant Morse theory. Specifically, the ``Chern-Simons-Dirac'' functional on $Y$ in the set-up of Theorem~\ref{thm:ModY} has precisely two critical manifolds, a manifold of reducibles $\Jac$ (consisting of configurations whose stabilizer in the gauge group is a circle), and a manifold of irreducibles $C$ (consisting of configurations whose stabilizers are trivial). From the point of view of equivariant cohomology, then, there should be an ``equivariant Floer homology'' and an analogue of the Bott spectral sequence, whose $E_{2}$ term consists of the homology of the irreducible critical point set $H_{*}(\Sym^{e}(\Sigma);\Z)$, and the $S^{1}$-equivariant homology of the reducible manifold, which is given by $$H^{*}(\CP{\infty}\times \Jac;\Z)\cong \Z[U]\otimes_{\Z}\Wedge^{*}H_{1}(Y;\Z).$$ From this point of view, Proposition~\ref{prop:TrivialRelativeInvariant}, upon which the vanishing theorem rests, can then be seen as the calculation of the differential in this spectral sequence. The equivariant point of view has been stressed by a number of researchers in the field, including (especially in the context of gluing along rational homology three-spheres) \cite{KMlect}, \cite{AusBra}, \cite{MarcolliWang}.
{ "config": "arxiv", "file": "math0006191/provesplit.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: How do I cube/square a logarithm? QUESTION [5 upvotes]: By the way, please don't give me the answer. I just want to know how to raise a logarithm to its cube, because I'm stuck at this step, but don't solve it for me. $$\log \sqrt[3]x = \sqrt[3]{\log x}$$ I tried different ways, but when I input the values in my calculator, it just doesn't match. My answer: is it right? $$1/3\log x =\sqrt[3]{\log x}$$ $$\log x = 3\sqrt[3]{\log x}$$ $$a = 3\sqrt[3]{\log x}$$ $$a^3 = \log x^{27} $$ $$\log x^3 = \log x^{27}$$ $$\log x^{24} = 0$$ $$ x = \sqrt[24]{1}$$ $$ x = 1$$ REPLY [0 votes]: One method you may attempt: when you get to the step $\log(x)^3=27\log(x)$, let $u= \log(x)$ and solve the cubic $u^3 - 27u = 0$, so that $u(u^2 - 27)=0$, and therefore $u(u-\sqrt{27})(u + \sqrt{27})= 0$. Now find the roots, then resubstitute $\log(x) = u$ to get the answer.
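(A checking note, added to finish what the reply leaves as an exercise; skip it if you still want that last step to yourself. Assuming $\log$ is the base-$10$ logarithm, the roots $u = 0$ and $u = \pm\sqrt{27} = \pm 3\sqrt{3}$ give $$x \in \{1,\ 10^{3\sqrt{3}},\ 10^{-3\sqrt{3}}\},$$ so $x = 1$ is only one of three solutions. The step $\log x^3 = \log x^{27}$ in the question is where the other two get lost: it silently uses $a^3 = (\log x)^3 = \log x^3$, which is false in general, since $(\log x)^3 \neq \log(x^3) = 3\log x$.)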
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 5, "question_id": 202773, "subset_name": null }
TITLE: Can the study of the quantum information structure in QFT with holographic duals be relevant to string theory? QUESTION [5 upvotes]: I'm interested in characterizing the behaviour of measures of quantum information in strongly correlated quantum field theories which admit a gravity dual description, e.g. through AdS/CFT duality. In a broader sense, my question points towards the following: in my limited experience, I have come to accept that a duality means we have no right to single out one side of the duality as the physically meaningful one while using the other side as a mere computational tool. I would like to be illuminated by some experts. REPLY [5 votes]: This is an interesting question, which quite a few people (myself included) find fascinating, and which by and large still remains to be explored. To my knowledge, the farthest this idea has gotten so far is in the work of my colleague, Mark van Raamsdonk: Comments on quantum gravity and entanglement and the essay Building up spacetime with quantum entanglement, which won the GRF essay competition last year. If you have a background in many-body systems and their description in terms of quantum information, your input on this set of issues could be valuable.
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 5, "question_id": 6346, "subset_name": null }
This section investigates the application of the previous results to Monte Carlo estimates constructed with the help of control variates in order to reduce the variance. We start by presenting several applications in which uniform bounds for Monte Carlo methods are of interest. Then we give the mathematical background and finally provide sharp uniform error bounds for Monte Carlo estimates that use control variates. \subsection{Uniformity in Monte Carlo procedures} Proving uniform bounds on the error of Monte Carlo methods is motivated by the following three applications. \paragraph{Latent variable models.} Suppose we are interested in estimating the distribution of the variable $X$ whose density is assumed to lie in the model \[ p_\theta (y) = \int p_\theta (y|z) \, p(z) \, \diff z \] where $\theta\in \Theta$ is the parameter to estimate, $p(z)$ is the known density of the so-called latent variable and $ \{(y,z)\mapsto p_\theta (y|z)\}_{\theta\in \Theta}$ is a model of conditional densities (of $Y$ conditionally on the latent variable). This is actually a frequent situation in economics \citep{mcfadden2001economic} and medicine \citep[examples 4, 6 and 9]{mcculloch2005generalized}. Given independent and identically distributed random variables $Y_1,\ldots,Y_n$ observed from the previous model with parameter $\theta_0$, the log-likelihood function takes the form \[ \theta \mapsto \sum_{i=1}^n\log\left( \int p_\theta(Y_i|z) \, p(z) \, \diff z\right). \] In most cases, each term in the previous sum is intractable, and the approach proposed in \citet{mcfadden1994estimation} consists in replacing the unknown integrals by Monte Carlo estimates. In such a procedure, there is an additional estimation error compared to the statistical error of the standard maximum likelihood estimator. This additional error can be controlled in terms of the approximation error of $\int p_\theta(y|z) \, p(z) \, \diff z$ by the Monte Carlo estimate, uniformly in $(y, \theta)$. \paragraph{Stochastic programming.} Consider the stochastic optimization problem \[ \min_{\theta\in \Theta} F(\theta) \qquad \text{with} \qquad F(\theta) = \expec [ f(\theta, X) ], \] where $X$ is a random variable in some space $\Xc$ with distribution $P$ and where $\Theta$ is a Euclidean set. The response functions are thus the maps $x \mapsto f(\theta, x)$ as $\theta$ ranges over $\Theta$. This problem is different from standard optimization because it takes into account some uncertainty in the output of the function $f$. One might think of the following toy example: $f$ is the output of a laboratory experiment, e.g., the amount of salt in a solution, $\theta$ is the input of the experiment, e.g., the temperature of the solution, and $X$ gathers unobserved random factors that influence the output. Optimizing such functions is of interest in many different fields and we refer the reader to \citet{shapiro2014lectures} for more concrete examples (see also the example below that deals with quantile estimation). This problem might be solved using two competing approaches: the sample average approximation (SAA) and stochastic approximation techniques such as gradient descent---see \citet{nemirovski2009robust} for a comparison between both approaches. The SAA approach consists in approximating the function $F$ by a functional Monte Carlo estimate defined as \[ F_n (\theta ) = \frac{1}{n} \sum_{i=1}^n f(\theta, X_i) , \] where $X_1,\ldots,X_n$ is a random sample from $P$. 
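In code, the SAA construction reads as follows. This is a minimal sketch of ours, not taken from the literature; the quadratic response $f$, the Gaussian choice of $P$ and the use of \texttt{scipy.optimize.minimize} are all assumptions made purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def f(theta, x):
    # toy response function: quadratic loss perturbed by the factor x
    return (theta - x) ** 2

# draw X_1, ..., X_n once from P (here P = N(2, 1), an arbitrary choice)
X = rng.normal(loc=2.0, scale=1.0, size=1000)

def F_n(theta):
    # sample average approximation of F(theta) = E[f(theta, X)]
    return np.mean(f(theta, X))

# the SAA estimator theta_n is the minimizer of F_n
theta_n = minimize(F_n, x0=np.array([0.0])).x[0]
print(theta_n)  # close to E[X] = 2, the minimizer of E[(theta - X)^2]
\end{verbatim}
The uniform error bounds discussed below quantify how far $F_n$, and hence its minimizer, can stray from their population counterparts.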
Then the minimizer of the first stochastic optimization problem is approximated by the minimizer $\theta_n$ of the functional estimate $F_n$. Following the reference textbook \citet{shapiro2014lectures}, the analysis of $\theta_n$ is carried out through error bounds for the Monte Carlo estimate $F_n(\theta)$ uniformly in $\theta \in \Theta$. When sampling from $P$ is expensive, one may want to reduce the variance of the Monte Carlo estimate $F_n(\theta)$ by the method of control variates or some other method. In that case, a control on the error uniformly in $\theta$ is required. \paragraph{Quantile estimation in simulation modeling.} Quantiles are of prime importance when it comes to measuring the uncertainty of random models \citep{law2000}. When the stochastic experiments are costly, variance reduction techniques such as the use of control variates are helpful \citep{hesterberg1998control,cannamela2008controlled}. Let $F(y) = \pr \{ g(X) \leq y \}$, for $y\in \reals$, be the cumulative distribution function of a transformation $g : \Xc \to \reals$ of a random element $X$ in some space $\Xc$. Suppose the interest is in the quantile $F^{-}(u) = \inf\{y \in \reals\,:\, F(y)\geq u\}$ for $u\in (0,1)$. The functions $f_y : \Xc \to \reals$ to be integrated with respect to the distribution $P$ of $X$ are thus the indicators $x \mapsto f_y(x) = I\{g(x) \le y\}$, indexed by $y \in \reals$. It is necessary to control the accuracy of an estimate of the probability $F(y) = \expec[f_y(X)]$ uniformly in $y \in \reals$ in order to control the accuracy of an estimate of the quantile $F^{-}(u)$, even for a single $u \in (0, 1)$; see for instance Lemma~12 in \cite{portier2018weak}. If drawing samples from $P$ or evaluating $g$ is expensive, it may be of interest to limit the number of Monte Carlo draws $X_i$ and function evaluations $g(X_i)$. Finally, note that due to the formulation of a quantile as the minimiser of the expectation of the check function, see e.g., \citet{hjort2011asymptotics}, this example is an instance of the stochastic programming framework described before. \paragraph{Related results.} The previous examples underline the need for error bounds for Monte Carlo methods that are uniform over a family of response functions. The uniform consistency of standard Monte Carlo estimates over certain collections of functions can easily be shown by relying on Glivenko-Cantelli classes \citep{vdvaart+w:1996}; see for instance \citet{shapiro2014lectures} for applications to stochastic programming problems. Similarly, uniform convergence rates for standard Monte Carlo estimates can be derived from classical empirical process theory. We refer to \cite{gine+g:02} and the references therein for suprema over VC-type classes and to \cite{kloeckner2020empirical} and the references therein for suprema over H\"older-type classes. For variance reduction methods based on adaptive importance sampling, uniform consistency has been proven recently in \citet{delyon+p:2018} and \citet{feng2018uniform}. For control variates, however, we are not aware of any uniform error bounds and we believe the next results to be the first of their kind. \subsection{Mathematical background for control variates} \label{sec:CVback} Let $\Fc \subset L^2(P)$ be a collection of square-integrable, real-valued functions $f$ on a probability space $(\mathcal{X}, \mathcal{A}, P)$ of which we would like to calculate the integral $P(f) = \int_{\mathcal{X}} f(x) \, \diff P(x)$. 
Let $X_1,\ldots,X_n $ be independent random variables taking values in $\mathcal{X}$ and with common distribution $P$. The standard Monte Carlo estimate of $P(f)$ simply takes the form $P_n(f)=\frac{1}{n}\sum_{i=1}^n f(X_i)$. However, this estimator may converge slowly to $P(f)$ due to a high variance. To tackle this issue, it is common practice to use control variates, which are functions in $L^2(P)$ with known integrals. Without loss of generality, we can center the control variates $g_{n,1},\ldots,g_{n,d_n}$ and assume they have zero expectation, that is, $P(g_{n,k})=0$ for all $k\in \{1,\ldots,d_n\}$. Let $g_n = (g_{n,1},\ldots,g_{n,d_n}) $ denote the $\mathbb R^{d_n}$-valued function with the $d_n$ control variates as elements and put $h_n = (1,g_n^\T)^\T$. Similarly as before, we assume that the Gram matrix $P(g_n g_n^\T)$ is invertible. The control variate Monte Carlo estimate of $P(f)$ is given by $\hat \alpha_{n,f}$ defined as \citep[see for instance][Section~1]{portier+s:2019}, \begin{equation} \label{eq:alphanf} (\hat \alpha_{n,f}, \hat \beta_{n,f}) \in \argmin _ {\alpha \in \mathbb R, \, \beta \in \mathbb R^{d_n}} P_n ( f - \alpha - g_n ^\T \beta)^2. \end{equation} The vector $\hat \beta_{n,f}$ contains the regression coefficients for the prediction of $f$ based on the covariates $h_n$. Remark that the control variate integral estimate $\hat \alpha_{n,f}$ coincides with the integral of the least-squares estimate of $f$, i.e., $\hat \alpha_{n,f} = P(\hat \alpha_{n,f}+ g_n ^\T \hat \beta_{n,f})$. In addition, since $\hat \alpha_{n,f}$ can be expressed as a weighted estimate $\sum_{i=1}^n w_{i} f(X_i)$ where the weights $(w_i)_{i=1,\ldots, n}$ do not depend on the integrand $f$, there is a computational benefit to integrating multiple functions \citep[Remark 4]{leluc2019control}. It is useful to define \begin{equation*} (\alpha_{n,f}, \beta_{n,f}) \in \argmin _ {\alpha \in \mathbb R, \, \beta \in \mathbb R^{d_n}} P ( f - \alpha - g_n ^\T \beta)^2, \end{equation*} as well as the residual function \[ \eps_{n,f} = f - \alpha_{n,f} - g_n^\T \beta_{n,f}. \] Note that $ \alpha_{n,f} = P(f)$. If $ \beta_{n,f} $ were known, the resulting oracle estimator would be \begin{equation} \label{eq:alphanforacle} \hat \alpha_{n,f}^{\mathrm{or}} = P_n[f- g_n ^\T \beta_{n,f} ]. \end{equation} The question raised in the next section is whether the control variate estimate $\hat \alpha_{n,f}$ can achieve a similar accuracy uniformly in $f \in \Fc$ as the oracle estimator $\hat \alpha_{n,f}^{\mathrm{or}}$. \subsection{Uniform error bounds} Motivated by the examples above, we provide an error bound for the control variate Monte Carlo estimate $ \hat \alpha_{n,f}$ in~\eqref{eq:alphanf} uniformly in $f \in \Fc$. Before doing so, we give a uniform error bound for the oracle estimate $\hat \alpha_{n,f}^{\mathrm{or}}$ in~\eqref{eq:alphanforacle}. This serves two purposes: first, it will be useful in the analysis of $ \hat \alpha_{n,f}$ and second, it will provide sufficient conditions for the two estimates to achieve the same level of performance. Recall $M_n$, $\gamma_n^2$ and $L_n^2$ in \eqref{eq:Mn} and \eqref{eq:Ln} and put \[ \sigma_n^2 = \supf P(\eps_{n,f}^2). \] A new assumption, $M_n^2 = \Oh \left(\sigma_n ^2 n / \log (n ) \right)$ as $n \to \infty$, in the same vein but weaker\footnote{Since $L_n^2 = M_n^2 \ninf{q_n}$ and $\gamma_n^2 \le \sigma_n^2 \ninf{q_n}$.} than $ L_n^2 = \Oh \left(\gamma_n ^2 \sqrt{ n / \log (n ) }\right)$, turns out to be useful to obtain the result. 
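Before turning to the statements, it may help to see the construction of Section~\ref{sec:CVback} in code. The following minimal sketch is ours (the uniform law $P$, the integrand $f$ and the polynomial control variates are arbitrary illustrative choices); it uses the fact, immediate from \eqref{eq:alphanf}, that $\hat\alpha_{n,f}$ is the intercept of the ordinary least squares regression of $(f(X_i))_{i}$ on the control variates.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# P = Uniform(0, 1); integrand f(x) = exp(x), true integral exp(1) - 1
X = rng.uniform(size=n)
fX = np.exp(X)

# centered control variates g_k(x) = x^k - 1/(k+1), so that P(g_k) = 0
d = 3
G = np.column_stack([X**k - 1.0 / (k + 1) for k in range(1, d + 1)])

# least squares of f(X_i) on (1, g_1(X_i), ..., g_d(X_i));
# the intercept is the control variate estimate alpha_hat
H = np.column_stack([np.ones(n), G])
coef, *_ = np.linalg.lstsq(H, fX, rcond=None)
alpha_hat = coef[0]

print(np.mean(fX))   # vanilla Monte Carlo estimate P_n(f)
print(alpha_hat)     # control variate estimate, typically far closer
print(np.e - 1)      # true value of P(f)
\end{verbatim}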
\begin{theorem}[Uniform bound on error of oracle estimator] \label{prop:oracle_bound} Assume the framework of Section~\ref{sec:CVback} and suppose that Conditions \ref{cond:qn}, \ref{cond:F} and \ref{cond:VC} hold. If $\liminf _ {n\to \infty} n^{\alpha} M_n> 0 $ for some $\alpha >0$ and if $ M_n^2 = \Oh \left(\sigma_n ^2 n / \log (n ) \right) $, then \begin{align*} \sup_{f\in \mathcal F} \left|\hat \alpha_{n,f}^{\mathrm{or}} - P(f) \right| &= \Op\left(\sigma_n \sqrt{ n^{-1} \log( n )} \right), \qquad n \to \infty. \end{align*} \end{theorem} The proof of Theorem \ref{prop:oracle_bound} is provided in Appendix \ref{app:proofMC1}. The derivation of the stated rate relies on the property that the residual class $\Ec_n = \{ \eps_{n,f} : f \in \Fc\}$ is a VC-class of functions (as detailed in the proof of Proposition~\ref{prop:cover}). Indeed, noticing that $\hat \alpha_{n,f}^{\mathrm{or}} - P(f) = P_n (\eps_{n,f})$ allows us to rely on the next proposition, dedicated to suprema of empirical processes. \begin{proposition}[Bound of supremum of empirical process] \label{prop:general term_Pn f} On the probability space $(\Xc, \Ac, P)$, let $\Sc$ be a VC-class with parameters $(w, B)$ with respect to the constant envelope $U \ge \sup_{\scfunc \in \Sc}\ninf{\scfunc}$. Suppose the following two conditions hold: \begin{compactenum}[(i)] \item $\tau^2 \ge \sup_{\scfunc\in\Sc} \Var_P(\scfunc)$ and $\tau \le 2U$; \item $w \ge 1$ and $B \ge 1 $. \end{compactenum} Then, for $P_n$ the empirical distribution of an independent random sample $X_1, \ldots, X_n$ from $P$, we have with probability $1-\delta$: \begin{align*} \sup_{\scfunc \in\Sc} \left| P_n(\scfunc) - P(\scfunc) \right| &\leq L \left(\tau \sqrt{w n^{-1} \log(L \theta/\delta)} + U w n^{-1} \log(L \theta/\delta ) \right), \end{align*} with $\theta = B U / \tau $ and $L>0$ a universal constant. \end{proposition} The proof of Proposition~\ref{prop:general term_Pn f} is given in Appendix~\ref{supp:prop:general term_Pn f}. In the proof, we bound the expectation of the supremum by combining a well-known symmetrization inequality \citep[Lemma~2.3.1]{vdvaart+w:1996} with Proposition~2.1 in \citet{gine2001consistency}, and we control the deviation of the supremum around its expectation by Theorem~1.4 in \citet{talagrand1996new}. Compared to existing results such as Proposition 2.2 stated in \cite{gine2001consistency}, our version is more precise due to the explicit role played by the VC constants in the bound. The next result follows from an application of Corollary \ref{cor:simple-rate} and Theorem \ref{prop:oracle_bound} combined with some other bounds that are standard when analyzing control variate estimates. \begin{theorem}[Uniform error bound on control variate Monte Carlo estimator] \label{th:unif_cv} Assume the framework of Section~\ref{sec:CVback} and suppose that Conditions \ref{cond:qn}, \ref{cond:F} and \ref{cond:VC} hold. If $\liminf _ {n\to \infty} n^{\alpha} M_n> 0 $ for some $\alpha >0$ and if $ L_n^2 = \Oh \left( \gamma_n ^2 \sqrt{ n / \log (n ) } \right) $, then \begin{equation*} \supf \left|\hat \alpha _{n,f} - P(f) \right| = \Op\left( \sigma_n \sqrt{ n^{-1} \log( n )} \left( 1 + \sqrt { d_n n^{-1} \ninf {q_n} } \right) \right), \qquad n \to \infty. \end{equation*} \end{theorem} The proof of Theorem \ref{th:unif_cv} is provided in Appendix \ref{app:proofMC2}. 
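To spell out the footnote's comparison between the two assumptions (a two-line check based only on the relations $L_n^2 = M_n^2 \ninf{q_n}$ and $\gamma_n^2 \le \sigma_n^2 \ninf{q_n}$ recalled there): if $L_n^2 = \Oh\left(\gamma_n^2 \sqrt{n/\log(n)}\right)$, then $$M_n^2 \ninf{q_n} = L_n^2 = \Oh\left(\gamma_n^2 \sqrt{n/\log(n)}\right) = \Oh\left(\sigma_n^2 \ninf{q_n} \sqrt{n/\log(n)}\right),$$ and cancelling $\ninf{q_n}$ yields $M_n^2 = \Oh\left(\sigma_n^2 \sqrt{n/\log(n)}\right) = \Oh\left(\sigma_n^2\, n/\log(n)\right)$, which is the assumption of Theorem~\ref{prop:oracle_bound}. The condition of Theorem~\ref{th:unif_cv} is therefore indeed the stronger of the two.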
Compared to the error bound given in Theorem~\ref{prop:oracle_bound} for the oracle estimator, the error bound in Theorem~\ref{th:unif_cv} for the control variate estimator has an additional term. This term, which is due to the additional learning step that is needed to estimate the optimal control variate, vanishes as soon as $d_n \ninf{q_n} = \oh(n)$ as $n \to \infty$. This condition, which was used in \cite{newey1997convergence} as well as in \cite{portier+s:2019}, is meaningful as it relates the model complexity to the sample size, i.e., the computing time of the experiment.
{ "config": "arxiv", "file": "2006.09223/MC_unif.tex", "set_name": null, "score": null, "question_id": null, "subset_name": null }
TITLE: If Γ,A∧B⊢Δ is an axiom, then also Γ,A,B⊢Δ is an axiom QUESTION [2 upvotes]: Context I am studying sequent calculus, and I am trying to understand the proof that the rule L∧ introducing "$\land $" on the left: ${\displaystyle \quad {\cfrac {\Gamma ,A,B\vdash \Delta }{\Gamma ,A\land B\vdash \Delta }}}$ is invertible, where invertible means that as soon as I have a derivation of height $n$ of the conclusion, I also have a derivation of the same height of the premise. Here I am considering sequents for classical logic, and I define an axiom as a sequent $\Gamma \vdash \Delta $ such that $\Gamma$ and $\Delta$ are finite multisets sharing at least one formula. The proof is by induction on the height of the derivation, and starts with the case $n=0$. Problem: The proof is easy and follows inductively from the fact that, if (1) $\displaystyle \quad \Gamma ,A\land B\vdash \Delta$ is an axiom, then also (2) $\displaystyle \quad \Gamma ,A,B\vdash \Delta$ is an axiom. My reasoning is that it can be the case that $\Gamma$ and $\Delta$ don't share any formula, while $A \land B$ is in $\Delta$, so that (1) is an axiom; but in this case I don't understand how to prove rigorously why (2) should be one too. Thanks for reading REPLY [1 votes]: Probably it is because you work in a sequent calculus which only admits axioms of the form $\Gamma\vdash\Delta$, where $\Gamma$ and $\Delta$ share an atomic formula. I would recommend the book by Troelstra and Schwichtenberg: Basic Proof Theory. There this system is called $\mathrm{G}3$; see Definition 3.5.1.
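To make the answer's point concrete (the example is mine, in the notation of the question): take $\Gamma = \varnothing$ and $\Delta = \{A \land B\}$. Then $A \land B \vdash A \land B$ is an axiom, i.e. a derivation of height $0$, but its L∧-premise $A, B \vdash A \land B$ is not an axiom under the general definition, since the multisets $\{A, B\}$ and $\{A \land B\}$ share no formula; it is only derivable with height $1$, by R∧ from the axioms $A, B \vdash A$ and $A, B \vdash B$. So height-preserving invertibility of L∧ genuinely fails at height $0$ when axioms are allowed for arbitrary formulas. In $\mathrm{G}3$, by contrast, the shared formula of an axiom must be atomic, hence distinct from $A \land B$, and it is therefore still shared by $\Gamma, A, B \vdash \Delta$, which settles the base case.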
{ "config": null, "file": null, "set_name": "stack_exchange", "score": 2, "question_id": 1905275, "subset_name": null }